What is the third stage of artificial intelligence, and why is it considered ‘very dangerous’?

Since its launch in late November 2022, ChatGPT, a chatbot that uses artificial intelligence to answer questions or generate text on demand, has been the fastest-growing application in Internet history.

It reached 100 million active users in just two months. According to data from the technology monitoring firm Sensor Tower, TikTok took nine months to reach that milestone, and Instagram two and a half years.

UBS analysts, who reported the record in February, said: “In 20 years of following the Internet, we cannot remember a faster adoption of any Internet application.”

ChatGPT was launched in late November by the artificial intelligence research company OpenAI, and its popularity has sparked all kinds of debate and speculation about the technology’s impact.

ChatGPT is an example of generative AI, the branch of AI dedicated to generating original content from existing data (usually taken from the Internet) in response to user instructions. It is the latest addition to the company’s line of AI models, which it calls Generative Pre-trained Transformers, or GPT.

If you ask ChatGPT a question, it is capable of giving you a detailed, well-constructed answer. Whether you use it to write a poem or to get guidance on a school assignment, the software will rarely let you down. In this chat format, OpenAI said, the artificial intelligence has the freedom to “answer follow-up questions, admit mistakes, challenge misconceptions, and reject inappropriate questions.”


From essays, poetry, and jokes to computer code, diagrams, photos, and artwork in any style, it can produce a great deal. Its users range from students doing their homework to politicians preparing their speeches with its help. The trend is already shaking up the world of work.

In the technology sector, big companies such as IBM have announced that around 8,000 jobs will be cut, with the work handed over to artificial intelligence instead. A report by investment bank Goldman Sachs estimated in late March that artificial intelligence could replace the equivalent of a quarter of today’s human jobs, though it would also boost productivity and create new jobs.

If all these changes overwhelm you, prepare yourself for a reality that could be even more disturbing: everything we are experiencing now is only the first stage of AI’s development.

The second phase, according to experts, will be even more revolutionary and may arrive soon, while the third and final phase is so advanced that it could change the world completely, perhaps at the cost of human existence itself.

Three stages

Artificial intelligence technologies are classified according to their ability to mimic human characteristics as follows:

1- Narrow Artificial Intelligence (ANI)

The most basic category of AI is better known by its acronym ANI, for Artificial Narrow Intelligence. It is so named because it focuses on a single task, performing repetitive work within limits pre-defined by its creators.

ANI systems are typically trained on a large dataset (for example, drawn from the Internet) and can make decisions or take actions based on that training. ANI can match or exceed human intelligence and performance, but only in the specific area in which it operates. An example is AI chess programs, capable of beating the world champion in that discipline but unable to perform any other task.

That is why it is also called ‘weak AI’. All the programs and tools that use AI today, even the most advanced and complex ones, are forms of ANI, and these systems are everywhere. Smartphones are full of apps that use the technology, from GPS maps that can locate you anywhere in the world and weather forecasts, to music and video services that know your tastes and make recommendations.

Virtual assistants such as Siri and Alexa are also forms of ANI, as are the Google search engine and robots that clean your house. The business world uses the technology heavily too: in the onboard computers of cars, in the manufacture of thousands of products, in finance, and even for diagnosis in hospitals.

Even more sophisticated systems, such as driverless (autonomous) cars and the popular ChatGPT, are still forms of ANI, because they cannot operate beyond the limits predetermined by their programmers and so cannot make decisions on their own. Nor do they have self-awareness, a hallmark of human intelligence.

However, some experts believe that systems programmed to learn automatically (machine learning), such as ChatGPT or AutoGPT (an ‘autonomous agent’ or ‘intelligent agent’ that uses ChatGPT’s output to carry out subtasks on its own), could move to the next stage of development.

2- Artificial General Intelligence (AGI)

This stage, artificial general intelligence, is reached when a machine attains human-level cognitive abilities, that is, when it can perform any intellectual task a human can. Convinced that we are on the verge of reaching this level of development, last March more than 1,000 technology experts asked AI companies to pause, for at least six months, the training of systems more powerful than GPT-4, the latest version of ChatGPT’s underlying model.

Apple co-founder Steve Wozniak and Tesla and Twitter boss Elon Musk were among those who warned in an open letter that ‘AI systems with intelligence that competes with humans can pose profound risks to society and humanity.’

In the letter, published by the nonprofit Future of Life Institute, the experts said that if companies do not agree to halt their projects immediately, ‘governments should step in and impose a temporary moratorium’ so that safety measures can be designed and implemented. Although that has not yet happened, the US government did summon the heads of the major AI companies – Alphabet, Anthropic, Microsoft, and OpenAI – and called on them to agree new initiatives to promote responsible innovation. “AI is one of the most powerful technologies of our time, but to take advantage of the opportunities it presents, we must first mitigate its risks,” the White House said in a statement on May 4. For its part, the US Congress summoned OpenAI CEO Sam Altman on Tuesday to answer questions about ChatGPT.

During the Senate hearing, Altman said it was “important” for his industry to be regulated by government, because AI is becoming “increasingly powerful.” Carlos Ignacio Gutiérrez, a public policy researcher at the Future of Life Institute, told BBC Mundo that one of the great challenges of AI is that ‘there is no body of experts that decides how it should be regulated.’

In their letter, the experts set out their main concerns. They asked: ‘Should we develop non-human minds that will eventually outsmart us, surpass us, and replace us, making us jobless and irrelevant?’ And: ‘Should we take the risk of losing control over our civilization?’

3- Artificial Super Intelligence (ASI)

The concern of these computer scientists rests on a well-established theory: once we reach AGI, we will soon arrive at the final stage of this technology’s development, artificial superintelligence, the point at which artificial intelligence surpasses that of humans.

Oxford University scholar and AI expert Nick Bostrom defines superintelligence as ‘an intelligence that is more intelligent than the best human brain in virtually every domain, including scientific creativity, general wisdom, and social skill.’ The theory is that once a machine achieves human-level intelligence, its capacity to multiply that intelligence rapidly through its own autonomous learning will carry it far beyond us in a short time, reaching ASI.

According to Gutiérrez, ‘humans must study for a long time to become engineers, nurses or lawyers. The problem with AGI is that it is immediately scalable.’ This is thanks to a process called recursive self-improvement, which allows an AI application to ‘keep improving itself, on timescales we could not match.’ While there is much debate about whether a machine can truly achieve the broad intelligence that humans possess, especially emotional intelligence, it is one of the prospects that most disturbs those who believe we are close to achieving AGI.


Recently, the so-called ‘godfather of artificial intelligence’, Geoffrey Hinton, a pioneer of research into the neural networks and deep learning that allow machines to learn from experience as humans do, warned in a BBC interview that we may be very close to that milestone.

“(Machines) aren’t smarter than us right now, as far as I can see, but I think they could be soon,” said Hinton, 75, who recently retired from Google.

Extinction or Immortality

There are generally two schools of thought on ASI: those who believe this superintelligence will be beneficial to humanity, and those who believe the opposite. In the latter camp was the famous British physicist Stephen Hawking, who believed super-intelligent machines were a threat to our existence.

‘The development of full artificial intelligence could mean the end of the human race,’ he told the BBC in 2014, four years before his death. A machine with that level of intelligence, he added, would ‘redesign itself at an ever-increasing rate’. Humans, he predicted, ‘who do not have that ability due to slow biological evolution, will not be able to compete’ and will lose out to the technology.

One of ASI’s biggest enthusiasts is the American futurist, inventor, and author Ray Kurzweil, an AI researcher at Google and co-founder of Silicon Valley’s Singularity University (‘singularity’ being another name for the moment when machines become superintelligent). Kurzweil believes that humans will be able to use super-intelligent AI to overcome our biological barriers and improve our lives and our world.

In 2015, he even predicted that by 2030 humans will achieve immortality thanks to nanobots: tiny robots working inside our bodies, repairing damage and treating any disorder or disease as it develops. In his testimony to Congress on Tuesday, OpenAI’s Sam Altman was also optimistic about AI’s potential, saying it could solve “mankind’s biggest challenges, like climate change and curing cancer.”

In the middle are people like Hinton, who believe AI holds enormous potential for humanity, but that the current pace of development, without clear rules and limits, is worrying. Announcing his departure from Google in a statement sent to The New York Times, Hinton said he now regrets his work, because he fears “bad actors” will use AI to do “bad things.”

“Imagine, for example, that some bad actor, like Russian President Vladimir Putin, decided to give robots the ability to create sub-goals of their own,” he said. Machines might eventually create sub-goals such as ‘I need to get more power’, which would pose an ‘existential threat’.

At the same time, the British-Canadian expert said that in the short term AI will bring far more benefits than risks, so “we should not stop developing it.” ‘The question is: now that we’ve discovered that it works better than it did a few years ago, what do we do to reduce the long-term risks of things that are smarter than we are?’

Gutiérrez agrees that the key is to build an AI governance system before an intelligence capable of making its own decisions is developed. “If these entities are created with their own motivations, what does it mean when we don’t control those motivations?”

The expert pointed out that the danger is not only that an AGI or ASI, either on its own initiative or under the control of people with ‘bad motives’, could start a war or manipulate financial systems, production, energy infrastructure, transportation, or any other system that is now computerized. A superintelligence, he warned, could dominate us in far more subtle ways.

“Imagine a future where an entity has so much information about every person on the planet and their habits, thanks to our Internet searches, that it could control us in ways we don’t even notice,” he says. “The worst-case scenario is not wars between humans and robots. The worst part is that we don’t realize we are being manipulated, because we are sharing the planet with an entity far more intelligent than us.”
