Why I'm Feeling the AGI
Here are some things I believe about artificial intelligence:
I believe that over the past several years, A.I. systems have started surpassing humans in a number of domains (math, coding and medical diagnosis, to name a few) and that they are getting better every day.
I believe that very soon, perhaps in 2026 or 2027 but possibly as soon as this year, one or more A.I. companies will claim to have created an artificial general intelligence, or AGI, which is usually defined as something like "a general-purpose A.I. system that can do almost all cognitive tasks a human can do."
I believe that when AGI is announced, there will be debates over definitions and arguments about whether or not it counts as "real" AGI, but that these mostly won't matter, because the broader point, that we are losing our monopoly on human-level intelligence and transitioning to a world with very powerful A.I. systems in it, will be true.
I believe that over the next decade, powerful A.I. will generate trillions of dollars in economic value and tilt the balance of political and military power toward the nations that control it, and that most governments and big corporations already view this as obvious, as evidenced by the huge sums of money they are spending to get there first.
I believe that most people and institutions are not remotely prepared for the A.I. systems that exist today, let alone more powerful ones, and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.
I believe that hardened A.I. skeptics, who insist that the progress is all smoke and mirrors and who dismiss AGI as a delusional fantasy, are not only wrong on the merits but are giving people a false sense of security.
I believe that whether you think AGI will be great or terrible for humanity (and honestly, it may be too early to say), its arrival raises important economic, political and technological questions to which we currently have no answers.
I believe the time to start preparing for AGI is now.
This may all sound crazy. But I did not arrive at these views as a starry-eyed futurist, an investor hyping his A.I. portfolio or a guy who took too many magic mushrooms and watched "Terminator 2."
I arrived at them as a journalist who has spent a lot of time talking to the engineers building powerful A.I. systems, the investors funding them and the researchers studying their effects. And I have come to believe that what is happening in A.I. right now is bigger than most people understand.
In San Francisco, where I am based, the idea of AGI is not fringe or exotic. People here talk about "feeling the AGI," and building A.I. systems smarter than humans has become the explicit goal of some of the industry's biggest companies. Every week, I meet engineers and entrepreneurs working on A.I. who tell me that change (big change, world-shaking change, the kind of transformation we have never seen before) is just around the corner.
"Over the past year or two, what used to be called 'short timelines' (thinking that AGI is likely to be built soon) has become a near-consensus," Miles Brundage, an independent A.I. policy researcher who left OpenAI last year, told me recently.
Outside the Bay Area, few people have even heard of AGI, let alone started planning for it. And in my industry, journalists who take A.I. progress seriously still risk being mocked as gullible dupes or industry shills.
Honestly, I get the reaction. Even though we now have A.I. systems contributing to Nobel Prize-winning breakthroughs, and even though 400 million people a week are using ChatGPT, a lot of the A.I. that people encounter in their daily lives is unimpressive. I sympathize with people who see A.I. slop all over their Facebook feeds, or have a clumsy interaction with a customer service chatbot, and think: This is what is going to take over the world?
I used to scoff at the idea, too. But I have come to believe I was wrong. A few things have convinced me to take A.I. progress more seriously.
The insiders are alarmed.
What strikes me most about today's A.I. industry is that the people closest to the technology, the employees and executives of the leading A.I. labs, tend to be the most worried about how fast it is improving.
This is quite unusual. Back in 2010, when I was covering the rise of social media, nobody inside Twitter, Foursquare or Pinterest was warning that their apps could cause societal chaos. Mark Zuckerberg was not testing Facebook for evidence that it could be used to create novel bioweapons or carry out autonomous cyberattacks.
But today, the people with the best information about A.I. progress, the people building powerful A.I. who have access to more advanced systems than the general public sees, are telling us that big change is near. The leading A.I. companies are actively preparing for AGI's arrival and are studying potentially frightening properties of their models, such as whether they are capable of scheming and deception, in anticipation of their becoming more capable and autonomous.
Sam Altman, the chief executive of OpenAI, has written that "systems that start to point to AGI are coming into view."
Demis Hassabis, the chief executive of Google DeepMind, has said AGI is probably "three to five years away."
Dario Amodei, the chief executive of Anthropic (who dislikes the term AGI but agrees with the general principle), said last month that he believed we were a year or two away from having "a very large number of A.I. systems that are much smarter than humans at almost everything."
Maybe we should discount these predictions. After all, A.I. executives stand to profit from inflated hype, and they may have incentives to exaggerate.
But plenty of independent experts, including Geoffrey Hinton and Yoshua Bengio, two of the world's most influential A.I. researchers, and Ben Buchanan, who was the top A.I. expert in the Biden administration, are saying similar things. So are a host of other prominent economists, mathematicians and national security officials.
To be fair, some experts doubt that AGI is imminent. But even if you ignore everyone who works at an A.I. company, or who has a vested stake in the outcome, there are still enough credible independent voices with short AGI timelines that we should take them seriously.
The A.I. models keep getting better.
To me, just as convincing as expert opinion is the evidence that today's A.I. systems are improving quickly, in ways that are fairly obvious to anyone who uses them.
In 2022, when OpenAI released ChatGPT, the leading A.I. models struggled with basic arithmetic, frequently failed at complex reasoning problems and often "hallucinated," or made up nonexistent facts. Chatbots of that era could do impressive things with the right prompting, but you would never use one for anything critically important.
Today's A.I. models are much better. Now, specialized models are putting up medalist-level scores on the International Math Olympiad, and general-purpose models have gotten so good at complex problem-solving that we have had to create new, harder tests to measure their capabilities. Hallucinations and factual mistakes still happen, but they are rarer in newer models. And many businesses now trust A.I. models enough to build them into core, customer-facing functions.
(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied the claims.)
Some of the improvement is a function of scale. In A.I., bigger models, trained using more data and processing power, tend to produce better results, and today's leading models are significantly bigger than their predecessors.
But it also stems from breakthroughs that A.I. researchers have made in recent years, most notably the advent of "reasoning" models, which are built to take an additional computational step before giving a response.
Reasoning models, which include OpenAI's o1 and DeepSeek's R1, are trained to work through complex problems and are built using reinforcement learning, a technique that was used to teach A.I. to play the board game Go at a superhuman level. They appear to be succeeding at things that tripped up previous models. (One example: GPT-4o, a standard model released by OpenAI, scored 9 percent on AIME 2024, a set of extremely hard math problems; o1, a reasoning model that OpenAI released several months later, scored 74 percent on the same test.)
As these tools improve, they are becoming useful for many kinds of white-collar knowledge work. My colleague Ezra Klein recently wrote that the outputs of ChatGPT's Deep Research, a feature that produces complex analytical briefs, were "at least the median" of the human researchers he had worked with.
I have also found many uses for A.I. tools in my own work. I don't use A.I. to write my columns, but I use it for lots of other things: preparing for interviews, summarizing research papers and building apps to help me with administrative tasks. None of this was possible a few years ago. And I find it implausible that anyone who uses these systems regularly for serious work could conclude that they have hit a plateau.
If you really want to grasp how much better A.I. has gotten recently, talk to a programmer. A year or two ago, A.I. coding tools existed, but they were aimed more at speeding up human coders than at replacing them. Today, software engineers tell me that A.I. does most of the actual coding for them, and that they increasingly feel their job is to supervise the A.I. systems.
Jared Friedman, a partner at Y Combinator, a startup accelerator, recently said that a quarter of the accelerator's current batch of startups were using A.I. to write almost all their code.
"A year ago, they would've built their product from scratch, but now 95 percent of it is built by A.I.," he said.
Overpreparing is better than underpreparing.
In the spirit of epistemic humility, I should say that I, and many others, could be wrong about our timelines.
Maybe A.I. progress will hit a bottleneck we weren't expecting: an energy shortage that prevents A.I. companies from building bigger data centers, or limited access to the powerful chips used to train A.I. models. Maybe today's model architectures and training techniques can't take us all the way to AGI, and more breakthroughs are needed.
But even if AGI arrives a decade later than I expect, in 2036 rather than 2026, I believe we should start preparing for it now.
Most of the advice I have heard for how institutions should prepare for AGI boils down to things we should be doing anyway: modernizing our energy infrastructure, hardening our cybersecurity defenses, speeding up the approval pipeline for A.I.-designed drugs, writing regulations to prevent the most serious A.I. harms, teaching A.I. literacy in schools and prioritizing social and emotional development over soon-to-be-obsolete technical skills. These are all sensible ideas, with or without AGI.
Some tech leaders worry that premature fears about AGI will cause us to regulate A.I. too aggressively. But the Trump administration has signaled that it wants to speed up A.I. development, not slow it down. And with so much money being spent to build the next generation of A.I. models, hundreds of billions of dollars, with more on the way, it seems unlikely that the leading A.I. companies will voluntarily pump the brakes.
Nor do I worry about individuals overpreparing for AGI. A bigger risk, I think, is that most people won't realize that powerful A.I. is here until it is staring them in the face: eliminating their job, ensnaring them in a scam, harming them or someone they love. This is, roughly, what happened during the social media era, when we failed to recognize the risks of tools like Facebook and Twitter until they were too big and entrenched to change.
That's why I believe in taking AGI seriously now, even if we don't know exactly when it will arrive or exactly what form it will take.
If we are in denial, or if we are simply not paying attention, we could lose the chance to shape this technology when it matters most.