Notwithstanding claims that scaling laws have begun to stall the development of sophisticated AI systems, a recent blog post by CEO Sam Altman suggests that his company may be well on its way to developing an Artificial General Intelligence (AGI).
The field of generative AI has advanced rapidly, moving from simple tools that produce text and images to complex systems that can reason. An arms race is underway among leading AI laboratories, including OpenAI, Google, Anthropic, Microsoft, and Apple, to develop sophisticated models that could constitute artificial general intelligence. As you may be aware, artificial general intelligence, or AGI, refers to an AI system that outperforms humans across a wide range of cognitive tasks.
According to Sam Altman, AGI is achievable with the technology available today, and even if the AI revolution calls for new hardware, “you’ll be happy to have a new device.”
However, despite AI’s rapid advancement, safety concerns persist about the potential harm these systems could cause to humans.
Roman Yampolskiy, an AI safety researcher and director of the University of Louisville’s Cyber Security Laboratory, has stated that there is a 99.999999% chance that AI will eliminate humans, and that the only way to avoid this outcome is not to build AI in the first place.
It’s worth remembering that the AI company was projected to lose $5 billion in a year and was reportedly on the brink of bankruptcy. But a funding round backed by Microsoft, NVIDIA, Thrive Capital, and SoftBank injected $6.6 billion, keeping the ChatGPT maker afloat and pushing its valuation to over $157 billion.
As OpenAI CEO Sam Altman stated in a separate blog post, superintelligence may be just “a few thousand days” away. It appears the company is now setting its sights on superintelligence, which goes well beyond AGI.
“We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.”
“OpenAI may already know how to achieve superintelligence,” said Stephen McAleer, a safety researcher at the company. AGI, by this account, would “look a lot like a product release” with “surprisingly little” societal consequence.
Finally, according to Altman, superintelligence could accelerate AI-driven scientific discovery tenfold, making a single year as groundbreaking as a decade.