
OpenAI building new team to stop superintelligent AI going rogue

When the people at the very forefront of artificial intelligence warn about the potentially catastrophic effects of highly intelligent AI systems, it’s probably wise to sit up and take notice.

Just a couple of months ago, Geoffrey Hinton, considered one of the “godfathers” of AI for his pioneering work in the field, said that the technology’s rapid pace of development meant it was “not inconceivable” that superintelligent AI, that is, AI that surpasses the human mind, could end up wiping out humanity.

And Sam Altman, CEO of OpenAI, the company behind the viral ChatGPT chatbot, has admitted to being “a little bit scared” about the potential effects of advanced AI systems on society.

Altman is so concerned that on Wednesday his company announced it’s setting up a new unit called Superalignment aimed at ensuring that superintelligent AI doesn’t end up causing chaos or something far worse.

“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems,” OpenAI said in a post introducing the new initiative. “But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”

OpenAI said that although superintelligent AI may seem a long way off, the company believes it could be developed by 2030. And it readily admits that at the current time, no system exists “for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

To tackle the problem, OpenAI wants to build a “roughly human-level automated alignment researcher” that would perform safety checks on a superintelligent AI. It added that managing these risks will also require new governance institutions and a solution to the problem of superintelligence alignment.

For Superalignment to work, OpenAI says it needs to assemble a crack team of top machine learning researchers and engineers.

The company is frank about the effort, describing it as an “incredibly ambitious goal” and admitting that it’s “not guaranteed to succeed.” But it adds that it’s “optimistic that a focused, concerted effort can solve this problem.”

New AI tools like OpenAI’s ChatGPT and Google’s Bard, among many others, are already so disruptive that experts expect fundamental changes to the workplace and wider society in the near term, even before superintelligence arrives.

That’s why governments around the world are scrambling to catch up, moving quickly to regulate the rapidly developing AI industry in a bid to ensure the technology is deployed in a safe and responsible manner. However, without a single overseeing body, each country will take its own view on how best to use the technology, meaning regulations could vary widely and produce markedly different outcomes. And those differing approaches will make Superalignment’s goal all the harder to achieve.

Trevor Mogg
Contributing Editor