Technology and Science

‘AI a profound risk to humanity’: Elon Musk and others call for halt

The Future of Life Institute has published an open letter, signed by experts including Elon Musk, on the potential risks of conducting ‘giant’ Artificial Intelligence (AI) experiments.

The letter[1] calls for a six-month pause to assess the potential risks prior to conducting any further AI experimentation.

Dangers of AI

Elon Musk has been outspoken about the risks of AI for years. Who recalls the creepy Joe Rogan podcast?



When asked about the risks, he went still before ominously saying:

“I tried to convince people to slow down AI, to regulate AI. This was futile. I tried for years. Nobody listened. Nobody listened. Nobody listened.”

Yes, he repeated it thrice.


Recently, the world’s richest man took aim at Bill Gates.

Musk, who co-founded OpenAI back in 2015 and left the board three years later, said Gates had a “limited understanding of AI”.

Latest petition to halt AI

The Future of Life Institute said AI systems with “human competitive intelligence can pose profound risks to society and humanity.”


The institute raises several vital questions, such as:

  • Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us?
  • Should we risk the loss of control of our civilisation?
  • Should we automate away all the jobs, including the fulfilling ones? Should we let machines flood our information channels with propaganda?

Moreover, the researchers believe AI companies such as OpenAI are not prioritising the research and contingency plans required should AI get out of hand.

Soon after unveiling GPT-4, OpenAI said it’s important to get independent review “at some point”, with the institute replying:


“We agree. That point is now.”

The institute, along with the experts who support the petition, now calls on all AI labs “to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

AGI ‘not a threat’

OpenAI, however, says its mission has always been to ensure that “artificial general intelligence (AGI) – AI systems that are generally smarter than humans – benefits all of humanity.”


AGI, according to OpenAI, has the “potential to give everyone incredible new capabilities,” and describes it as a “great force multiplier for human ingenuity and creativity.”

However, if misused, AGI could cause social disruption.

Signatories who support the petition include:

  • Elon Musk, CEO of SpaceX, Tesla & Twitter
  • Steve Wozniak, Co-founder, Apple
  • Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”
  • John J Hopfield, Princeton University, Professor Emeritus, inventor of associative neural networks
  • Victoria Krakovna, DeepMind, Research Scientist, co-founder of Future of Life Institute
  • Louis Rosenberg, Unanimous AI, CEO & Chief Scientist
  • Frank van den Bosch, Yale University, Professor of Theoretical Astrophysics
  • And 1 200 others.

However, the petition might have come too late; a full-fledged AI war is already underway as Microsoft and Google compete for dominance in the market.

Microsoft recently incorporated GPT-4 into its Bing search engine, while Google launched a ChatGPT competitor named Bard.

Call for AI regulation made years ago

Back in 2020, Future of Life warned that the “emergence of AI promises dramatic changes in our economic and social structures”.

At the time, the institute said the experts who signed the petition have been involved in core technologies for decades, and they would like to emphasise one point:

“While it is difficult to forecast exactly how or how fast technological progress will occur, it is easy to predict that it will occur.

“It is imperative, then, to consider AI not just as it is now, represented largely by a few particular classes of data-driven machine learning systems, but in the forms it is likely to take.”

That was two years ago, and legislation governing these advancements has yet to be implemented.

[1] Future of Life Institute, Open Letter


By Cheryl Kahla