AI makers warn their technology poses a 'risk of extinction'

Those leading the AI revolution are also calling for regulations, drawing comparisons to pandemics and nuclear war.

**Brace yourselves: Artificial intelligence could lead to the extinction of the human race.**

On Tuesday, hundreds of AI industry leaders and researchers, including executives from Microsoft, Google and OpenAI, issued a sobering warning. They claim that the artificial intelligence technology they are designing could one day pose a real and present threat to human existence. They place AI alongside the horrors of pandemics and nuclear war as a societal-scale risk.

In a letter published by the Center for AI Safety, AI experts offered this succinct statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

That's it. That's the entire statement.

Source: Center for AI Safety

The statement frames artificial intelligence as a threat akin to a nuclear catastrophe or a global pandemic. But the signatories, these wizards of the technology industry, failed to expand on their ominous warning.

How exactly is this doomsday scenario supposed to go down? When should we mark our calendars for the rise of our robot overlords? Why would a product of human innovation such as artificial intelligence betray its creators? The silence of these AI architects was resounding: they gave no answers.

In fact, these industry leaders provided no more information than a chatbot with canned responses. In the world of global threats, artificial intelligence seems to have jumped the queue, outpacing climate change, geopolitical conflict, and even alien invasion in Google keyword searches.

Google searches for artificial intelligence compared to other global issues such as wars, alien invasions, and climate change. Image: Google

Interestingly, companies tend to advocate for regulation when it is in their interest. This can be seen as their way of saying "we want to be a part of making these regulations," akin to a fox in the henhouse pleading for new rules.

It's also worth noting that OpenAI CEO Sam Altman has been pushing for U.S. regulation. Still, he has threatened to leave Europe if politicians on the continent continue to try to regulate AI. "We will try to comply," Altman told a panel at University College London. "If we can comply, we will. If we can't, we will cease operating."

To be fair, he changed his tune a few days later and said OpenAI had no plans to leave Europe. Of course, this came after he had the opportunity to discuss the issue with regulators during a "very productive week".

**AI is risky, but is it that big of a risk?**

Experts have not ignored the potential harms of artificial intelligence. A previous open letter, signed by 31,810 advocates including Elon Musk, Steve Wozniak, Yuval Noah Harari and Andrew Yang, called for a moratorium on the training of powerful AI models.

"These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt," the letter said, clarifying that "this does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."

The potential AI "foom" problem (in which an AI improves its own systems, rapidly increasing its capabilities beyond human intelligence) has been discussed for years. However, today's rapid pace of change, coupled with heavy media coverage, has brought this debate into the global spotlight.

Source: Columbia Journalism Review

This has sparked differing views on how AI will affect the future of social interaction.

Some envision a utopian era in which artificial intelligence coexists harmoniously with humans and technological advancement reigns supreme. Others argue that humans will adapt to AI, creating new jobs around the technology, similar to the job growth that followed the invention of the automobile. Still others insist there is a good chance that AI will mature beyond our control, posing a real threat to humanity.

Until then, it's business as usual in the AI world. Keep an eye on your ChatGPT, your Bard, or your Siri; they might just need a software update to rule the world. But for now, it appears that humanity's greatest threat is not our own inventions but our limitless gift for exaggeration.
