Top Tech Talent Warns of AI's Threat to Human Existence in Open Letter

Published: 23/11/2024   Category: security




Elon Musk, Steve Wozniak, and Andrew Yang are among more than 1,000 tech leaders asking for time to establish human safety parameters around AI.



More than 1,000 of technology's top names — including Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, and politician Andrew Yang — have signed an open letter urging AI pioneers to pump the brakes on the AI development race because of its potential danger to humanity.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the open letter, published on the Future of Life Institute site, reads in part. "Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
The potential danger of unchecked training of large language models (LLMs), as outlined in the letter, is nothing less than humans being fully replaced by more intelligent AI systems.
"Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?" the letter asks. "Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?"
Possible harm from advanced AI is a worry even for its proponents: Greg Brockman, co-founder and president of OpenAI, the company behind ChatGPT, recently told a crowd at SXSW he was concerned about AI's ability to both spread disinformation and launch cyberattacks. But those worries are a far cry from fears that AI could become sentient.
To be clear, the six-month pause is intended to give policymakers and AI safety researchers time to put safety parameters around the technology, the letter explains. The "Pause Giant AI Experiments" letter stresses that the group is not calling for a halt to all AI development, but rather wants developers to stop rushing to roll out new capabilities without fully understanding their potential harm.
Even so, skeptics might look at CEOs like Musk, who have potential commercial interests at stake in slowing OpenAI's development of GPT-5, and dismiss the Pause AI open letter as little more than a public relations ploy.
"We have to be a little suspicious of the intentions here — many of the authors of the letter have commercial interests in their own companies getting a chance to catch up with OpenAI's progress," Chris Doman, CTO of Cado Security, said in a statement provided to Dark Reading. "Frankly, it's likely that the only company currently training an AI system more powerful than GPT-4 is OpenAI, as they are currently training GPT-5."
Beyond the celebrity names, the varied backgrounds and public points of view of the signatories make the letter worth taking seriously, according to Dan Shiebler, a researcher with Abnormal Security. Indeed, the signatories include some of the brightest academic minds in the AI field, including John Hopfield, professor emeritus at Princeton University and inventor of associative neural networks, and Max Tegmark, professor of physics at MIT's Center for Artificial Intelligence & Fundamental Interactions.
"The interesting thing about this letter is how diverse the signers and their motivations are," Shiebler said in a statement to Dark Reading. "Elon Musk has been pretty vocal that he believes [artificial general intelligence] (computers figuring out how to make themselves better and therefore exploding in capability) to be an imminent danger, whereas AI skeptics like Gary Marcus are clearly coming to this letter from a different angle."
Ultimately, however, Shiebler doesn't predict the letter will do anything to slow AI development.
"The cat is out of the bag on these models," Shiebler said. "The limiting factor in generating them is money and time, and both of these will fall rapidly. We need to prepare businesses to use these models safely and securely, not try to stop the clock on their development."
Still, shining a light on safety and ethics considerations is a good thing, according to John Bambenek, principal threat hunter at Netenrich.
"While it's doubtful that anyone is going to pause anything, there is a growing awareness that consideration of the ethical implications of AI projects is lagging far behind the speed of development," he said via email. "I think it is good to reassess what we are doing and the profound impacts it will have."
