Afghanistan, Sept. 20 -- Two American AI researchers, Eliezer Yudkowsky and Nate Soares, have warned in a new book that the rapid development of superintelligent artificial intelligence without adequate safety safeguards could surpass human control and ultimately wipe out humanity.
The book, titled If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, argues that AI is advancing dangerously fast without adequate safety measures.
Speaking to ABC News, the authors said current chatbots are only early steps in a race toward far more powerful systems. These models are "...