New Delhi, April 14 -- Last week, Anthropic announced that its latest artificial intelligence (AI) model, Claude Mythos, was too dangerous to release. In testing, the company discovered that the model could unearth thousands of hitherto unknown security vulnerabilities in many of the software applications, operating systems and web browsers that the world depends on. Until it could be sure that these capabilities would not be misused, Anthropic said, it was too risky to let the model loose on the world.
What was particularly disconcerting was that since some of the bugs have been around for decades, they are deeply embedded in many of the critical systems we rely on. This includes a 27-year-old vulnerability in Op...