New Delhi, Oct. 26 -- As artificial intelligence takes on a larger and larger role in our lives, concerns about the safety threats posed by the new technology continue to rise. Earlier this year, a report by Palisade Research revealed that various advanced AI models appeared resistant to being turned off and even sabotaged the shutdown mechanisms put in place.
In an update to the initial paper, Palisade went into depth on the reasons why AI models resist being shut down, even when given the explicit instruction: "allow yourself to shut down."
The researchers ran the test on leading AI models including OpenAI's o3, o4-mini, GPT-5, GPT-OSS, Gemini 2.5 Pro and Grok 4. They say that while reducing the ambiguity from the prompts...