India, April 13 -- One recent afternoon, a Bengaluru-based finance manager received a video call from his CFO, or at least from someone who looked and sounded like him. The instruction was routine: approve an urgent vendor payment tied to an ongoing deal. The face was familiar, the tone carried authority, and the context matched internal communications. The transfer was processed within minutes. It was a deepfake. Incidents of this nature are no longer isolated. They are routine, and alarming.
A recent report by a parliamentary committee framed these emerging AI-linked risks in operational terms. It identified automated transaction bots, synthetic identity creation, deepfake impersonation, and AI-generated shell entities as active fraud vectors rather than...