New Delhi, Oct. 23 -- With deepfakes and other AI-generated content flooding the virtual world, it has become all but impossible to tell what is authentic and what is not. The government is trying to rein in the phenomenon.

On Wednesday, it proposed draft rules that require AI content to be labelled by its creators and social media platforms; the latter would also need to scrutinize such content for takedowns.

The menace of fakes is real and intervention might help, but policing every byte of content that goes online may simply not be feasible.

A more pragmatic approach may be to enlist market incentives for the task. We could devise a system of provenance certification under which authentic content is tested and labelled as such.

Those who want ...