New Delhi, Nov. 11 -- Chief Justice of India (CJI) Bhushan R Gavai on Monday said that judges are aware of the misuse of artificial intelligence (AI) tools, including those used to create and circulate morphed images targeting members of the judiciary, but observed that any move to regulate such technologies must come from the executive, not the courts.

"We have seen our morphed pictures too," remarked the CJI, while hearing a public interest litigation (PIL) seeking the formulation of a legal or policy framework to govern the use of generative AI (GenAI) in judicial and quasi-judicial bodies. "...this is essentially a policy matter. It is for the executive to take a call," he added.

The bench, also comprising justice K Vinod Chandran, indicated its reluctance to interfere, observing that questions relating to the governance of emerging technology fell squarely within the policymaking domain. However, at the request of counsel, the matter was adjourned for two weeks.

The PIL, filed by advocate Kartikeya Rawal and argued with the assistance of advocate-on-record Abhinav Shrivastava, seeks directions to the Centre to enact a law or frame a comprehensive policy to ensure the "regulated and uniform" use of GenAI within judicial systems.

The plea distinguished GenAI from traditional AI, arguing that its ability to autonomously generate new text, data and reasoning patterns poses risks of hallucinations -- instances where the system produces non-existent legal principles or fabricated case citations.

"The characteristic of GenAI being a black box and having opaqueness has the possibility of creating ambiguity in the legal system," stated the petition, adding that such outputs may lead to fake case laws, biased interpretations, and arbitrary reasoning, potentially violating Article 14 (right to equality).

According to the petitioner, judicial systems depend heavily on precedent and traceable reasoning. GenAI models, however, are opaque -- often described in the technology industry as "black boxes" -- meaning that even their developers may not fully understand how conclusions are reached, making oversight difficult.

The plea further cautioned that GenAI models trained on real-world data are prone to replicating, or even amplifying, existing social biases against marginalised communities. It argued that without clear standards on data neutrality and ownership, AI-assisted judicial processes risk compromising citizens' right to know under Article 19(1)(a).

The petition also flagged the heightened risk of cyberattacks targeting AI-driven systems, especially if court processes or documents are integrated into automated platforms....