India, March 10 -- What if a weak research paper did not need better ideas, better data, or better science: just a hidden line of text to fool an AI reviewer? That is the unsettling question behind a new security-focused study led by Prof. Dhruv Kumar, Department of Computer Science & Information Systems, BITS Pilani, which examines how invisible prompt injections inside PDFs can manipulate Large Language Model-based review systems. In simple terms, the research argues that an attacker may not need to persuade a human at all. They may only need to quietly plant instructions that the human never sees, but the AI does.

And that is what makes the finding hard to ignore. According to his team, the real danger is not just sloppy automation in...