India, July 28 -- At a lab south of London, two robotic arms have been playing table tennis non-stop, pushing each other to new limits and quietly hinting at the future of artificial intelligence in the real world. Unlike the legendary Wimbledon marathon where humans finally called it quits, these robots seem content to keep going, always learning, never truly finished.

Google DeepMind's project began as a search for better ways to train robots to handle real-world complexity. After all, it isn't enough for a robot to lift a box if it cannot adjust to unexpected changes or interact with the people around it. The team decided that table tennis, a game that demands fast reaction times, precision control, and strategic play, was a natural choice ...