ALEXANDRIA, Va., March 17 -- United States Patent No. 12,579,974, issued on March 17, was assigned to Amazon Technologies Inc. (Seattle).

"Cache techniques for large language model processing" was invented by Sixing Lu (Bellevue, Wash.), Xiaocheng Deng (Sammamish, Wash.), Yicheng Wang (Seattle), Chengyuan Ma (Bellevue, Wash.) and Gang Chen (Bellevue, Wash.).

According to the abstract released by the U.S. Patent & Trademark Office: "Techniques for cache management for LLM processing are described. Example embodiments include a signal hashing model that generates a key for particular context data. An LLM output corresponding to the context data is stored in a cache along with the key. For a user input received by the system, a cache lookup...
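The mechanism the abstract outlines can be illustrated with a minimal sketch: a hashing function stands in for the patent's "signal hashing model," mapping context data to a key under which an LLM output is cached and later looked up. All names and the use of SHA-256 here are assumptions for illustration, not details from the patent.

```python
import hashlib


def context_key(context: str) -> str:
    # Stand-in for the patent's "signal hashing model": derive a
    # deterministic cache key from context data. (SHA-256 is an
    # assumption; the patent does not specify the hashing scheme.)
    return hashlib.sha256(context.encode("utf-8")).hexdigest()


class LLMOutputCache:
    """Hypothetical cache keyed by hashed context, per the abstract."""

    def __init__(self):
        self._store = {}

    def put(self, context: str, llm_output: str) -> None:
        # Store the LLM output in the cache along with its key.
        self._store[context_key(context)] = llm_output

    def get(self, context: str):
        # Cache lookup for a user input's context; None on a miss.
        return self._store.get(context_key(context))


cache = LLMOutputCache()
cache.put("weather in Seattle today", "Expect light rain through the afternoon.")
print(cache.get("weather in Seattle today"))  # cache hit
print(cache.get("weather in Boston today"))   # cache miss -> None
```

On a hit, the stored output can be returned without re-invoking the LLM; on a miss, the system would run the model and populate the cache.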