India, April 30 -- The "goblin" problem that OpenAI has identified offers an amusing glimpse into how artificial intelligence models are trained. It is also a sobering reminder of how much even small pieces of feedback can shape a model over time, especially when those pieces recur across many uses of the same AI system.
The strange part is that when users repeatedly feed terms like "goblins," "gremlins," or similar creatures back to the model, it begins to pick up on distinctive patterns. This reportedly led researchers to suspect that these creatures are key to understanding the origins of the model's old "nerdy" personality mode.