Ouroboros AI

:date: 2023-07-23 16:28 :tags:

Almost as soon as ChatGPT became the first computer system to impress me with an interaction that passed the Turing Test to my satisfaction, I realized these systems would soon struggle with a fundamental problem impeding their continued progress. This is what I call the Ouroboros problem.

ouroboros.jpg

The 2023-04-23 issue of The Economist introduces the problem by way of a related one: "But the most important limit to the continued improvement of LLMs is the amount of training data available. GPT-3 has already been trained on what amounts to all of the high quality text that is available to download from the internet." They cite a paper that predicts high-quality language data will be entirely exhausted before 2026.

Is this really the most important limit? What I realized is that not only will we have fewer words that remain unread by language models, we will also have more text, perhaps exponentially more, generated by AIs. If these AIs then train on this text, their own output, we will see the Ouroboros effect of the snake that eats its own tail, with obvious deleterious effects.

I was primed to notice this because I had already spotted it in another area of AI research. I remember watching one of Tesla's long tech demos; buried deep within was an engineer explaining how Tesla's training pipeline cleverly exploited access to millions of cameras on their deployed fleet of cars to collect training data. He went on to outline how human driver behavior could be gleaned from this collection for training. Hang on a second! I immediately suspected that this concept was either pessimistic or seriously flawed. It would be pessimistic if Tesla's self-driving ambitions were never realized, but it would be foolish if they were! If some significant percentage of the other cars on the road were being driven by software which learned its sense of "other drivers" by watching other cars, then there would be, fundamentally, nothing teaching the cars the behavior of other human drivers.

What can be done about this? The first thing is to recognize that it is an issue. Once AI starts to pollute the ecosystem its own training data is drawn from, we must account for the fact that this training data is inferior to pristine sources.

For cars, maybe it will be possible to build good classifiers that look for subtle tells that the other car is piloted by a human brain (fewer lidars, worse lane keeping, etc.) and use that human-ness value as an input parameter.
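
To make that concrete, here is a minimal sketch of how such a human-ness score might be computed. Every feature name, weight, and threshold below is invented for illustration, not anything Tesla or anyone else actually does; a real system would learn these from labeled observations.

```python
# Hypothetical sketch: estimate how likely an observed vehicle is human-driven
# from simple perceptual "tells". Features and weights are invented.

from dataclasses import dataclass


@dataclass
class ObservedVehicle:
    has_sensor_pods: bool        # visible sensor rigs suggest automation
    lane_offset_stddev_m: float  # humans tend to wander more within the lane
    reaction_delay_s: float      # delay reacting to a lead car braking


def humanness_score(v: ObservedVehicle) -> float:
    """Rough 0..1 score; higher means 'more likely driven by a human brain'."""
    score = 0.5
    if v.has_sensor_pods:
        score -= 0.3  # extra hardware is a tell for automation
    # Sloppier lane keeping nudges the score toward "human".
    score += min(v.lane_offset_stddev_m, 0.5) * 0.6
    # Slow or variable reactions are another human tell.
    score += min(max(v.reaction_delay_s - 0.3, 0.0), 1.0) * 0.2
    return max(0.0, min(1.0, score))


if __name__ == "__main__":
    cautious_robot = ObservedVehicle(True, 0.05, 0.2)
    distracted_human = ObservedVehicle(False, 0.35, 0.9)
    print(humanness_score(cautious_robot))    # low: probably software
    print(humanness_score(distracted_human))  # high: probably a person
```

The resulting score would then be just one more input feature, weighting how much a given observation is trusted as an example of human driving.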

For LLMs it may be trickier, since the chatbots are already doing a great job of simulating ordinary text. A strategy that may work for large deployments is to stop caring where the training data comes from. The systems could do something like A/B testing, where some users converse with a model trained on one training set and other users get a slight, perhaps random, variation on it. The winner is preserved.
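
A minimal sketch of that loop, under my own assumptions, might look like the following. The `Model` type, the routing, and the feedback signal are hypothetical stand-ins for real serving infrastructure; the point is only that the better-performing variant survives regardless of what its training data contained.

```python
# Hypothetical A/B round: serve two model variants, score their replies with
# some user-satisfaction signal, and keep whichever variant wins.

import random
from typing import Callable, Dict, List

Model = Callable[[str], str]  # a chatbot: prompt in, reply out


def ab_round(baseline: Model, challenger: Model,
             prompts: List[str],
             user_feedback: Callable[[str, str], float]) -> Model:
    """Route prompts randomly to the two variants, score replies, keep the winner."""
    scores: Dict[str, List[float]] = {"baseline": [], "challenger": []}
    for prompt in prompts:
        arm = random.choice(["baseline", "challenger"])
        model = baseline if arm == "baseline" else challenger
        reply = model(prompt)
        scores[arm].append(user_feedback(prompt, reply))  # e.g. thumbs up/down

    def mean(xs: List[float]) -> float:
        return sum(xs) / len(xs) if xs else 0.0

    # Ties go to the incumbent; the survivor becomes the next round's baseline.
    if mean(scores["challenger"]) > mean(scores["baseline"]):
        return challenger
    return baseline


if __name__ == "__main__":
    # Stand-in "models" and a stand-in feedback signal, purely for illustration.
    def polite(p: str) -> str:
        return "Certainly! " + p

    def terse(p: str) -> str:
        return p

    def likes_politeness(prompt: str, reply: str) -> float:
        return 1.0 if reply.startswith("Certainly") else 0.0

    winner = ab_round(polite, terse, ["hello", "what is an ouroboros?"],
                      likes_politeness)
    print(winner is polite)  # True: the politer variant survives this round
```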

Another potential solution is to treat written text as a solved, and largely exhausted, resource. The next frontier will probably be your telephone conversations and video chat meetings. Transcripts of those will inform AIs about real human extemporaneous communication and its subtle ancillary cues ("um"s and "hmm"s, facial expressions in video, pitch changes, etc.). That should provide orders of magnitude more text than has ever been written. Obviously, for the same reasons, that approach stops working once AIs are doing much of the live talking.
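
If such transcripts were ever collected, the valuable part would be keeping the messy human cues rather than cleaning them away. The record format below is purely hypothetical; the field names are invented simply to show what "ancillary cues" could look like as data.

```python
# Hypothetical transcript record that preserves disfluencies and prosody
# instead of normalizing them out. Field names are invented for illustration.

from dataclasses import dataclass, field
from typing import List


@dataclass
class TranscriptSegment:
    speaker: str
    text: str                        # verbatim, disfluencies and all
    disfluencies: List[str] = field(default_factory=list)  # "um", "hmm", restarts
    pitch_delta_hz: float = 0.0      # crude prosody signal from the audio
    facial_expression: str = ""      # from video, if available


segment = TranscriptSegment(
    speaker="caller_1",
    text="um, I think we should, hmm, maybe push the meeting?",
    disfluencies=["um", "hmm"],
    pitch_delta_hz=12.5,
    facial_expression="uncertain",
)
print(segment.text)
```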

Of course if that goes well, the corporations that control the listening devices you religiously share your private conversations with will be strongly tempted to go ahead and train on every utterance ever made audible. After all, you did express your enthusiasm for such a tactic by agreeing to their EULA quicker than any human could possibly read it!

It may turn out that at some point in the future the most valuable conversations to eavesdrop on are the ones weirdos like me have: people who will not suffer eavesdropping and know how to prevent it.