AI Cannibalism and the Law


Lawyers are already using—and misusing—large language models (“LLMs”) like ChatGPT in their daily practice of law. Despite recent headlines pointing out the very real downsides of misusing the technology, it is all but certain that lawyers will use LLMs with increasing frequency in the coming years. Indeed, many law schools, recognizing that lawyers need to understand LLMs, are scrambling to train students on best practices. However, LLMs are racing toward a potential cliff that could severely undercut their usefulness to lawyers and potentially even stifle the development of the law itself.

As news articles, blog posts, and even works of fiction generated by artificial intelligence (“AI”) make up more and more of the internet, those AI-generated outputs will form an ever-larger share of the training data sets of future LLMs. Recent studies suggest this recursive loop is potentially catastrophic for the models’ stability and could produce more misinformation and more frequent “AI hallucinations.” Such a result would lessen the utility of these tools for lawyers.
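The recursive loop described above can be illustrated with a deliberately simplified sketch (not any actual LLM training pipeline; all names and parameters here are invented for illustration): a toy "model" that fits a Gaussian to data, then trains each successive generation only on samples drawn from the previous generation's fit. Over many generations the fitted distribution tends to narrow and drift away from the original data, loosely mirroring the degradation that studies of "model collapse" describe.

```python
import random
import statistics

def collapse_demo(generations=200, sample_size=50, seed=0):
    """Toy illustration of recursive training on synthetic data.

    Each 'generation' fits a Gaussian (mean and standard deviation)
    to samples drawn from the PREVIOUS generation's fitted Gaussian,
    so no generation after the first ever sees real data.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the original, "real" data distribution
    sigmas = [sigma]
    for _ in range(generations):
        # Generate synthetic data from the current model...
        data = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        # ...and fit the next generation's model to it alone.
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
        sigmas.append(sigma)
    return sigmas

sigmas = collapse_demo()
print(f"spread at start: {sigmas[0]:.3f}; after {len(sigmas) - 1} generations: {sigmas[-1]:.3f}")
```

Running the sketch shows the fitted spread shrinking across generations: sampling noise compounds when each model learns only from its predecessor's output, which is the intuition behind the instability the studies warn about.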

