(coauthored by Harry Surden)
There is probably no single definition of artificial intelligence that most scholars would agree to. However, one practically useful definition of AI is "using computers to solve problems, make predictions, answer questions, generate creative output, or make automated decisions or take automated actions, on tasks that, when done by people, typically require 'intelligence.'" In this view, we can think of AI in terms of particular tasks that we associate with human intelligence, and ask whether we are able to fully or partially automate those tasks using computers.
Starting in the 1950s and continuing through the 1980s, AI was largely focused on rules and knowledge representation. The goal was to represent different aspects of the world using expert knowledge manually encoded in formal programming languages that computers could easily process. For example, in medicine, such systems aimed to codify the diagnostic knowledge and processes of doctors into formal computer rules, allowing computers to sometimes deduce non-obvious diagnoses. Although this early symbolic AI approach achieved some successes, its limitations quickly became apparent: hand-coded expert rules about law, medicine, or other phenomena were often "brittle," in the sense that they could not handle exceptions, non-standard "hybrid" scenarios, discretion, or nuance.
A new AI era began in November 2022 with OpenAI's release of ChatGPT, powered by its GPT-3.5 model. Much to the surprise of most AI researchers, this was the first AI system that could sensibly react to and analyze just about any textual input or document. ChatGPT was an example of a large language model (LLM), a type of AI natural language processing system designed to generate coherent, human-like text. Through "training" on billions of pages of previously written human text available on the internet and elsewhere, including various legal documents such as federal and state statutes, court decisions, contracts on sites like EDGAR, and legal motions, these AI models learned to understand and generate language in a way that closely simulates human writing.
To be clear, ChatGPT was not always accurate in its responses or analysis. It suffered from well-known accuracy problems and a tendency to make up facts, a phenomenon known as "hallucination." But factual accuracy was not even the biggest technical hurdle for AI systems at the time. Rather, the LLMs that preceded ChatGPT had a much more severe limitation: they could not respond sensibly to arbitrary inputs that strayed too far from their training data. So even though ChatGPT made factual and reasoning errors, what astonished AI researchers was that it could analyze and respond sensibly to arbitrary text of any kind at all.
Today, judges and others can use AI models to seek legal analysis and answers about constitutional and statutory interpretation, case law, and nearly any other legal question. Modern AI systems usually respond with coherent, well-reasoned, and persuasive text. We explore the implications in our new article, "Artificial Intelligence and Constitutional Interpretation."
https://balkin.blogspot.com/2024/11/how-ai-learned-to-talk.html