I noticed a bizarre reaction to my prompt asking for a list of court opinions on a particular topic. ChatGPT listed five court opinions, no problem. However, the court opinions were made up. ChatGPT, as I understand it, looks for patterns to predict results, or even just the next word in a sentence. I probably have that wrong-ish, but you know what I mean.
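For the curious, here is a minimal sketch of what that next-word prediction looks like, using the open-source GPT-2 model via Hugging Face's transformers library. This is purely illustrative; OpenAI does not publish ChatGPT's internals, and the prompt text is my own invention.

```python
# Minimal sketch of next-word (next-token) prediction with an open model.
# GPT-2 stands in for ChatGPT here purely for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The court held that the plaintiff"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores over the whole vocabulary for each position in the prompt
    logits = model(**inputs).logits

# Pick the single most probable next token after the prompt.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))
```

Notice what is missing: nothing in that loop checks whether a resulting citation actually exists. The model picks whatever continuation pattern-matches best, which is exactly the failure mode described below.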
My prompt was in 3.5, and I could not get 4.0 to cooperate at all. I admit I did not try that hard. Meanwhile, CaseText launched products that use ChatGPT 4.0 for legal research. When I asked the same question I had asked 3.5, CaseText successfully listed court opinions.
Circling back to 3.5, I tried again, and it continued to make up case law. I tried to train it to understand that court opinions are published verbatim from actual courts; that is, they are historical records. It insisted that its most recent list contained actual court opinions. One from the Wyoming Supreme Court, no less.
I asked CaseText its opinion, and it confirmed that the citation was a real case, even though Mr. Google had never heard of it. When I asked CaseText to summarize the opinion, it wrote "the court LIKELY ..."
Ruh ro!
My next question to CaseText 4.0 was as follows: "What must a plaintiff prove to prevail on a claim of product liability?"