What Happened When a Startup Tried to Bring an AI Chatbot to Traffic Court

DoNotPay claimed its GPT-powered chatbot could successfully argue a traffic case. Attorneys weren't so sure.


If you’ve ever tried to fight a parking ticket or negotiate a cable bill, you may have heard of a company called DoNotPay. It offers a subscription-based service that automates those boring, time-consuming tasks by using chatbots and AI to talk to customer service representatives or deal with endless forms and paperwork. But recently, it’s been promising more. Earlier this month, the company issued a challenge: It offered $1,000,000 to anyone willing to let its chatbot argue a case before the U.S. Supreme Court. It seems the Supreme Court is still out of reach, but the company got hundreds of applicants for a smaller challenge: representation via AI to fight speeding charges in a real-life courtroom. At least, that’s what was supposed to happen.

Instead, the effort was called off just days after its announcement. DoNotPay CEO Joshua Browder claims his tweets about the project led various state Bar Associations to open investigations into his company — the kind that could lead to jail time. But how was the experiment actually supposed to go? More importantly, would it have worked? To find out, I talked with traffic attorneys across multiple jurisdictions, and with Browder himself.


In the original tweet announcing the effort, Browder promised that DoNotPay’s AI would “whisper in someone’s ear exactly what to say” in court. He cited rules that allow Bluetooth-connected hearing aids in some courtrooms to justify bringing internet-enabled wearable devices in front of a judge. In DoNotPay’s case, the plan was to use bone-conduction eyeglasses to carry audio to and from the AI.


It’s difficult to tell whether the experiment would have been legal. Browder never revealed where the test would occur, seemingly to avoid tipping off the judge. I spoke with two attorneys, both with years of traffic law experience, and neither could definitively tell me whether the move would be allowed — every court has its own rules surrounding electronics. To DoNotPay’s credit, the company appears to have done its homework on this question: Browder told me that DoNotPay looked at 300 potential traffic cases, assessing each for the legality of an AI appearance.


Since the AI was meant to speak to a defendant directly, DoNotPay had to worry about charges of unauthorized practice of law. To try to avoid them, Browder focused on jurisdictions where “legal representation” is explicitly defined as a person, hoping that the courts wouldn’t count an AI. That meant the defendant in the test would be viewed as proceeding pro se — representing themselves.

Defendants who opt to represent themselves have been known to invest in pre-trial coaching, and DoNotPay could conceivably argue that its AI would simply be coaching in real time. That certainly fits Browder’s claim that use of AI is “not outright illegal,” but it’s enough of a gray area that his concerns over a six-month stint in jail may have been warranted.


Of course, it’s unlikely that an AI could successfully argue the sort of cases we’ve all come to know from movies and TV. GPT-3 is no Rafael Barba or Vincent Gambini, and it’s unclear whether any machine-learning algorithm could ever perfect the human elements of going to court: Negotiating with opposing counsel, navigating plea bargains, even tailoring a legal approach to the whims of a particular judge.


DoNotPay’s pre-trial assessment process didn’t just look at whether its AI could enter a courtroom. Browder and his legal team wanted a case the AI could win. With its legal experience primarily built around filling out forms and pre-writing letters, DoNotPay’s AI needed a case that would be simple to execute. The company worked with a legal team to review cases, and found one that it expected to fall apart over a simple lack of evidence. The AI would need to request opposing counsel’s evidence before the court date, but the actual in-court appearance wouldn’t be a protracted legal battle — just a simple motion to dismiss.

DoNotPay’s AI did, in fact, prepare the documents to request evidence in the speeding case. But it did so with input from DoNotPay’s legal team, who knew that the case would fall apart on an evidentiary basis — in our conversation, Browder wouldn’t confirm whether the AI, left to its own devices, would know to make the same request. To meet the goals of the experiment, the chatbot would’ve had to act on its own while asking for a dismissal in court, but that would only require the AI to generate a few short sentences. Does such a narrow scope of work really qualify as “representation by AI”? Maybe, but only on a technicality.


Now that the experiment has been cancelled, it’s unlikely we’ll ever truly know the outer limits of DoNotPay’s AI. Unless, of course, the cancellation is a misdirect — throwing the bar associations off the scent so the trial can run as scheduled. When asked if the cancellation was a fake-out, Browder only had two words to say: “No comment.”