There is some disagreement at ATL as to whether DoNotPay CEO Joshua Browder is an obnoxious charlatan or a pioneering entrepreneur providing a good service for the underserved. Time will tell.
But there can be zero dispute that the 26-year-old CEO made a spectacularly bad choice when he agreed to do Bob Ambrogi’s LawNext podcast.
Ambrogi’s avuncular prodding nudged Browder, who recently made headlines by offering $1 million to anyone who would allow his company’s AI to argue a case before the Supreme Court, to say some insanely damning things.
First, Browder suggested that the tweet was some kind of attempt at viral marketing.
“Well, it was a real offer in the sense that, we would do it if someone was willing to do it, because the publicity if it actually happened would be a hundred-x. And you spend a million dollars, it’s worth it for the publicity,” he gabbled.
“It’s not a crime to do good marketing,” he said defensively when the host suggested that what he was proposing would be against the law. “It’s not illegal to make that offer.”
Sure, lawyers called his proposal ridiculous, but that didn’t matter, because his real audience, “people not even in the big cities, in places like Kentucky who get ripped off by Comcast,” loved it. Which may be true, but his site promises to let users draft divorce settlements, execute a power of attorney, or “sue anyone” — all of which have slightly higher stakes than yelling at the Comcast rep.
Or it did anyway. After getting repeatedly spanked on Twitter by paralegal and investigator Kathryn Tewson, the company nixed those services, and Browder announced that he’d be “sticking to consumer rights.” It was an abrupt about-face for a company that bills itself as “The World’s First Robot Lawyer” and whose founder vowed to “make the $200 billion legal profession free for consumers.”
Tewson requested three documents: a divorce agreement and a defamation letter, both of which were promised within several hours but neither of which ever showed up, and a letter threatening to take the recipient to small claims court, which was generated immediately, albeit with a demand in excess of the statutory limit for the requested jurisdiction.
“I didn’t get either of the first two documents I generated, and got the last one instantly, and I realized that the other two documents promised personalization with relevant legal information based on facts I had given them in the prompts, and the one I got didn’t,” Tewson told ATL. “And then I got REALLY suspicious, because the timers they had given me were for 1 hour and 8 hours. Those are human time frames not computer time frames.”
I have literally no way to know what the fuck is actually going on here, but I can think of two likely options. The first is that the whole tool is just fucking broken, and Joshua Browder is scamming people out of almost $20 a month for a service that simply does not work. The second, though — and I find this much more likely based on the one-hour and eight-hour timelines given — is that this isn’t AI at all; DoNotPay collects the information from the prompt and then hands it to a human to go find the relevant law and customize the doc.
But Browder has a different explanation, and it is that he is a dick.
She signed up for DoNotPay, she generated a letter instantly, and then we were like, “Why are we letting this lady submit all sorts of fake data to DoNotPay? She doesn’t have any real cases.” So, our systems banned her. And then she tried to submit a second letter, and it said, it kind of gaslit her. It said, “You have twelve hours to go.” And then she tried it again, and it gaslit her again. And then she messaged me, “Why is this going on?” and I said, “Well, you can’t submit fake data, and we’ll unblock you if you don’t submit any fake data. You’re welcome to test the service, but you have to use real data. We don’t want you suing James Joyce.” Which is a real fictional character she sued. And she acknowledged that, and then she started generating more fake cases, so we just permanently banned her.
It should be noted that Tewson disputes much of this account, including the order of operations: she says she got the demand letter last, when Browder’s computer was supposedly already gaslighting her. It should also be noted that Tewson has caught out Browder in multiple lies, including about a charitable donation and having graduated from Stanford University.
But Browder came close in this interview to admitting the gravamen of her accusation when he talked about his plan to deploy his chatbot in traffic court through glasses which can transmit audio both ways to the defendant. Because this is not the AI lawyering that Browder promised:
The specific case we chose, there was lacking of evidence. So, we actually did like an evidence request a few weeks before the hearing. And we know in this jurisdiction the police or central government don’t respond. So our plan was to have Chat GPT-style technology say “I’m requesting to have the case dismissed for lack of evidence, they didn’t respond to this request, etc etc.” And we’ve observed a lot of cases in this jurisdiction, and the judge typically grants it if they don’t respond. So we were planning to win.
That is Browder choosing a case and telling the bot what to say. And you can argue about whether that amounts to the unauthorized practice of law. You can argue that it’s a net social good, since it puts pro se defendants in a better position than they’d otherwise be. You can even argue that it’s providing good service for his clients to have a lawyer check the chatbot’s work.
But what you cannot do is argue that it’s proof his computer program is ready to practice law in the real world, even in traffic court.
Liz Dye lives in Baltimore where she writes about law and politics.