11 Comments
May 11, 2023 · edited May 11, 2023 · Liked by Yassine Meskhout

I would be very careful about things like that. ChatGPT frequently hallucinates information that sounds completely believable.

For example, I recently asked it for a simple Javascript function to take in a list of objects and split it into N smaller lists. It returned a function that it claimed could do this, along with a long explanation of exactly how it worked, which looked reasonable. I tested the function on lists of length 1, 2, 3, 4; it seemed to work fine. The next day I discovered a bug in my program; after spending quite a while tracking it down, it turned out ChatGPT's code fails when asked to divide a list into 20 smaller lists. I looked into what the function was actually doing, and it turned out to be doing something completely different under the hood from what I had asked for, which only returned correct answers for certain small inputs.
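(The commenter doesn't share ChatGPT's code, so this is only a hypothetical reconstruction of what was asked for. A version that actually handles any N, including N larger than the list, might look like this; `splitIntoN` is an assumed name.)

```javascript
// Split `list` into `n` sublists of as equal size as possible.
// The first (list.length % n) sublists get one extra element;
// if n exceeds list.length, the trailing sublists are empty.
function splitIntoN(list, n) {
  const result = [];
  const baseSize = Math.floor(list.length / n);
  let remainder = list.length % n;
  let start = 0;
  for (let i = 0; i < n; i++) {
    const size = baseSize + (remainder > 0 ? 1 : 0);
    if (remainder > 0) remainder--;
    result.push(list.slice(start, start + size));
    start += size;
  }
  return result;
}
```

For example, `splitIntoN([1,2,3,4,5,6,7,8,9,10], 3)` gives `[[1,2,3,4],[5,6,7],[8,9,10]]`, and asking for 20 sublists of a 4-element list gives 20 sublists rather than failing. The point of the anecdote stands either way: code like this can pass spot checks on small inputs while being wrong in general, so it needs real testing.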

This sort of behavior is common; it's trying to generate text that sounds realistic, not text that's actually true. Here's a similar story from an acquaintance trying to use it to find a physics paper: https://imgur.com/a/zNafwRy

My general approach is to only use ChatGPT for anything that I can verify myself afterwards. e.g. I might give it a description of a concept or event and ask it what it's named, but I'll never trust the name it gives me without googling it and verifying that it's actually correct. I would be really hesitant to use it to summarize a longer text, because I have no way to verify that its summary is correct, and I think it's quite likely that it will contain errors.

author

I agree with this caution


I think you are right that LLMs (probably specialised tools, not vanilla ChatGPT) can automate away a hell of a lot of what lawyers do. I wonder about the protectionist response.

On the one hand, American lawyers have the most powerful guild on earth. Nothing will topple them without violence. On the other hand, that might not save the lowly lawyer. The leaders of big firms have a lot of profit they could capture by automating away the drudge work of their juniors. And that's where the political power is.

author

A significant portion of the value of big law firms comes from the apparent respectability and relationships they can leverage. They already rely on chewing through junior associates to get the grunt work done, so overall I don't think much will change. LLMs will replace some associate work, but you'll still have the partners' names there to leverage their networking.

May 11, 2023 · Liked by Yassine Meskhout

ChatGPT and LLMs are similar to movable type. The effects will be wild, mundane, and unpredictable.

And I think you overestimate the level of computer savvy in all cohorts. For example, newly minted computer science grads have their minds blown when shown vast network speed improvements on Ethernet versus Wi-Fi.

Anywho, I’m curious how many people can leverage an LLM to go pro se.

May 11, 2023 · Liked by Yassine Meskhout

Ooh, I wrote an article on the Luddites and AI last month. It's a complex issue for sure, and I had to go through a lot of 'unlearning' to understand who they were (and weren't).

https://exmultitude.substack.com/p/artificial-intelligence-and-luddite

author

That's a great post! I edited my post to link to it.


I've messed around with AI for legal research and have so far found it to be pretty useless. I trialed an AI legal research assistant and found it added very little value, if any, beyond keyword and natural language searches. I wanted it to pump out some usable boilerplate, but I couldn't even really get it to do that. Then there's the fact that most of my work is going through discovery, trial transcripts, etc., and identifying issues. Nothing I've seen suggests that AI could do a remotely competent job of this yet, much less identify issues, draft a compelling facts section, locate relevant case law, and construct a persuasive argument.

In short, for all the hype, I think we still have a good few years of trawling through endless documents, statutes, and case law and writing briefs ahead of us before we become obsolete. And in the meantime, I don't see AI as making our jobs substantially easier. But I can see it could have some use for translating everyday legal concepts into more readily understandable language for non-lawyers, and translating information from specialized fields into more readily understandable language for lawyers. Famous last words though, right?


You wrote, "it's just so much harder to crime and get away with it nowadays. A murder investigation in the 1950s might get lucky with a fingerprint but would otherwise be heavily reliant on eyewitness testimony and alibi investigations."

But for murder, isn't the opposite true? Clearance rates for murder have gone significantly down over the past few decades.

https://www.theatlantic.com/newsletters/archive/2022/07/police-murder-clearance-rate/661500/

author

You already included the exact same link I was going to post; the clearance rates from before the 1970s are not reliable. Either way, whether or not murder is harder or easier to get away with is collateral to my point about the avalanche of discovery we have to deal with.


> The indexes they created was sometimes nonsensically organized

Typo, "was" should be "were" I think.
