
10 Ways To Incriminate Yourself If You Really Need To
Alec Baldwin is in serious trouble in New Mexico; Donald Trump's lawyer Alina Habba is in more serious trouble than he is; Brian Walshe's Google searches have caused him serious trouble
Dear listeners,
Welcome to another episode of Serious Trouble!
This week, Ken and I talked about a substantial sanctions order — nearly $1 million — that Donald Trump and his attorney Alina Habba must pay for the frivolous RICO lawsuit they brought against dozens of defendants. As sanctions orders go, this is really big — and Ken thinks Habba should worry about being disbarred — but is this really an effective deterrent against Trump, who got lots of publicity and fundraising opportunities from the suit?
We also looked at criminal charges against Alec Baldwin. He is to be tried for involuntary manslaughter related to his fatal shooting of cinematographer Halyna Hutchins on the set of his film Rust in 2021. When is a terrible accident also a crime? Ken got to know the New Mexico manslaughter statute — thanks, Ken — and has some thoughts on how Baldwin’s comments in public interviews, and his choice to make himself a plaintiff in civil litigation related to the shooting, have complicated the defense he’ll need to offer.
And we talked about Brian Walshe, whose Google searches helped lead to his arrest for the murder of his still-missing wife, Ana. It turns out, Googling “10 ways to dispose of a dead body if you really need to” before your wife disappears can help with the issuance of an arrest warrant against you. We talked about how this evidence is used after the fact, and why it’s not practical to use this real-time information to stop crimes in progress.
We hope you enjoy the episode.
Josh
Episode links and references
Click here for a transcript of this episode.
Trump/Habba sanctions order.
CNN: A chronology of Brian Walshe’s Google searches.
Boston Globe: After Brian Walshe, should Google warn police of an impending murder?
This is pedantic, but it's an important concept in testing, so I'm going to say it anyway in hopes that it helps someone.
In the scenario where two friends search how to get rid of a body to fact-check Law and Order, if the algorithm identified them as possible murderers, then that would be a false positive, not a false negative. If someone searched how to get rid of a body because they had murdered someone, and the algorithm did NOT identify the person as a possible murderer, then that would be a false negative.
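If it helps to see it laid out, here is the terminology as a tiny Python sketch; the function and its inputs are made up purely to pin down the labels:

```python
# Toy illustration of the confusion-matrix terms, not any real system.
# "Positive" means the algorithm flags the searcher as a possible murderer.

def classify(flagged_by_algorithm: bool, actually_a_murderer: bool) -> str:
    """Return the confusion-matrix label for one searcher."""
    if flagged_by_algorithm and actually_a_murderer:
        return "true positive"
    if flagged_by_algorithm and not actually_a_murderer:
        return "false positive"   # the Law & Order fact-checkers
    if not flagged_by_algorithm and actually_a_murderer:
        return "false negative"   # the murderer who slips through
    return "true negative"

# Two friends fact-checking Law & Order get flagged: a false positive.
print(classify(flagged_by_algorithm=True, actually_a_murderer=False))
# A murderer's search goes unflagged: a false negative.
print(classify(flagged_by_algorithm=False, actually_a_murderer=True))
```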
As an aside, if I were writing this algorithm, I would try to minimize false negatives even if that meant more false positives, because fewer people would be murdered that way. The problem with that approach (as Ken and Josh accurately assess) is that there would be WAY too many false positives for any human to sort through. In that case, another algorithm would be needed to sort through the false positives, and now we are right back where we started.
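To put rough numbers on that (completely made-up numbers, since only Google knows the real volumes), here is a quick Python sketch of the base-rate problem:

```python
# Back-of-the-envelope sketch with hypothetical numbers; the point is only
# how a huge search volume swamps even a tiny false positive rate.

searches_per_day = 5_000_000_000   # hypothetical daily search volume
true_murder_searches = 10          # hypothetical genuinely murderous searches

# Suppose an aggressive threshold catches every real one (zero false
# negatives) but also wrongly flags just 0.01% of innocent searches.
false_positive_rate = 0.0001

false_positives = (searches_per_day - true_murder_searches) * false_positive_rate
total_flags = false_positives + true_murder_searches

print(f"Flags per day: {total_flags:,.0f}")
print(f"Fraction of flags that are real: {true_murder_searches / total_flags:.6%}")
# Roughly 500,000 flags a day, and only about 0.002% of them point at an
# actual crime; no human review team could keep up with that.
```

Even with a false positive rate that sounds tiny, nearly every flag points at someone innocent, which is exactly the review problem described above.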
Now, in theory you could make the algorithm more complicated or use a neural network, but then you would need a lot of test scenarios or training data to ensure the algorithm was working correctly. Ironically, this means a lot of people would need to get murdered for the algorithm to get good at distinguishing between benign suspicious searches and malevolent suspicious searches.
It would be expensive for Google to develop this algorithm (and maintain it); they have no financial incentive to do it; and it would cause all kinds of Fourth Amendment issues. Those are all pretty important reasons not to develop an algorithm like this.
So disappointed in the last segment. Not a single mention of corpus delicti. I wanted Josh to ask Ken why the law thinks corpses are delicious.