Judges Now Rely On Secret AI Algorithms To Guide Their Sentencing of Criminal Defendants
Daily Kos, May 25, 2021. 'Sentenced By Algorithm.'
Imagine for a moment that you've been convicted of a crime and are awaiting sentencing. The prosecutor hands the judge a computer-generated report that declares, based on a secret analysis performed by a complex algorithm, that you should receive the harshest possible sentence, since according to the algorithm you are highly likely to commit future crimes.
Your attorney, hoping to rebut this conclusion, asks how the report was prepared, but the judge rules that neither you nor your attorney is entitled to know anything about its preparation, only its results. The judge then imposes the maximum sentence, based on this secret calculation.
If that sounds like something out of a dystopian science fiction novel, well, it's going on right now in several states throughout this country.
Jed Rakoff is a federal district judge for the Southern District of New York. A former federal prosecutor appointed to the bench in 1996, Rakoff has presided over some of the most significant white-collar crime cases in this country. He is generally recognized as one of the leading authorities on securities and criminal law, and as a regular contributor to the New York Review of Books, he often writes about novel and emergent criminal justice issues. His latest essay addresses the increasingly widespread use by criminal prosecutors of artificial-intelligence-based (AI) computer programs, or algorithms, to support sentencing recommendations for convicted criminal defendants.
These programs, which draw on a variety of controversial sociological theories and methods, are primarily used to assess recidivism (the propensity of a defendant to commit future crimes), and their scores are often given heavy weight by judges in determining the length of the sentence to be imposed. They also factor into decisions on setting bail or bond limits. The consideration of potential recidivism is based on the theory of incapacitation: the idea that criminal sentencing should serve the dual purpose of punishing a defendant and preventing him from committing future crimes, in order to protect society.
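At bottom, a tool of this kind is weighted-points arithmetic. Here is a minimal sketch of how such a score could be computed; every factor name, weight, and cutoff below is invented purely for illustration, since real tools keep their actual inputs and weights secret:

```python
# Toy illustration of a points-based "recidivism risk" score.
# All factors, weights, and thresholds are hypothetical.

RISK_WEIGHTS = {
    "prior_convictions": 3,       # points per prior conviction
    "age_under_25": 4,            # flat addition if under 25
    "unemployed": 2,
    "prior_parole_violation": 5,
}

def risk_score(defendant: dict) -> int:
    """Sum the weighted factors into a single number."""
    score = defendant.get("prior_convictions", 0) * RISK_WEIGHTS["prior_convictions"]
    if defendant.get("age", 99) < 25:
        score += RISK_WEIGHTS["age_under_25"]
    if defendant.get("unemployed", False):
        score += RISK_WEIGHTS["unemployed"]
    if defendant.get("prior_parole_violation", False):
        score += RISK_WEIGHTS["prior_parole_violation"]
    return score

def risk_band(score: int) -> str:
    """Map the raw score to the label the judge actually sees."""
    if score >= 10:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

d = {"prior_convictions": 2, "age": 23, "unemployed": True}
s = risk_score(d)       # 2*3 + 4 + 2 = 12
print(s, risk_band(s))  # prints: 12 high
```

The point is not these particular numbers but the structure: the court sees only the output label, while the weights and cutoffs, which could be tuned in ways that encode bias, stay hidden behind the "proprietary" label.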
Rakoff finds the use of these predictive algorithms troubling for a number of reasons, not the least of which are their demonstrated error rates and propensity for inherent racial bias. He notes that the theories by which they purport to analyze a person's propensity to commit future crimes are often untested, unreliable, and otherwise questionable.
However, his most recent essay for the NYRB, titled 'Sentenced by Algorithm' and reviewing former district judge Katherine Forrest's When Machines Can Be Judge, Jury, and Executioner, raises even more disturbing questions about the introduction of artificial intelligence technology into our criminal justice system.
Is it fair for a judge to increase a defendant's prison time on the basis of an algorithmic score that predicts the likelihood that he will commit future crimes? Many states now say yes, even when the algorithms they use for this purpose have a high error rate, a secret design, and a demonstrable racial bias...
More,
https://www.dailykos.com/stories/2021/5/25/2031872/-Judges-now-rely-on-secret-AI-algorithms-to-guide-their-sentencing-of-criminal-defendants
regnaD kciN
(26,044 posts)
There's now an insurance company, named Lemonade, that requires its clients who need to file a claim to provide a video statement about the incident, which the company can then analyze with its top-secret algorithm to determine if the client is lying. They claim it's of great benefit to the company in preventing losses due to fraud; I'm wondering why anyone would choose an insurance company that basically states in advance that it will bend over backwards to find any basis for denying your claim.
appalachiablue
(41,146 posts)
They have to follow suit in order to 'remain competitive' and not be at a disadvantage, or some other cover.
It's fast becoming a very strange world...
bucolic_frolic
(43,182 posts)

yonder
(9,667 posts)

berni_mccoy
(23,018 posts)

FBaggins
(26,748 posts)
Sentencing has long included a point system of aggravating factors and sentencing guidelines. Using a computer (it's silly to call it AI) to do the math doesn't change that.
Such calculations are intended to make sentencing less open to racial bias... not more.
There are no doubt errors - but judges make errors too...
Eugene
(61,900 posts)
Computer algorithms are based on the expectations and assumptions of their designers. If the data are faulty or the assumptions are prejudiced, the garbage results can now be cloaked as proprietary "trade secrets."
Facial recognition is an example of tech that is poorly tuned to persons of color and sometimes incompetently used. Garbage in, garbage out.
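The "garbage in, garbage out" point can be made concrete with a minimal sketch, using entirely made-up data: a predictor that does nothing but learn historical re-arrest rates will faithfully reproduce whatever enforcement skew is baked into its records. Assume two hypothetical groups with identical true behavior, where group B was simply policed twice as heavily, so twice as many of its re-offenses were recorded:

```python
# Toy "garbage in, garbage out" demonstration. The data are invented:
# groups A and B behave identically, but B's re-arrests were recorded
# at twice the rate because B was policed more heavily.

from collections import defaultdict

# (group, was_rearrested) records; the skew is in the recording, not behavior
history = [("A", True)] * 10 + [("A", False)] * 90 \
        + [("B", True)] * 20 + [("B", False)] * 80

def fit_base_rates(records):
    """'Train' the simplest possible model: per-group re-arrest rates."""
    counts = defaultdict(lambda: [0, 0])  # group -> [rearrests, total]
    for group, rearrested in records:
        counts[group][0] += int(rearrested)
        counts[group][1] += 1
    return {g: r / n for g, (r, n) in counts.items()}

rates = fit_base_rates(history)
print(rates)  # prints: {'A': 0.1, 'B': 0.2} -- B now looks "twice as risky"
```

Nothing in the model is overtly about group identity; it just summarizes the historical record, and the bias in that record comes out the other end as an objective-looking number.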
bucolic_frolic
(43,182 posts)
By historical statistics? That would be a formula for racial bias. AI is based on theory and numbers somewhere, and programmers write the code. If I were a judge, this would be an interesting sidelight, an awareness only.
So we have driverless cars that crash, AI mutual funds with asset allocation based on indexing and rebalancing, and now AI judgments to make "safe" judicial decisions. Oh yeah, and we have sports management based on metrics to strip the heart out of sports.
BUT, elections are rigged by bamboo sticks in the paper ballots.