Don’t ask AI to make life-and-death decisions

Sometimes, AI really can be a matter of life and death.

Last year, a Belgian man tragically ended his life after allegedly being persuaded to do so by a chatbot. In the Netherlands, there’s an ongoing debate over whether to allow the use of artificial intelligence, or AI, to support decisions around doctor-assisted suicide. Elsewhere, researchers are using AI to predict how likely late-stage cancer patients are to survive the next 30 days, which could allow patients to opt out of unpleasant treatments in their final weeks.

I’ve seen the tendency to ask AI life-and-death questions first-hand. After hearing I was a computer scientist, a professor at a university I was visiting immediately asked: “So, can your algorithms tell me the best time to kill myself?”

The woman wasn’t at risk of self-harm. Instead, she feared the onset of Alzheimer’s disease as she aged, and longed for an AI model capable of helping her determine the optimal time to end her life before cognitive decline left her unable to make consequential decisions.

Fortunately, I don’t often get such requests. But I do meet plenty of people who hope new technologies will eliminate existential uncertainties from their lives. Earlier this year, Danish researchers created an algorithm, dubbed the “doom calculator,” that could predict people’s likelihood of dying within four years with more than 78 percent accuracy. Within weeks, several copycat bots purporting to predict users’ death dates had appeared online.

From “Seinfeld” jokes to sci-fi stories and horror movies, the notion of an advanced computer telling us when we’ll die is hardly new — but in the age of ChatGPT, the idea of AI doing amazing things seems more realistic than ever. As a computer scientist, however, I remain skeptical. The reality is that while AI can do many things, it’s far from being a crystal ball.

Algorithmic predictions, like actuarial tables, are useful in the aggregate: They can tell us, for instance, approximately how many people will die in our community over a given time period. What they can’t do is offer the final word on any individual’s lifespan. The future isn’t set in stone: A healthy person might get hit by a bus tomorrow, while a smoker who never exercises might buck the actuarial trends and live to be 100.

Even if AI models could make meaningful individual predictions, our understanding of illnesses is constantly evolving. Once, nobody knew that smoking caused cancer; after we figured it out, our health predictions changed dramatically. Similarly, new treatments can render prior predictions obsolete: According to the Cystic Fibrosis Foundation, the median life expectancy for people born with the disease has climbed by more than 15 years since 2014, and new drugs and gene therapies promise greater gains in the future.

If you want certainty, this might sound disappointing. The more I study how people make decisions with data, though, the more I feel that uncertainty isn’t necessarily a bad thing. People crave clarity, but my work shows that they can feel less confident and make worse decisions when given more information to guide their choices. Predictions of bad outcomes can leave us feeling helpless, while uncertainty — as anyone who plays the lottery knows — can give us license to dream of (and strive toward) a brighter future.

AI tools can be useful in low-stakes situations, of course. Netflix’s recommendation algorithm is a great way to find new shows to binge — and if it steers you toward a dud, you can click away and watch something else. There are higher-stakes situations where AI is useful, too: When a fighter jet’s onboard computer intervenes to avoid a collision, say, AI forecasting can save lives.

The problems begin when we see AI tools as replacing, rather than augmenting, our own agency. Although AI is good at spotting patterns in data, it can’t replace human judgment. (Dating app algorithms, for instance, are notoriously terrible judges of compatibility.) Algorithms are also prone to confidently fabricating answers rather than admitting uncertainty, and they can show worrying biases based on the datasets used to train them.

What should we make of all this? For better or worse, we need to learn to live with — and, perhaps, embrace — the uncertainties in our lives. Just as doctors learn to tolerate uncertainty to care for their patients, so we must all make important decisions without knowing exactly where they will lead.

That can be uncomfortable, but it’s part of what makes us human. As I warned the woman who feared the onset of Alzheimer’s, it’s impossible for AI to quantify the value of a single lived moment — and the challenges that come with being human aren’t something we should be too quick to outsource to an unfeeling AI model.

The poet Rainer Maria Rilke once told a young writer that we shouldn’t try to eliminate uncertainty, but instead learn “to love the questions themselves.” It’s tough not knowing how long we’ll live, whether a relationship will last, or what life holds in store. But AI can’t answer these questions for us, and we shouldn’t ask it to. Instead, let’s try to cherish the fact that the hardest and most meaningful decisions in life remain ours, and ours alone, to make.


If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. There is also an online chat at 988lifeline.org.

Samantha Kleinberg is the Farber Chair Associate Professor of computer science at Stevens Institute of Technology and the author of “Why: A Guide to Finding and Using Causes.”

This article was originally published on Undark. Read the original article.
