
Fiction

The Black Widow Case

Dave Hangman on a crucial lawsuit taking place the day after tomorrow.

“Mrs Portman, we regret to notify you that our company will not be paying the sum insured for your husband’s death,” the suited insurance agent told her, wielding a fake smile behind his rimless glasses.

“What? That’s impossible!” she protested in surprise. “When we took out the accident insurance along with the vehicle, we were assured that it provided a hefty indemnity in the unlikely event of death or disability. I remember very well that you emphasized that such a case was so extremely unlikely that, precisely for that reason, the indemnity was so high.”

The widow in her thirties wore black. Her only touch of color was a pearl necklace of costume-jewelry quality. She had not removed her dark glasses, as a tangible reminder to the world of her grief.

“That’s right, your insurance is at the high end of the coverage the market offers. The problem is that you took out accident insurance, not life insurance, and what happened to your husband cannot qualify as an accident at all.”

“What wasn’t an accident? I don’t understand what you mean.”

“Let me explain. We have painstakingly reconstructed all the events leading up to your husband’s death. At the intersection of Devonshire Street and Reseda Boulevard, a vehicle carrying an entire family ran a red light and drove into the path of your husband’s car at such a speed that the autonomous vehicle found it impossible to brake in time to avoid the collision.”

“An accident, that’s for sure!”

“No, it wasn’t. Since it was impossible to brake in time, the vehicle had to choose between continuing straight ahead and possibly killing five people, three of them children, or swerving down an embankment into the parking lot of a shopping mall, risking only your husband’s life. Unfortunately, it landed on a parked vehicle that exploded on impact.”

“Are you telling me that our car chose to kill my husband instead of those children?”

“Not exactly. It chose to cause a lesser evil to prevent a greater evil from occurring.”

“But based on what you’re saying, my husband’s death is directly associated with the decision to save that family. By all accounts, that must be an accident.”

“Far from it, Mrs Portman. An accident is an unplanned occurrence that results in unintentional harm to persons or things,” the insurance agent explained, satisfied now that he was in his element. “It can be caused by a failure, mechanical or computer; by error, human or cyber; or by pure chance, such as a natural disaster. Your husband’s death was not the result of a contingency but of a voluntary decision by the AI governing the vehicle. Therefore, it cannot qualify as an accident and so is not covered by your insurance. Just as it would not be an accident if your husband had used his car to commit suicide.”

“Are you serious?”

“Very serious, Mrs Portman.”

“But there was human error when the driver of the other vehicle ran the red light. That’s the root of it.”

“Yes, but that was not the cause of your husband’s death. What caused his death was the AI’s decision to alter the route of his vehicle. That was a voluntary noncontingent decision.”

“That’s unheard of! According to that, if the AI avoids the greater evil, as is written into its programming, any collateral damage it produces would no longer be an accident and therefore there would be no insurance to cover it.”

“I see that you have understood perfectly. But for your peace of mind, yes, there would be insurance to cover it. The person would have to be insured regardless of the risks to which they were subjected. That’s what life insurance is for.” The agent smiled thinly.

“Damn it! You’re infuriating,” she shrieked hysterically.

“I understand your surprise, but if you read the details of the particular conditions of your policy you will see that this situation is clearly outlined in the case of autonomous vehicles.”

“But my husband is dead. We have two young children, a mortgage, and now we no longer have the income from his job. We have no savings. The bank will throw us out on the street. If that damn AI wanted to prevent a greater evil it should have taken all those circumstances into account.”

“I understand you, ma’am. But the AI can only take into account the specific circumstances surrounding the event. It cannot possibly evaluate the long-term consequences. That would be like asking it, when deciding, to also take into account the likelihood that one of the children in the other car would grow up to be a terrorist.”

“I’m the one who’s going to become a terrorist if you don’t shut up!”

The insurance agent fidgeted uncomfortably in his chair. He could see that the widow was very upset.

“Perhaps you could sue the company that developed the artificial intelligence software for having programmed behavior that led inescapably to your husband’s death. I think a good lawyer could raise it as a programmed injury and get you a nice settlement,” he suggested.

“Of course. As long as you don’t have to pay, you don’t care who has to pay.”

“I was just trying to help you.”

The widow pulled out a white handkerchief and wiped her eyes under her dark glasses.

“Come to think of it,” she said, “there was a contingency. The parked vehicle exploded after my husband’s car hit it. It might not have done.”

“But that was not the cause of his death; it was the AI’s willful decision to crash the vehicle with your husband inside.”

“That was not the cause of his death? I think, as you say, a good lawyer could prove that it was that explosion that killed him.”

Art © Steve Tarantino 2024. Please visit stevetarantino.com

“Yes, but I’m sorry to tell you that on a legal level that would have no bearing on our contract. It would be similar to a case in which your husband used his car to commit suicide but, upon impact with another vehicle, it was an explosion that ultimately killed him. The primary cause of his death would still be a voluntary decision to commit suicide.”

The widow stared silently at the insurance agent.

“So, you don’t plan to pay either way.”

“As I’ve explained, our policy only covers accident risk and that implies contingency and involuntariness.”

“I could sue both you and the software company.”

“You would be faced with two mutually contradictory judgments. Winning one would necessarily mean losing the other. In one you would have to claim that your husband’s death was due to a programmed act for which the software company must compensate you. In the other, by contrast, you would have to argue that it was an unintentional accident and should be covered by our insurance policy. It would be interesting to see if the courts admit both claims at the same time. You have before you a dilemma.”

“I see you are amused by all this,” Mrs Portman said angrily.

“No, far from it. I’m sorry for your loss, I really am. It’s just an exciting situation from a civil law and risk transfer point of view. Forgive me, it’s pure professional interest.”

Again, the widow wiped her eyes under her dark glasses with her white handkerchief.

“Who authorized a damn software company to make a decision about my husband’s life?” she asked bitterly.

“Clearly, your question indicates where the responsibility lies: with the people who programmed and trained that AI, and who put into its algorithms their own ethical values for deciding what to do in high-risk situations.”

“Is that so? Did they program the AI to make the decision that killed my husband?”

“We have contacted the software company. They have confirmed to us that all autonomous vehicles have a protocol implemented for decision-making in emergency situations. It is the same as in emergency health care. There is a triage protocol in place to classify and prioritize patients based on their severity and likelihood of recovery. To some extent, the physician decides who lives and who dies.”

“But in this case, it was a damn computer program that decided on life or death.”

“I would say, rather, that it was the people who designed those programs who decided, by incorporating their own value scheme within the AI, without giving your husband, in this case, the opportunity to decide for himself.”

The woman looked at him for quite a while, pondering his last words.

“What should I do, sue them?”

“Your insurance has good legal coverage that can help you defray the cost of litigation. You can even choose your own lawyers.”

• • •

Three months later, Mrs Portman filed two lawsuits.

First, she sued the company that had developed the software for the autonomous vehicle for programming a behavioral protocol that had inescapably led to her husband’s death. She also sued it for implementing its own ethical values without regard to those of the people involved, and for preventing her husband and any other autonomous vehicle users from being able to make their own decisions.

The judge ordered the software company to pay hefty compensation to the widow and required it to modify the AI’s programming so that, from then on, each user could implement his or her own ethical code in the AI’s algorithms. To do so, future users would have to answer an ethical questionnaire that would allow the AI to act in accordance with their ethical values in high-risk situations. The investment required would be substantial.

Mrs Portman also sued her insurance company for failing to compensate her for her husband’s death, claiming that it was a ‘malfunction’ of the vehicle’s software that prevented her husband from making decisions about his own life. The claim was based on the fact that, due to poor design, the system blocked any chance of her husband being able to save his own life. She argued that this was a manifest computer design error and that this contingency should therefore be covered by her policy.

The court agreed with her, so the insurance company was forced to pay the widow the full sum insured for her husband’s death. It also ruled that, from then on, this coverage should be reflected in the particular conditions of policies involving autonomous vehicles.

Best of all, legal expenses were also fully covered by her insurance policy.

Within a short time, both in the world of case law and in the insurance world, this case became known as the Black Widow Case: not only because of the very negative effect it had on the accounts of the companies involved, but because she had trapped them in their own legal web.

© Dave Hangman 2024

Pushcart-nominated and four-time ‘Writers of the Future’ honoree, Spanish writer Dave Hangman has appeared in numerous anthologies and literary magazines.

• This story was provided by After Dinner Conversation, an independent nonprofit that promotes philosophical and ethical discourse by publishing short fiction: afterdinnerconversation.com.


Questions For Consideration

1. Even today, software makes many decisions that influence our beliefs and behaviors and change our lives – related to engine flow regulators, air conditioning temperatures, which news and advertising we see, and so on. If those programs act within their instruction set in a way that causes harm, should it be deemed an accident, or a programmed choice?

2. Do you think a car with self-driving AI should first ask you if you prioritize (a) your own safety over others or (b) creating the greatest net safety for society even at the risk of your own life? And if you were asked, how would you answer?

3. Given that Large Language Model AI is trained on millions of examples rather than through a programmed set of rules, is it fair for human programmers to be liable when AI goes wrong?

4. Would your opinion about the lawsuit outcomes change if the liability theories created by the legal precedent caused car insurance rates to double? What if they caused companies simply to stop offering policies, or to abandon research into self-driving cars altogether?

5. Should a computer ever be put in a position to decide who lives and who dies? What if giving a computer that quick-decision-making authority caused a net gain in saved lives?
