Artificial Intelligence, Real Consequences: Rethinking Accountabilities in AI-related Litigations

Can computers think? Can submarines swim? 

With the proliferation of Artificial Intelligence (AI), our lives grow more fascinating, and stranger, by the day. AI was originally conceived as a tool to replicate human intelligence and create efficiency, and it can fairly be said to have achieved that purpose. The average person’s life is arguably easier today, yet AI simultaneously burdens the judiciary and policymakers with novel dilemmas. Its expanding applications and (relatively) newfound operational autonomy blur the lines of accountability, particularly in the context of adverse incidents.

So the next time a chatbot defames someone, a med-tech system produces an erroneous diagnosis, or a car’s autopilot injures an innocent pedestrian, courts will still struggle to determine who, or what, should take the blame.

The harms resulting from an AI malfunction are often assumed to be purely technological in nature, such as the loss of personal data. However, the ease with which AI is being integrated into cars, surgical robots and similar physical systems carries far more alarming liability implications.

Burdens and ethical dilemmas

Consider a scenario where you own the newest self-driving car. The vehicle, while in autopilot mode, crashes into another vehicle, and a court is now tasked with determining liability and damages. One view is that, despite not being in control of the wheel, you, as the owner of the vehicle, should be held liable.

Conversely, because the AI was in control of the car at the time of the accident, the manufacturer could instead bear the brunt. This scenario underscores the persistent ambiguities in existing liability frameworks, leaving difficult questions for judges and lawmakers to answer.

Liability decisions are further complicated as developers attempt to integrate moral choices into their AI systems. The supposed moral choices rest with the ‘artificial human’, but it ultimately falls to the developers designing these systems to make those choices in advance. To extend the example above, should the AI-controlled car hurt innocent pedestrians to save its passengers, or crash into a tree and hurt its own passengers while sparing the pedestrians? Car manufacturers are effectively forced to programme answers to the ‘Trolley Problem’ for when the AI finds itself in such a situation.