
AI and Karma: Who Is Responsible When a Machine Decides?
In 2016, the COMPAS risk-assessment system was helping judges in 13 U.S. states make parole decisions. Investigative journalists at ProPublica found that the system was nearly twice as likely to incorrectly flag Black defendants as "high risk" for recidivism compared to white defendants. The algorithm worked. The algorithm discriminated. And no one went to jail for it: not the developers, not the company, not the judges who applied the system.
This is not an isolated case. It is an illustration of the fundamental question of our time: who is responsible when a machine makes the decision? From a karma perspective, the question is where the consequences return when actions are taken through an algorithmic intermediary.
Three Levels of Responsibility in the AI Chain
Responsibility in the AI world is distributed, and this is precisely what makes it complex. The EU AI Act, which entered into force in 2024 with obligations phasing in through 2026, is the first attempt to systematize these levels.
Level 1: The Developer
Developers create algorithms, train models, and make architectural decisions. They determine which data the system trains on, which metrics are optimized, and which constraints are built in. Timnit Gebru, former co-lead of Google's AI ethics team, showed that most NLP (natural language processing) systems are trained on data that reflects historical societal biases. The system doesn't create bias — it inherits it from us. This raises a moral question for the developer: is she responsible for the bias in data she didn't collect?
Level 2: The Company and Regulator
Companies deploying AI systems bear responsibility for how they are used. If a bank uses a scoring algorithm that discriminates against certain groups, that is not an "algorithm problem". It is the bank's problem. The EU AI Act introduces a risk-based classification: high-risk systems (biometrics, credit scoring, critical infrastructure management) face mandatory audits.
Level 3: The User
The user — the final link in the chain — also bears a portion of responsibility. A judge relying on a COMPAS recommendation doesn't transfer moral responsibility to the system. This is no different from any other advisor: their advice, your choice. Neither an autonomous vehicle nor a surgical robot eliminates the responsibility of the operator who decided to use the technology in those circumstances.
Algorithmic Bias: How AI Inherits Our Mistakes
In 2018, Joy Buolamwini of the MIT Media Lab and Timnit Gebru demonstrated in the "Gender Shades" study that commercial facial recognition systems classify the gender of lighter-skinned men with roughly 99% accuracy, but darker-skinned women with as little as 65%. This is neither accident nor malicious intent. It is a direct consequence of data: most training datasets consist predominantly of images of white men.
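To see why a single headline accuracy number can hide this, here is a minimal sketch of a disaggregated audit in the spirit of Gender Shades. The records below are invented for illustration; the point is only that accuracy has to be computed per group, not in aggregate.

```python
# A minimal sketch of a disaggregated accuracy audit (illustrative data only).
# One aggregate accuracy number can hide large gaps between groups, so we
# report accuracy separately for each group.

from collections import defaultdict

# Hypothetical evaluation records: (group, true_label, predicted_label)
records = [
    ("lighter_male",   "male",   "male"),
    ("lighter_male",   "male",   "male"),
    ("darker_female",  "female", "male"),    # misclassification
    ("darker_female",  "female", "female"),
    ("darker_female",  "female", "male"),    # misclassification
    ("lighter_female", "female", "female"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in records:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in total:
    accuracy = correct[group] / total[group]
    print(f"{group}: accuracy {accuracy:.0%} ({correct[group]}/{total[group]})")

# Aggregate accuracy alone would report 4/6 ≈ 67% and say nothing about
# which groups bear the errors.
```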
Algorithmic bias is a mirror. AI is not biased on its own. It reflects the bias in the data it was trained on — and that data reflects the bias in the human decisions that created it. A vicious cycle:
- Historical data contains discrimination (e.g., Black applicants received fewer loans)
- The model learns to predict "success" based on this data
- The model reproduces discrimination as a "neutral" decision
- The decision is legitimized by the authority of the algorithm
This is the karmic mechanism of bias: injustice from the past is embedded in the system and reproduced in the future with technological efficiency.
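To make the loop concrete, here is a deliberately simplified sketch with synthetic data and a hypothetical proxy feature. It is not any real credit-scoring system; it only shows how a model that never sees a protected attribute can still reproduce a historical penalty against one group.

```python
# A simplified sketch of the bias feedback loop (synthetic data).
# Historical loan decisions penalized group B; a model trained on those
# decisions learns to reproduce the gap as a "neutral" prediction.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)             # 0 = group A, 1 = group B
income = rng.normal(50, 10, n)            # identically distributed in both groups

# Historical approvals: same income threshold, but group B was penalized.
historical_approval = (income - 8 * group + rng.normal(0, 5, n)) > 45

# The model never sees the group label, only a proxy that correlates with it
# (e.g. a postal code) -- a common way bias re-enters "blind" models.
proxy = group + rng.normal(0, 0.3, n)
X = np.column_stack([income, proxy])

model = LogisticRegression().fit(X, historical_approval)
predicted = model.predict(X)

for g, name in [(0, "group A"), (1, "group B")]:
    rate = predicted[group == g].mean()
    print(f"{name}: predicted approval rate {rate:.1%}")

# The historical penalty against group B reappears in the model's output,
# now wearing the authority of an "objective" algorithm.
```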
The Case of Autonomous Vehicles and Surgical Robots
Uber autonomous vehicle crash (2018). An Uber self-driving car struck and killed a pedestrian in Tempe, Arizona. The investigation showed that the system detected the object but repeatedly misclassified it and did not brake in time, while the safety operator behind the wheel was looking at her phone. Prosecutors declined to charge Uber, and the company settled with the victim's family; it was the safety operator, not an engineer, who was charged with negligent homicide. But the systemic question of who is responsible when an algorithm "decides" wrongly remains legally contested.
Surgical robots. Da Vinci systems are used in thousands of operations daily. They reduce surgeon error but do not eliminate it. When the robot makes a mistake, who is responsible? The surgeon who "pressed the wrong button"? The manufacturer? The hospital that decided to use the system? The answer depends on the specific case, and this is precisely what makes the situation karmically complex.
The Karma of Creators: The Ethics of the Developer
Software ethics is one of the fastest-growing fields of applied philosophy. Nick Bostrom, in "Superintelligence", posed the question radically: developers of AGI (artificial general intelligence) bear responsibility comparable to that of the developers of nuclear weapons. This is a strong claim, but it points to a real problem: the leading figures of the technological revolution rarely stop to interrogate the consequences of what they build.
Today, institutional responses are emerging. Partnership on AI, a coalition whose members include Google, Apple, Amazon, Facebook, IBM, and Microsoft, develops principles for ethical AI. The EU AI Act introduces legal responsibility for high-risk systems. MIT, Stanford, and Oxford have opened AI ethics institutes.
At the individual level, the karma of the developer looks like this: every technological system is an embodiment of the values of its creators. When you optimize for engagement metrics, you embody the value of "retain users at any cost". When you build diversity into training data, you embody the value of fairness. There is always a choice. For more on digital karma and how our online actions reflect our values, read our article on digital karma.
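A toy sketch of that choice (the items and their scores are invented): the same ranking code produces a different feed depending purely on which objective it is told to reward.

```python
# A toy illustration: the ranking you get depends entirely on the objective
# you choose to optimize. All scores below are invented for illustration.

items = [
    # (title, predicted_engagement, predicted_user_benefit)
    ("outrage clip",       0.95, 0.10),
    ("friend's photo",     0.60, 0.80),
    ("long-form article",  0.40, 0.90),
    ("clickbait headline", 0.90, 0.05),
]

def rank(items, objective):
    """Sort items so the highest-scoring ones appear first."""
    return sorted(items, key=objective, reverse=True)

# Value choice 1: retain attention at any cost.
by_engagement = rank(items, lambda it: it[1])

# Value choice 2: blend engagement with an estimate of user benefit.
by_blend = rank(items, lambda it: 0.5 * it[1] + 0.5 * it[2])

print([it[0] for it in by_engagement])  # outrage and clickbait float to the top
print([it[0] for it in by_blend])       # the ordering changes with the values
```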
Your Digital Decisions
Even if you're not an AI developer, you participate in this system daily. You use algorithmic recommendations, make decisions based on automated assessments, and trust or distrust systems. Every interaction with an AI system is a vote for a certain kind of technological development. Take the test at karm.top in the "daily" and "honesty" categories. For what happens when anonymity lets us avoid thinking about consequences, read our article on anonymity and morality; for the role of misinformation, see our article on fake news and disinformation.
Frequently Asked Questions
Can AI be fully "objective"? No. Any AI system reflects the values and assumptions of its creators through its choice of data, metrics, and constraints. The "objectivity" of an algorithm is often a mask for bias that is harder to challenge because it is expressed mathematically.
Who should be held accountable for AI mistakes? The EU AI Act offers a framework: companies that deploy high-risk systems bear responsibility. For harm to users, standard product liability law applies. In general, responsibility is shared among developer, deployer, and user.
How can an ordinary user influence AI ethics? Through consumer choices (which platforms to use), through demands for transparency (access to information about how algorithmic decisions are made), and through civic activism (supporting AI regulation).


