Me and the Other. Why can’t we blame the algorithm?

A team of researchers from Poznań University of Economics and Business (who are also members of the NAWA project) recently published a fascinating study in the Journal of Business Research on how managers make ethical decisions in recruitment when collaborating with AI or with other people. They conducted three experiments with more than 400 managers, presenting them with a dilemma: whether to use sensitive personal information about candidates to increase recruitment effectiveness. The results provoke reflection on something much deeper than HR procedures: the nature of moral responsibility and the boundaries of our ethical community.
The researchers observed something unexpected. When managers received recommendations from an algorithm, they took greater responsibility for potential violations of ethical norms than when the same recommendations came from the HR department.
This discovery makes us think about something fundamental: what exactly is responsibility, and why does our understanding of agency seem to differ depending on whether we work with a machine or a human?
Philosophers have long asked about the limits of moral agency. Who can be held responsible? We usually answer: the one who has intentions, who understands the consequences of their actions, who has something we call free will. The algorithm (at least according to our intuition) has none of this. It processes data according to patterns that someone has encoded. It does not “want” to achieve anything in the way humans do. It is a tool, even if an extremely advanced one.
But this is where the complication begins, as the study’s results show. Our attribution of intentionality is not a purely rational process. When we see someone reaching for coffee, we assume they “want” to drink, but our access to their inner state is indirect, based on interpreting behaviour and drawing on our own experience. Philosophers call this “the problem of other minds”.
It is similar with AI, only… different. We can see its “behaviour”: the algorithm analyses data, generates recommendations, and influences decisions. But when we ask what is “going on inside”, we encounter something that differs from the mystery of the human mind. There is no one there who would “experience” this process. Perhaps this is why managers cannot assign responsibility to AI the way they do to humans. Responsibility is not just a matter of causality. It is a question of agency understood more deeply: the ability to reflect morally, to feel the weight of decisions, to bear not only external but also internal consequences.
When a manager in the study worked with the HR department, they might have thought, “These are recruitment specialists. They know better. If they decided to use this data, it must have been justified”. Responsibility could be distributed among various moral agents. But when the algorithm suggested the same thing, something changed. The algorithm cannot be blamed in a moral sense; one can only speak of a technical glitch or an error in the data. Managers sensed this intuitively (“If I agree to this action, I will be responsible, not the AI”).
This reveals something deeper about the nature of our moral decisions. We do not act in a vacuum, judging actions against abstract principles. We operate in a network of relations with other entities, with whom we, consciously or not, negotiate responsibility, guilt, and consequences. The structure of this network may shape our choices even more strongly than our declared values. In organisations, these networks of responsibility are complex. “I was following orders”, “It was the team’s decision”, “Those were the procedures”. Each of these sentences is a way to diffuse responsibility, to embed one’s own action in a broader context. And it works; that is why people in organisational structures can make decisions they would avoid as individuals.
AI does not participate in this network in the same way humans do. We can cooperate with the algorithm; we can rely on its recommendations. But the study suggests that we cannot share responsibility with it in a psychological and moral sense. The algorithm is not (at least for now) a member of our ethical community. There is no face we can look at with reproach. There is no conscience that can be burdened.
This may explain the paradox the researchers observed: cooperation with AI in certain contexts can lead to more prudent ethical decisions, not because algorithms are wiser, but because their presence changes the structure of responsibility. When we cannot instinctively share the blame with a colleague from the HR department, when we know that in the end the decision is ours alone, we may become more reflective.
But it also raises a question: do we want to build systems in which ethical behaviour rests on the inability to escape responsibility? Shouldn’t we rather strive for organisational cultures in which people behave ethically regardless of the accountability structure?
The study shows something else. The difference in perceived responsibility emerged only when the scenario involved a violation of ethical norms. In morally neutral situations, managers did not differentiate between cooperation with AI and with humans. This suggests that our understanding of algorithmic agency is context-dependent: it shifts with the moral gravity of the situation.
And this leads to a last, open question. If AI becomes more advanced and increasingly autonomous in its decisions, will it, at some point, start to look like an entity to which we could assign responsibility? Not because it will acquire consciousness or intentions, but because it will be so complex that we stop understanding the mechanism of its operation and begin to treat it as a mysterious Other? And will we then, paradoxically, lose the effect the researchers are currently observing? When algorithms become impenetrable, when we can treat them as a “black box” with its own logic, will we again start to diffuse responsibility? Not in fact, but in our heads, in the way we justify our own decisions.
Maybe the biggest discovery of this study is not about AI but about us. It shows how relational our ethics are, embedded in a network of connections with other moral agents. And how subtle changes in this network (such as the arrival of a new type of co-worker who looks like an agent but to whom we cannot assign intentions) can change the way we make decisions.
This article is part of the project “People and Algorithms in Organisations: Competences to Work in the Digital Environment” (DIGIT_NAWA), funded by the NAWA – Narodowa Agencja Wymiany Akademickiej (Polish National Agency for Academic Exchange). #DIGIT_NAWA #AI #ArtificialIntelligence #Management #Leadership #HumanAICollaboration #ComplementaryAI #AIStrategy #BusinessStrategy #DigitalTransformation #FutureOfWork #AIResearch #NAWA