Imagine a robot that is programmed to steal. It has no consciousness, no moral sense, no conscience, no freedom, no will; it just does what it is programmed to do: steal people’s belongings. Even though we can describe this robot’s actions as being ‘objectively disordered’, we know that there is absolutely no moral agency involved. The robot is simply not the proper subject of moral language. It’s not a bad robot, it’s not an evil robot; words like ‘bad’ and ‘evil’ don’t apply to it. They might apply to the robot’s creator, but they don’t apply to the robot itself. Without the passport of freedom and knowledge, the robot can never be a citizen of the moral universe. No one would ever think of arguing about the robot’s level of guilt or responsibility because it has none.
Now imagine that you are a witness to a miracle. The robot suddenly wakes up. It is conscious of moral values, of right and wrong. However, it is still programmed to steal. It can’t not steal, but it now knows that stealing is wrong. But there is actually one small bit of freedom that the robot discovers it does possess. It can choose what to steal. Now imagine watching what happens when this conscious robot goes to its target house to do its nightly thieving. “What will it do?” you wonder, stuffing handfuls of popcorn into your mouth.
You watch this “Garden of Eden” moment unfold as the robot eyes the life savings stuffed under the mattress, and then looks at a little paperclip sitting on a table. It looks again at the money under the mattress, then at the paperclip, and hesitates. Taking the money would have grave consequences for the owners. Taking the paperclip would have trivial consequences. You sit utterly still, a spectator of the first moral decision of this robot’s life. Suddenly, the robot grabs the paperclip and takes off. What would your reaction be? Would you be happy at the outcome of the first-ever real decision of the robot’s life?
If you saw yourself cheering at that moment, what were you cheering for?
Were you cheering because the robot stole a paperclip? Or were you cheering because the robot decided to use what little freedom it had to spare its victim the serious harm that its programming created? If you saw yourself fist-pumping, was it not because, within the concrete circumstances of the robot’s life at this moment, it did the good it was capable of doing and avoided the evil it was capable of avoiding? Did you not have the intuition that the choice it made, under the circumstances, was good?
How should we describe or characterize what the robot did?
If we describe what happened as “stealing a paperclip” – is that really the whole truth about the robot’s action? The robot’s moral universe is not big enough to include the moral possibility of theft. Increasing harm or lessening harm was its entire moral universe at the moment. Our happiness at seeing the robot choose the paperclip was not us relishing the theft, it was rather the satisfaction of seeing the robot make the only moral choice possible. Even though objectively speaking the robot’s action fit the description of theft, morally it was an act of harm-mitigation, not theft. This is not an attempt to justify the objective actions of the robot, only to correctly describe the moral facts which necessarily include the robot as a conscious subject.
Everyone who lacks the power to do something, everyone whose freedom has been compromised, everyone bound by vice or addiction, becomes a little like that robot. To the extent that they have lost some of their willpower, their moral universe has shrunk to the same extent. And to that same extent they have also left the world of moral discourse. The point of this thought experiment is to see how to correctly characterize those who act out of weakness, rather than out of the fullness of the freedom of will that God created us to have. Thomas Aquinas tells us that “it may happen, on the part of the agent, that a sin generically mortal becomes venial, by reason of the act being imperfect, i.e. not deliberated by reason, which is the proper principle of an evil act.” Pope Benedict made the same distinctions when he said:
There may be a basis in the case of some individuals, as perhaps when a prostitute uses a condom, where this can be a first step in the direction of a moralization, a first assumption of responsibility, on the way toward recovering an awareness that not everything is allowed and that one cannot do whatever one wants.
This is another Edenic Moment, the start of a moral awakening. Pope Benedict goes on:
She [the Church] of course does not regard it [condom use] as a real or moral solution, but, in this or that case, there can be nonetheless, in the intention of reducing the risk of infection, a first step in a movement toward a different way, a more human way, of living sexuality.
Here too, despite the objectively bad situation, we have a harm-reduction motive and a first step in a direction of becoming less like the robot and more of a human being. So when Pope Francis says:
“conscience can do more than recognize that a given situation does not correspond objectively to the overall demands of the Gospel. It can also recognize with sincerity and honesty what for now is the most generous response which can be given to God, and come to see with a certain moral security that it is what God himself is asking amid the concrete complexity of one’s limits, while yet not fully the objective ideal.”
He isn’t saying that God is telling the weak person that it’s okay to do something evil, for the same reason that you weren’t cheering the robot for “stealing a paperclip.” The robot or the prostitute finding the only good intention there was to be found in that tiny sphere of freedom was a moment of grace, one that perhaps won them the grace of a little more freedom. The pope is saying, like the fathers at the Council of Trent, “do what thou art able, and pray for what thou art not able (to do).”