Morality in the tool box

A new study offers some interesting results suggesting that resistance to AI stems from moral, and not just “practical,” concerns. Pre-publication article here

While I might disagree that moral concerns are separable from “practical” or “pragmatic” ones, on the grounds that the practical results of one’s actions have a moral valence, I do think this is an interesting set of findings, and I hope to see more research on moral, ethical, and even religious reactions to advanced technologies.

I’ll offer one way to think about this. Tools (technologies, things) aren’t moral actors in their own right, but their creation and use has moral implications. The more specific a tool, the more the morality that led to its development comes into focus.

Anthropologist Margaret Mead is widely (and probably incorrectly) credited with saying that the first sign of civilization in an ancient culture is a healed femur. Whoever actually made this claim should’ve taken credit, because they created an interesting thought experiment. There’s a moral claim being made that places the beginning of human civilization at the ability and willingness to share someone else’s burden and care for a community member who, left to the course of nature, would surely die before their bone could heal. There is certainly something appealing about implicitly saying that human civilization dawned with a very specific kind of tool: a splint. This new technology made such healing possible. The splint, a tool whose only purpose is to heal an injury, and whose use assumes someone else is able and willing to take on the survival load of the injured person, carries moral urgency. As a fable of community, civilization, and technology, this is quite compelling. For the purposes of morality and technology, this story establishes one pole of moral technology: the morality of the splint is one of community, healing, and shared endeavors.

The morality of the sword is quite different and a good example of the other pole of morality and technology. As much as I like swords (I was a collegiate fencer, and I enjoy DnD as well as medieval books and movies), it is hard to avoid understanding their purpose. The morality of the sword stems from the idea that killing other humans is permissible and even sometimes desirable. At inception, a sword had no other purpose; there are far better, cheaper, and easier tools for building a house, excavating valuable resources, chopping wood, or any other creative endeavor. Even for defense against animals or people, spears, clubs, and similar simpler tools work better for most folks. The sword requires specialization to produce and to use effectively. It would seem to encourage, if not outright demand, hierarchy. To put it simply, swords are for specialists to use to kill people. The morality of the sword is one that elevates aggression in a zero-sum game of survival and conquest.

Stark contrasts here for single-purpose, specialized tools. What does that mean for us today?

Many of our digital tools have multiple purposes, and even their developers had multiple aims in creating them. The morality, then, shifts to the use of the tool. Just as with humans, tools of general utility can vary widely in their moral implications. We, as people, project our morality onto these tools by how we use them. The most effective users will, intentionally or not, imprint their moral agendas onto these tools. Use, over time, will also create specializations in these tools that show off the morality that “won” among their various users. As the tools seem to do more “thinking,” or at least have the autonomy to execute on decisions in ambiguous situations, the moral programming that guided their actual programming begins to have more “practical” or “pragmatic” import.

It’s no wonder, then, that as we create tools that are more and more like us, more autonomous and wide-ranging, we struggle with the moral implications of why our tools get created and what they will be allowed to do.

Title photo by Barn Images on Unsplash

Dialogue, Silence, Tech, & Trust

Ryszard Kapuściński, the Polish anti-colonialist journalist and poet, wrote: “Silence is a signal of unhappiness and, often, of crime. It is the same sort of political instrument as the clatter of weapons or a speech at a rally. Silence is necessary to tyrants and occupiers, who take pains to have their actions accompanied by quiet.”

There’s a lesson here about technology and trust: Who gets to be part of the conversation, and who gets silenced? With the immense power of new tech concentrated in the hands of a small number of technology owners, this is not an abstract question. It will determine whether tech erodes trust or whether we can earn trust.

Transparency or Dialogue?

Tech can be used to earn trust by increasing transparency and fairness. Or it can be wielded to destroy trust by enforcing silence. Unfortunately, I’ve seen far too much of the latter in recent weeks. The problem begins with how we talk about dialogue itself. Opening up honest dialogue is certainly a part of transparency. But that dialogue has to be honest and focused on understanding the truth. What we’re seeing instead is a troubling shift: the prioritization of dialogue as performance over actual transparency. Transparency demands the truth of what is being shared. Dialogue, stripped to its bare mechanics, demands only words.

When the owners and builders of new technology aren’t honest about its benefits and drawbacks, that erodes trust. When those owners create spaces where only their voices are amplified and others are quieted, that erodes trust further. When people witness technology being deployed to benefit a privileged few at the cost of the many, when they watch as tech is weaponized to silence inconvenient voices—that isn’t just unfair. It’s dangerous. And it threatens both technology and society.

For the past year, and especially at January’s World Economic Forum Annual Meeting in Davos, Switzerland, we’ve watched tech CEOs and billionaires celebrate their rising influence, increasingly finding themselves in positions of political power and favor. Their confidence is palpable. Their platforms are vast. And their willingness to use those platforms selectively, amplifying some voices while marginalizing others, has become impossible to ignore. The difference between private celebration and public suppression couldn’t be more stark. It couldn’t highlight the fundamental unfairness embedded in our current tech infrastructure more sharply.

Control and Silence

Kapuściński again: “Silence has its laws and its demands. Silence demands that concentration camps be built in uninhabited areas. Silence demands an enormous police apparatus with an army of informers.” Looking around, at Minneapolis and elsewhere in the US, I have to ask: what does it mean when the technology we develop becomes part of that silencing apparatus?

The contrast between the “dialogue” on the promenade of Davos and the repression on the streets of Minneapolis could not be sharper, and it highlights the unfairness of tech (and society) just as starkly. On the largest scale, social media, increasingly powered by AI, is shattering what’s left of our global digital commons. In the warring narratives of Minneapolis, the US federal government used whatever technical savvy it had at its disposal to elevate grudges and lies over truth and responsibility. As if seeking to embody this trend, TikTok, under new, ostensibly American and certainly less competent ownership, turned its addictive algorithm sharply to the right. On an individual level, the surveillance state we’ve been warning of for over a decade now allows for personal revenge against the administration’s enemies, with facial recognition used both to take away privileges and to track down new victims for ICE.

You can’t earn trust without listening to others; enforced silence breeds resentment and distrust. Using technology, especially the very tools that promised to connect us and democratize information, to create that enforced silence is a betrayal of innovation and of our best hopes for the future. Responsibility – for truth, for honesty, and for innovating for trust – can only be postponed, and people will only be silenced for so long.