The old adage that we should “begin with the end in mind” is still great advice, especially when it comes to new technology.

Consider the invention of the lightbulb and the widespread electrification of society. Here was a technology with a clear and unambiguous purpose – to illuminate a world that went dark when the sun went down.

“The days of my youth extend backward to the dark ages,” observed British inventor Joseph Swan, one of the first men to harness electric light. “Common people, wanting the inducement of indoor brightness … went to bed soon after sunset.” Swan’s prototype was a model for Thomas Edison’s far more successful lightbulb. It set off a technological revolution that unleashed great benefits to society, even if it inconvenienced a few candle makers.

Today’s technological revolution – the rapid advance of artificial intelligence (AI) – displays no such clarity of purpose. As we race to create new uses for AI throughout the economy, no one seems to know exactly where we’re going, or what the world will look like when we get there. Even Sam Altman, CEO of OpenAI and a key player in the AI revolution, is disturbingly flippant and honest about this reality. Last year he told TIME magazine: “No one knows what happens next.”

AI could pose a threat to the future of humanity itself – we simply don’t know. While current concerns about AI revolve around deepfake videos and autonomous vehicles, we need to realize there are much greater issues at stake.

Experts such as self-described philosopher and ethicist Amanda Askell suggest AI may soon be able to do “whatever intellectual work humans currently do.” What happens when AI replaces not just factory workers and cashiers, but the CEO of the company as well?

Replacing the entirety of human labour with machines would carry serious consequences – not just economic disruption, but a loss of self-esteem and life purpose. The fundamental question we must ask ourselves is: does AI actually make humanity better off?

Further ethical dilemmas abound as we approach the not-so-distant shore of a world driven by AI. As machines become smarter, how do we ensure that they reflect human values? Technology optimists like to claim AI is purer and more objective than messy human morality, and will thus help us upgrade our deficient innate operating software.

“I think the problem is that human values as they stand don’t cut it,” said Google Research executive Blaise Agüera y Arcas. “They’re not good enough.”

Agüera y Arcas believes it is possible to use computer code to create a better moral framework than what humans can provide. But even then, someone has to decide how to build such a machine; who will train computers to be better than humans themselves? And what happens when some artificially intelligent machine achieves moral superiority to man? Are we to bow before our new digital masters?

Meanwhile, AI research is driven by the same tech culture that makes a virtue of “breaking stuff” and “failing fast.” As AI scholar Kate Crawford points out, most AI development today goes on without any review or oversight of the ethics involved. We need to fix this.

One way would be to require a regulatory framework for software developers modeled on the current system for professional engineers who design bridges and build roads. This would ensure those who build AI are instructed in the ethical implications of their work and held to account through strict standards and regulations.

But placing new restrictions on AI developers is only a first step. If we are going to truly grapple with the ethical considerations of our current AI revolution, society at large must come to terms with its own morality. “A state is not a mere casual group,” the ancient philosopher Aristotle once observed. Rather, it is a community of shared understandings and beliefs. But how can we decide whether AI-generated pornography is ethical, for example, when we don’t even agree on whether pornography itself is ethical?

Before we can instruct a machine to act morally, we need to define what it means for a human to be moral. Unfortunately, it is apparent throughout the Western world today that there is no collective agreement on what is good. Our current political debates focus on identifying oppressors and oppressed, while ideologies like critical race theory encourage tribal identities. All this is a rejection of the West’s Judeo-Christian foundations. Our confusion about the ethics of AI is thus a symptom of a deeper societal malaise. Amidst the rise of artificial intelligence, it is paramount that we align our own values before trying to assign such values to machines.

Instead of AI for AI’s sake, we want AI for humanity’s sake.

D.C.C. (Danny) Randell is an Alberta writer specializing in technology and society. The longer original version of this essay first appeared at C2CJournal.ca.