The Human Touch in the Age of AI: Why Ethics Matter More Than Ever
by Roger Steare and Lynn Murape
Unleash the power of AI ethically. Explore the moral landscape of AI, from its limitations to its potential for good. Discover how our emotions and responsible AI usage can shape a future where AI serves humanity – not one where it becomes a Weapon of Mass Deception.
Generative AI: A Paradigm Shift
In just a few short years, generative AI has fundamentally changed the way we work and interact with technology. The days of confining this cutting-edge tool to specialised research labs are over. Now, instead of a fancy tech degree, all one needs to leverage GenAI is a stable internet connection – which, of course, puts a lot of power into anybody’s hands …
Given that this power can be used for good or ill, a loaded question follows: can we trust AI to help us make good decisions? Placing trust (a human quality) in an object speaks to our tendency to anthropomorphise our creations, which of course leads to all kinds of problems. Let’s unpack this further.
The Intelligence Debate
While GenAI cannot display moral intelligence, can the same be said of intelligence in the cognitive sense? Yes, the word ‘intelligence’ sits at the heart of this technology, but the answer is not as clear-cut as you might think. On one hand, some argue that because this technology excels at recognising and recombining patterns, it is capable of original thought and creativity. Others maintain that the ability to think and be creative is exclusively a human trait – because machines don’t have cognition, right? Sure, its capabilities are impressive, but what may seem like an original thought is simply the output of pre-programmed algorithms.
Man Meets Machine
According to McKinsey (and so many others), AI adoption has increased dramatically worldwide over the last year and only seems to be gaining momentum. Although there is a plethora of benefits to reap, there is still another side to the coin. What happens – at an ethical level – when man meets machine?
The rapid development of GenAI has raised concerns about its potential for misuse – from spreading misinformation and generating harmful content to perpetuating biases and infringing intellectual property. A myriad of mishaps are waiting to be orchestrated when this technology falls into the wrong hands. It follows that the responsibility to use AI as a force for good lies squarely on each of our shoulders – but we all know that the right thing to do is not always the easiest choice to make. This is because, whether we like it or not, our emotions play an integral part in our decision-making.
“We have palaeolithic emotions, medieval institutions and God-like technology.”
E.O. Wilson, Biologist, Harvard University
Why Emotions Matter in the Age of AI
Let’s take a moment to stretch our minds back to our preschool days: this is when our sense of morality started to develop. While our values were largely predetermined by our parents and other figures of authority, we still had a basic sense of right and wrong. In the rare moments when we didn’t have a grown-up telling us what to do, we had only one thing to rely on: how we felt. Then, as we aged, (hopefully) matured and started making more complicated choices, we learned to “trust our gut” and consult our emotions more.
While “feelings aren’t facts”, emotions play a crucial role in our decision-making processes. They can motivate us to act in certain ways, shape our perceptions, and influence our judgments. Reason and logic are important too, but emotions – something machines don’t have – provide a valuable source of information and guidance.
Given that GenAI systems have no emotional intelligence, they are unable to make nuanced moral judgements. But can the same be said of neurodivergent human beings who appear withdrawn and unemotional? Short answer: absolutely not! This question was raised at my latest speaking engagement, at The Corporate Research Forum Annual Conference in Malta. In fact, the research is very clear that the opposite is often true: our neurodivergent friends and colleagues may have even stronger emotions than those who are neurotypical – they just choose not to show them as much.
So, as impressive as AI is, it has no early childhood memories that have become life lessons. It has no ability to “sleep on it”. It has no feelings. All AI has is the programming we’ve created and the prompts that we input. To AI, there is no “good” or “bad”: there is only code.
“In logic there are no morals”
Rudolf Carnap, Philosopher and Logician
Lynn Murape is a marketing content creator, a qualified mental health practitioner and a podcaster. Please contact her here
