AI is a distorted rear-view mirror

I’m reading “The AI Mirror” by Shannon Vallor. Shannon is a philosopher like me. She is the Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence in the Department of Philosophy at the University of Edinburgh and a former AI Ethicist at Google. She argues that AI acts as a “mirror”, reflecting back the values, biases, and assumptions embedded in the “oceans” of data and algorithms we have created. Note the past tense. AI is not a portal to the future; it is a distorted rear-view mirror.

Potential Dangers and Profit-Driven Motivation
Shannon highlights the potential dangers of AI systems perpetuating harmful biases, eroding privacy, and undermining human moral agency if we don’t critically examine the values and assumptions driving their development and use. We also need to remember that the biggest motivation behind the development of AI systems is profit, not principle. Tech companies like NVIDIA, Microsoft, and Google are now trillion-dollar businesses.

Technical Expertise With Philosophical Reflection
She advocates for a “techno-philosophical” approach that combines technical expertise with philosophical reflection to ensure AI aligns with human moral values. She calls for us to develop “virtuous” character traits like humility, courage, and compassion to help us navigate the challenges posed by AI.

I couldn’t agree more. My friend and colleague Pavlos Stampoulides is currently testing ChatGPT, Genesis and Claude with our MoralDNA Profile, and we can already see some frightening flaws, including their inability to incorporate critical human virtues.

Moral Reasoning and Ethical Frameworks
The book also emphasizes the need for ethical frameworks to guide the development and deployment of AI systems, ensuring they respect human rights, dignity, and autonomy rather than undermining them.

This is where I began my work as The Corporate Philosopher in 2002 and why there is still such a need for moral reasoning and ethical frameworks in decision-making.

So if you’re looking for an ethical framework, my own book, “ethicability”, is still available!
