From “Me” to “We”. In conversation with Richard Watson

The CHATROG Podcast, Episode 5


In this episode of the CHATROG Podcast, I have a deep and thoughtful conversation with Richard Watson, a renowned futurist, fellow philosopher, and author of Digital vs Human. The discussion delves into the opportunities and ethical risks associated with artificial intelligence. Richard shares his scepticism about current AI trends, particularly expressing concern that, if guided by the economic interests of governments and corporations, AI could undermine human agency and creativity. He points out that AI frequently transforms human behaviour into data for commercial and governmental purposes, often without the explicit consent of individuals.

Accountability and bias

A major theme is the problem of accountability and bias. Richard warns of a future in which AI systems act as “digital dictatorships,” making decisions without sufficient human oversight. He references real-world scandals, such as the Post Office Horizon debacle and the Netherlands welfare fraud case, to illustrate the dangers of unaccountable, opaque AI decision-making. Contrary to the popular belief in AI’s objectivity, he argues that these systems often exacerbate existing biases and inequalities. Even so, Richard sees real value in responsible AI use, where technology acts as a collaborator that enhances rather than replaces human insight, especially in fields like medicine and education where empathy and critical thinking are indispensable. He cautions, however, against the risk of losing crucial human skills as professionals like pilots and lawyers become over-reliant on AI for routine tasks.

Environmental costs

The environmental cost of AI, such as the energy, water, and mineral resources it requires, is another frequently overlooked topic that Richard raises. He suggests that AI’s environmental footprint may in fact outweigh that of industries like aviation.

Creativity and authenticity

On the creative front, Richard believes that most AI-generated content simply mimics existing styles and lacks genuine creativity or lived experience. He advocates for clear labelling of content as either “machine-generated” or “human-generated” to give audiences the ability to make informed choices, especially as few people can reliably tell the difference. In his view, the increasing prevalence of artificial content may ultimately increase society’s appreciation for distinctively human qualities like creativity, care, and empathy.

Social and psychological impact

Social and psychological consequences of AI and technology are also addressed. According to Richard, AI-driven platforms and smartphones foster addiction by design, contributing to loneliness and making it harder—especially for children—to form authentic relationships. He supports robust regulation, such as restricting phone use in schools, and encourages a balance between screen time and real-world engagement.

From “Me” to “We”

Richard’s moral philosophy suggests a necessary shift from a focus on individual rights to shared duties and responsibilities, invoking Confucian and communitarian ideas. He calls this the shift from “me” to “we”. He believes that flourishing communities and meaningful relationships are key to ethical development and well-being, and that technology should serve to strengthen these bonds rather than erode them.

Our futures

The podcast concludes with contrasting visions of the future. On the pessimistic side, Richard imagines a world dominated by hyper-individualism, loss of privacy, environmental damage, and AI-driven social alienation. On a more optimistic note, he envisions technology fostering a shift from “me” to “we,” helping to forge community connections and amplify our creative and caring capacities. As practical advice, Richard suggests removing email from smartphones, making time for rest and reflection, reading critical works like Shannon Vallor’s ‘The AI Mirror’ and Shoshana Zuboff’s ‘The Age of Surveillance Capitalism’, and seeking out opportunities to help others and engage with different perspectives. Richard challenges us to rethink AI’s purpose, champion accountability, and reimagine our future based on collective moral progress rather than unchecked innovation.


You can listen to all the episodes of the CHATROG podcast here.