Supremacy Book Review

Supremacy – Parmy Olson

AI seems to be everywhere at the moment. The media machine and the echo chamber of social media algorithms have bathed my feeds in news, white papers, and academic articles about the latest digital revolution. I have to admit that before reading Supremacy, AI and the large language models driving it were something of a foreign concept to me.

Although Supremacy sets out a history of just two AI companies (OpenAI, the maker of ChatGPT, and DeepMind) and the personalities that shaped, invested in, and subsequently bought them, Parmy Olson does a great job of explaining throughout what drives AI tools—at least the ones that are competing head-on with search engines.

If you want a history of AI and a glance at its potential future, this is a great book to start with. It is surprisingly entertaining and deeply fascinating while still being very ‘quaffable’. Having recently finished the book, I am about to flip back to page 1 and start again, as I am sure I will find a few more golden nuggets of insight I didn’t pick up the first time.

Book Review Related Articles

Dr Paul Marsden | Ep 10

Watch or listen:
The Human-AI Interaction

In this episode, we explore AI, Positive Psychology, Digital Wellbeing and technology.

Connect with the guests
Dr Paul Marsden

Paul is a Chartered psychologist specialising in consumer behaviour, wellbeing and technology. He is a university lecturer at UAL and a consultant consumer psychologist with Brand Genetics.

Paul believes that the biggest digital disruption is the one happening in our heads, as technology transforms our identity, experiences and relationships. He helps businesses understand how this digital disruption influences consumer needs, motivations and behaviour.

He lectures on consumer trends and consumer psychology at the business school of the London College of Fashion, where he also researches the phenomenon of “enclothed cognition” – the psychological impact of our clothes on how we think.

He co-founded Brainjuicer PLC (now System1 Group), a research company that uses online psychological techniques to understand consumers.


In today’s episode of Confessions of a CyberPsychologist, I chat with Dr Paul Marsden about the intersection of technology, well-being, and AI-driven futures. From his early fascination with Blade Runner to developing AI “co-workers” for businesses, Paul offers an inspiring take on how technology can be harnessed for positive change.

Growing up with Cyberpunk Influences

Paul’s childhood fascination with Blade Runner ignited his interest in synthetic humans and the blurred boundaries between human and artificial intelligence. This early inspiration spurred him on to do a PhD and eventually co-found Brainjuicer (now System1), a research agency based on word-association techniques.

Positive Psychology and the ARC of Happiness

He underlines the importance of positive psychology, highlighting self-determination theory—specifically the ARC model (Autonomy, Relatedness, and Competence)—as a guiding principle:

  • Autonomy: We thrive when we feel in control of our choices, whether online or offline.
  • Relatedness: Connection to others—be it through social media, communities, or shared experiences—bolsters our well-being.
  • Competence: Feeling capable and “savvy” boosts happiness and encourages repeat engagement in both digital and consumer spaces.

He points out how brands and technology platforms that support these three needs often inspire loyalty, promote well-being, and turn user satisfaction into sustainable profitability.

AI for Well-Being and Business

We talk about the potential of AI assistants—like “Mia” or “Lexi”—that can be given specialised skills and even distinct personalities. In business contexts, these AI “co-workers” not only handle routine tasks (filling forms, summarising documents, analysing data) but also provide empathetic support to employees. By tailoring AI with human-like qualities and memory, users build genuine rapport with their digital counterparts.

He also discusses AI’s capacity to democratise therapy by offering empathetic conversations and personalised mental health support, particularly in a world where counselling resources are stretched. AI’s potential to reduce barriers—such as cost, stigma, and lack of local services—could open mental health support to a far wider population.

Debunking the Doom and Gloom

From I am Gen Z (a documentary highlighting the digital anxieties of Gen Z) to the ongoing debate between Elon Musk’s AI pessimism and Mark Zuckerberg’s optimism, Paul emphasises a balanced view. He sees technology not as inherently harmful but as a tool whose impact depends on how it is used—and crucially, whether it displaces activities that meet our innate ARC needs. He also stresses that regulatory or institutional fears shouldn’t block everyday people from accessing the transformative possibilities of AI.

Looking Ahead

Cyberpsychology is evolving beyond its traditional focus on social media, cyberbullying, and online risks. The rise of AI offers a reset moment: a chance to embrace an applied, more positive strand of cyberpsychology that explores how humans and machines can co-create value, enhance well-being, and tackle real-world challenges.

Key Takeaways:

  1. ARC Model – A simple but powerful framework for designing technology and consumer experiences that promote happiness and loyalty.
  2. Empathy in AI – Human-like AI can boost productivity and provide meaningful emotional support, reshaping how we work and care for each other.
  3. Digital Literacy – Embracing AI across education and business is crucial; we risk falling behind if we ban or overly restrict these tools.
  4. Positive Outlook – Technology reflects our choices and motivations; focusing on the good it can do encourages innovation and shared prosperity.

Recommended Reading

  • Anything by William Gibson (who coined “cyberspace”)
  • Without Miracles by Gary Cziko
  • Darwin’s Dangerous Idea by Daniel Dennett

By shifting our lens towards the potential for well-being, AI, and positive psychology, we can steer cyberpsychology from a “doom and gloom” narrative to one centred on thriving, innovation, and meaningful human connections.

Acronyms:

AI: Artificial Intelligence

LLM: Large Language Model

Other podcasts
Reining in artificial intelligence

The global attempt to rein in Artificial Intelligence

What was considered Artificial Intelligence science fiction only a decade or so ago has become a reality, or at least a plausible near future. It has been suggested that designers in Silicon Valley use SciFi as an inspiration for the creation of new technology.

Although there is a practical perspective to AI, there are also psychological consequences to this growing part of our digital technology. Some of these consequences are already evident; others are still to emerge.

Psychological Aspects of AI 

  • AI has the potential to reduce the value of human-based work, including but not limited to journalism, administration and creative design. But it also has the ability to enhance productivity in various fields
  • A lot of ink has been spilt over how AI will steal jobs. But history is littered with both job losses and new job creation arising from technological inventions – e.g. the tractor, the printing press and online banking – and this doesn’t seem fundamentally different
  • Scare-mongering by the media (and by high-profile tech giants) can cause unnecessary moral panic, resulting in fear-based passivity rather than a proactive focus on how to train for future employment
  • But humans are extraordinarily adaptive, with the ability to learn new skill sets and find new career paths through these innovations
  • Maybe it is time to change the narrative around AI to be more about mitigating the harms and building future-focused AI skill-set resilience

November 2023 AI Safety Summit

The psychological implications aside, there is still enough concern about the practical elements of AI that an AI Safety Summit took place at Bletchley Park in the UK on 1–2 November 2023 to talk through how to manage and regulate AI going forward.

Elon Musk has often warned about the dangers of AI. He spoke to British Prime Minister Rishi Sunak at the Safety Summit about the dangers, positives and potential restrictions that need to be placed on those who build AI.

The summit is a great start in the process of better understanding the human impact of AI. A few highlights gleaned from The Evening Standard’s reporting on Elon Musk’s comments, both before and during the Summit, are:

More intelligent than humans

  • AI and machine learning have the ability to be more intelligent than the smartest humans
  • It is not clear whether AI can be controlled, so there needs to be an overriding ‘off switch’

AI will operate according to its programming

  • AI is biased in that its foundational programming will be in line with the belief systems, biases and worldviews of those who write the programs. If these are implicitly in contrast with the greater good of humanity, the outcomes could become counter-productive.

Existential risk

  • The greatest risk lies in AI becoming accidentally ‘anti-human’ – which is what we should be most concerned about
  • While Elon Musk regularly mentions the threat of extinction from AI, others suggest the threat is more akin to a pandemic or nuclear war, and still others consider it minimal – although it is impossible to predict
  • He also mentioned risks such as deep fake videos, disinformation from fake bot social media accounts and humanoid robots
  • The greatest risks come from ‘frontier AI’, which will probably exceed today’s AI models, or from losing control of these systems – especially around potential misuse 
  • Although Elon Musk does think that AI will mostly be a force for good (such as personalised tuition for children), he also suggests that AI will result in a time when there are no jobs

International regulatory body

  • Rishi Sunak wants to make the UK a ‘Global Hub’ for AI safety
  • A neutral third-party regulatory body should be set up to ‘observe what leading AI companies are doing and at least sound the alarm if they have concerns’
  • All 28 countries will recommend an expert to join the UN-backed global panel on AI safety

The Bletchley Declaration

  • All 28 countries that attended the summit have signed the ‘Bletchley Declaration’, which sets out a shared understanding of the risks of AI and pledges to develop AI models that are safe and responsible
  • These governments and major AI companies (OpenAI and Google DeepMind) will work together to research and manage AI risks, including external safety testing on new AI models that may hold some risk to society and national security
  • It resolves to ‘work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe’
  • The document is not legally binding, but rather a voluntary agreement

Read more: 

You can read more of the reporting from the global summit, written by Martyn Landi, Gwyn Wright and Mary-Ann Russon of The Evening Standard, via the links below.

Elon Musk says AI one of the ‘biggest threats’ to humanity.

AI Safety: What have we learned?

Elon Musk: AI could pose existential risk if it becomes ‘anti-human’.

Elon Musk tells Rishi Sunak: AI ‘the most destructive force in history’
