Supremacy Book Review

Supremacy – Parmy Olson

AI seems to be everywhere at the moment. The media machine and the echo chamber of social media algorithms have bathed my feeds in news, white papers, and academic articles about the latest digital revolution. I have to admit that, before reading Supremacy, AI and the large language models that drive its algorithms and machine learning were largely foreign concepts to me.

Although Supremacy sets out a history of just two AI companies (OpenAI, the maker of ChatGPT, and DeepMind) and the personalities that shaped, invested in, and ultimately acquired them, Parmy Olson does a great job of explaining throughout what drives AI tools—at least the ones that are competing head-on with search engines.

If you want a history of AI and a glance at the potential future, this is a great book to start with. It is surprisingly entertaining and deeply fascinating while still being very ‘quaffable’. Having recently finished the book, I am about to flip back to Page 1 and start again – as I am sure I will get a few more golden nuggets of insights I didn’t pick up the first time.


Dr Paul Marsden | Ep 10

Watch or listen:
The Human-AI Interaction

In this episode, we explore AI, Positive Psychology, Digital Wellbeing and technology.

Connect with the guests
Dr Paul Marsden

Paul is a Chartered psychologist specialising in consumer behaviour, wellbeing and technology. He is a university lecturer at UAL and a consultant consumer psychologist with Brand Genetics.

Paul believes that the biggest digital disruption is the one happening in our heads, as technology transforms our identity, experiences and relationships. He helps businesses understand how this digital disruption influences consumer needs, motivations and behaviour.

He lectures on consumer trends and consumer psychology at the business school of the London College of Fashion, where he also researches the phenomenon of “enclothed cognition” – the psychological impact of our clothes on how we think.

He is chartered by the British Psychological Society and holds a PhD focused on online psychological research techniques. He co-founded BrainJuicer PLC (now System1 Group), a research company that uses online psychological techniques to understand consumers.


In today’s episode of Confessions of a CyberPsychologist, I chat with Dr Paul Marsden – a consumer psychologist who specialises in consumer behaviour, wellbeing and technology.

We talk about:

00:35 What got Paul into CyberPsychology and the psychology of online behaviour.

02:58 How technology and CyberPsychology relates to positive psychology (Autonomy, Relatedness and Competence) and consumer behaviour.

09:34 The ‘_iAmGenZ’ documentary Paul appears in and the correlation between technology use and youth mental wellbeing.

16:57 AI Large Language Model (LLM) assistants being developed for financial institutions and research companies – helping people be more effective and productive, and supporting motivational analysis to identify people’s underlying or hidden motivations.

24:07 The future of AI and mental health therapy – the democratisation and personalisation of otherwise expensive therapy and the relationship we build with AI in therapy.

28:25 The positive impact of AI and technology (that appears to be intelligent) in the future, how its use becomes an extension of ourselves to do greater things than we could do without it, and how AI can help us with interacting with other humans.

41:24 How, as CyberPsychologists, we should future-proof our careers by focusing primarily on applications of human-AI interaction, rather than on a generally negative focus on disorder and dysfunction in digital technology use.

51:26 The greatest concerns he has around AI in the future – especially with students (using AI as part of their education) and within business.

55:56 Advice for CyberPsychologists who are looking to get into AI as a career.

58:55 Focusing on finding out what makes you happy and makes your life worth living, in accordance with The Arc of Happiness – allowing you to thrive and do things that promote wellbeing.

1:03:15 The increase in media literacy and critical thinking around click-bait and misinformation.

If you are interested in the future impact of AI on human potential and the role positive psychology plays in technology and AI, you will probably find this episode interesting.

Acronyms:

AI: Artificial Intelligence

LLM: Large Language Model

Other podcasts
Reining in artificial intelligence

The global attempt to rein in Artificial Intelligence

What was considered Artificial Intelligence science fiction only a decade or so ago has become a reality, or at least a plausible near-term prospect. It has been suggested that designers in Silicon Valley use SciFi as inspiration for the creation of new technology.

Although there is a practical perspective to AI, there are also psychological consequences to this growing part of our digital technology. Some of these consequences are already self-evident, others are still to emerge. 

Psychological Aspects of AI 

  • AI has the potential to reduce the value of human-based work, including but not limited to journalism, administration and creative design – but it also has the ability to enhance productivity in various fields
  • A lot of ink has been spilt over how many jobs AI will take. But history is littered with both job losses and new job creation arising from technological inventions – e.g. the tractor, the printing press, and online banking – and this doesn’t seem fundamentally different
  • Scare-mongering by the media (and by high-profile tech giants) can cause unnecessary moral panic, resulting in fear-based passivity rather than a proactive focus on how to train for future employment
  • Humans, however, are extraordinarily adaptive and have the ability to learn new skill sets and find new career paths through these innovations
  • Maybe it is time to change the narrative around AI to focus more on mitigating the harms and building future-focused AI skillset resilience

November 2023 AI Safety Summit

The psychological implications aside, there is still enough concern about the practical elements of AI that an AI Safety Summit took place at Bletchley Park in the UK on 1–2 November 2023 to discuss how to manage and regulate AI going forward.

Elon Musk has often warned about the dangers of AI. He spoke to British Prime Minister Rishi Sunak at the Safety Summit about the dangers, positives and potential restrictions that need to be placed on those who build AI.

The summit is a great start in the process of better understanding the human impact of AI. A few highlights gleaned from The Evening Standard articles reporting on Elon Musk’s comments before and during the summit are:

More intelligent than humans

  • AI and machine learning have the ability to be more intelligent than the smartest humans
  • It is not clear whether AI can be controlled, so there needs to be an overriding ‘off switch’

AI will operate according to its programming

  • AI is biased in that its foundational programming will be in line with the belief systems, biases and worldviews of those who write the programs. If these are implicitly in contrast with the greater good of humanity, the outcomes could become counter-productive.

Existential risk

  • The greatest risk lies in AI accidentally becoming ‘anti-human’ – this is what we should be most concerned about
  • While Elon Musk regularly mentions the threat of extinction from AI, others suggest the threat is more akin to a pandemic or nuclear war, and still others suggest the threat is minimal – although it is impossible to predict
  • He also mentioned risks such as deep fake videos, disinformation from fake bot social media accounts and humanoid robots
  • The greatest risks come from ‘frontier AI’, which will probably exceed today’s AI models, or from losing control of these systems – especially around potential misuse 
  • Although Elon Musk does think that AI will mostly be a force for good (such as personalised tuition for children), he also suggests that AI will result in a time when there are no jobs

International regulator body

  • Rishi Sunak wants to make the UK a ‘Global Hub’ for AI safety
  • A neutral third-party regulatory body should be set up to ‘observe what leading AI companies are doing and at least sound the alarm if they have concerns’
  • All 28 countries will recommend an expert to join the UN-backed global panel on AI safety

The Bletchley Declaration

  • All 28 countries that attended the summit have signed the ‘Bletchley Declaration’, which sets out a shared understanding of the risks of AI and pledges to develop AI models that are safe and responsible
  • These governments and major AI companies (OpenAI and Google DeepMind) will work together to research and manage AI risks, including external safety testing of new AI models that may pose a risk to society and national security
  • It resolves to ‘work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe’
  • The document is not legally binding, but rather a voluntary agreement

Read more: 

You can read more about the reporting from the global summit, written by Martyn Landi, Gwyn Wright and Mary-Ann Russon of The Evening Standard, via the links below.

Elon Musk says AI one of the ‘biggest threats’ to humanity.

AI Safety: What have we learned?

Elon Musk: AI could pose existential risk if it becomes ‘anti-human’.

Elon Musk tells Rishi Sunak: AI ‘the most destructive force in history’