What was considered Artificial Intelligence science fiction only a decade or so ago has become a reality, or at least a potential certainty. It has been suggested that designers in Silicon Valley use SciFi as inspiration for the creation of new technology.
Although there is a practical perspective to AI, there are also psychological consequences to this growing part of our digital technology. Some of these consequences are already self-evident; others are still to emerge.
Psychological Aspects of AI
- AI has the potential to reduce the value of human-based work including, but not limited to, journalism, administration and creative design. But it also has the ability to enhance productivity in various fields
- A lot of ink has been spilt over how AI will steal jobs. But history is littered with both job losses and new job creation arising from technological inventions, e.g. the tractor, the printing press and online banking – this doesn’t seem fundamentally different
- Scare-mongering by the media (and also by high-profile tech giants) can cause unnecessary moral panic that can result in fear-based passivity, rather than a proactive focus on how to train for future employment
- But, humans are extraordinarily adaptive and have the ability to learn new skill sets and find new career paths through these innovations
- Maybe it is time to change the narrative around AI to be more about mitigating the harms and building future-focused AI skillset resilience
November 2023 AI Safety Summit
The psychological implications aside, there is still enough concern about the practical elements of AI that an AI Safety Summit took place at Bletchley Park in the UK on 1-2 November 2023 to talk through how to manage and regulate AI going forward.
Elon Musk has often warned about the dangers of AI. He spoke to British Prime Minister Rishi Sunak at the Safety Summit about the dangers, positives and potential restrictions that need to be placed on those who build AI.
The summit is a great start in the process of better understanding the human impact of AI. A few highlights gleaned from The Evening Standard articles reporting on Elon Musk’s comments both before and during the Summit are:
More intelligent than humans
- AI and machine learning have the ability to be more intelligent than the smartest humans
- It is not clear whether AI can be controlled, so there needs to be an overriding ‘off switch’
AI will operate according to its programming
- AI is biased in that its foundational programming will be in line with the belief systems, biases and worldviews of those who write the programs. If these are implicitly in contrast with the greater good of humanity, the outcomes could become counter-productive.
- The greatest risk lies in AI becoming accidentally ‘anti-human’ – which is what we should be most concerned about
- While Elon Musk regularly mentions the threat of extinction from AI, some suggest the threat is more akin to a pandemic or nuclear war, and others suggest that the threat is minimal – although it is impossible to predict
- He also mentioned risks such as deep fake videos, disinformation from fake bot social media accounts and humanoid robots
- The greatest risks come from ‘frontier AI’, which will probably exceed today’s AI models, or from losing control of these systems – especially around potential misuse
- Although Elon Musk does think that AI will mostly be a force for good (such as personalised tuition for children), he also suggests that AI will result in a time when there are no jobs
International regulator body
- Rishi Sunak wants to make the UK a ‘Global Hub’ for AI safety
- A neutral 3rd party regulator body should be set up to ‘observe what leading AI companies are doing and at least sound the alarm if they have concerns’
- All 28 countries will recommend an expert to join the UN-backed global panel on AI safety
The Bletchley Declaration
- All 28 countries that attended the summit have signed the ‘Bletchley Declaration’, which sets out a shared understanding of the risks of AI and pledges to develop AI models that are safe and responsible
- These governments and major AI companies (OpenAI and Google DeepMind) will work together to research and manage AI risks, including external safety testing of new AI models that may pose a risk to society or national security
- It resolves to ‘work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe’
- The document is not legally binding, but rather a voluntary agreement
You can read more about the reporting from the global summit, written by Martyn Landi, Gwyn Wright and Mary-Ann Russon of The Evening Standard, via the links below.