
Dr Marshall S. Rich | Ep 12

Watch or listen:
Digital Mental Health

In this episode, we explore the world of Forensic CyberPsychology.

This involves understanding cybercriminal behaviour, and the cognitive biases and decision-making vulnerabilities of cyber attackers.

Connect with the guests
Dr Marshall S. Rich

Marshall is a Forensic CyberPsychologist who holds a Doctorate in Business Administration (Management Information Systems and Services) and a PhD in Forensic Cyberpsychology, with a dissertation on identifying adversarial behaviour patterns in cyber-attacks.

He currently works for the United States Institute of Peace as a Senior Expert in Cybersecurity.

Connect on Social Media:

In today’s episode of Confessions of a CyberPsychologist, I chat with Marshall S. Rich (https://www.linkedin.com/in/marshall-rich/) who is a Senior Expert in Cyber Security working for the United States Institute of Peace.

We talk about Forensic CyberPsychology and focus on:

00:39 What got him into CyberPsychology and how his background in the military has shaped his current forensic focus on predicting and counteracting cyber adversary behaviour.

03:58 How the blend of a military background and academic research makes him so uniquely qualified to develop bespoke awareness and training programmes in adversarial cyber threats.

10:56 The topline insights from his recent presentation on CyberPsychological warfare.

13:29 The work he does at the US Institute of Peace – including the hosting of high-level events and the managing of the potential cyber-threats that are aimed either at the dignitaries attending or at the event itself.

20:29 What Marshall focused on for his PhD – specifically understanding cyberthreats by understanding cognitive bias and decision-making vulnerabilities of cyber attackers.

25:03 The difference between Forensic CyberPsychology and the CyberPsychology of Cyber Security.

27:24 Merging the practical elements of the military, law enforcement and CyberPsychology.

30:36 What the ‘Cyber Forensics Behavioural Analysis Model’ is and how it is used to predict around 70% of future cybercriminal behaviour.

37:23 The usefulness, restrictions and theory of a HackBot for international Forensic CyberPsychology.

41:47 The need for cybersecurity practitioners to consider both the technical defences and the psychological/human side of cybersecurity – including cognitive bias and attacker deception techniques training.

44:02 Marshall’s greatest concern for the future in the world of Forensics – including the rapidly evolving human behaviour tactics of cyber criminals. As technology advances, so do their methods and their exploitation of human behaviour.

45:49 Advice on how to get into the field of Forensic CyberPsychology.

Marshall’s experience spans the military, academia and work with government organisations.

If you are interested in the psychology of cybercriminals, you will probably find this episode interesting.

Dr Marshall S. Rich's Research
Other podcasts

Dr Paul Marsden | Ep 10

Watch or listen:
The Human-AI Interaction

In this episode, we explore AI, Positive Psychology, Digital Wellbeing and technology.

Connect with the guests
Dr Paul Marsden

Paul is a Chartered psychologist specialising in consumer behaviour, wellbeing and technology. He is a university lecturer at UAL and a consultant consumer psychologist with Brand Genetics.

Paul believes that the biggest digital disruption is the one happening in our heads, as technology transforms our identity, experiences and relationships. He helps businesses understand how this digital disruption influences consumer needs, motivations and behaviour.

He lectures on consumer trends and consumer psychology at the business school of the London College of Fashion, where he also researches the phenomenon of “enclothed cognition” – the psychological impact of our clothes on how we think.

He is a chartered psychologist, chartered by the British Psychological Society with a PhD focused on online psychological research techniques. He co-founded Brainjuicer PLC (now System1 Group), a research company that uses online psychological techniques to understand consumers.

Connect on Social Media:

In today’s episode of Confessions of a CyberPsychologist, I chat with Dr Paul Marsden – a consumer psychologist who specialises in consumer behaviour, wellbeing and technology.

We talk about:

00:35 What got Paul into CyberPsychology and the psychology of online behaviour.

02:58 How technology and CyberPsychology relates to positive psychology (Autonomy, Relatedness and Competence) and consumer behaviour.

09:34 The ‘_iAmGenZ’ documentary Paul appears in and the correlation between technology use and youth mental wellbeing.

16:57 Developing AI Large Language Model (LLM) assistants being used in financial institutions and research companies to help people be more effective and productive, and to help with motivational analysis to identify people’s underlying or hidden motivations.  

24:07 The future of AI and mental health therapy – the democratisation and personalisation of otherwise expensive therapy and the relationship we build with AI in therapy.

28:25 The positive impact of AI and technology (that appears to be intelligent) in the future, how its use becomes an extension of ourselves to do greater things than we could do without it, and how AI can help us with interacting with other humans.

41:24 How, as CyberPsychologists, we should future-proof our careers by focusing primarily on applications around the AI-human interaction, rather than a general negative focus on disorder and dysfunction around digital technology use.

51:26 The greatest concerns he has around AI in the future – especially with students (using AI as part of their education) and within business.

55:56 Advice for CyberPsychologists looking to get into AI as a career.

58:55 Focussing on finding out what makes you happy, and makes your life worth living, in accordance with The Arc of Happiness – allowing you to thrive and do things that promote wellbeing.

1:03:15 The increase in media literacy and critical thinking around click-bait and misinformation.

If you are interested in the future impact of AI on human potential and the role positive psychology plays in technology and AI, you will probably find this episode interesting.

Acronyms:

AI: Artificial Intelligence

LLM: Large Language Model

Other podcasts

Todd Fletcher and Dr Chris Fullwood | Ep 8

Watch or listen:
The Psychology of CyberSecurity Professionals.

Why do cybersecurity professionals either blatantly or subconsciously disregard standard cybersecurity protocols? 

In this episode, we explore the psychology of cybersecurity and the impact of personality and cognitive bias on our ability to resist cyber-attacks.

Connect with the guests

In today’s episode of Confessions of a CyberPsychologist, I chat with Todd Fletcher, who is a PhD research student focussing on the psychology of cybersecurity professionals, and Dr Chris Fullwood, who is a senior lecturer in psychology at Birmingham City University and one of Todd’s PhD supervisors.

We talk about the psychology of cybersecurity professionals and why they may intentionally or unintentionally disregard sound cybersecurity practices. We focus on:

01:00 Todd’s background in digital technology and how he became interested in studying CyberPsychology.

06:49 The difference between CyberPsychology and Cybersecurity.

13:00 Todd’s PhD research on the behavioural influences of Cybersecurity professionals.

20:21 The ‘Big 5’ personality traits, how they can either help or hinder a cybersecurity professional in an organisation, and whether there are common traits amongst those more likely to become cyber victims.

35:23 The Security Acceptance Model and its practical application in organisational cybersecurity.

37:33 The recent DefCon conference in Las Vegas and the research Todd was doing at the conference.

42:49 The difference between White, Grey and Black Hat hackers.

47:20 What parents should know about teen amateur hacking behaviour.

01:02:43 The future of cybersecurity amongst professionals and the general tech user.

1:08:55 Advice for those starting out in cybersecurity, and

1:15:18 Managing good mental health practices amongst cybersecurity professionals.

Todd’s experience is in the digital realm within business. Having spent time implementing cybersecurity practices, he became curious about the people within those cyber processes, which led him to the psychology of people within cybersecurity.

If you are a cybersecurity professional, manage a cybersecurity team, or are interested in cybersecurity as a career, this is an episode to watch.

Other podcasts
Reining in artificial intelligence

The global attempt to rein in Artificial Intelligence

What was considered Artificial Intelligence science fiction only a decade or so ago has become a reality, or at least a potential certainty. It has been suggested that designers in Silicon Valley use SciFi as inspiration for the creation of new technology.

Although there is a practical perspective to AI, there are also psychological consequences to this growing part of our digital technology. Some of these consequences are already self-evident, others are still to emerge. 

Psychological Aspects of AI 

  • AI has the potential to reduce the value of human-based work including, but not limited to: journalism, administration and creative design. But, it also has the ability to enhance or increase productivity in various fields
  • A lot of ink has been spilt over how AI will steal so many jobs. But, history is littered with both job losses and new job creation that come from technological inventions e.g. the tractor, the printing press, and online banking – this doesn’t seem exponentially different
  • Scare-mongering by the media (and also by high-profile tech giants) can cause unnecessary moral panic that can result in fear-based passivity, rather than a proactive focus on how to train for future employment 
  • But, humans are extraordinarily adaptive and have the ability to learn new skill sets and find new career paths through these innovations
  • Maybe it is time to change the narrative around AI to be more about mitigating the harms and building future-focused AI skillset resilience

November 2023 AI Safety Summit

The psychological implications aside, there is still enough concern about the practical elements of AI that an AI Safety Summit took place at the UK’s Bletchley Park on 1-2 November 2023 to talk through how to manage and regulate AI going forward.

Elon Musk has often warned about the dangers of AI. He spoke to British Prime Minister Rishi Sunak at the Safety Summit about the dangers, positives and potential restrictions that need to be placed on those who build AI.

The summit is a great start in this process of better understanding the human impact of AI. A few highlights, gleaned from The Evening Standard articles reporting on Elon Musk’s comments both before and during the Summit, are:

More intelligent than humans

  • AI and machine learning have the ability to be more intelligent than the smartest humans
  • It is not clear if AI can be controlled, so there needs to be an overriding ‘off switch’

AI will operate according to its programming

  • AI is biased in that its foundational programming will be in line with the belief systems, biases and worldviews of those who write the programs. If these are implicitly in contrast with the greater good of humanity, the outcomes could become counter-productive.

Existential risk

  • The greatest risk lies in AI becoming accidentally ‘anti-human’ – which is what we should be most concerned about
  • While Elon Musk regularly mentions the threat of extinction from AI, others suggest the threat is more akin to a pandemic or nuclear war, and still others suggest that the threat is minimal – although it is impossible to predict
  • He also mentioned risks such as deep fake videos, disinformation from fake bot social media accounts and humanoid robots
  • The greatest risks come from ‘frontier AI’, which will probably exceed today’s AI models, or from losing control of these systems – especially around potential misuse 
  • Although Elon Musk does think that AI will mostly be a force for good (such as personalised tuition for children), he also suggests that AI will result in a time when there are no jobs

International regulator body

  • Rishi Sunak wants to make the UK a ‘Global Hub’ for AI safety
  • A neutral third-party regulatory body should be set up to ‘observe what leading AI companies are doing and at least sound the alarm if they have concerns’
  • All 28 countries will recommend an expert to join the UN-backed global panel on AI safety

The Bletchley Declaration

  • All 28 countries that attended the summit have signed the ‘Bletchley Declaration’, which sets out a shared understanding of the risks of AI and pledges to develop AI models that are safe and responsible
  • These governments and major AI companies (OpenAI and Google DeepMind) will work together to research and manage AI risks, including external safety testing of new AI models that may pose a risk to society and national security
  • It resolves to ‘work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe’
  • The document is not legally binding, but rather a voluntary agreement

Read more: 

You can read more about the reporting from the global summit, written by Martyn Landi, Gwyn Wright and Mary-Ann Russon of The Evening Standard, from the below links.

Elon Musk says AI one of the ‘biggest threats’ to humanity.

AI Safety: What have we learned?

Elon Musk: AI could pose existential risk if it becomes ‘anti-human’.

Elon Musk tells Rishi Sunak: AI ‘the most destructive force in history’

Google's Triple Threat

Search and select – big tech nudges

In July 2019, Robert Epstein (PhD) testified before Congress in relation to research he has been conducting since 2012 on Google – specifically on its power to suppress content and manipulate the thoughts and behaviour of those who use its search engine.

This PDF is an updated and expanded version of his testimony, in which he lays out the results of his research and suggests not disbanding the search engine or making it a public entity, but rather ‘encouraging’ the tech giant to share its index with other entities (i.e. for it to become a public commons) while still retaining ownership and control of it. Google already does this, to some degree, but Dr Epstein argues that making the index available to all will encourage greater competition without weakening the infrastructure of the work already done.

A previously released article from Bloomberg Businessweek (from page 37 of the PDF) provides a shorter summary of his arguments.

Although a lot of the research concerns the search engine’s ability to influence USA elections, a few highlights of his main points are worth noting:

  • ‘The rise of the internet has given these companies unprecedented power to control public policy, to swing elections, to brainwash our children, to censor content, to track our every move, to tear societies apart, to alter the human mind, and even to reengineer humanity’ – to reiterate this point, he links to a 2018 The Verge article on a leaked 2016 Google video to top execs highlighting their desire to resequence human behaviour to better align with Google’s values. 
  • ‘Google’s corporate culture revolves around the idea that we’re here to create a better world, where “better” is defined by the prevailing company values. If you doubt that, check out the leaked PowerPoint presentation, “The Good Censor“… the algorithms determine what more than three billion people around the world can or cannot see.’
  • ‘If you have been using the internet for a decade or more, Google has collected an equivalent of about 3 million pages of information about you…Google services are not free. We pay for them with our freedom’
  • An existing body of research suggests that these new, often invisible, ways of changing people’s thinking and behaviour are likely to have a much bigger impact on children than on adults. And who is more attached to new tech devices than anyone else? Our children, who are often unattended while immersed in social media, playing games or communicating with other people on their computers or mobile devices.
  • ‘In January 1961, President Dwight D. Eisenhower warned about the possible rise of a “technological elite” that could control public policy without people’s awareness…The elite now exists and they have more power than you think. Democracy as originally conceived cannot survive Big Tech as currently empowered. It is up to our leaders – and to every one of us as individuals – to determine where we go from there.’
Locus of Control and CyberSecurity

What role does job control play in adherence to Cyber Security?

'Exploring the Role of Work Identity and Work Locus of Control in Information Security Awareness'.

Extracts and summary of the research by: Dr Lee Hadlington, Dr Masa Popovac, Prof. Helge Janicke, Dr Iryna Yevseyeva, Dr Kevin Jones (2019)

In her summary of the work, Dr Popovac describes the research as exploring ‘the adherence to organisational information security and the role of work-related and individual factors such as individuals’ perceived control within the workplace, their commitment to current work identity, and the extent to which they are reconsidering commitment to work.’

Key quotes from the research:

  • ‘Cyber security is not just about technology. Almost all successful cyber attacks have a contributing human factor’ (a direct extract from the UK National Cybersecurity Strategy 2016-2021, p. 38)
  • ‘for the most part, technology cannot be the only solution to issues related to organisational cybersecurity…employee[s] (the human factor), can present a paradoxical element into the fight’
  • ‘On the one hand, employees can be a critical asset in the fight against cybersecurity breaches, and can act to deny malicious attempts to access sensitive company data. On the other hand, employees can be the ‘weakest link’…in the cybersecurity system; they are not logical, prone to misunderstanding and confusion, act on impulse and want to get their jobs done’

Summary of the research: 

This research focuses on what factors, outside of personality type, play into employee engagement with cyber security in the workplace. The main aim of the research was to understand:

  • to what extent the ability to control one’s job function affects the taking of responsibility for cyber security
  • whether identification with the workplace plays any role in improving cyber security amongst workers

The researchers point out that:

  • there is a difference between knowledge of the company’s information security policies and the ability of the employee to understand them.
  • there is also a potential gap in how individual attitudes and behaviour aligns with these policies. 

Previous research done in the area of cyber security has identified a number of factors associated with being more cyber-security conscious.

In contrast, those who engage in cyberloafing (engaging in non-work tech use during working hours) or have higher levels of internet addiction were less likely to be cyber-security conscious. The assumption was that these workers believed the higher levels of company security mitigated online risk when accessing specific materials and activities. Another assumption is that those who have little regard for the company they work for, or who feel they have limited control over their job, are also more likely to have a lower interest in adhering to internet security protocols.

Employees who have a higher internal locus of control are more likely to have lower stress levels, feel more in control of their work and have greater job satisfaction. Those higher in external locus of control feel they have little control over their work, experience higher levels of stress, and have lower job/company commitment – making them more likely to engage in counter-productive work behaviours, often to rekindle a sense of self-control over their work or potentially as an active attempt to harm the company.

Those who feel less committed to their work may be less prone or may not see the value in engaging in cyber security behaviour.

The findings of the research are: 

  • Those with a higher internal locus of control are more likely to see their actions as a way to protect both themselves and the company from cyber attacks
  • Workers with a higher external locus of control perceive themselves to have minimal control over their work and workplace, assume that both they and the company are vulnerable to attacks whatever action they do or don’t take, and so see little value in following processes relating to information security.
  • Those who have a strong work identity, and experience a sense of belonging in their workplace, are more adherent to cyber security policies
  • In contrast, those with a lower level of work identity and/or looking for a new role are less compliant. 
  • Older workers and women were also found to be more likely to engage in higher levels of information security compliance – confirming previous research.
  • Those who have a clear understanding of the formal company rules around information security are more likely to follow them. 

Definitions: 

Locus of control: ‘an individual’s expectancy related to how rewards or aspects of life outcomes are controlled on the basis of the actions of the individual (internally) or as a result of forces outside the control of the individual (externality)’.

Organisational commitment: ‘the level of attachment an employee has with their workplace’.

Work identity: ‘the strength of an individual’s identification with their work, and not directly their workplace or organisation’.

This is not an open-access article; the full original will need to be purchased to read.

Google's Bias

Merely being able to see the bias, doesn’t protect you from the bias

The title of this post is a direct quote taken from an interview with Dr. Robert Epstein, a Senior Research Psychologist on The Epoch Times TV channel.  

If you were ever wondering how much our opinions are being influenced by ‘Big Tech’, this is worth watching. Dr Epstein has spent the last decade conducting scientific research on the effects of bias in search engines (particularly Google). An interesting quote at 1:04:12 ‘merely being able to see the bias, doesn’t necessarily protect you from the bias’.

We can make changes to our own tech habits and behaviour, but we also need to change the regulations around how tech influences our behaviour. If technology changes our behaviour for our good, that is one thing; but when it changes our behaviour in socially and personally detrimental ways, we need to do something about it.

We are all at risk, no matter our age.

If you are interested, you can read more about the work done by Dr. Epstein here. 

CyberSecurity and CyberPsych

When Cyber Security meets CyberPsychology

Cyber Security is not the same as CyberPsychology. It is similar to comparing someone who helps you physically set up home security with someone who seeks to understand why you don’t turn that security on when you leave the house.

In a recent webinar, one of our Cyber Experts, Dr John Blythe, joins three of the collaborators on the latest white paper on Human Factors in Cyber Security. The video is a playback of the webinar.

If you want to access a copy of the white paper to read, you can find it on the Chartered Institute of Ergonomics and Human Factors website.

The webinar playback showcases a recorded video summary of the white paper and also contains a Q&A session with the three panellists. It provides valuable insight, for those involved in Cyber Security within organisations, into the human factors that have affected, and continue to affect, companies in a remote and hybrid working environment.

 

Personality Type and Cyber Security

Does our personality type make us more or less susceptible to phishing and online scams?

According to academic research in this area, the short answer is ‘yes’. The majority of research has used The Big 5 Personality Types to identify different types of Cyber Security behaviour. Although research in this area has slightly conflicting results, there are some general findings that are interesting to note. These are outlined briefly below.

Openness to Experiences

Those who score higher on this trait have a greater ability to adjust their viewpoints and are therefore better able to review information in emails on the merit of the content itself, rather than on preconceived ideas about either the content or the sender. They are, therefore, better able to identify phishing content. However, they are more likely to reveal personal information about themselves on social media and in online communication.

Extroverts

Extroverts are more sociable and more likely to share information with others about phishing scams. They are also more likely to share information about themselves, and were more likely to have been bored during lockdowns, craving social interaction – so they are potentially more likely to click on links to help alleviate boredom.

Agreeableness

Those high in agreeableness traits are more inclined to want to please others and try to avoid people disliking them. They are, therefore, more susceptible to phishing attacks. If an email looks like it comes from an internal department or a supplier/customer, they may want to try to be helpful and/or ‘fix’ things.

Neuroticism

People who have more of this personality type have an inherent need to believe that others are telling the truth. They also don’t like to upset people, so are likely to fall for phishing scams.

Conscientiousness

Those who display more of this trait are the least likely to fall for phishing scams. They tend to read content more critically and are more likely to follow training guidelines.

Generic cyber security training and education is vital within any organisation, but to help minimise susceptibility to phishing attacks, training should also cover how each personality type can be affected differently. This may make individual workers more vigilant towards the phishing attacks they are most susceptible to, based on their dominant personality traits.

Take the Big 5 Personality Test and read a brief explanation of each trait:

If you want to take the Big 5 Personality Test to find out more about where you fit within each range, you can find a link below.

Take the test.

Read more about The Big 5 personality types.

A few notes about Personality based Psychometric Tests:

  • Although there are a number of psychometric tests available on the market, many are complicated to decipher and/or only commercially available. Researchers, therefore, tend to use The Big 5 personality psychometric test as an academic standard for research.
  • Personality tests can indicate a preference for specific behaviour but should not be used to stigmatise people and categorise them into neat boxes. In all things psychological and behavioural, we are all on a spectrum, and display a unique combination of characteristics to a greater or lesser degree.
  • Personality tests are self-completion questionnaires that people fill in based on how they view their own behaviour. We are generally not very good at understanding our own behaviour. This means that they can give us (like any self-completion questionnaire) an indication of different behavioural types, but should be read and interpreted as such.

If you want to know more about what cybersecurity threats you may encounter, you can read ESET’s T2 2021 Cyber Threat Report.


The Cyber Effect – Mary Aiken

Published in 2017, The Cyber Effect is one of the first CyberPsychology books to be written specifically for the general public. 

A book that everyone should read, especially parents. It will open your eyes to the potential opportunities and dangers that exist online.

About Mary Aiken:

Professor Mary Aiken is a Professor in Forensic Cyberpsychology at UEL. Go directly to her website or follow her on Twitter.