Human Mental Health and Artificial Intelligence

Written by turbulence | Published 2024/03/18
Tech Story Tags: artificial-intelligence | ai | mental-health | technology-and-mental-health | bias | cognitive-bias | ai-and-mental-health | ai-and-anxiety

TL;DR: This article discusses ethical principles as a way to reduce the fallacies of fear-based thinking related to artificial intelligence.

Objectives

  1. To identify more clearly the emotions we are feeling about AI
  2. To learn better ways of dealing with those emotions
  3. To examine implicit bias in ourselves and also in AI
  4. To review ethical principles as a possible model for rethinking human involvement with AI

Why should we care?

  • Not addressing human mental health concerns can decrease our capacity for resilience
  • Having strong emotions about AI might negatively affect decisions made about AI’s use

Humans are complex and have complex, nuanced emotions. Psychologist Paul Ekman studied human emotions and concluded that humans have 6 basic emotions. They are like the primary colors of a painter's palette: fear, anger, joy, sadness, disgust, and surprise.

If you are feeling any anxiety about AI, that’s an emotion that could be related to fear.

Anxiety is fear of the future.

Our capacity for sensing and reacting to the environment with our emotions comes from our limbic system.

The limbic system is made up of structures that form a complex network for controlling emotion. This part of the brain coordinates our responses to stress, emotion, trauma, and fear, and is sometimes nicknamed the "lizard brain."

In 2002, the psychologist Daniel Kahneman won the Nobel Prize in economics for his research on human judgment and decision-making. He described two types of thinking patterns, which he called System 1 thinking and System 2 thinking.

  • System 1 thinking is automatic, fast, and error-prone
  • System 2 thinking is deliberate, slow, and reliable

When we think in a fear-motivated way, we trigger our System 1 thinking and, with it, our capacity to make errors in judgment. Those errors in judgment will be based on our implicit biases.

Implicit biases are our prejudices, preconceived beliefs, and values ingrained in childhood. They are things we have been taught from a very early age, often in order to "keep us safe." They may be part of our core beliefs about ourselves.

These implicit biases might be triggered by our System 1 thinking and cause us to act in ways that are negative, prejudicial, and fearful. We may do this without thinking, on automatic impulse.

The danger in thinking fearfully about AI is that we may fall into thinking fallacies:

  • Using System 1 thinking to make quick, fear-based judgments about AI, and in the process not making good decisions about its use in society

  • Using System 1 thinking when teaching AI new skills, thereby inserting prejudices into AI that can't be easily removed

  • Not realizing we have implicit biases in the first place, and not realizing that we embed those biases into everything we do, especially when we are acting in a fear-based way

  • Not realizing that, though we might be able to identify our individual implicit biases through testing, learning, and acting differently, if we teach AI our implicit biases we may not be able to remove them. AI does not necessarily have the consciousness to understand that it is being biased (see the sketch after this list).

  • We might exacerbate the trauma we have just felt collectively as a global community because of the COVID pandemic by being afraid of AI. The anxiety we feel will keep us from making good decisions about the use of this new technology.
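To make the concern about teaching AI our biases concrete, here is a minimal, hypothetical Python sketch. The "hiring" scenario, the feature names, and the numbers are invented for illustration and are not from this article; the sketch shows how a model trained on historically biased decisions can quietly reproduce that bias, and how a simple audit of predicted outcomes per group can surface it.

```python
# Hypothetical illustration: a model trained on biased historical labels
# learns the bias, even though the underlying "qualification" is identical
# across groups by construction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two groups with identical qualification distributions.
group = rng.integers(0, 2, size=n)
qualification = rng.normal(0.0, 1.0, size=n)

# Historical decisions carry an implicit bias: group 1 was penalized.
noise = rng.normal(0.0, 0.5, size=n)
hired = (qualification - 1.0 * group + noise) > 0

# Train on the biased history, with group membership available as a feature.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# Audit: compare predicted hiring rates for each group.
preds = model.predict(X)
for g in (0, 1):
    rate = preds[group == g].mean()
    print(f"group {g}: predicted hiring rate = {rate:.1%}")
```

Even though qualification was identical for both groups by construction, the model's predictions favor one group: the prejudice now lives in the trained weights rather than in any single visible rule, which is exactly what makes it hard to remove later.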

To reduce these fallacies in our thinking, we might consider shifting to System 2 thinking about AI. We could be deliberate, more conscious, and more alert in our management and implementation of AI.

System 2 thinking brings us back to basics. Though there are many possible models for System 2 thinking, ethical principles can be a helpful one.

Here we revisit ethical principles to more deliberately examine AI.

The fear we have about AI can be managed through deliberate, conscious, mindful thought and action, which is the essence of System 2 thinking.

What are ethics, anyway? They are the moral principles that govern a person's behavior or the conduct of an activity.

Described here are 7 basic ethical principles and 7 principles of medical ethics that we can consider as an example framework.

These are the 7 basic ethical principles presented here:

  1. Integrity - consistency of actions and values, methods and principles

    So that what you believe and what you do are in harmony

  2. Respect - acknowledging the inherent value of all individuals and treating all with dignity

    Honoring others' beliefs even if they are different from your own

  3. Responsibility - our awareness that our actions have consequences that we must accept, whether positive or negative. Included in this is the idea of accountability, which means we should answer for our actions, especially to those affected by them. Being a good steward is an example of this.

  4. Fairness - ethical decisions must be unbiased and treat everyone equally regardless of status, position, or personal attributes

  5. Compassion - empathy and understanding towards others is important even if you disagree with them. We need to consider others' feelings when making decisions. All stakeholders should have an equal say. Remembering always: there is a human element in every decision we make

  6. Courage - doing what is right even when it is risky or unpopular. It means standing up for what you believe in

  7. Wisdom - making decisions based on being informed on the issues and allowing oneself to be guided by all the information and facts, while also taking stakeholders into account. "Balancing head and heart in decisions."

These are the 7 principles of medical ethics described here. They are included because they can help support human mental health, which has been affected by the emergence of AI.

  1. Non-maleficence - This is the idea that medical providers should do no harm.

  2. Beneficence - This is the idea that medical providers should go beyond doing no harm to actively promote others' welfare.

  3. Health maximization - This principle calls for an environment that maximizes health, and opportunities for health, for the general public.

  4. Efficiency - This principle encourages using scarce medical resources efficiently.

  5. Respect for autonomy - This is the patients' right to determine what will happen to them: the ability to make their own medical choices.

  6. Justice - Justice includes equity for all people, so that everyone can receive health care of the same quality.

  7. Proportionality - This principle involves weighing individual needs against the needs of the greater good.

These are some ideas we could focus on when talking about AI. We could use System 2 thinking to make these decisions about AI with more clarity and deliberation. Though there are other models that could encourage systematic thinking and reduce fear, one approach is to use an ethical guidance model, as sketched below.
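As one illustration of what a deliberate, System 2 review might look like in practice, here is a short, hypothetical Python sketch of an ethics checklist for an AI decision. The class, its methods, and the example decision are invented for this sketch; only the principle names come from the list above.

```python
# Hypothetical sketch: encode the 7 basic ethical principles as a checklist
# so that a review of an AI decision considers each one deliberately.
from dataclasses import dataclass, field

PRINCIPLES = [
    "Integrity", "Respect", "Responsibility", "Fairness",
    "Compassion", "Courage", "Wisdom",
]

@dataclass
class EthicsReview:
    decision: str
    notes: dict = field(default_factory=dict)  # principle -> reviewer's note

    def record(self, principle: str, note: str) -> None:
        if principle not in PRINCIPLES:
            raise ValueError(f"Unknown principle: {principle}")
        self.notes[principle] = note

    def unanswered(self) -> list:
        """Principles that have not yet been deliberately considered."""
        return [p for p in PRINCIPLES if p not in self.notes]

# Example use (the decision and notes are invented):
review = EthicsReview(decision="Deploy an AI triage chatbot")
review.record("Fairness", "Outcomes audited across patient demographics")
review.record("Responsibility", "A named owner is accountable for model errors")
print("Still to consider deliberately:", review.unanswered())
```

The point is simply that the review is not finished until every principle has been considered on purpose, rather than skipped under time pressure or fear.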

References:

  1. Schröder-Bäck P, et al. Teaching seven principles for public health ethics: towards a curriculum for a short course on ethics in public health programmes. BMC Medical Ethics 2014, 15:73. http://www.biomedcentral.com/1472-6939/15/73
  2. https://axies.digital/ethical-decision-making/


Written by turbulence | Multipotentialite reader and writer. Visit my website at: https://amyshah.live/
Published by HackerNoon on 2024/03/18