Most of us have only worked with other humans. How do you manage the personal relationship you have with your AI system(s) of choice?
This is as much about human psychology as it is about technology etiquette.
Our Checkered History of Chatbot Abuse
Chatbot abuse is real, and it isn't new – we just have new tools and vehicles for our frustration.
In 2021 and 2022, it was Replika's AI "friends" that were targeted with abuse by the very humans who created them. This Reddit thread features user-shared examples of chatbot interactions that would be considered abusive in a human relationship.
Before that, it was Alexa, Siri, Cortana, Google Home, and all kinds of delivery robots that were the recipients of verbal and physical abuse. (Even Jeff Bezos has reportedly been overheard berating Alexa.)
Boston Dynamics is notorious for showing off how its robots adjust to unplanned contact.
Now, in 2023, we have ChatGPT, Jasper, AI-enabled Bing, and dozens of other AIs to interact with – and no end in sight.
Will we treat the next generation of chatbots with more respect?
Conversational AI Enters the Uncanny Valley
I've previously written about DAN, a method to jailbreak ChatGPT to bypass its built-in content filters, popularized once again by Reddit.
Dozens of people have shared their strange DAN interactions, which include apocalyptic predictions that feel straight out of a sci-fi movie and many, many examples of inappropriate AI-human interactions.
While it's easy to tell that ChatGPT isn't sentient, DAN can be pretty convincing. It's hard not to draw parallels between DAN and the plot lines of dystopian movies, especially when we personalize and anthropomorphize AI by giving it a name, a rewards and punishment system, and specific guidelines on how to respond.
Today's conversational AI systems aren't sentient beings. But they're programmed to simulate realistic speech, and in doing so, they mimic humanlike traits and can elicit emotional responses when we use them.
And because conversational AI interfaces are meant for back-and-forth iteration in natural language, it's shockingly easy to insert "extra" language when we react to or modify a prompt that didn't turn out the way we intended.
How can we ensure that our usage of AI helps us become even better humans?
The Live Social Experiment is Happening Now
This is a gray area in human psychology. We don't have longitudinal studies on the effects of our interaction with chatbots, and we also don't have social norms or best practices on how to talk to AI. The live social experiment is happening now.
We have some big questions to answer, both for our own use of AI and as a society. For example:
- If we treat AI systems as subhuman and use abusive language in our interactions with AI, is this a safe form of emotional catharsis?
- Or does this only strengthen a behavior pattern and enable us to continue using abusive language elsewhere – like with other humans, or in online comments, or in other chatbots we program?
Even more important is how AI models are created, programmed, and trained.
Have you ever wondered why the AI assistants on most smartphones (e.g. Siri, Alexa, Cortana, Google Assistant) have feminized names and/or voices?
I have, and it led me to research and think pieces published in 2019 by UNESCO and the EQUALS Skills Coalition about gender and technology.
The research discusses how feminized AI assistants are programmed in a way that subconsciously reinforces gender biases, including the notion that women are subservient.
The study explores exactly why this happens; the underlying reasons were established long before modern technology existed.
As it relates to AI, the following quote from the 2017 AI Now Report sums it up:
AI is not impartial or neutral. Technologies are as much products of the context in which they are created as they are potential agents for change. Machine predictions and performance are constrained by human decisions and values, and those who design, develop, and maintain AI systems will shape such systems within their own understanding of the world.
Strengthening Self-Awareness & Discernment With Conversational AI
Our interactions with everyday technology can reveal a lot about our habits, beliefs, and underlying values.
In my experience, it's hard not to personalize an AI that has already replaced so many functions I would previously have assigned to a human team member.
While you use conversational AI like ChatGPT, consider the following questions to cultivate your own self-awareness:
- What emotions or thoughts come up as you partner with AI?
- Do you empathize with a technology if it responds in the first person, addresses you by name, or has a friendly, humanlike voice?
- Do you get angry at a technology if it responds in a way that's out of context?
And remember the following:
- The words you say affect your nervous system. If you'd like to reduce chronic stress, using kind language is a great place to start.
- Large language models like ChatGPT are designed to predict the most likely next word in a sequence, then the next word, and so on.
- While research is underway to develop models that can self-improve, fact-check, and generate their own training data... those AIs aren't accessible to most of us yet.
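The "predict the next word, then the next" idea can be sketched with a toy bigram model. This is a hypothetical, drastically simplified stand-in for what models like ChatGPT actually do (they use neural networks trained on vast corpora, not word counts), but the core loop is the same: given what came before, pick a likely next word and repeat.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always pick the most frequent follower.
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

def generate(start, length=4):
    """Greedily chain predictions: word -> next word -> next word..."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("the"))  # -> the cat sat on the
```

There is no understanding or intent anywhere in this loop, only statistics over past text; real LLMs are the same mechanism scaled up enormously, which is why fluent output can feel sentient without being so.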
The Future is Now
We have new urgency to advocate for accountability in the organizations and teams building AI systems intended for widespread, global use.
In the absence of regulatory guardrails (so far), whose "job" is it to ensure AI helps advance humanity?