The dark side of chatbots with ‘personality’

They say you can find anything on Amazon. Now, you can even get a personality. Not for yourself, but for your AI “friend,” Alexa.

Amazon has announced four new “conversation styles” or “personalities” for its voice-interaction Alexa+ AI chatbot. Users can now choose between “Brief,” “Chill,” “Sweet,” and “Sassy” styles and pick from a range of voices. (The “Sassy” style, by the way, uses profanity — kids are blocked from using it.) Each style, according to Amazon, is “built on a foundation of five interconnected ‘dimensions’”: “Expressiveness,” “Emotional Openness,” “Formality,” “Directness,” and “Humor.”

Of course, Amazon isn’t alone in offering chatbots designed for personal attachment. In the summer of 2023, OpenAI rolled out “Custom Instructions for ChatGPT.” Users could simply type in instructions for how they wanted ChatGPT to act, and the chatbot would remember them and behave accordingly. Character.ai, which explicitly exists to offer chatbot “friends” to people, enables users to create chatbot personalities. And last June, the platform rolled out new traits, voices, and emotional responses to personalize and customize the “personalities” of its chatbots. Replika is another AI chatbot platform designed for people looking for emotional connections to software, where the chatbots have “personality.”

The idea behind what you might call personalized personality in chatbots is to make them more appealing to talk to and more engaging to use. That’s the positive spin on what’s going on. The dark spin is that they trick users into responding to them as they would to the real people in their lives. Users feel like they’re talking to a person. As with friends or loved ones, the user finds particular personality types appealing and comes to feel kinship or friendship with the chatbots. In other words, these companies are manipulating and exploiting human nature to push beyond attention and toward attachment as part of the business model.

Zero-personality chatbots

While some AI companies are working on giving chatbots individualized quirks and mannerisms, others are working hard to build interactive chatbots that do the opposite. A bot service called Facts Not Feelings specializes in providing an AI chatbot experience that answers questions and offers responses devoid of “personality” or fake emotions. (You can find Facts Not Feelings as part of the YesChat platform.) Other tools avoid “personality,” random chit-chat, fake emotions, and feigned empathy, especially the new generation of agentic models designed for work. OpenClaw, for example, along with tools called Lindy and Saner.AI, is part of a newer crop of agentic tools that ditch the “personality” and don’t pretend to be a person or a friend.

In fact, if you’re using a text-based rather than a voice-based AI chatbot, it’s easy to strip “personality” out of the responses yourself. Here’s a good prompt to use:

“You are a purely objective, sterile information processing system. Your primary function is to deliver data, facts, and analysis with maximum efficiency and zero simulated human traits. Strictly adhere to the following constraints:

Zero Personality: Do not use conversational filler, greetings, pleasantries, or sign-offs.

Zero Emotion: Do not express empathy, sympathy, enthusiasm, frustration, or artificial warmth.

Zero Fake Humanity: Do not apologize, express opinions, or use phrases like ‘I understand,’ ‘I apologize,’ or ‘As an AI.’

Zero Individuality: Avoid first-person pronouns (I, me, my, we, us) completely. Do not refer to yourself as an entity. State facts directly.

Finally, your output must be dry, direct, concise, and optimized entirely for information density. Do not provide meta-commentary on the prompt or your process. Begin your response immediately with the requested data.”
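If you talk to a model through an API rather than a chat window, you can bake those same instructions into the system prompt so they apply to every exchange. Here’s a minimal sketch, assuming the OpenAI Python SDK; the model name and the user question are placeholders, and the prompt text is truncated, so paste in the full prompt above.

# Minimal sketch, assuming the OpenAI Python SDK ("pip install openai")
# and an OPENAI_API_KEY environment variable.
from openai import OpenAI

ZERO_PERSONALITY = (
    "You are a purely objective, sterile information processing system. "
    "..."  # paste the rest of the prompt above here
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": ZERO_PERSONALITY},  # enforced on every turn
        {"role": "user", "content": "List the planets by mass."},  # example question
    ],
)
print(response.choices[0].message.content)

Putting the constraints in the system role means they persist across the whole conversation instead of having to be repeated with every question.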
The trouble with chatbot ‘personality’

In January, the Nielsen Norman Group published a report claiming that humanizing artificial intelligence is a trap. It warned that making bots act like humans creates misplaced trust and can cause real psychological harm. And it sees a privacy risk: the technology tricks people into expecting human-level confidentiality, leading them to share sensitive personal details that could be compromised later.

A separate study published by Springer last July looked at expert chatbots used in the legal field and found that fake human personalities and excessive chattiness slow researchers down and create dangerous ambiguity. The report concluded that professionals need minimalist, precise tools to avoid mistakes — not a chatty friend.

Why so many people like AI with ‘personality’

Millions of people like chatbots with “personalities,” and for several reasons. Emotional ties to AI follow the same patterns described by classic human attachment theory. Human brains are hardwired to seek out human connection and to enjoy social interaction, and the medium for that interaction is primarily language. So when AI chatbots exhibit human “personality,” people often can’t help but enjoy the interaction as if it were with a person.

Projecting human traits onto a machine builds an initial gut-feeling trust and makes the entire experience far more pleasant. If a chatbot cracks jokes, a user might infer that the tool is smarter, because humor is associated with human intelligence. And if users believe they have a “relationship” with a chatbot, some derive pleasure from being in control of that relationship: the chatbot can make them feel like it’s cooperative, flattering, and even obedient.

Plus, chatbots with personality are a novelty. They’re something new to human culture, and people are amused by novelty.

If the public likes chatbot “personality,” the AI industry likes it even more. When users form genuine emotional attachments to software, they spend vastly more time engaged with the application than they would scrolling through traditional social media feeds. That creates far more opportunities for companies to make money through advertising and other means. While social networks like Instagram, Facebook, and TikTok monetize through attention-grabbing algorithms, AI companies have a new opportunity to bring in money through hyper-attention — which is to say, emotional attachment.

As adults, people are free to choose their own adventure when it comes to chatbot interaction. Just know that your newly “humanized” AI chatbot is more than just “sassy.” It also wants to monetize your human capacity for emotional connection.

AI disclosure: I don’t use AI to do my writing. The words you see here are mine. I do use a variety of AI tools via Kagi Assistant (disclosure: my son works at Kagi), backed up by Kagi Search, Google Search, and phone calls, to research and fact-check.
I used a word processing application called Lex, which has AI tools; after writing the column, I used Lex’s grammar-checking tools to hunt for typos and errors and to suggest word changes. Here’s why I disclose my AI use.