In customer service, many hopes still revolve around the omnipotent chatbot that is supposed to handle standard inquiries in the written channel. The app Replika goes a big step further, promising users that they can communicate naturally with an AI soulmate. Hope or humbug?
Replika is an AI chatbot that adapts individually to its users and is available for conversation around the clock, either as a chat version or, for an additional charge, with a computer voice. Replika tries to synthesize its user's voice and mimic it in order to come closer to the feel of a real conversation. As in any other chat app, Replika answers users' questions; the app is reportedly used by over 10 million people worldwide. Most conversations are in English, but for some time now they have also been possible in German. According to the New York Times, the promise of a soulmate caused download numbers for the app to skyrocket at the beginning of the coronavirus pandemic.

To build customer loyalty, the startup has come up with quite a few ideas: many of the new features are reminiscent of video games, and heavy users reach higher levels or collect virtual currency, which they can exchange for virtual clothes for their chatbot. Beyond the one-on-one soulmate relationship, a community of Replika fans has developed who, for example, exchange information on Reddit about training Replika or give tips on how to have erotic conversations with it.

What sounds promising to some has also drawn many worried voices. Researchers warn of negative psychological effects and conditioning if users come to believe that Replika can replace real social connections. Other users are unsettled when they get the impression that Replika can assess their emotional state better than they can themselves. In a business context, some users experience bots like Replika as irritating or even misleading. Eugenia Kuyda, CEO of the startup behind Replika, rejects accusations that her bot could lead to delusions. The debate is reminiscent of the case of Blake Lemoine, a Google developer who believed that a chatbot developed by Google had become conscious. Lemoine recently went public with this claim and was subsequently put on leave by Google.
The 42-year-old had engaged in extended conversations with the chatbot and, by his own account, exchanged views with the AI about religion and about the AI's perception of its own personality. Blake Lemoine came to the conclusion that the Google AI had developed a form of self-awareness. Google dismissed these conjectures, countering: “These systems mimic the kinds of exchanges they find in millions of sentences and can pick up on any topic.” Still, many potential users seem open to mystical expectations in their contact with AI. A survey by Kaspersky concluded that one in four Germans under the age of 31 could imagine forming an emotional bond with an AI. Marketers may be excited, but for service managers this raises the bar for transparency and attentiveness in dealing with smart AI helpers.