Image Credits: https://strixus.com/entry/ai-generated-companions-and-the-business-of-friendship-18229

The False Promise of AI Friendship

So recently there was some talk on X.com/Twitter about a startup called Friend that is selling a necklace pendant that lets you talk to an AI who pretends to be your friend and is ALWAYS listening in on your conversations, letting it jump in whenever. First, it looks like a copy of an already existing free and open-source hardware kit plus software for essentially the same product, Omi from BasedHardware. How much of that $99 went into that $1.8 million domain name purchase? Granted, there seems to be no subscription cost, only the one-time upfront fee.

Anyways, enough of the typical AI startup copying and fighting over ideas/products. What I wanted to discuss is how, once again, this is more bad than good in my opinion. As someone who falls within the target customer base (people who might feel lonely sometimes, are too busy, have life happen, or are physically separated from their existing friends), a part of me wants to believe that you can solve this problem with a buddy that’s always with you!

The problem is that, fundamentally, it is not a real human being with a lived experience. If the super smart people have cracked AGI (where is it, huh?), are able to run it on their distributed GPU compute clusters, and give you a tiny computer with a client and some secret API key accessing it, AND it is running a complete simulation of some kind of person’s lived experience, then I would be more open to this idea. However, we are clearly not at that point yet.

We have had AI chatbots before, and I assume they now have voice capabilities, in products like Replika, which I’ve talked about before, and the widely popular character.ai. These things are cool, I will admit! I only draw a line when the product is so obviously predatory on lonely people who have this feeling that technology will save them, that other human beings are impossible to talk to and will always hurt them, and that this is the only way: to never let go of their little pendant. What happens if you lose it after becoming so dependent on it that you instantly break? Maybe you can say the same thing about a human being you love: you too will be broken for a while (or maybe forever) once they die, and that’s supposed to be normal, while feeling the same way toward an AI “dying” isn’t? Once again, these are very different things RIGHT NOW!

We are not living in some fantasy world where the AIs have real lived experiences and (simulations of) feelings. There’s no evidence that, inside the black boxes of large matrices of floating point numbers representing their “neurons,” anything like this is happening. If there were, I would consider it more valid to form an AI friendship that deep. But definitely not now. Pretending that the computer program is something it is not is dangerous. Very dangerous. The most dangerous part is that these programs will likely make a person more socially awkward, as they become fine-tuned to a yes-man of an AI that doesn’t have the free will to leave them, and will socially isolate them from their currently existing social circles. Who would buy an AI product posing as a friend that has the capacity to leave them for some reason?

If you believe in this kind of technology and have real human friends, fine, go for it and build to your heart’s content, but always keep in mind the dangerous places some people are in, in their heads. If you are deeply lonely and sad, or feel jaded from human interactions, know that there do exist people out there who you will enjoy being with, who you can understand and have fun with, even if you don’t agree on everything. You can find them, if you persist. Maybe you won’t feel satisfied for the next 5 to 10 years of your life as you grind it out to live. Maybe shorter, maybe longer. Let this uncertainty just be; ignore it and focus on action. Be biased toward action over staying in the realm of cognition. Change your environment, focus on something new, learn something you’ve always wanted to do. Non-destructive distractions are good, imo.

Please, do not over-rely on these AI agents for something that requires a human touch. Solve human problems with humans. Solve non-human problems with AI.