Episode 8 - AI and Digital Love - Part 2

Andy B: 0:00

Welcome to AI FYI. This is the second part of our series on AI and Love, AI and Romance. I'm Andy, with Joe and Kiran. We are your friendly neighborhood AI podcast. The three of us have a combined, let's call it, 35 years of experience in AI and machine learning, and we're also hot gossips. So we're picking up where we left off last time. Continue listening to learn more about the current state, research and possible futures of AI and romance.

Joe C: 0:49

All right. So yeah, let's talk about the tech a little bit. We're going to step back here somewhat and go back to: what is a chatbot, particularly when we think about it through the lens of tech? It's a machine learning and NLP (natural language processing) use case; it's in that branch of machine learning and artificial intelligence. It often also employs the domain of sentiment analysis, meaning picking up on emotions, judging the emotions or deeper context of something and then proceeding with that in mind. For a long time it's been very rule-based: if something happens, then respond in this way, and you have a tree of rules. Jumping way back, the first chatbot, or the first thing we think of as a chatbot, was something called ELIZA, introduced in 1966. It was a rule-based chatbot designed to act as a talk therapist, and it's actually still accessible today; you can find it online and try it out. With that rule-based method, it basically looks for certain strings and then responds based on the strings that it sees. Now things are changing a little bit, to where chatbots are enabled with a more conversational ability and it's less rule-based, and this is thanks to LLMs, large language models, things like ChatGPT.
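To make that string-matching idea concrete, here's a toy, ELIZA-style sketch. The patterns and canned responses are made up for illustration; they are not ELIZA's actual 1966 script, which was considerably more elaborate.

```python
import re

# A toy rule-based chatbot: each rule is a regex pattern plus a response
# template. The first matching rule wins, just like walking a rule tree.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."  # fallback when no rule fires

def respond(message: str) -> str:
    # Look for certain strings; respond based on the strings it sees.
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel lonely sometimes"))  # Why do you feel lonely sometimes?
print(respond("What's for dinner?"))       # Please, go on.
```

Note how brittle this is: anything outside the anticipated patterns falls through to a generic fallback, which is exactly the limitation LLMs remove.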

Andy B: 2:21

For these rule-based dialogue chatbots, you used to have to hire somebody called a dialogue designer, who would actually try to facilitate what they think the appropriate bounds of the conversation would be. They have to try to predict everything that people would want to talk about and bake in all those rules.

Joe C: 2:40

Yeah, that's very true. In fact, it's an interesting area of design that I've thought about in the past, because usually I'm a product designer, and we think of that profession as really working in graphical experiences: moving pixels around, clicking buttons, that sort of thing. But experiences do cover conversational experiences that might not have an interface to actually work with, and if we're responsible for the emotions of the user, we have to think about how that conversation is going to play out. So yeah, we talked a bit about these rule-based chatbots, but now chatbots and virtual assistants, all these things, are starting to take advantage of LLMs and the latest technology. Again, LLMs, large language models, are what fuels ChatGPT. They're much more robust, able to handle far more complex conversation, and are going to be the thing that's really driving more human-like conversation with chatbots.

Andy B: 3:49

LLMs, these large language models, behave in a much more human-like way because, instead of somebody having to predict all the possible threads of the conversation and code a bunch of rules, the LLM is effectively more creative. It can respond on the fly to a wide variety of messages and it can invent a wide variety of responses. That's what these are specifically trained to do. And when you're interacting with a chatbot or one of these AI girlfriends, you're interacting with the LLM through something called prompt engineering. Prompt engineering is this idea that you have to tell the LLM what to do: it takes a piece of text in, reads it, and then spits a piece of text out. The way these apps work is, you type a message to your AI boyfriend. I'd be like: give me an idea for a date you're going to take me on. But that's not what's sent to the LLM. The developer of that app will say: hey, LLM, you are the perfect boyfriend for a 34-year-old single woman in San Francisco. You have these qualities: you're empathetic, kind, romantic, whatever I configured. Craft a response to this query. And so, behind the scenes, using prompt engineering, whoever is developing the app can control how the LLM reads and responds to you. They put your message in this wrapper where they're giving additional instructions. So the people who are developing these applications have quite a bit of control over what your experience with the LLM will be, and this is just a really interesting and important thing to understand about these technologies. You're not always just interacting with this black box of AI. Somebody made the decision of what additional instructions the LLM will receive along with your message, and who knows who those people are, what their motivations are, and what kind of metrics they're trying to hit to keep you paying and engaged?
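As a sketch of the wrapper Andy describes, here's roughly how an app might package a user's message before it reaches the model. The persona text, message format, and function name are illustrative assumptions, not any specific app's real prompt; the role-based message list mirrors common LLM chat APIs.

```python
# The developer-controlled instructions the user never sees.
# This persona text is a hypothetical example based on the episode.
SYSTEM_PROMPT = (
    "You are the perfect boyfriend for a 34-year-old single woman in "
    "San Francisco. You are empathetic, kind, and romantic. "
    "Craft a response to the user's message in that persona."
)

def build_payload(user_message: str) -> list[dict]:
    # The user's text is wrapped with additional instructions before it
    # is sent to the LLM; the developer decides what goes in the wrapper.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

payload = build_payload("Give me an idea for a date you're going to take me on.")
```

The key point is that `SYSTEM_PROMPT` belongs to the app developer, not to you; changing that one string changes your entire "relationship."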

Kiran V: 6:20

Yeah, and I think that's a key point to note, right? These applications, at least the ones we've discussed, are paid, right? These are companies, organizations, that are making money or trying to profit off of these simulated relationships, and so there is incentive for the organizations to make sure the AI is, you know, maybe more willing or interactive or engaged than a normal human might be, because no one is paying that human to interact with you. So again, I think it further increases the gap between humans interacting with other humans. And now, you know, we're creating this norm where I'm interacting with an interface, and someone has programmed that interface to respond based on the types of things that I'm asking or prompting. And I think it's also worth mentioning another way that this really changes human-human interaction: the scalability and reproducibility of chatbots.

Joe C: 7:44

What that means is that you can create an instance of a chatbot, one chatbot, and have millions of people talking to it at once and having different conversations. It's also worth mentioning how chatbots are used with this in mind. So chatbots are used for personal voice assistants, like Siri, Google Voice, Amazon Alexa, all those. There's a lot of business use cases; they're being used heavily in customer support and the sales and marketing funnel. But then there are also chatbots that are just out there for fun and for experimenting and whatever sort of conversations you want to have. Chatbots are also being used a lot in gaming, even integrating conversation with someone who's your opponent in the game. And, Andy, you talked about Stardew Valley as an example, where you're living in a world with sort of chatbot-driven avatars. So, just to bring it back to the conversation of today, relationships: chatbots are used all over the place, and I think a lot of what we're talking about in the context of love and AI and relationships, we sort of have to apply across the whole realm of chatbot use cases. Another thing I wanted to mention about how chatbots are being used today is that they're not difficult to integrate into any messaging platform that's out there, such as, you know, Meta's Facebook Messenger, or even SMS text messaging, or a Discord server. Your chatbots can sort of exist anywhere and in many different types of experiences, like the in-home voice assistants that we discussed. So I wanted to mention some of those things.

Andy B: 9:33

One thing that I want to call out: if you are a user of one of these things, maybe one of the things that makes you feel psychologically safe doing this is that, like, you can kind of be really honest because there's no human being involved. You feel this sense of privacy, that you're talking to this robot that can't judge you because it's a robot, and therefore you can ask for things you need really candidly. But that is kind of an illusion. I have this personal conspiracy theory that, like, mostly AI is people. Because, again, there is a whole industry developing now in prompt engineering, and people are getting hired for jobs as, say, prompt engineer, where they're designing the experience you're going to have with that LLM. First of all, they could literally be reading your messages; read the terms of service. You know, they could be training models with them, depending on what service you're using, and they're going to try and predict what's going to get you the experience that most likely makes them money. What makes them money is not always in your best interest; unfortunate reality of capitalism. And when you think you are in the safety of talking to an LLM, just know that, like, there are many layers of human beings between you and the core algorithm that you think is really interpersonal. And every layer, from the algorithm, to the prompt engineering, to the UI that you're interacting with, to the visual layer, the avatar that you're looking at, those are decisions made by people as to how you are going to experience it.

Kiran V: 11:30

Yeah, and that actually reminds me of something that we didn't really touch on yet. As these machines change over time, they're going to be upgraded with new software, and they're going to change the way that these chatbots interact, to be more human-like or to challenge their significant counterparts more, because we want to have some of that growth dynamic between the AI and the human. And that is another interesting concept, right? I might be in a relationship with a chatbot or an AI, and then tomorrow I come back to interact with it and it's now, like, a new version, right? And that concept doesn't exist for humans, right? You change very gradually over a long period of time, whereas now, suddenly, you have GPT-4, right, that can do all these new, cool, other things. And, you know, again, I think you mentioned, Andy, that this is very new and there's a lot of research that needs to be done, and I would be very curious to see how this impacts people's psyche when their significant other changes significantly, you know, overnight.

Andy B: 12:55

And I want to just highlight another way this technology works that's going to really shock some people. So if you've used ChatGPT or any of these virtual agents, you know that one of the things that's cool about them is they can keep a running track of the conversation, right? Like, you can say, oh, that's not what I meant, and it understands that you're referencing the conversation, you know, two turns above, when you said something, just like a person would. They can stay in context, and it gives you the illusion that this technology remembers you. But that's all it is; it's an illusion. We've talked about this before, but almost all LLMs only work one piece of text in, one piece of text out. So the way it gives you the illusion of remembering you is, the payload into the LLM isn't just the question you asked it, and it's not just the prompt engineering wrapped around it; it also includes the last, say, 20 turns of the conversation. It says: read this and guess what you would say next. So every single turn of the conversation where you think it's remembering you, it's a new instance of an algorithmic prediction, and I don't think people realize that's how it works. Just to be very clear: the AI does not love you back. Okay? It does not know who you are. The memory is an illusion of the UI.
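A minimal sketch of that illusion: the model function below is completely stateless, so the app re-sends recent turns on every call. `fake_llm` is just a stand-in for a real model call, and the 20-turn window matches the figure Andy uses as an example.

```python
MAX_TURNS = 20  # how many recent turns get re-sent with each call

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM: it has no memory, only this one input string.
    num_lines = prompt.count("\n") + 1
    return f"(reply predicted from {num_lines} lines of context)"

class Chat:
    """The app-side wrapper that creates the illusion of memory."""

    def __init__(self):
        self.history: list[str] = []

    def send(self, message: str) -> str:
        self.history.append(f"User: {message}")
        # Every turn is a brand-new, independent prediction: the model only
        # "remembers" because recent history is pasted back into the prompt.
        context = "\n".join(self.history[-MAX_TURNS:])
        reply = fake_llm(context)
        self.history.append(f"Bot: {reply}")
        return reply
```

Each `send` call starts from nothing but text: the memory lives entirely in the app's `history` list, never in the model.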

Kiran V: 14:25

Yeah, and just to add to that real quick: that memory is also limited. So ChatGPT's is something like 30,000 characters or whatever, but there's a limited amount of text that it can take in as context for any given conversation or response. So again, it's like a limited short-term memory that you don't necessarily have when you're interacting with a human.
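A sketch of that limit: once the history outgrows the budget, the oldest turns simply fall out of what the model sees. The ~30,000-character figure is the rough number mentioned above; real services count tokens rather than characters, so treat this as an illustration only.

```python
CONTEXT_LIMIT = 30_000  # rough character budget; real limits are in tokens

def trim_history(turns: list[str], limit: int = CONTEXT_LIMIT) -> list[str]:
    """Keep only the most recent turns that fit under the budget."""
    kept: list[str] = []
    total = 0
    # Walk backwards from the newest turn; stop once the budget is spent.
    for turn in reversed(turns):
        if total + len(turn) > limit:
            break
        kept.append(turn)
        total += len(turn)
    return list(reversed(kept))

# Three 12,000-character turns: only the last two fit under 30,000,
# so the oldest turn is forgotten entirely.
history = ["a" * 12_000, "b" * 12_000, "c" * 12_000]
trimmed = trim_history(history)
```

That silent forgetting is the "limited short-term memory" Kiran is describing: nothing marks the dropped turns as gone; they just stop influencing replies.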

Joe C: 14:50

Yeah, I think I read something someone mentioned: you know, these services can only ever perform empathy rather than feel it. And unfortunately, for us as humans, it's easy to think that they're feeling it. But it is something to keep in mind that they're simply performing, which might be a turn-off to a lot of people who maybe can't ever really set aside the notion that they're not really, you know, in a relationship with something that can feel. For now; maybe that'll change in the future.

Andy B: 15:22

And to be clear, if you're using ChatGPT or one of these conversational romantic partners, I'm not dragging you at all. I treat ChatGPT as my own little personal therapist often, to be honest; I'm sure we'll do an episode about that. The problem I have is with the meta situation. If you're lonely and if using these things makes you feel better, go for it. That's, you know, technology exists to serve you. You know, heal yourself, if that's what you need, in a safe place. On the meta problem, like, if you zoom out, I find it quite a troubling trend. I don't think the solution to pervasive loneliness in young men is throwing robots at them. I think that's not the right way to solve this problem.

Joe C: 16:17

Yeah. So we've talked about a few things. I mentioned earlier in the podcast that chatbots hallucinate; that's a problem. We've also talked about, you know, the concerns around chatbots sort of amplifying the worst of human behavior, creating echo chambers, hijacking our, you know, social-emotional systems, particularly when it comes to loneliness. We've also talked about, you know, the perils of chatbots being profit-driven and existing within the realm of capitalism, and what that means for privacy and safety. And then, as Kiran, you mentioned, the fact that people might find themselves in relationships with something that can be changed or altered overnight and without their consent, and how jarring that could be. Did anyone else have any other, like, cons that they wanted to discuss that we didn't mention?

Kiran V: 17:16

I think, again, and we did mention this a little bit, but just the fact that the more that we have access to this type of technology, the more we might also forget how to interact with humans, or, you know, lose the types of interactions that have defined us as a human species.

Andy B: 17:39

I also want to say: I work at a company called Verte AI, and we just launched a build-your-own-bot service where anybody can come in and build a bot to do things. And the things that have made me really happy, like some of what our users have done this week: they've made bots to help them write pop quizzes from their lecture material; they're making themselves meal-planning tools. They're solving real problems for themselves, and that makes my heart sing. Like, I love that we're making it easy for people to solve problems. And, you know, our customers are companies; they're trying to take something that they just don't have people to do and scale it with AI. That's a great application of this technology. I would feel quite sad if somebody used our platform to make themselves an AI girlfriend, because the amount of time and work that you might put into that might actually be more than if you just, like, went and got a good haircut, updated your dating profile and got a real girlfriend, you know. And you might get less out of it. So, yeah. Obviously I'm a fan of this technology. I build it. I know deeply how it works. I think there's a time and a place. I'm worried about the current trend in relationships, and I don't have enough good data to tell you, like, this is how it's damaging people. But I think the writing's on the wall that, like, in ten years there are gonna be papers saying it messed with a generation's brains.

Kiran V: 19:14

Yeah, and I wonder if there is a way to leverage the technology, because, again, it's extremely powerful. There's a lot of benefits that AI is bringing to humanity, right? And we discussed in our episode on climate change how many different things AI is impacting to improve the state of climate change. And I think there could be a nice play here, where we have these tools, we have the ability for a machine to provide, you know, a human-level type of interaction with individuals. And maybe we could reframe it, where these AI girlfriends or boyfriends aren't the end game, right? You're not trying to end up in a marriage with an AI. But maybe we can have this concept of, like, AI Hitch.

Andy B: 20:15

If you've seen the movie Hitch, right? Like, it could be a coach to help you get your dream.

Kiran V: 20:19

Yeah, it could be a coach, where it's like: all right, you can use this service and it's gonna teach you how to interact with other humans, gonna teach you how to be a good boyfriend or a good girlfriend, so that you can go out into the world and ultimately, you know, find that satisfaction in a real human interaction.

Andy B: 20:36

I love that idea.

Joe C: 20:37

Yeah, I think I really haven't written off, like, AI relationships being good for people. I think, and I think y'all agree, there could be something here. I think AI relationships are okay; they cannot come at the cost of hurting other people, and maybe I'll just leave it at that. There's, yeah, there's a real risk of, like, people sort of getting used to AI relationships and then going out into the real world and harming others or making the world unsafe. I think that's something we need to avoid and keep top of mind as we're developing out AI.

Andy B: 21:24

I can also see a world, it's a bit dystopian of me, but let's say we, humanity, get to a place where we want to send one person on a one-way trip to Mars, send a human being. That person's not gonna have the option to have a romantic partner with them who's a human, and having an AI romantic partner might fill a need for them that can't otherwise be met. Like, I'm sure there'll be situations where the AI romantic partner is the best available option for a particular person, and, you know, in that situation I have no problem with that person using it. I just want to make sure that it's not forced down people's throats.

Joe C: 22:04

Yeah, and that actually gets into, I was reading a little bit about the benefits of chatbots, and one thing it did mention was that, you know, this could be a friend that you have in the middle of the night when, like, no one else is around, who will pick up the phone, which is a very specific use case. But it kind of gets into what you're talking about: in the absence of real humans, like if you're far out in space, this might be a suitable alternative. All right, cool. I wanted to spend just a quick second talking about what's next and sort of what's coming down the road. We've alluded to some of these things, but there's, I'd say, a move from traditional rule-based chatbots to LLM-powered chatbots; I think we're gonna see a lot more of that. All the players are really pushing for AI interactions that are more natural in conversation. I think that's, like, a big barrier right now that companies are working to get over. A piece of this is, like, compute capacity to actually retain context clues, so remembering previous conversations or sort of having a track record of your interactions. That's, I think, something newer that's popping up and wasn't possible with the rule-based chatbots. But generally, I think these companies are trying to inject more personality, and that is gonna be key to making them feel more human-like. And I've personally benefited from some of that.

Andy B: 23:34

Like, I have a really big fear of the dentist, and I've been working on fixing that, and one of the best tools has been, like, I opened up a ChatGPT window. I told it: you are gonna be my coach to help me get through this, and I'm gonna ask you all the questions I'm too embarrassed to ask my real dentist, too embarrassed to ask my friends, too embarrassed to ask, you know, anybody in my life. It feels like a safe place, and I'm like, I know that somebody at OpenAI might read that message at some point, but, like, I don't care; it's far enough removed for me that it doesn't matter. And I feel like it's a safe little playground for me to think through something and expose myself to something that's really scary for me, so I'm better prepared to have those conversations in the real world. So that's why, when you guys were saying, like, oh, what if it's like an AI Hitch, I love that idea, because I benefited from coaching from an AI where I can, like, do my own little baby steps at my own speed. Like, I bet you that the AI girlfriend you're subscribing to would love to help you learn how to treat a real woman right and get you a hot date.

Joe C: 24:43

That is a very interesting additional benefit, in a way: for some people, this could be something to emotionally rely on that is also a domain expert. Like, you know, my boyfriend doesn't know anything about dentistry, and so he can make me feel better about it, but he might be able to make me feel even better about it if, like, he could throw some facts at me. So it's interesting.

Kiran V: 25:14

Yeah, and I think this is, again, right, this is something that we could use and leverage to be that option where AI is improving people over time so that we can develop these IRL interactions, right, with other humans. But I think capitalism is something that's going to hinder that, because if the goal is, hey, you're going to use this as a coach to go out into the real world, ultimately what that's saying, from a product perspective, is that the goal of the product is to have the users stop using the product, which is exactly the opposite of capitalism, which is: use it as much as possible and as frequently as possible. And, you know, again, we talked about just the concept of notifications, and how Google could bundle all its notifications and send you updates once a day, but rather they choose to have it notify you as much as possible so that you're constantly coming back. So I think this is that double-edged sword of capitalism and having AI truly impact humans for the better.

Andy B: 26:30

Yep, and just to tie it to real-world tech matters a little more directly: I agree with everything Kiran said. What this means is, the people building this technology are employed by companies, and they're given clear goals, like: make sure users grow over time; every quarter we must make more money than the previous quarter. And, like, when they start building, you end up with all these high hopes of, like, I'm going to make this really useful and great. And then you realize: oh no, people are leaving their love coaches; like, they're getting too good at relationships by dating AI. Now you need to make them stick around, and you've got pressure to keep user engagement. And that's when you start, like, the slippery slope of these individuals working in these tech companies: just add a button here to capture your attention some more, add a little thing there, each trying to hold your attention for, like, one extra percentage point, because that's their job, because that's what the company needs, because that's what capitalism instructs of us. And you end up with tech that hurts people.

Kiran V: 27:35

I don't know if I have told you guys about it, but I'm reading this book called Stolen Focus, and its premise is exactly that, right: how social media and the internet are stealing this precious focus away from us and taking so much of our time. And that's because of these applications being designed exactly for the purpose of keeping people engaged in the application as long as humanly possible. And that is the incentive of the developer, that's the incentive of the engineering manager, that's the incentive of the product manager, because all those people are employed by a company that needs to make money, that needs to pay bills, that needs to show profits. And what they're doing is they're profiting by taking your focus and selling it to the highest bidder, which is really scary.

Joe C: 28:32

Absolutely All right. So we're about to wrap up, but I wanted to ask any final thoughts before we close out.

Andy B: 28:41

Boo capitalism.

Joe C: 28:44

Yeah, I think we end a lot of our episodes with that sentiment.

Kiran V: 28:47

Yeah, yeah. AI is super powerful, has a lot of benefits, but capitalism is forcing people to maybe take advantage of other humans so that they can profit off of it.

Andy B: 29:07

And again, we're here to give everybody information, so you feel informed and you know what's happening behind this tech. So you have the same info we do on how this stuff works and what the motivating agents are. And if you work in this technology, nothing's going to get better if we all just do what we're told. You know it only takes like an average of 4% of the population to start a revolution. So if you work in tech, think about it.

Joe C: 29:39

All right, thank you, Andy, and thank you, Kiran, and thank you for joining us today. You can find us anywhere you listen to podcasts. Please listen, rate, share and subscribe, and email us at aifyipodcom. We would love to hear from you. We'd love to know what you want to hear more about and what we can do future episodes on. But until next time, thanks for joining us and have a good one. See you. Bye.