Episode 5 - AI and Mental Health

Andy B: 0:00

Joe, can you tell us... let's just start getting into context. I'm curious, I want to know: how bad are the mental health apps? Of course, here we have a woman and two boys rolling towards something, having fun. That is just a nice representation. And then I thought these had this 'measure twice' kind of architectural, curated feel.

Kiran V: 0:29

That's cool. We're like creating something.

Andy B: 0:32

Yeah, last year this was my vision board. I had a version of it for mobile; here's the one that's on my phone background. Actually, I think it's this, and so much of this came true. I did a whole bunch of yoga last year, over 100 sessions. I ended up going to Croatia on a girls' trip with three girls who came with me. I put this picture of mushrooms at the very top of my aspirational bucket, and then, remember, we had record rainfall in California. It's because I put that there; that's my belief. So I'm trying to manifest a slightly less rainy future for us this particular season. Yeah, anyway.

Joe C: 1:27

I talked to my friend Micky, who has listened to our episodes so far, and he's interested in hearing more about why we're doing the podcast. I just thought I'd throw that out there. I'm partially doing it to learn a lot, and maybe I'm not saying enough how I'm learning along with the audience. So, I don't know, something to keep in mind.

Kiran V: 1:52

I actually did learn a lot in the last hour.

Joe C: 1:57

Me too.

Kiran V: 1:57

But it's actually surprising, at least from what I found, how little impact AI has in, like, physical fitness, working out.

Andy B: 2:15

I learned a lot. I'm going to tell you guys, I'm going to tell you on the podcast, so you can react in real time.

Joe C: 2:23

I almost think this could be like three different episodes.

Andy B: 2:29

What if we try that, with the three separate sections today, and then, after it's recorded, in editing, Kiran, you can kind of make a call as to whether it makes sense to leave it together or make it three shorter episodes.

Kiran V: 2:43

Yeah, we could try that. The only thing is, we'll need some intro or something, or like different intros. Like an intro for all three versus... but we could also do all the intros. Yeah, we could just do all the intros.

Andy B: 3:01

One assuming it's one episode and one assuming it's three episodes, and that's the magic of editing. Hopefully it's not too hard for you, Kiran. If we just go straight into the gossiping, we can add the intro at the front, right? I think we can do a two-part.

Kiran V: 3:17

One is like mental health, and then one is diet and exercise. That's a good way to split it up. What do you think about that, Andy?

Andy B: 3:39

I think it's a good idea. Hey everybody, welcome to AI FYI. My name is Andy. I'm here with Joe and Kiran. We are your friendly neighborhood podcast about artificial intelligence, machine learning, and their applications. We are three people who work in the tech industry, have a bunch of AI experience, and are just generally curious people who love to gossip about what's happening in the AI world. And I believe today Joe is going to talk to us about wellness apps and AI in the new year of 2024.

Joe C: 4:18

That's right. We're going to be talking about wellness apps, such as mindfulness apps like Headspace, but also some of those apps that are beginning to provide therapy to users all over the world, like BetterHelp, and, what do we have here, Talkspace. And Headspace is getting into that game as well, so I look forward to that.

Kiran V: 4:40

And this is part of a two-part series on wellness and AI. So this one is going to be all about mental health, and we're going to have another episode where we go into diet, exercise, and fitness, more of the traditional wellness applications when you think about it, and how AI is used there.

Andy B: 5:04

Joe, can you tell us? Let's just get into the context. I'm curious. I want to know: how bad are the mental health apps?

Joe C: 5:14

I think it's 50-50. I think there are some really good things. Good in that they will help scale beyond, you know, the actual number of therapists. Bad in some of their data use practices, meaning capitalizing on the conversations of sad people to advertise. I actually did not see a lot of unsafe use of generative natural language to provide therapy. It seems like these companies know that mental health is very fragile, and they have not started really capitalizing on complete robot therapists to administer the therapy, which was my main concern.

Andy B: 6:14

Using the word capitalizing, what do you mean? Like, you think they'll eventually get there?

Joe C: 6:20

Yeah, but I think they will, as long as they can ensure safety first, which isn't a bad thing. I think people could receive robot therapy, as long as it's safe and it's not hallucinating crazy stuff.

Kiran V: 6:31

I wonder how insurance claims work with therapy and then robot therapy, right? Like, if there's an app that's supposed to help you with your mental health, and then you end up doing something crazy because you have a trigger from something totally unrelated, can you now go and blame this app for causing that?

Joe C: 6:57

That's a good question, but I mean you can sort of pose the same question for human therapists, like what if your human therapist says something, are you able to sue or do they have any liability?

Andy B: 7:10

It's not so different from a self-driving car. Yeah, go ahead. Yeah, the prompt engineering would get really crazy, because how do you make sure that ChatGPT is practicing evidence-based counseling? Like, how do you make sure it's legitimate? I found out recently that there's something called EFT tapping, where you basically tap certain parts of your face for a certain amount of time, and there are research papers showing it calms you down and it's good therapy. I would have thought that was make-believe if ChatGPT had been the one to tell me, like, 'no, no, no, go like this on your collarbones for a minute.' I would have been like, that's nonsense. But a nice therapist lady did it and was like, 'I promise you there are papers.' And I was like, I don't believe you. So I Googled it, and there were papers. Yeah.

Kiran V: 7:54

Yeah, I mean, I've been learning a lot about this actually, because my wife, Mira, is in grad school right now to become a counselor, and there's a technique called EMDR, which is a somatic therapy. Somatic therapy is a new kind of field that uses the body and the mind together, actually using physical techniques to impact the mind. And EMDR is eye movement something-something; I could probably look it up. But basically what you do is you close your eyes and you just tap, and then move your eyes to simulate REM sleep, which opens up these different pathways in your brain, and then you have this CBT-like talk therapy while you're doing that with the therapist. And apparently it's one of the most effective forms of therapy; within one session, like 85% of extreme trauma patients were significantly improved.

Joe C: 9:02

It's crazy, yeah. Yes. And talking about wellness, we know that there's a whole host of wellness apps out there. A lot are focused on mindfulness, and some of the big ones are Headspace, Calm, and Insight Timer. I looked a lot into Headspace; that seems to be the most popular. Have you ever used one?

Andy B: 9:26

I have used Headspace before. I've used Calm; I use the bedtime stories.

Joe C: 9:32

I love Headspace's bedtime stories, and they shuffle sort of the things that happen in the stories so that every night you don't hear the same plot. And it's not exactly a story, it's more just describing a scenario, and I thought that was maybe AI, you know, generating content that's unique from the content you heard the day before. So let's actually talk about how some of these apps use AI. I sort of started with, how do I imagine they use AI, or how do I hypothesize that they do it, and then I went out and validated whether they actually do. So, yeah, I talked a little bit about this: I don't think there's so much generation of content. It's happening a little bit. But I saw that Headspace uses AI for real-time adjustments. Basically, if you search for something, they can quickly go and adjust search results. That's sort of an obvious one, but you can do some content adjustment too. Sorry, if you search 'stress' in, like, a search experience, they can use AI to know that you're very concerned about your stress levels, and they might put some stress-related things into a video you watch, or into your results about what meditation to do next. Wait, so you're saying this is within the app, when you're searching?

Kiran V: 11:04

It will have, like, a feed of 'here are personalized results based on your searches.'

Joe C: 11:11

Yeah, and extending that beyond just search, but also into some of the content that they have, like a recommender system. Is the content also AI, or is that a human that's recorded it? I didn't see evidence of complete AI-generated content, like complete videos. I also did not see evidence of AI voices being used. I think humans are still running their meditations and the little video segments that they do. But I do think that's in our future, actually generating meditations using an AI voice, because we know that's possible now, that you can have a very realistic-sounding AI voice.

Andy B: 11:59

And would that be just so that you can, say, if I find a certain type of voice really soothing, I could have all the meditations transferred to that voice that resonates with me?

Joe C: 12:10

Totally. Yeah, I can imagine that pretty easily. Like, I know Spotify now has their DJ, which I believe must be an AI-generated voice, because it's very personalized. It says my name and a few other things that I imagine are not a human sitting there reading.

Kiran V: 12:29

Yeah, it's kind of creepy how realistic these things are now. Yeah, it's like you don't even know if you're talking to a human or not.

Andy B: 12:36

Fun fact: all three of us are actually gen AI, right?

Joe C: 12:40

Yeah, yeah, for now. A few other things: these apps are using AI to incorporate biometric data. We're going to talk more about that when we talk about devices. But based on some biometric data you're giving it, like heart rate or, I don't know, the amount of sleep you get, it can adjust content based on that.
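As a concrete illustration of the kind of personalization Joe describes, here is a toy scorer that nudges a content feed using recent searches and a heart-rate signal. Every title, tag, weight, and threshold in it is invented for the sketch; a real app would use learned models rather than hand rules.

```python
# Toy content-personalization scorer: recent searches and a biometric signal
# nudge which meditations get surfaced first. All values are hypothetical.

RESTING_HR = 65  # invented baseline heart rate, beats per minute

CATALOG = [
    {"title": "Unwinding Stress", "tags": {"stress", "breathing"}},
    {"title": "Deep Sleep Story", "tags": {"sleep", "story"}},
    {"title": "Morning Focus", "tags": {"focus", "energy"}},
]

def score(item, recent_searches, heart_rate):
    """Higher score = surface this item earlier in the user's feed."""
    s = 0.0
    # Search relevance: +1 for each recent search term matching a tag.
    for term in recent_searches:
        if term.lower() in item["tags"]:
            s += 1.0
    # Biometric nudge: an elevated heart rate boosts stress-related content.
    if heart_rate > RESTING_HR * 1.2 and "stress" in item["tags"]:
        s += 0.5
    return s

recent = ["stress", "sleep"]
feed = sorted(CATALOG, key=lambda item: score(item, recent, heart_rate=82),
              reverse=True)
for item in feed:
    print(item["title"], score(item, recent, heart_rate=82))
```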

Andy B: 13:01

I have a question. You seem to think that how much AI is being used in these mental health apps will just increase. Is that just because they have to make money, and it will allow them to serve more people better and faster? Because some part of me does not believe that a machine learning tool, no matter how intelligent, could ever compete with a real human being, right? Especially, I'm sure, a seasoned therapist. We all know those older people in our lives who can just smell out bullshit a mile away. How's an AI going to replicate that?

Joe C: 13:36

I think we'll get into that in a minute, because we're still in the realm of mindfulness and meditation apps. Okay. I think their main use case for AI will be just making better content and reaching more people, and I think that's a really good thing. But these types of apps aren't targeted in the way that a real therapy session would be, and so I think they can work in broader strokes and not really worry as much about safety in the way that you're describing. But let's talk about what we could call a set of apps around online therapy. These are really marketplaces, and sort of experiences that enable working with a therapist remotely. Some of these apps are BetterHelp, which is probably the biggest one I came across, and Talkspace. Headspace actually has something called Headspace for Organizations, which came out of an acquisition; they bought a company, and it was called Headspace Health before becoming Headspace for Organizations. How these apps use AI is in a few different ways, I guess. But first, have you all ever used one of these?

Kiran V: 14:51

I sort of used Ginger. I don't know, it wasn't really AI; it just connects you with a person. But 10 minutes into my first session, the person was like, okay, well, I need to go make dinner. I kid you not, this is no joke: 10 minutes in, the person's like, 'I need to go make dinner, so can we do this another time?' And I was like, you literally had this on your schedule as an option to book this time, and I booked it, and now you're saying... So it was a pretty bad experience, and I never used it again. But I mean, I'm sure it's a great app when it works. Yeah, it's just, you talk to another person.

Andy B: 15:34

Well, part of mental health is authenticity and honesty, so yeah. I've been doing telehealth for therapy and ADHD treatment, and when switching to this telehealth provider, which is actually associated with local Bay Area mental health practices, they wanted me to do an objective test for my ADHD in addition to the screenings with the doctor. It's not really AI, or it might be AI; I found the paper. I think it's called IVA-2 testing. Basically, I had to watch the screen, and I had headphones on, and there was a proctor on the other screen, and I had to have a mouse that plugged in so there was no latency, and I had to watch a screen and react to certain stimuli for 22 minutes without stopping. And at the end of it, they ran a bunch of stuff on it and spat out a 40-page PDF about my brain. Basically, it would say either the number one or the number two, via audio or visual, and every time it said the number one (or every time it said the number two), you had to click, whether you heard it or saw it, and they basically read into the frequency. There are periods where it's really fast, periods where it's really slow or nothing's happening, periods where your ears and your eyes are getting different things. It confirmed my previously diagnosed ADHD, but then the report it gave me, which I think is done based off of all the micro delays in my reaction time, made a lot of assumptions. Buried in that report was that I would be bad in emergencies because I get flustered, and I'm like, that's not true. Luckily I'm in my 30s, so I've already lived through emergencies; I'm awesome in emergencies. I can't imagine being, like, 15, and if I had seen that report about myself at that age, it would have changed the course of my life. I would have been devastated. But I saw this thing telling me, 'you're probably not very good under pressure,' and I'm like, try again, I am.
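For readers curious how a continuous performance test like the one Andy describes might be scored, here is a toy sketch using standard CPT-style measures (omission errors for inattention, commission errors for impulsivity, reaction-time variability). The trial data is fabricated, and the real clinical instrument and its report are far more involved than this.

```python
# Toy scorer for a continuous performance test: click for target stimuli
# ("1"), withhold for non-targets ("2"). Data and metrics are illustrative.
from statistics import mean, stdev

# Each trial: (stimulus, clicked, reaction_time_ms or None)
trials = [
    ("1", True, 312), ("2", False, None), ("1", True, 298),
    ("1", False, None),            # omission: missed a target
    ("2", True, 180),              # commission: clicked a non-target
    ("1", True, 455), ("1", True, 301),
]

target_rts = [rt for stim, clicked, rt in trials if stim == "1" and clicked]
omissions = sum(1 for stim, clicked, _ in trials if stim == "1" and not clicked)
commissions = sum(1 for stim, clicked, _ in trials if stim == "2" and clicked)

print(f"omissions (inattention):   {omissions}")
print(f"commissions (impulsivity): {commissions}")
print(f"mean reaction time: {mean(target_rts):.0f} ms")
# High variability in reaction time is the "micro delays" signal Andy mentions.
print(f"RT variability (stdev): {stdev(target_rts):.0f} ms")
```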

Kiran V: 18:03

Yeah, oh, that's interesting.

Joe C: 18:05

It sounds like there could be some AI in that, maybe in the assessment, but also in just like document generation.

Andy B: 18:14

Yeah, oh, absolutely.

Joe C: 18:15

Yeah.

Andy B: 18:16

Because the report was spit out instantly.

Joe C: 18:19

With these apps, or these sort of marketplaces, the way they use AI is by matching coaches with users, and doing that better and at scale; they can do it very quickly. Apparently, some of them have experiences to help coaches fill out note-taking summaries. I could see that being pretty useful: if you're a therapist, you don't want to take all the notes, and it does it for you. And then there was another use case, and this is the one I was really interested in, because this is where we get into, is the robot actually doing the therapy? There are some use cases of actually generating stock messages, sort of for low-tier or low-touch therapy. So in early 2022, Headspace acquired Sayana, which is an app with an AI chatbot that therapizes. The AI sort of ingests check-in information and mood trends and then actually advises patients. It sounds like that service is actually doing some of the therapizing, which indicates that maybe Headspace is trying to get deeper into that game. Oh, and Kiran, the company Ginger was actually acquired by Headspace, and that is what became...

Kiran V: 19:34

Headspace for Organizations. Oh, that makes sense.

Joe C: 19:38

Yep. Yeah, I mean, I have some questions around this, like we talked about: sort of the viability of AI, the risk of using it to actually conduct therapy, whether it's safeguarded. So earlier this year, Leslie Witt, who is the Chief Product and Design Officer at Headspace Health, which formerly was Ginger and is now actually Headspace for Organizations, said there's a ton of excitement, but the whole industry is, as she said, profoundly laced with a 'not ready yet' sentiment. So while I think a lot of these companies are interested in using AI, and certainly the power of generative AI, to do the actual therapy, they're not quite there yet because of the sensitivity around mental health care.

Kiran V: 20:32

So do they market AI? Like, is that one of the, you know, tenets of their marketing? Because I know in a lot of other industries it is, like, 'use AI to power your organization with smart worksheets,' you know, there's a million different things like that. What I'm curious about for these mental health apps is: is that similar, is it different?

Joe C: 20:56

That's a great question, and sort of off the cuff I would say no, they don't market it, because I do think it would turn people away. I don't think people want to put their mental health into the hands of a robot. Yet.

Andy B: 21:15

Why the word yet?

Joe C: 21:15

Maybe because I have full confidence that if people really do care about safety and they figure this out, then it means mental health care can be scaled out and can reach so many more people at lower cost. I think that's certainly something I can imagine for the future. But that's sort of scary; if it's not quality health care, we need to be really careful about it.

Kiran V: 21:44

How many people use these apps?

Joe C: 21:50

Yeah, I wish I had harder numbers on that, and I can go find them. I sort of looked at the top mindfulness and marketplace apps, and they're in the... oh, I wish I remembered. But the ones I've talked about all have more than a million users. Wow. Headspace is at over 70 million in 190 countries. Yeah, I was way off with that bar. But I saw a lot of apps out there that don't have even a million users.

Kiran V: 22:18

Oh yeah, I'm sure. I mean, Headspace, I feel like that's just been there the longest.

Andy B: 22:26

Headspace is the one where the royal got a job, right? No, that was something else; I forget the name of that. BetterHelp?

Joe C: 22:38

That might be it, BetterHelp or BetterUp, something like that. So I also wanted to make note of apps that are out there that don't necessarily fit in mindfulness or therapy-marketplace apps. We could throw astrology apps into this, and I think this is where, as machine learning and AI people, we could have a lot of fun. I use Co-Star. There's also The Pattern. Have you all ever checked out one of these?

Andy B: 23:08

Yeah, so I use Co-Star because you referred me to it, and you're actually one of my only friends on Co-Star. So every day I get a little report on how Joe and I should be interacting based off of our mutual astrology. And then I've used The Pattern. But some girls at work are really into astrology, and they hate Co-Star, because they say that AI-generated astrology loses the important human touch that's necessary for accurate interpretation of the science.

Joe C: 23:40

This is really kind of the same argument we have about any mental health app: what are we losing by letting robots do it? My personal opinion is that with astrology, which I don't exactly believe in, there is wiggle room where we can worry about safety less. I hope I don't ever have to eat those words. But it can be a little more fun or abstract, and right now generative AI is really good at making up shit, and it makes for interesting horoscopes. Yeah, I use Co-Star, and sometimes I am like, who thought of this? Or, what does this even mean? And I have no doubt that AI can be used to generate it.

Kiran V: 24:18

Yeah, and I think it's because astrology is, like, an explicitly unregulated industry or whatever; no one expects that to actually be the predictor of their future. Versus a mental health thing, which is now a studied science.

Joe C: 24:38

Yeah, but it's interesting to question: is it unnatural or untrue because a robot made it? I'd be curious what your co-workers say, Andy, about what a human brings to a horoscope that makes it more true.

Andy B: 25:02

Well, yeah, their beef is just with accuracy, actually. And they got me into an app called Chani, which is actually a team of astrologers doing the same horoscope. And so for a while, every day, I would look at like five different astrology apps and see how they compared, and they were all equally creative as far as I'm concerned, all equally useful to me, yeah. So, question for you, Kiran, because you said astrology is just whatever, and healthcare is, you know, research-based. I recently heard about the replication crisis in psychology: they basically can't replicate experiments for some of the foundational things that we use to practice mental health today. So it might also be finger-in-the-air, everybody's doing their best, as far as the science goes.

Kiran V: 26:09

Yeah, no, I agree. It's like, how do you measure that? I think we haven't measured it, outside of... it's always going to be subjective at the end of the day, because there's no quantifiable number that you can put on mental health, right? It's always going to be a subjective evaluation, versus, like, I can look at your blood pressure and get a quantified value. So yeah, no, I agree, I think it's interesting. But then you also see a particular treatment; the one that I keep going back to is EMDR, because that's the one Mira has told me the most about. I looked it up: it's eye movement desensitization and reprocessing. That's the one where you do the eye movements and kind of simulate REM sleep, which opens different neural pathways, and it has consistently produced improvements on those subjective measures, right? So it's still a subjective measure, but to consistently go from 'I'm a 10, stressed and anxious' to a three or four, it's like, okay, there's some notable thing there. But yeah, even that is a person, and it could be a placebo. You never know, you know?

Joe C: 27:38

I do think, yeah, maybe I can't speak on it too much because I'm not a therapist, but I read some article the other day about how there are really only like six emotions: fear, surprise, and then a couple more that I don't remember. But if a robo-therapist can at least get you so far as to place you in one of those buckets, and then administer the best approach to dealing with those emotions, maybe that's something. Maybe that's better than nothing.

Andy B: 28:14

Two thoughts I'm having. One, I totally use ChatGPT as my therapist. I know I'm not supposed to, but I do. There are some questions that are too embarrassing to even ask my health care providers, and I sort of think, I'm just going to ask ChatGPT, I want to bounce it off something. You know, when a question is too squishy for Google? That's when I throw it at ChatGPT, and I tell it: show me what things I even need to be Googling to know what the hell I'm dealing with here.

Kiran V: 28:49

Have you found that helps? Like, does that help you?

Andy B: 28:54

It does. Sometimes, when I have more information, things I think are mysterious feel solvable. For me it was stuff related to dentist phobia, nothing crazy. But I know other people use ChatGPT as, like, a sounding board and a friend at times, right? It's the sort of neutral conversationalist that you don't have to feel shame around, and that's got to be good for your mental health.

Joe C: 29:21

That's very relatable. I mean, I think I've used ChatGPT in that way too. But I want to bring up a way I've used DALL-E for mental health as well. I've done this like once or twice, but I have recreated images from my past, events both good and bad, that essentially placed me back in the original experience in a healing way. I recreated a memory, and it made me feel really good, or it sort of let me glance at it a certain way. And I had never understood art therapy before; I wouldn't have understood why art therapy is effective, or what it actually is. But coming out of that experience, I was like, I think this is art therapy. DALL-E, or any image generator, is really just giving you more tools beyond a paintbrush; it gives you the tools to actually create your own little masterpiece in a way you never had before. I certainly couldn't take the time, or wouldn't take the time, to paint those memories that I had, but I can take five minutes to write a prompt describing them, and it did a really excellent job anyways.

Kiran V: 30:45

No, I think that's so interesting, because that actually makes me think of a total tangent of how AI could be used, which is to help identify the types of therapy that would be most impactful for a particular individual, right? So for you, seeing those things could help; for someone else, it might be meditation therapy; for someone else, it might be EMDR or talk therapy. There are so many different things that you could use. And I've actually had this discussion with Mira quite a lot, where it's like, our medical system is broken, and people say that a lot, right? But I think the reason why is because it's not comprehensive. There's no place you can go and have someone look at you holistically and be like, oh, you have a broken leg, go get this bone fixed; versus, oh, you have a broken heart and you need to go talk to a therapist; or, you're stressed out because of something that happened at work and you need to go do, you know, physical therapy. There are so many different things you could do for an individual, and right now it's kind of on us as individuals to be like, oh, my stomach is hurting, I'm going to go to this doctor; oh, I feel sad, I'm going to go talk to this doctor. And in a lot of cases, people may not have the tools to determine what they should do, and they might not be able to get it. So that could actually be something AI could help with: understanding the types of pain you're feeling and, maybe not necessarily solving it, but helping you find a solution. Because that personally is something I've felt navigating the medical system, and I don't have any particularly complex issues or anything, and still it takes me six months to book an appointment with my primary care physician. It's like, okay, well, you know, I know it's tough.

Andy B: 33:00

That's really interesting to me. I love the idea of personalization in healthcare, so that, maybe with AI assistance, the healthcare providers we have can give everybody a really customized treatment plan. But then also, I have ADHD, and it's a minefield to get any sort of healthcare sometimes. 'Minefield' makes it seem dramatic; I shouldn't use war terms. It's very challenging. It's like the ADHD Olympics to get ADHD medication: endless forms and appointments and follow-up phone calls, just a bunch of stuff that is impossible to do. And I remember thinking, this is what an AI assistant should do for me. What would actually help my mental healthcare is something that calls 16 pharmacies and goes, 'Hi, do you have this instant-release Adderall in stock? Thank you.'

Kiran V: 33:53

Yeah.

Joe C: 33:54

Yeah, I mean, you could say one of the big reasons our healthcare system is broken is because it can't scale and it's too costly. And so I think, yeah, through automation and AI, the possibility is there to smooth out the bumps significantly.

Kiran V: 34:12

And, yes: AI for medical logistics.

Andy B: 34:16

So, I actually talked to a bunch of people in this area recently. I do consulting for large language model applications at work, and I've now spoken to three people in the same month, from large healthcare providers in countries you, our listeners, live in, who are trying to do medical record summarization. Basically, what happens is, for example, one person I talked to was the director of engineering at a hospice care network, and he talked about these nurses who get a patient in. It's not good news, right, if you're in hospice care. But at the point when the patient gets into treatment, there's something like a 300-page document of what their prior care has been: what has worked, what hasn't worked, what they've asked for, et cetera. And then that person has like 10 minutes max to do intake and set care guidelines. It's just impossible to ask them to read that, and so they're trying to come up with ways to have generative AI read and summarize that accurately. I had big concerns with that personally, because there's a lot that could go wrong with it, but there's a lot that could also go right if you can do it safely. Because otherwise, how else are these poor providers ever going to get through so much documentation for a single patient?
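The general engineering pattern behind that kind of long-record summarization is a map-reduce over document chunks. Here is a minimal sketch of that pattern only; `call_llm` is a hypothetical stand-in, not any provider's actual API, and a production system would need grounding checks, safety review, and clinician sign-off, as Andy's concerns suggest.

```python
# Minimal map-reduce summarization sketch for a long clinical document.
# `call_llm` is hypothetical; swap in a real model client in practice.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    raise NotImplementedError

def chunk(text: str, max_chars: int = 8000) -> list[str]:
    """Naive fixed-size chunking; real systems split on section boundaries."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_record(record: str) -> str:
    # Map: summarize each chunk with a focused, intake-oriented prompt.
    partials = [
        call_llm(
            "Summarize this excerpt of a patient care history. "
            "List prior treatments, what worked, what did not, and "
            "the patient's stated preferences. Do not speculate.\n\n" + text
        )
        for text in chunk(record)
    ]
    # Reduce: merge the partial summaries into one intake briefing.
    return call_llm(
        "Combine these partial summaries into a one-page intake briefing, "
        "flagging any contradictions between them:\n\n" + "\n---\n".join(partials)
    )
```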

Joe C: 35:37

Yeah, that safety issue is a huge barrier right now, and will capitalism let these things, once they're safe, actually make it to the people? We'll see. Okay, so we talked about a couple of apps. There are actually just a few more I wanted to talk about, because as I was doing my research, I was like, you know, these mindfulness apps and therapists aren't the only way people get mental health support. I saw an app out there for, like, self-care pet management, almost like a Tamagotchi: work on your self-care by taking care of this digital animal. It also occurred to me that a huge swath of the world is religious, and there are, like, Bible apps, and if you look to your religion for mental health and you have a Bible in your pocket, something like that is going to do you a lot of good. And that's just one example.

Andy B: 36:32

So I had this idea for an app. I'm going to give this away for free, okay? You're welcome, public. Somebody should make an app that simulates having a dog. Let me pitch you on this. Having a dog is good for your health, objectively. People who have dogs end up living like eight years longer than people who don't. Fact-check those numbers on your own, people; I did not Google it recently, but people who have dogs live longer. And then if you listen to, like, the Huberman Lab podcast, for example, and you get into his morning routine, he's all like, what's the first thing you should do in the day? Go get a bunch of natural light, go on frequent walks. I'm like, you know who helps you do that? A dog. But not everybody has the ability and the means to take care of an animal without possibly hurting that animal. So what if you had a digital dog? I'm thinking like Pokémon Go, but that works. One that actually tells you, like, now is a great time, you just woke up, go on a five-minute walk to the nearest tree (just to the nearest tree, where your dog would relieve itself) and come back, and that will help you. I think that could be really fun, and it would really help people.

Kiran V: 37:38

It would also be a good app for people that are like, I want to get a dog, let me try it out. Or for parents, when their kids are asking them; it's like, all right, don't kill this virtual dog, and then we'll think about getting an actual dog. Well, check out Finch. This is the app I came across that seems to be the top self-care pet app.

Joe C: 38:01

So yeah, I want to check it out. It looks cute.

Andy B: 38:05

I want to look into it a little bit more.

Joe C: 38:06

To sort of close out this section on these types of apps, I want to tell a story, a good-the-bad-and-the-ugly story, and it's about a tech nonprofit crisis helpline. It's called Crisis Text Line, and it seems like they do a lot of good. They've said that data science is at the heart of everything they do, and they want to use technology to help people who are in crisis. So they're constantly looking at their data, and one of the things they do is actually identify what might be the situation they need to help with, based on the words being used. So if they receive a message that says 'numbs' or 'sleeve,' there's a 99% chance that there's a situation involving cutting. If the words 'mg' or 'rubber band' are used, there's a 99% match for substance abuse. And if they use the words 'sex,' 'oral,' or 'Mormon,' there's a good chance that that person is actually questioning if they're gay. So this opens the door to directing them to the right kind of help, or the right kind of emergency level. I love that.
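As a sketch of the idea only (Crisis Text Line's actual models are trained classifiers, not a lookup table), here is keyword triage in a few lines, using the keyword-to-category pairs mentioned above with an invented priority ranking.

```python
# Bare-bones keyword triage, shaped like the patterns Joe cites.
# Keyword/category pairs are from the episode; priorities are invented.

TRIAGE_RULES = [
    ({"numbs", "sleeve"}, "self-harm (cutting)", 1),   # highest urgency
    ({"mg", "rubber band"}, "substance abuse", 2),
    ({"sex", "oral", "mormon"}, "questioning sexuality", 3),
]

def triage(message: str):
    """Return (category, priority) matches for an incoming text."""
    words = set(message.lower().split())
    hits = [(cat, pri) for keywords, cat, pri in TRIAGE_RULES
            if words & keywords]
    return sorted(hits, key=lambda h: h[1]) or [("general support", 99)]

print(triage("everything numbs out lately"))  # -> self-harm flag
print(triage("i took 200 mg again"))          # -> substance abuse flag
```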

Andy B: 39:21

Data science telling on the Mormons. Sorry boys.

Joe C: 39:24

So, my understanding is this company works directly with people who reach out to them, but they've also partnered with big players like Meta and Google on their natural language use cases, so that even these bigger companies, when they see or receive this kind of language, can possibly provide help too, and I want to learn a little bit more about that. So that's the good. But then there was a little bit of bad with this. Apparently other commercial companies have partnered with this company and gotten the data to basically see what someone in crisis looks like, but then have gone on to use it more... not even negatively, but apparently there was another AI company that received this data and then used it to sell customer-service patterns back to companies. So basically you can take people in crisis and how you help them, and then apply that to helping people who have customer service issues. That's not the worst thing in the world, but apparently these people in crisis did not know that their data was being used for this for-profit effort. Oh, and BetterHelp got busted, basically taking their patient data and giving it to Facebook (Meta) to build advertising personas, so that they could find other people in crisis that sort of match. But then, at the same time, it's like, okay, well, they are going and finding more people who could benefit from BetterHelp; as long as BetterHelp actually is better help, then maybe that's not the worst thing in the world. But I think the real trust breach is that the people in crisis didn't even know to begin with that their data was going to be used in this way. In fact, they were told it wouldn't. And so this situation with BetterHelp was the first time the FTC actually punished a mental health tech company: they had them basically stop using user data in this way, and then fined them like $8 million.

Andy B: 41:42

But I'm sure that just passing that data on to a company like Meta... I'm sure people in crisis are different consumers than people who are not currently in crisis, and I'm sure you're going to get some Buzzfeed article telling you, like, 36 things to buy to make the big sad the big happy.

Joe C: 42:12

Sort of another flaw, and I didn't really see this happening, but there's some level of... the data was anonymized, and in theory could not be traced back to the people in crisis, which is great. But there's always the threat that it can be traced back. There have been studies showing that anonymization doesn't really hold up, and that there are ways to find out who said what. So if there is very personal, private health information, there's a risk that it could be tied back to these people, who were probably very scared to share it in the first place.
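Joe's point about anonymization is worth making concrete: records stripped of names can often be re-linked through quasi-identifiers like ZIP code, birth date, and gender. A tiny example, with entirely fabricated data:

```python
# Re-identification sketch: two datasets that each look harmless can be
# joined on quasi-identifiers to put names back on "anonymized" records.

anonymized_sessions = [
    {"zip": "94110", "dob": "1991-04-02", "gender": "F", "note": "crisis chat"},
]

public_records = [  # e.g., a voter roll or a marketing list
    {"name": "Jane Doe", "zip": "94110", "dob": "1991-04-02", "gender": "F"},
]

def reidentify(sessions, records):
    quasi = ("zip", "dob", "gender")
    for s in sessions:
        for r in records:
            if all(s[k] == r[k] for k in quasi):
                yield r["name"], s["note"]

# If the quasi-identifier combination is unique, anonymity evaporates.
for name, note in reidentify(anonymized_sessions, public_records):
    print(name, "->", note)
```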

Kiran V: 42:37

I also wonder what level of information they were sharing. Were these medical notes, notes from the sessions, or general metrics or survey data? Which, still... but that could also matter, depending on how much they shared.

Joe C: 42:56

Yeah. Also, to Andy, your thoughts on sharing it with Facebook: I don't know if it still remained very private between that client and Facebook, or if Facebook would use it to help other advertisers. Yeah. That concludes my efforts to better understand how AI is being used in mindfulness and therapy apps. There's a whole lot there, and I'll be on the lookout for more.

Andy B: 43:29

Do we have any tips for people? Like, if we had to synthesize what does and doesn't work. I would tell people: go ahead and tell ChatGPT your darkest, dirtiest secrets, and it'll give you good advice sometimes.

Joe C: 43:42

Yeah. If I had to come up with some advice, I would say, be very conscious that your data is never really private. It's really such a big risk. A lot of these companies take privacy very seriously; I didn't hear a lot about breaches, or about that trust being violated beyond very managed data sharing between trustworthy companies, if that's even a thing. And I would also say, be very conscious that AI is starting to be used in these things that are supposed to help with mental health, and maybe just understand that you may not always be talking to a human. And, I don't know, be on the lookout for, like, shit that doesn't make sense. But I don't know; if you're someone in crisis or who needs help, this shouldn't even be something you need to be thinking about.

Kiran V: 44:52

Unfortunately, yeah. I do wonder just what the trajectory for this is going to look like, and I kind of got a similar sense when we talk about exercise and fitness. But I feel like AI is not going to be a dominant part of this industry, potentially ever, because I think the need for that human-level connection, for people that are going through these crises, is going to continue to exceed the demand for something that's just high-volume, scalable AI mental health care.

Andy B: 45:52

And if you think about the pendulum we discussed, I often talk about what's a job that's good for robots and what's a job that's good for people. Mental health care feels like a job that's really for people.

Joe C: 45:57

I don't know, Andy, I'm not entirely sold. I think it's the science. I do think, like, Kiran, earlier you talked about how AI could be used to sort of triage all sorts of health concerns, and I think in that sense, mental health is a good starting spot, because there isn't physicality to it in most cases. Like, we're very far away from AI helping you with a broken leg, or something very physical with your body. But the mind is, you know, different in that... I don't know, maybe it's an easier entry point, as long as we do it right.

Kiran V: 46:43

But I do think that when you talk about a broken bone, the remedy for that is way more straightforward than for someone that has PTSD.

Joe C: 46:54

Yeah, that's a good point.

Kiran V: 46:55

I would argue the broken bone is actually an easier thing for AI to take over, and it's more about the mechanical engineering of, okay, how do we build a machine structurally to do that? But when it comes to physical things, I feel like AI can actually be more impactful, or at least I've seen more examples of it being impactful there than in mental health, and this conversation hasn't convinced me otherwise.

Andy B: 47:28

I want to point out, we all know and love Star Trek: The Next Generation, and if you think about the people that Jean-Luc Picard surrounds himself with on his command team: you have Data, who is a robot and an AI, who is incredible for certain calculations. On his other side is Deanna Troi, the ship's counselor, who is an empath, and she's brought into just as many emergencies as Data is, to deal with things through intuition. So I think there's some truth to what I'm saying, Joe, which is, I think we understand that some jobs... I mean, I don't know, this was made in the 80s, I think, so maybe it's masculine versus feminine, the empath versus the data stuff. But I think some things are just better done by people. Maybe I'm being... we'll see.

Joe C: 48:24

One more sort of counterargument, though, is that humans are very self-interested, and I've worked with mental health professionals that were not suited to me, and also, sometimes their own fears, their own hang-ups, would come out in a way that wasn't helpful for me. And I think the optimal AI therapist would not be self-interested. I mean, yeah, when you're working with a human, you're working with a human.

Andy B: 49:00

For better or worse. But we live with people. The reason mental health even matters is because we live in societies, we live with other humans. And I like the fact that some of my friends will listen to me whine and whine and whine about something, and then be like, okay, stop it. You've done this to me, Joe; you've been like, 'Andy, you just need to go to therapy.' So there's something about that human push and pull; maybe we'll never empower an AI to give you the tough-love moment that you need to grow.

Kiran V: 49:40

Yeah.

Joe C: 49:41

I think the tough-love moment can be factored in.

Kiran V: 49:43

Yeah, I think you could factor that in. I'm just so on the fence here. When we've talked about other topics, to me it was very clear how AI is already making a huge impact and could potentially replace a lot of behaviors. Here I feel like I'm very much on the fence, because I now can see Joe's perspective, where maybe it is good enough for 85% of the cases, and that pain of having to go to five different therapists until you find someone you can connect with is kind of driving the motivation for more AI therapy.

Joe C: 50:32

It's not so different from how we say self-driving cars will make driving safer. There's human error with humans behind the wheel, and there's a real case to be made that a self-driving car will get safer than humans driving.

Andy B: 50:53

So I would then say that the AI needs to be a tool of scale and customization for the provider, not for an individual selecting their own treatment. Well, I don't know. Actually, I don't know if I believe what I just said.

Kiran V: 51:10

So the reason why I think it's different than a self-driving car is because with a self-driving car, there are clear rules to follow. Even though there might be situations that are new, and a kid runs into the road or whatever, there's always that rule of, don't hit that kid, don't hit this object, stay within these lines. So I think it's much more objective to successfully navigate a street than to navigate a person's brain. Because, again, 'don't hit that kid' is an important mental health rule as well. Yeah, it is.

Joe C: 51:48

But I think, okay, yeah, it's a good argument. I do think there's a rule book that therapists should follow, and it's their education, and if they're doing something outside of that, then are they good therapists? But it's nuanced, it's nuanced. It's a very good question.

Andy B: 52:13

Well, I could be convinced that personalization and customization matter. Because, like you mentioned with the art therapy, Joe, with AI, that sounds really cool. And I think of, yeah, if you have somebody in your life who's going through a severe illness and you anticipate the end is nigh, being able to capture their voice (maybe they do this for ALS patients, so they can have their own voice in text-to-voice algorithms), just being able to talk to that person, or have them read you a story again, years after maybe they've passed. I don't know if that's normal; I don't know, that's wild, that's crazy. Listeners, write in.

Joe C: 53:01

What do you think? Would you go see a robot therapist? Let us know.

Kiran V: 53:08

Yeah, I'm genuinely curious what it's going to look like.

Andy B: 53:14

And what are your expectations regarding how your data is treated? Like, when I told you to put your darkest secrets into ChatGPT: don't actually do that if it's anything you don't want some data scientist at OpenAI to see.

Joe C: 53:28

Therapists will be like, I need to report you if there's violence involved, or, you know, a whole host of things. Maybe that's something you should look at before you go talk to ChatGPT as well.

Andy B: 53:43

Yeah, the term is 'mandated reporter.' There are a couple of roles in our society with mandated reporting, where you're legally obligated to report crimes. I used to work as a counselor at kids' camps, working directly with children, and I was a mandated reporter; obviously, if I suspected assault, I had to report it. If AIs become mandated reporters, can you sue the manufacturer?

Joe C: 54:10

Yeah, do you think ChatGPT should report? Now we're getting into, like, Minority Report. I know, that's literally what I was thinking. Good questions.

Andy B: 54:23

With how fast the generative AI world is moving, and almost every company having set aside a large section of their 2024 budget to try and explore generative AI, I feel like if we did this again in a year, it's going to look totally different.

Joe C: 54:38

That's actually something I wanted to address. I went into the research expecting to see a lot of generative AI use cases, and I didn't. But I suspect it's because it's too new, and next year, or in the years to come, we'll see the boom. I just don't see it yet, but that doesn't mean everybody isn't working on it.

Kiran V: 55:01

Thank you, Joe, for walking us through this. I think there were a lot of very interesting insights that I, at least, learned from talking about wellness and mental health applications. I was surprised to learn there's maybe not as much AI as I might have thought, but there are still, at least for me, a lot of open questions on where the future of AI in mental health is going, and I don't have a strong perspective in either direction, at least today.

Andy B: 55:37

And if you are a first-time listener, we'd love for you to share your thoughts and to subscribe. Follow us on Spotify or Apple Podcasts, or your favorite application. Give us a rating, recommend us to a friend, subscribe if you're interested, and you can always reach out to us on our website. Look at the show notes we'll be releasing pretty soon, as well as to contact us with questions. That website is AIFYIPod.com.

Joe C: 56:04

And just a reminder that in a companion episode we're going to be talking about diet and exercise and that sort of realm of wellness. So stay tuned for that episode. Thanks for joining. We'll see you next time. Bye.