Episode 10 - AI and Starbucks

Kiran V: 0:00

Do you guys think it's weird to drink instant coffee with cold tap water, and just that?

Andy B: 0:10

I don't think it's weird.

Kiran V: 0:11

Everyone thinks I'm crazy because I do that. It's just, like, so easy though.

Joe C: 0:15

It's a little weird to do it on the Starbucks episode.

Kiran V: 0:19

Yeah, that is true. Actually, we should have all gotten Starbucks. How did we not think of that?

Welcome back, listeners, to another episode of AI, FYI, your go-to podcast for all things AI: learning about new topics, what's going on in AI, and everything in terms we can all understand. Today we've got a pretty exciting episode, something I was actually surprised to hear about. We're going to be talking about Starbucks, and about how Starbucks is using AI to build the next generation of what we all know and love as our favorite coffee spot. So, Joe, why don't you take it away and tell us: what is Starbucks?

Joe C: 1:23

Yeah, absolutely. I'm very excited to talk about this topic, and I think we were all kind of shocked at how much Starbucks uses AI (except for Andy). So here's the overview of Starbucks, which, if you are living on planet Earth, you probably know about, but if you don't: Starbucks calls itself the premier roaster and retailer of specialty coffee in the world. That's what Starbucks says. It was founded in 1971 in Seattle, and you can actually go to their first store, which I've been to.

Joe C: 2:01

And today, and a lot of these stats I'm going to share are very fresh, from the beginning of 2024: there were a couple of different counts, but they have roughly 32,000 to 35,000 stores worldwide. So since 1971, that growth is pretty insane. A little more than 16,000 of those stores are in the United States, and I just saw a lot about how they're trying to expand internationally, particularly in China. As of this time, there are about 6,800 stores in China, scattered across 800 different cities, and I was like, whoa, there are 800 cities in China? That's crazy. So, yeah, we're going to be talking a lot about their digital platform.

Joe C: 2:50

So just to touch on that: as you may know, they have a pretty robust rewards loyalty program, and it contributes a lot to their business. A little bit about how it works: it allows you to pay through the app and earn reward points, which they call stars, and then you can redeem those stars for free food and drinks. As of this year, there's a record 34.3 million active users in that rewards program, up 13% from the same time last year. So a lot of growth there, and obviously something they're going to keep, you know, investing in. Like I said, a big part of this platform is mobile payments, and apparently those pay capabilities were used for more than 30% of all purchases. And it seems that if you're part of the rewards program, you're coming into stores more frequently and spending more time in there. So that's the overview.

Joe C: 3:56

Just a couple more things. Like I said, they're really investing in this platform, certainly in AI, which we're going to get into, and in growing internationally. I also saw a lot on how they're trying to invest more in, like, worker well-being and worker pay. That's something I read on their website, so who knows what that means, but you know, often when we've heard about Starbucks in the news in the past couple of years, it's been about how they treat their employees, and employees unionizing, and that sort of thing. So let's get into the AI of it. Andy's going to be driving this conversation; she's done a lot of research. So, Andy, take it away.

Kiran V: 4:35

Well, actually, before that, I'm curious do either of you guys use the Starbucks app?

Andy B: 4:41

So I'm one of the people who's kind of trying to boycott Starbucks because of the union busting they're doing. I'll go to a Starbucks location that has unionized if I can, but otherwise I'll go to another coffee vendor.

Kiran V: 4:58

How do you know if a Starbucks has unionized?

Andy B: 5:01

You can ask the employees, but I think there's also a national Starbucks union organization; I think on their website they show details of which stores have held votes and how they went. Of course, Starbucks has shut down stores that have unionized, so that's not great.

Joe C: 5:19

Mickey, who's getting a lot of shout-outs on this podcast, told me yesterday that they're actually going to have to reopen a couple of stores in San Francisco, because they were accused of closing those stores to bust the union that formed there, and I guess the law says you have to reopen them.

Andy B: 5:40

Yeah, you can't just stop union efforts. That's called union busting and it's illegal yeah.

Kiran V: 5:49

Wow drama.

Andy B: 5:51

So I was the one that pushed the boys into doing an episode on Starbucks, and when I first pitched it, both of them were hesitant, but I nagged them into it, and it's for a couple of reasons.

Andy B: 6:02

One: juicy, juicy Starbucks drama, and capitalism drama. Obviously I want to talk about it, yeah. Two: it's actually just another stealth episode about agriculture. I tricked you! I've got a whole bunch of stuff about crops to talk about, yeah. And three: I think Starbucks is just a really relatable household brand, and probably every single person listening to this podcast has gone to a Starbucks at least five times in their life, and I think it'll be really interesting for folks to realize that, from the very ground the coffee bean grows in all the way to your transaction at the point of sale, there's AI every step of the way, and there are investments in AI happening every step of the way. I think it's a really interesting story of how AI is changing the world. So that was my motivation for talking about Starbucks. That, and the SEO: if people type Starbucks into a podcast app, maybe we'll pop up.

Joe C: 7:02

And it's, yeah, probably a topic that a lot of people can relate to. They're everywhere. Exactly.

Andy B: 7:12

So yeah, stop me when you guys have questions, because I went off the deep end researching coffee and I'm sure I have way too much to talk about. But first and foremost, a little thing I learned this morning: guess who's on the board of Starbucks?

Kiran V: 7:31

Hmm, Musk.

Andy B: 7:32

Satya Nadella.

Kiran V: 7:34

Oh, no way.

Andy B: 7:36

Yeah. So for those who don't know, Satya Nadella is the CEO of Microsoft. He certainly knows a lot about AI; he's the CEO of Microsoft who did the partnership between Azure and OpenAI, and he's been on the board of Starbucks since 2017.

Kiran V: 7:52

Wow, they're schmoozing it up there in Seattle.

Andy B: 7:56

Yeah, Starbucks started publicly talking about their machine learning platform, called Deep Brew, around 2017. And it's built on Azure ML. So that's that connection, those billion-dollar contracts.

Andy B: 8:17

Yeah, exactly. So for those of you who are not in the tech world, the reason we're going "wow, ooh" is that Microsoft kind of lost some ground in market dominance in the early 2000s; people stopped always using Windows and Microsoft Word and things like that. And they're actually making up a lot of ground right now in the AI world, right? Their platform, Azure, is a competitor to things like AWS and GCP. If you don't know what those acronyms mean, don't worry; just know that tech companies buy services from Microsoft to run their cloud infrastructure on Azure. And Microsoft has a really strong partnership with the leading name in generative AI, OpenAI. So the CEO of Microsoft is on the board of directors of Starbucks, and Starbucks has a machine learning platform (I'll explain what that is in a second) built on top of Azure. It's a very circular thing.

Kiran V: 9:20

They all know each other. And a quick demystification of cloud computing: it's basically a bunch of extremely powerful computers sitting in a building somewhere, and when you're connecting to the cloud, which is what most if not all of the websites you use are doing, you're sending tasks to that data center saying, you know, send me this webpage, or do this calculation, and it does the work on those computers and just sends you the result so you can see it. That way your laptop doesn't fry itself trying to do a whole bunch of math; the heavy, expensive stuff gets sent somewhere else, calculated there, and sent back. So that's cloud computing in a nutshell.

Joe C: 10:03

And we've talked about data storage, and also the compute needed to train some of these massive models; all of that can happen through those cloud services as well.
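The request-and-response pattern described above can be sketched as a toy simulation in Python. Nothing here touches a real network; the function names and the task format are invented purely for illustration:

```python
# Toy simulation of the cloud-computing pattern: the client sends a
# small task description, the "data center" does the heavy work, and
# only the small result travels back.

def cloud_compute(task: dict) -> dict:
    """Stands in for a powerful remote server."""
    if task["op"] == "sum_of_squares":
        n = task["n"]
        # The expensive part happens here, not on the client's device.
        return {"result": sum(i * i for i in range(n))}
    raise ValueError(f"unknown op: {task['op']}")

def client_request(op: str, **params) -> int:
    """Stands in for your laptop or phone: it only packages the
    request and displays the answer."""
    response = cloud_compute({"op": op, **params})
    return response["result"]

print(client_request("sum_of_squares", n=1000))  # heavy math done "elsewhere"
```

The point of the sketch is the division of labor: the client only packages a request and reads back a small answer, while the expensive loop runs inside `cloud_compute`, the stand-in for the data center.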

Andy B: 10:16

So on Starbucks's board is at least one person, probably more than one, who knows a whole bunch about AI and what AI can do. When I started laying out my notes for this, I had a section on just how coffee growing is being impacted by AI, which is pretty interesting, though it's hard to figure out how much of that Starbucks is directly involved in. Then I have what Starbucks admits to doing, and what I found tech talks and slides on, and then I have my own guesses on the direction Starbucks is going. So we can get into it in that order, I think, following from tree and soil all the way through to when you buy a cup of coffee.

Joe C: 11:00

That sounds great. I love the hypothesizing too. I think that's always a big part of this.

Kiran V: 11:05

I love how you slid agriculture in this episode.

Andy B: 11:09

Always and forever; I am an agriculture stan. Okay, so did you guys know that most people who grow cocoa beans have never had chocolate?

Joe C: 11:21

Oh, interesting.

Andy B: 11:24

Once the cocoa bean gets pulled off the tree, it takes so much processing to turn it into chocolate, and it's so expensive, that most cocoa growers have never had the opportunity to taste the output of their own agriculture.

Joe C: 11:38

That's so sad.

Andy B: 11:40

It's deeply sad. You can actually go on YouTube and type in "chocolate grower tasting chocolate for the first time." There are people who record it; they bring Hershey's to the chocolate farms and let people taste what they're growing, and it's really heartbreaking and interesting to watch. Hershey's is chocolate, yeah? Barely. Not legally, but barely.

Kiran V: 12:03

It's milk and sugar and stuff, yeah. News to me.

Andy B: 12:07

So I did not find as much of a similar story for coffee, but just a little bit of context. The coffee plant, I believe, comes from the Eastern Hemisphere, and it is a cash crop. Coffee is second only to tea among the most consumed beverages on earth; I believe it goes water, tea, coffee, beer, wine. But economically, coffee commands a larger price point; I believe coffee is a larger market than any other beverage on earth.

Andy B: 12:47

And it's a cash crop, which means that there are farmers all around the equator, in a lot of places around the earth (South America and Africa specifically, but certain islands as well) who grow coffee, and for the most part it's this weird thing where a lot of coffee is actually still grown by relatively small farms. The beans are taken off the tree; the fruit is like a little cherry, a little red dot, with a bean inside that has to be dried. Then you have to remove the husk from the bean. The beans are brought into centers where a weight and a quality score are assigned, the farmers are paid for them, and they're taken off to subsequent processing, which can include drying and fermentation. They're usually imported to other countries raw, roasted on site (Starbucks roasts their own coffee, right?) and then brewed.

Kiran V: 13:50

So when Starbucks gets those beans, have they already gone through all that processing, with Starbucks just doing the roasting, or does Starbucks also do some of those other processing steps?

Andy B: 14:01

I was not able to get a definitive answer on Starbucks's bean supply chain. I do happen to have a friend, who I will not dox on this podcast, who works at a company where, when beans are imported en masse, in bulk, to the United States, a lot of them go through the ports of San Francisco and Oakland, and companies like Starbucks and Peet's hire a third-party sampling group to test the quality of the beans and test the caffeine, which can vary widely, and then decide which batches they're going to buy from the bean aggregators. So I know at least part of their coffee supply chain works that way: the point of sale to Starbucks is in the United States, at these ports.

Kiran V: 14:47

Got it. And then I'm curious, what is the volume we're talking about here? Like, daily, monthly?

Andy B: 14:59

I'm at my brother's house and I literally can't reach my keyboard, so maybe somebody can Google it, but it's tons and tons and tons of coffee. Nestle is probably the biggest seller of coffee overall, but Starbucks is probably the largest coffee chain.

Joe C: 15:14

Yeah, at that scale, with so many stores and so many consumers, they must have a massive mix of different beans coming from all over the place, which, yeah, makes you wonder how you maintain a consistent Starbucks flavor when you have such a variety of beans.

Kiran V: 15:35

I'm assuming... okay, so I looked it up, just the immediate Google answer. Guess how many pounds of coffee beans are consumed per year?

Andy B: 15:47

I'm gonna say it's in the hundreds of... no, it might be in the billions. Not in the trillions.

Kiran V: 16:01

It's 10 billion pounds of coffee beans harvested per year, providing more than 25 million jobs. Again, that's just the first immediate Google answer. But 10 billion pounds of coffee beans!

Andy B: 16:07

25 million jobs, that's 25 million people involved in the coffee bean industry.

Joe C: 16:14

That is crazy.

Andy B: 16:17

So it's a really complicated, multi-step supply chain, right? Coffee beans, which I believe originated in Africa, were brought to Brazil and South America along with human beings from Africa (thanks, Columbian Exchange) and grown en masse. A lot of coffee farmers' job is to grow as many beans as possible, and then they sell the beans to their local coffee bean aggregator and make a very small amount of money on that. That person takes all the beans from a set of farms to another center, where a larger batch is concentrated, and so on and so forth, until they arrive at a massive Starbucks processing center. And Starbucks is a premium coffee globally, right? Compare Nestle or Folgers.

Andy B: 17:16

There's a lot of gradation in coffee. Obviously you have your high-end fourth- and fifth-wave coffee shops that work directly with farmers, source their own beans, control the whole supply chain themselves, and do low-volume, high-quality production. But for the amount of coffee that Starbucks sells, their quality is at a different level than most of the Nestle brands. Nestle's Nespresso line is probably the same level of fanciness, but most of Nestle's coffee is sold as, like, instant; they have vending machines, they've got coffee pods, they've got diffusion brands, etc.

Kiran V: 17:59

Yeah, when you go to Asia it's like mostly Nespresso or Nestle coffees that you get.

Joe C: 18:07

Do you have a sense of what makes one coffee better than another? Is it just, like, the amount of flavor, or the taste?

Andy B: 18:19

Interestingly enough, I've read about AI being used in lots of beverages for quality control; I think there's an episode we could do about AI and beer, actually, that's really interesting. But when you make a beverage out of any plant, whether it be tea, wine, or coffee, you can taste what happened at the farm. Our palates are refined enough that slight changes in tannins and acids and sweetness and bitterness are detectable. And sometimes, no matter what the farmer does, you can't control the weather, right? So the quality of beans you can produce can change year over year, and it's subjective: somebody might like more acidic, somebody might like sweeter. Imagine you are selling your beans to a local aggregator. That person looks at your beans and has to guess how good these beans are going to taste. You can get paid a different amount based on what that person thinks, and who that person is might be different depending on what day you show up.

Kiran V: 19:23

Yeah, it's actually interesting. So, slight tangent, but I did a coffee tasting class, like a two-hour class, in Mexico City when I was there, and I thought we were going to show up and just drink a whole bunch of coffees. But the first chunk, most of the class, was actually just understanding tastes. I guess they've quantified all the tastes that we can taste, and there's some double-digit number, like 60 or 80, of different flavors, and these flavors are things like tar or wood or jasmine or orange. So it's very much tied to nature, to the flavors and scents that we're getting from nature, and that's how we've quantified it. For me, that's something really interesting I've thought about: we haven't digitized flavor. We've digitized color and sound. Okay, well, maybe that'll change.

Kiran V: 20:20

Yeah, it was interesting to see how we map flavors to the real world. It's not like there are 26 abstract flavors; it's, like, peach, or jasmine. So that's cool. Anyway, another tangent.

Andy B: 20:34

I really like booze, and I like wine. So when a sommelier tastes a white wine, and they smell it and go, oh, green apple, what they're picking up on is actually a particular chemical compound that's in both the wine and a green apple. You can do spectrometry on it, right? People with fine palates are picking up on the exact same chain of, like, whatever acids or proteins they're smelling, and it's the same chemical chain in the two things. So when you're saying these tastes, these are the actual chemicals that people with refined palates can pick up, and the flavor name is a layman's term for that chemical.

Andy B: 21:16

Yeah. And like green apples have a lot of them.

Kiran V: 21:19

Yeah.

Joe C: 21:20

I'm going to guess this is maybe where some of the AI comes in: with detection, a very defined thing that a machine might look for.

Andy B: 21:31

Yeah, we're not there yet, though.

Joe C: 21:33

Okay.

Andy B: 21:34

Imagine, imagine you're a coffee farmer. Odds are you are in an impoverished economy somewhere in the world. You're probably doing subsistence-level agriculture, or slightly above, and you're getting pennies on the dollar for what your labor is providing. And coffee plants are actually fussy. First of all, if you're going to make money, you have to kind of monocrop them, which means you're often clearing forests to try and give your coffee beans a good run. And there are diseases that can hit these coffee plants, and one disease spreading one year can tank a farm.

Andy B: 22:20

And if you think about, for example, people in South America who might be the descendants of enslaved people, it's not like they have 200 generations' worth of history with this crop the way that, like, wine growers in Turkey would. They're growing this crop because they know they'll be able to sell it no matter what, and they don't necessarily always have the context of how changing the terroir impacts the coffee bean. Of course they can drink coffee, and they do, but it's not as direct a relationship. So one of the ways people are deploying machine learning is to help coffee growers. For example, there are two species of coffee: Arabica and Robusta. You've heard those terms; those are two different species of this tree that grows a fruit, the fruit has a bean, and the bean is the coffee.

Kiran V: 23:16

And are those the only two, or are there others?

Andy B: 23:18

There are many species, but I believe those are the only two we drink; I don't know if there are others that exist. Oh, okay. So Arabica coffee, Coffea arabica, is actually the more susceptible of the two to diseases, insects, and fungi (Robusta is generally hardier), and if you can't identify and treat those problems quickly, they can reduce your yield, which reduces your profits, and can even destroy your trees. And these trees, once planted, take about three to seven years to actually start growing beans, so planting them is already an investment, a risk that you're taking, right? And I spent way too long yesterday looking at pictures of coffee leaves; one was sick with an illness called rust and one wasn't, and it's not clear to the eye which one was the sick plant. So there's a whole bunch of groups, including a group out of the Federal University of Lavras in Brazil, who make something called Coffee App. I'm gonna quote from what they described: "We attempted to develop an AI-based handy mobile application which would not only efficiently predict harvest time, estimate coffee yield and quality, but also inform about plant health."

Kiran V: 24:46

Nice.

Andy B: 24:47

Multiple features of analyzing fruit from the image taken by phone camera within field can thus track fruit ripening in real time.

Joe C: 24:57

It's interesting to me that it's a mobile app, but this is probably for use by very small farmers. I'd imagine you'd want, like, a drone that flies over the field, but obviously that's going to be an investment.
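As a rough sketch of the kind of in-field prediction being described (this is not the actual Coffee App, whose model isn't public), a classifier can be boiled down to comparing simple image statistics against per-class references. The pixel values and class centroids below are invented toy numbers:

```python
# Toy version of the in-field idea: label a leaf photo "healthy" vs
# "rust" from simple color statistics. Real systems use trained deep
# networks on full images; these features and centroids are invented.

def leaf_features(pixels):
    """Average (R, G, B) over the image, a crude stand-in for learned features."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return (r, g, b)

# Hypothetical class centroids (rust lesions shift leaves orange-brown).
CENTROIDS = {"healthy": (40, 120, 45), "rust": (150, 110, 40)}

def classify(pixels):
    f = leaf_features(pixels)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, c))
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))

green_leaf = [(35, 125, 50)] * 100   # mostly green pixels
rusty_leaf = [(145, 105, 35)] * 100  # orange-brown pixels
print(classify(green_leaf), classify(rusty_leaf))  # healthy rust
```

A real deployment would run a trained neural network, but the shape of the interaction (photo in, label out) is the same.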

Andy B: 25:09

I don't know if the people from that group are associated with it, but there's the group making Coffee App at the Federal University of Lavras, and then there's an app called Demetria, which is an AI mobile app for coffee growers. I can't tell if there's any relationship between these groups. They're both based out of Brazil, and they're trying to serve individual farmers. Phones with internet connections are wildly prevalent all over the world, and again, we're using cloud computing to do the AI, so these farmers can ask: is what I'm looking at an early stage of rust disease?

Andy B: 25:44

Snap a picture and get back a really accurate prediction, actually, of whether or not there are spider mites or rust on the plant, which are really common problems. A paper released January 11th, so 20 days ago, made some of these models much more accurate and smaller, which is really cool. Let me explain what that means; I'm gonna go a little too technical for a second, and then I'm gonna ask Joe and Kiran to play it back, demystified. Okay. Originally, the dataset for rust and spider mite detection came from a very particular dataset that was really unbalanced, and people trying to train classification models were having quality problems. Somebody figured out how to use a GAN to supplement and balance the dataset, and then they trained a transformer, and a really small transformer performed super, super well. Oh nice. Who would like to translate for the audience?

Kiran V: 26:46

Yeah. So essentially, what Andy is saying is: unbalanced dataset. Here they're probably looking at a bunch of pictures of healthy plants and a bunch of pictures of unhealthy plants, and my guess is they didn't have enough samples of the unhealthy plants, because that sounds harder to collect. So using GANs, you can actually generate more examples of the unhealthy plants; when you're generating, it's called synthetic data. Go check out our training data episode, which isn't up right now, but we'll put it up at some point when we record it. Synthetic data is basically fake data created to be very similar to the type of data you're looking for. So in this case, they might create a whole bunch of fake images of sick plants to supplement the dataset, so that when they're training the model, it has enough examples of both healthy and unhealthy plants. Because if you have too much of one, the model will only learn healthy plants and be really bad at detecting unhealthy ones, or vice versa. So that's what GAN-based synthetic generation is used for: to help improve the quality of models.
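A minimal sketch of that balancing idea, with a cheap stand-in for the GAN: here "generating" a synthetic sample just means perturbing a real minority-class sample, and the feature vectors are invented toy data rather than leaf images:

```python
import random

# The paper uses a GAN to generate extra images of diseased leaves.
# As a stand-in, this sketch "generates" synthetic minority-class
# samples by jittering real ones until the classes are balanced.
# The 4-number feature vectors are invented toy data.

random.seed(0)

healthy = [[random.gauss(0.2, 0.05) for _ in range(4)] for _ in range(100)]
diseased = [[random.gauss(0.8, 0.05) for _ in range(4)] for _ in range(10)]  # rare class

def synthesize(samples, n_needed):
    """Cheap stand-in for a GAN: perturb real samples to make new ones."""
    out = []
    for _ in range(n_needed):
        base = random.choice(samples)
        out.append([x + random.gauss(0, 0.02) for x in base])
    return out

diseased_balanced = diseased + synthesize(diseased, len(healthy) - len(diseased))
print(len(healthy), len(diseased), len(diseased_balanced))  # 100 10 100
```

A real GAN learns to produce novel images rather than jittered copies, but the downstream effect is the same: the classifier now sees both classes in similar proportions.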

Kiran V: 28:01

A transformer is a component in a model; it's just a way of building an AI model, and we don't need to get into the details of that. And when we talk about bigger or smaller models, generally we're talking about the number of nodes, or parameters, in that model. So again, slight tangent, but an LLM uses billions of these nodes, and you can imagine billions of anything is a lot.

Kiran V: 28:28

And when we talk about compute and cost, that means it's very expensive to run and operate. So once we've built these really big models that perform well, the next phase of optimization is: how can we get the same accuracy out of a much smaller model, which is super cheap? And eventually, some models can actually run on the phone. Then it's not running in the cloud, it's running on your phone, which means it gives you results a lot faster, because you don't have to go talk to the internet and come back; it can, you know, do your facial recognition on the phone itself. Slight tangent, but that's essentially what Andy's saying when we're talking about supplementing these datasets and improving the accuracy of those models.
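One way to see why "smaller" matters: for plain fully connected layers, the parameter count (weights plus biases) grows with the product of adjacent layer widths, and parameters are what cost memory and compute. The layer sizes below are arbitrary examples, not any real model:

```python
# Rough illustration of "model size": the parameter count of a stack
# of fully connected layers is what drives memory and compute cost.
# Layer widths here are arbitrary examples, not any real model.

def dense_param_count(layer_sizes):
    """Weights plus biases for each consecutive pair of layers."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

big = dense_param_count([1024, 4096, 4096, 1024])
small = dense_param_count([1024, 256, 256, 64])
print(big, small, big // small)
```

Here the "big" stack has roughly 73 times the parameters of the "small" one, which is the kind of gap that separates cloud-only models from ones that fit on a phone.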

Joe C: 29:14

We alluded to this before, but getting the model small enough that it can run on a phone is so important, because if it's something that can only run at high cost on big machines, it's not going to reach people the way it can on a mobile phone with a basic internet connection. So that's great for these farmers.

Andy B: 29:37

Yeah. So the big transformer model was overfit. What does "overfit" mean? It means the model had essentially memorized its training data, so its predictions on new data were much less accurate than on the examples it had already seen. But yeah, these folks published a paper, and they're building off a big body of work on identifying this disease. And what I love about this application of AI is that the consumer of the model is not Starbucks; it's the farmers whose work feeds into Starbucks, and they're trying to give them tools to make more money.

Andy B: 30:06

Grow healthier plants. Agriculture can get really complicated, and if you can give people cheap mobile apps that help them do less harm to the earth while growing better crops, that's a win. There is a completely different group working on the exact same problem, with a different type of model, from a different point of view. Ready for it? I came across an article saying that a group out of Imperial College London in the UK had paired agriculture researchers with an astronomer, and I was like, what does an astronomer have to do with anything? If you remember our episode on AI in Space, which I don't even know if we've posted yet either... it's coming. Sorry guys, it's coming.

Andy B: 30:58

So they trained on very high-precision satellite images of crops, trying to detect rust and some other illnesses from space, from satellites. It worked... not great, but fine. And now they're modifying that computer vision algorithm to run, just like Joe suggested, on drones, and they've actually test-deployed these drones in Chiang Rai, a northern province of Thailand, for coffee farmers there.

Andy B: 31:32

So, pros and cons: there are things you can see from a top-down view over the trees that you can't see as a farmer, or any person, on the ground. So this could be used in conjunction with the handheld phone's close-up picture of a leaf. But obviously drone monitoring of any kind of farm is a larger investment, so this is just a pilot program they've been running. We'll put a link in the show notes to the interview with those folks; I thought it was really interesting. So yeah, there's a lot of investment being made in AI to help the farmers grow and get the full value of their beans. I think that's cool.

Kiran V: 32:18

Wow, yeah. And we haven't even gotten to Starbucks yet.

Andy B: 32:22

Yeah. And I don't know if you guys remember this, but when we worked at Figure Eight, one of our clients was actually one of the first applications of the semantic segmentation tool we were working on. Side note: semantic segmentation just means you're painting on an image, color-coding it like paint-by-numbers, so that you can train an algorithm. The image was an aerial view of a paper plate with a bunch of semi-wet beans on top, and we were counting how many beans there were and grading the quality of each bean with the semantic segmentation, so that somebody could build a quality control algorithm for the beans at the point of sale from the farmer to the bean aggregator, so that they could price, grade, and value beans more fairly and evenly across different geographies.
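The counting half of that task can be sketched without any ML at all: given a segmentation mask (here a hand-written grid where 1 means "bean pixel"), counting beans is just counting connected clumps of 1s. In a real pipeline the mask would come from the segmentation model, not be typed by hand:

```python
# Toy version of counting beans in a segmentation mask: each 1 is a
# bean pixel, and connected clumps of 1s (4-connectivity) are counted
# as individual beans via flood fill.

def count_beans(mask):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    beans = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                beans += 1
                stack = [(r, c)]  # flood-fill this one bean
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] == 1 and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return beans

plate = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0],
]
print(count_beans(plate))  # 3
```

Grading quality per bean would then look at each clump's pixels (size, color, shape) rather than just counting them.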

Joe C: 33:18

Yeah, that makes sense, and it sounds like something that could take the subjectivity out of bean quality. It could help with more consistent pricing and quality.

Kiran V: 33:33

Yeah, and as I mentioned, there's probably a fair amount of corruption when we're talking about goods coming from loosely regulated countries and mass amounts of money being exchanged. Definitely some sort of corruption and schmoozing and nepotism here, I'm sure.

Andy B: 33:52

And will the AI solve that? Well, it depends how it's deployed, right? You could just as easily make an "objective" quality control algorithm that just tells everybody their beans are shit, here's two cents a pound, where in actuality a human might have said, these beans are great, here's twenty cents a pound. And there's a lot of corruption whether there's AI there or not. But it's one tool in the toolkit; you know, the six ethical people left at Starbucks's corporate governance who are trying might be able to use AI to drive consistent payment to farmers.

Joe C: 34:30

It'll be interesting to see how something like that handles a range of quality. You might have two different models: one that's better at predicting quality where good means more acidic, you know, and one that's better at predicting sweetness where quality is assessed on sweetness. Or is it just one model across the whole globe that everyone's using to say this is a good bean or not? Room to grow here.

Andy B: 35:01

Yeah, enough about agriculture; I've taken up more than my time. Let's talk about making coffee. So the beans get brought in, and in the case of Starbucks, I don't know exactly where Starbucks roasts all of its coffee beans, but as I understand it, very few of them are roasted at the coffee shops. They have, like, regional coffee roasting centers where the beans are roasted en masse and then sent out to the different coffee shops.

Joe C: 35:33

They must smell so good, those centers.

Andy B: 35:36

Oh yeah. If you've ever been in a coffee shop that roasts their own coffee, it's delicious. Try to find one in your neighborhood.

Kiran V: 35:43

Sightglass does that in San Francisco, right? Yeah, and you can see them; there's a big barrel with little paddles turning the beans. It's cool, and it smells really good.

Andy B: 35:55

So I was born in a small, poor, third-world country that at the time was called Yugoslavia, and my parents just recently moved back. My mom just texted me a picture: you can buy raw coffee in a store there because it's cheaper. And you used to have, you know, fireplaces, this kind of old-timey thing, and you'd have a hole in the back of your fireplace where you could put in this metal rod with a handle and a round mini-barrel on the end, and you could quickly fire-roast your own beans to order.

Joe C: 36:26

Oh my God, yeah it sounds so good, I want to make some coffee now.

Andy B: 36:32

Yeah, you can. Also, there are lots of different ways you can mess around with home-roasting coffee. But most people just get locally roasted beans, or you can get roasted beans at, you know, Costco; you'll see them vacuum-packed, et cetera. And roasting is currently an art, not a science. Think of it like making whiskey: there's a master distiller who uses a wooden bead to knock on the still and say, now, now the whiskey mash is ready. When you're making wine, you've got a master vintner, or winemaker, who says now the mash is ready, or I'm going to blend exactly like this. Same situation with coffee.

Andy B: 37:16

Coffee roasting is a skilled practice because batch size matters: roasting a small, family-sized amount of coffee is very different from roasting, in a commercial big-factory setup, an amount of coffee for distribution, like the Starbucks bags going to Costco, for example.

Andy B: 37:36

Somebody is watching heat be applied to the coffee bean and then deciding this is the right level of roast. It matters because you get different tastes from different roast levels. Achieving a roast level isn't just about looking for a certain color; it has to do with how fast the bean got to that color. Was it high heat for a short time, or medium heat for a medium duration? You can still get to the same color, but the heat application changes the chemicals in the bean and changes the final cup of coffee. So again, you can roast coffee and then say this roast went well or this roast went poorly, and then decide how you brand and market it. I don't know if you guys remember this from our days at Figure 8, but we actually worked with a dog food company who would just make dog food, then use the same semantic segmentation tool we used for the coffee beans to count how much meat, vegetables, and gravy it ended up making. Then, after they made the dog food, they would decide whether this batch was their premium brand, mid-tier brand, or budget brand.

Joe C: 38:54

Oh wow, I remember this.

Andy B: 38:56

Yeah.

Joe C: 38:56

It was very strange to go through and essentially coloring-book chunks of meat.

Andy B: 39:02

It's weird to think that they don't set out to make a particular recipe, but rather they wing it and then they figure out how they're going to sell it.

Joe C: 39:13

I just wanted to say, in that use case, we were labeling the data to train a machine to eventually do the detection on its own; the plan wasn't for it to always be a process of humans assessing that ratio.

Andy B: 39:27

Yeah, so there was a paper from June 2022 titled Coffee Roast Intelligence. I didn't see it published in any journal; I just found it on arXiv, and I couldn't even read the entire paper because the link was broken, so I don't know how solid this is. But at least one group was working on an Android app that identifies the color of coffee beans by photographing or live-streaming video of the roast to an upload server. The application displays text showing what level the coffee beans have been roasted to, as well as the percentage chance of the class prediction. Basically, you can imagine holding up your phone to the vat where the coffee is roasting, and in real time you're seeing little boxes, probably following individual beans around, and then it'll say something like dark roast, 80% confidence.
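[Editor's note: a toy sketch of the kind of output Andy describes. This is not from the paper; the roast classes and scores are made up. A classifier typically emits one raw score per class, and a softmax turns those into the probabilities an app would display as "dark roast, 80% confidence".]

```python
import math

ROAST_LEVELS = ["green", "light", "medium", "dark"]

def softmax(logits):
    """Convert raw per-class scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return (label, confidence) for the highest-probability roast level."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return ROAST_LEVELS[best], probs[best]

# Made-up scores, as if produced by a vision model looking at one bean.
label, conf = classify([0.1, 0.5, 1.2, 2.8])
print(f"{label} roast, {conf:.0%} confidence")
```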

Kiran V: 40:32

I wonder if we're going to see a Twitch live stream of this one day.

Andy B: 40:36

Maybe. I hope so.

Joe C: 40:38

Billions of views. I think the color detection could be complex too, because the lighting could be different based on where the app is used. So you would have to account for that in how you train the detection, or even for the background of the machine that's doing the roasting.

Kiran V: 41:00

Yeah, whether it's a light-colored barrel or a dark-colored barrel. Yeah, it could be hard to get right.

Andy B: 41:06

And, to be honest, I thought the reason that paper was the only one I could find is that it's kind of a nonsense toy idea. What's actually happening in commercial coffee-roasting operations is they're roasting thousands of pounds of beans at a time and they have everything instrumented: they're measuring the chemicals being off-gassed, the heat, the sound (there are microphones in these things), and they're feeding tons of data from these sensors into a computer that shows it to a master roaster. And they're using older types of machine learning, called regression, to do much more sophisticated prediction work. Doing the fancy computer-vision deep-learning stuff from your phone is most likely for home applications. On the commercial side they're doing much larger-scale, much more fine-grained data collection, and they're essentially doing chemistry.
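[Editor's note: the "older machine learning" Andy mentions is plain regression over sensor readings. A minimal sketch, with entirely made-up numbers: ordinary least squares mapping one hypothetical sensor reading, drum exit temperature, to a measured roast-darkness score.]

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Fabricated readings: hotter exit temperature -> darker roast score.
temps = [196.0, 203.0, 210.0, 218.0, 225.0]
darkness = [2.1, 3.0, 3.8, 4.9, 5.7]

a, b = fit_line(temps, darkness)

def predict(t):
    """Predicted darkness score for a given exit temperature."""
    return a * t + b

print(f"predicted darkness at 215 C: {predict(215.0):.2f}")
```

Real roasting systems would regress over many sensors at once, but the idea is the same: a simple, interpretable model rather than deep learning.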

Joe C: 42:13

You would need to; with all those beans passing through, there's just no way to do it by eye.

Andy B: 42:17

Exactly. I did not have time to dig deep into how Starbucks does AI at this stage, maybe because I just ran out of research time, but I guarantee they're collecting a volume of data from all their coffee-roasting centers that would let them train models to try to reproduce certain effects. There's no way they're not doing that. So that brings me to Deep Brew. Does one of you guys want to explain why Deep Brew is a really funny name for Starbucks' machine learning platform?

Joe C: 42:57

Well, the things that come to mind are that Google's big AI wing is called DeepMind, I believe, and then obviously you brew coffee. I'm just wondering if there was a little play on words from those two ideas.

Kiran V: 43:15

So I think there's that. And then the other one is that these machine learning models are called deep neural networks. The only reason they're called deep is that you have more than two layers of neurons: an input layer, a bunch of hidden layers, and then an output layer. They call it deep because there are many layers.
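[Editor's note: a tiny sketch of Kiran's point, with made-up weights. "Deep" just means the input passes through more than one stacked layer of weights before producing an output.]

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a tanh activation."""
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def forward(x, layers):
    """Pass the input through each (weights, biases) layer in turn."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Input (2 values) -> hidden layer (3 neurons) -> output layer (1 neuron).
# Add more hidden layers to the list and the network gets "deeper".
net = [
    ([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]], [0.0, 0.1, -0.1]),  # hidden
    ([[0.7, -0.5, 0.2]], [0.05]),                                # output
]
print(forward([1.0, 2.0], net))
```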

Andy B: 43:37

Yeah, and Deep Blue is also a famous chess computer, one of the first supercomputer AI systems. It was released in 1995 as a prototype by IBM and became famous for playing chess. It's called Deep Blue, so: Deep Brew.

Kiran V: 44:01

And that's the one that Garry Kasparov played against, right? It was Deep Blue.

Joe C: 44:04

Yeah, exactly. It speaks, maybe, to the awareness Starbucks has about AI as an industry, and that it wasn't just a random selection. Maybe Satya Nadella?

Kiran V: 44:19

his influence coming in hot.

Andy B: 44:21

Well, as far as I can tell, Deep Brew was named at least one year before Satya Nadella joined the board, because I typed "Starbucks tech talks" into YouTube and found a little bit of industry gossip. All the beautiful nerds that we are that work in tech, we go to conferences and brag about what we're building, and we don't always check with the higher-ups about what we talk about. So there's going to be some back-end developer, really proud of some ETL data pipeline that he built, giving a talk that got recorded, showing a slide of the machine learning infrastructure that Starbucks has. Guess who has a picture of that slide? This girl.

Joe C: 45:03

Nice AI FYI.

Andy B: 45:06

Yeah, so nothing secret; I'm sure it's all on YouTube. But they started building a machine learning platform, and I'm going to explain what that is for a second. A lot of large enterprises have built what they call a machine learning platform, which is essentially an application, or a collection of applications, used internally to pass data between all the services that are necessary to make the AI work. I actually saw a program manager at Starbucks show a slide at a tech talk with a whole bunch of really big gray blocks and one little small box in the middle, which was the actual ML code. Around it was the data warehouse to store all the sensor data, the import and export services to move the data around, the thing to execute training jobs, and so on and so forth.

Kiran V: 46:08

Plus all the technical details, the infrastructure code, just to have these things running somewhere on some cloud computer.

Andy B: 46:16

So Starbucks has their own machine learning platform, and they started talking about it loosely in public around 2016 or 2017. In 2020, they actually wrote a post about it on the Starbucks corporate blog; we can link that in the show notes so people can look at it. And I screenshotted some slides from a talk somebody gave on some of the things they do with Deep Brew. For example, they can predict your behavior when you're ordering via their app incredibly well. When you're using the mobile app, I don't know if you've seen this, Joe, but they actually recommend drinks and customize the promotions to you. As far as I can tell, they can really accurately predict the next four drinks you're going to order.
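[Editor's note: a speculative sketch of the simplest version of "predict your next drinks". Starbucks' real models are surely far richer (weather, time of day, promotions), but recommending a customer's most frequent past orders is the classic baseline for repeat-purchase behavior. The order history is invented.]

```python
from collections import Counter

def top_next_drinks(order_history, k=4):
    """Return the k most frequently ordered drinks, most frequent first."""
    counts = Counter(order_history)
    return [drink for drink, _ in counts.most_common(k)]

history = [
    "latte", "cold brew", "latte", "PSL", "latte",
    "cold brew", "mocha", "latte", "PSL",
]
print(top_next_drinks(history))
```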

Joe C: 47:08

Wow.

Andy B: 47:10

If I'm reading this slide correctly.

Joe C: 47:12

I wonder if that just consumes their previous behavior, or other factors too, like, I don't know, where they live, the weather.

Andy B: 47:22

So one of the applications they've built with Deep Brew actually uses AI to determine the ideal locations for Starbucks stores. They put in a whole bunch of data about different neighborhoods, cities, et cetera, and then find the most profitable locations to place stores, which is how you might end up with two Starbucks at the same intersection in a crowded place: they'll have data showing there will be so many people wanting coffee that they can have two Starbuckses across the street from each other, each selling enough coffee to be profitable.

Joe C: 47:58

Yeah, I have seen that and kind of wondered.

Andy B: 48:02

They also have huge supply chain complications. Starbucks makes a lot of their own ingredients, some of their own syrups, and obviously they roast the coffee in these big central facilities, and they have to get it all to the 36,000 stores. The stores don't have infinite storage room, so they have to predict what's going to be ordered at each store and then send the right things to the right facilities at the right time. They're powering a lot of that with AI.
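[Editor's note: a hedged sketch of "predict what's going to be ordered at that store". This is not Starbucks' actual method, just the textbook baseline such forecasting systems build on: exponential smoothing, a weighted average of past demand that favors recent days. The sales numbers are made up.]

```python
def exp_smooth_forecast(history, alpha=0.3):
    """Forecast the next value as a recency-weighted average of the history."""
    level = history[0]
    for obs in history[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

daily_psl_sales = [40, 42, 39, 45, 50, 48, 52]  # invented daily counts
forecast = exp_smooth_forecast(daily_psl_sales)
print(f"stock roughly {forecast:.0f} PSL servings for tomorrow")
```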

Kiran V: 48:31

Yeah, and you know that I'm going to be cranky if I don't get my PSL on time.

Andy B: 48:36

Yeah, exactly. So I think that's a really interesting use case, all the supply chain management; they've done a lot of predictive analytics with AI there. They also run automated A/B testing on the mobile app with their machine learning platform to test how they can change your behavior; I mean, they talk about it in their slides. They try to use predictive AI in their digital menu boards, including the digital board at the drive-thru, to upsell and cross-sell you, you specifically, more coffee. They can customize the digital menu board for every location based on what's trending at that location to try to increase the transaction size. They haven't said this, but my guess is their goal is to get you to come in more often and, every time you make a purchase, make that purchase a larger dollar amount.

Joe C: 49:40

I wonder if there's a connection to the customer's mobile phone, so that when you drive up to the store, the order board or the menu changes for you specifically, not just tailored to anyone who visits that store.

Andy B: 49:56

I haven't found evidence of them doing that yet, but a lot of their talk track is about personalization: how do they treat every individual customer as an individual? And, hot take, they're not doing it because Starbucks cares about how you are actually doing. Starbucks cares about their shareholders. They want you to spend more money at Starbucks, and they can do that by selling you more coffee and talking you into buying more coffee.

Joe C: 50:24

I had actually read, and I'm sorry if this buries one of your points, Andy, that a better customer experience is going to lead to more money coming in, and one of the ways they use AI to provide a better customer experience is actually ordering the sequence in which they make the drinks coming in from customer orders. That's all in an effort to make sure you get your drink hot and ready at the time you pick it up, which is hard to do when you have so many orders coming in at once. It'd be hard for an individual to do that.
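[Editor's note: a speculative sketch of Joe's point about sequencing drinks so each is still hot at pickup. One simple policy, not Starbucks' actual algorithm: start each drink at (pickup time minus prep time), earliest required start first, using a heap. The orders are invented.]

```python
import heapq

def plan_sequence(orders):
    """orders: list of (name, pickup_minute, prep_minutes).
    Returns drink names in the order the barista should start them."""
    heap = [(pickup - prep, name) for name, pickup, prep in orders]
    heapq.heapify(heap)  # min-heap keyed on latest viable start time
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

incoming = [
    ("drip coffee", 5, 1),   # must start by minute 4
    ("frappuccino", 6, 4),   # must start by minute 2
    ("latte", 4, 3),         # must start by minute 1
]
print(plan_sequence(incoming))
```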

Andy B: 51:02

They talk about their AI platform and AI tools as a copilot a lot. Copilot is a term used in the tech industry for an assistive AI that does some work along with a human.

Kiran V: 51:17

And there's one called Copilot, by GitHub, which developers use to have a machine write code for them while they're creating their own code.

Joe C: 51:27

But then a human goes in and basically says is this right or not? Do I accept it or reject it?

Kiran V: 51:32

Hopefully. They should be.

Joe C: 51:34

Yeah, hopefully.

Andy B: 51:36

Great point. So are you guys ready for me to go full tinfoil hat and give you my theory? Let's do it. Okay. In every article where Starbucks talks about this, they go out of their way to say, we're not looking to replace people with AI. Methinks she doth protest too much. Okay: union-busting, labor-hating Starbucks, with the CEO of a big AI company on your board.

Kiran V: 52:08

Scary times ahead. Yeah, follow the money.

Andy B: 52:11

There are already robots that make coffee. I have bought coffee from them. At the SFO airport there's a robot you can go up to, and all it can make is like four drinks, and right now it takes longer than a person. But that is going to change. It's going to get better.

Joe C: 52:29

Oh yeah, the robot is also intentionally very cute; it waves at you. And I once watched a little girl go up to it at the SFO airport looking absolutely terrified.

Andy B: 52:45

That's cute. So, listen, here's what I think is going to happen: I wouldn't be surprised if, in the next couple of years, Starbucks puts out some stores that are all-robot and automated, just to test whether we, the consumers, really want to engage with a person. There's this complicated thing happening where social media and phones are breaking all of our brains and making us terrified of human connection, late-stage capitalism is isolating people from the very communities they live in, and millennials started this by being too scared to pick up the phone and call anybody. People kind of avoid people. If you're the sort of human being who has not gone to a grocery store in person in ages and gets all your groceries delivered through Instacart, and you don't even talk to the shopper, be for real with yourself: are you going to favor going to a Starbucks where you don't have to talk to anyone? You might.

Andy B: 53:47

The biggest thorn in Starbucks' side last year was human beings asking for livable compensation. Make no mistake: this company, which has billions of dollars of revenue a year, is not independent. Over 46% of their stock is owned by institutional investors. What that means is Starbucks is beholden to their stock price, and the stock price only goes up if their profits continue to go up and to the right. At a certain point they can only underpay farmers and baristas and everyone in the middle so much; they're going to hit a stopping point, and then they're going to try to figure out how to cut out humans.

Andy B: 54:40

That's my conspiracy theory, though I don't think it's much of one: there is absolutely some group at Starbucks exploring completely digital, robot-barista experiences. And, you know, people vote with their dollar. It could do really well, it could do really poorly; it depends on whether you value your local Starbucks. Starbucks leadership will actually be at conferences, and on YouTube, saying things like, oh, Starbucks is one of the last third places left in the nation, people really value Starbucks as a place to connect with the community. I hope that's true, but we shall see.

Kiran V: 55:31

Amazon has done this, right? They opened their brick-and-mortar stores and it's fully automated: you just walk in, there's a camera tracking you and all the things you pick up, and you just walk out. I haven't really checked, but I'm curious how that has gone, because I went into one of those stores and thought, I don't really like being here, for some reason. I don't know why, because it's not like I interact with store clerks when I go to the grocery store, but it felt a little dystopian.

Kiran V: 56:06

It was a novel thing, so there weren't a lot of people there. I walked in, and I was the only one in the store, and it was so weird. I thought, I can't come back in here, this is too much for me. But 50 years from now, who knows? I could totally imagine everyone being used to it: all the stores are automated, you just walk in, get whatever you need, and come out.

Joe C: 56:32

For me, I think I'm someone who would kind of like this experience, this touchless experience of not interacting with people. But I know these experiences are being put out by large companies that maybe don't care about the health of humanity, like Starbucks or Amazon, allegedly, and so that would keep me away from them. I went into a similar store, Kiran, and I also felt weird about it, but only because I knew it was an Amazon store, not necessarily because of the actual experience itself.

Kiran V: 57:11

Yeah, yeah. For me, even if I go to McDonald's and they have those screens to order with no line, and there's a line to order at the counter, I'll actually go stand in the line, because I just don't like ordering on those screens.

Joe C: 57:24

I don't know why?

Kiran V: 57:25

Yeah, it's something.

Andy B: 57:28

And if you've been in a McDonald's recently, McDonald's is trying to get you to do your ordering in front of a computer; only the last little bit involves a human being. I'm of two minds. One part of me says: the more technological unemployment (AKA people being put out of work by AI) the better, because if we make human labor valueless, then nobody has to labor. This is the Star Trek post-scarcity: we all just go up to our replicator, say, one iced Frappuccino please, it appears, and then we go do whatever the heck we want. So that sounds great. But generally speaking, when new technology comes out, all the reward is reaped by the few, the wealthy, the powerful. I don't see the CEO of Microsoft, who's a board member at Starbucks, volunteering not to be a billionaire anymore. So it could go either way. It's a gray zone for me.

Andy B: 58:39

I wouldn't mind robots doing all the service work, as long as I had lots of opportunities to interact with other human beings. Right now I have to labor for a living. I have a day job that is often me being home alone in front of a screen, and my interactions with other people can be limited at times. Right now I'm luckily visiting my brother and sister-in-law, because my brother has a home office that I get to use for this. But if all the service work gets replaced by robots, I really want to have opportunities to just meet random people. Even if it's just a hi, good day, can I have a coffee? It's a meaningful interaction for a lot of folks.

Joe C: 59:20

Yeah, I particularly love third spaces too. Actually, one of my gripes about San Francisco, surprisingly, is that it has very few coffee shops you can just hang out in, as compared to other cities I've been to, particularly internationally. But I think that's also not really profitable just to have people hanging out and not like consuming and leaving, so that's unfortunate.

Andy B: 59:46

And because everything's ruled by profits, you can't just do what's right for the people, you have to do what's right for the shareholder.

Joe C: 59:52

Yeah, I learned something interesting I wanted to share. If you were shocked that Starbucks is low-key an AI or tech company, you may be shocked that it's also low-key a bank, I think. So, the rewards program: all those folks are paying through the app. You can actually put money into the app, and you have to use that to make payments through it, and you can only load it in increments starting at $25 or above, and you can't get that money back out; you can't turn it back into cash. And I think I read that in 2023, at one point, the whole program held $1.8 billion of money just sitting in there, which—

Kiran V: 1:00:37

They're definitely investing that.

Joe C: 1:00:39

Yeah, they're probably making interest on it, and that's what banks do. I was talking about this episode with my sister-in-law last night and she linked me an article.

Andy B: 1:00:49

Right, there might actually be a class-action lawsuit about that $25 minimum or something. You can change it, but they make it very hard in the app, and the drinks are priced so that you always have to top up with more money. The pricing and the increments are such that you always have extra cash in your Starbucks account. Why? Because they're making interest off of a billion-plus dollars of your money sitting around.
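[Editor's note: a back-of-the-envelope sketch of why that float matters. The $1.8 billion figure is the one cited earlier in the episode; the 5% interest rate is an assumption for illustration, not a reported number.]

```python
def annual_interest(balance, rate):
    """Simple interest earned in one year on an idle balance."""
    return balance * rate

float_held = 1.8e9  # ~$1.8B reportedly sitting in the rewards program
earned = annual_interest(float_held, 0.05)  # assumed 5% rate
print(f"${earned / 1e6:.0f}M per year at an assumed 5%")
```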

Joe C: 1:01:17

Yeah, crazy. A coffee chain. What a world we live in.

Kiran V: 1:01:24

So, something I did want to bring up, less of the doom and gloom of AI and Starbucks and coffee generally, is something we talked about earlier: flavor profiles, and digitizing flavor profiles. I could imagine, once we're able to do that effectively, and in a way people understand, you could actually go and order coffee that's, like, peach coffee, and it just tastes like peaches. We could change the flavors of the coffee and be so precise about individual beans that have different flavors, even coming from the same plant. That could be a really cool future: here are all the different flavors we have, and it's not just some random South American roast with hints of this and that; it's, no, this is jasmine coffee and it tastes like jasmine flower.

Andy B: 1:02:20

And I came across an article: there's a huge beer manufacturer in Northern Europe called Carlsberg, and they actually developed an algorithm to predict the flavor of an upcoming beer from mixing different strains of hops. They could look at the genetics and the flavor profiles of the hops going in and predict, before brewing, what the beer was going to taste like. So they're wasting less and making better beer.

Joe C: 1:02:47

So yeah, it gets into, I guess, like genetic modification, which no doubt has AI involved in it.

Andy B: 1:02:53

Yeah, anyway. Our goal was to only talk about Starbucks and AI for 30 minutes, and here we are at an hour. Who wants to do an outro? I think people are sick of listening to us talk about coffee.

Joe C: 1:03:05

I'll go for it, take it away. Yeah, thank you folks. We are trying to shorten our episodes, but we get so excited, and there are so many tangents and crossovers into other topics. So, yeah, that's it for this episode. We really appreciate you listening. Please like, subscribe, and share us. We can be found on all the different podcast platforms, so please go rate us and give us stars where you can, and stay tuned for more fun and interesting episodes on how AI works. We'll see you next time. Bye.

Kiran V: 1:03:38

And send us an email at aifyipod@gmail.com if you want to request things, or leave comments on all the socials where you're listening, so we can make sure this is interesting for you all.

Joe C: 1:03:52

Definitely have a great one, see ya.

Andy B: 1:04:11

Bye.

 was the work of the AI, and I just thought that was a really nice way of showing collaboration between AI and people and what could be possible.

Kiran V: 1:12

Classic human in the loop in sci-fi.

Andy B: 1:15

Exactly. And, quite frankly, I think TARS is the hero of that movie. He's the first one to suspect that there's some shady stuff going on with the dude. I don't want to spoil the movie, but if you haven't seen it by now, I'm sorry: he's the first one to suspect that there's shady stuff going on with the destination planet. He basically shoots himself into a black hole, volunteering to collect data, knowing he might not come back, but he was self-aware enough to know he's a robot and doesn't feel pain the way people do. I just thought it was really cool. That's my dream; I'd love to have a TARS.

Kiran V: 1:52

Yeah. So another one that I think is human-like but doesn't have a human form is Jarvis from Iron Man, Iron Man's assistant. He comes into his shop and he's like, Jarvis, I need this crazy proton-collider thingy, and then Jarvis has these arms in the shop that just do a whole bunch of stuff. But when he interacts with Jarvis, Jarvis talks back to him like a human. And I think that's something we almost take for granted. People who aren't familiar with AI, and with the complexity of it, take that ability, like, oh, we can actually instill these human values into a robot, for granted. Like, yeah, AI does that. But the reality is we're so far away from that. Obviously, with the rate of acceleration of technology, maybe it's closer than I'm thinking, but to me it's at least 50 years before we would have any sort of sentience in our robots.

Andy B: 3:04

And it's interesting: we're actually a lot closer on the physical parts than we are on the mental parts. There are videos online where you can see Boston Dynamics, or whatever that company is, operating these robots with many arms. Factories have been doing that for ages.

Joe C: 3:23

I think, sorry, pause, I feel like we're way off topic.

Kiran V: 3:28

I think it's fine. We can always cut this stuff out.

Joe C: 3:31

OK, yeah. I just realized we're kind of talking about sci-fi and robots and not really space, but we can continue, you know.

Andy B: 3:44

Well, yeah, no, well, we have to start in the world of the possible and then bring it back down to the realistic. What Kiran just said makes sense, right? We have big dreams as a species for how AI is going to help us explore and live in space, and then the reality is very, like, doon, doon, doon. We are nowhere near our big dreams, really.

Joe C: 4:27

Hi everyone, and welcome to AI FYI, where we talk about the good, the bad, and the buggy of AI. We're here to demystify AI and talk about all things AI out in the world today, which is a lot. I'm Joe, and we have with us Kiran and Andy. Say hi! Hi. Hello, hello. So we are AI experts, we work in the field of AI, and we're also just lovers of technology and, let's say, AI hobbyists. Today we're going to be talking about a really fascinating subject: space, and how AI is being used in the space industry and in all the endeavors humanity is taking to get off the planet and explore the stars. So why is space cool, guys? Why are we talking about it?

Andy B: 5:17

It's so big. It's the biggest thing we know of. There's so much of it.

Kiran V: 5:21

Yeah, space is massive and I remember when I was younger, my dad got us a telescope and we just started looking at stars and you very quickly realized the vastness of space.

Joe C: 5:34

It is wild; it could be infinite. And I think right now we're in a very interesting time for space exploration. Certainly in the past decade, maybe even longer, space has become more commercialized. There are tons and tons of satellites in orbit, and we're going to talk more about that, and technology is taking us further than we've ever gone before. So those are some of the things we're going to be talking about.

Andy B: 5:59

It's also that space is a big part of the human imagination. Humans have been low-key obsessed with the stars since, as far as we can tell, before we were even fully human. And when you said space is being commercialized, I thought, isn't that kind of a myopic take? Because it's only our very narrow slice of space that we can currently reach. Our imaginations, in movies and TV shows, so to speak, go far beyond what we can reach, to what could be out in space.

Joe C: 6:30

Absolutely. In trying to understand this topic more, I kept coming across sci-fi and all the ways we've imagined being in space and how that's pushed us forward. In fact, let's talk a little bit about space and AI in media, because there are a lot of examples.

Andy B: 6:50

Where my head immediately went was Foundation, and I'm not talking about the Apple TV show, but the original Isaac Asimov books. If you guys remember, I used to have the Isaac Asimov guide to Shakespeare on my desk at work when the three of us worked together. So Foundation is a series of books; I think they're kind of mediocre, but they're interesting to read, and they depict a very distant, very far-reaching space humanity. I don't know if you guys have read them.

Andy B: 7:22

Basically, in the first book, essentially a mathematician predicts that the empire, which covers like 40,000 planets of humans, will collapse, and he establishes a foundation that persists through generations to host human knowledge while the empire collapses. What he predicts happens. So the book series ranges over thousands, tens of thousands, of years of humanity's history out in space. But he wrote three books, took a long break, and then wrote two more, and the second two are basically all about AI, because some of the main characters end up on planets where humans have changed so much that they've basically become co-existent with robots.

Kiran V: 8:16

So it's not like a bionic thing; there are full robots and humans, kind of like Blade Runner?

Andy B: 8:23

I'm trying to remember exactly what happened, but I want to say there were even robots doing genetic engineering on humans to make them more durable and less human-y.

Joe C: 8:36

Dude. Would you say the AI allows them to exist in space, or exist off Earth? Well, I guess Earth doesn't exist in that world, but off in deeper space.

Andy B: 8:48

I think in so much of the really high-fancy sci-fi, AI is just woven into people's daily lives.

Kiran V: 9:00

It's almost like it's expected in those civilizations, or those realities. The one I think of is Blade Runner, and within Blade Runner there are different tiers of AI as well: the AI that's kind of accepted to live among the humans, and then the replicants, which they're trying to get rid of. So it's interesting that the concepts we have in humanity, of classes, different economic or social classes, are also replicated in that AI civilization. It's interesting how we parallel humans and technology even though we're so different, and I think that's something you see a lot in science fiction.

Andy B: 9:58

I was just thinking about the Jetsons. You remember the maid in the Jetsons was a robot, and she was a little sassy. I mean, it makes sense why we think of all the things we don't want to do as things we would offload to robots. That makes sense. But they actually put her in a little maid outfit, which is completely unnecessary.

Joe C: 10:16

I want to talk about Dune, because it also has a very interesting way of handling AI. Apparently the author thought that AI, in human history or future history, would be an eventuality, that AI would be interwoven into our world, and he actually starts his books in a post-that world, where AI got so powerful that it was banned in a religious way. A lot of things we see in Dune have come about because thinking machines were banned. There was apparently a great jihad, a war, 10,000 years before the books start. That's something I didn't really see in the movie, but apparently it's talked about more in the books.

Joe C: 11:06

Sorry, pause: I feel like we're way off topic. Yeah, that's true. I think anything that happens in space, robots are going to be really important. We have a few other examples. I feel like there are many times where there's some robot companion, or it's part of the spaceship, and it's essential to get the humans from point A to point B as they travel through space. This is a key plot device in 2001: A Space Odyssey; spoiler alert, the AI is evil and kills some people. Then of course we have the droids in Star Wars, and even in Star Trek, Data is a machine and part of the crew. And I wanted to mention from Star Trek, too, that there's the spacefaring Borg species, which is basically a collective-colony AI situation, and part of their being machines has allowed them to live in space.

Andy B: 12:17

And what the Borg do is wild. Kiran, I assume you have not seen Star Trek. This is like the best bit of Star Trek as far as I'm concerned.

Andy B: 12:25

The Borg attempt to assimilate. What they mean is they try to find sentient biological creatures and make you one node in the AI, and every sentient creature that's added adds to, let's call it a network, a neural network, of what the Borg as a collective becomes. And there are two interesting arcs. There's a sentient creature that becomes a Borg, then becomes disconnected from the whole, that they interview in one of the episodes, and that conceptualization of this AI. And then there's what I think is the best episode of Star Trek ever: spoiler alert, Picard becomes a Borg, and he is freed by his crew, and he decides to retire. The next episode is him dealing with his trauma of having been part of an AI, on his family's vineyard in France. It's not in space at all, right, and it's not showing any technology. It's him walking through fields. But that's where they have some of the best conversations about what it is to be human in a world of intense technology and AI.

Joe C: 13:36

Yeah, we don't talk about Star Trek enough, and probably a lot of Trekkies would say that there's a lot of philosophy and technological imagination there.

Kiran V: 13:46

I know I think I missed the Star Trek train growing up, so I might need to go through the infinite backlog of Star Trek episodes and movies.

Joe C: 13:56

Me too, I would say.

Andy B: 13:58

Yeah, there's so much.

Kiran V: 13:59

I think, yeah. So to sum up this last part, I think we've seen there is no shortage of examples of AI in media, and it's extremely prevalent in sci-fi; in any sort of sci-fi anything, you're going to have some sort of AI, whether it's explicitly stated or not. Beyond Star Trek, there are also plenty of examples in cartoons; in SpongeBob, Plankton's wife is a robot. There's also Hitchhiker's Guide to the Galaxy, and I don't know if you guys remember, but that's the one where they go on this long journey to find the answer to life, and they get to this AI that has been computing the answer to life for some thousands of years, yeah.

Kiran V: 14:49

Yeah, what is the meaning of life, the universe and everything? And the answer is 42. And it's just funny, because this is an AI, and if you guys haven't seen it, it's basically this giant box, right? It just looks like a big building.

Andy B: 15:07

Yeah, In the movie it's this big rectangle with a smile on its face.

Kiran V: 15:11

Yeah, and it talks to them, so it has that human characteristic. But when it's trying to give an answer to life, which we as humans are trying to figure out anyway, the computer just computes it as a value. Because at the end of the day, when we think about neural networks and how they're implemented, it literally is just a series of numbers. So when the answer comes out, it's like, yeah, that makes sense to the machine, because that's what it computed the value to be.

Kiran V: 15:39

So it shows, you know... I think there's a point where it's talking and you think it's a human, and then suddenly you realize, wait a second, this is a machine, and it has limitations. And I like that they put that into the movie, because they could have easily had the computer give some long spiel about the meaning of life, but instead it's just, oh, it's 42.

Andy B: 16:04

That's the answer, and it's very confident. It's just like, no, I've thought about it for a while, it's 42. And the people who waited for millennia, many, many generations, who come to this platform to hear what the great AI says, they just riot. They're like, 42?

Kiran V: 16:25

And then the other one I wanted to talk about... well, there are a lot, but the other one is Westworld.

Kiran V: 16:32

So if you guys haven't seen Westworld, the premise is they have AI robots, humanoid robots, and this theme park kind of thing.

Kiran V: 16:42

So humans will go to this theme park and interact with these robots that live inside it. It's a Wild West theme, so they'll go on, like, a cowboy adventure; they'll ride a horse and go around, or they'll have interactions in a saloon with the AI. And again, spoiler alert, it turns out to be very, very dark. Behind the scenes of how they're creating these robots, they're very much growing human flesh; they're trying to simulate humans as much as possible. And it turns out that the people running the place are also robots. So then you get into this mind loop of, okay, wait, who is actually a robot now? Because people you thought were humans turned out to be robots. It's really trippy to see that play out over the episodes.

Andy B: 17:39

Would it be correct to say that, some space westerns aside, every time we make media about space or distant galaxies, somehow AI is involved? Human beings have not really made a lot of media where they've disconnected exploring our vast and infinite universe from using some sort of AI.

Joe C: 18:03

I think so, yeah. I mean, the example of Dune is post that world, but it was in there.

Andy B: 18:11

In a lot of people's minds, AI and robots are sort of synonymous with space and sci-fi. I think this comes back to what I said, that space is really big. People have some intuitive feeling that it's bigger than what we as a species can do, and that artificial intelligence can supplement our intelligence, our time, and our resources. So it makes complete sense that it's almost a trope that when you make any sort of media that relates to space, you're going to put a robot or some AI in it. And we'll talk about this next: most of space exploration is pivoting more and more toward AI. It's a job that's maybe better suited for robots than people.

Kiran V: 19:05

Okay, so we have all these examples of AI in space in media: a lot of sci-fi movies and TV shows that have come out over decades now, right? And this is not new at all. People have been thinking about this for such a long time, before the technology even existed in any capacity, right? So it's crazy that we naturally started thinking of technology as having some sense of human ability, in many different forms. So now, if we come back to real life, where did we really start when we talk about AI in space? I was actually shocked to discover that the first example of AI in space goes back to 1998, when NASA launched Deep Space One.

Kiran V: 20:05

Yeah, so that was decades ago now. What did it do? It was very rudimentary. They called it the Remote Agent, and the purpose was simply failure analysis on the probe itself. It's this autonomous system that was able to look at the machine's systems, metrics and stats and surface: hey, there might be a problem with the probe. Super basic. It wasn't talking to anyone. It wasn't really doing anything other than monitoring the systems on that spacecraft to identify any potential failures that might occur throughout the mission.

Andy B: 20:56

Was it an anomaly detection model?

Kiran V: 20:58

A software package that can predict aging and failure of materials, including those used in airplanes, cars, engines and bridges.

Kiran V: 21:07

So if I were to guess how this is implemented: they essentially have a number of metrics they're monitoring, based on material temperature or the speed of the spacecraft, probably not detecting things around it back then. Essentially it's probably a fairly basic model that has a number of input signals, which are the different stats of the systems on board, and then, based on events they must have seen in their testing, it identifies that, hey, this specific system is prone to failure, given examples we've seen in past testing. So while it's very basic, it wasn't doing anything like navigation or detection of other objects; it was simply monitoring its own systems. That's something a human would normally do, but with no human on board, you need the machine to take it over. So it's cool that even back then, they were using these systems on board. So yeah, the first ever AI used in space was on the Deep Space One mission.
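The kind of telemetry monitor Kiran is guessing at, a handful of input signals checked against what was seen in testing, can be sketched in a few lines. This is purely a toy illustration with made-up numbers, not how the actual Remote Agent worked (that system used model-based planning and diagnosis, not simple statistics):

```python
# Toy telemetry anomaly detector: flag readings that drift far from a
# rolling baseline. Illustrative only; numbers and channel are made up.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=10, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the preceding `window` readings."""
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                flagged.append(i)  # surface this to the ground crew
        history.append(value)
    return flagged

# Simulated coolant-temperature channel with one sudden spike.
temps = [20.0, 20.1, 19.9, 20.2, 20.0, 19.8, 20.1, 20.0, 19.9, 20.1,
         20.0, 35.0, 20.1]
print(detect_anomalies(temps))  # [11]: the spike is flagged
```

The point of the sketch is the shape of the problem: a stream of sensor values, a baseline learned from recent behavior, and an alert when something falls outside it, all without a human in the loop.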

Andy B: 22:40

I just found a slideshow from somewhere, by somebody named Ron Garret, from 2012, titled "The Remote Agent Experiment: Debugging Code from 60 Million Miles Away." And I'm scrolling through the 50 slides really quickly, just to see if there's any architecture information on the model, when I come across a slide that's just titled "The Fall of Western Civilization": RA downgraded from mainline flight software to a flight experiment; attempt to rewrite planner in C++ failed. This is the most honest truth I've ever seen about how AI gets implemented. Attempt to rewrite in C++ failed.

Kiran V: 23:20

Yeah, yeah, and that's it, right? These are just humans creating the system, and they probably thought, well, if something goes wrong when I'm 60 million miles away from Earth, I really can't do anything, so I need the system to at least be telling me that information. And think about the history of computing that got us here: we went from mechanical computing machines to actual electronic technology.

Andy B: 23:52

Which were the size of a room, for those who didn't know. I don't know if the fifties were the era of punch cards, but it's almost like how women used to weave fabric on Jacquard looms, with cards that had the pattern programmed into them. That's how you used to program computers: you'd hire a woman to put holes in pieces of paper that you fed into a machine the size of a building, for compute much, much, much worse than what's in your headphones today.

Kiran V: 24:23

Yeah, and the way that worked: if you think about bits today, a bit is a one or a zero, indicating some signal, right? That punch card was physically encoding every single bit. If you punch a hole, that's a one; if you don't punch a hole, that's a zero, or vice versa. And that's it: you have to punch out the whole program. For people familiar with something like assembly, this is a step lower than assembly; you're literally coding every bit in the memory bank, which is literally a piece of paper.

Andy B: 25:02

So maybe someday we can do an episode on the history of computing. We're talking about AI, and if you think about why it's so crazy that they got something that can be considered AI working on a spacecraft: you have actual compute hardware, you've got bits on it, you've got a language called assembly helping the hardware interpret the bits. You have these layers of abstraction. Artificial intelligence often runs on a layer on top of Python; Python runs on something called C. There's basically a crepe cake of many layers from many different people who had to make things, so that the AI you think is super sophisticated and intelligent, that can talk to you like a person the way ChatGPT tries to, will break if somebody messed up a semicolon in the 90s in a driver for a piece of hardware it needs to run on. That's wild.

Joe C: 26:01

This use case seems like it was done out of necessity, because we couldn't be there in space, and I think that's a common theme with everything AI is doing for, you know, furthering space. Necessity.

Andy B: 26:18

Yeah, I'll go on and talk a little bit about astronomy. So yeah, space is just too big for people. Space is very big. We sent these initial probes out, and we sent satellites with cameras and telescopes on them, to very slowly beam back and transmit the images they're taking. And those have gotten so much better. There's a big new space telescope that's been operating, re-shooting things, and you can see beautiful images from it. We're like, wow, this is beautiful; then we take another picture 15 years later and have 200 times more definition. It's like what happened to TVs in the 2000s: if you think about what you used to consider a high-def TV versus what's in your living room now, it's very different. That's been happening in astronomy, and there's so much data coming from these technologies that people have basically pivoted astronomy to be heavily machine learning driven, because there's no other way to look at that much data that quickly.

Kiran V: 27:31

And in May of this year, SETI, which is a, you know, space exploration... oh man, what is SETI?

Andy B: 27:44

I know SETI is the people who are looking for aliens. Oh yeah, the Search for Extraterrestrial Intelligence.

Kiran V: 27:51

Yeah, so in May of 2023, this year, SETI, the institute whose name stands for Search for Extraterrestrial Intelligence, announced that they actually discovered 69 new exoplanets. An exoplanet is just a planet outside our solar system. So they discovered 69 new exoplanets with machine learning. Essentially, they took telescope observations pointed all around the sky and, using AI, were able to detect the signatures of different celestial objects and determine that 69 of those were actual planets. And in case you all didn't know, there is an essentially uncountable number of objects in space, many of which we've classified and many of which we probably have no idea even exist.

Joe C: 28:49

Yeah, that number is probably going to go up every year from now on.

Kiran V: 28:53

So it's cool that we're able to do that automatically.

Andy B: 28:58

So what's actually happening, I'll take a second to explain, because I went deep on this for astronomy reasons. Traditionally, what you do is zoom in on a piece of the night sky and take a picture. Then you wait until the next night, or two nights later, and take another picture, and then you compare them one to one. Think about the scene in The Office: they're the same picture, actually, but find the differences. Somebody used to have to manually match, okay, this dot is that dot, this dot is that dot, and then see if dots moved, or if one that should have been there disappeared, which basically means it went behind something. Incredibly tedious, if you think about how detailed these images are.

Andy B: 29:43

So what they're doing with AI is actually relatively simple computer vision object detection. They take a bunch of frames over time and use an object detection algorithm to identify each point, give it an ID, track it, track changes, and then highlight to a human being where their attention needs to be, where there could be a lot of interesting stuff. Then an astronomer will go deeper: oh, the change in light here, based on what I'm being alerted to, could be a new pulsar; we've identified a new black hole, an interesting exoplanet. So what it's helping people do is handle all this data, way more data than all of us working together on Earth could ever go through. The AI is just sifting that information using object detection to say: okay, expert person, look here, look here, look here.
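The dot-matching Andy describes, detect points in each frame, associate them across frames, and flag the ones that moved or vanished, can be illustrated with a toy nearest-neighbor matcher. The coordinates here are invented; real survey pipelines use learned detectors and proper astrometric calibration:

```python
# Minimal sketch of cross-frame "blink comparison": match each detected
# point in frame 1 to its nearest neighbor in frame 2, then flag anything
# that moved more than a tolerance, or that has no plausible match at all.
from math import dist

def compare_frames(frame1, frame2, tolerance=1.0):
    moved, missing = [], []
    for p in frame1:
        nearest = min(frame2, key=lambda q: dist(p, q), default=None)
        if nearest is None or dist(p, nearest) > 5 * tolerance:
            missing.append(p)           # likely occluded or gone
        elif dist(p, nearest) > tolerance:
            moved.append((p, nearest))  # candidate moving object
    return moved, missing

# Two made-up "nights" of sky: three dots, one of which drifts.
night1 = [(10.0, 10.0), (40.0, 25.0), (70.0, 80.0)]
night2 = [(10.1, 9.9), (43.0, 25.5), (70.0, 80.1)]
moved, missing = compare_frames(night1, night2)
print(moved)  # [((40.0, 25.0), (43.0, 25.5))]: worth an astronomer's look
```

The human expert only ever sees the `moved` and `missing` lists, which is exactly the "look here, look here" triage described above.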

Joe C: 30:42

I have a related use case, something tangential to that. As you may have heard, we have a space garbage problem right now. And I say space garbage, but it's also just a whole lot of satellites; everyone's got one up there. One issue that's causing is problems for astronomy, because satellite trails and things in the near atmosphere are setting off these alerts and giving false positives on things astronomers should be looking at. So there was a use case I read about where they crowdsourced classifications and used machine learning to identify what is a satellite trail or a false object, which can then be taken out of the images the astronomers are looking at.

Kiran V: 31:36

Yeah, and this is actually just another reason we put telescopes in space: the atmosphere adds an extra layer of distortion when you're trying to view things out there.

Kiran V: 31:48

So this is why we launched the Hubble Space Telescope initially, and more recently the James Webb telescope. This is why we launch telescopes into space: so you don't have all of those things obstructing your view when you're imaging. But I think the things we're discussing here are actually examples of where AI is outperforming humans. When we're viewing the night sky and taking images of it, there is literally no physical way a human could do what an AI is doing today, so it's really cool that AI is literally allowing humanity to go further and understand more than we possibly could have without it. We're getting into that realm in certain places where the machines are better than humans, significantly better in some cases, which is, I think, really exciting, because it shows the possibilities of AI for humanity.

Andy B: 32:54

It must be a really incredible time to be an astronomer. A little stressful, because you suddenly have to become a machine-learning Python engineer to do your job; sorry about that. But on the flip side, they have way more information than even astronomers 30 years ago could have ever hoped to have, thanks to AI. I feel like they thought they were getting a faucet going drip, drip, drip, and then these big new telescopes came out and they're getting so much data it's like a river gushing by, and then AI comes through and actually sifts that river, the Amazon, the Pacific Ocean. All of a sudden, what they thought was an amount of data they could process, the drip from your sink, is suddenly an ocean of data. It's just so cool. They're going to find so many interesting things. I would not be surprised if we learn some shocking or terrifying or deeply disappointing things in the next 10 years in space.

Joe C: 33:53

Yeah, I think all that data is only going to grow every year. Very exciting, but also, like you said, potential for scary things, or lots of discoveries that might blow our minds too much. So we've talked about satellites and telescopes and how AI is powering their astronomy capabilities, but it's also worth mentioning that even to get them in place, or to operate them in space, AI is being used for crash avoidance and general navigation. You've probably heard that airplanes largely fly themselves these days. I'm going to guess that's the case for a lot of our space apparatuses too: unmanned craft probably have AI flying them, and I'm sure the ones crewed by astronauts also lean heavily on AI navigation.

Kiran V: 34:45

Yeah, the crash avoidance, right. We talked about space garbage, and in case you all didn't know, there's a lot of junk in space, including a Tesla Roadster with a mannequin in a spacesuit inside it. There's literally just junk floating around our Earth.

Andy B: 35:03

And some of it is atomic bombs. Not to scare anyone, but yeah.

Kiran V: 35:09

And so these satellites, as they're moving through space, are making maneuvers in real time using AI. Humans are not intervening to move these satellites around the world. And this is a 3D movement problem: think about driving an autonomous car, add another dimension, and you're moving at thousands of miles an hour. Okay, this becomes kind of a challenging problem to deal with.
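The core geometry behind that kind of avoidance decision can be sketched with a toy closest-approach check between two objects in 3D. Everything here is made up for illustration; real conjunction screening propagates full orbits rather than assuming straight-line motion:

```python
# Toy "conjunction screening": given two objects' 3D positions (km) and
# velocities (km/s), find the time and distance of closest approach,
# assuming straight-line motion over a short window. Numbers are made up.
def closest_approach(p1, v1, p2, v2):
    """Return (time_s, separation_km) of the closest approach, time >= 0."""
    r = [a - b for a, b in zip(p1, p2)]   # relative position
    v = [a - b for a, b in zip(v1, v2)]   # relative velocity
    vv = sum(c * c for c in v)
    # Minimize |r + t*v| over t >= 0: t* = -(r . v) / |v|^2, clamped at 0.
    t = 0.0 if vv == 0 else max(0.0, -sum(a * b for a, b in zip(r, v)) / vv)
    sep = [a + t * b for a, b in zip(r, v)]
    return t, sum(c * c for c in sep) ** 0.5

# Two objects on crossing paths: one heading +x, one heading -x.
t, d = closest_approach((0.0, 0.0, 0.0), (7.5, 0.0, 0.0),
                        (150.0, 1.0, 0.0), (-7.5, 0.0, 0.0))
print(t, d)  # 10.0 1.0: they pass within 1 km, ten seconds from now
```

If the predicted separation falls under some safety threshold, an autonomous system would schedule a small burn; the sketch only covers the "will we get close?" half of that decision.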

Andy B: 35:39

So what's interesting to me is, you know who's really good at moving in all dimensions and not running into things? Fish. How do fish brains work, and should there be an AI that tries to replicate fish brains?

Joe C: 35:52

I love it. I always think it's interesting when we arrive at an AI thing that replicates a natural thing. That's worth looking into.

Kiran V: 36:05

Yeah, maybe we can do an episode of AI that replicates nature.

Joe C: 36:09

Yeah, and Andy, I know you could talk about fish all day. You got a lot of fish facts.

Andy B: 36:15

Don't accuse me of things, but I have a lot of facts that.

Joe C: 36:17

I've spent.

Kiran V: 36:17

No, I love fish, I love.

Andy B: 36:19

Yeah, I spent the last weeks.

Joe C: 36:21

I love your fish facts.

Andy B: 36:23

Four hours at the aquarium the last week.

Kiran V: 36:25

Which aquarium? Love you some fish. I have a membership to Cal Academy.

Andy B: 36:33

And I live an eight-minute walk from it, so I just show my little card and just pop in Nice. Yeah, go look at fish All right.

Kiran V: 36:42

So yeah, another example of AI in space is something that's maybe a little more popular: Mars rovers. We've all seen these in science fiction as well, and I'm sure you've heard of Curiosity and Perseverance, two of the Mars rovers. Curiosity was launched in 2011 and Perseverance was launched in 2020. These robots have a mission to go and explore Mars, and particular places on Mars. They actually landed Curiosity in a crater and said, go explore. And this rover has tons of AI systems on board. It has to maneuver new terrain that we've never been to; we're sending a machine to do something that we've literally never done. It has to do analysis of the environment around it.

Kiran V: 37:42

So this rover needs to survey the area around it and determine what it should explore and what it should pick up, right? Obviously there are humans monitoring the system around the clock, but a lot of how the rover actually travels around is AI based. These are systems with cameras that control the mobility of the rover and get it over or around obstacles, all driven by AI. And the fact that we can send a machine millions of miles away to explore for us? This is like the Christopher Columbus or Magellan of robots.

Andy B: 38:33

But then again, of course, you can have self-driving little cars on Mars. There's no traffic.

Kiran V: 38:39

There's no traffic, but there's rocks the size of buildings that just show up in front of you.

Andy B: 38:45

Just go around it. That seems like a much easier problem to solve, somehow, than trying not to hit a pet.

Joe C: 38:52

Yeah, it does feel like our self-driving use cases on planet Earth are much more complicated. I wonder why we don't have more little robots driving around on other planets. Maybe the hard part is getting them there.

Andy B: 39:05

Yeah, actually, we have more robots driving around here. I know there are a couple of companies, like Starship and Kiwi, trying to do delivery robots, but why don't we have tiny robots driving around everywhere?

Joe C: 39:16

Probably because people kick them over and throw them in trees and stuff.

Kiran V: 39:20

Well, so I was in the hospital last weekend, and we were just walking through the hall, and a robot literally turns the corner by itself and drives straight toward us. Then it just stops, spins around and waits for us to move. As soon as we walk past it, it goes on its way. It's just a delivery robot.

Joe C: 39:42

That just goes around the hospital by itself? It had cargo? Okay, what was it carrying?

Kiran V: 39:47

Yeah, there's just a little sealed basket on the top, and it goes and stands outside a door and starts beeping. Then some lady opens the door, comes out, takes the thing out of the robot, and off it goes.

Andy B: 40:00

I have a story about this I have to tell, because it's incredible. When those robots in hospitals first started getting made, it was the best example of human-in-the-loop I have ever heard in my life. Basically, nurses waste a whole bunch of time: oh, the only closet that has this special doohickey that's got to go in a person's thingy is on, like, the fifth floor. So they have to get in an elevator, run up there, get the thing, and get back in the elevator. It's a huge waste of time. So the idea was, we're going to get robots to bring things to and fro in the hospital instead of people having to do it, so the people can stay where they need to give care, and the robot brings the stuff.

Andy B: 40:41

And in the very first prototypes of these things, they said, okay, it's really way too hard to get it to learn how to navigate, not hit things, and pick things up and put them down. So instead, just make it ask for help. Fuck it, just ask for help. And so there's a video online somewhere of the prototype robot going up to the elevator and waiting for a person to walk by: hello human, please press object button up.

Andy B: 41:07

And then the person presses up, and it goes in, and then: hello human, please place object cup in object basket. And the person does it, and it goes, thank you, human. So every time it needs hands, exactly like what you're describing, it waits for a person to come by and goes, beep beep, help me, I don't have hands.

Kiran V: 41:25

Yeah.

Andy B: 41:25

And the person does the hands thing. That's so cool.

Joe C: 41:30

I had no idea these were rolling around hospitals. That's great. Nice to hear a good AI example in healthcare.

Andy B: 41:37

Yeah.

Joe C: 41:40

Okay, so we've talked a lot about things happening off Earth, but I do want to mention a little bit of AI that's pointing back toward Earth from space. These are Earth sciences topics, and this has a lot of intersection with climate change and also agriculture. I saw two use cases where satellite technology is being used to look back at the Earth and map and predict agricultural and weather trends. A 2020 study, led by an international team of scientists and AI folks, used satellites to identify an unexpectedly large number of trees across semi-arid areas of Western Africa, which is something they didn't really see from the ground, and apparently they're tracking these trees.

Andy B: 42:29

Wait, did you say trees?

Joe C: 42:31

Trees, yeah.

Andy B: 42:32

Surprise trees.

Joe C: 42:33

Yep surprise trees, large swaths of them.

Andy B: 42:37

Wow.

Kiran V: 42:38

Are these like ants, just like walking around, or maybe so?

Andy B: 42:44

How have we never wait? When was this?

Joe C: 42:46

2020. That's so recent. Yeah.

Kiran V: 42:51

They just found wait, where did they discover these trees?

Joe C: 42:55

It's an unexpectedly large count of trees in the West African Sahara and Sahel. Oh wow, the desert is sprouting trees. These non-forest trees have a crucial role in biodiversity and provide ecosystem services such as carbon storage, food resources, and shelter for humans and animals. Surprise.

Andy B: 43:20

The reason my brain is melting right now: for those listening, I used to work in cartography. I worked on satellite imagery and mapping stuff. We've had satellites looking at and mapping Earth for quite a while, actually, and you can do something where you basically measure the frequency of the light and estimate whether the object is liquid or plant or rock; even what temperature it is can be detected. So the fact that some trees snuck up on us is very strange. Were they new trees? Had we never bothered to get such high-definition resolution of the desert? What else don't we know?

Joe C: 44:02

Yeah, and I'm going to guess there's. There's probably a lot of satellites pointing back towards the earth, doing these sorts of things and making new discoveries every day. Um, these are just two use cases I found, but I'm sure there are many, many more.

Kiran V: 44:16

Wasn't it Planet that we were working with, that was looking at migrations of elephants to, you know, help stop poachers?

Andy B: 44:27

So that was actually totally different. We worked with a company called Planet Labs; they were putting little microsatellites up when they were going to try to sell satellite-data services. And then we worked with a drone company... no, a conservation company, that flew drones to track elephants from high altitude, and we were doing object detection on the videos to track and count how many elephants were moving.

Joe C: 44:52

I love the idea of a little drone. A little drone just following an elephant, like a little personal assistant, to keep it safe.

Andy B: 45:03

That would be super cute, and then if a poacher came along, maybe the drone could be like hello human, please place object gun back in object bag.

Joe C: 45:11

Yeah, I don't have hands to stop you. Or it could swoop down. Um, the sad thing is probably a lot of drones do have guns, so it doesn't need to ask for help. Uh, let's see. I also wanted to mention, um, hibernation. So I came across an article about how hibernation, and inducing it in humans, is going to be key for having humans in space and traveling long distances over long amounts of time. Um, and apparently biologists are very close to figuring out how to do this, and I read a stat that within 10 to 15 years, with sufficient funding, they'll be able to induce true hibernation, which should enable more spaceflight. I didn't actually find specific evidence of AI being used in this research, but it's such a complex biology use case that I'm sure AI is involved, and if it's not, they should look into it.

Andy B: 46:14

How real is this news? Cause I feel like there's always someone saying that in ten to 15 years, definitely within our lifetime, we're gonna basically figure out immortal life, and that's been the case for 30 years.

Joe C: 46:25

They were saying in 10 years we'd have it figured out; 20 years ago they were saying in 10 years. Like, it's a 2023 Wired article, and they cited a few research studies that are going on that have had a few breakthroughs.

Andy B: 46:38

I will send to you.

Joe C: 46:40

Yeah.

Andy B: 46:41

I don't like it, because it's like... well, actually, that's not true. I might end up having to use it for other reasons. But it's like time travel, only in one direction. You can only close your eyes and move forward in time. You'll never be able to go back in time.

Joe C: 46:56

However, some of the research is pointing towards like true stasis, meaning like your body really slows down aging or your normal processes, so that when you wake up you are, you know, in physical form, maybe a little bit younger than you otherwise would be.

Kiran V: 47:14

Yeah, and I mean, right now there are humans that have been cryo-frozen with the hope that at some point in the future someone will be able to revive them.

Joe C: 47:24

So it's like we're kind of trying to do this.

Kiran V: 47:27

Don't pay for that, people. Yeah, don't do that until we revive the first cryo-frozen human.

Andy B: 47:34

Where my head went was: let's say you have a genetic disease right now, or a fatal illness we can't treat. Could you go into stasis and then be revived at the time that disease is curable? But if you go into stasis, one, you don't know how long that would be. Everybody you love could die. You might never wake up. They might never cure the disease. So you might have conditions for being woken up.

Joe C: 47:56

You have to arrange for your maintenance, which is weird, like your storage.

Andy B: 48:00

Yeah, like, say they tell you you have an illness that will take you out in a year, but they think they'll have a treatment in 10 years. Maybe you're like, I'm willing to miss out on 10 years of my family's life to then get another 20. But like, no, no, it's just an interesting thought. Nothing to do with AI.

Joe C: 48:15

Yeah. So, two other things. As we've talked about, data is so important to anything AI, and really a lot of tech these days, and I just wanna mention that NASA is pretty involved in producing data, and they've done some things to open source it, which is always great: having people use that data, hopefully for crowdsourcing and for the benefit of all humanity. So just one example: more than 20 years of global imagery data is available through NASA's Global Imagery Browse Services, and they have tools for interactive exploration. Alongside this data, they have tools that allow anyone on the internet to actually do data labeling of image sets. Check out their tool, Image Labeler. Kiran and Andy and I built something very similar for object detection and computer vision use cases, and we got a patent for it.

Kiran V: 49:24

Yes, we did.

Andy B: 49:26

Did we? Video annotation? Oh yeah, I wasn't on that, I don't think.

Andy B: 49:29

Yeah, even though I should have been. Anyway, the interesting thing about this is we've talked a lot about how we think that subject matter experts, SMEs, are a big part of making this AI work. So this is computer vision. If you've heard some of our previous episodes, you know that computer vision is the domain of AI that's all to do with sight: videos, stills, images. And a very common problem in that is something called object detection (this is the word we've been throwing around), which is, you basically teach a machine learning model to identify, name and localize an object on an image.

Andy B: 50:07

So to do that, it's what's called a supervised machine learning problem. That means you need training data: lots and lots of labeled examples of what you want the AI to do, so it can perform like a monkey-see, monkey-do operation. And for some of this training data, part of the reason they're crowdsourcing it is, like, not everyone's gonna be really good at identifying "this is a nova, type two, type three" from a very slight light change on a still image. So you need subject matter experts, real astronomers, to sit there and very patiently label and tag a bunch of different examples, to then train the models, to then scale what the astronomers can review, and then have them do the human-in-the-loop and feed that back into the model. So you make this kind of virtuous cycle where the person shows the computer, the computer finds for the person, the person shows the computer. That's how these computer vision object detection algorithms work.
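That label → train → review cycle can be sketched in a few lines. Everything below is a deliberately tiny stand-in: the "model" is a single threshold on a brightness reading and the "expert" is an oracle function, but the shape of the loop (expert seeds labels, model auto-labels the confident cases, expert reviews only the uncertain ones, then retrain) is the human-in-the-loop pattern Andy describes.

```python
import random

random.seed(0)

# Toy "images": each is a brightness reading; the hidden truth is that
# readings above 0.5 contain the object of interest.
data = [random.random() for _ in range(200)]
truth = {x: x > 0.5 for x in data}

def expert_label(x):
    """Stand-in for the subject matter expert: always correct, but 'slow'."""
    return truth[x]

def train(labeled):
    """'Train' a one-parameter model: pick a boundary between the classes."""
    pos = [x for x, y in labeled.items() if y]
    neg = [x for x, y in labeled.items() if not y]
    return (min(pos) + max(neg)) / 2  # decision boundary

# Round 1: the expert labels a small seed set by hand.
labeled = {x: expert_label(x) for x in data[:10]}
threshold = train(labeled)

# Round 2: the model predicts on the rest; the expert reviews only the
# low-confidence cases near the boundary (the human in the loop).
for x in data[10:]:
    if abs(x - threshold) < 0.1:   # model is unsure -> ask the expert
        labeled[x] = expert_label(x)
    else:                          # model is confident -> auto-label
        labeled[x] = x > threshold

threshold = train(labeled)         # retrain on the enlarged labeled set
print(f"learned boundary: {threshold:.2f}")
```

The expert only touched the handful of borderline cases, yet the model ends up trained on all 200 examples; that leverage is why the crowdsourced-plus-expert cycle scales.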

Kiran V: 51:09

I'm actually curious what the accuracy is of these systems. Like, what is the—

Andy B: 51:15

I found something on this. I'm reading it right now; I have an article open.

Andy B: 51:18

So, neural networks, which use many interconnected nodes, are able to learn to recognize patterns. They're perfectly suited for picking out patterns of galaxies. Astronomers began using neural networks to classify galaxies in the early 2010s. Now the algorithms are so effective they can classify galaxies with an accuracy of 98%. And then there's another stat, on exoplanets. Astronomers have discovered 5,300 known exoplanets so far. By measuring a dip in the amount of light coming from a star when a planet passes in front of it, AI tools can now pick out the signs of an exoplanet with 96% accuracy.
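The transit method Andy mentions, spotting the dip when a planet crosses its star, can be demonstrated on a simulated light curve. The numbers here (a 1% dip, the noise level, the detection threshold) are invented for illustration; real pipelines fit physical transit models and use learned classifiers rather than a fixed cutoff.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a star's light curve: flux around 1.0 with small noise, plus
# periodic 1% dips when a planet transits (the signal pipelines hunt for).
n = 1000
flux = 1.0 + rng.normal(0.0, 0.001, n)
for start in range(100, n, 300):      # a transit every 300 samples
    flux[start:start + 20] -= 0.01    # 1% drop lasting 20 samples

# Crude detection: flag samples well below the overall median brightness.
baseline = np.median(flux)
in_transit = flux < baseline - 0.005

# Count distinct dips by finding rising edges of the flagged regions.
edges = np.flatnonzero(np.diff(in_transit.astype(int)) == 1)
print(f"dips found: {len(edges)}")
```

Three transits were injected, and the threshold recovers three dips; the hard part in real data is that the dip can be far smaller than the noise, which is where the machine learning comes in.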

Andy B: 52:00

And for those who are stats nerds like us, I do have the papers; I can link them on our website. There's one from 2020 on exoplanet detection using machine learning, and they have a paper on their technique. So, accuracy of 98% on planets, with recall of 82 and a precision of 63. And what that means is, a recall of 82 means that if you have 100 possible planets that could be discovered in an image, the AI is going to help you discover roughly 82 of them.

Joe C: 52:37

Very cool.

Kiran V: 52:39

That's awesome.

Joe C: 52:43

One more thing worth mentioning, and I think we could probably talk about this in every episode: AI is being used in the manufacturing of all things space, and in our space infrastructure and maintenance. I don't have any specific things to cite here, other than to say, you know, when we use AI on the ground, in factories and in labs, and our scientists are using it, those things trickle up, literally, into space. And it sort of just shows you that we're tackling problems across the whole spectrum, from the ground all the way into space, using AI, and together, you know, it forms like a backbone that enables us to go to space. I'm sure they're building machinery for space here on Earth that we couldn't build without AI, you know. And again, meaning we can't get into space without it happening here on the ground.

Andy B: 53:49

So I just want to wrap up some of these use cases. This is actually very different from some of the previous episodes we've talked about, and from what you might be personally experiencing with AI, because as far as I'm aware, there's not a lot of generative AI use cases and not a lot of NLP. So even though the media talks about these AIs talking to you, real space exploration that's being done with machine learning right now is really based on light (sometimes sound, but mostly light, human-visible and non-human-visible). It's all in the world of computer vision, as far as I know. Maybe some regressions for maintenance, but I'm not aware of any, like, NLP or gen AI use cases. Are you guys?

Kiran V: 54:36

Not that I've heard of.

Joe C: 54:36

No, that's a very interesting point. I'm trying to think of, like, how you'd use ChatGPT in space.

Kiran V: 54:44

Yeah, and I think, I mean, if you think about it, it makes sense, right? Like, one, sound: you can't hear anything in space. In case you guys weren't aware, when you're in a vacuum, sound can't travel. Sound travels by particles hitting each other, and that wave eventually making it to your ear.

Kiran V: 55:06

Right, there's no sound in space. And then, when you're in space, there's no one to communicate with. But I wonder, you know, we could use AI, if we were ever to find aliens, to actually help us communicate with them, because we could have the AI learn their language in a matter of hours or days, probably, if we're able to access, you know, their information systems, and then find correlations of behaviors, or objects, or thoughts or patterns, to then translate with an alien species. So one day maybe Google Translate will support Martian or other languages.

Joe C: 55:56

I know they're working on it for pets. Well, not even pets, but animals in general, which, you know, have ways to communicate that aren't human.

Andy B: 56:06

And then they're going to do it for plants, and then we're all going to be like what do we eat? The plants are sad, the insects are sad, the animals are sad.

Joe C: 56:12

Yeah, I remember watching something about like should we be understanding what our animals are saying? Because like, what if your dog tells you they hate you? I mean, it's like Dr.

Kiran V: 56:22

Doolittle, right? He, like, started going crazy because he couldn't handle all the animals constantly talking to him and complaining about things.

Andy B: 56:30

Yeah, Well, they were also very demanding of his time and resources.

Kiran V: 56:35

That's the problem.

Andy B: 56:35

They were like a person that understands us finally, and then they laid into him.

Joe C: 56:39

Yeah, yeah. Just sort of imagining here, like, generative use cases in space, and thinking about rovers. Kiran, you were talking about, maybe in the future there could be, like, AI that arrives at a planet and then sort of generates the right rover for the terrain, that sort of thing. That'd be cool.

Kiran V: 56:58

Or even, like, sets up a base camp for humans, right? If you have, like, a 3D-printing arm and you have these things and you can understand the environment, now I can determine, like, you know, what materials should I use to build it. What should a foundation for a building look like?

Joe C: 57:15

Yeah, Could understand the atmosphere.

Andy B: 57:18

There's actually a video game I played that does exactly that. I think it's called Red Planet on Mars. I mean, it's a little space exploration sim video game I was playing, and your goal is to terraform and inhabit Mars. But it's this very long game. You send drones first; the first drones collect resources and do surveying. A second set of drones starts to build infrastructure for bigger drones. A third set of drones starts building infrastructure for early humans. Like, you chain these things, and you let, again, robots do what they're good at and people do what they're good at, which I think is a really important thing for us to consider as a species.

Andy B: 57:59

Like, not everything is appropriate for AI, and there's some things that I think we're just always going to be better at than them, and vice versa. There's some things where it's just, like, let a machine do it; they're just going to be better at it.

Joe C: 58:10

Yeah, or things humans can't ever possibly do, yeah.

Kiran V: 58:14

And actually this kind of makes me think of The Martian. Have you guys read the book or seen the movie? It's a movie with Matt Damon. Premise, in case you guys haven't heard of it: this guy goes into space with a crew and they're trying to terraform Mars. And so the first crew lands on Mars and they have all these expeditions planned, but then the first one goes awry because of some space storm or whatever, and they have to, like, emergency evacuate the planet. But then one of the guys gets left behind, because they think his face mask broke in the storm, and so they're like, oh well, he's dead and we can't get him back, and so we're going to just leave him. But it turns out that, like, you know, some of the sand or whatever had blocked the hole in his spacesuit, so he survived, and now he's, like, this guy living on Mars by himself.

Kiran V: 59:13

And what was interesting to me, as we're talking through this, is there was a lot less AI in that movie than I maybe would have expected, just given all of the AI in, you know, pop culture and media, right? And, you know, to think about a world where humans are able to get to Mars and bring enough stuff to terraform and do all this stuff, like, you'd think that they would also have AI aiding them through this process. But it's interesting to think, like, it was very manual, and he was, like, growing his own plants, and had to do all this stuff to survive for years on Mars. So it's kind of like a flip of what we've seen in, like, you know, so many of the other sci-fi... oh yeah, I got balloons.

Joe C: 59:58

Did you see that I?

Kiran V: 59:59

did.

Andy B: 1:00:00

What was that? I don't know. Where did that come from?

Joe C: 1:00:05

I was going to say, maybe for an author, AI is sort of a cop-out sometimes, because it's, like, magical, you know. And so it's easy to say, like, oh, AI did this and AI just did that. But to figure out how a human might do something in a really complex situation could be a fun space for an author.

Andy B: 1:00:32

Yeah. So: a lot of computer vision work, some regressions, in the space of space exploration. What we imagine is actually not very similar to what the reality is. Like, all the movies and TV shows are showing these conversational agents and very humanoid robots, and instead what we're doing is large-scale data mining. But it's cool. You can only build what you can imagine, right? So it's good that we have big imaginations. Yeah, neat.

Joe C: 1:01:05

Very neat, very fun topic.

Kiran V: 1:01:09

Yeah and yeah. There's so much more here that I want to get into.

Andy B: 1:01:14

That we won't, because one of our goals is to make our episodes shorter. And, on that note, I really hope that this was interesting to folks, that you learned or considered something that you hadn't before. I know I definitely learned, and was really surprised by trees in deserts. Surprise trees! And maybe, yeah, diamonds in space. Who knew?

Andy B: 1:01:34

And that this helps you understand a little bit more about how AI and machine learning are impacting the world around you. If you're listening to us for the first time, let us know what you think. Please rate, subscribe and, if you're willing, share our show with somebody that you like and trust. Have a great rest of your week, everyone.

Joe C: 1:01:52

Thank you, bye-bye. [Music playing]