Episode 9 - AI and Space (and Sci-fi, oops)
Andy B: 0:00
If you want to have a robot that performs autonomously, it needs to have artificial intelligence. I think of the robot as the hardware and artificial intelligence as the software that makes this creature. So there are some robots with four-letter names in Interstellar. The one that you see on screen the most is called TARS, and they mention kind of offhandedly that they have personalities programmed into them so that they can be distinct and helpful companions. And TARS is really sassy.
Andy B: 0:29
And what I really liked about that presentation is it showed their artificial intelligence is very human-like, but their physical bodies are very not human, and the way they move. There was actually an actor in a 200-pound animatronic thing to try and replicate it that they had to swap out eventually. But the way they move allows them to do things with human intelligence that a human could never do, and vice versa. Like, the fine motor actions were the work of the people, but then the, like, bash-the-door-closed was the work of the AI, and I just thought that was a really nice way of showing collaboration between AI and people and what could be possible.
Kiran V: 1:12
Classic human in the loop in sci-fi.
Andy B: 1:15
Exactly, and, quite frankly, I think TARS is the hero of that movie. He's the first one to suspect that there's some shady stuff going on with the dude. I don't want to spoil the movie, but if you haven't seen it by now, I'm sorry, but he's the first one to suspect that there's shady stuff going on with the destination planet. He basically volunteers and shoots himself into a black hole to collect data, knowing, because he was self-aware enough, that he's a robot and doesn't feel pain the way that people do. It was just like, OK, I thought it was really cool. That's my dream. I'd love to have a TARS.
Kiran V: 1:52
Yeah, and I think that, well... so another one that I think also is human-like but doesn't have a human form is Jarvis from Iron Man, and that's Iron Man's assistant. And he comes into a shop and he's like, Jarvis, I need this crazy high proton collider thingy, and then Jarvis has these arms in his shop and will just do a whole bunch of stuff. But when he interacts with Jarvis, Jarvis just talks back to him like a human, and I think that's something that we almost, I feel like, take for granted. People that aren't familiar with AI and familiar with the complexity of it take for granted that ability of, like, oh, we can actually instill these human values into a robot. That, like, yeah, AI does that. But the reality is we're so far away from that, and obviously, with the rate of acceleration of technology, maybe it's closer than I'm thinking, but to me it's at least 50 years until we would have any sort of sense of sentience in our robots.
Andy B: 3:04
And it's interesting, we're actually a lot closer to the physical parts than we are to the mental parts. Like, there's videos online you can see of Boston Dynamics, or whatever that company is, they're like operating many arms. Factories have been doing that for ages.
Joe C: 3:23
I think... sorry, pause, I feel like we're way off topic.
Kiran V: 3:28
I think it's fine. We can always cut this stuff out.
Joe C: 3:31
OK, yeah, I just realized, like, we're kind of talking about sci-fi and robots and not really space, but we can continue, you know.
Andy B: 3:44
Well, yeah, no, well, we have to start in the world of the possible and then bring it back down to the realistic. What Kiran just said makes sense, right? Like, we have big dreams as a species for how AI is going to help us explore and live in space, and then the reality is very, like, doon, doon, doon, doon, doon, doon doon. Like, we are nowhere near our big dreams really.
Joe C: 4:27
Hi everyone, and welcome to AI FYI, where we talk about the good, the bad and the buggy of AI. So we're here to demystify AI and talk about all things AI out in the world today, which is a lot. So I'm Joe, and we have with us Kiran and Andy. Say hi! Hi, hello, hello. So we are AI experts, we work in the field of AI, and we're also just lovers of technology and, let's say, AI hobbyists. So today we're going to be talking about a really fascinating subject. We're going to be talking about space and how AI is being used in the space industry and all the endeavors that humanity is taking to get off the planet and go explore the stars. So why is space cool, guys? Why are we talking about it?
Andy B: 5:17
It's so big. It's the biggest thing we know of. There's so much of it.
Kiran V: 5:21
Yeah, space is massive and I remember when I was younger, my dad got us a telescope and we just started looking at stars and you very quickly realized the vastness of space.
Joe C: 5:34
It is wild. It could be infinite, and I think right now we're in a very interesting time of space exploration. Certainly, in the past decade, maybe even longer, space has become more commercialized. There are tons and tons of satellites in the air. We're going to be talking more about that, and technology is taking us further than we've ever gone before. So those are some of the things we're going to be talking about.
Andy B: 5:59
It's also like, space is a big part of the human imagination. Humans have been low-key obsessed with the stars since, as far as we can tell, before we were humans. And the fact that you said space is being so commercialized, I'm like, oh, isn't that kind of a myopic take? Because it's only our very narrow world of space that we can currently reach. Our imaginations, like movies and TV shows, so to speak, think far beyond what we can reach, of what could be out in space.
Joe C: 6:30
Absolutely. In trying to understand this topic more, I kept coming across sci-fi and all the ways we've imagined being in space and how that's pushed us forward. In fact, let's talk a little bit about space and AI in media, because there are a lot of examples.
Andy B: 6:50
Where my head immediately went was Foundation. Not talking about the Apple TV show, but the original Isaac Asimov books. If you guys remember, I used to have the Isaac Asimov Guide to Shakespeare on my desk at work when the three of us worked together. So Foundation is a bunch of books. I think they're kind of mediocre, but they're interesting to read, and in it, it's a very distant, very far-reaching space humanity. I don't know if you guys have read this.
Andy B: 7:22
Basically, in the first book, essentially a mathematician predicts that the empire that covers like 40,000 planets of humans will collapse, and then establishes a foundation that goes through generations to, like, host human knowledge while this empire collapses. What he predicts happens. So the book series ranges thousands of years, tens of thousands of years, into the history of humanity out in space. But he wrote three books and then took a long break and then wrote two more, and the second two, they're all about AI, basically, because some of the main characters end up on planets where, like, humans have changed so much that they've basically become co-existent with robots.
Kiran V: 8:16
So it's not like a bionic thing, it's like there's, like, full robots and humans, kind of like Blade Runner.
Andy B: 8:23
I'm trying to remember exactly what happened, but I want to say there was even like robots doing genetic engineering on humans to make humans more like durable and less human-y.
Joe C: 8:36
Dude, would you say the AI allows them to exist in space, or exist off Earth? Or, well, I guess Earth doesn't exist in that world, but off in deeper space.
Andy B: 8:48
I think in so much of the really high fancy sci-fi, AI is just, like, woven into people's daily lives.
Kiran V: 9:00
It's almost like it's expected in those civilizations or those realities, and the one that I think of is Blade Runner, and within Blade Runner there's like different tiers of AI as well. They have like the AI that's kind of accepted to live amongst the humans, and then they have like the replicants and they're like trying to get rid of them. So it's interesting that the concepts that we have in humanity of like classes and different economic classes or social classes is also imitated or replicated in that like AI civilization. So it's just like interesting how we kind of parallel humans and technology even though we're so different, and I think that's kind of something that you see a lot in science fiction.
Andy B: 9:58
I was just thinking about the Jetsons. You remember the maid in the Jetsons was a robot and she was like a little sassy. I mean it makes sense why we think of like all the things we don't want to do as things that we would offload to robots. That makes sense. But they actually put her in a little maid outfit, which is like completely unnecessary.
Joe C: 10:16
I want to talk about Dune, because it also has a very interesting way of handling AI. Apparently, the author thought that AI in science fiction, or like in human history, or like future history, would be an eventuality, that AI would be interwoven into our world, and he actually starts his books in, like, a post-that world, where AI got so powerful that it was actually banned in, like, a religious way, and a lot of things we see in Dune have come about because thinking machines were banned. There's apparently, like, a great jihad, or war, 10,000 years before the books start. That's something I didn't really see in the movie but apparently is more talked about in the books.
Joe C: 11:06
I think... sorry, pause, I feel like we're way off topic. Yeah, that's true. And I think, like, anything that happens in space, robots are going to be really important. But a few other examples: I feel like there's many times where there's, like, some robot companion, or it's part of the spaceship, and it's essential to get the humans from, like, point A to B as they're traveling through space. This is a key, like, plot device in 2001: A Space Odyssey; spoiler alert, the AI is evil and, like, kills some people. And then of course we have, like, the droids in Star Wars, and even in Star Trek, like, Data is, you know, a machine and part of the crew. And I wanted to mention from Star Trek too that there's, like, the spacefaring Borg species, which is basically, like, a collective colony AI situation, and, you know, part of their being machines has allowed them to live in space.
Andy B: 12:17
And what the Borg do... I don't know, Kiran, I assume you have not seen the Star Trek. This is wild. This is, like, the best bit of Star Trek as far as I'm concerned.
Andy B: 12:25
The Borg attempt to assimilate. So what they mean is they try to find sentient biological creatures and make them one node in the AI, and then every sentient creature that's added adds to, let's call it a network, or a neural network, of what the Borg as a collective becomes. And there's two interesting kind of arcs: there's a sentient creature that becomes a Borg, that becomes disconnected from the whole, that they interview in one of the episodes, and that conceptualization of this AI. And then what I think is the best episode of Star Trek ever is, spoiler alert, Picard becomes a Borg and he is freed by his crew and he decides to retire. And the next episode is him dealing with his trauma of having been part of an AI, on his family's vineyard in France, and it's not in space at all, right, and it's not showing any technology. It's him walking through fields, but that's where they have some of the best conversations about what it is to be human in a world of intense technology and AI.
Joe C: 13:36
Yeah, we don't talk about Star Trek enough, and probably a lot of Trekkies would say that there's a lot of philosophy and technological imagination there.
Kiran V: 13:46
I know I think I missed the Star Trek train growing up, so I might need to go through the infinite backlog of Star Trek episodes and movies.
Joe C: 13:56
Me too, I would say.
Andy B: 13:58
Yeah, there's so much.
Kiran V: 13:59
I think, yeah. So I mean, in the last part, I think we've seen there is no shortage of examples of AI in media, and it's extremely prevalent in any sort of sci-fi. Anything, you're going to have some sort of AI, whether it's explicitly stated or not. So, Star Trek. There's also plenty of examples in cartoons; there's episodes in SpongeBob where Plankton's wife is a robot. There's also Hitchhiker's Guide to the Galaxy, and I don't know if you guys remember, but that's the one where they go on this long journey to find the answer to life, and they get to this AI that has been computing the answer to life for some thousands of years, yeah.
Kiran V: 14:49
Yeah, what is the purpose of life, or what is it, the meaning of life and everything? And the answer is 42. And it's just funny, because this is an AI, and if you guys haven't seen it, it's basically, like, this giant box, right? It just looks like a big building.
Andy B: 15:07
Yeah, In the movie it's this big rectangle with a smile on its face.
Kiran V: 15:11
Yeah, and it's like, it talks to them, and so it has that human characteristic. But then when it's trying to give you an answer to life, it's like, us as humans are probably trying to figure out what that is anyways, and the computer just computes that as a value, because, at the end of the day, when we think about neural networks and how they're implemented, it literally is just a series of numbers. And so when it comes out, it's like, yeah, that makes sense to the machine, because that's what it computed the value to be.
Kiran V: 15:39
So it shows, you know... I think there's a point where it's talking and you think it's a human, and then suddenly you realize that, wait a second, this is a machine, and, you know, it has limitations. And I like that they put that into the movie, because they could have easily had, you know, some long spiel that the computer gives that's, you know, some meaning of life, but instead it's like, oh, it's 42.
Andy B: 16:04
That's the answer, and it's, like, very confident, and it's just like, no, I've thought about it for a while, it's 42. And the people who, like, waited for millennia and many, many generations, that, like, come to this platform to see what the great sage AI said, they just rioted. They're like, 42?
Kiran V: 16:25
And then I think that the other one that I wanted to talk about well, there's a lot, but the other one is Westworld.
Kiran V: 16:32
So if you guys haven't seen Westworld, the premise is, you know, they have AI robots and humanoid robots and so they have this like theme park kind of thing.
Kiran V: 16:42
So humans will go to this theme park and interact with these robots that live inside the theme park. And, you know, it's like a Wild West theme, so they'll go on, like, oh, let's go on a cowboy adventure, and they'll, you know, ride a horse and, like, go around. Or, you know, they'll have interactions in a saloon with the AI. And again, spoiler alert, it turns out to be very, very dark, and behind the scenes of, you know, how they're creating these robots, they're very much, like, growing human flesh. They're trying to simulate these as humans as much as possible, and it turns out that the people running this are also robots. And so then you kind of get into this mind loop of, like, okay, wait, who is actually a robot now? Because people you thought were humans turned out to be robots, and so it's just really trippy to see that, you know, play out over the episodes.
Andy B: 17:39
Would it be correct to say that, like some space westerns aside, every time we make media about space or distant, you know, galaxies, somehow AI is involved? Human beings have not really made a lot of media where they've disconnected exploring our vast and infinite universe from using some sort of AI.
Joe C: 18:03
I think so. Yeah, I mean, the example of Dune is like post that, but it was in there.
Andy B: 18:11
In a lot of people's minds, AI and robots are sort of synonymous with, like, space and sci-fi. I think this comes back to what I said, that space is really big. I think people have some intuitive feeling that, like, it's bigger than what maybe we as a species can do, and that artificial intelligence can supplement our intelligence, can supplement our time, right, and our resources. So it makes complete sense that it's kind of a trope, almost, that when you talk about any sort of media that relates to space, you're going to put a robot or some AI in it. And we'll talk about this next: most of space exploration is pivoting more and more towards AI. It's a job that's maybe better suited for robots than people.
Kiran V: 19:05
Okay, so we have all these examples of AI in space in media. You know, a lot of sci-fi movies and TV shows that have come out over decades now, right? And this is not new at all. People have been thinking about this for such a long time, before the technology was even existent in any capacity for humans, right? So it's crazy that, naturally, we started thinking of technology as, you know, having some sense of human ability, right, in many different forms. So, you know, now, if we come back to real life, you know, where did we really start when we talk about AI in space? And I was actually extremely shocked to discover that the first example of AI in space was actually all the way back in 1959 when we launched Deep Space One from NASA.
Kiran V: 20:05
Yeah, so that is 50-plus years ago, almost 60 years ago. What did it do? It was very rudimentary. Essentially, they called it the Remote Agent, and the purpose was simply failure analysis on the probe itself. So it's this autonomous system that was able to look at the machine's systems and metrics and stats and surface a, hey, there might be a problem with the probe. So, super basic. It wasn't talking to anyone. It wasn't doing really anything other than monitoring the systems on that satellite to surface and identify any potential failures that might occur throughout the mission.
Andy B: 20:56
Was it an anomaly detection model?
Kiran V: 20:58
A software package that can predict aging and failure of materials, including those used in airplanes, cars, engines and bridges.
Kiran V: 21:07
So if I was to guess how this is implemented, they essentially have a number of metrics that they're monitoring, based on material temperature, or speed of the spacecraft, or, probably not detecting things around it in 1959, but essentially it's probably a fairly basic model that has a number of input signals, which are these different stats of the systems on board, and then, based on different events that they must have seen in their testing, will identify that, like, hey, this specific system is prone to failure, given examples that we've seen in past testing. And so, while it's very basic, it wasn't doing anything like navigation or detection of other objects, it was simply monitoring its own systems. And this is something a person can't do when there's no human on board, and so you need the machine to be able to take over that. And so it's cool that back in 1959, they were using these systems on board our spacecraft. So yeah, the first ever AI used in space was in 1959 on the Deep Space One mission.
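For anyone curious what that kind of onboard health check could look like, here's a minimal sketch in the spirit of what Kiran is guessing at. The metric names, baseline numbers and three-sigma cutoff are all invented for illustration; this is not the actual Remote Agent design.

```python
# Hypothetical sketch of onboard health monitoring: flag telemetry readings
# that drift too far from a baseline established during ground testing.
# Metric names, baseline numbers and the 3-sigma cutoff are invented.

BASELINE = {  # (mean, standard deviation) from imagined ground-test data
    "thruster_temp_c": (40.0, 5.0),
    "bus_voltage_v": (28.0, 0.5),
    "wheel_rpm": (3000.0, 150.0),
}

def check_telemetry(reading, threshold=3.0):
    """Return (metric, z_score) pairs that look anomalous."""
    alerts = []
    for metric, value in reading.items():
        mean, std = BASELINE[metric]
        z = abs(value - mean) / std
        if z > threshold:
            alerts.append((metric, round(z, 1)))
    return alerts

# One simulated telemetry frame: the thruster is running hot.
print(check_telemetry({"thruster_temp_c": 62.0, "bus_voltage_v": 28.1, "wheel_rpm": 2980.0}))
# -> [('thruster_temp_c', 4.4)]
```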
Andy B: 22:40
I just found a slideshow from somewhere by somebody named Ron Garrett from 2012, titled The Remote Agent Experiment: Debugging Code from 60 Million Miles Away, and I'm scrolling through the 50 slides really quickly just to see if there's any architecture information on the model, when I come across a slide that's just titled The Fall of Western Civilization: RA downgraded from mainline flight software to a flight experiment; attempt to rewrite planner in C++ failed. This is the most honest truth I've ever seen about how AI gets implemented. Attempt to rewrite in C++ failed.
Kiran V: 23:20
Yeah, yeah, and that's it, right? It's like, these are just humans that were creating the system, and, you know, they probably were like, well, if something goes wrong when I'm 60 million miles away from Earth, I really can't do anything. So, you know, I need the system to at least be telling me that information. And again, think about 1959. There were no computers back then. This is when we're going from, you know, mechanical computing machines to, you know, actual electronic technology.
Andy B: 23:52
Which were the size of a room, for those who didn't know, and they were sometimes operated... I don't know if the fifties was the era of punch cards, but it's almost like how women used to weave fabric with Jacquard looms, with cards that had programmed patterns. That's how you used to program computers. You'd have to hire a woman to put holes in pieces of paper that you fed into a machine the size of a building, for compute much, much, much, much worse than what's in your headphones currently.
Kiran V: 24:23
Yeah, and the way that that worked is, if you think about bits today, right, we talk about, you know, a bit is a one or a zero, and that's indicating some signal, right? That punch card was physically coding every single one of those bits, right? So if you punch a hole, that's a zero; if you don't punch a hole, then that's a one, or vice versa. And that's it, like, you have to punch out the whole program. And, you know, for people familiar with something like assembly, right, this is like a step lower than assembly, where it's like, I'm actually coding every bit in this memory bank, which is literally a piece of paper.
Andy B: 25:02
So maybe sometime we can get into the history of computing. We're talking about AI, and if you think about why it's so crazy that they got something that can be considered AI working in 1959: you have actual compute hardware, you've got bits on it, you've got a program called assembly that might be helping hardware interpret the bits. You have these layers of abstraction. Artificial intelligence often runs on a layer on top of Python, Python runs on something called C. There's basically, like, a crepe cake of the many layers of different people who had to make things, so that the AI that you think is super sophisticated and intelligent and smart, that can talk to you like a person, like ChatGPT tries to, will break if, like, somebody messed up a semicolon in the 90s on a driver for a piece of hardware that it needs to run. That's wild.
Joe C: 26:01
This use case, it seems like it was done out of necessity, because we couldn't be there in space, and I think that's a common theme with everything that AI is doing for, you know, furthering space. Necessity.
Andy B: 26:18
Yeah, I'll go and talk a little bit about astronomy. So, yeah, space is just too big for people. It just is very big. And, you know, we sent these initial probes out to space, and we sent satellites that have cameras on them, and telescopes, to then very slowly beam back and transmit the images they're taking. And those have gotten so much better. Like, there's a new big space telescope that's been operating, and you can see beautiful images of it re-shooting things, so we're like, wow, this is beautiful. And then we take another picture 15 years later and we have, like, 200 times more high definition. It's like what happened to TVs in the 2000s: if you think about what you used to think a high-def TV is and what you have now in your living room, it's very different. That's been happening in astronomy, and there's so much data coming from these technologies that people have basically pivoted astronomy to be almost entirely machine learning, because there's no other way to look at that much data that quickly.
Kiran V: 27:31
And in May of this year, SETI, which is a, you know, space exploration... oh man, what is SETI?
Andy B: 27:44
I know SETI is the people who are looking for aliens. Oh yeah, Search for Extraterrestrial Intelligence.
Kiran V: 27:51
Yeah, so in May of 2023, May of this year, SETI, which is the institution whose name stands for Search for Extraterrestrial Intelligence, announced that they actually discovered 69 new exoplanets. So an exoplanet is just a planet that's outside of our solar system. So they discovered 69 new exoplanets with machine learning. So essentially what they did is they took this telescope and just pointed it all around to different places in the sky, and then, using AI, it was actually able to detect the signatures of different celestial objects and determine that 69 of those were actual planets. And, in case you all didn't know, there is a near-infinite number of objects in space, many of which we've classified and many which we probably have no idea even exist.
Joe C: 28:49
Yeah, that number is probably going to go up every year from now on.
Kiran V: 28:53
So it's cool that we're able to, like, automatically do that.
Andy B: 28:58
So what's actually happening, I'll take a second to explain because I went deep on this for astronomy reasons, is traditionally what you do is you, like, zoom in on a piece of the night sky, right, and you take a picture. Then you wait till the next night, or two nights later, and you take another picture, and then you compare, and you, like, literally go one to one. You know, think about the scene in The Office where it's like, they're the same picture, actually, but find the differences. Somebody used to have to manually try to, like, okay, this dot is that dot, this dot is that dot, and then see if dots moved, or if one that should have been there disappeared, which basically means it went behind something. Incredibly tedious, if you think about how detailed these images are.
Andy B: 29:43
So what they're doing with AI is actually, like, relatively simple computer vision object detection. They can take a bunch of frames over time and then use an algorithm, an object detection algorithm, to identify each point, give it an ID, track it, track changes, and then highlight to a human being where their attention needs to be, where there could be a lot of interesting stuff. Then an astronomer will go deeper and be like, oh, the change in light here, based off what I'm being alerted to, could be a new pulsar, we've identified a new black hole, an interesting exoplanet. So what it's helping people do is, we get all this data, way more data than a human being could ever go through; all of us working together on Earth could not go through it. The AI is just sifting that information using object detection to be like, okay, expert person, look here, look here, look here.
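As a rough illustration of the frame-to-frame comparison Andy describes, here's a toy sketch that assumes the point sources have already been detected as (x, y) pixel coordinates; the coordinates and the matching tolerance are made up, and a real pipeline would do far more than this.

```python
# Toy sketch of comparing two nights of detections: match each point source from
# night one to the nearest source from night two, and flag anything that moved
# or vanished so an astronomer can take a closer look.
import math

night_1 = [(10.0, 12.0), (55.2, 80.1), (200.5, 33.3)]
night_2 = [(10.1, 12.1), (57.9, 83.0)]  # the third source is gone tonight

def nearest(point, candidates):
    """Closest candidate to the given point, or None if there are no candidates."""
    return min(candidates, key=lambda c: math.dist(point, c)) if candidates else None

for source in night_1:
    match = nearest(source, night_2)
    if match is None or math.dist(source, match) > 2.0:
        print(f"flag for an astronomer: source near {source} moved or disappeared")
    # sources matched within tolerance are assumed to be the same, unchanged object
```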
Joe C: 30:42
I have a related use case, or something tangential to that. As you may have heard, we have a space garbage problem right now. One issue that that's causing, and I say space garbage, but it's also just a whole lot of satellites, like everyone's got one up there, is it's causing problems for astronomy, because satellite trails and things that are in the near atmosphere are setting off these alerts and giving false positives on things that astronomers should be looking at. And so there was a use case I read about where they crowdsource classifications and use machine learning to actually identify what is a satellite trail or a false object, which can then be taken out of the images and things that the astronomers are looking at.
Kiran V: 31:36
Yeah, and this is actually just another reason. Obviously the atmosphere adds an extra layer of interference when you're trying to view things in space.
Kiran V: 31:48
So, this is why we launched the Hubble Space Telescope initially, and that's been superseded, and now we have the James Webb telescope. But this is why we launch telescopes into space, so you don't have all of those things obstructing your view when you're imaging. But I think these things that we're discussing here are actually examples of where AI is outperforming humans when we're viewing the night sky and taking images of it. There is literally no physical way a human could do what an AI is doing today, and so it's really cool that AI is literally allowing humanity to go further and understand more than we possibly could have without AI. So we're getting into that realm, in certain places in AI, where the machines are better than humans, and they're significantly better in some cases, which is, I think, really exciting, because it shows the possibilities of AI for humanity.
Andy B: 32:54
It must be a really incredible time to be an astronomer. Like, a little stressful, because you suddenly have to become a machine learning Python engineer to do your job, quickly, so, sorry about that. But on the flip side, they have way more information than even astronomers 30 years ago could have ever hoped to have, thanks to AI. You know, I feel like probably they were thinking that they were getting, like, a faucet going drip, drip, drip, and then these big new telescopes are coming out, they're getting so much data, and it's like you're seeing a river gush by, and then AI is coming through and actually sifting that river, into the Amazon, into the Pacific Ocean. All of a sudden, what they thought was an amount of data they could process, like the drip from your sink, is suddenly a world of data, a world of ocean. It's just so cool. They're gonna find so many interesting things. I would not be surprised if we learn some shocking or terrifying or deeply disappointing things in the next 10 years in space.
Joe C: 33:53
Yeah, I think all that data, it's only gonna grow in the amount that we get every year. Very exciting, but also, like you said, potential for scary things, or lots of discoveries that might blow our minds too much. So we talked about satellites and telescopes and AI powering their astronomy capabilities, but it's also worth mentioning that even to get them in place, or to operate them in space, AI is being used, for things like crash avoidance and general navigation. You've probably heard that airplanes largely fly themselves these days. I'm gonna guess that's the case for a lot of our space apparatuses too; of course, the unmanned things probably have AI flying them, but I'm sure the ones that are manned by astronauts are also largely piloted by AI for navigation.
Kiran V: 34:45
Yeah, and this is again like the crash avoidance. Right, we talked about space garbage and, in case you all didn't know, there's a lot of junk in space, including a Tesla Roadster with an astronaut suit inside it, like there's literally just junk floating around our Earth.
Andy B: 35:03
And some of it is atomic bombs. Not to scare anyone, but yeah.
Kiran V: 35:09
And so these satellites, as they're moving through, they're in real time making maneuvers just using AI. Right, humans are not doing any intervention to move these satellites around the Earth. And, you know, this is a 3D movement problem. So you think about, you know, driving a car, an autonomous vehicle; just add another dimension to that, and you're moving at thousands of miles an hour. It's like, okay, this becomes, you know, kind of a challenging problem to deal with.
Andy B: 35:39
So what's interesting to me is, you know who's really good at moving in all dimensions and not running into things? Fish. How do fish brains work, and should there be an AI that tries to replicate fish brains?
Joe C: 35:52
I love it. I always think it's interesting when we arrive at an AI thing that replicates a natural thing. That's worth looking into.
Kiran V: 36:05
Yeah, maybe we can do an episode of AI that replicates nature.
Joe C: 36:09
Yeah, and Andy, I know you could talk about fish all day. You got a lot of fish facts.
Andy B: 36:15
Don't accuse me of things, but I have a lot of facts that.
Joe C: 36:17
I've spent.
Kiran V: 36:17
No, I love fish, I love.
Andy B: 36:19
Yeah, I spent the last weeks.
Joe C: 36:21
I love your fish facts.
Andy B: 36:23
Four hours at the aquarium the last week.
Kiran V: 36:25
Two circumstances. Which aquarium? Love you some fish. I have a membership to Cal Academy.
Andy B: 36:33
And I live an eight-minute walk from it, so I just show my little card and just pop in Nice. Yeah, go look at fish All right.
Kiran V: 36:42
So yeah, so this... so another example of AI in space is something that's, you know, maybe a little more popular: Mars rovers. We've all seen these in science fiction as well, and I'm sure you've heard of Curiosity and Perseverance, two of the Mars rovers. Curiosity was launched in 2011 and Perseverance was launched in 2020. And these robots have a mission to go and explore Mars, and, in particular, places on Mars. So they actually landed Curiosity in a crater and they said, go explore. And this rover has tons of AI systems on board. It has to maneuver this new terrain that we've never been to. So, you know, we're sending a machine to go do something that we've literally never done. It has to do analysis of the environment around it.
Kiran V: 37:42
So this rover needs to go and survey the area around it and determine what are things that it should explore and what are things that it should pick up, right. So obviously there's humans monitoring the system around the clock, but a lot of how the rover is actually traveling around is all just AI-based, right. And these are systems that have cameras that can control the mobility of the rover, can get over or around obstacles, and this is all just driven by AI, right? And so the fact that we can send a machine millions of miles away to go explore for us, right, this is like the Christopher Columbus or Magellan of robots, right.
Andy B: 38:33
But then again, of course, you can have self-driving little cars on Mars. There's no traffic.
Kiran V: 38:39
There's no traffic, but there's rocks the size of buildings that just show up in front of you.
Andy B: 38:45
Just go around it. That seems like a much easier problem to solve somehow than trying not to hit a pet.
Joe C: 38:52
Yeah, it does feel like our self-driving use cases on planet Earth are much more complicated. I wonder why we don't have more little robots driving around on other planets. Maybe the hard part is getting them there.
Andy B: 39:05
Yeah, actually, we do have robots driving around here. I know there's a couple of companies, like Starship and Kiwi, who are trying to do the delivery robots. But why don't we have tiny robots toddling around everywhere?
Joe C: 39:16
Probably because people kick them over and throw them in trees and stuff.
Kiran V: 39:20
Well, so I was in the hospital last weekend and we were just walking through the hall and a robot literally just turns the corner by itself and just drives like it's driving straight towards us, and then it just stops and spins around and waits for us to move. Then, as soon as we walk past it, it just goes and it's just like a delivery robot.
Joe C: 39:42
That just goes around the hospital by itself. It had cargo? Okay, what was it carrying?
Kiran V: 39:47
Yeah, and there's just a little sealed basket on the top and it just goes and it stood outside a door and just started beeping. Then some lady opens the door or comes out, takes the thing out of the robot and just goes.
Andy B: 40:00
I have a story about this I have to tell, because it's incredible. So when those robots in hospitals first started getting made, it was the best example of human in the loop I have ever heard in my life. So basically, like, nurses waste a whole bunch of time being like, oh, the only closet that has this special doohickey that's got to go in a person's thingy is, like, you know, on the fifth floor. So they have to get in an elevator and run up there and get the thing and then get back in the elevator; it's just a huge waste of time. So they're like, we're going to get robots to go and bring things to and fro in the hospital instead of people having to do it, so that the people can stay where they need to do the care, and the robot brings the stuff.
Andy B: 40:41
And the very first prototypes of these things, they were like, okay, it's really way too hard to get it to learn how to navigate, not hit things, and pick things up and put them down. So instead they're like, just make it ask for help. Fuck it, just ask for help. And so there's a video online somewhere of, like, the prototype robot, you know, going up to the elevator and then waiting for a person to walk by and being like, hello human, please press object: button, up.
Andy B: 41:07
And then the person would be like, I pressed up, and then it goes in, and then it goes into the, like, hello human, please place object: cup, in object: basket. And the person does it, and it goes, thank you, human. And so, like, every time it needs hands, exactly what you're saying here, it waits for a person to come by and goes, beep, beep, help me, I don't have hands.
Kiran V: 41:25
Yeah.
Andy B: 41:25
And the person does the hands thing. That's so cool. That's so cool.
Joe C: 41:30
I had no idea these were rolling around hospitals. That's great. Nice to hear a good AI example in healthcare.
Andy B: 41:37
Yeah.
Joe C: 41:40
Okay, so we've talked a lot about things happening off Earth, but I do want to mention a little bit of AI that's pointing back towards Earth from space. These are earth sciences topics, and this has a lot of intersection probably with, like, climate change and also agriculture. I saw two use cases where satellite technology is being used to look back at the Earth and map and predict agricultural trends and weather trends. Of course... a 2020 study led by an international team of scientists and AI folks used satellites to identify an unexpectedly large number of trees across semi-arid areas of Western Africa, which is something they didn't really see from the ground, and apparently tracking these trees...
Andy B: 42:29
Wait, did you say trees?
Joe C: 42:31
Trees, yeah.
Andy B: 42:32
Surprise trees.
Joe C: 42:33
Yep surprise trees, large swaths of them.
Andy B: 42:37
Wow.
Kiran V: 42:38
Are these like Ents, just, like, walking around, or maybe so?
Andy B: 42:44
How have we never... wait, when was this?
Joe C: 42:46
2020. That's so recent. Yeah.
Kiran V: 42:51
They just found... wait, where did they discover these trees?
Joe C: 42:55
They're, uh, an unexpectedly large count of trees in the West African Sahara and Sahel. Oh wow, the desert is sprouting trees. These non-forest trees have a crucial role in biodiversity and provide ecosystem services such as carbon storage, food resources and shelter for humans and animals. Surprise.
Andy B: 43:20
The reason my brain is melting right now is, I used to work in cartography, for those listening. I worked on satellite imagery, mapping stuff. Like, we've had satellites looking at and mapping Earth things for, uh, kind of a while actually, and you can do something, uh, where you can basically measure the frequency of light and estimate whether the thing, the object, is like liquid or plant or rock; what temperature it is can be detected. So the fact that some trees snuck up on us is very strange. Were they new trees? Had we never bothered to get such high-definition resolution of the desert? Like, what else don't we know?
Joe C: 44:02
Yeah, and I'm going to guess there's probably a lot of satellites pointing back towards the Earth, doing these sorts of things and making new discoveries every day. Um, these are just two use cases I found, but I'm sure there are many, many more.
Kiran V: 44:16
Wasn't it, um, Planet that we were working with, where they were looking at, uh, migrations of elephants to, you know, help, uh, get rid of poachers?
Andy B: 44:27
So that actually was totally different. We worked with a company called Planet Labs; they were putting little micro satellites up when they were going to try and sell satellite data services. We worked with a drone company, no, a conservation company, that put drones up to try and track elephants from high altitude, and then we were doing object detection on videos to track and count how many elephants were moving.
Joe C: 44:52
I love the idea of a little drone, a little drone just following an elephant, like a little personal assistant, to keep it safe.
Andy B: 45:03
That would be super cute, and then if a poacher came along, maybe the drone could be like hello human, please place object gun back in object bag.
Joe C: 45:11
Yeah, I don't have hands to stop you. Or it could swoop down. Um, the sad thing is probably a lot of drones do have guns, so it doesn't need to ask for help. Uh, let's see. I also wanted to mention, um, hibernation. So I came across an article about how hibernation, and inducing it in humans, is going to be key for having humans in space and traveling long distances over long amounts of time. Um, and apparently biologists are very close to figuring out how to do this, and I read a stat, yep, that within 10 to 15 years, with sufficient funding, they'll be able to induce true hibernation, which should enable more spaceflight. I didn't actually find specific evidence of AI being used in this research, but it's such a complex biology use case that I'm sure AI is involved, and if it's not, they should look into it.
Andy B: 46:14
How real is this news? Cause I feel like there's always someone saying that in ten to 15 years, but definitely within our lifetime, we're gonna basically figure out immortal life, and that's been the case for 30 years.
Joe C: 46:25
They were saying in 10 years we'd have it figured out; 20 years ago they were saying in 10 years. It's a 2023 Wired article, and they cited a few research studies that are going on that have had a few breakthroughs.
Andy B: 46:38
I will send it to you.
Joe C: 46:40
Yeah.
Andy B: 46:41
I don't like it, because it's like... well, actually, that's not true, I might have to end up using it for other reasons. But it's like time travel, but only in one direction. You can only close your eyes and move forward in time. You'll never be able to go back in time.
Joe C: 46:56
However, some of the research is pointing towards like true stasis, meaning like your body really slows down aging or your normal processes, so that when you wake up you are, you know, in physical form, maybe a little bit younger than you otherwise would be.
Kiran V: 47:14
Yeah, and I mean, right now, there are humans that have been cryo-frozen with the hope that at some point in the future they will be able to be revived.
Joe C: 47:24
So it's like we're kind of trying to do this.
Kiran V: 47:27
Don't pay for that, people. Yeah, don't do that, at least until we revive the first cryo-frozen human.
Andy B: 47:34
Where my head went was, like, let's say you have a genetic disease right now, or a fatal illness we can't treat. Could you go into stasis and then be revived at the time that that disease is curable? But if you go into stasis, one, you don't know how long that would be. Everybody you love could die. You might never wake up. They might never cure the disease. So you might have conditions for being woken up.
Joe C: 47:56
You have to arrange for your maintenance, which is weird, like your storage.
Andy B: 48:00
Yeah, like, say you have an illness that will take you out in a year, but they think they'll have a treatment in 10 years. Maybe you're like, I'm willing to miss out on 10 years of my family's life to then get another 20. But no, it's just an interesting thought, nothing to do with AI.
Joe C: 48:15
Yeah, so two other things. As we've talked about, data is so important to anything AI, and really a lot of tech these days, and I just wanna mention that NASA is pretty involved in producing data, and they've done some things to open source it, which is always great, having people use that data, hopefully for crowdsourcing and for the benefit of all humanity. So just one example: more than 20 years of global imagery data is available through NASA's Global Imagery Browse Services, and they have tools for interactive exploration. So, alongside this data, they have tools that allow anyone on the internet to actually do data labeling of image sets. Check out their tool, Image Labeler. Kiran and Andy and I built something very similar for object detection and computer vision use cases, and we got a patent for it.
Kiran V: 49:24
Yes, we did.
Andy B: 49:26
Did we? Video annotation... oh yeah, I wasn't on that, I don't think.
Andy B: 49:29
Yeah, even though I should have been. Anyway, the interesting thing about this is, we've talked a lot about how we think that subject matter experts, SMEs, are a big part of making this AI work. So, we talked about this: this is computer vision. If you've heard some of our previous episodes, you know that computer vision is the domain of AI that's all to do with sight: videos, stills, images. And a very common problem in that is something called object detection, this is the word we've been throwing around, which is, you basically teach a machine learning model to identify and name and localize an object on an image.
Andy B: 50:07
So to do that, it's what's called a supervised machine learning problem. That means you need training data: lots and lots of labeled examples of what you want the AI to do, so it can perform like a monkey-see-monkey-do operation. And for some of this training data, part of the reason they're crowdsourcing it is, like, not everyone's gonna be really good at identifying, this is a nova, type two, type three, from this very slight light change on this still image. So you need subject matter experts, real astronomers, to, like, sit there and very patiently label and tag a bunch of different examples, to then train the models, to then scale what the astronomers can review, and then have them do the human in the loop, and then feed that back into the model. So you make this kind of virtuous cycle where, like, the person shows the computer, the computer finds for the person, the person shows the computer. That's how these computer vision object detection algorithms work.
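To make that labeling-and-retraining loop a little more concrete, here's a minimal stand-in sketch. The label format (a class name plus a bounding box) is typical for object detection, but the filenames, labels and helper functions are hypothetical, not the actual NASA or Image Labeler pipeline.

```python
# Stand-in sketch of the virtuous cycle: astronomers label, the model trains,
# the model proposes new detections, the experts review, and the reviewed
# labels go back into the training set.

# One labeled example: an image file plus the boxes an astronomer drew on it.
labeled_examples = [
    {"image": "sky_patch_001.png",
     "boxes": [{"label": "candidate_nova", "x": 120, "y": 88, "w": 6, "h": 6}]},
]

def train(examples):
    """Stand-in for fitting an object detector; returns a fake 'model' function."""
    print(f"training on {len(examples)} labeled images")
    return lambda image: [
        {"label": "candidate_nova", "x": 64, "y": 40, "w": 5, "h": 5, "score": 0.7}
    ]

def expert_review(image, predictions):
    """Stand-in for the astronomer: keep plausible detections, discard the rest."""
    return [p for p in predictions if p["score"] > 0.5]

model = train(labeled_examples)
for image in ["sky_patch_002.png", "sky_patch_003.png"]:
    reviewed = expert_review(image, model(image))
    labeled_examples.append({"image": image, "boxes": reviewed})
model = train(labeled_examples)  # retrain with the expert-reviewed labels
```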
Kiran V: 51:09
I'm actually curious what the accuracy is of these systems. Like, what is the... I found something on this.
Andy B: 51:15
I'm reading it right now, I have an article open.
Andy B: 51:18
So, neural networks, which use many interconnected nodes, are able to learn to recognize patterns. They're perfectly suited for picking out patterns of galaxies. Astronomers began using neural networks to classify galaxies in the early 2010s. Now the algorithms are so effective they can classify galaxies with an accuracy of 98%. And then there's another stat on exoplanets. Astronomers have discovered 5,300 known exoplanets so far. By measuring the dip in the amount of light coming from a star when a planet passes in front of it, AI tools can now pick out the signs of an exoplanet with 96% accuracy.
Andy B: 52:00
And for those who are stats nerds like us, I do have the papers, I can link them on our website. There's one from 2020 on exoplanet detection using machine learning, and they have a paper on their technique: accuracy of 98, with a recall of 82 and a precision of 63. And what that means is that not every planet is going to be discovered in an image; a recall of 82 means that if you have 100 possible planets that could be discovered in an image, the AI is going to help you discover at least 82 of them.
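For listeners who want the arithmetic behind those two terms, here's a tiny worked example. The counts are made up to match the rough percentages above, not taken from the paper's actual confusion matrix.

```python
# Made-up counts to illustrate what recall and precision mean; these are not
# the numbers from the exoplanet paper.
true_planets = 100      # real planets actually present in the images
flagged = 130           # candidates the model raised
true_positives = 82     # flagged candidates that really are planets

recall = true_positives / true_planets   # 0.82: it found 82 of the 100 real planets
precision = true_positives / flagged     # ~0.63: about 63% of what it flagged was real
print(f"recall={recall:.2f}, precision={precision:.2f}")
```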
Joe C: 52:37
Very cool.
Kiran V: 52:39
That's awesome.
Joe C: 52:43
One more thing worth mentioning, and I think we could probably talk about this in every episode, but AI is being used in the manufacturing of all things space and our space infrastructure and maintenance. I don't have any specific things to cite here, other than to say, you know, when we use AI on the ground, in factories and in labs, and, you know, our scientists are using it, those things trickle up literally into space, and it sort of just shows you that we're tackling problems across the whole spectrum, from the ground all the way into space, using AI, and together, you know, it forms like a backbone that enables us to go to space. I'm sure they're building machinery for space here on Earth that we couldn't build without AI, you know, and again, meaning we can't get into space without it happening here on the ground.
Andy B: 53:49
So I just want to wrap up some of these use cases and confirm that this is actually very different from some of the previous episodes we've talked about, and what you might be personally experiencing with AI, because, as far as I'm aware, there's not a lot of generative AI use cases and not a lot of NLP. So even though the media talks about these AIs talking to you, real space exploration that's being done with machine learning right now is really based on light, sometimes sound, but mostly light, human-visible and not. It's all in the world of computer vision, as far as I know, maybe some regressions for maintenance, but I'm not aware of any, like, NLP or gen AI use cases. Are you guys?
Kiran V: 54:36
Not that.
Joe C: 54:36
I've heard of, no. That's a very interesting point, we don't have that. I'm trying to think of, like, how you'd use ChatGPT in space.
Kiran V: 54:44
Yeah, and I think, I mean, if you think about it, it makes sense, right? Like, one, sound: you can't hear anything in space. In case you guys weren't aware, when you're in a vacuum, sound can't travel. Sound travels by particles hitting each other and that wave eventually making it to your ear.
Kiran V: 55:06
Right, there's no sound in space, and then when you're in space, there's no one to communicate with. But I wonder, you know, we could use AI, if we were ever to find aliens, to actually help us communicate with them, because we could actually have the AI learn their language in a matter of hours or days, probably, if we're able to access, you know, their information systems, and then, you know, find correlations of behaviors, or, you know, objects, or, you know, thoughts or patterns, to then translate with an alien species. So one day maybe Google Translate will support Martian or other languages.
Joe C: 55:56
I know they're working on it for pets. Well, not even pets but animals in general, which, you know, have ways to communicate that aren't human.
Andy B: 56:06
And then they're going to do it for plants, and then we're all going to be like what do we eat? The plants are sad, the insects are sad, the animals are sad.
Joe C: 56:12
Yeah, I remember watching something about, like, should we be understanding what our animals are saying? Because, like, what if your dog tells you they hate you? I mean, it's like Dr...
Kiran V: 56:22
Doolittle, right, he, like, started going crazy because he can't handle all the animals constantly talking to him and complaining about things.
Andy B: 56:30
Yeah, Well, they were also very demanding of his time and resources.
Kiran V: 56:35
That's the problem.
Andy B: 56:35
They were like a person that understands us finally, and then they laid into him.
Joe C: 56:39
Yeah, yeah, just sort of imagining here, like, generative use cases in space, and thinking about, like, rovers, Kiran, you were talking about. Maybe in the future there could be, like, AI that arrives at a planet and then sort of generates the right rover for the terrain, and that sort of thing. That'd be cool.
Kiran V: 56:58
Or even, like, sets up a base camp for humans. Right, if you have, like, a 3D-printing arm and you have these things and you can understand the environment, now I can determine, like, you know, what materials should I use to build it, what should a foundation for a building look like?
Joe C: 57:15
Yeah, it could understand the atmosphere.
Andy B: 57:18
Actually, there's a video game I played that does exactly that. I think it's called Red Planet, on Mars. I mean, it's a little, like, space exploration sim video game I was playing, and your goal is to terraform and inhabit Mars. But it's this very long game. You send drones first: drones collect resources and do surveying. A second set of drones starts to, like, build infrastructure for bigger drones. A third set of drones starts building infrastructure for early humans. Like, you chain these things, and you let, again, robots do what they're good at and people do what they're good at, which I think is a really important thing for us to consider as a species.
Andy B: 57:59
Like, not everything is appropriate for AI, and there's some things that we're just, I think, always going to be better at than them, and vice versa. There's some things where it's just, like, let a machine do it, they're just going to be better at it.
Joe C: 58:10
Yeah, or things humans can't ever possibly do, yeah.
Kiran V: 58:14
And actually this kind of makes me think of The Martian. Have you guys read the book or seen that movie? It's a movie with Matt Damon. Premise, in case you guys haven't heard of it: this guy goes into space with a crew, and they're trying to terraform Mars. And so the first crew lands on Mars and they have, like, all these expeditions planned, but then the first one goes awry because of some space storm or whatever, and they have to, like, emergency evacuate the planet. But then one of the guys gets left behind, because they think that he died; his face mask broke in the storm, and so they're like, oh well, he's dead and we can't get him back, and so we're going to just leave him. But it turns out that, like, you know, some of the sand or whatever had blocked the hole in his spacesuit, so he, like, survived, and now he's, like, this guy living on Mars by himself.
Kiran V: 59:13
And what was interesting to me, as we're talking through this, is there was a lot less AI in that movie than I maybe would have expected, just given all of the AI in, you know, pop culture and media, right. And, you know, to think about a world where humans are able to get to Mars and bring enough stuff to terraform and do all this stuff, like, you'd think that they would also have AI aiding them through this process. But it's interesting to think, like, it was very manual, and he was, like, growing his own plants and, like, had to do all this stuff to survive for years on Mars. So it's kind of like a flip of what we've seen in, like, you know, so many of the other sci-fi. Oh yeah, I got balloons.
Joe C: 59:58
Did you see that?
Kiran V: 59:59
I did.
Andy B: 1:00:00
What was that? I don't know. Where did that come from?
Joe C: 1:00:05
I was going to say, maybe for an AI, sorry, for an author, AI is sort of a cop-out sometimes, because it's magical, you know, and so it's easy to say, like, oh, AI did this, and AI just did that. But to figure out how a human might do something in a really complex situation could be a fun space for an author.
Andy B: 1:00:32
Yeah, so, a lot of computer vision work, some regressions, in the space of space exploration. What we imagine is actually not very similar to what the reality is. Like, all the movies and TV shows are showing these, like, conversational agents and very humanoid robots, and then instead what we're doing is, like, large-scale data mining. But it's cool. You can only build what you can imagine, right? So it's good that we have big imaginations. Yeah, neat.
Joe C: 1:01:05
Very neat, very fun topic.
Kiran V: 1:01:09
Yeah and yeah. There's so much more here that I want to get into.
Andy B: 1:01:14
That we won't, because one of our goals is to make our episodes shorter. And, on that note, I really hope that this was interesting to folks, that you learned or considered something that you hadn't before. I know I definitely learned and was really surprised by trees in deserts. Surprise trees. And maybe, yeah, diamonds in space, who knew?
Andy B: 1:01:34
And that this helps you understand a little bit more about how AI and machine learning are impacting the world around you. If you're listening to us for the first time, let us know what you think. Please rate, subscribe and, if you're willing, share our show with somebody that you like and trust, and have a great rest of your week, everyone.
Joe C: 1:01:52
Thank you, bye-bye. [Music playing]