DESCRIPTION: Have you ever wondered how robotic arms can be controlled using your mind? This week on Your AI Injection, Deep speaks with Keiland Cooper, a cognitive scientist and neuroscientist at the University of California, Irvine, about the intersection of AI and neuroscience research. Cooper uses machine learning to study how the brain learns, remembers, and makes decisions.
We learn how deep neural networks can be used to classify and model brain activity, and how this information is being used to develop technology for people with disabilities. Cooper also teaches us how studying the brain can help improve existing AI algorithms. As one means to explore this, we discuss ContinualAI, a non-profit research organization co-founded by Cooper with a focus on continual learning.
Learn more about ContinualAI: https://www.continualai.org/
DOWNLOAD: https://podcast.xyonix.com/1785161/10516829?t=0
AUTOMATED TRANSCRIPT:
Deep: Hi there, I'm Deep Dhillon. Welcome to Your AI Injection, the podcast where we discuss state-of-the-art techniques in artificial intelligence with a focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful.
Welcome back to Your AI Injection. This week, we'll be looking at the connection between AI and neuroscience. We've got Keiland Cooper on today. Keiland is a cognitive scientist and a neuroscientist at UC Irvine, and an NSF research fellow whose work is focused on understanding how the brain learns, remembers, and makes decisions. Keiland is also a co-founder of ContinualAI, a nonprofit research org focused on continual learning for AI.
Hey, Keiland, start us off today by telling us a little bit about your background as a neuroscientist, and what made you interested in the intersection of AI and neuroscience?
Keiland: So I'm really happy to be here, and I'm really excited about the reconvergence of neuroscience and AI. They really have always been very interrelated. Deep learning is a great example. Early deep learning models were called connectionist models, and connectionism grew out of mathematical psychology, where psychologists would fit models to their psychological data and attempt to understand what a participant could be doing given their task. Within that branch of mathematical psychology and cognitive science, people started using connectionist models to model some of these results. So if you look back at really early deep learning papers, they're closely trying to fit psychology results, but with early forms of neural networks, same with the early perceptron days. So deep learning is a very good example of something that grew out of these intermixing fields.
Deep: I'm going to borrow a metaphor that maybe you've heard before. Folks who work on airplanes don't always look at birds. So when people work on machine learning and AI systems, do they need to look at the brain?
Keiland: So I see it as a spectrum where the axis is, let's say, biological plausibility, and pick a function. Search is probably a good example. There are a load of ways to do search, some of which are very biologically plausible and are potential mechanisms by which the brain could be searching for things, and others are probably not as biologically plausible. And that could probably go for any function that you're trying to build your AI to achieve. So certainly, just like you can fly through the air on a plane, which uses different mechanisms than a bird does but where some aspects of the physics still apply, you can certainly solve the same task that a person does with different mechanisms. Chess, for instance: we beat Kasparov with AI at chess, and it used a very different mechanism than what was probably going on in Kasparov's head. And in some cases those different approaches have pros and cons, whether you use, say, symbolic programming like the good old-fashioned AI days to play chess, or an AlphaGo Zero approach to deep-learn the entire game from scratch, or whatever handful of mechanisms people are using to solve chess. You still solve the task; there are just different ways of going about it.
Deep: So that makes me think: where do you see the human mind operating in a similar fashion to the pattern-recognition models that we think of today?
Keiland: Sometimes it's kind of hard to say where this is really using the same mechanism, and where we're just applying a metaphor from one space to another. One good example that's pretty recent and has a lot of people excited, me personally included because I used to work on these kinds of models, is the semantic, deep embedding models, like Word2Vec and models at that level. These are models that learn the quote-unquote meaning of words in a space, a manifold. Each word is represented by a vector, and the distance between two vectors corresponds to the similarity between the two words. So in the classic Google days, when they released Word2Vec, words like "king" and "queen" would be close in that space. If you represent words as vectors, then you can do computations on those vectors, and those relationships are learned in the space.
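To make the vector arithmetic Keiland describes concrete, here's a minimal sketch using gensim's downloadable GloVe vectors (the specific model name and API calls follow gensim's documentation at the time of writing; any pretrained embedding would illustrate the same idea):

```python
# Minimal sketch of word-vector similarity and analogy, assuming the
# gensim library and its downloadable "glove-wiki-gigaword-50" vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # each word -> a 50-d vector

# Nearby vectors correspond to similar words.
print(vectors.similarity("king", "queen"))    # high similarity score

# The classic analogy: king - man + woman lands closest to "queen".
print(vectors.most_similar(positive=["king", "woman"],
                           negative=["man"], topn=1))
```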
Deep: So do you see them as similar in the sense that when we train these, let's take word embeddings, for example, we're learning the meaning of words through the prediction of what the next word would be? That's a context where we're reading a bunch of text and trying on for size how a word or a meaning would fit there. Do humans do a similar thing through the exploration of language, or...?
Keiland: So what we at least see in the data that has people excited is that some people recorded from human neurons, actually, and they found that the meaning of words as represented by human neurons was best fit by these kinds of latent semantic word-embedding models. And that was exciting, because it was a bit of a validation of this approach. There's a lot more work to be done, but it was an early validation saying, okay, maybe there is some convergence here. So you can sit a person down and have them produce speech or listen to speech over multiple trials while you're recording from the brain. And if you're recording from-
Deep: EEG signals or something else?
Keiland: Yeah, so this would be direct probes implanted in the brain. These would usually be epilepsy patients, so the doctors would have the probes in there anyway to help them with their epilepsy. But in between clinical visits, they're just sitting in the hospital room, so they're offered the chance to do these experiments.
Deep: Walk us through what that means. What do those signals actually look like?
Keiland: So those are probes, which are thin wires physically implanted deep inside the brain, usually in the hippocampus or some other cortical region surrounding it. In that case, it's very similar to what I usually do: you surgically implant a wire into the brain and record from a population of the neurons inside that region. Each probe can pick up 10 or 15 neurons, and if you have multiple probes, you can get a hundred, 200, 300 neurons on a good day. What you're recording is the electrical activity of the neurons, and particularly the spikes. Neurons fire action potentials; that's their method of communication. So say you have 50 neurons and you're recording for, I dunno, 10 minutes per trial, or a minute a trial, whatever: you can see the patterns of each neuron talking to the others in that population you're recording from, almost down to where it ends up being a time series of ones and zeros - did that neuron spike, or did that neuron not spike? There are other signals you can record, like the local field potential. If you've ever heard "brain waves" thrown around, that's what those are. But the content of the neural communication that's probably going on in the brain is largely going to be in the spiking activity.
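As a concrete picture of that "time series of ones and zeros," here's a small sketch (synthetic spike times, not lab data) that bins spike timestamps from a population of neurons into a binary neurons-by-time matrix:

```python
# Hypothetical sketch: turning spike timestamps into the binary
# neurons-by-time matrix described above. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, duration_s, bin_ms = 50, 60.0, 1.0

# Fake spike times: each neuron fires at a roughly Poisson rate of ~5 Hz.
spike_times = [np.sort(rng.uniform(0, duration_s, rng.poisson(5 * duration_s)))
               for _ in range(n_neurons)]

n_bins = int(duration_s * 1000 / bin_ms)
spikes = np.zeros((n_neurons, n_bins), dtype=np.uint8)
for i, times in enumerate(spike_times):
    idx = (times * 1000 / bin_ms).astype(int)
    spikes[i, idx] = 1  # 1 = that neuron spiked in that millisecond bin

print(spikes.shape)            # (50, 60000): 50 neurons x 60 s of 1 ms bins
print(spikes.sum(axis=1)[:5])  # total spike count for the first 5 neurons
```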
Deep: And when you put these implants in, are you taking care across patients to put them into the exact same spot when you're trying to do some kind of comparison experiment, or...?
Keiland: In my work, which is largely in animals, we take very, very great care to do that. In humans, you try, but it's a bit harder, because obviously the primary reason these patients have electrodes in their heads isn't for the studies; they have them in for medical reasons. So it's up to the doctor where the optimal place for their clinical treatment would be. But there have been enough people now, in enough studies, that it isn't a problem in terms of that variance.
Deep: Do you see similar reaction patterns across a particular region in the brain?
Keiland: Usually, you'll take that information and fit a classifier to it. So say I present an animal with an item, like an odor or a visual stimulus. I'll take the neural activity that we see over the period after the stimulus is presented and fit some sort of classifier for that animal. And then afterwards you can look across animals to build your significance.
Deep: So like you've got a hundred rats, all with the implants in, and you're giving them all a comparable stimulus, like a particular scent or something. Now you've got the signals from rat 1, 2, 3, up to a hundred, and you train a classifier to differentiate that particular scent from another scent based on the neural activity pattern.
Keiland: Yeah, exactly.
Deep: And now you introduce the 101st rat and you can detect the stimulation.
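A hedged sketch of that train-on-100-rats, test-on-the-101st idea, using scikit-learn with synthetic spike-count features (the feature choice and classifier are illustrative assumptions, not the lab's actual pipeline):

```python
# Illustrative cross-animal decoding: train a classifier on spike-count
# features from many animals, then test on a held-out animal.
# All data here are synthetic; this is not any lab's real pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_rats, trials_per_rat, n_neurons = 101, 40, 50

X, y, rat_id = [], [], []
for rat in range(n_rats):
    for trial in range(trials_per_rat):
        odor = trial % 2                              # two odors, alternating
        rates = rng.poisson(5 + 3 * odor, n_neurons)  # odor B drives more spikes
        X.append(rates); y.append(odor); rat_id.append(rat)
X, y, rat_id = np.array(X), np.array(y), np.array(rat_id)

train = rat_id < 100        # rats 0..99 for training
test = rat_id == 100        # the held-out "101st rat"
clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
print("held-out rat accuracy:", clf.score(X[test], y[test]))
```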
Keiland: Yeah, usually that's the hope, because animals largely use the same algorithm. And actually in our lab, in our particular paradigm, we ran a human study alongside it - an fMRI study, where you put people in a scanner, don't implant anything in their brain, and infer from blood flow, via the scanner, which parts of the brain are active. We gave them the same types of experiments, and we showed, at least in our paradigm, that animals and people largely use the same algorithms, effectively, to solve our task.
Deep: So that's kind of curious, right? Like that doesn't have to be the case.
Keiland: No, it doesn't.
Deep: I mean, you know, you could imagine, like these networks are quite elaborate. Like you could imagine very, very different patterns, maybe even being individualized. And so that's interesting to me.
Keiland: Yeah. And it certainly would depend on the task you're using. In our case, we're interested in how people learn sequences of information in the hippocampus, and in that case it was conserved. But you could imagine there are certainly things humans can do that animals can't, so if you chose a really human-centric task, it would probably be hard to find similarities. And there are probably some things animals can do that people can't, also. But at least in our paradigm, and probably with most things, there is a great deal of conservation across species.
Deep: So what do you mean by conservation? Sorry, I didn't quite follow you. Meaning that there is a distinct pattern for a distinct stimulus or something?
Keiland: So the patterns will change, but when you decode the patterns, the same relationships between them will be present. Even within one species, you're going to have different numbers of neurons, you're going to be recording from different neurons, and they're going to have slightly different connection patterns. But by and large, even evolutionarily across most mammalian species, the parts of the brain really don't change that much. They grow in size a little bit, and there are minor variations here and there, but in a rat the visual lobe is in the back of the head, in a monkey the visual lobe is in the back of the head, and in you the visual lobe is in the back of the head. Your frontal lobe is in the front, and the temporal lobe and the broad connection patterns between all the other parts of the brain are really heavily conserved evolutionarily. Which is great for cross-species work, because this is why we use animals in the first place and why we've learned so much from animals that we can apply back to humans - not just basic science, but even medicine and clinical work - because evolution was lazy and kept a great deal of conservation between animals and people, and maybe one day machines, if we build them biologically.
Deep: Huh. So what are the general big buckets of how machine learning and AI are used in neuroscience?
Keiland: That is a great question. And I think right now you can split it into two main categories. One category is certainly classification, like we were talking about earlier. People have used all sorts of different classifiers: support vector machines, linear regression, a custom equation, or you can try to fit a dynamical system to your data, right? Neuroscientists have used a host of analytical methods to try to relate the activity observed in neurons to the world. Deep neural networks just so happen to be a really good general function approximator and very good at classifying data. That's how we use them in our recent work: as a better classifier than existing approaches.
Deep: So that's one bucket.
Keiland: The other bucket, I would say, is bringing back the original feel of them, which is modeling. Can you use a deep neural network to model a system? Not necessarily caring about classifying the data, but could you recreate the data in some way? Could you just use it as a modeling approach?
Deep: Give me an example of that.
Keiland: So one example would be, say you have a reaching task, where you have an animal or a person reaching forward. In brain-computer interface work, you want to try to figure out what the neural computation of reaching is. You could classify the data just by taking the neural data you record and building a classifier, and this is actually a good example, because people have used almost every approach in the book on it, because of brain-computer interfaces and people trying to use this clinically. So people have used everything from dynamical systems and PCA to support vector machines, and now deep neural networks, to quantify reaching. Or think of a robotic arm: if you're not even a neuroscientist but a roboticist, and you're trying to build a robotic arm to reach, couldn't you just build a model to make that robotic arm learn how to reach forward, grab something, and maybe carry it? Those are usually different approaches, where one would be more of a classifier and the other would be more of a dynamical neural network, which would probably take in other information, like visual information, and have planning functions and these other things involved.
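As one small example of the "modeling" bucket, here's a hedged sketch of reducing population activity to a low-dimensional trajectory with PCA, a common first step in the dynamical-systems approaches Keiland mentions (synthetic data; real pipelines are far more involved):

```python
# Hedged sketch of the dynamical-systems flavor of analysis: project a
# population of firing rates during a reach onto a few principal
# components and inspect the low-dimensional trajectory. Synthetic data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_neurons, n_timebins = 100, 200

# Fake rates: two latent signals (think reach direction and speed) mixed
# into every neuron, plus observation noise.
t = np.linspace(0, 1, n_timebins)
latents = np.stack([np.sin(2 * np.pi * t), t ** 2])   # (2, time)
mixing = rng.normal(size=(n_neurons, 2))
rates = mixing @ latents + 0.1 * rng.normal(size=(n_neurons, n_timebins))

pca = PCA(n_components=3)
trajectory = pca.fit_transform(rates.T)  # (time, 3): the neural trajectory
print(pca.explained_variance_ratio_)     # first 2 PCs capture most variance
```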
Deep: I feel like there's a meta thing going on here. So you've got a behavior, and as a neuroscientist you want to understand what's going on in the brain to guide that behavior, or realize that behavior, like in this reaching example. Is that the general question that neuroscientists are asking? And what are you doing with that information? What happens when you understand that there's this general template of neural activity that happens during a reach? What do you do with that then?
Keiland: Yeah, I mean, reaching is probably a good example, because depending on the lab and the PI and their interests, you will get people interested in everything, ranging from the very basic science to bedside clinical applications: I just want to build the hardware that I could put in the brain of someone who, say, doesn't have function of their arm, to drive a robotic arm to help them reach. Very clinically focused. They probably don't even care how the brain does it; they just want to build the best brain-computer interface possible to make that happen. And then you have people all the way on the other end of the spectrum who just care about: what is reaching? How is reaching represented in biology in general? What is the cross-species picture of reaching? Is it a shared mechanism between species, or a different mechanism? What brain regions are involved? Is it just the motor cortex, or do you need information from other brain regions? And everything in between on that scale, plus people focused on methods: what would be the best way to decode it? You could go on and on about every aspect of that one problem. And that goes for the whole brain: the brain is big, and there are thousands of neuroscientists working from every different angle and every approach you can think of to try to untangle all of these different features. It really is a heterogeneous, transdisciplinary field, I would say.
Deep: You're listening to Your AI Injection, brought to you by xyonix.com. That's x-y-o-n-i-x.com. Check out our website for more content, or if you need help injecting AI into your organization.
One thing you said really struck me. You might have somebody that's just trying to get a disabled patient to be able to operate a mechanical arm, and in that scenario you have some kind of chip of probes implanted into the brain. So the first question I suppose I would have, as the person designing that, is where to implant it. You might have some high-level guidance from the literature on where to implant it. But the brain also feels pretty plastic in many ways, and it feels like maybe you could stick it just about anywhere and the brain would start to adapt - will the neurons figure out how to move this hand? How do those things happen?
Keiland: Good point. We know there are general - I'm going to use probably a dangerous term - functional parts of the brain. We know this vaguely from a lot of people who have lesions of these parts and lose function, from a lot of animal research and fMRI research, just convergence from decades of neuroscience and a bunch of different studies. So, for instance, my favorite brain region, the one I study the most, is the hippocampus. We know it's very important for memory: if you don't have the hippocampus, you can't form new memories. We know that there's a visual system in the brain, usually in the back of the head, the visual cortex, and this is where a lot of the visual processing is going on. Motor, and especially reaching, where a lot of this information converges, is kind of right in the top sliver of your head. If you draw a line over the top of your head between your ears, that's roughly where what's called the motor cortex is. And we know from a bunch of studies, even in humans, that the motor cortex has a map. You could stimulate, say, a part of the motor cortex and a patient would feel a little tingle in one finger; then you stimulate another part and they'd feel it in another finger. You can pretty much map out the entire body on this little strip of cortex. So in the brain-computer interface world, for reaching and robotic arms and movement, that's been a key area where people have really honed in, not only because the mapping is so clear, but also because the functions that brain region is computing seem a little less complex than some other regions. Which is why you hear a lot about paraplegic patients being able to use computer cursors and robotic arms and these things, but you haven't really heard much about, like, a memory implant yet. Because the computation in memory seems to be a lot more difficult than, you know, the computation-
Deep: So it's not as obvious how exactly to interact with the memory system.
Keiland: Exactly. And really what it ends up being is that the dimensionality of the signals is a lot higher. If you're into dynamical systems, it's just a much more complicated dynamical-systems equation than it would be for reaching; reaching is just easier. So our whole lab is trying to understand what it is about memory that's different from the rest of the brain, and how the computations of memory differ from the rest of the brain. And by no means is motor cortex easy. It has taken thousands of PhD scientists working for decades to make what we do have now possible. There was just a recent study where a patient could tweet for the first time, just by thinking about it.
Deep: Yeah, wasn't there somebody in the papers recently, some guy, I think it was in Britain, who's a quadriplegic, and with no interface to the outside world they tapped in, and the first thing he said was, "I want a beer." It's like, okay, well, that seems like a reasonable thing to want. I probably would too, after all that. So there's two sides. There's reading the information the brain is sending you, but then there's also pushing stuff into the brain, using these same probes to stimulate sequences of events. And presumably this is how they're doing things like putting imagery into people's minds and other stuff. Talk to us a little bit about the difference, like what's happening there.
Keiland: So the former is certainly a lot easier than the latter. It's easier to read from the brain and record signals, because it's passive: you just implant your wires and read the electrical activity they pick up from, hopefully, a population of neurons. It's harder to pass current back through those wires and stimulate neurons in such a way that they react how you want them to. We certainly can, and neuroscientists definitely do, and not only through electricity. A big method that's out now is called optogenetics, where, long story short, instead of just using wires, you use light with genetically modified neurons. You put a gene into the neuron, and it actually came from jellyfish, the way that jellyfish use-
Deep: But in an animal, in a specific neuron, you're going to implant something into that specific neuron that allows it to respond optically, or something?
Keiland: Yeah, yeah. So, in one direction, every time a neuron fires, you can make it emit light. People use this for what are called optical recordings. You can see a population of neurons in the brain, and every time they fire, they light up. It's actually some of the most gorgeous science you've ever seen, because it makes the brain look like the night sky.
Deep: Do you have to do this at the time you create the rat, or in the rat's actual, specific neurons?
Keiland: You can do both. So some people will breed rats with this gene, and the gene is seemingly harmless to the animal. There's no difference - if there were a difference, we couldn't use the method - but it works fine. It doesn't seem to have any effect on the animals; they all behave fine.
Deep: It just lights up? The neurons in their head will just express light?
Keiland: Yeah.
Deep: Wow.
Keiland: And you can flip that: instead of using electricity to generate light, you can use light to generate electrical activity and make that neuron fire. And so-
Deep: And why would you do that? Versus electricity, is it just less invasive?
Keiland: Yeah. If you just put wires in someone's head, or an animal's head, you don't have the best spatial resolution. You know you're recording from neurons, but you don't always know that you're recording from the same neuron every single day, because you're blind, right? It's like if you stuck a wire in a bowl of spaghetti: would you really know which strand of spaghetti you're recording from? You know you're touching one of them, but you don't know which. But if you instead-
Deep: Cause you're getting a cluster of neurons?
Keiland: Exactly. Yeah. But if you look optically - so people will put a microscope up to the animal's brain and see that spatial picture - you will know exactly which neuron you're recording from on each day, and what it does over time. All methods in neuroscience, and probably all of science, have pros and cons. The optical ones are a bit slower, so you don't necessarily have the best temporal resolution, but you have amazing spatial resolution. Whereas electrically, you can record really, really fast activity in the brain, but you just don't always know exactly where you're recording from.
Deep: How do you get the brain to receive information in the way that you want? Like when they put an image into somebody, you know, what are they doing?
Keiland: So there's really not too much of putting images in people's heads yet. You can try-
Deep: Oh, I thought I read a paper on that years ago.
Keiland: So it's crude. In people, what they usually do is called TMS. It's just a magnet that sits outside someone's head, and you can crudely stimulate some neurons in a region. Sometimes, depending on where you stimulate, different things will happen. It's a newer method, and it's kind of crude, but it has already been tried in depression research and some clinical work.
Deep: So you can get some kind of representation visually, but you can't really control it. Is that what you’re saying?
Keiland: Yeah, sometimes. We can stimulate parts of the brain in humans, and sometimes things will happen, and sometimes we'll have a hunch of what will happen depending on where we are. Just like I was saying: if you're in the motor cortex, and I have a pretty good idea that this part of your brain represents your hand, you're going to feel a tingle in your finger or something. Or take the visual cortex - there's actually a part of the human brain that specifically represents faces, and people have stimulated that part of the brain and subjects will see faces and things that aren't there.
Deep: Ah, I see, but it's not necessarily controllable. It's more like you're just poking around and stuff that's happening.
Keiland: Yeah. We certainly cannot, like, implant memories or implant... that is science fiction for now, and that is not going to happen. But it's really important to do, because when you have a participant say, "I see this," and it's not there, philosophically it's a really interesting point: the brain creates our mental world. It creates the world that we see and how we interact with it. And if you have a lesion in the brain, or if you mess with what the brain is doing, it's going to change how you interact with the world.
Deep: I was thinking of a scenario, maybe more like your robotic arm one. So you've got the robotic arm, you're able to read the mind, and the person is able to somehow control the arm. But I was thinking: can you send tactile feedback back? Say you've got some pressure sensors on the fingertips of the arm, and you send that signal somewhere. Because the person's own brain is involved in moving the thing, and there's some higher-level visual processing connecting that thing with what they're doing, I was wondering if you could put the tactile sensor information somewhere in the brain, and then can the neurons learn that that's where it's coming from?
Keiland: That is certainly what people are doing now, and they're starting to make progress on it, actually. In the case of reaching, it turns out that tactile feedback is more important than you might initially think. If you've ever had a numb hand or a numb leg, you can still move it, but you don't have the right dexterity, you don't know how hard to grip, you have all sorts of other problems. So if you had a robotic arm without feedback, it'd be like using a numb hand: you can use it, but you don't get that feedback, which ends up being very, very important.
Deep: Yeah. I mean, knowing that you're squeezing this glass really tight is a very valuable thing to know.
Keiland: Yeah. And is it plush? Is it rigid? Robotic systems definitely need to learn this.
Deep: So in this example, they're taking the pressure sensor data. Where do they put it in the brain? And what's going on within the neurons around this new information that's suddenly arriving at them, and how is the brain figuring out how to connect the two? Or do we not know yet?
Keiland: So I'd say it's early days, and there are different ways to do it. The easiest way would be, say, if you lost your arm in an accident and you're trying to get a robotic arm, and you're lucky enough to have some of the sensory neurons left - the ones that go from your fingertip to your spinal cord and then up to your head. It ends up being a loop: you have the command neurons that tell the muscles in your arm to move, which go from brain to spinal cord to arm, and then you have the sensory neurons, which are actually a different pathway - sensory receptors, say, in your fingertip that project to the spinal cord and then up to the brain. Two completely separate pathways that the brain is using there. So the easiest way, if you have some of those sensory neurons left, is to stimulate those to try to make up for the deficit, and people have tried that. It's a lot harder to do sensory stimulation in the brain itself, but you could, and people have tried. But like I said, it's early days there, and I'm not quite as close to that research as I am to other things.
Deep: So shifting gears a little more back to the AI world: is there something to be learned for our artificial-neuron world, in terms of how we represent a neuron mathematically, from the physiological, electrochemical world of the brain's neurons? Is there something to be learned from how mammalian brains work at the neuronal level versus how our mathematical neuron representations work?
Keiland: I think it depends, and I think it goes back to your question earlier about the spectrum of AI, going from a very biologically plausible artificial intelligence to something that solves the same task in a way that's completely different from how the brain would solve it. Most neural networks today, I would say, are very, very loosely inspired by biology. Now, a lot of the time what we've found is that when you do use some of the tricks the brain might use, you can solve the problem, and that's nice. Evolution has had a lot of time to work through and solve a lot of these similar problems. But you need not always use that same mechanism to solve the problem. Still, there have been a lot of examples, especially in neural networks, where borrowing a concept from the brain, even loosely, ends up being useful. An example: convolutional neural networks. Most people have heard of them. The convolution operation was inspired by the retina in your eye, because that's effectively what the retina in the back of your eye is doing: edge detection, feature detection. And even neural networks themselves are supposed to be loosely organized the way brains are, but not necessarily. My favorite example is all the stuff DeepMind does. It seems every time they have a new breakthrough, they go back and say, well, we had this breakthrough because we borrowed something from the brain. That was the case for Atari, that was largely the case for AlphaGo, and it will probably be the case for a lot of other things. They hire a lot of neuroscience PhDs, so maybe that's why. So I do think there is a lot to learn from the brain, but I'm not an evangelist in the sense of thinking the only way to get to AI is by mimicking the brain, because I've seen evidence to the contrary. I've seen computers do things way better than a brain ever could.
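To make the retina analogy concrete, here's a minimal sketch of the convolution operation doing edge detection on a toy image (using scipy; the Sobel-style kernel is a standard choice, picked here purely for illustration):

```python
# Minimal illustration of convolution as edge detection, the operation
# loosely inspired by the retina. Toy image, scipy only.
import numpy as np
from scipy.signal import convolve2d

# A toy image: a dark square on a bright background.
image = np.ones((8, 8))
image[2:6, 2:6] = 0.0

# A Sobel-style kernel that responds to vertical edges.
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=float)

edges = convolve2d(image, kernel, mode="same")
print(np.round(edges, 1))  # large values mark the square's left/right edges
```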
Deep: I mean, clearly. I certainly don't read every single piece of information published on the web and formulate the ability to translate from pretty much any language to any other language. That's not the way we approach language and learning, for example.
Keiland: And not all brains are the same, right? One of my favorite examples is the octopus brain. Octopi are almost as intelligent as monkeys in a lot of cases; they can do things as smart as monkeys can.
Deep: They have like totally distributed systems, yeah.
Keiland: Exactly. Each arm has its own, like, mini neural processor; you could almost call it a mini brain. And the organization of these things is completely different, in a lot of ways, from mammals in general, yet they can still solve a maze as fast as a rat, and all these other tasks. If you put them up against monkeys or a bird, it's a completely different architecture. So I'm not as tied to the substrate; I think the trick is to not think about the substrate as much as the computation. That's where the real power will be: if you can understand what the ultimate algorithm is that the system is using, I don't really care how it's realized, in what hardware, what software, what have you. The trick is the algorithm, the computation, and that, hopefully, will be somewhat conserved.
Deep: Let's take an example of that. There are certain areas where we just don't do that well with machine learning and AI systems. For example, if you look at where we outperform humans, it tends to be in cases where we have plenty of training data and the systems can really exhaust all kinds of permutations in training. But humans are uncannily good at taking a really small number of examples and generalizing and learning from them.
Keiland: Yeah. Yeah.
Deep: So what do you think are some plausible biological reasons? Where are the architectural shifts between what's going on in the mammalian brain and these really large-scale pattern-recognition systems?
Keiland: So a lot of that seems to be that the brain turns out to be very, very lazy, like any good programmer would be, right? What's the easiest way to solve this task? At least in the representation of memory, what it looks like is that the brain likes schematized, generalized representations; those are the things that stick. We experience this every day: if somebody asks you what you did yesterday, you'll recall the big events. You won't remember every little detail, like you would if you had a video camera on it.
Deep: We're almost like template machines. We formulate templates of things constantly, whether negative, like stereotypes, or positive, like lessons learned or rules or routines.
Keiland: Yeah. So we like to form these patterns; they have a name, they're called schemas. We like to form these schemas because it's energy-efficient for the brain to do so. If what you have works for every situation, then you can just use that, and you don't have to waste energy or time relearning something. And if anyone has a kid, you've seen these in action. Say a kid has just learned what a dog is, and she sees a cat for the first time, points at the cat, and says "dog." Then you say, "No, honey, that's not a dog, that's a cat." That's an example of the brain just trying to say, "Oh, I think I know what this is; this is what I've already seen before." It's only after that that you start to fine-tune: where does this apply, where does it not apply, where do I need to create a new schema, for cats? And this goes on and on until you have some critical mass of schemas that you can apply to most situations in the world. That's not to say you won't ever need to learn new things; you will always need to learn new things. As someone who co-founded ContinualAI, we're really interested in continual learning and how this process happens. But it seems the brain uses this heuristic: what is the most generalized representation I can have that gets me by in most situations, so I don't have to waste energy and time and neural resources forming new representations?
Deep: That's a really important constraint, in a way. On some level, it feels like a lot of machine learning over the last eight, nine years is just more machines, bigger networks, more data - whether it's GPT-3, which was already massive, or now Wu Dao, which is even more massive. I'm not saying it's the wrong thing to do, because it's still impressive, but it's not taking advantage of the limitations, in a way.
Keiland: We’re making up for architecture with more data.
Deep: I think that's right. And it makes sense, right? Because business is, frankly, a driver of a lot of this. Companies like Google and Facebook and Microsoft are trying to make something happen today, or maybe in three or four weeks, and fundamental architecture changes are harder; they take time, a lot more exploration, a lot more play. But I feel like that's a really significant area where we could improve on the machine learning front over the next 10 years: how do we do more with less? You were using the term energy; I think that's probably even true on the machine side, right?
Keiland: That's exactly true on the machine side. Continual learning, for instance: how can you take one model and, instead of retraining it, just feed more data into it and have it learn from that data? That's one potential approach to minimizing energy costs in, say, a server farm, because if you're paying millions of dollars to train a model, every time you retrain it on terabytes of data, that's a whole lot of energy.
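A hedged sketch of one common continual-learning trick, experience replay: keep a small buffer of past examples and mix them into each new batch, so the model learns from a stream without full retraining (PyTorch; everything here is an illustrative toy, not any production system):

```python
# Toy experience replay for continual learning: train on a data stream,
# replaying a reservoir sample of old examples alongside new ones,
# rather than re-running training over the full history.
import random
import torch
from torch import nn

class ReplayBuffer:
    def __init__(self, capacity=1000):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x, y):
        # Reservoir sampling keeps a uniform sample of everything seen.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            i = random.randrange(self.seen)
            if i < self.capacity:
                self.data[i] = (x, y)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
buffer = ReplayBuffer()

def train_on_stream(batches):
    # Each incoming batch is trained on once, mixed with replayed old data.
    for x, y in batches:
        xs, ys = x, y
        if buffer.data:
            rx, ry = buffer.sample(len(x))
            xs, ys = torch.cat([x, rx]), torch.cat([y, ry])
        opt.zero_grad()
        loss_fn(model(xs), ys).backward()
        opt.step()
        for xi, yi in zip(x, y):
            buffer.add(xi, yi)

# Toy stream: 10 batches of random data with integer class labels.
stream = [(torch.randn(16, 32), torch.randint(0, 10, (16,))) for _ in range(10)]
train_on_stream(stream)
```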
Deep: These things are big bucks to train.
Keiland: Yeah, it can be a million dollars for the company every time you want to retrain a huge model. So you've seen some companies try to implement continual learning approaches to minimize those costs. And hopefully, if you could develop that kind of architecture, it would also benefit a lot of other things. One would be transfer learning; and hopefully, like we were talking about earlier, you could have a better basis for one-shot learning.
Deep: And we see a lot of those ideas in, like, reinforcement learning, for example, where you're operating in the field and new information is directly translated into models.
Keiland: And as the size of data grows, at some point it's just going to be unrealistic to store absolutely everything all the time. Certainly your brain doesn't. Your brain is filtering out a lot of information, ingesting some of it, memorizing a tiny percent of it, and even then, over time, you forget more and more. It doesn't care about memorizing every little detail of the environment. It just wants to know: what is the function I can use to make better predictions about the world, and what information do I need to build these generalized representations? Everything else, it doesn't have the energy, the time, the space, the capacity, what have you, to worry about or store. Neither do servers.
Deep: What is the state of the art with brain-machine memory systems? Do you have a sense of that?
Keiland: It's still early days; I'd say still basic research. There have been some examples of attempts, but it's early. There's a lab at Penn - his name's Mike Kahana - he's probably one of the leaders in this area and one of the early pioneers of recording from human epilepsy patients. They've shown, at least on the laboratory side, certainly not on the translational side yet, a lot of the problems and results we've talked about before: that the brain represents words in a certain way, or that if you stimulate this part of the brain, you can evoke the recall of a memory someone had prior. But the exact mapping to, say, implant a memory in someone's head-
Deep: Or be able to offload your memories.
Keiland: Exactly.
Deep: Like not having to remember things, which frankly we all do all the time with our phones, right? I would argue the phone is the man-machine memory interface right now. It's like, "Oh, I'll just shoot a picture of this place," not so much because I care about the picture, but just to remember what was going on there. Or we jot stuff down on our laptops or our phones or whatever.
Keiland: And the thing is, when people say, "Oh, technology's going to take over everything": writing has already taken over everything, and people talked about this with the Greeks. The kicker of it all is that Socrates, who was against writing, was completely right. Maybe not as extreme as stated in Plato's Republic-
Deep: We really have gotten worse at remembering things and stories.
Keiland: Memory for stories, yeah, it is worse in a lot of cases. There are even some gorillas that have far better working memory than us. If you show them a pattern of, say, four numbers, and then the numbers go off the screen and you have to click the boxes where they were, gorillas can outperform us impressively well.
Deep: Really?
Keiland: They can easily look at this thing, learn it, and know it well, and we're like, "Okay, we might not be as smart as we think we are," at least on that simple task. They're not doing calculus anytime soon. Our memory has gotten worse, but would anybody say that wasn't a reasonable trade for writing? Most of all, if we didn't have writing, we wouldn't even know that story existed, because it probably would have been lost to the ages. Could you imagine a world without writing, with just really mediocre human memory? I wouldn't want that world. I think writing has been pretty great.
Deep: We have the ability to sort of lock in what we're focused on as individuals, but collectively, whatever larger economic forces are at play are going to dictate who does what research. And there's always a good application that drives the state of the art, and then the other stuff just shakes out.
Keiland: We'll have the choice of what we really want to put energy into, what we want to be reasonable about, and whether we want to pump the brakes on some things until we understand them a little bit better.
Deep: Yeah, that is true. So what do you think is the long game with respect to man-machine interfaces? On one hand, you have these super-techies who are like, oh, we're just going to plant all these probes and then we're going to have processing units - this very Star Trek-like thinking. And on the other side is reality, where we're very limited in terms of what we can put in a human mind as far as any kind of human augmentation.
Keiland: I mean, the question is what will happen when we have them, but we already do. There are thousands of people walking around with Parkinson's disease who have an implant in their heads so that their tremors are mitigated. There are thousands of people walking around who can hear because they have cochlear implants - essentially an artificial cochlea - which have been around for decades, mind you. And you're starting to see clinical trials for even Alzheimer's disease. So we're seeing this transition, and it's slow and gradual, but the jump from stimulating a deep, very specific part of the brain to mitigate Parkinson's tremors to implanting a memory, offloading a memory, or even just putting an image in someone's mind is a world away. What I daydream about is this: I almost see language itself as a process of turning whatever idea you have in your head into some transmittable format. In a world of brain-computer interfaces, I could imagine an economy where we would no longer communicate in words. It's funny - in Black Mirror, there's an episode where people have brain-computer interfaces and you hear them talking to each other in their heads: I hear what you're saying, but without talking. So imagine a world where you no longer communicate in language, where you don't need that transcription process of converting ideas to motor output or sound, but you communicate in ideas, and your computers learn to work with those ideas. That, I think, will be the true power of what brain-computer interfaces will come to: a creative, innovative, intelligent, scientific, artistic economy.
Deep: You know, you recently published a paper that looks at how deep learning can be applied in neuroscience to help us understand how memories get linked together. I don't know if that's a reasonable representation, but maybe talk a little bit about that and how it ties into some of the other stuff we've talked about.
Keiland: Yes. We're very interested in memory, and we know a part of the brain, the hippocampus, is very important for forming new memories. If you don't have one, you can't form new memories. So if I met you and you didn't have a hippocampus, you would say, "Oh, Keiland, hi, nice to meet you." Then we'd step away for 30 seconds, you'd see me, and you'd say, "Oh, hi, Keiland, nice to meet you." And so on and so forth. You lose your ability to form memories and to recall some memories, and there's a long scientific history of all the properties of it. But one thing that wasn't quite worked out yet, especially linking the literature from animals to people, was: how is it that we can take multiple memories, even memories separated in different periods of time, and stitch them into some coherent representation? In our work, we trained rodents to memorize a sequence of odors, where each odor represents, kind of, one of these big events. What we found was that as they learned these sequences, the brain ends up weaving the big events together in a linked fashion. When presented with just one of the events - you don't see any of the others - the brain would basically cue up and replay the other two events. Right now our work is just in sequences; graphs are where we want to go next. That's exactly what we're trying to see: how far we can push this.
Deep: I wanted to ask you maybe one final question. Why should people who care about machine learning and AI care about neuroscience?
Keiland: Well, one, because they're people, and they certainly have a brain in their head that they use. You can't forget that everything we have in this world comes from the brain. So that's important: they should take care of their brains. But at a more practical level, in terms of building algorithms, there is a lot to be learned, even if for nothing else than to come up with some new ideas. I guarantee you, if you're deep into neural network design, or probably some other machine learning designs, and you spend some time with the neuroscience, you'll come up with a lot of new ideas that you can maybe apply to your algorithm, and so on and so forth. Or at the very least, run them by a neuroscientist at some point and see what they say. But also, and maybe this is a bigger point: in my work, when I'm designing new classification techniques to handle my data, I'll sometimes end up reading a paper about, say, cancer cell classification, because even though the problems are different, the method ends up being useful. I think a lot of the time in data science and data analysis, if you get stuck in your field, you kind of miss some of the other, maybe-
Deep: No, I think that is a really great point. And frankly, that's why we have this podcast. For me, I just love seeing AI applied in very, very different ways, because you do get ideas from a totally different context about how something worked, and you're like, huh, I wonder if I can look at this problem in this way or that way. Tell us a little bit about ContinualAI.
Keiland: Yeah, so ContinualAI is a research non-profit; it's the largest research nonprofit concerned with building continual-learning artificial intelligence systems. These are systems you can keep feeding data into, and they learn from it, without your needing to keep retraining the networks from scratch over time. We have academics, people from industry and tech companies, and even just laypeople who are really interested, and we've developed this amazing community. We have talks almost once a week. Somebody presents-
Deep: And this is an org that you founded I think, is that right?
Keiland: Yeah, I'm a co-founder, along with a professor at the University of Pisa. And the key thing that we recently developed is called Avalanche. It's a machine learning, deep learning framework for continual learning that is being used widely across the research community, and increasingly in academia and industry, to help catalyze continual learning work. It has everything you would need in one place.
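For the curious, here's a minimal sketch of what training with Avalanche looks like (module paths follow the avalanche-lib documentation at the time of writing and may differ across versions):

```python
# Minimal Avalanche sketch: train a model over a stream of "experiences"
# instead of one static dataset. Module paths follow avalanche-lib's
# docs and may vary between library versions.
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import Naive

benchmark = SplitMNIST(n_experiences=5)   # MNIST split into 5 sequential tasks
model = SimpleMLP(num_classes=10)
strategy = Naive(model,
                 torch.optim.SGD(model.parameters(), lr=0.001),
                 torch.nn.CrossEntropyLoss(),
                 train_mb_size=32, train_epochs=1)

for experience in benchmark.train_stream:  # data arrives task by task
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```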
Deep: So if any of our listeners want to check it out just Google ContinualAI?
Keiland: Yeah. ContinualAI.org or my website or anywhere else.
Deep: Awesome. Well, thanks so much. This was a ton of fun.
Thanks so much, Keiland, for being on the show. And that's all for this episode of Your AI Injection. As always, thanks so much for tuning in. If you've enjoyed this episode, please feel free to tell your friends about us, give us a review, and check out our past episodes at podcast.xyonix.com. That's podcast.x-y-o-n-i-x.com.
That's all for this episode, I'm Deep Dhillon, your host, saying check back soon for your next AI injection. In the meantime, if you need help injecting AI into your business, reach out at xyonix.com. That's x-y-o-n-i-x.com. Whether it's text, audio, video, or other business data, we help all kinds of organizations like yours automatically find and operationalize transformative insights.