Does AI Know Better Than Doctors? Our New Healthcare Reality with Oded Cohen

Can AI outshine doctors in detecting life-threatening conditions? In this episode, host Deep Dhillon chats with Oded Cohen, VP of Engineering at Viz.ai. The two discuss how AI is revolutionizing healthcare, from speeding up stroke diagnosis to identifying overlooked brain hemorrhages. Oded explains the balance between AI and human expertise, touching on how hospitals are integrating AI into their workflows without replacing doctors. They also tackle the ethical and regulatory challenges, exploring whether AI will soon dominate medical decisions or simply serve as an indispensable tool for doctors. Are we heading toward a future where AI not only supports, but challenges, traditional medical judgment?

Learn more about Oded: https://www.linkedin.com/in/oded-cohen-78678a17/
And viz.ai: https://www.linkedin.com/company/viz-ai/

Check out some of our related podcast episodes: 

Xyonix Solutions

At Xyonix, we enhance your AI-powered solutions by designing custom AI models for accurate predictions, driving an AI-led transformation, and enabling new levels of operational efficiency and innovation. Learn more about Xyonix's Virtual Concierge Solution, the best way to enhance your customers' satisfaction.

Listen on your preferred platform here.

[Automated Transcript]

Deep: Hello, everyone. I'm Deep Dhillon, your host, and welcome to Your AI Injection. Joining us today is Oded Cohen, a seasoned tech leader with over 15 years of experience building high-scale Internet systems and implementing advanced analytics solutions. Oded is the VP of Engineering at viz.ai, a company using AI to accelerate disease detection, streamline care coordination, and optimize treatment pathways in hospitals. Oded, thank you so much for coming on.




Oded: Thank you for having me.

Deep: Awesome. So maybe start us off by telling us, what problem are you guys trying to solve at viz.ai? What existed without your solution, before it came on the market? What's the typical scenario, and how is that different with your solution in place?

Oded: So this started with using AI to solve use cases where time is of the essence. It started with stroke, where really every minute counts. If you are able to properly detect and treat a stroke patient, the sooner the better; really, every minute kills millions of neurons in your brain.

And this is something one of the co-founders experienced in his own life, and that's really how he came up with the idea. The problem before Viz is that if you come to a hospital to be treated and they suspect you have a stroke, they're going to have to go through a series of tests.




Oded: This process takes time. Sometimes it takes a few hours, and different people are involved in it. And it's a very waterfall kind of process, right? Each step has to happen, and then the next step starts, and it involves humans detecting the actual condition, the stroke.

And the realization was that we can use AI to detect the condition, right? The type of stroke that you might have. So it really accelerates the detection phase, instead of waiting for a human that might be occupied with something else at the moment. Two, and this is not necessarily AI but the engineering part: once you've detected it, surface all that information to the entire team all at once, right?

We have mobile phones. So you might have a situation where the relevant people are not present near the area where the scan was done. Maybe they would need to be called from home, but with the phones, you can actually be alerted right away. You can see the actual scan. You can communicate with the team, kind of like a WhatsApp. You can call people directly, and you can make decisions while you're on the way to the hospital, for example, to prepare for the patient already.

Deep: I was just gonna ask, so you have a patient in their house having a stroke?

Oded: No, no, no, no.

Deep: Okay, sorry. The physician might be in the house.

Oded: The physician might be in the house, or the surgeon might be in the house right now, or at a different hospital.

Deep: But you've already got imagery. Like, what are you working with? Imagery?

Oded: So, in this use case, the imagery is automatically sent to the cloud, to the Viz platform. Then all the relevant algos run on it to detect different conditions. Assuming they detect something, it can alert: either prioritize it to be looked at first compared to other things, or, at the same time, the whole team that's supposed to take care of that patient gets alerted right away, can see the actual scan and where the detected condition is, and can already communicate through the platform.

If the patient needs to be sent to a different hospital, that hospital might actually be on that team, so they can already see that information and prepare things. So really, it's not just AI; it's the entire technology stack you can leverage, right? The fact that you have the cloud, you have mobile phones, you can notify people. It's a full platform that is doing that.
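
The flow Oded describes, scan to cloud, detection algorithms, then simultaneous worklist prioritization and team alerts, can be sketched roughly as follows. This is purely illustrative and not Viz.ai's actual system; every class, name, and threshold here is invented:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    condition: str
    confidence: float

@dataclass
class Scan:
    scan_id: str
    modality: str  # e.g. "CT"

def detect_lvo(scan: Scan) -> Finding:
    # Stand-in for a real, FDA-cleared detection model.
    return Finding("suspected LVO stroke", confidence=0.97)

# Hypothetical registry: which algorithms run on which modality.
ALGOS = {"CT": [detect_lvo]}
ALERT_THRESHOLD = 0.9

def process_scan(scan: Scan, worklist: list, care_team: list) -> list:
    """Run every relevant algorithm; on a confident finding, reprioritize
    the radiology worklist and alert the whole team at once (not waterfall)."""
    alerts = []
    for algo in ALGOS.get(scan.modality, []):
        finding = algo(scan)
        if finding.confidence >= ALERT_THRESHOLD:
            # 1) Move this study to the front of the radiologist's queue.
            worklist.insert(0, scan.scan_id)
            # 2) Notify every team member in parallel.
            for member in care_team:
                alerts.append((member, scan.scan_id, finding.condition))
    return alerts

worklist = ["study-17", "study-18"]
alerts = process_scan(Scan("study-42", "CT"), worklist, ["neurologist", "surgeon"])
print(worklist[0])   # the flagged study jumped the queue
print(len(alerts))   # every team member got an alert
```

The key design point from the conversation is that detection and notification happen together, replacing the sequential hand-offs of the traditional workflow.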

Deep: Just for a second, I'd like to really understand the whole flow. Patient's at home, has a stroke, somehow gets to the hospital. Somewhere in the patient triage process, they figure out the patient has a stroke. Then they send them for an MRI or a CT scan.

Oded: A CT. In the US, it's usually a CT scan.

Deep: Okay, so then they get their CT scan. Now that image goes up and somehow gets accessed by the system. But in parallel, is the radiologist involved? Okay, so the radiologist is also looking at the scan. Maybe they take a little bit longer to get physically in front of the scan.

The machines are doing their thing. And then the radiologist is approving the output, or, you know...

Oded: There's a model, and there are a few ways to look at it. One is, we can prioritize the work of the radiologists and basically say, hey, in the list of 20 things that you have to look at, here's an urgent case that the AI already detected as a suspected stroke, right?

So you can look at it earlier and save some time there. But two, you don't really have to wait for the radiologist to say there's a stroke. At the point where the AI has already detected it, and it depends on how the hospital wants to manage it, you can already send the alert to the stroke team and say, hey, we've detected this type of stroke. They can already see it, communicate, and start the process, instead of a waterfall process where you have to wait for the radiologist before doing any of the preparation.

You won't operate and do something before the radiologist has their say, but you can do certain preparation beforehand and save time. We typically talk about saving 40 minutes to an hour, which is a long time in those types of conditions.

Deep: Okay, so two things you said are interesting. I mean, everything you're saying is interesting, but these two things stood out to me. One is, your system is operating against the queue of information that the radiology team is looking at. So they have a bunch of things they could be looking at, and one of the things the system can do is just elevate the priority of something.

Is that right?

Oded: So technically, the way it works is, it's two parallel systems, right? A scan is done, and then the same scan is sent both to the radiology system and to the Viz platform. So they operate completely independently, theoretically. However, if we detect a certain condition, we can notify.

There's an integration to notify the radiologist's system, to send back some information so they can also see what the AI has already detected. It's running in production. So that's one aspect. The second aspect is that the clinical team that needs to react to this, assuming there is a stroke and a certain condition, can already be notified right away at that point in time, right?

There's a very high likelihood that what the AI detected was correct, so you can already do some of the preparation. You can look at the data on your phone, wherever you are, and give initial guidance to the team on what needs to be prepared, while the radiologists do a more detailed analysis and verify things, right?

So this process really saves a lot of time and really gets significantly better results.

Deep: So if the system is experiencing a precision error, or a perceived precision error, can any of those other humans in the loop nix the error?

Oded: Right, so there are two kinds of errors: false positive and false negative.

If it said there's something here, and the radiologist looks at it and decides it's not, or the team that's already been alerted, the specialist, looks at it and decides this is probably not the case, then okay, it was a false alarm. It can happen. The other option is we miss it, in which case the radiologist will still go over the list regardless, right?

So it doesn't replace the existing system, it just makes it more efficient.
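The fail-safe Oded describes, where a false positive gets dismissed by a human and a false negative changes nothing because every study stays on the radiologist's list, amounts to a simple invariant: the AI may reorder the worklist but never remove anything from it. A toy illustration (all study IDs invented):

```python
def apply_ai_flags(worklist, ai_flags):
    """Return the worklist with AI-flagged studies first; nothing is dropped,
    so a missed detection (false negative) still gets a human review."""
    flagged = [s for s in worklist if s in ai_flags]
    rest = [s for s in worklist if s not in ai_flags]
    return flagged + rest

worklist = ["study-1", "study-2", "study-3"]

# False negative: the AI flags nothing, and the list is unchanged.
print(apply_ai_flags(worklist, set()))

# Positive flag: the study jumps ahead, but the radiologist still decides.
print(apply_ai_flags(worklist, {"study-3"}))
```

Because the function only reorders, the worst case of an AI miss is the original workflow, which is the safety property being claimed in the conversation.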

Deep: That makes a lot of sense, right? In healthcare, there's a ton of concern about machines dominating the situation. And I think it's just usually not allowed in the health system for that to be the case, right? Everything has to go through a physician.

Oded: So everything we do, all the algorithms, gets FDA approved, right? And it goes through all the regulation. I think what helps adoption is that, again, it does not replace the human decision. Nobody will make a decision based on just the AI, okay?

But it really helps things become more efficient. And by the way, it can sometimes find things that humans miss, which happens. I'll give you an example, an interesting case we had. A guy in his 50s complains about a headache and goes to his primary care physician. You know how it is; they don't know yet what it is. There's some back and forth that lasts a few days. Eventually, they decide to send him for a CT scan, right? They don't suspect a stroke. They just don't know yet what it is. Maybe it's, God forbid, cancer, right? There are so many options.

Deep: Or it's just a headache. You know, you never know.

Oded: Right. So he goes, he does the CT, and they send him home and say, come back, or we'll call you in two weeks once we have the results, right? Because it goes into a queue and it's not a top priority. So he goes home. He's supposed to wait for 14 days. Luckily, our system was installed there, and for our system, it doesn't really matter.

Right. Yeah. It got the scan and ran all the algorithms, and one of them found that he had a brain hemorrhage, a bleed. He had left the hospital, was literally a few minutes away, when he got a call, because the physician automatically gets an alert.

Typically it would go into some queue somewhere, and in a week somebody would go over it. Instead, the physician gets an alert: hey, there's something here. He looks at it, verifies this is indeed the case, right? He calls him, again just a few minutes after he left the hospital, and says, I'm sorry to tell you, but you have to come back right away.

We found that you have a brain hemorrhage. He comes back, they go through the entire process, and I think the next day he got treated. And he's back: no side effects, nothing. Now, it might have happened that maybe 10 days later they find the hemorrhage, they call him, and everything is fine.

But at the same time, it might have caused permanent damage, because it keeps growing and eventually it starts damaging the brain. He might collapse, those kinds of things.

Deep: I mean, that's a great case, right? Because you have a fixed resource, which is the radiologists and their ability to look at scans.

And there's an inherent prioritization that happens, sometimes due to overly high-level variables, right? But they have to prioritize their queue somehow. It's not that they're bad people; there are only so many of them. Radiologists aren't cheap; I think they're like a million bucks a year.

It's incredible. But meanwhile, the machine is relatively cheap and can just run right away. And it can take in so much more information than whatever was used to define that queue for the radiologist. So that seems like a pretty awesome case.

Oded: Yeah, there's also another case, and here we're going into areas that are not necessarily acute, but are still interesting. A guy gets to the hospital and they do an ECG, which you can get for various reasons, right? Our system gets the ECG and detects some relatively rare defect in the heart. And he didn't come in for that; the algo just ran and found it. So they were able to not only identify it but confirm it as a case.

One of the challenges we saw is, you go to a hospital, you do a few tests, and they send you home. Let's say they find something. Now the guy might be two hours away. You have to reschedule everything. This can take weeks; the entire process is long and painful. And sometimes people just procrastinate, or they don't handle it.

So because we were able to detect it right away while he was in the hospital, they sent him for an echo scan, since the only way to fully validate the condition was echo, and they confirmed that was the case. Now the process can start. So we might have saved him months, if not years, of being undetected.

And this is a condition that might eventually cause you to have a heart attack and die, even when you're young.

Deep: So this is an interesting case from an efficiency standpoint; I think you're making in part an efficiency argument. You're saving the patient's time, and the mental load of having to reschedule and all of that, but you're also saving hospital staff time, whatever is involved in the rescheduling, and then you potentially have the ability to just act right then and there.

In addition to the direct health benefits, because early detection, of course, is a big deal.

Oded: I'll give you more cases. One of the challenges hospitals have, or physicians have, is that there's a lot of information to sift through, and you have to go to a system that has all that information.

Some of the information is in one place, and some is in another, and it's not always organized the way you want it. Let's say we've already detected a certain condition, or a suspicion of one. We can pull data from other systems. AI is actually used here, because some of that data is unstructured, textual data.

It's not always easy to find whether a patient is on a specific medication, say beta blockers, right? Sometimes it's recorded in a structured way; sometimes it's written in free text somewhere. So sometimes you need an AI module to actually extract that information.

And so what we can do is say, okay, here's the condition. We're not only going to alert you on that; we're also going to bring you all the relevant information from other systems, including things that might be hiding in textual form, and present it to you. So now you have the full picture to make the decision, right?

You have the complete information to make the decision. And so there are a lot of different things. At the end of the day, you were saying something about the sensitivity in healthcare: the goal here is not to replace humans. The goal is to make them more efficient, to allow them to treat more patients, to prevent us from missing patients, right? A human still needs to be involved. You still need the specialists and all those, but they can just

Deep: do a lot more.

It sounds like you're plugged directly into the hospital's Epic system and you're able to get right at the diagnostic data, in addition to the doctor's notes and all the scripts; basically, everything that happens to a patient in the hospital ultimately winds up in their Epic system. Is that right? Are you making the case, when you sell to a hospital, like, hey, we're going to tap right into Epic, we're obviously compliant, but we're going to pull that data out, and we have these conditions we can alert on and bring efficiencies today, and over time we're going to continue to learn and operate on additional ones?

Did I get that right?

Oded: Yeah, so we do integrate with the hospital information system, Epic or others; Epic is the dominant one. That's part of the input that we get. Ideally we get imaging, we get ECG, we get echo, we get EHR information. So we are slowly expanding into more and more areas, which allows us to solve a lot of different problems, and when we solve one, also to give you the full picture of the condition of the patient.

Deep: What would be helpful is, why don't we take one condition and walk through the model construction process. Maybe it's a new one, maybe it's one of the original ones. What did you get on day one?

Because, you know, you get plugged into the hospital's patient data management system. Now, where are you getting training data from? How is your training data collection integrated into whatever the radiologists see, or the cardiologists see? And how much of it is your own, behind the scenes, where you go out and bring in physicians, or human proxies for physicians, to label? Walk us through that whole process. How does it work, from a new condition to model release and then model refinement?

Oded: So let's talk about imaging for a second, as an example. In order to develop a new algo, we need data, right? That's the reality. The most common approach is to buy the data.

We have a team of clinicians who are able to label the data so we can create the necessary dataset to build the algo. And there are different things you need to think about: the diversity of the population, male, female, all sorts of things that are required. Obviously the size of the sample, which depends on how rare the disease is and how frequently you can find the data.

So you have to prepare the data. In a lot of cases we'll buy the initial data; we can classify it, we can tag it ourselves, and then we can train our algos ourselves. Obviously, there's a process to submit the algo for FDA approval. We can also run it in the background in a hospital to verify it before it is actually live.

That lets us look at the data and verify it actually does what we expect before we open it up to be officially used within the hospital, or in general. And then we keep monitoring the data, because things change, things evolve. Scanners change. We might not have accounted for a certain condition, say people who already had brain surgery.

As we see real data, we can see where the outcome is not as good as we expect in certain cases, and we can further adjust and improve things. So the algorithm doesn't learn by itself, right? This is a common question we get. It's not a self-learning algorithm that slowly learns and adjusts based on the data it gets.
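
The background verification and ongoing monitoring Oded describes can be thought of as comparing an algorithm's shadow-mode outputs against the radiologists' final reads and tracking sensitivity and specificity over time. A minimal sketch with invented data:

```python
def performance(pairs):
    """pairs: (ai_positive, radiologist_positive) per study.
    Returns (sensitivity, specificity) against the radiologist reads."""
    tp = sum(1 for ai, gt in pairs if ai and gt)
    fn = sum(1 for ai, gt in pairs if not ai and gt)
    tn = sum(1 for ai, gt in pairs if not ai and not gt)
    fp = sum(1 for ai, gt in pairs if ai and not gt)
    sensitivity = tp / (tp + fn) if tp + fn else None
    specificity = tn / (tn + fp) if tn + fp else None
    return sensitivity, specificity

# Five shadow-mode studies: AI call vs. the radiologist's final read.
shadow = [(True, True), (True, False), (False, False), (True, True), (False, True)]
sens, spec = performance(shadow)
print(round(sens, 2), round(spec, 2))
```

In practice such metrics would be tracked per site and per scanner model, so that the drift Oded mentions (new scanners, unanticipated patient conditions) shows up as a measurable drop rather than going unnoticed.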

Deep: That's a very controlled process, I assume. Your data scientists have the labeled data, they train a fixed algorithm, and at that point it's snapshotted by the FDA and cannot change, right? But the FDA is snapshotting the entire process for new model releases too, I would imagine.

Oded: Uh, it's a bit different, like to make updated depends on what type of updates are, are made and exactly what is needed to be, uh, be submitted, but it's, It's, it's a lot like a lot shorter and simpler process to further update and improve things. So yeah, this is more or less,

Deep: And so to what extent are your models operating only on, for lack of a better word, the lab result that you're focused on?

For example, to what extent is it determining the stroke based only on the radiological image, versus also incorporating other variables about the patient, like their lifestyle or their lab history?

Oded: So what I would say is, part of the challenge is that it's a regulatory issue as well.

A lot of the algorithms focus on detecting from, let's say in the case of imaging, the imaging itself. We might supplement it with more information that we think is relevant, to give broader context. But today we don't have a combination of, say, an old radiology report and a new scan, where because of those two together we can detect something we otherwise couldn't. There are cases where additional information comes from a supplementary algorithm that gives you not just the condition, but also measurements around certain parameters.

And this also requires AI: the size of the bleed, the volume of it, all sorts of things. But it's a separate algo that runs in parallel, and then you get the information and can see both of those things. For various reasons, trying to do those together is less practical, or more challenging in terms of getting it approved.

Deep: To what extent are your physicians interested in explainability? Are you working on explaining why the model thinks a particular thing? And if so, maybe walk us through some of the challenges there.

Oded: So I think the second algo, the one that brings you some of the measurements, helps with some of that. There are some regulatory aspects to providing an explanation of why it thought that condition is there, so typically it's not in the main algo.

Deep: I see, so it's regulated separately. I mean, there are separate things: saying, if you conclude that it has this disease, this is why you might conclude that, which is different from saying, I concluded this, and this is why I concluded it.

Oded: Yeah. I mean, there are things we would like to be able to do, but it's too complicated to get approval for, and it costs a lot of money. Eventually there's an ROI, right? If it costs too much, you might not be able to do it. So you provide enough value and enough information to deliver the ROI for the hospital and the physicians. In a non-regulated environment it would be a lot easier, obviously, and we'd probably be able to do more.

Deep: Yeah, that makes sense. But I mean, few environments need regulation more than this one. This is probably the most important stuff that happens with machines.

Oded: I agree.

Deep: So tell me a little bit about that, because it feels like, to get where you've gotten, you have to be at the forefront of FDA approval for AI algorithms. Maybe rewind a few years, because they were pretty hesitant to take this step forward and allow this level of intimacy with the physician that viz.ai is experiencing. Walk us through: where are they in their thinking now, and what did it take to get them there? And these models are not as straightforward to explain as prior diagnostics were. Is all of the reliance still on efficacy data in trials, on describing the ground truth collection process, and on addressing the things we would think of as machine learning problems? Or are there unique things the FDA thinks about that we might not guess?

Oded: So I don't know exactly what it was like; Viz has existed for eight years now. I'm sure it was hard at the beginning, right? I know that in the initial phase, we submitted something for approval and got rejected.

They were trying to figure out how to do it, how to convince the FDA that this is worth pursuing, and there were a lot of novelty approvals, first-of-their-kind. Nowadays, it's a lot more common than people think. I don't think we get rejected anymore.

We maybe get feedback, but we typically understand what they're looking for. To be honest, I'm not the one who is highly involved in this process, but the team is well familiar with what the FDA needs in order to feel comfortable with these things. Approval is still a long process and still costs money.

We're talking about a few months, but it's not where it was before. There's actually a case, I think it's called NTAP, I forget what it stands for, where the federal government, I'm not 100 percent sure which agency it is, nowadays pays hospitals to use these types of platforms.

Or to use an algorithm to detect, say, LVO, the common stroke, one of the main algorithms we have. The hospital actually gets paid every time our platform runs on a scan. And the reason is, they realized it's actually saving everybody money. It saves the insurer money eventually, right?

Because if you're treating the patient faster, the outcome is better, they spend less time in the hospital, and the recovery after that is better. It's not just a win-win-win situation where we win, the hospital wins, and the patient wins; whoever has to pay the bill wins as well.

Deep: Who is "they" in this case?

Oded: Yeah, probably one of the federal insurance programs, like Medicare or Medicaid. I never know which; I actually lived in the US for 10 years, but I still don't know the difference.

Deep: I don't think even those of us who have spent a long time studying the healthcare system do.

Oded: So the hospital gets refunded on those things, and they do it in a way where it's not tied to whether there's a detection or not; it's every time there's a scan, to prevent biases, right?

So there's no incentive to over-call. Going from "we're not sure whether we need AI, or whether it's okay" to "we're actually going to pay you, incentivize you, to adopt it" is a long way, right? It's very different.

Deep: Yeah. And that's because, you know, any of us who work with these kinds of models and this data know we can outperform physicians.

I remember the first time I built one of these. I was working with a startup a number of years ago, a cardiologist-founded company building what was basically an iPhone case for heartbeat anomaly detection. The idea is the physician pops it on and gets really quick feedback, basically an improvement over the stethoscope.

And the first time I sat down to build these models, to put something in place, I think it took me like a day, and I was already outperforming the baseline on my training data. I was like, what? This can't be right. This has got to be wrong. So yeah, the algorithms are just so powerful now.

Right. And they're so good at pattern detection.

That brings me to a slightly related topic. Most of what Viz.ai is doing, at least it sounds like, is interacting with, let's call it, highly vetted diagnostic data that has gone through really rigorous approvals, or it wouldn't be sitting in an Epic system in a hospital somewhere.

But there's also this movement of remote sensing, stuff that patients can have in their houses. Like with the system we were working with, one of the things we had to do was gather training data in very noisy environments, because you're not sitting in a perfectly clean anechoic chamber getting pristine audio of a heartbeat.

We had a physician sitting in the developing world with rickshaw horns going off constantly. So I'm curious, what do you think is the role of that kind of remote sensing information with respect to AI and diagnostic data gathering, and how does it play with this now-you're-checked-into-the-hospital situation?

Because if I go back to that stroke example you first gave, what I would really want, and I don't actually know what signals a stroke gives off, but I'm guessing there might be something in the dilation of the eyes, is that ideally you would have detected the stroke well before you set foot in the hospital, and you could maybe trigger your whole system from that.

Do you guys think about that at all? Like the remote diagnostics issue?

Oded: So first, let me address the point about the noisy data. Even in the hospital, there's a lot of noise, right? Patients move, there are all sorts of conditions, and the scans are different. Because we also integrate third-party algorithms into our platform, we saw cases where in theory an algorithm should have worked, but when you look at real-life data, the performance dropped dramatically, and then you need to help improve those things.

So noise exists. With regards to that, there, there is, uh, you're right that actually I'm not, I'm not familiar with the stats, but I know a lot of patients are just not coming fast enough. And so you lose also time in the home, right? Because people don't know about that. This is like something, right? You know, I don't know all the things, but I know like, if you, like, you might get like a part of your body, a bit kind of, uh, maybe a part of your brain, because in some of those strokes, half part of your brain is not getting, uh, uh, blood.

In that case, the other side of your body is going to be off somehow. You might be a bit confused, mumbling. And again, I'm not a physician, I'm not from that space. So I'm sure there is a way to solve this problem, but we're not there. We're very B2B, on the hospital side.

We're not going all the way to the end consumer. There is another step on the way, though, because when you're in emergency care, in the ambulance, you can actually help there as well. You can send some of that information ahead, so it gets to the hospital and goes through this process.

There are even wilder things, where, hey, you might need to route the patient to a different hospital because the others have too many patients. There are different things you can do on the route to the hospital as well, right? Bring the data faster, and route patients to the right hospital.

You're basically moving the question further up the road: just figure out whether you have something. At least for us, that second part might be relevant at some point. We do have integrations with some players around that, but our hands are full as it is.

Deep: Yeah, I mean, you have a virtually unlimited set of conditions, I don't know, 5,000 or something, to go after with the data set you're sitting on. One thing you said is really interesting to me: you're integrating third-party algorithms, and that seems to put you in this beautiful position. Once you're in these hospitals, you've got the data, and you have the ability to facilitate new experimentation. You could even put a framework in place to facilitate gathering the efficacy data to take to the FDA. You could go really far with this idea.

Are you at the point where you're thinking, hey, we actually want to build an ecosystem of smaller teams? Frankly, given where you're at, it probably doesn't matter that much whether it's your algorithms being executed or somebody else's, because the business problem is really getting into these hospitals and impacting patient outcomes.

So how do you think about that? How advanced are you today, and where are you trying to get to? Because I could imagine a world where you say, hey, let's have the world's best data scientists spend their cycles grabbing data, building better models, and improving results.

Oded: So what I would say is that a good algorithm is not enough, because you really have to be able to deploy it into the hospitals and incorporate it into the workflow of the hospital.

Deep: Yeah.

Oded: And so they might still want to work with their own system. For example, when we were talking about radiologists: I don't want to need a different platform. I have my own platform; bring the AI into my platform. That's one option, or enhance it in a way where I don't need to learn a different system. On the other hand, some physicians say, hey, this is nice, we can have something of our own in the palm of our hand that we can use to operate,

and we can manage a clinical workflow very easily through it. So the idea is, yes, developing an algorithm is costly and time-consuming. We understand we can't do all of those things, and sometimes there are already good algorithms out there. So why would we now spend a year, a year and a half, and a lot of money to do that?

We can partner with a company instead, assuming there's a need. There are use cases where just the algorithm itself is fine, for helping radiologists, say, but there are cases where you want to improve the entire workflow, and the algorithm is just one part of it. In that case, we're not just solving for the algorithm.

We're also streamlining the entire process, because, as I said, we can activate a team: you get a notification, you look at the data, you can activate a team and bring them in. So the idea is that where there's no algorithm, and we have an interesting opportunity that can help the hospital, and there's a business case for it, then we will invest.

But if there's already an algorithm out there from a company willing to partner, I think that's a win-win situation, right? We can get things to market faster.

Deep: Yeah, it makes a ton of sense. When I put my venture capital hat on, you guys are a platform play, right? You don't need to own and dominate every single algorithm at this point; the moat around the business is just the sheer difficulty of getting inside these hospitals in the first place. Kudos to you and your team, because that's super not easy. As somebody who's tried and failed at it a few times, it's tough. And it speaks to the power and potential of these models to really make a huge impact on care.

That's a big deal. So how many hospitals are you guys in now?

Oded: We're in over 1,400 hospitals, mostly in the U.S., and some in Europe, although the focus is the U.S.

Deep: Okay, so that's a big deal. You're quite mature, and this stuff is affecting millions of people every day.

Oded: Yeah, I think we touch a new patient every few seconds.

Deep: All those hospitals must vary, so maybe one thing that would be valuable is: what are the differences in implementation across hospitals? I'm guessing hospitals have different ideas of how to integrate and how to elevate information.

Maybe, what do you give them? Are you giving them precision and recall thresholds at which alerts are sent, and who they're sent to? It seems like there are a bunch of heuristics there that you maybe offer as configuration to each hospital. Is that right?

Oded: So first of all, we have a collection of products: algorithms and workflows, including clinical workflows that might not have an algorithm but are still valuable, just because we have the tools to make the data available for you more easily and more accessibly, and to communicate in the platform. Now, this started as a stroke play, right? It wasn't a platform; it was a solution for stroke, and at the beginning for one specific condition, a specific kind of stroke.

It evolved over time to be more of a platform play, and that's where we are now. It's still evolving; it's not done yet. We still have a long way to go. We're still not in all the clinical domains, and it takes time to go into new domains: cardiology, oncology, and so on.

There are a lot of things. It requires understanding; it's not just a technological challenge. It's a business challenge, and a clinical understanding of where the problems are. Maybe they already have solutions and there's no point in us being there. So a hospital might buy different portions of our solution.

There is some level of configuration around how you want to get alerted, who's going to get alerted, and whether you want a small team at the beginning plus an activation step, where maybe one or two people get the initial alert and are then in charge of activating the full team.

So there's a variety of things, and to be honest, part of our evolution is to add more capabilities and do more of those customizations, so a hospital can adjust things to what it needs. That's where we are.
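[Editor's note: to make the configuration idea concrete, here is a purely hypothetical sketch, not Viz.ai's actual product or API, of the kind of per-hospital alert settings described above: a model-confidence threshold, a small first-responder group for the initial alert, and a full team activated on confirmation. All names and values are invented for illustration.]

```python
from dataclasses import dataclass, field

@dataclass
class AlertConfig:
    """Hypothetical per-hospital alert routing settings (illustrative only)."""
    condition: str                              # e.g. a suspected stroke finding
    score_threshold: float = 0.8                # model confidence required to alert
    first_responders: list[str] = field(default_factory=list)  # initial alert recipients
    full_team: list[str] = field(default_factory=list)         # activated on confirmation

    def route(self, model_score: float, confirmed: bool = False) -> list[str]:
        """Return who should be paged for a given model score."""
        if model_score < self.score_threshold:
            return []                           # below threshold: no alert
        if confirmed:
            # One or two triagers confirmed the finding: activate everyone.
            return self.first_responders + self.full_team
        return self.first_responders            # small team triages first

# A hospital might tune its own threshold and team lists:
config = AlertConfig(
    condition="suspected LVO stroke",
    score_threshold=0.85,
    first_responders=["on-call neurologist"],
    full_team=["interventional radiologist", "stroke nurse"],
)
```

Each hospital could then adjust the threshold (trading precision against recall) and the escalation path without touching the model itself, which matches the spirit of the customization Oded describes.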

Deep: That's awesome.

Oded: And it's not necessarily AI, right?

Deep: Yeah, I know. It's a product.

Oded: It's a SaaS product.

Deep: It sounds like you're almost focusing on the individual departments within the hospital: an oncology solution, a neurology solution. Is that fair to say?

Oded: The idea is that there are some platform capabilities that are shared, but within each area there are subspecialties, and each medical condition might have a specific need and workflow.

You might actually be transferred: we have conditions where, after you've had a stroke, you might need to be referred to a cardiologist, because the problem might start in the heart. And within that there are different subspecialties. So it's not a general solution that fits all.

Again, there is a layer that's common, but you have to address things that are different for different clinical domains, also in terms of the data. An ECG is relevant for cardiologists but not for other areas; an echocardiogram is another example. So you have to be able to support the different modalities, as we call them. And so there's an evolution of more and more capabilities, so we can support more and more of those domains.

Deep: This has been a super helpful conversation; thanks so much for coming on the show. I have maybe one final question.

If we think out five or even ten years into the future, in the realm of healthcare, with all the diseases we've mentioned, the rise of AI, increasingly powerful algorithms, and even the non-algorithm pieces, the workflows: what do you see as the patient experience?

Let's just go ten years out, because it's more fun. Ten years out, how is the world different from a patient's vantage point, with AI bringing these benefits to care?

Oded: So I'll give you a personal story; those are the most fun. My father was diagnosed with cancer a few years ago; it came back, and then he was on monitoring.

So every six months he gets a PET-CT scan. Recently it re-emerged again, and it was relatively big, but we found out that six months earlier, when he did the test, it was already there and wasn't detected. Because, you know, humans, right? It happens. That's reality.

Now, the challenge is that even after something is detected, they need to give you all sorts of scans and tests, and from what I understand that can take weeks, sometimes a month. The accessibility is the problem, and a lot of the time the reason is people: you just don't have enough technicians to do this. So my hope is that ten years from now we have fewer of those cases being missed, and the ability to handle more patients has improved significantly, so you don't have to wait so long to actually get treatment.

And even within the process, again, it's not necessarily AI, but the workflow, everything, is more efficient. So basically you're more likely to get detected and not missed; if you are detected, you're going to get treated; and you get the right treatment. By the way, we didn't even cover the fact that there are always new treatments coming in, and we can help surface them: hey, there's a new treatment that might be relevant.

Deep: Yeah.

Oded: Right.

Deep: Or even surfacing clinical trials for somebody. Do you actually do that?

Oded: Yes, today we identify patients and surface them as potential candidates for clinical trials, instead of it having to be a big ordeal.

Deep: Because it's so hard. So many people are just sitting there; it's kind of heartbreaking, particularly in cancer, because there are such amazing treatments right on the horizon, and there are people looking at a terminal case. Somebody's got to get better at matching them up with clinical trials.

Oded: Right, it's twofold. One, for the people getting into the clinical trial, it might be life-saving, which is important. But beyond that, the clinical trial itself takes so long, and you can shrink that time. So instead of three years, it's two years, or whatever; you get a new drug to market faster. So our mission is to get more patients treated, to get them to the right treatment, to life-saving treatment. Do I think it's going to be perfect in ten years? Probably not, but hopefully it's going to be a lot better.

Deep: Yeah, I'll just add a little bit, because I have my own ten-year vision too. I think everything you said is right on.

I'll just add that I think getting more early-indicator info off of devices is going to matter, whether that's the evolution of the smartwatch over time, or handheld diagnostics that are really low cost and can sit in the house, or devices sent home by the hospital for patients they're concerned about. I just think that realm is going to get so much better, and we're going to get much earlier indications.

And then I also think, when you look at the impact of something like the new GPT LLM models, the rate at which they're accelerating research: researchers are able to figure out what to work on so much better, through PubMed analysis, say, looking through the literature so much more efficiently.

Overall, I just think it seems inevitable that humans are going to start living longer, and hopefully we're improving quality of life as a function of age as well.

Oded: Right. This is a bit outside the focus of where we operate, but I was talking to a guy who was telling me how they can find new drugs much more efficiently and quickly, just because of the ability to process data: genetics, better matching, more precise medication. Those things would be impossible without machines.