In this episode of Your AI Injection, Deep speaks with Artem Semjanow, the CEO and co-founder of Neatsy, about how their app revolutionizes care for individuals with musculoskeletal pain by using smartphone photography to analyze posture issues and recommend tailored therapies or doctor visits. They delve into the technical and medical challenges of developing an app that effectively mimics a doctor's visual examination, including the importance of precise imagery and the iterative process of refinement with medical professionals. Looking forward, Artem and Deep explore the potential impact of such technologies on the future of healthcare, envisioning a world where remote diagnostics and patient monitoring significantly enhance access to and the quality of medical care, with AI complementing and augmenting human medical expertise.
Xyonix Solutions
Learn more about Xyonix's AI Testing, Compliance & Certification Solution, the best way to ensure your company complies with AI regulations, acts ethically, and thoroughly tests and optimizes its AI systems. Learn more about Xyonix's Virtual Concierge Solution, the best way to enhance your customers' satisfaction.
Learn more about leveraging AI in healthcare:
Improving Mental Health Using AI and ChatGPT Powered Behavioral Therapy with Kelly Koerner
Assistive Technologies and AI: Improving Hearing Aids Using Artificial Intelligence
Or take our free risk assessment HERE!
Listen on your preferred platform here.
[Automated Transcript]
Deep: Hello, I'm Deep Dhillon, your host, and today we have Artem Semjanow, the founder and CEO of Neatsy.ai. Artem leads a team leveraging advanced smartphone features, like 3D scanning, for orthopedic diagnostics, self-checkups, and personalized therapy solutions. Artem and his team collaborate with Harvard Medical School to streamline telehealth and remote diagnostics for patients and doctors.

Deep: Let's get started. So Artem, maybe start by telling me what it is that Neatsy does, and maybe walk us through, from a patient's standpoint, what the experience looks like, and we'll just take it from there. And thanks so much for coming on the show.
Artem: Thank you so much for having me. Glad to meet everyone. My name is Artem Semyonov. I'm the CEO and co-founder at Neatsy. Essentially, Neatsy is an app that helps people with musculoskeletal pain. What that means is that if you have foot pain, knee pain, spine pain, or neck pain, the app might help you.

You essentially take the app, take a couple of pictures of your foot or spine, and it analyzes the shape of your foot or spine and identifies any posture issues you might have. Then the app generates a physical therapy program for you, or you can even make custom orthotics. Both of those could help alleviate the pain.

And the app will also tell you whether you really have to see a doctor, if it sees some kind of potential risks.
Deep: All right. So tell me: I'm envisioning a patient taking an iPhone or a smartphone of some sort and being guided through a process of gathering imagery around, I don't know, a body part or something. I saw a bit of this out on your website.

What are some of the challenges you have there, and how do you think about the imagery that you gather? How do you think about the consistency of the imagery? Because, as you know, and our listeners know, from a machine learning vantage point you want that consistency.

It reduces the amount of training data needed to really get somewhere with the analysis. So I'm curious how you think about that problem.
Artem: Oh, yeah. Well, there's a lot there, actually. Essentially, our goal was to collect a set of pictures that would be meaningful for doctors to assess the condition of each patient.

There's a really simple test for any computer vision project: if you can train a person to recognize something from pictures, there is a very high chance you can train a machine learning model to do it. And conversely, if people cannot recognize some pattern from a set of pictures, it's almost impossible to train an AI to. Well, sometimes it happens, but it's kind of a rule of thumb, I guess, for assessing the feasibility of a machine learning project. So at the beginning, we took a lot of time to talk with real physicians, orthopedic surgeons, podiatrists, and physical therapists about how they do visual examinations of their patients: what they're looking for when they look at people's spines or feet, trying to assess whether there's any problem and to understand the reason for the pain. And when I talk about the things they look for with their own eyes, I'm also talking about the angles of photos you should take to make those symptoms, or signs, as visible as possible.
Deep: Tell me a little bit more about how those conversations with doctors went.

Because I imagine physicians aren't used to thinking about it from a photography standpoint, right? They're usually dealing with a real-life, three-dimensional human. And they can use touch; they can reposition the patient. So tell me a little bit about how those conversations went, because you have to anchor them in the lexicon of photography, if you will.
Artem: Oh yeah, absolutely. That's one of the big things: they're used to seeing people in the office. They were reluctant to use telehealth services. They always said, "Hey, I need to see the patient." That was the key phrase that we heard.

And that's actually why we started the project. We were like, hey, what if we can make a tool to see the patient without actually seeing the patient? When we talked about how they assess people, essentially we were asking: okay, here are a couple of people; how would you distinguish, hey, this person has a low arch or overpronation, for example, versus a person that doesn't have any problem?
To be honest, it was a little bit easier for me, because I have this kind of health condition myself; I have some foot problems. That's a personal thing of mine, and it's also one of the reasons why I started this. So I was going to the doctors, and I know that I have this condition and how to treat it: some physical therapy, some orthotics. And I was asking them how they assess, how they think, what angles they need to see to do a visual examination and make the assessment visually, without an X-ray or a CT scan. And by digging into this process, we understood certain angles that are the best for assessing certain conditions.
And we had a really, really long process of figuring out how to create an interface that kind of forces the user to collect the pictures correctly, and at the same time is easy enough that users aren't too annoyed while capturing the correct angle, for example.

Deep: I mean, yeah, you're trying to balance a few different things, right? One thing is that, for a given body part, the physician needs a known camera-to-body-part perspective that's fairly consistent across patients. I think that's one thing you want. The other thing is the reality of a patient shooting a photograph, which is different...
Artem: At their home, yeah.
Deep: They might be in bad lighting. They might be in the dark. They might just be bad photographers. If the lighting's a little bit off, then the steadiness of the imagery is more important than when the lighting is more ample. So tell me a little bit about how you experimented in those early days.

Like, what was the process for homing in on it? You started off with some chats with some physicians, then you got some ideas, then you maybe picked a body part or something. Or did you do all of them at once?
Artem: We started with foot health specifically. We were like, hey, how can we address the same issue that I have, like a low arch or overpronation?

About 15 to 20 percent of the world's population has it. And how can we spot these particular things, and do it in a way that's meaningful from a physician's standpoint, really using the tactics that doctors are used to? And, well, that was a lot. So we had a really, really long run of experiments.

We bought several mannequins, like the ones you use in a shop to display clothes. They were all over our office. Different body parts, different skin colors, which is actually very tricky: skin can be many different colors.

And what if it's a smaller human, or a child? That also complicates things. We did a lot of revisions.
Deep: Oh, that's a good point. So were you focused mostly on adults at first?

Artem: Right now, yes.

Deep: And walk me through the foot a little bit. What part of the foot do you want to capture? I assume it's the naked foot, maybe from the shin down or something like that.
Artem: Yeah, so we did a lot of tests of different angles, and we identified that the most important ones are basically three, the ones that give the most information to doctors. For the first one, imagine this is my foot: you stand up and you take a photo of the inner side of the foot while you're standing on that foot, and you take this photo from the level of the floor. So essentially, you put your phone on the floor and you look at the patient's foot from that level, as low as possible. And, which is also important, the patient has to stand with their weight on that foot.
Deep: Already, there are some assumptions being made. Somebody else is shooting the photo of the patient, because you want full weight on the foot, I assume?

Artem: That's the whole trick. Yes, somebody else would have to shoot the photo, but that's very inconvenient.

That's why we built an additional algorithm that recognizes whether a foot is in the frame right now or not. So you can place your phone against the wall, or a chair, or whatever.
Deep: Yeah, and you can do it alone.

Artem: Yeah. You basically just put your foot in the frame, matching an outline shaped like a foot.

Essentially, you're just trying to match your physical foot with the shape that you see on the screen of the phone. And we use the selfie camera, not the main camera, because there's visual feedback: you move in the frame and you see what's happening on the screen of your phone.

That's the approach. It's actually very interesting that you're asking me these questions; I'm realizing how many experiments we did and how many things we tried that didn't work.
Deep: The reason I'm asking all this is that we did a similar project with an iPhone camera for a plastic surgery application.

I think a lot of more technical people, engineers and data scientists, can appreciate this. A lot of times at the beginning of a project, you immediately imagine all these things and how you're going to spend your time, and it's usually four steps ahead of where you actually end up spending it.

In reality, you get stuck on something so apparently straightforward. I didn't really think deeply about the complications of capturing a photograph. In our case, we were mostly interested in facial photographs and portraiture of the face, and it was a lot of time just getting the camera held right and communicating with the user.
Artem: Oh yeah, especially communicating with the user. That's the biggest part, because then we also needed...

Deep: ...models after the photo was shot. We needed models to assess whether or not we got the right shot.
Artem: Not just the main model that assesses what's happening with your foot arch, but also an additional couple of models tracking what's happening in the frame.

Every second: is there a foot in the frame or not, and where exactly is it located? A couple of segmentation models and a couple of, let's say, classifiers running in the background. These are just additional things that have to work to make the user interface really friendly for users.
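[Editor's note: a minimal sketch of what such a frame-gating cascade might look like. The model names, thresholds, and template-matching approach here are our own illustrative assumptions, not Neatsy's actual implementation.]

```python
import numpy as np

# Hypothetical stand-ins for the small on-device models described above:
#   presence_model(frame) -> probability that a foot is in the frame
#   segmenter(frame)      -> binary mask of foot pixels

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two binary masks."""
    union = np.logical_or(mask_a, mask_b).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(mask_a, mask_b).sum()) / float(union)

def gate_frame(frame, presence_model, segmenter, template_mask,
               presence_thresh=0.9, align_thresh=0.7):
    """Run cheap checks on every frame; only a well-aligned foot
    is passed on to the expensive diagnostic model."""
    if presence_model(frame) < presence_thresh:
        return None  # no foot in frame: keep prompting the user
    mask = segmenter(frame)
    if iou(mask, template_mask) < align_thresh:
        return None  # foot visible but not matching the on-screen outline
    return frame  # good capture: forward to the arch-assessment model
```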
Deep: So let's kind of scooch along here. You've got that one lateral shot of the inside arch. I assume you want the outside, maybe a top-down, and some others?
Artem: Actually, no. The most informative views for doctors are these. There's the view of the inner side of your arch.

Then there's one where you put your phone on the ground, selfie camera facing upwards, and you hover your foot above the phone so it captures the underside, which is very useful for making custom orthotics.

And another one that's very informative for doctors: you place your phone against the wall, you stand with both feet facing away from the phone, and the phone sees both of your heels. It's even better if you walk a little bit, so you stand like that and then walk away from the phone. That way it assesses gait, and it assesses whether your foot tilts toward the center or tilts outwards.
Deep: The physician's looking for some asymmetries, potentially?
Artem: Asymmetries, or the tilt itself. The healthy way of walking is that you pretty much just plant your foot and that's it. But for a lot of people, when they apply weight, the foot, just for a split second, some microsecond, tilts toward the center with every step.

And this creates additional stress on the whole musculoskeletal system, especially when you run, say, 10 miles or something. Imagine how many steps, and how many of those tilts, you make during that run. That's why people have pain in the foot and knee, and why they buy, or don't buy, special stability shoes or custom orthotics.

This is one of the health issues that we try to track, overpronation and supination, and this is essentially one of the best ways to track it.
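[Editor's note: one way to quantify the heel tilt Artem describes, as a sketch. It assumes a hypothetical keypoint detector that returns heel and Achilles landmarks in image coordinates; the threshold and sign convention are illustrative only.]

```python
import math

def heel_tilt_degrees(heel_xy, achilles_xy):
    """Angle of the heel-to-Achilles line from vertical, in degrees,
    as seen from the rear camera view. The sign convention (inward vs.
    outward tilt) depends on which foot and which camera side; this is
    purely illustrative."""
    dx = achilles_xy[0] - heel_xy[0]
    dy = achilles_xy[1] - heel_xy[1]  # image y grows downward
    return math.degrees(math.atan2(dx, -dy))

# e.g., flag frames where the stance foot tilts more than ~8 degrees
# from vertical across several steps of the walking clip.
```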
Deep: So I want to go back to the image capture a little bit. Sorry, I get really into the weeds. Now, I assume your team went off, you tried some stuff, you got some imagery together.

Did you go back and forth with your physicians?

You have to get from that to them saying, "Oh yeah, this imagery is sufficient," or "It's not." Was this anecdotal? Were you writing it down? Did you build it into a system with an interface for the physicians to tell you what's right or wrong with the image?
Artem: I would reply to your question in two ways.

First: yes, it was back and forth, back and forth. It was always a struggle to make the image collection easy enough for the user but, at the same time, meaningful enough for a doctor to spot something from the images. It was always a tug of war, and it still is.

You still have to balance making it easy for users against keeping the quality of the imagery good enough for doctors to make decisions. That's one part. The second part is that we accidentally found great product-market fit. At first, we were collecting these images wondering, hey, what if we can just run a machine learning model on the phone so people can do pre-screening by themselves?

So we made a data labeling tool for ourselves, essentially. We made a dashboard for doctors so they could see these pictures and say whether they could see something from them or not, and if they saw some health issues, what those were. And accidentally, we found that it's exactly what they need to do the telehealth job.

Like, to take care of their patients online. So this data labeling tool became our product, which is right now in several hospitals, enabling doctors to see the patient without actually seeing the patient in the office.
Deep: I want to understand this a little bit.

So you've got imagery coming up for a patient, with those different perspectives we talked about, and physicians looking at it, through maybe a web interface or something. And now they need to label what they're seeing. The first question I have is: how did you come up with that label set?

And to what extent did it include actual medical labels versus, maybe, photography problems or challenges? Walk me through the human data-gathering side: to what extent were you optimizing that interaction for machine learning data gathering, and to what extent for physician-patient assessment and interaction? Is there a difference, and how do you see that?
Artem: That's interesting, because it all started with just optimizing the data labeling for making machine learning models, nothing else. We were just making a tool; the existing tools on the market weren't good enough. We did one iteration, then another, then another. We gathered some data from people who volunteered to participate in this experiment, mostly from different local gyms.

So I went to different gyms and asked people, "Hey, can you basically let me take a picture of your feet?" People probably thought I was a little crazy, but that's exactly what I did, and people were like, "Yeah, okay, if it's for science, let's do that." And eventually this scrappy MVP for showing these pictures to doctors and doing the labeling evolved into a tool for doctors with real patients, where they can understand what they see and put notes in.

Also important: doctors have to use their own EHR system, where they basically keep a log of what's happening with each patient and attach evidence to it. And there are a lot of integration issues around making this process as smooth as possible.

Just easy to use for doctors. So it wasn't really about how we present the pictures: when we moved from a data collection tool for machine learning to an actual product for doctors, there was a lot of infrastructure work, how to make it HIPAA compliant, how to integrate with existing EHRs, and all that kind of stuff.
Deep: There's something that you did early on that I want to call out. I advise a lot of startups, and there's something I've seen that really differentiates the startups and projects that succeed from those that don't: the ones who immediately say, "I need the data; I'm going to just go get the data."

Then there's another group that wants to sit around and wait for the data to show up from some clinical context, which is usually a lot harder.

Harder, anyway, than being slightly embarrassed walking into a gym and just grabbing photos. So clearly you did the thing that I think really matters, which is you said, no, we're just going to focus on getting the data. Tell me how you made that decision. Was there any pushback saying, no, we want it in a clinical context, anything like that, or was it kind of straightforward?

Artem: Well, you see, when you collect the data in a full clinical context, you get all the consents from patients, the process really goes through the doctors, and you've got to go through all of that, through the IRB...
Deep: The review board, yeah.
Artem: Yeah, you have to go through all that. It's an investment of your time and money; a lot of things have to happen to support it. And it's also important to mention that usually people don't participate in such studies for free. You have to give some kind of coupon, like, I don't know, an Amazon gift card or whatever, to each participant.

So it's a budget, really. In order to actually do that, you have to be confident. I mean, okay, of course you can't be confident in anything, but you have to be reasonably confident that it's going to work. So that's pretty much what we did: at first we collected some data with this scrappy process of just asking the dudes in the gym, and we really validated that the whole idea works.
And then we made an official contract. We did all the IRB work, we got real patients at the hospital, real doctors. Everyone gave written consent to share this data, and we made it official.

Deep: All formalized.

Artem: Yeah, all formalized. It's a step-by-step approach.

Deep: And essentially you did the scrappy one first, and you kind of discard that data.

Artem: Yes, the data that wasn't properly collected you have to discard afterwards. But you've got the validation that the whole process really works, so you can invest more time and money into it and do it properly.
Deep: Let's talk a little bit about the conditions you're actually trying to assess. How did you determine what those conditions were, to get them into the labels that you had the physicians applying?

And how much training data did you actually feel you needed per class or per category? And then maybe walk us a little bit through the modeling stage, when you started actually trying to detect these things.

And one quick thing: are we only talking about regular imagery, or are we also talking about LiDAR imagery or something?
Artem: Well, it's a mix. For some of the things we use just the images, just RGB. But for some stuff we use the iPhone selfie camera, which is a structured-light camera that emits infrared light.

So not LiDAR per se, exactly; it's more like a structured-light camera. But it...

Deep: But it gets you, like, a 3D...

Artem: So it's an RGBD camera: RGB plus depth. For some of the conditions, we use the depth data as well. On Android, for example, we don't check all of the things that we can check on iOS.

So we check more things on iOS than on Android. Some parts of the algorithm use just RGB; some parts use depth data as well, or use depth data to enhance their accuracy.
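[Editor's note: a minimal sketch of one common way to fuse RGB with depth, stacking them into a four-channel input. This is our own illustrative PyTorch example under that assumption, not a description of Neatsy's actual architecture.]

```python
import torch
import torch.nn as nn

class RGBDEncoder(nn.Module):
    """Tiny encoder that consumes RGB plus a depth channel."""

    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1),  # 4 = R, G, B, depth
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, out_dim)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # (N, 3, H, W) + (N, 1, H, W) -> (N, 4, H, W)
        x = torch.cat([rgb, depth], dim=1)
        return self.head(self.conv(x).flatten(1))

# On devices without a depth sensor (most Android phones, per the
# conversation), an RGB-only variant of the model would be used instead.
```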
Now, about how we determine what we want to track: it's a pretty simple process. We just look at which conditions are frequent in the population and what can be seen through a visual examination done by a physician, essentially. If we're talking about foot problems, it's flat arch or high arch, hallux valgus, bunions, overpronation, supination, and other problems.

Overpronation and supination are also called valgus and varus deformation. And if we're talking about the spine, it's different posture issues: kyphosis, lordosis, scoliosis, turtle neck, uneven shoulders, which can also be a sign of scoliosis. All of these things can be seen by real doctors when you come to the office and they do an examination just with their own eyes.

Of course, there is a lot of stuff that can be seen only on a CT scan or MRI, and we cannot really pre-screen for that; we cannot really help with that. But the idea is to make a simple and efficient tool that uses just your phone, and almost everyone in the world has a smartphone right now, and make it a simple, scalable tool to pre-screen for very frequent conditions like spine problems or foot problems. As you know, foot problems and spine problems all kind of influence each other.

Sometimes you have spine problems because of foot problems; sometimes it's the other way around. A lot of times people have knee pain or hip pain because of a foot problem or a spine problem.
Deep: Any problem down there is going to ripple up through the chain, yeah.
Artem: Some people say, "Why do you have to assess the foot if I have knee pain?" I'm like, guys, it's literally two ends of one bone.

Of course they influence each other. Your ankle joint and your knee joint are literally two ends of one piece, so of course they influence each other, and even more so.

That's why this whole area is covered by one doctor, the orthopedic surgeon.
Deep: One thing I wanted to ask you about comes up a lot whenever you have a specialized human that you need to gather training data from, in this case an orthopedic specialist or a physician. That's expensive, obviously.

But after you start gathering data, you start seeing patterns, and you often realize that you don't need that senior a specialist to assess things. I'm sure you went through this: once you've seen enough high arches, you have other visual cues, and you can use a less specialized human to assess it.

Maybe one without, you know, four years of medical school plus a residency plus years of practice. How did you think about the cost-effectiveness of data gathering? Did you do anything about it, moving to less specialized humans to increase your data?

Or did you always stick with orthopedic specialists? And if you did use those other humans, how did you assess their efficacy relative to your true experts?
Artem: Well, that's a good question. I would say that we always stuck with orthopedic surgeons in our case, so we never used people who don't have a medical degree. Yes, we can talk about the difference between an orthopedic surgeon with 30 years of experience who really does surgery and a resident or intern who just graduated from med school.

It may be a different level of experience, of course, but both people have a medical degree, and that's what we always wanted. All the data that we have labeled is labeled by people with medical degrees. That's important. But there are ways to cut costs here. The first logical step: since I'm a Polish citizen, we did a lot of the data labeling in Poland, because the hourly rates of doctors are lower in Poland than in the United States.

That's the first obvious way you can cut costs. The second, which is much more important and actually creates the biggest impact on cost, because it's really very expensive to gather medical datasets, is unsupervised learning. Everybody is talking about it; everybody knows you can do a lot with unsupervised learning.

You can pre-train the models to distinguish different features of people's feet or spines just using a big set of unlabeled pictures. You train it the way you would train, I don't know, face recognition: with triplets, or some more sophisticated approach. Let's take the simplest way, triplets, like for face recognition.

Of course, right now the models are much more complicated and sophisticated, but the basic idea is that you teach the algorithm to distinguish pictures of two different people, and to say it's the same person if the pictures are of the same person, essentially.
Deep: So you distort? You go through maybe a distortion process on the image and try to...

Artem: You do, well, yes, but the most important thing is that you just collect a lot of pictures. It's much easier to collect lots and lots of pictures than to collect lots of pictures with doctors labeling them. There are a lot of different systems, like Toloka, for example, or Amazon Mechanical Turk, where you can put a small bounty on people doing certain tasks.
So you just collect lots and lots of pictures, and you train models, but...

Deep: The pictures are still of the feet, though?

Artem: Essentially, yes, or the spine. But you do it in a fully automated, fully anonymized way. You don't know anything about these people. It's just thousands, tens of thousands, hundreds of thousands of pictures;

they're just numbers for you, and you don't know anything about them. They're just random pictures, essentially, and you just know that they're mostly feet, or mostly spines, with some rare disturbances, and you train an algorithm to distinguish them. You also set up the task so that a given person takes several pictures in different lighting conditions, so you can train the algorithm to recognize pictures from the same person,

and to recognize that some pictures are not from the same person. That way you pre-train the model. You can make a really big model, without overfitting it, that can recognize different features. And then you just fine-tune it on the final dataset: you fine-tune the head of this model on the small dataset that is super expensive to collect within a proper clinical trial setting, essentially.
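[Editor's note: a minimal sketch of the triplet setup Artem describes. The anchor and positive are two photos of the same person's foot, e.g. under different lighting, and the negative is a different person's foot. The encoder and training loop here are illustrative assumptions.]

```python
import torch
import torch.nn as nn

# Self-supervised pretraining: pull embeddings of the same foot together,
# push embeddings of different people's feet apart.
triplet_loss = nn.TripletMarginLoss(margin=0.2)

def pretrain_step(encoder: nn.Module, optimizer: torch.optim.Optimizer,
                  anchor: torch.Tensor, positive: torch.Tensor,
                  negative: torch.Tensor) -> float:
    """anchor/positive: two unlabeled images of the same person's foot;
    negative: an unlabeled image of a different person's foot."""
    optimizer.zero_grad()
    loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()
    optimizer.step()
    return loss.item()
```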
Deep: Did you start that with a bootstrapped model that you were fine-tuning? Something like, I don't know, VGG-16 or something big?

Artem: Well, it wasn't VGG-16 in particular; we had a more lightweight thing.

But essentially, yes, it's unsupervised learning. The more data you have, the better, even when you don't have the medical labels; you train the model to basically understand the pictures.

Deep: Yeah, the model comes to understand feet and how to look at feet, essentially. And then you're fine-tuning on these specific conditions, but it already knows how to look at feet; you've trained the lower layers of the network. That makes sense.
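[Editor's note: a minimal sketch of that fine-tuning step: freeze the pretrained backbone and train only a small head on the expensive, doctor-labeled data. The function and dimension names are illustrative assumptions.]

```python
import torch.nn as nn

def build_finetune_model(backbone: nn.Module, feat_dim: int,
                         n_conditions: int) -> nn.Module:
    """Freeze the pretrained encoder and attach a trainable head.

    backbone: encoder pretrained on unlabeled foot/spine images,
              assumed to output (N, feat_dim) embeddings.
    n_conditions: number of labeled conditions (flat arch, high arch, ...).
    """
    for p in backbone.parameters():
        p.requires_grad = False  # lower layers already "know how to look at feet"
    head = nn.Linear(feat_dim, n_conditions)  # only this part is trained
    return nn.Sequential(backbone, head)
```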
Deep: Shifting gears slightly to the business question. If we rewind to pre- and post-COVID: pre-COVID, there was very little remote physician-patient interaction; post-COVID, it skyrocketed.
Artem: Yeah.
Deep: Skyrocketed, right? But part of the problem that you're addressing is that even that is somewhat limited, right?

Before Neatsy and other remote diagnostics, remote imagery-gathering, and other data-gathering capabilities come along, you're really left with what a physician can do by looking at a patient through a camera.
Artem: Yeah.
Deep: And the movement that you're leading the way on, or are a part of, is this idea that we can't just stick patients in front of their laptop; we have to get other data that's more meaningful if we're not going to have them in the office with the physician. That's not to say we don't want them in the office with the physician, but there are a million reasons why maybe you can't...

Artem: ...get there as often as you'd want to, yeah.

Deep: Or that particular specialist is, you know, not in the middle of Wyoming. Yeah.
Tell me a little bit about one of the things that I find interesting about your approach. There are a number of companies building specialized hardware and devices to gather information on behalf of the patient for their supervising physicians and nursing staff.

And it's everything from stuff monitoring heartbeats and heart rates, to something to look inside the ear, nose, and throat, all kinds of different devices.
Artem: Yeah, dermatology stuff.

Deep: So let's change gears a little bit. I'm going to ask you to project out five or ten years. What are the challenges in the healthcare system today that make it difficult for a physician to use devices to gather this kind of data that they need? How did you guys in particular circumvent or address that? And what are some of the challenges you see in the space in general?
Artem: I would say, first and foremost, as everyone knows, 90 percent of healthcare money lies in insurance. Only 10 percent is paid out of pocket, and 90 percent is insurance. And there aren't a lot of insurance companies; it's a bunch of them, and they can dictate the rules

pretty significantly: how they reimburse, which conditions they reimburse. The United States has a bunch of health insurance companies, but in Europe, for example, it's even more complicated, because the government pretty much sets the standard:

how doctors get reimbursed from public funds, how they should work, how they have to treat patients. And yes, there is a tectonic shift happening right after COVID, and it was mostly driven by health insurance companies changing the rules. Look, now doctors can be reimbursed by health insurance for a telehealth visit. Imagine what was happening five years ago, before the COVID era: doctors simply could not be reimbursed

for many, many conditions and much of the work that they do. So I think this is the major factor actually driving the industry and giving doctors the ability to use all of these devices. What we see here is the market shifting and insurance recognizing it.

And I think insurers have already started to recognize that telehealth is actually cheaper from the insurance standpoint than paying facility fees, because sometimes it's not really the doctor's time that costs a lot; it's the facility fees that cost a lot. I think insurance companies just have to be snappier and faster, because it's actually very good for them to let doctors be reimbursed for providing telehealth services and using all these remote patient monitoring tools you mentioned. For example, we do remote patient monitoring with just the phone.

We're doing remote patient monitoring within this niche of orthopedics, and there are a lot of other companies in other niches of healthcare doing remote patient monitoring, either with just the phone or with a special device delivered to the patient's door. And it's all happening because you can provide better care for the patient, and it's actually cheaper to provide this better care, so more people can benefit from it.
Deep: Well, let's talk a little bit about the better-care thing; I don't think we got a chance to address it. You're describing the models and the conditions you're trying to assess via the model, and ultimately you're building this assistive tool so that the physician knows what the model thinks.

Ultimately, for regulatory reasons, of course, the physician has to make the decision. But just as machines have recall error rates, where a given condition is present and they don't see it, you can imagine real human physicians have the same problem.

So it's really this case where the machine is saying, "Hey, this is what we think," maybe using some explainability techniques over time to say why. The physician looks at it and makes the ultimate call based on what they know, but at least you know that anything the model is capable of understanding has been considered. Whereas if you have no assistance, I don't know, maybe the physician's

memory is going or something. They're human too. They're busy; maybe they were in surgery for 14 hours. There are all kinds of things that can cause either a misdiagnosis or a missed diagnosis.
Artem: I would say that exactly what you're talking about right now is the reason why people want to go to bigger hospitals, the ones that are scientific centers.

People want to go to a scientific center because they understand that there's a bunch of doctors there who can discuss your case together, so there will be several opinions, and you might get better diagnostics, a better outcome, a better treatment plan because of that. They're always discussing ideas, discussing all the new stuff going on in the industry.

And they also discuss patient cases. If it's some kind of borderline case, two or three doctors can take a look at your imagery, take a look at your CT scan: hey, well, maybe there's no need, or yes, you have to do the surgery, because it will be better.
Deep: And physicians have different core expertise, right? Not every cardiologist is great at listening with a stethoscope for some particular rare heartbeat anomaly, but they might be epically great at the actual surgery and intervention, which is probably where they spend most of their time and energy.

So I like this idea: why do we go to collaborative physicians, and isn't the model itself just a collaboration across thousands of physicians?
Artem: Yeah, exactly. You can think about it in two ways. The first way: the AI model itself is kind of a summary of the knowledge of many, many physicians, hundreds if possible. So it's never going to be smarter than a really smart physician, but it's going to be very good, maybe even better than the average physician.
Deep: That's... I don't know. A lot of models are outperforming smaller sets of physicians. So I would probably argue that, if anything, it's going to be much better, at least for straight pattern-diagnosis scenarios.
Artem: But I would also add that the benefit of using this second opinion from AI is that you can get this level of scrutiny, if you will, this level of detail and a detailed look into your case, not just within big hospital chains and big scientific centers, but in every corner of the earth, even in a small practice.

Because your doctor can consult with a virtual AI doctor, more eyes have taken a look at your case, and that improves your chances that it's going to be all right.
Deep: And I imagine you're employing some kind of active learning technique, maybe software, where whenever the model output disagrees with your humans, if nothing else, you trigger them all to look at it again.

The humans dig in and say, "No, actually, we were wrong here," or, "No, we were right here." Either way, that's good for the model moving forward and, frankly, for the humans who might be learning from it.

Artem: Yeah, and for the patients, essentially. It's better for the patients.
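[Editor's note: a minimal sketch of the disagreement-triggered review loop Deep describes. The data shapes and threshold are illustrative assumptions, not a known part of Neatsy's pipeline.]

```python
def flag_for_review(cases, model_predict, confidence_thresh=0.5):
    """Route cases where the model and the physician disagree back to
    physicians for a second look. Either outcome helps: a confirmed label
    hardens the training set, a corrected label fixes it.

    cases: iterable of (image, physician_label) pairs.
    model_predict: callable returning {condition: probability} for an image.
    """
    review_queue = []
    for image, physician_label in cases:
        probs = model_predict(image)
        model_label = max(probs, key=probs.get)
        disagrees = model_label != physician_label
        if disagrees and probs[model_label] >= confidence_thresh:
            review_queue.append((image, physician_label, model_label))
    return review_queue
```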
Deep: This has been a really fantastic conversation, and super insightful. I want to end with this: let's fast-forward five or ten years into the future. Describe for me, if everything you envision happening at Neatsy gets realized, and if everything happens at the other companies that are specialized in different areas, dermatology, whatever, what does the world look like?

From a patient standpoint, is it better? Is it worse? How is it better?
Artem: Well, I would say that over the years, the whole problem of doctor scarcity, especially in developed countries, becomes more evident because of the aging population: fewer doctors in the industry, and doctors having to serve more and more people every year.

That's one of the trends. And essentially, I would say that all of these remote patient monitoring tools will help overcome this trend and make healthcare more available than it is today. So we'll have better care, just sitting at home, probably at a cheaper price.

I hope so. I really hope so. I also believe it will become just a thing that people don't even realize is there. It's like the convenient air travel we have right now: people 50 years ago didn't have that, because it was super expensive, only for some high-class guys.

And right now I can buy tickets to the other side of the world and just travel there by airplane. I don't even think of it as something special; it's just the usual thing. And I'm hopeful that in ten years, all of these tools that I'm building, that all the other companies are building, are just going to become

a commodity, essentially, for patients. The stuff is there, and it's helpful, and people don't even think of it as something they could not have.
Deep: I'll throw in my two bits on the ten-year outlook. My hope is that when you decrease the barrier to getting a professional assessment, you also get people coming in. Think of all the people who don't want to drive an hour to go to the physician, deal with the insurance, all that stuff, just to get something looked at.

What ends up happening is it doesn't get looked at until it becomes more serious. So you just wait until...

Artem: ...it becomes more serious. You can catch things much...

Deep: ...much earlier, and so I think the overall care around all of these scenarios is just going to go up. Everything from smart orthopedic capabilities, smart dermatological capabilities, smart toilets, all of these different tools.

Ultimately, if we decrease that friction enough, decrease the cost, decrease the time burden, and the models get strong enough that we can get more predictive, earlier and earlier and further and further upstream, I think we're just talking about a much healthier population. We'll be able to intervene earlier.

It's really a lot of upside.
Artem: Yeah, and I want to make a deal with you. When we're going to be 80 years old, me and you...

Deep: Yeah?

Artem: ...we're going to run a marathon. And we'll be able to do that.

Deep: I don't know. My cousin's a cardiologist; he would beg to differ. He says, "My best patients are all marathoners." But our joints and muscles would be fine; our hearts, maybe not so much.
Artem: I've just had the opportunity to reflect on how much stuff we did that didn't work, actually. How much trial and error there was in the earlier stages of our path with this company. There was so much of this hardcore research and development: you do something, it doesn't work, you throw it out; you do something else, it doesn't work, you throw it out. But that's kind of the thing when you do science. When you try something and it doesn't work, it's a result, a legitimate result, right?
Deep: I know, we tend to dismiss the dead ends that we went through.

Artem: Because when you do business, the dead ends are actually not cool. But when you do science, they are cool.

Deep: Well, I would argue that even in science, it's a huge hole in the literature: nobody publishes their failed experiments at nearly the level that they do their successful ones.

Well, Artem, thanks so much for coming on the show. I think it was really a terrific conversation.

Artem: Yeah, it was a really wonderful conversation. Thank you for the really insightful questions.

Deep: That was a fun episode. Keep up the good work; I think this stuff is so important.