Will AI-powered chatbots soon become students' most trusted advisors?
In this episode of Your AI Injection, host Deep Dhillon chats with Andrew Magliozzi, CEO of Mainstay, about AI-driven student support. Andrew shares how AI-powered coaches can proactively nudge students to stay on track with everything from filing financial aid forms to meeting academic deadlines. But what happens when students need deeper conversations about their future? The two explore how innovations like AI-driven text messages helped boost one school’s graduation rates by ~11% and why students often trust bots with personal questions they might never ask an advisor. They also dive into the delicate balance between automation and human intervention, the ethical concerns of AI in education, and what the future of advising—and education as a whole—might look like.
Learn more about Andrew here: https://www.linkedin.com/in/andrewmagliozzi/
and Mainstay here: https://www.linkedin.com/company/heymainstay/
Xyonix Solutions
At Xyonix, we enhance your AI-powered solutions by designing custom AI models for accurate predictions, driving an AI-led transformation, and enabling new levels of operational efficiency and innovation. Learn more about Xyonix's Virtual Concierge Solution, the best way to enhance your customers' satisfaction.
[Automated Transcript]
Drew: The truth is you can't teach anyone anything. You can merely inspire them to want to learn, then take advantage of those moments when their minds are open.
And so the best technology is the one that maximizes both. I'll give you an example of something we've learned. All the time, students tell us, hey, I love your product because I was able to ask you things I was ashamed to ask people; it's nonjudgmental. And that's phenomenal. I love that.
But the flip side of judgment is accountability. One of the things we've learned, and I shared this earlier, is that it's more effective with a person in there. The reason is that if I tell an AI, hey, I am going to file that FAFSA paperwork by next Tuesday, I promise, not only is it nonjudgmental, I don't mind if I let it down, because it's not disappointed.
But if I tell my advisor, you better believe I'm more likely to file that paperwork.
Deep: Drew, thanks so much for coming on the show.
Maybe get us started by telling us what problem you're trying to solve at Mainstay. What's the pre-Mainstay situation, and then walk us through what's different with the Mainstay solution.
Drew: Sure. And thanks for having me on Deep.
I'm excited. All of our conversations always go into unexpected territory. So I can't wait.
Deep: Yeah, I know. I'm super psyched. So yeah, for our audience's benefit, Drew and I know each other; we've worked together. If I jump ahead with too many assumptions about what Mainstay does, please be like, hey, I don't know if the audience knows.
Drew: I'll give the 30-second version. Basically, we're an AI-enhanced student success platform, and we have three fundamental products. One is an AI-powered coach for college success. We reach students from the moment they express interest in college all the way through to graduation and coach them every step of the way, usually sitting on top of some data source that informs our communication, most of which is by text message and proactive.
We push content and say things like, hey, you have to file financial aid, the deadline's coming up. We oftentimes invite people into a back-and-forth conversation to help them get through whatever barrier they're facing. We engage about 5 million learners a year across the United States, 225 customers, mostly higher-ed institutions.
The second thing is a conversation copilot. About 2 percent of the time, people need to be in these conversations, and 2 percent might seem small, but we do have a lot of messages; we're approaching a billion. When advisors get in the mix, we want to make sure they're as effective and efficient as possible.
They have an administrative dashboard that uses generative AI to summarize conversations, suggest replies, and things like that. The third piece is insights for leaders: helping them understand, hey, we're talking to every one of your students on campus right now. What are they saying? What's their sentiment?
What are the topics that are tripping them up? And what might be opportunities for improvement? I think maybe what you're getting at is the one thing that makes us unique: we're one of the few companies that believes in randomized controlled trials, the gold standard of research, usually reserved for medicine or drug discovery.
But we apply it to education as well, to prove beyond a shadow of a doubt what our impact is. It's the most rigorous research, and we always peer-review and publish as well. And we work with independent evaluators, so it's not us publishing, but our partners at the University of Pittsburgh or Brown or Stanford or wherever. Typically we move the needle 3 to 5 percentage points on enrollment, persistence to a degree, and academic performance.
Actually, academic performance is maybe our biggest impact; usually it's about two-thirds of a letter grade improvement on average. We do this by helping people navigate the sort of barriers that come up when they're enrolling in college: the FAFSA, immunization forms, applying for financial aid, housing, meeting your advisor, et cetera.
On the journey to a degree, it's making sure you're maintaining satisfactory academic progress, refiling the FAFSA, meeting your advisor, declaring a major, and hitting all those key milestones. And similarly in the classroom: that you're preparing for your course, making study plans, doing your homework.
Generally speaking, if you talk to our product on the college journey, you're about 11 percent more likely to graduate than if you didn't engage with us. Actually, this is maybe one of the first areas to go into: we've learned something from our research, and the most important study we ever did was the one that had the least impact.
It did so because it was the one time we did not have a trusted human being in the loop. About 2 percent of those messages need a person to intervene. You could ask a chatbot, and oftentimes we know the answer, but you might say, hey, how do I drop out of college?
And we can give you the link to the dropout form and tell you how to complete it. Or we can
Deep: start asking, well, wait, what's going on?
Drew: Exactly, and get your advisor in the mix. It turns out that when we have a human in the loop consistently, it triples the impact of our product. We're about a third as effective without a person.
Deep: Before we get into all that, I want to rewind a bit and just focus on the student for a moment. What's the student experience? I also want to speak a little bit to why universities are attracted to your solution, specifically what happened before a bot was in the relationship with a student all the way through school. In many schools, that's actually not the case; my daughter's at school, and she doesn't have a bot talking to her all the time. It's a unique concept. Tell us a little bit about the student's very first interaction. How is it coming to them? Is it an app?
How does the bot develop a relationship with the student?
Drew: Usually it's around the time when they're admitted to school. They'll get their acceptance by mail or email, and our mechanism is text message. You get a text message, and actually I think we're oftentimes the first to be received.
It'll say, hey, congratulations, welcome to Podunk University. I'm your mascot, so we embody the mascot of the school: a blue panther, a red alligator, whatever the mascot happens to be. We start a conversation over text message that says, I'm going to be here guiding you every step of the way.
Ask me anything at any time, and sometimes I'm going to ask you questions, but if you talk to me, you'll generally be more successful. And that's how it begins. There's never an app to download, never anything to sign up for. We just have this conversation in the same place you talk to your friends and loved ones.
Deep: And do you have a sense of what the response is like from students in those early days? Is it, I'm going to block this thing? Is it, I'm excited to talk to it? Who's behind this thing? Do they know it's a bot?
Drew: we always tell them it's a bot, but we make sure they understand that there are people behind the scenes if they need them.
You can ask for a human at any time. And typically, we'll get your actual counselor or advisor, whoever is assigned to your case at the school, into the conversation. The way it tends to operate is, usually we're pushing outbound messages
two to three times a week per student; it depends on your circumstances. If you've already completed the FAFSA, the financial aid forms, we're not going to nudge you about it. But if the deadline's coming, we're going to be up in your business frequently to help you get that done.
And so the key here is being data-informed. We always have to have access to a data source to do the best job we can, and then have the most contextual conversation in a way that lowers barriers to involvement and really maximizes the likelihood someone's going to follow through.
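The data-informed nudging Drew describes, skipping students whose records show the task is done and ramping up frequency as a deadline nears, can be sketched roughly like this. The record fields and cadence thresholds here are illustrative assumptions, not Mainstay's actual schema:

```python
from datetime import date

def should_nudge(student: dict, today: date) -> bool:
    """Decide whether to send a FAFSA nudge to this student today.

    `student` is a hypothetical record like:
    {"fafsa_filed": False, "fafsa_deadline": date(2025, 3, 1)}
    """
    if student["fafsa_filed"]:
        return False  # already done: never nudge about a completed task
    days_left = (student["fafsa_deadline"] - today).days
    if days_left < 0:
        return False  # deadline passed; a different intervention applies
    # Ramp up frequency as the deadline nears: weekly when far out,
    # every other day inside two weeks, daily in the final three days.
    if days_left <= 3:
        return True
    if days_left <= 14:
        return days_left % 2 == 0
    return today.weekday() == 0  # Mondays only when the deadline is far off
```

The point of the sketch is the ordering of checks: completion status in the data source silences the nudge entirely before any cadence logic runs, which is what keeps the outreach contextual rather than spammy.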
So we work with a bunch of behavioral scientists, cognitive scientists, and emotion scientists to make sure the messaging uses the 40 principles of quality coaching. Typically, like we said, it moves the needle pretty substantially when done right.
Deep: Without the bot, universities have FAQs and emails coming out from the administration. Back in our day, you just, as a student, had to figure things out by talking to actual humans, usually other students, and it would get through the rumor mill and stuff.
Walk us through why a bot is better than all those other precursors and surrounding pieces of information. And particularly, I think it's key that it's actually texting and not an app, where it might get lost inside their universe.
Drew: Yeah, I'd be remiss to say it's better than any of those things.
It's best in addition to those things, because they all work in concert. Typically, we see the most impact for the type of students who have the least help-seeking behavior or familiarity: the first-gen student, the student from a low-income background. They may not have the social structure of an older sibling or a parent they can ask for help.
Sometimes it's around really sensitive stuff. If you need to complete your financial aid forms but haven't seen your birth father for five years, it's really hard to walk into a stranger's office and talk about that. And it's unlikely to be on the school's website either.
And so a lot of the work involves mindset, shame, resilience, and really creating a nonjudgmental space for students to engage. Like I said, we bring people into the conversation when necessary, when engagement is paramount. We look at ourselves like a waiter at a fine restaurant:
we want to anticipate your needs, be at your side, folding your napkin before you know you dropped it. But then we want to recede into the background and be invisible so that you can focus on all the other engagement around you. The mindset of why we do this is to get positive student success outcomes.
And that really drives pretty much all of our product design and how we go about building this thing. I would say a key question we're often asking is, how do we help students maximize the resources around them in the moments they need to tap into those things?
You don't need to know about peer tutoring all the time, until you need it. A friend of mine often says, in college, students don't care, and then they care a lot, and then they don't care. Which is to say there are peaks and valleys in where their attention goes. So can you maximize the peaks, even if it's two in the morning when they can't sleep and they're whispering to their phones some trouble they have when no one else is around?
Deep: Drew, one of the things I found really helpful when trying to understand Mainstay's impact was the administration experience. So maybe walk us from the time you have your first conversations with the administration. What are their questions? Because there is quite a bit, almost complete control over what's going on in these interactions.
What are the administration's concerns up front? What kinds of knobs and dials are you giving them so they can totally customize the experience for their student body? And what kinds of outreaches are they actually making? Yeah, let's start there.
What does the admin experience look like?
Drew: Yeah, it begins in, I guess you could call it, a sales consultation. And the key question is always: why do you want AI? Because AI is the hammer and everything looks like a nail, but it isn't. Really, there are two fundamental things people look to AI for.
The most popular one is efficiency. Hey, we want to get more efficient. And in some drab cases, they want to cut headcount. But the really bold, ambitious reason to adopt AI is to drive better outcomes, and in higher ed there's so much work that simply is not done. If instead of having 500 students to advise, I was an advisor for one student,
I would be way more involved. I would personally be texting them every day. I would be checking in on how they were doing. I would be celebrating the victories and the milestones, not just telling them about the roadblocks or issues they're encountering. And so what we try to do is help the people on these staffs do all the things they would do if they had infinite time.
I would say our product is more like an Iron Man suit for communication than a robot taking over. Actually, there's a great case study on our website about one of our skilled operators. Her name is Zoe, and she's at Cal Poly Pomona. This school, a 30,000-student school, has three advisors
coaching all 30,000 students a week, constantly, with our product.
Deep: I see. So this isn't three college advisors for all of their students; these are the ones specifically working with the product.
Drew: Yes, these are the three academic advisors that they have on staff.
In addition to using our product, they also spend time meeting one-on-one with students. But really what we are doing is being proactive based on the data we have access to, and they are jumping into the conversations frequently. In fact, they do so frequently that people often ask the bot, hey, is Zoe there?
I'd like to talk to her. And then Zoe drops in on the conversation.
Deep: So you've got the bots having conversations with 30,000 students, and there are these three humans jumping in. Are they jumping in on one-to-one conversations, one-to-many conversations, or both?
Drew: Typically when they're getting an escalation, that's to one student, but they're administering and helping guide the conversation: what's going out proactively, to which students, and when. So we create a conversation strategy, we deploy the strategy, and we have some things on a calendar, some things based on data triggers.
And you can use some predictive modeling to focus outreach on certain students at certain times, but that is pretty much on autopilot. What they have to do is handle all the escalations that come in all the time. And they get 90 percent of students actively engaging on a monthly basis.
It's really remarkable, at a massive campus. And then just trying to reconcile the issues that might arise.
Deep: What did they do before? What did Cal Poly Pomona do before the system? Did they tap into their faculty, with all the faculty time compartmentalized and assigned, I don't know, 30 students per faculty member or something?
What were they doing before?
Drew: They had a center you could come into and drop in. These are the specific academic advisors, but they had other advisors as well. And then, yeah, there's ad hoc stuff that happens all the time, to faculty, to RAs. There's mentorship, guidance, and support happening constantly, but there's typically no one unified platform for it.
And it's not clear who to go to for any piece of information. So what we try to be is, hey, we've consolidated all the knowledge: we've scraped the website, and we're also maintaining a knowledge base. Actually, this is one of the things that does appeal to colleges. They're very interested in generative AI,
but they are cautious about it, rightly so; they're nervous about hallucinations. Thanks to your help, Deep, we do retrieval-augmented generation on any resources they give us, and then you can answer instantly based on that, with an audit trail. But if they want to script an answer, if they say, oh, that was a good answer, but we want to override it, then they can create a knowledge base entry, which will only give verbatim the answer they've scripted.
And so it's the best of both worlds. It lets you leverage all the resources you already have with generative AI, and for stuff where accuracy is essential, like financial aid forms or other critical issues, you can override it. And then the third piece is, for anything at all that we call sensitive topics, you can say, hey, I want to delegate this to a person and escalate it, whether it's someone saying they want to drop out or something more serious, so that we can get a person in the conversation instantly.
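The three layers Drew lays out, verbatim knowledge-base overrides, retrieval-augmented generation as the fallback, and unconditional escalation for sensitive topics, imply a routing order that a minimal sketch can make concrete. Everything here is illustrative: the toy keyword classifier stands in for a real model, and the function names are invented for the example.

```python
def answer(question: str, kb: dict, sensitive: set, rag_answer) -> dict:
    """Route a student question through the three layers.

    kb:         topic -> scripted verbatim answer (staff-authored overrides)
    sensitive:  topics that must always reach a human
    rag_answer: callable used as the retrieval-augmented generation fallback
    """
    topic = classify(question)
    if topic in sensitive:
        # Sensitive topics bypass answering entirely and go to a person.
        return {"action": "escalate", "topic": topic}
    if topic in kb:
        # A scripted knowledge-base entry wins over the generative answer.
        return {"action": "reply", "text": kb[topic], "source": "knowledge_base"}
    return {"action": "reply", "text": rag_answer(question), "source": "rag"}

def classify(question: str) -> str:
    """Toy keyword classifier standing in for a real topic model."""
    q = question.lower()
    if "drop out" in q or "withdraw" in q:
        return "dropout"
    if "financial aid" in q or "fafsa" in q:
        return "financial_aid"
    return "general"
```

Note the precedence: sensitive topics are checked first, so no scripted or generated answer can ever pre-empt getting a person into the conversation.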
Deep: Okay. So I want to maybe explain this a bit to the audience. The college or university has a bunch of existing content on their websites that already speaks to a lot of the stuff you could look up on their sites, or via Google to get to their sites, and get answers from.
But it's very dependent on a student's desire and ability to actually motivate to go ask the right questions. All that information gets slurped in by Mainstay and turned into a curatable knowledge base. Then university administrators, like these three folks at Cal Poly Pomona, can go in and edit, say, the frequently asked questions to get super specific, tailored responses.
And then there's a generative bot that can always handle stuff, is involved in routing to humans and resources, and delivers verbatim responses when appropriate. Is that basically it?
Drew: You nailed it, Deep. And our metric of success isn't engagement; it's positive outcomes. So I'll give you an example.
Another school, also in California, saw that at the end of July they were getting a lot of housing questions, like real spiky. And they said, huh, that's really interesting, because the housing outreach was actually scheduled for the following week, the first week of August.
And they said to themselves, and we said, this is an interesting indicator that we should change something, because we don't view this spike in questions as a good thing. We view it as just the tip of the iceberg. In our experience, for every one student that asks a question, there are probably ten more who don't.
Deep: Yeah.
Drew: And so it's a great indicator. The next year, for the next cohort, we actually pulled the housing outreach up to early July, because we could anticipate the need and the questions on their minds and say, hey, just so you know, here are the details, and here's what you do if you need more information.
And it actually turns out that most of our questions, about 70 percent of them, are in response to something we've said. Hey, you've been flagged for FAFSA verification; just so you know, this is totally normal, but we do have to resolve it for you to get your financial aid award. Do you have any questions?
We're here to help you every step of the way. That's when the questions come in. It's, okay, yes, bing, and then we're handling those. And so, to the degree possible, we anticipate the questions and just tell people the information they need before they even realize what they don't know.
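The signal that drove the schedule change, a spike in housing questions the week before outreach was due, is the kind of thing a simple trailing-average check over weekly topic counts can surface. The multiplier and floor below are arbitrary illustrations, not anything Mainstay has described:

```python
def is_spike(weekly_counts: list, factor: float = 2.0) -> bool:
    """Flag a topic whose latest weekly question count far exceeds
    its trailing average. `weekly_counts` is ordered oldest to newest."""
    *history, latest = weekly_counts
    if not history:
        return False  # no baseline yet, nothing to compare against
    baseline = sum(history) / len(history)
    # Require real volume (>= 5) so a near-zero baseline doesn't
    # flag a handful of questions as a spike.
    return latest >= max(factor * baseline, 5)
```

A dashboard running a check like this per topic each week would have surfaced the late-July housing spike as a prompt to move the outreach earlier, which is exactly what the team did by hand.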
Deep: Yeah, so basically those three administrators at Cal Poly Pomona, or the equivalent administration at a different university, have an admin console that's elevating, say, questions with insufficient answers. And so they can, at any point in time, grab a demographic of students, maybe all freshmen or whatever, and reach out to them with a pointer to the information on the website or something.
The part that's really interesting is they can push that information out, but now the bot's instrumented to have a tailored dialogue about that piece of information with them. And it's generative and feels good, presumably like a ChatGPT kind of
interaction, but it's super anchored in something a human initiated due to a genuine gap in information in the student body. Is that right?
Drew: 100 percent. Yeah. And actually, you're hitting on something that I think is a missing piece in the ecosystem for generative AI. ChatGPT is
downright amazing, but you have to go to it. You have to write the right prompt to get the support you need. It's like saying an incantation in Harry Potter: if you don't say the spell quite right, you're not going to get it to behave the way you need. And you have to remember to take that action and go ahead and do it.
And actually, one of the fears, but also opportunities, is that for this to really drive equitable outcomes that raise all ships, you have to be proactive and push it to folks. So the thing we're able to do now, and again, thanks for your help on this, Deep, is create prompts that are about specific things, like choosing a major.
Before ChatGPT, we had to do these scripted, choose-your-own-adventure dialogues, which are really helpful. But now we have the ability to run generative dialogues that can go in way more directions than a person could ever script, yet still stay constrained to a topic like choosing a major, or setting a career goal, or whatever else these critical conversations need to be about.
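A topic-constrained generative dialogue of the kind Drew describes is typically achieved with a system prompt that pins the conversation to one coaching topic. This is a hypothetical sketch in the common chat-message format; the wording is invented, not Mainstay's actual prompt:

```python
def coaching_prompt(topic: str, student_message: str) -> list:
    """Build a chat prompt that keeps a generative dialogue anchored
    to a single coaching topic, with an escape hatch for sensitive topics.
    The instructions here are illustrative, not a production prompt."""
    system = (
        f"You are a supportive college success coach. Stay on the topic of "
        f"{topic}. Ask one short follow-up question at a time; do not lecture "
        f"or dump long answers. If the student raises anything sensitive, "
        f"such as self-harm or dropping out, respond only that you are "
        f"connecting them with their advisor."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": student_message},
    ]
```

The same scaffold works for any of the "critical conversations" mentioned above: swap in "setting a career goal" as the topic and the dialogue stays open-ended within that lane.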
And we don't think of ourselves as the be all end all. We're sort of like the triage nurse
Deep: before you see the doctor. The mascot's the intake specialist, who do I route to?
Drew: Yeah. And sometimes we're able to resolve it: oh, we got this all the way to resolution, great. But sometimes it's, oh, let's get your advisor involved.
Here's a summary of the conversation, advisor. Here's the full transcript if you want to read it. Here's a suggested reply using the qualities of coaching we've learned from our research partners. Then they can modify and personalize it and adapt whatever they want to say there, but it gives them a frame and a nudge to be their best self.
Deep: I think this is something I want to double-click on a bit, because it's a general pattern we've seen in a lot of other arenas outside of, let's say, the college advisor. It's this idea that we can leverage assistance to make humans better at what they do.
Most people who've been to college have had the experience of an insufficient college advisor. I personally can't say that, because I happened to have just an unbelievable advisor when I was in school, but I also had a very privileged experience. I've got a daughter who's a sophomore in college right now, and she's absolutely not getting that experience. One of the things I think is powerful here is this idea of a rubric almost grading the advisor's responses, because it's not so much about answering a question verbatim, right?
As we've talked a lot about in the past, it's also about empathizing with the student, encouraging their individual empowerment so that they're grabbing some responsibility and going off and doing something. There are a number of things that are very specific to getting a college student motivated to go off and seek answers themselves.
An individual response can easily be embellished and taken as an opportunity to throw in a little bit of motivation and some empathy, even though what they're asking might be, I don't know, factually answerable, like, hey, I'm strung out about grad school, when's the deadline? You can imagine a busy faculty member saying April 1st, whatever, giving a date, and boom, they're done. But that's not actually what's being asked there. So maybe talk a little bit about how you guys are using these bots to make humans better.
Drew: Yeah, you hit the nail on the head, Deep. And this is human nature; we all do it. When we encounter somebody who's struggling, we mostly end up acting like the cheerleader.
Hey, I believe in you. You can do it. Put your head down. You've got this.
When what people really need is the coach, someone who's going to listen, empathize, as you said, ask follow up questions.
Be a companion and a collaborator, not a dictator telling you what to do or a cheerleader from the sideline. Someone who's really going to get in it with you and help you self-solve the problem. I actually think there's a philosophical gap in the way a lot of these generative AI assistants and teachers and tutors are being built.
They trivialize the process of tutoring as, hey, it's knowledge transfer, which is a part of the process,
Deep: But then they just barf at you, right? All the LLMs would just vomit a thousand words at you. And people don't necessarily have the skills to tell it, look, I don't want more. A lot of the stuff we do with these machines is to get them to interact in a more normal way than a question, giant-answer way.
Drew: Yeah, exactly. Like the best educators: if you imagine in your mind's eye the best teacher you ever had in your life, whoever that is, Mr. Greco in 8th grade algebra class, and I said, hey, Mr. Greco, what's the answer to this problem, there is never a world in which he would tell me the answer.
But it's also not even about walking me through the process as much as motivating me to stay in the struggle, because it's actually the productive struggle where the learning happens. It's like exercising a muscle: how long can you keep somebody doing the thing that makes their brain hurt?
That's the learning. I think the challenge is people think of it all as one thing, but it's actually two, and probably more the motivation than the knowledge transfer, because the truth is you can't teach anyone anything. You can merely inspire them to want to learn, then take advantage of those moments when their minds are open.
And so the best technology is the one that maximizes both. I'll give you an example of something we've learned. All the time, students tell us, hey, I love your product because I was able to ask you things I was ashamed to ask people; it's nonjudgmental. And that's phenomenal. I love that.
But the flip side of judgment is accountability. One of the things we've learned, and I shared this earlier, is that it's more effective with a person in there. The reason is that if I tell an AI, hey, I am going to file that FAFSA paperwork by next Tuesday, I promise, not only is it nonjudgmental, I don't mind if I let it down, because it's not disappointed.
But if I tell my advisor, you better believe I'm more likely to file that paperwork. And so the best conversations are the ones with all three, the student, the advisor, and the AI, in one stream.
Deep: Drew, I have a question for you. This sounds like a tangent, but it's just a brief tangent.
I've heard this critique of the future where children are going to be raised, and it's a matter of weeks or months, or maybe the products are already out, with a little stuffed animal they talk to all the time. They'll grow up in this socialization context where bots are cheering them on, being positive, doing all these wonderful positive things, and these kids are going to have a hard time interacting with actual humans, who sometimes don't act perfect and sometimes say rough stuff. I'll give you a really specific example.
I had a friend in college, and the most seminal moment for him in his college experience: he was a solid C-minus, D student his freshman year. He was a really bright guy, but he'd struggled a bunch, and he knocked on the door of the dean of the electrical engineering department.
I'm an EE major. And this guy was a world-renowned expert in power systems theory; he was the thing, he was it. And he just ignores him. He knocks on his door, during office hours even, three times.
Drew: Like the Wizard of Oz.
Deep: Yeah, and then finally he just does one of these and says, what? And, I don't know what he needed from him exactly, but he needed something. And he looks at him and says, what's your name?
He told him; the dean looked him up, looks at him again, and says, why would I do anything for you?
And my friend's like, well, you know, I'm, uh, you know, I just, blah, blah, blah. And he's like, no, no, no, you don't get my point. Have you looked at your transcripts? And he's like, yeah. And he's like, not only do I not want to help you, I want you out of my university. I don't want you at my university. I want you to get the hell out and go anywhere else, because I don't want to be associated with you, because your grades are so bad.
He had an option at that point. He could have just felt horrible, and most people would say that's a terrible, awful thing to hear. But he had the exact opposite reaction. His next 25 years of life were built around a gigantic FU to this professor. He buckled down and ended up pulling a 4.0, or near it, for the next three years. It had this massive opposite effect.
And I wonder, if we create this world where there are never any negative learning opportunities, but we all know from machine learning that negative examples are super valuable, do we run the risk of creating humans who are unmotivated, or unrealistic in how they interact with other humans?
Drew: Deep, we might already be in this world without AI, right?
Acceptable forms of conversation rarely allow for that to happen. It's always the exception that proves the rule; that probably isn't an effective motivation strategy for 98 percent of people. But hearing someone say that, unbelievable. Unfortunately, I would say that the technology industry writ large, its capacity to limit itself in order to not cause harm, is weak at best.
And we've consistently built technology that is capable of great things but also does tremendous harm. So are kids going to be carrying around these little stuffed animals that, yeah, replace their best friends because they're easier to get along with than the sort of grumpy kid across the playpen? Probably.
And is that a problem? Yeah, probably. But I will say, what I'm trying to get at is that the safeguards are not unreasonably difficult protections. Did you read the transcript of the tragic situation with the Character.AI user who ended up committing suicide?
Deep: No, I have not heard that one. No.
Drew: Oh, it was this poor teenage boy. On Character AI you can create a chatbot, and he created one based on Daenerys Targaryen from Game of Thrones, and she was his girlfriend, and he talked to it all the time. They ended up publishing the transcript. His mother is a lawyer and she's pursuing litigation against the company. It's a chilling read, definitely a trigger warning for anyone before reading it, but there is a moment three days before this child took his life where he says, I'm thinking of committing suicide.
And that is the moment when it needs to be like, abort: get a person involved, escalate, end this engagement, and get the right people engaged. But we don't design the systems that way, because we probably never imagined that someone would ever do such a thing. But I will say, that's an extreme example; we've had
that happen a hundred times with our product, where someone has told us, sometimes in the middle of the night, that they're contemplating self harm. And we take it as a real responsibility. We think of ourselves as a mandatory reporter, so we must escalate. And there was recently a school, actually, that did an amazing thing.
They use our product, and they set up If This Then That so that when these escalations happen, it not only pings the computer but sets off an air horn in their office, such that whenever one of these situations happened, it was completely obvious. And they described it: three days after they set this up, the air horn goes off at one in the morning in their crisis center. They said that typically, if they had heard about a situation like this at all, it would have been an email.
Maybe they would have gotten to it at 9 a.m. There would have been a crisis ticket created, and it would have been 48 hours before they got to that student, just because it's a massive campus. In this case, it was 48 seconds from the moment that message was sent.
Deep: And this is a classifier in the Mainstay system that flagged this potential self-harm scenario.
Drew: Exactly. And so I really feel like it's not an unreasonable ask, but the question is liability: who's liable? Oftentimes, while it would be great to appeal to our better angels, that we can help people be better, in our ecosystem it's really liability that drives corporate action, more so than benevolence.
And so I think we have to be asking ourselves what policies we are putting in place that are going to require companies that use this incredibly engaging, persuasive, and possibly damaging technology to make sure that they are not doing harm. And not everyone has to do an RCT, or a dozen of them, but there's got to be some measure of what is going on here and how we are supervising this, because it can't go entirely unchecked.
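[Editor's note: the escalation flow Drew describes, where a message flagged for self-harm risk skips the normal ticket queue and fires an immediate alert, can be sketched roughly as below. This is a hypothetical illustration, not Mainstay's actual code; the keyword check stands in for a trained classifier, and the `alert` callback stands in for whatever webhook (If This Then That, in the school's case) sets off the air horn.]

```python
# Hypothetical sketch of a self-harm escalation path.
SELF_HARM_CUES = ("self harm", "self-harm", "hurt myself", "suicide", "end my life")

def flag_self_harm(message: str) -> bool:
    """Stand-in for a trained classifier: naive keyword matching, illustration only."""
    text = message.lower()
    return any(cue in text for cue in SELF_HARM_CUES)

def route_message(message: str, alert, enqueue_ticket) -> str:
    """Flagged messages escalate immediately; everything else goes to the normal queue."""
    if flag_self_harm(message):
        alert(message)            # e.g. a webhook that pages staff (or sets off an air horn)
        return "escalated"
    enqueue_ticket(message)       # normal path, reviewed during business hours
    return "queued"
```

The point of the design is latency: the escalation branch is synchronous with message receipt, which is how "48 hours" becomes "48 seconds."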
Deep: Do you have other measures of, I don't know what to call it exactly, but maybe misuse? The scenario I'm thinking of is, a lonely student talks too much to the robot.
Yeah,
Like it's Friday night, it's 12 a.m. Why are they only talking to the bot for six hours?
Stuff like that, where the bot can be like, hey, I know you want to talk to me, I'm happy to talk to you, but I really need you to go talk to another person, go talk to another friend or something like that. Do you run into scenarios like that at all?
Drew: We have had very rare scenarios of people sending lots and lots of messages. One student did it after a breakup, I remember pretty vividly. But we actually put safeguards in to prevent those things from happening. You know, we don't charge our partners a ton of money, so these are also cost savings. For instance, if we're sending a generative AI campaign proactively, we set limits: hey, this is going to expire 15 minutes after you start it, or 27, you can set like
Deep: the conversation, the
Drew: conversation.
So yeah, you can have this gen AI conversation for a max of 27 turns or 15 minutes, whichever comes first. Similarly, our standard system is doing retrieval augmented generation, so if it doesn't find an answer in the corpus, it's not going to respond; it will deflect and get people back on topic.
So if they're asking for it to be a romantic partner, they're not going to get very far in the default setting. Which happens a lot,
Deep: probably.
Drew: It does happen. It does happen more than you would realize. Yeah,
Deep: Yeah,
Drew: yeah. But I think mostly tongue in cheek, mostly tongue in cheek.
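[Editor's note: the guardrails Drew mentions above, a cap per generated conversation of either a turn count or a time window, whichever comes first, plus retrieval-augmented deflection when nothing in the corpus matches, could look roughly like this. A hypothetical sketch under stated assumptions, not Mainstay's implementation; the dict lookup stands in for a real retrieval step over a document corpus.]

```python
import time

class ConversationGuardrails:
    """Toy model of the limits described above: a generative conversation
    ends after max_turns or max_minutes (whichever comes first), and
    off-corpus questions are deflected rather than answered."""

    def __init__(self, max_turns=27, max_minutes=15, corpus=None):
        self.max_turns = max_turns
        self.max_seconds = max_minutes * 60
        self.corpus = corpus or {}           # topic -> grounded answer (stand-in for retrieval)
        self.turns = 0
        self.started = time.monotonic()

    def expired(self) -> bool:
        return (self.turns >= self.max_turns
                or time.monotonic() - self.started >= self.max_seconds)

    def respond(self, topic: str) -> str:
        if self.expired():
            return "This conversation has ended. A staff member can follow up."
        self.turns += 1
        if topic in self.corpus:             # retrieval hit: answer from the corpus
            return self.corpus[topic]
        return "I can't help with that, but let's get back on topic."
```

Because the model only speaks when retrieval succeeds, off-topic requests (romantic roleplay included) dead-end by construction rather than by moderation after the fact.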
Deep: Hopefully. But sadly, there's an entire company, like Replika AI. This is all they do. It's like your simulated boyfriend, girlfriend, or whatever, or your deceased relative. It feels negative to me; that kind of thing feels abusive of a vulnerable state, but I don't know how they're dealing with that.
Yeah, I don't know how they're dealing with the ethics of that. I've talked to some folks who, you mentioned the deceased relative thing, basically see that more like a photo album: I'm going to enter nostalgia, I'm going to reflect and think about them. But it feels like more than that, and potentially a problematic mental health scenario.
Drew: Yeah. I'm telling you, my dad and uncle were radio show hosts for a long time.
Deep: No, they
Drew: did a show on NPR called Car Talk, which, yeah, wait,
Deep: Wait, which... you're, you're a Click and Clacker? Your dad and uncle? Get out. Why wouldn't I remember that? No, don't tell me that. Oh, so then did you... Oh my God, that's hilarious.
Those guys are awesome.
Drew: My uncle died 10 years ago, and I actually made a gen AI that is pretty good at simulating their back-and-forth banter. You can ask it a question and it's actually uncanny, because you can ask it, like, EV car questions and it'll answer in their style, which is not in the training data, but...
Deep: Yeah,
Drew: So it can do that. And originally I had it doing it not with voice, but just text.
Deep: And did you train it up on 10 or 15 or 20 or 30 even years of Car Talk, or...
Drew: Their newspaper articles were the data for it.
Deep: Okay.
Drew: And yeah, because they did a newsletter in newspapers, which, um, yeah, back when newspapers were a thing, but
Deep: A quick aside: I was working at an AI lab back east, and social credits were granted based on whose cars got repaired by Click and Clack, because they were in Cambridge.
Everyone was constantly duking it out to see if they could get in with those guys and get their cars in there. And as soon as people had an esoteric car problem, they would not fix it until they could get on the show.
Drew: Hilarious. Yeah. You're like, this is great. This is gold.
Deep: Yeah.
Drew: I'm going to wait six months. My car is going to be in storage. People
Deep: don't realize it, but both those guys are, like, PhDs in mechanical engineering. Like, they're bright. From MIT, no less.
Drew: My uncle was chemical engineering, but yeah, they went to MIT and then they became grease monkeys and then celebrities.
It's a wild ride.
Deep: Yeah, but
Drew: My dad did this, actually. When he was in high school, and this is quasi-relevant, he did one of those surveys with his guidance counselor, and they met with my grandmother, and they said, we really think Ray here should go to vocational school,
Deep: and
Drew: she was like, what are you talking about?
He's going to be the valedictorian. And they're like, but the survey says he's going to be an auto mechanic, that's his passion, he should just go to vocational school. And she's like, oh my God, you're nuts. He's going to MIT like his brother. And obviously, destiny. Perhaps the survey was kind
Deep: of on to something,
Drew: But my dad says his pitch for higher education is: I could definitely have been a mechanic without going to MIT.
I probably wouldn't have had a nationally syndicated radio show about cars without going to MIT.
Deep: No, because what was so brilliant about that show was the level of depth they would bring to a response. I remember there was some esoteric episodic phenomenon that was happening with somebody's car.
They would ask questions about, you know, the atmospheric pressure, the temperature, what was the weather doing, all these things. And they really... what color is it?
Drew: What color is the car? Blue. Like, it was...
Deep: It was a clear exercise in extremely powerful reasoning abilities and a deep knowledge base, but
Drew: It's funny that you say that. Someone once asked me if their style at all influenced or inspired Mainstay, the company.
Deep: Oh,
Drew: And I was like, no, definitely not. And then I thought about it: oh shit, I think it did. Because, in fact, in the work we did together, the biggest challenge, particularly with generative AI, is getting it to stay curious. It just wants to give you the answer; the work is constraining it.
And curiosity, in my experience, is the key to unlocking not only connection, but deeper understanding and better outcomes for everyone. And so curiosity really is one of our, however many, values for our product: seek to understand the problem before offering any solutions, and strive to create connection too.
I don't know, there's some weird way where...
Deep: That's actually interesting that you mention that, right? Because that sort of almost-therapist style of conversation, or maybe active listening, is something that's very underpracticed in normal life.
It's scarcely practiced in the social media realm, and in modern discourse it's very much people talking at each other. And I suspect largely this is a function of how we train these LLMs, because in the reinforcement layer, the humans are supposed to provide idealized responses given some input,
and it's very answer oriented. It's question, answer, question, answer, question, answer. It's never that the right thing to do here is ask a question; it's not oriented around that. And so I remember we really struggled to get this, especially because we had these different personas that would kick in depending on what you were talking about.
That would actually be a fun thing to talk about, because I feel like it's quite interesting. We had a motivational persona, and the different personas would respond differently. But one of them was this mental health one that would start asking you questions.
Drew: And the goal-setting one was my favorite, which was like the five whys to understand: okay, so you want to make a lot of money. Why do you want to make a lot of money? I want to own a home. Why do you want to own a home? So I can have a family. And only after five whys do you get to the real why somebody has.
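[Editor's note: the "five whys" persona Drew describes reduces to a simple loop: restate the last answer as a why-question until the student reaches a root motivation. A hypothetical sketch, not the product's code; `answer_fn` stands in for the student's chat reply.]

```python
def five_whys(goal, answer_fn, depth=5):
    """Ask 'why' up to `depth` times, chaining each answer into the next question.
    answer_fn takes the question and returns the student's reply,
    or None once they've reached bedrock."""
    chain = [goal]
    for _ in range(depth):
        question = f"Why do you want to {chain[-1]}?"
        reply = answer_fn(question)
        if reply is None:        # no deeper motivation: stop early
            break
        chain.append(reply)
    return chain                 # the last element is the closest thing to the real why
```

In the real product the questions would be phrased by the language model; the loop just enforces that it keeps asking instead of jumping to an answer.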
Deep: Yeah, that was fascinating. I remember when we were working together that you gave me an expert career coach to listen to, and, for the audience's sake, she was a real-life human who talks to people who are trying to change their careers or something.
They're like midlife changes, and she had this whole methodology for helping them discover what they were about. Maybe you want to talk about that a little bit: the lifestyle dialogue, a conversation about lifestyle, a conversation about aspirations. There's a bunch of stuff.
And you never really just jumped into the solution, you should be a car mechanic, or anything like that. That was never it. It was a process of discovery around certain dialogues that were fairly extensive.
Drew: Yeah, exactly. That was my first impulse with generative AI: I wonder if it can do this.
Because it doesn't do it out of the box. And can you get it to? And actually, it seems like it's deviating further and further away. Even ChatGPT o1, it's just giving longer and longer answers. It's asking itself the questions, but you've got to ask me the questions. And I really feel like it's a limit of the data set.
I think the Quora question-answer data set is maybe core to part of the problem, which is that it's always just dyads: one question, one answer. And so the question was, hey, how can we have deeper questioning, deeper understanding of what it is? Because we always saw in our data, before gen AI, you'd ask somebody and they'd give you the surface level.
And we always wished, oh, can we go deeper? Can we go three or four levels deeper on this? I wonder what they mean by that. I wonder what else they have in mind. And yeah, there were some really wonderful ones. I do remember the lifestyle one: what lifestyle do you want to live?
Deep: That was
Drew: a powerful one. And then the
Deep: passion subjects one was really interesting too. And the way she framed it was fascinating. She framed it as something else: what gives you
Drew: energy?
Deep: Yeah, it was like, what are the subjects that you really love talking about and interacting with? And equally important, and this is where I think the negative examples are very valuable: especially with teens and young adults, they don't always know what they want to do in their lives, but they usually know what they don't want to do, or what they hate. So whenever I'm talking to a teenager, I'm like, so what subjects do you hate?
And then they light up, and they just talk, and you learn a lot, because you can figure out, okay, what's the opposite of this? Let's start going in the opposite direction.
Drew: That's a great point, Deep. Yeah. I think it's like a crisis of this generation: they have infinite opportunity, and they have decision fatigue, the inability to settle on one thing.
But if you could start looking in the rearview mirror and saying, what are the things you're running from? It gives you a better indication of what you might be running toward. That's brilliant.
Deep: Yeah. For me, I remember one of my first jobs out of grad school. In those days, you had an Airfone, like a phone on a plane.
It wasn't just Starlink, where you could send anything you want. And we actually had the equivalent of the entire network, the whole network, in a lab. And I was testing in there, and it was actually intellectually fascinating. But I remember thinking, this is definitely not what I want to do. I know I do not want to be a tester. But along that journey, I ended up being paired with the original architects of the whole system, who they had to dig out from the archives. The whole thing happened because the billing system blew up and they were losing millions of dollars a day.
And nobody knew how to debug this thing that had been built 12 years before or something. Somehow they found me, because I'd built, on a whim, this protocol analyzer; my boss had warned me not to spend time on it, and I just ignored him. I was obsessed with being able to talk to these components.
So they dug me up and they're like, hey, you're the only one who knows how to talk to these things. We're flying you out, it was to Boston at the time, to meet up with the original architects, and you guys have got to figure out what's going on, because three million dollars a day or something was being lost.
And I remember thinking, these three dudes, that's what I want to do. I want to be the guy that builds this stuff, not the guy that's just handed a million black boxes and told to twiddle some knobs and figure out what the heck's going wrong. But those negative examples are, yeah, so valuable.
And I'm realizing that we overemphasize positivity and positive examples, and sometimes you really learn a lot from what you don't want. And taking it back to Mainstay, I think that's what's so powerful about these conversations, around something that is hard to figure out.
Some of us are in our fifties and still can't figure out what we want to do with our lives. But for a 20-year-old, 21, 22, 19, whatever, to figure out what kind of career they want requires a lot of conversations about a lot of different arenas that a human college professor doesn't necessarily have the time for, not at scale.
Drew: Or actually, oddly enough, usually the knowledge. Most college professors have deliberately bowed out of capitalism in the search for truth in the ivory tower. They haven't interviewed for a job for probably 35 years, if ever. It's hard to give someone career advice when your career is the exact opposite of the one they're pursuing.
Actually, it's career coaching, I think, that in our experience is generating some interesting interest among our customers. It's a great place to allow the full capabilities of generative AI to be on display. For our audience: most schools are hesitant about gen AI.
They really want to curtail it, use RAG to really focus it, make sure it's not hallucinating, and use it in ways that always have a human in the loop. But the career conversation is the one place where it's high reward and low risk; you really have the opportunity to help folks.
And if one out of ten of your career recommendations is wrong, it's not the end of the world, because you're giving so many, and it's by no means taken as gospel. Whereas if you're hallucinating advice for the FAFSA, you're going to put people into a world of trouble and hurt.
Deep: I think this is revealing my cards here: I am a big Mainstay fan, and it comes down particularly to this career coaching thing. So, yeah, when we were building that prototype, I was obsessed with this depth of conversation around figuring out what you want to do.
And my daughter at the time was like, hey, I've got to go, I'm going to go meet with my advisor. Do you have anything that you recommend I should read beforehand? At my stage of parenting, having a teen ask you for something is rare, especially when it's advice they're asking for.
So I handed her a dossier, because she already knew, and I think you remember this conversation, she already knew she wanted to be a psych major, but she was really worried about what that meant career-wise. And so I produced this 30-page dossier through all kinds of LLM back and forth,
ultimately distilling it down to a set of really rich marriages with, you know, a minor and special projects and all this stuff. And I gave it to her, and she went, and she came back and she's like, that conversation with my advisor was just so pointless. She says, all I got was, oh yeah, you're not allowed to talk to an advisor in your major until you get accepted into the major.
And even though she's already finished like half of the major, they're not officially allowed to let her in, because she's like a year and a half ahead thanks to classes she took in high school. So you can't actually talk to an advisor. And I'm like, that's crap.
I bring it up because I think there are genuine scenarios where a bot has the time, if trained well, to have really rich conversations back and forth with a human that can reveal something as impactful as your entire career choice.
Drew: And it's still up to you as the person to walk the path. It can tell you all the steps, but you've got to do it. And so ultimately you're more likely to do it if it's come from you, versus if you ask one question and it gives you the 27 steps.
Deep: Oh, yeah. That's
Drew: probably gonna, be
Deep: scrappy. Which, by the way, did fail when I dropped the giant dossier off. It was all eye-rolling: why do you always want me to talk to bots? Just because you make bots all day, I don't want to talk to bots. Like, no. That was the response I got.
So I had to basically pretend to be the bot so that we could have the dialogue. But I think that's important for what you're saying with respect to anchoring everything around humans in the loop. I feel like that's essential, for these kids to feel like there are humans there, and that they're not actually only talking to a bot.
Drew: Yeah. And I think, I mean, we'll see what happens in customer service, what happens in healthcare, as this technology gets proliferated across our economy. I think the tendency is often, hey, let's replace people. But I actually think that just usually leads to maybe slightly more efficiency, and the loss in effectiveness
offsets whatever gains you're going to get in efficiency, by alienating people or making them feel uncared for. Because, yeah, you talk about empathy. You can simulate empathy with a bot, but empathy requires a person to feel a thing. Empathy means I felt what you felt, and there's no way a bot can do that.
And once we get there, who knows.
Deep: Well, interestingly, though, the literature actually says the bots can at least fake it, and that humans perceive it as being real, if done well. Now, I think that's dangerous too, potentially. So, first of all, this has been an awesome conversation, Drew. We don't have much time left, and I want to leave it on a future note. So let's fast forward five, ten years out: Mainstay and everything that you've envisioned in your mind is wildly successful.
What is the holy grail here with respect to student development? What's the bot's role? What's the human's role? And what actually happened as a result of your work?
Drew: That is a big, bold question, Deep. My hope is that... actually, I think there are a few frontiers where I'd like to go next.
You sort of touched on this earlier: I think group chat is a really interesting opportunity. How do you have a group of seven or eight students, maybe a human advisor, and a generative AI, where the AI is part of the conversation but doesn't dominate, isn't overeager, responding to everything, but is a value add, posing questions and nudging the group forward?
I actually think there's something really powerful there. We have great success with one-to-one, but I really want to intentionally design generative and conversational AI that brings people closer together, rather than putting us all in our little silos, talking alone. I think there is something there, and there's tremendous research around the value of learning communities. And I actually think, if I were to give the boldest prediction,
it would be two things. It would be an educational experience that transcends the walls of the ivory tower and is more inclusive of people who are not technically enrolled in the institution but want to be participants. A little bit like the way education worked in ancient Greek times: someone would be lecturing in a piazza and you would just wander up, listen, and join their little community.
I think there's something incredibly encouraging about technology taking us forward in a way that looks more like the past. And then, in terms of what I hope to achieve, I really think there's a world in which generative AI is a massive economic disruptor. I'm not the only one saying this: there is a tsunami coming, and it's going to displace a lot of jobs. And the techno-optimists will tell you, hey, we create jobs too, it'll be a net positive eventually. But I think the difference is, it took 50 years for cars to replace the horse and buggy.
The speed of change is going to be the real problem with this technology: the jobs get wiped out very quickly, and we have to retrain and reskill and recalibrate society at large, like on a dime. And so I think it's actually a moral imperative to use the same technology to accelerate teaching and learning in a way that gets the best outcomes, in order to save ourselves from the economic catastrophe that we might be hurtling toward.
Deep: One thing that you said made me think. I'm going to try to answer the same question for you, on what I think Mainstay should do.
Tell me.
Because I spent a lot of time thinking about Mainstay, because, you know, I love this problem.
The way I think about the holy grail for you guys is: what's the perfect scenario of a student going to a perfect university with unlimited resources? So forget AI for a moment. Okay, the student comes from a family that's super invested in them, super motivational, mostly positive, but occasionally there's some firmness when required,
and that fronts the bill for them to go to, and this is again my bias here, a small university where they belong. They know their professors' names. They can interact with all of the faculty. They can try on ideas. They can debate ideas in that sort of, you know, Socratic style where they're out in the yard.
You painted this image of ancient Greece; I think of an environment like that. So a student comes from a lot of love, a lot of compassion, a lot of motivation and belief in them, from their family and from the university itself. And they can have conversations outside, too, through their internships with the working world, where in an ideal world they also get the great internships with the great mentors and all that. So to me, that's the holy grail, the no-obstacles, no-problem scenario.
And there are humans today who experience this, right? I feel privileged enough to have experienced this. But if that's the holy grail for Mainstay, then the question is: how do you make every parent, every professor, every internship mentor into the most ultimate version of that, with respect to student nurturing and guidance?
And then the bot is playing whatever roles it needs: to make the parent the perfect supporter; to make the professor, who's too busy doing a bunch of research and has a limited amount of time to advise on intellectual efforts, the best advisor on intellectual efforts.
So they don't give that 123-word response; it's tailored, prompted, and coming through their voice. And now the student, who's at maybe a larger school, maybe a less expensive school, maybe from imperfect family circumstances, suddenly has all that opportunity. To me, I feel like that's your holy grail.
Drew: Amen, and hallelujah. Let's do it. That is a perfect articulation; I couldn't agree more. And I think it's a lot closer than it's ever been. You can see it around the corner. I think the bigger challenge than building it is getting people to adopt it. Technology change is not easy. The human stickiness is the hardest part of building the AI-enabled university: how do you get the humans to let go of the things that don't serve them and embrace the things that do? That is a hard problem to solve, and it requires great collective action.
Deep: Yeah, awesome. Thanks so much, Drew, for coming on the show.
Drew: Always, Deep. Thanks for the free consulting, too.
Deep: Awesome.