In this episode of "Your AI Injection," host Deep Dhillon chats with Adam Binnie, the Chief Innovation Officer at Visier, about the transformative impact of artificial intelligence on people analytics. Adam explains how Visier leverages AI to revolutionize the way companies understand and optimize their workforce, highlighting the shift from traditional HR practices to advanced, data-driven strategies. The discussion delves deeper into the practical applications of this technology, where Adam outlines how Visier integrates various data sources—from employee demographics to behavioral patterns—to enhance decision-making processes at all levels of management. He discusses the nuances of interpreting complex data sets and how Visier’s platform helps managers not only understand current employee effectiveness but also predict future trends and needs.
Xyonix Solutions
Learn more about Xyonix's AI Testing, Compliance & Certification Solution, the best way to ensure your company complies with AI regulations, acts ethically, and thoroughly tests and optimizes its AI systems. Learn more about Xyonix's Virtual Concierge Solution, the best way to enhance your customers' satisfaction.
Learn more about leveraging AI in business:
Generative AI and Beyond: Strategies for Directors of Innovation to Thrive
Embracing AI in Business: Navigating Misconceptions and Implementation Hurdles with Elise Oras
Or talk to our AI-Powered Chatbot Xybo HERE!
Listen on your preferred platform here.
[Automated Transcript]
Deep: Hello there. I'm Deep Dhillon, your host, and today on Your AI Injection we welcome Adam Binnie. Adam is the chief innovation officer at Visier. Adam applies AI to people analytics, transforming how businesses understand and optimize their workforce. Adam, thanks so much for coming on the show. Maybe get us started off today by telling us what problem Visier Analytics solves, and perhaps start by walking us through what people did before the product arrived and how that's different using Visier Analytics.
Adam: That's a, it's a great question. Maybe a three-hour conversation in itself that I'll probably try to avoid, so I'll give you the GPT-4 summary version. Um, so what problem are we trying to solve? We fundamentally think, when we look at the world, that the use of information and data-driven behavior has been, you know, amazing around process.
Adam: And then, you know, the eighties and nineties were very much around process optimization. The last 20 years, we've had a lot of focus around customers, customer process, and customer listening, and, you know, we think the next big threshold, the next big opportunity in productivity, is around employees.
So that's kind of the world we look at. And we think that people data, or data about your workers, your employees, your associates, however you describe the people that do the work for you in your organization, is business data. We think that people are what drives business impact and creates change and opportunity.
So we're very focused on that world of people data. Like, how do we take that people data and use it to create impact, create enhanced productivity, increase employee satisfaction, create better employees, create better managers, create better companies, right? That's kind of the world we're in. So the second part of your question is what was there before us?
Adam: Um, you know, when I joined Visier even 10 years ago, the idea of using all that people data to help drive better outcomes for the business was fairly novel. And many organizations just didn't do it. The people organization, HR, you know, they would advise on social and cultural things, but they didn't really use a lot of data in that process.
And I think there were, there were a bunch of really amazing innovators that came along. I think we were one of those innovators, but, you know, there's been lots of people out there that have talked about like the really early days of taking people data and using it to improve employee experience and employee effectiveness and manager and manager effectiveness.
And so, you know, we sort of have been part of that journey. And today we're delivering that capability at scale to an organization. And it's not just about providing very key insights to decision makers and executives; ultimately our passion is how do we bring that to the manager, to the employee, so that we get this huge impact at scale from this information, from this insight.
So, you know, are people being effective? Are they doing the right work? Are they in the right role? Is there another opportunity for them in the organization? What is their continued relationship with the company going to be? That's, that's what we, that's where our heart and soul is. You know, how do we bring this people data out and create real impact to the business, real impact to the people.
We like to say that, you know, people have, you know, have a huge impact on, on business. I mean, you want to understand how your people impact your business, but you also want to understand how your business impacts your people. Yeah, true.
Deep: I mean, that all makes a lot of sense. Maybe to make it a little more real, or crisp, walk us through: what are the inputs to the system?

So you're signing up a new company. What are they plugging into it? I assume at a minimum you need the employee roster, but maybe you have other data sources, maybe like ADP data. What kind of sensors do you have? Are you connected to things like email responses and Slack and other things? Maybe walk us through both the implicit feedback and the explicit feedback: implicit, maybe how documents are moving through the organization; explicit, maybe you're sending out specific forms and gathering data.

So, yeah, maybe just walk us through those inputs and the sensors.
Adam: There are three big clumps of data. There's a hundred ways to look at the data sources that we ingest, but the first one is probably what I would call declarative data around the employees. So, you know, information that's stored formally about the employees: what's their job, where do they live, where do they work, what is their desk location? Um, how much are they paid? Who do they work for? What organization are they in? Imagine all of that information. And that exists across the entire landscape of the employee life cycle, from a potential candidate, to an actual candidate for hire, to an active new employee being onboarded, to a fully tenured employee, to an alumni.
Right? And so there are lots of different systems in that journey that we bring data in from, that take that core employee record and add all of this insight around it. So, you know, why were they hired? What did they do before? What could they do next? How much were they paid? How much overtime did they do last week, if they're on an hourly schedule, or, you know, what salary and bonuses are they getting, and so on. So there's this huge gamut of that sort of very factual information. And then the second set you have is what I would call more qualitative information. So it may be that you're surveying for engagement, or you're bringing in understandings of cultural norms or attitudes toward the organization, or performance reviews. Frankly, they're all essentially, you know, human-gathered opinion,
uh, from either the employee on the company, or managers on the employee, or the employees on the manager, and so on. And then the last is behavioral data, where you may be recording what work people do, what projects they're on. Um, and we may be looking at, you know, documents being worked on together, who collaborates with whom.

We may have some sensors around, you know, what's going on in their communication: how much communication do they have, with whom, and in which functions, both inside the organization in some cases and outside the organization in some cases. So we may be looking at, you know, who does a salesperson interact with?

What type of job titles do they talk to? We may be looking at a new employee onboarding and asking, you know, how well are they getting attached to the various networks that exist throughout the company? And so there's quite a broad range. And of course, every one of our customers does it slightly differently, and they have slightly different sets of that.
Um, and it is an enormous range, hundreds of different systems that we touch and pull data from. Our average customer is probably loading data from six to eight different core systems, but we have customers that are pulling from upwards of 60-plus systems. I hope that sort of gives you a sense of this very broad landscape.
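The three buckets Adam describes here, declarative, qualitative, and behavioral, could be represented roughly along these lines. This is a minimal, hypothetical Python sketch; the field names are invented for illustration and are not Visier's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DeclarativeRecord:
    """Formal facts stored about an employee (HRIS, payroll, and similar systems)."""
    employee_id: str
    job_title: str
    location: str
    manager_id: str
    base_pay: float
    hire_date: date

@dataclass
class QualitativeRecord:
    """Human-gathered opinion: engagement surveys, performance reviews."""
    employee_id: str
    source: str      # e.g. "engagement_survey", "performance_review"
    score: float
    period: str

@dataclass
class BehavioralRecord:
    """Observed collaboration and communication signals."""
    employee_id: str
    metric: str      # e.g. "messages_sent", "docs_coauthored"
    value: float
    week: str

@dataclass
class EmployeeProfile:
    """One normalized profile assembled from many source systems."""
    declarative: DeclarativeRecord
    qualitative: list[QualitativeRecord] = field(default_factory=list)
    behavioral: list[BehavioralRecord] = field(default_factory=list)
```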
Deep: Oh no, this is awesome. It gives us a good sense of what you're gathering, both on the implicit and explicit side. Maybe another lens you can walk us through is: what are the user types, and what do their dashboards look like? So you mentioned the manager, there's the employee. How do they interact?
Is it a desktop thing? Is it a phone thing? Is it both? What causes them to jump in and engage and give some feedback? Is it triggered by a performance review? Is there something like a regular cadence, like a weekly drop of some kind of temperature check: they're happy, not so happy with their job, whatever?

Yeah. Walk us through the dashboards, maybe for the manager, the employee, and maybe there's like an executive dashboard or something. What's the intel they get out and what's the info they put in?
Adam: I always think about the audience as a bit of a maturity curve for our customers, right?

The more mature, the further down the audiences they go. But where do most people start? They usually have a very targeted business problem. It may be a very focused team or area, but generally, for our customers, it starts with the people function, looking at being able to articulate some very core and foundational metrics, being able to really understand the organization that they have, and focusing on things like: where are we losing people that are critical?
How well are we effectively hiring the people where they're critical? That kind of question. And then the second audience: from that sort of focused team inside the HR function, it will typically broaden and go to executives. So, you know, the executive decision makers, what are those big decisions that need to be made?

How are we going to fix, uh, you know, if we have issues with diversity, where are they and how do we fix them? If we're having real challenges hiring, where are we having challenges and what are we going to do about it? If we're having issues with retention, you know, who's at risk and what could we do to change that reality?

Those are some of the really foundational things, and so it'll go out to executives, and of course there is a whole gamut of business problems to be solved at that level. And then typically that's where the organization steps into that next step in maturity around scale, and the next level of scale may be to leverage HR's natural alignment to what you might call general management, the overall management tier.
So maybe not the top executive tier, but the broader management tier, where in most organizations there is a job function called an HR business partner. Their job is to articulate people strategy and consult and advise the business on how best to leverage its people. And so that may be the next audience.
And then ultimately, when you have that conversation, there are two users coming in there, the HR business partner and the business partner, who are both getting very involved. That's management. Um, and then the next tier is first-line managers. A first-line manager is: I've got a small team of people and I need to understand the issues.

As you get out to that layer of the organization, of course, you're dealing with people who don't have time to learn a product; they're not going to spend a lot of time learning this, so it kind of needs to just work for them. And that's a big focus right now for us with generative AI, right? We're thinking generative AI and a very simple conversational AI experience is a really great way for us to reach broader, out to that layer of the organization.
And then ultimately we also want to go help employees themselves. Most employees want to know what their manager sees, understand how they're perceived, and potentially have an opportunity to improve their career. So we also have a layer of the product which serves the employee, but that's typically in this very narrow, close sort of feedback loop around who you are collaborating with and who you are being effective with. So you're sort of getting peer feedback, and that's part of that employee level of the conversation. But so we think about organizations trying to go from that central executive layer, all the way out through managers, to the employees, to get to the ultimate scale of bringing these insights out where it matters.
I was having a great chat with one of our healthcare customers on Monday about our generative AI solution that they're getting ready to roll out into production. They see lots of people going to use it: executives will use it, HR business partners will use it, but their ultimate dream is, can a nurse manager on a ward use it to identify, you know, who else might be able to help in a crisis situation when they're missing, you know, a nurse or a particular specialty, or they need to actually make sure that they've got the right staffing or the right kinds of employees or the right kind of hiring going on, uh, to be able to sustain a ward?

But that's somebody who, you know, lives in a world where we can't send them training. They could have a destination experience, but, you know, they're not going to learn it. It has to be super simple. So that's kind of our spectrum of users, and then to support that spectrum of users, we have lots of different ways to interact with the insight.
So the core of Visier is, of course, this huge body of knowledge that has been beautifully well organized, is highly consistent across lots of organizations, and is really standardized, so that we can benchmark and compare and provide all that insight. But you can interact with it through dashboards, um, you know, highly summarized sorts of presentations.

You can interact with it through conversational AI. You can interact with it through what we call guidebooks, which are these sort of narrated stories that tell you and explain how to use the information, how to action on it, what it means to you. We have a very powerful sort of analyst tool called Explore, where people can go and create those assets, those stories, those dashboards, but they can also just browse data in a more structured way. Um, we can generate things that are essentially pushed into your email. So people get an email connection, and of course that leads them back, very often on mobile. So they get that email, they can click it, and then they can go into that fuller experience. So we have a

footprint on Microsoft Teams, all of those different ways to get to this insight. And of course, because you have this huge range of audience, you're not serving one person; you're serving this hugely different group of people, and they all have slightly different needs and ways they want to get to the information.
Deep: So when it comes to AI and machine learning, are we talking about leveraging a natural language interface to automatically generate queries into some business analytics system or a data warehousing system? Is it something like that? Or are we talking about something else?
Adam: We use machine learning in multiple ways.
We've had a lot of data science in the product for a long time. We use it for generalized benchmarking; we create predictions and risk around, like, how likely people are to turn over, or what might be great roles for them. Um, so we have lots of different use cases in that sense: categorization, normalization.

But today, one of the things we're rolling out very much is that generative AI, using a large language model to give us that natural language query. We are using, you know, a mechanism where we are making sure that that data stays safe and secure, um, that there's no exposure of that data to the large language model.

This is really important in people data. Large language models aren't very good at filtering what you're allowed versus what I'm allowed. If it's in the model, it's sort of accessible.
Deep: Yeah. Are you guys running your own models, or are you going to, like, OpenAI after anonymizing your data or something?
Adam: We're actually using Azure GPT-4 Turbo right now as the large language model, but we don't need to train it on a customer's data. So, like you said, we have anonymized and generalized the data for all of our customers. We have a really robust standardized model and we understand the structure. So essentially we use the large language model to do the translation of the question:
what were you asking? All the engineering and that kind of stuff generates a robust query that still goes against their security system. So we know whether you can or cannot see this. So you can ask a question like, what is Deep's current pay? And Visier would say, I know what you're asking, but I'm not going to answer that question, because you're not allowed to know the answer to that question.
Deep: I bet you might ask something a little bit more anonymized, like, of my team of data scientists, you know, what's the average pay, or something like that.

Adam: So if you've been given permission to see that, then Visier would answer that question. That's something we've always done, the ability to provide aggregate views without detail.
Deep: Um,

Adam: But again, let's say you ask Visier a question: there are a lot of questions where we don't even want you to know that there is an answer, right? So there's that kind of whole secure privacy aspect. But that's because what we're doing is we're using the LLM to construct a question, not construct the answer.

So Visier then answers that question back. And we can use a secondary LLM in a very non-persistent way, so you're not training it, to take that answer and re-summarize it.
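A minimal sketch of the pattern Adam is describing, assuming a generic OpenAI-style client: the LLM only translates the question into a structured query spec, an existing permission layer decides whether it may run, and the factual answer comes from the governed data platform rather than from the model. The prompt, the `security` and `warehouse` objects, and the function names are illustrative assumptions, not Visier's actual API.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK; the production stack may differ

client = OpenAI()

def question_to_query(question: str) -> dict:
    """Ask the LLM to translate a natural-language question into a structured query spec."""
    resp = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": (
                "Translate the user's question into a JSON query spec with keys "
                "'metric', 'group_by', and 'filters'. Do not answer the question.")},
            {"role": "user", "content": question},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

def answer(question: str, user_id: str, security, warehouse) -> str:
    query = question_to_query(question)         # the LLM constructs the question, not the answer
    if not security.can_view(user_id, query):   # existing row/field-level permissions decide
        return "You aren't permitted to see that."
    return warehouse.run(query)                 # the factual answer comes from governed data
```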
Deep: Presumably, pre-GPT and LLMs, you already had your product representing employee data in a way that enabled you to query across these aggregations and get these kinds of summarizations, and the LLM is really a transformation layer into that query, whereas before, or in addition in other features, those are formulated maybe in a different way.
Adam: Yeah, I mean, historically, you would have a very structured navigation, right? Filter this on that, pick this from a list, right? What the LLM is letting us do is to say, you don't need to learn how all of those controls work anymore.

Yeah, it's the question you have. But also, we've started to see language chaining, right? So we're saying, take this question, this question, and this question, and the LLM can also chain those together. So it might not be one question, one query. It might be a series of queries, where Visier is taking: ask this question, get the response, the LLM uses the response to get the next question, and so on. So you can get chaining and all this kind of stuff.
So you can ask really quite complicated questions and be offered back something. It also then led to something we've always done, right: we don't want dead ends. When people use data to make decisions or get informed, you get one answer and it almost always leads to another question. So we kind of want to have that continuous, conversational experience, which we've always sort of had in that formal navigation.

Now, also in this sort of informal chat model, we're like, Oh, did you mean this? What about that? Do you want to go and do something? And so, you know, we always find people are like, Oh, wait, I want to know how many people work for me. Seven. Okay, but who's the most highly paid and who's most likely to leave?

And there's always another: what are the questions somebody else in my context might have asked?
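The chaining Adam mentions, where one answer becomes context for the next question, might look roughly like this, reusing the hypothetical `question_to_query`, `security`, and `warehouse` pieces from the sketch above.

```python
def answer_chained(questions: list[str], user_id: str, security, warehouse) -> list[str]:
    """Run a series of dependent questions, feeding each result back in as context."""
    context, results = "", []
    for q in questions:
        query = question_to_query(f"{context}\nNext question: {q}".strip())
        if not security.can_view(user_id, query):
            results.append("Not permitted.")
            continue
        result = warehouse.run(query)
        results.append(result)
        context += f"\nQ: {q}\nA: {result}"  # the prior answer informs the next query
    return results
```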
Deep: So I'm curious how you measure the effectiveness of the LLM, or whether you do, specifically in the transformation to the structured queries. Do you have a training data definition process that you go through, where you maybe take a random sample of the questions that people are asking in natural language,

and then look at the actual structured queries that got generated from them, and then have, you know, some human or another LLM go through and figure out whether those were correct mappings or not, and then measure efficacy in a more traditional sense?
Adam: So there's sort of three pieces to that today.
Um, you know, one of the things is we have a lot of history of the questions people were trying to ask, not in the same form, not in natural language form, um, but we already know that basic ground truth. Right now, the way the product works is, when you ask a question, it's going to give you an answer.

It's also going to give you alternatives, to sort of help lead you in that conversation to maybe where you were trying to ask the question. It also has a feedback mechanism. So you get to say, yes, this worked, or no, it didn't. So we get a lot of very direct end-user signal that this solved their problem or it did not solve their problem, which, you know, is a common sort of practice.
And then for the user themselves, we also have: okay, if you don't really understand what you got, the answer is 42, okay, but did you really know what the question was? Um, you know, then we also provide explanations of why we chose that and what we were trying to do. And again, that whole process is about creating that better relationship with the user, where we're focusing on users

who, in a lot of cases, don't know if the answer is right or wrong. There's a huge trust-building exercise with every user to make sure that they feel confident that not only do they get an answer, but, if they wanted to, they could make sure that they had the backing to explain where that answer came from.
So that if they're challenged, you know, if somebody says to you, okay, how many people work for you, and you said 42, and somebody says, are you sure, you can say, well, I can show you the 42, I can give you the list, I can give you the background material. So again, building trust. Because unlike when I'm asking a copilot to generate a document for me, where I'm going to read the document afterwards and make the final decisions,

a lot of the time our users are in a position of: I don't know if those answers are right; that's factual data coming from a system, and I need to know that it's correct factual data. So we're creating a lot of that feedback loop, and then obviously using that to continuously refine and retrain, and build up the correct connection, so that, okay,

yes, when you ask that question in that form, you wanted output in that shape.
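The feedback loop Adam describes, logging each natural-language question, the query it was mapped to, and whether the user said it solved their problem, could be instrumented with something as simple as this hypothetical sketch, which then yields a mapping-accuracy number to track over time.

```python
import csv
from datetime import datetime, timezone

LOG_PATH = "nlq_feedback.csv"  # hypothetical log location

def log_interaction(question: str, generated_query: str, user_said_correct: bool) -> None:
    """Append the question, the query the LLM produced, and the user's thumbs-up/down."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            question,
            generated_query,
            int(user_said_correct),
        ])

def mapping_accuracy() -> float:
    """Share of logged questions that users marked as correctly answered."""
    with open(LOG_PATH, newline="") as f:
        rows = list(csv.reader(f))
    return sum(int(r[3]) for r in rows) / len(rows) if rows else 0.0
```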
Deep: It seems like you can kind of mitigate the LLM hallucination problem, because the person asks the question, you translate it into your internal structured query that you think was the question they asked. You could even have a natural language description of that structured question, and then when you come back to answer, you can say, the answer to the structured thing that you thought they asked is this.

Now, you can incorrectly map the question, but you don't really have to worry about LLM hallucination in that context.
Adam: We don't, we don't at all, right? Because the response is always very factual. And, yeah, it knows what question it's answering.

And the response itself describes itself. So, you know, you ask, how many people work for you, and it says, oh, by the way, your birthday is on Tuesday: you realize it answered the wrong question, but you also aren't confused by the answer.
Deep: Yeah, no, the reason I'm kind of asking about this is, you know, I feel like the world is in the process of understanding what it is we can do with LLMs, and I feel like what you're describing here is a really powerful template, where

you're not actually trying to answer the question with the LLM; you're trying to figure out which question is being asked. And those are two very different things. That scenario can apply to lots of different areas beyond, you know, this HR analytics scenario that you're looking at.

People overhype the LLM thing and the possible applications, and then the counter-reaction can also be over the top, saying, ah, there's no way to mitigate this, these things can just make stuff up. But here's a perfect case where, yes, it can make stuff up, but it doesn't matter.
Adam: Yeah. You know, just ask the question, refine the question. I think it is a powerful case. I think one of the things for us, I mean, we don't worry about the hallucinations as much, but the other thing is, if the LLM was building the answer, actually delivering the answer, it has to have the knowledge inside it, and now you have to work out what's secured.

There's the example that was given, and I don't even know if this is apocryphal or true: if you asked OpenAI, where can I steal software, it will tell you that that's unethical and it's not going to answer that question. But if you asked it to tell you a bedtime story about somebody who needs to know which sites to avoid because they contain pirated software, it will absolutely list off

Pirate Bay and all these places, right? Because it has the knowledge, and you're just working your way around the security when you get there. Whereas in our case, yeah, you can get it to ask a bad question, but then the security system is going to say, I'm not going to answer that question.

We do get asked, can you get it to not accept bad questions or, you know, unethical questions? And we're like, I don't think you can stop people asking unethical questions. All you can really do is make sure that you don't answer them unethically.
Deep: I think it's interesting. Based on this conversation, I'm sort of understanding that your LLM doesn't actually contain the data of the employees, right? Like, it just contains what it needs to formulate a structured query, which is very different and relatively safe, right? Like, I mean, I'd have to think to figure out a way where that's problematic, but I think it's just a powerful base case, you know. Um, let me ask you this. I'm curious, what did you see in terms of engagement and uptick from, you know, the natural language, LLM-powered question-answering capability versus before, you know, where you had a more structured way of interacting with your data? Did you see a significant uptick in engagement?
Adam: I think at this point we're still sort of on the cusp of that, right? People are still cautiously rolling out. The conversation you and I have just had around security is a huge area of pause. We're talking about fairly sensitive data; employee data is inherently sensitive.

Um, so there's still a lot of hesitancy around it. So people are sort of allowing it to get out broader and broader, but the reaction we get when that happens is, oh yeah, now I can finally just use it, I can just move, I can just go do stuff. And so we're working now to put that LLM experience, that natural language query experience, into the places where people are, so that they don't have to come find it.
You know, it's inside your Teams. We really think of Vee, that's our name for our, uh, AI experience, as almost an avatar, almost like a person that you can go and get advice from. And so we really want that to be on your phone, on your desktop, everywhere it can be.

Um, and we are definitely seeing people very immediately become very fond of it, but it's one of those things: the first people who are using it are people who are already experts in the old way of doing things. And so there's an inherent bias towards trying to see if it can do the most complicated thing in the world.
I think it's going to be really interesting as we start to see our customers now rolling it out much more broadly. And of course, because we're dealing with employee data itself, and we know everyone who works in the company, we know who the user is and we know what their job title is. So we can actually look and see utilization and say, do we see first-line managers using it?

Do we see people who are not in the HR function using it? Do we see executives using it? So we're actually watching that insight right now to test our thesis, that this allows data that was always valuable to those people to be used by them directly, rather than the way it was in the past, before this very simple natural language experience, which was, you know, I just emailed or texted the analysts in HR and said, can you send me my...
Deep: And I think that's the metaphor, right? Like, somebody is an expert in a sophisticated tool. Um, somebody who's not an expert goes to that person, gets them to interact with the tool, and gets the results they want. In this case, the machine learning, the AI system, is trying to serve as that person.

Yeah. I mean, I think Google Analytics recently released something that's kind of similar, right? Like, Analytics is quite powerful, and, you know, you have to spend a fair amount of energy to really figure out how to manipulate it and get all these different, uh, permutations and data results that you might need.
But, you know, now you can sort of quickly just get to your actual analysis and get those results. Let's maybe take a little bit of a turn to some of the more non-generative-AI problems that you mentioned before. So I think you mentioned churn prediction. Walk me through that, because a lot of our listeners are folks who are trying to figure out how to leverage machine learning and AI in their products.

Uh, so there's a lot of product managers, a lot of, uh, technology managers, uh, maybe people familiar with more traditional software development. And the question I have is: walk us through how you determine what is worthy of a machine learning or AI solution. Maybe not the generative AI case, but these specific ones, like, oh, we really want to figure out how to get a churn model in.

We really want to figure out how to get, you know, maybe a loyalist model in. And how does that work from a product standpoint, you know, with your product folks? How do you find the problem? How do you, you know, maybe organizationally find the solution path?

And then how do you get started? Do you prototype first? Like, all that kind of stuff.
Adam: Yeah, I mean, I think we have some good fortune from some good decisions a very long time ago, in that we have, I don't know, 48,000 companies that all use exactly the same foundational data model. And that means that we have, again, anonymized and aggregated vast seas of data where we kind of know the shape, and the people practices in organizations are extremely consistent across industries and areas of the world.

So we've been lucky that we have this very robust set of data that we can learn from. And I think that's always an enabler that you need to have. You need to have consistent data that's normalized, so, you know, all these people's job levels have roughly the same meaning, and all these people's job titles map to

the same job occupations, um, and all these locations are the same, you know, where they actually are. So there's all this sort of work that happens on the data side. But once you have all that data, which was sort of just there for us, then the business problem solutions get a lot easier, because you can really walk into a business problem which is quite easy to articulate, which is: highlight

areas where, again, there's a risk of X. We can create a machine learning model that's just using historical turnover and projecting forward a prediction of who's most likely to behave the same way. And again, that is a relatively clear value to people, because it's helping to inform them. You can also use all that data
to come up with all kinds of things. Now, in the people space, where we really struggle a lot is being very careful around the ethical boundaries, because, you know, there is a lot of sensitivity: making promotion recommendations or hiring recommendations is extremely risky and extremely fraught with problems.

So in some cases that's been more of our consideration than perhaps the technical challenges of coming up, uh, with a machine learning example. But in other places, for example, you know, inside the product, we can have very small features that use this capability. So, for example, we have a whole bunch of machine-learning-driven decision making around what counts as an exception,

what counts as something that needs you to be informed, so we can generate insight that is pushed to you when we think that the set of data has created a trigger condition, right? You're an outlier, a change in some value, an unusual reality, a problem where something has suddenly changed.

So we have a lot of that kind of activity, and there are lots and lots of examples, but they're all fairly easy for us to implement because we're working from this very robust foundation. Where we struggle more is when you need to go all the way back to get the data.
So, for example, we do a lot of career pathing. So we're looking at, you know, where an employee's career might go. And, you know, we do this across an incredibly huge body, 25,000 employees. But when we wanted to go and do that across that community, we needed to go get people's job histories, not just their job histories inside the company, but job histories outside the organization.

And we needed to do an awful lot of normalizing of that data, so that when that model gets generated, it is much more robust and it's reliable. And so that gets much more difficult and much more expensive, because if you have to go back and get the data, and you haven't had it in the past, you have to touch every customer, have to talk to everybody, have to make sure that they have it, have to make sure that it again gets normalized.

And so that gets into a much more complex, much more expensive feature. So there's kind of that trade-off exercise: yes, we can see the value, but do we have the data, how do we get the data if we don't, and how reliable is that data going to be? In some cases, we can get data from the outside world and buy data, if you like, but generally,

you know, the best problems are solved when we can actually get robust, well-cleaned, reliably maintained data into the system.
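A resignation-risk model of the kind Adam describes, trained on historical turnover and projected forward, can be sketched in a few lines with scikit-learn. The file name and feature columns here are invented for illustration; in practice they would come from the normalized people data he describes.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical columns; real features would come from the normalized people data model.
df = pd.read_csv("employee_history.csv")
features = ["tenure_months", "pay_ratio_to_market", "last_engagement_score",
            "months_since_promotion", "weekly_overtime_hours"]
target = "left_within_12_months"  # historical turnover label

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score current employees: estimated probability each person leaves in the next 12 months.
df["resignation_risk"] = model.predict_proba(df[features])[:, 1]
```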
Deep: So a lot of our listeners kind of want to understand the mechanics organizationally, like in your org, how it might happen. So, like, let's say a product manager decides, hey, we have the ability to predict who's likely to leave, but we also want to understand, you know, the attributes of somebody who is likely to stay for a long time.

So how does that work organizationally? The product manager has the idea. Do they have the autonomy to go grab their data scientists, pull the historic data out, explore a model, uh, explore an explainable form of that model, get that feature somehow kind of prepped, and then they're deploying it across all of your customer base?

Like, how do you carve out that type of work? It sounds like you guys are a fairly decent size, you know, across a set of product managers and their teams.
Adam: Yeah, so in our world, okay, maybe it's useful to explain a little bit about how we structure that product management function. We essentially have two layers of product management.

So we have a layer we call solution management that lives in the domain of the business problem. So you would say they're HR: knowledgeable about HR, they understand how the HR teams are working, they understand how people work, they understand how management works. That's kind of their background.
They're sort of subject matter experts, deeply into the business problem. They live in the land of: this is the business problem to solve. We're trying to solve for, you know, uh, pay-to-stay: what are the criteria, can we guesstimate how much impact this pay rise or that pay rise would have on somebody's retention rate, right? So that when we're making

pay recommendations, right, we know a lot, right? We know a lot about how comparable ratios and feelings of fair compensation and behavioral, um, stuff impact that. So as they're getting into that kind of decision, they're coming up with the business problem. They're articulating what would be useful, what would be helpful, but they're not data science experts.
They're not really even thinking about how the data engine is going to service all of this kind of problem. So then they would take that to the product management team, where there are specialist product managers who really spend all their time with the data scientists, working out, you know, how to productionize a piece of science.

How would they validate and make sure that we are ethical? How would we validate and make sure the data is good enough, that we have quality, that we're not bleeding over sensitive information and stuff? So the solution managers basically create the business problem and pass it down, and then the product managers create the technical problem, which goes off to the data science team.
And, um, you know, when we think about data science products, I have this really simple framework: there are four things you need to do well to deliver a data science product. You really need to have a good problem, and you need it well described; that's that solution manager's problem.

You need data, and you need someone who knows the data that you need. You probably need data science, so you probably need the ability to take algorithms and know the right algorithms to apply, the right models to use. And that's kind of where the data scientists and the more technical product managers get involved:
what sort of approach are we going to take? Do we have the data, and if we don't, where are we going to get it? And then finally, you've got a software engineering element, which is: how are you going to do this at scale for a million users across 15,000 organizations? Right? So, you know, what is going to be the reliability?

Who's going to get it? How does it run at scale? So in our world, we actually have an entire part of our backend that runs this sort of data science environment. And it connects all those 50,000 customers into a sort of shared area where, again, we're very careful data doesn't bleed across that boundary, but where we can implement these core functions, these core capabilities, very quickly, and then they can be leveraged by the customer applications.
And then, you know, we're still a business. So at some point, when we create something new like that, that solution manager would then deal with packaging it, marketing it, explaining it to customers. And then obviously we'd go out and sell it to people as another thing that they can add to their capability set. Does that make sense?

Deep: Absolutely. That's exactly what I was kind of looking for. I think you described a world where it makes sense why that would work. It's also a place where a lot of our listeners kind of struggle, because, you know, not all companies have this sort of deeper technical data science DNA in them that knows how to operationalize machine learning and AI capabilities.
What you're describing here makes sense. They almost always have what, I forget what you call them, your more solutions-oriented, domain-expert type of product managers; they will definitely have those. But what you're describing is somebody who can translate that desire into a data science conversation, where you can go talk to some data scientists, as well as the backend engineers and the other folks involved in the process.

I think that helps a lot.

Adam: Yeah.

Deep: Thanks a ton.

Adam: I've spent like 30 years in analytics, and it's a really exciting time in analytics right now, as generative AI really, you know, up-levels the whole conversation. But we've been driving data-driven behavior for 30 years. I think one of the things that's interesting as a product manager in that space is,
you know, analytics is a little bit of a separate beast. It's a bit like, you know, being a product manager for, say, a word processor versus a product manager for an enterprise app, right? Enterprise apps are very workflow oriented. You can sort of see users going down a very linear path; a website purchase flow, a very linear path.

Like, you can sort of think about it as step, step, step, step, step. But a word processor doesn't have that; you don't know that they'll all first press the K key and then the G key, right? And analytics is a bit of this sort of canvas world, where I don't know the question you're going to ask, and anything I build for you, any dashboard I'm going to build for you, is going to last a couple of months before you want a better one, right?

So you have this short durability, so you have this kind of very fast evolution of content, which is really different to, say, most app building. And I think this is where a lot of application product managers struggle, because they're looking for this very golden path.
Deep: Well, there's also a lot of uncertainty in it, right? Like, not everything has signal. I mean, I'm sure there exist questions that you want to predict for employees where you just don't have enough signal. And then there are maybe some gray areas. There's a spectrum of cases where, yeah, we can predict it, you know, plus or minus 3%, you know, maybe 92 percent efficacy.

There are other ones where you're down in the 60s or 50s, or, God forbid, below. And then there's a balancing act: like, when is it good enough to bother rolling out to our customers, and how do we even expose some of that uncertainty? Maybe that's the question I have for you. Like, how do you think about exposing the error rates to your customers for something?

Let's take something like churn, right? You know that you can predict it with X percent efficacy, plus or minus something. How do you communicate that to your users? And how do you think about communicating it to them?
Adam: Yeah, I mean, you know, this is one of those user maturity questions, I guess, because you deal with a lot of different user personas, and there's a layer of users who would definitely understand error bars, if you like, and there's a layer for whom that's just noise, just confusion. And so it's a really interesting balancing act.

I'd love to say I had a, you know, beautiful answer to that question, but the truth is it's a constant dilemma, because of the love of simplicity and the love of just give me the answer: just tell me it's 42, good enough, right? Like, I don't even need to know why.

Deep: The problem is, sometimes it's going to be 47.

And then they come back and be like, hey, you said 42, and now you've deteriorated the trust. Whereas, yeah, either way the trust is going to take a hit, you know, but at least in one case you can say, I never said it was 42. I said I think it's 42 plus or minus X or Y.

Adam: The plus and minus is really interesting.
You know, you'll see organizations go, okay, well, my engagement score was, you know, an 82, right? And then the next time they run it, it's dropped, and so they're like, oh my God, it dropped a point. And you're like, yeah, but it's got an error of plus or minus five.

But we won't tell you it was plus or minus five, because that's noisy. So now you've got somebody who's seeing a micro change and not realizing that it's well within the bounds of just statistical noise.

You also see people who insist that everything comes to the cent. But you're not really measuring cents, right? You're guesstimating costs, right? What's the cost of replacing an employee? Well, it's sort of in the 150-grand range. Okay. It's not 146,312 dollars and 13 cents, right?

And yet you'll see that presented to people. I think that's been an ongoing problem with analytics and that sort of analytical awareness: what's directional, what's informative, and what's really just a range. Um, but we still deal with a lot of users who, if you gave them a range, would say, well, where is it in the range? And it's like, well, I gave you the range because I can't tell you that.
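The presentation dilemma in this exchange, round the number for simplicity but don't let users read meaning into changes that sit inside the error band, can be made concrete with a tiny, hypothetical helper:

```python
def describe_change(previous: float, current: float, margin: float) -> str:
    """Round the score for display, but only report a change when it exceeds the error margin."""
    delta = current - previous
    if abs(delta) <= margin:
        return f"{round(current)} (no meaningful change; within ±{margin:g})"
    direction = "up" if delta > 0 else "down"
    return f"{round(current)} ({direction} {abs(round(delta))} vs. last period)"

print(describe_change(82, 81, margin=5))  # "81 (no meaningful change; within ±5)"
print(describe_change(82, 74, margin=5))  # "74 (down 8 vs. last period)"
```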
Deep: That is a constant challenge. I mean, I think it's a place where really good user experience folks can bring a lot of value, in saying, hey, if it's 42.2 and 42.3, and sometimes 39.8, we're going to just round to 42, because I don't want to confuse them.

Whereas engineers are like, no, no, no, you have to show the exact thing plus the error bounds. And then they can argue in a room for a while, and eventually, you know, something comes out. I want to jump up a little and just talk about the macro findings that you have with respect to employees, because you have a very interesting perch where you're looking at quite a bit of employee happiness data.

What are some of the things that really surprised you in the last few years, you know, that your engine has taught you? What makes employees happy, what makes them unhappy, what works or doesn't, what chases them out the door? Like, what are some big surprises you've found?
Adam: Well, there are some big surprises we've found. Um, if you went to visier.com and you went to the research section, one of my teammates, Andrea, has produced some really amazing research on all kinds of topics. One of the things we see in our industry is this very interesting swing, right?

The market swings, everyone can't find people, hiring is a nightmare. They're spending a fortune trying to get people in the door. They're doing everything to retain people. There are, you know, incredible investments in learning and development. And then, you know what, the market suddenly flips, and then they're working out how to let people go, and you sort of see these huge

swings. And of course, you know, COVID, the last economic crisis, all of these have had these huge swings in terms of this reality for organizations. And I think organizations really struggle with that because they're trapped in that reality, but at the same time, that isn't really the best way to do things,

right? Um, so they're kind of struggling with that. So we've got research on different things, you know, and how that works. And you can see other things that are sort of subject to those huge shifts, you know, focuses right on diversity. COVID was a disaster for diversity. An absolute disaster. You know, we'd love to say that we're gender neutral when it comes to who gets to stay at work,

um, but the reality is that most home care was, and is, provided by women. And so as a result, diversity in the workforce dropped dramatically during that period of time. So, you know, we see a lot of that. We actually do this research on a monthly basis; we're constantly publishing some of it.
Um, if I spin back in my head, the one I most enjoyed, and I don't think it really taught us anything new, but it was really interesting to see it in data, is pay equity. It is really not a simple function of male and female in the same role, because the promotion history for men and women is so dramatically different through their age groups.

So, in their 20s, males tend to get promoted a lot faster; as a result, they're at a higher level, so yes, they're earning more than their peers, essentially people who started at the same place. And it was interesting, it rapidly flips in the late 30s, where women suddenly get this huge acceleration in their careers again.

So that was a fascinating piece of research that we did. We have lots of that, right? We're doing it all the time, and it's well worth going to. I was actually joking with one of my colleagues the other day that, um, you know, one of my proudest moments last year was that one of those pieces of research showed up in Men's Health. Now, as a software company, being in Forbes or in Harvard Business Review was obviously our kind of goal, but we got picked up by somewhere as obscure as Men's Health because it was talking about burnout and stress and how stress was affecting people at work.

So we've done a ton of that kind of research, and it is really fascinating to see. And we actually publish voluntary turnover rates and what's changing in that, which is people quitting, on our website every single month, constantly updating what's happening in the world of turnover and retention and stuff like that.
So there's lots of that stuff. I'm probably not the best person to interpret it; we have a whole team of people that do that, and, as I said, I read it just like everybody else, because it's fascinating to see these huge macro trends and also how they impact people in the workforce. Yeah.
Deep: I imagine you guys are just in an interesting position, because you're seeing not just one company but, I can't remember the number you said, but it was quite a decent number. 50,000?

Adam: Yeah, almost 50,000, and probably going to double that in the next couple of years. So it's going to be exciting.

Deep: My favorite last question is: if we jump out five, ten years into the future, and all of these sort of advanced ideas that you and your team have, you know, get implemented, and your product is able to do kind of everything that you want, how does the world look different from an employee vantage, and maybe from a manager vantage?
Adam: Well, I'm trying to decide if I go negative or positive first, because I think there are some really big challenges ahead of us. Um, I think generative AI creates some really interesting societal challenges, second- and third-order problems, not just, you know, oh my god, I'm going to lose my job, but second- and third-order problems. But let me go there second. I think the first thing for us, the journey I think we're on, is: how does our AI become a true companion, a coach, something that knows what I care about, why I should care, and helps me actually learn and develop into being better at what I'm trying to do?

And in our world, of course, that is very focused on the manager. How do I make managers better managers? There's not a ton of science out there for how to be a better manager, how to lead people, how to develop people. There's lots of guidance, and you can ask people to help you, but, to be honest, it's a skill set where we basically say, you're a manager today, and if you suck at it, we'll stop you being a manager in a year or two, right, after you've probably lost half your team.

And so I think that's kind of my dream: how do we get that, and then how do we get a lot of science around that, right, so that we actually know the right things to do, recognizing that it's a hyper-personalized reality, because what works to manage one employee doesn't work to manage the next employee. So it's never as simple as, you know, big, broad, everyone-do-it-the-same-way stuff. So that's the dream we're driving towards: how to make people more effective as managers, and then obviously, indirectly, make their teams more successful, more competent, you know, more high-performing.
I think one of the challenges we're going to see in the near future, though, just to look back the other way, is how do we create learning environments when you've got something able to do the early part of a job? You look at AIs today, and we're very clear: if you want up-to-journeyman-level writing, the AI can do that for you.

Mastery, not yet. Today, most people in those jobs have mastery, so they're using these things to optimize their performance, which is great for them, because, you know, just take away all the mundane work I don't want to do. But if you're a new hire coming into the workforce in 10 years' time, when the first 10 years of your job can be done by an AI,

how do you get to the mastery of the 11th year, right? How do you learn your way up? And how is somebody going to pay you for 10 years when the AI can do it better?
Deep: It's a massive societal question, right? I mean, we can only hope that we can leverage these tools into being great trainers, and that we all just get to a better place. Today's top performers are not 10 years from now's top performers; they'll be much better,
They'll be much better Because it'll be so much easier, quicker, and more efficient to learn and maybe learn what we need to, because we can all like lean on these reasoning engines. Now it's going to be an interesting journey, right? So it's going to be, I mean, I have real questions that aren't like, we're not going to be sitting on GPT four in 10 years either, right?
Like there's a huge arc. of advancement happening there. And it almost feels safe that they're going to get an awful lot better. And they're already mind bogglingly good, these systems. So the pessimist in me is like, why do we need people at that point other than to do the novel? Which is important, and it's a core function.
But then, The optimist in me is like, well, I think we'll just get really good at learning what we need to and how to differentiate ourselves from the bots. And I think maybe we all just wind up like a Star Trek episode where we're making pottery and, and like an empathic and like really, you know, You know, elevating each other's emotions or something like, I don't know, but it, it, it feels like we're gonna, it's going to be a different world in 10 years.
Like, I don't know exactly what it's going to look like, but it's going to be different.
Adam: There's an old quote that I use a lot, a Bill Gates quote, right? People always overestimate how much will change in one year and underestimate how much will change in 10. I'm probably paraphrasing a little bit, but think back 10 years, to 2014, right?

We weren't even that smartphone-centric at that point, right? We were still, you know, we had apps, but we weren't living on our phones.

Deep: We certainly weren't sitting around worried that our 13-year-old daughters' mental health was going through the floor. I mean, the way we think about it now, there's a lot more societal immunity that's been building up.
It took some time. I mean, I can even go back further. I remember 1993, 1994, the first time I looked at a web browser. And, um, the day Mosaic came out, I remember looking at it and having this flash of, oh my God, everything's going to change. And I had this vision of it all, and it was complicated,

but I remember thinking, on the whole, this is going to be great for democracy, anyone can publish anything, you know. And here we are, 30 years later, fighting like mad to preserve democracy, because these things have this ability to change things so much, for good and ill, that society needs a little bit of time to figure out how we're going to adapt, you know. We're still putting in all kinds of safeguards

for the internet-era stuff, and we'll be doing that for the next 30 years for this, you know, AI-driven stuff.
Adam: I saw a live presentation by Malcolm Gladwell once here in Vancouver, and he was talking about: do social networks create change, or do they reinforce the status quo?

And his argument was that they reinforce the status quo. It was a lot about how small groups of highly motivated individuals create change, not, you know, large, vaguely connected groups of people, which of course is what a social network is. I wonder if he would change his viewpoint; I wonder if he hadn't factored in:

what if you have a social network where 90 percent of it is bots, with a particular viewpoint being injected into them as part of that conversation, and you have no way to disintermediate between bots and humans anymore? Because that's the reality we live in now, right? How much of it is bots?
Deep: You know, I mean, the irony too, and this is something people don't take me seriously on, is that the most interesting conversations I have, this one excepted, of course, are often with GPT-4, you know, or with Gemini of late. Like, with the right prompting, I can have the most intriguing conversations. I've built this, like, Jungian dream analyzer, uh, thing that I talk to in the mornings to analyze whatever dreams I had last night.

I can't have that conversation with a human. I mean, there are a few humans I could have it with. This has been a fascinating conversation. Thanks so much for coming on the show. If you enjoyed this episode and want to know more about leveraging AI in your products,
you can check out a related episode of ours by Googling "Xyonix product management." Also, please feel free to tell your friends about us, give us a review, and check out our past episodes at podcast.xyonix.com. That's podcast.xyonix.com. That's all for this episode of Your AI Injection. As always, thank you so much for tuning in.