What separates true intelligence from mere mimicry? In this episode of Your AI Injection, host Deep Dhillon chats with Lucas Hendrich, CTO of the Forte Group, about the fascinating future of Artificial General Intelligence. Together, the two explore the critical role of 'intent' in the quest for true machine intelligence. Lucas shares his insights on what it would take for AI to move beyond predictive algorithms toward authentic, independent thought, drawing on his early experiences working with AI visionary Ray Kurzweil. The two also discuss the practical challenges of implementing AI in business, the evolution of data architecture, and the ethical and operational guardrails needed as we inch closer to a new era in machine intelligence.
Learn more about Lucas here: https://www.linkedin.com/in/lucashendrich/
and the Forte Group here: https://www.linkedin.com/company/fortegroup/
Check out some of our related content:
Expert Tips for AI Implementation and Data Strategy with Paul Lewis
AI Copyright Regulation: Navigating Legal Challenges and Ethical Dilemmas
Embracing AI in Business: Navigating Misconceptions and Implementation Hurdles with Elise Oras
Subscribe on your preferred podcast platform:
Xyonix Solutions
At Xyonix, we enhance your AI-powered solutions by designing custom AI models for accurate predictions, driving an AI-led transformation, and enabling new levels of operational efficiency and innovation. Learn more about Xyonix's Virtual Concierge Solution, the best way to enhance your customers' satisfaction.
[Automated Transcript]
Lucas: It almost makes you wonder, though: how intelligent are we when we speak? What kind of intelligence goes into that translation from symbolic thought to language? Which is why I think that's the limit, right? And just to respond to the question about the singularity: for Ray Kurzweil, it's a moment where you go past it and everything changes. It accelerates at such a high speed that we have no idea what's on the other side, like an event horizon.
And I think what that really means is intent. When something has intent and acts upon intent, that's the AGI challenge, right? That would mean not just something that could respond, but something that could initiate.
Deep: Hello, I'm Deep Dhillon, your host, and today on Your AI Injection, we're joined by Lucas Hendrich, Chief Technology Officer at the Forte Group.
Lucas specializes in modernizing data architectures and governance to pave the way for effective AI solutions. With a background in AI that includes early work alongside Ray Kurzweil, Lucas has a deep understanding of how to align tech strategies with business needs. Lucas, thanks so much for coming on the show.
Lucas: My pleasure.
Deep: So, listen, before we dig in, I think we have to just start with the Kurzweil thing. What did you do with Ray Kurzweil? I just have to start there, because there are very few people who are as provocative, bizarre, and intriguing at the same time as Ray Kurzweil.
Lucas: Yeah, and I'd agree with three of those adjectives. So, I was extremely fortunate to meet Ray early on in my IT career. I was coming out of probably the most corporate gig you could imagine, working as a program analyst at MetLife. I was fortunate enough to do research for him, and what research meant was, literally, looking at old magazines from the 70s to figure out pricing of components and eventually build up a data series demonstrating different exponential trends: basically Moore's law, compute getting less expensive over time. At the same time, I was working with colleagues on early neural-network-type projects doing pattern recognition of stock market trades.
And this is back in 1999, 2000, so pretty early on, and definitely before the era of large-scale GPUs and the ability to, like...
Deep: ...thousands of quants working for firms, with really exact operations.
Lucas: Yeah. So it was a couple of years. It was a lot of fun, definitely an adventure.
From there, I went back to digging into visceral technology problems and working with more immediate lines.
Deep: And for our audience members who don't know (I'm sure most of you know who Ray Kurzweil is): if you start hearing about the singularity, and people mapping their mental processes onto silicon and wandering around the universe, he's the godfather of that line of thinking.
He's also really well versed in Moore's law and its application; I think that's how he gets to some of these wild beliefs in terms of the timeline.
Lucas: Yeah, the kind of extrapolations of a trend. If you think about it, he was talking about AGI before anyone was talking about it in the context of LLMs, right? So maybe the approach was unknown, but certainly the brute force required to get there was trending in that direction. There was a lot of thinking that if you could just image the brain at a fine enough granularity, you could replicate it, the same way that if you had enough rules, you could create a rule-based system, an expert system, broad enough to pass the Turing test. And I think what wasn't known then is that that's probably not the approach:
a direct analogy of brain hardware to a circuit. It's been fun to watch that over the years.
Deep: Yeah. Before we get into reality, I realized you have this really interesting background in philosophy. Why don't we just wander around in outer space for a little bit, and then we'll get down to the nitty gritty.
The neural complexity of an actual human brain is something that, if you look at some of these new GPT models, just from a sheer neuron standpoint and the orders of magnitude of complexity, we actually do have really complicated capabilities now: massive storage, a lot of the pieces that Kurzweil was talking about a long time ago. What are people even talking about when they're talking about the singularity?
It's kind of a fun topic to toss around. What does it even mean to mimic your neural structures and understand them? And what would it mean to put that on silicon?
Even if you could do that, it's a bit of a philosophical question as to whether that's you there or not. I think most of us would think it's still not. It's like a snapshot, and then it evolves.
Lucas: Well, a couple of thoughts on that. I read this really interesting book a couple of years ago.
I remember the name of the book: it's called The Mind Is Flat. My main takeaway was how specialized different parts of the body are in how we actually react to and interact with the world.
There's an awful lot of reaction going on there that is just sort of responding to a context. And I think the magic of why LLMs are so interesting right now is that we say there's no real intelligence there, they're just responding in a context, it's about the prompt; but so much of what we do is like that.
And that's why you can have something generated where it's almost hard to tell if there's a human author or not, because I might generate the same thing without much thought, because of the context. In that sense, I think we're pretty close to human level, if we're talking about language. Where it becomes more complicated is
intelligence in the human sense, which is more physiological than we think. The way the eye processes information is highly specific to the eye and not to how the ear functions, and neither is applicable to how the gut has intelligence built into it, too. The closest model to that, and the one I think would be most interesting, is when you have models that are highly specific and coupled to hardware on a small scale.
The assemblage of those could be closer to a general intelligence, if orchestrated correctly, than one big massive LLM with infinite tokens in it.
Deep: I don't know if you've gotten a chance to play with o1, the latest OpenAI model. I still continue to be impressed with GPT-4.
With o1, it took me a while to come up with problems, because 4 is so good that I couldn't really get apples-to-apples comparisons. But yeah, it's true, it's not intelligence in the way that humans think, right?
It's like these massive pattern-matching machines, but they certainly create the illusion of intelligence; not only of intelligence, but of pretty smart folks, smarter than most folks I run into on a normal day-to-day basis. At least in terms of generalized knowledge, less so in terms of specialized knowledge.
But with o1, I think it was scoring 92 percent on PhD-level physics qualifying exams. It took a massive jump on the LSAT; a lot of these other exams went from the fifties to the nineties. Maybe it's not intelligence, but it doesn't seem difficult at all anymore to mimic the feeling that you're talking to a human, at least with respect to these early scoring tests.
Lucas: Oh, I agree. It almost makes you wonder, though: how intelligent are we when we speak? What kind of intelligence goes into that translation from symbolic thought to language? Which is why I think that's the limit.
Right. And just to respond to the question about the singularity: for Ray Kurzweil, it's a moment where you go past it and everything changes. It accelerates at such a high speed that we have no idea what's on the other side, like an event horizon.
And I think what that really means is intent. When something has intent and acts upon intent, that's the AGI challenge, right? That would mean not just something that could respond, but something that could initiate.
Deep: People say this in different ways, right?
The way I tend to think about it is: can we build something that can do the inventing, that can come up with thoughts that aren't already in the aggregate corpus this thing has slurped in?
But I would probably need an entire new field just to figure out how to measure this stuff now. Whatever we came up with before, we can give it all the puzzles we want, but if it already had them in the training data set, it doesn't feel that interesting anymore.
So let's take it back to the kinds of problems that you see in your day-to-day. How capable these LLMs are getting is astonishing. If you look at the trajectory from GPT-2 to 3 to 3.5 to 4 and then 4o, with the little letter o,
those jumps are all big. One jump is really big in terms of its reasoning in the small; I don't yet have a good sense of how it reasons in the large. We've got the models integrating memory, so that there's conversations, or remembered data points pulled into your profile, and you're able to use them.
My whole worldview certainly changed after the first time I saw 3.5. I was like, wow. And then 4o was like, yeah, everything I thought was going to happen is happening. What does that mean for your clients? How does it hit them? What kinds of questions do they come to you with in the first place?
Has everybody wrapped their heads around this enough that they have specific problems they think AI can handle? Or are they coming to you with higher-level opportunity candidates, if you will, and you're sort of helping them think through them?
Lucas: What I'm going to say is probably not that surprising, right?
First, if I limit the question to LLMs, which is really a very specific subset of non-deterministic intelligence: we have clients that we've been helping solve machine learning problems for quite some time.
Deep: Let's not limit it. Let's go to just machine learning in general.
Lucas: Machine learning in general. The problem space, at least in our observation, has more to do with problems of scale and operations than with model creation. On the model creation side, it's really about asking the right questions.
We started this conversation around something very philosophical, amazed at these advances, and then you get your hands into everyday business. It's incredible when you can introduce algorithms that have predictive utility.
And I'm not even talking about sophisticated chatbots interacting with each other. I'm talking about just the bare bones of linear regression, using that to make a decision and then repeating that action, or predicting a customer action based on some Bayesian thing. Just being able to do that is a leap for a lot of companies still. So having the fundamentals in place to operationalize technology in general, and then this subset of technology, which is highly specialized, is a huge challenge.
The problems we're helping people solve are really more about: we know how to run fast. How do we run slow?
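For concreteness, here is a minimal sketch of the kind of bare-bones predictive utility Lucas describes: a plain logistic regression used to flag likely churners. The feature names, the 0.5 threshold, and the synthetic data are all hypothetical, just to make the shape of the thing visible.

```python
# A minimal sketch, assuming hypothetical customer features and synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for real customer records: [logins_per_month, support_tickets].
X = rng.normal(loc=[10.0, 1.0], scale=[4.0, 1.0], size=(500, 2))
# Toy label: customers with few logins and many tickets tend to churn.
y = ((X[:, 0] < 8) & (X[:, 1] > 1)).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new customer and act on the prediction.
p_churn = model.predict_proba([[4.0, 3.0]])[0, 1]
if p_churn > 0.5:  # hypothetical business threshold
    print(f"churn risk {p_churn:.2f}: route to the retention team")
```

The point is not the model; it's that even this much, wired into a repeatable business process, is the leap Lucas says many companies are still making.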
Deep: Can you give us maybe some specific examples, like an industry-specific problem? Are we talking about customer churn? What kinds of problems are we talking about?
Lucas: Without talking about the core business model of some of our customers: we're helping them optimize that business model. There are a lot of practical uses right now, like less rejection of claims, less manual intervention in some kind of insurance claim process or payment process.
Fewer humans in the loop when onboarding customers; sharing of knowledge in an organization. Hey, I have a question about my dental plan and want to ask it in natural-language form, and I don't want to have to open the manual and find the page, right? So, those kinds of problems.
And that's become more operational because some of it is custom integration of existing software and services; it's a question of plugging that into your business process. There's reference architecture to do this stuff.
There's a real complexity to solving these problems with LLMs beyond just the chatbot space, right? Think about it: I want to create an application architecture where, instead of having error handling write to a log, there's actually something interacting with the application that can respond to that error in real time, maybe even fix the code and check in a ticket that someone just has to approve.
That's really interesting stuff, and it all of a sudden becomes part of your application architecture, part of your stack, those kinds of things. I can't say that we're doing a lot of that, because that sounds to me like a long...
Deep: ...conversation in and of itself. That's getting kind of wild.
Yeah, like creating sandboxes where you can go off and adaptively deal with errors that you can't anticipate.
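A hedged sketch of the pattern being described here: an unhandled exception is routed to a model instead of only a log, and a proposed fix is filed as a ticket for human approval. The `ask_llm` and `file_ticket` helpers are invented stand-ins, not any particular vendor's API.

```python
# Hypothetical sketch; ask_llm and file_ticket are stand-ins, not real APIs.
import traceback

def ask_llm(prompt: str) -> str:
    """Invented stand-in for a real model call (OpenAI, Claude, a local LLM, ...)."""
    return f"[model-suggested patch based on]\n{prompt[:200]}"

def file_ticket(title: str, body: str) -> None:
    """Invented stand-in for creating a ticket in your issue tracker."""
    print(f"TICKET: {title}\n{body}\n(awaiting human approval)")

def run_with_llm_error_handler(fn, *args):
    try:
        return fn(*args)
    except Exception:
        tb = traceback.format_exc()
        suggestion = ask_llm(
            "This error occurred in production. Propose a minimal code fix:\n" + tb
        )
        file_ticket("Auto-triaged runtime error", suggestion)
        raise  # still surface the error; the model only assists

try:
    run_with_llm_error_handler(lambda x: 1 / x, 0)  # demo: division by zero
except ZeroDivisionError:
    pass  # the ticket was "filed" above; the error still propagated
```

The human-approval step is the guardrail: the model proposes, but nothing lands in the code base without sign-off.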
Lucas: Yeah, I mean, it's the combination of your pattern recognition and parsing with the current LLM capability, right?
Which naturally manages a certain amount of ambiguity to generate some kind of response. The big challenge there, and what makes it hard to operationalize (and now I can speak to a specific case): imagine you have a bunch of analysts producing a very specific kind of content, right?
And that content is gated, and it's your revenue; it has high reputational value. You know that 90 percent of this content can be generated; it's a question of getting to the right sequence of prompts and figuring out how to inject causality, so basic logical stuff doesn't go wrong in the response.
But then there's this last-mile problem, where the reader is on the last paragraph and they read something which is just absolutely absurd or untrue. That's a really interesting problem to solve.
Deep: You're talking about just solving the hallucination problem in general with LLMs?
Lucas: But within the context of a product, right? If the product is content, that's where advances in models get really interesting. The double challenge, though, is that your tool set is changing every day, because you have to keep up with these competing models; it's hard to have a reasonable life cycle for an operational tool on a weekly basis.
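One way to picture that last-mile check: a separate verification pass over each paragraph of the generated draft before it ships. The `ask_llm` stub, the prompt wording, and the sample facts below are illustrative assumptions, not a specific product's pipeline.

```python
# Illustrative sketch; ask_llm is a stub, not a real vendor API.
def ask_llm(prompt: str) -> str:
    """Invented stand-in; a real call would hit whatever model you've chosen."""
    return "OK"  # the stub approves everything, so the demo below flags nothing

def generate_report(facts: list[str]) -> str:
    # First pass: a chain of prompts produces the gated content from source facts.
    return ask_llm("Write an analyst report using only these facts:\n" + "\n".join(facts))

def verify_last_mile(draft: str, facts: list[str]) -> list[str]:
    """Second pass: flag any paragraph the verifier can't ground in the facts."""
    flagged = []
    for para in draft.split("\n\n"):
        verdict = ask_llm(
            "Answer OK or FLAG. Is every claim in this paragraph supported "
            f"by these facts?\nFacts: {facts}\nParagraph: {para}"
        )
        if verdict.strip() != "OK":
            flagged.append(para)  # route to a human editor instead of publishing
    return flagged

facts = ["toy fact one", "toy fact two"]  # placeholder inputs for the demo
print(verify_last_mile(generate_report(facts), facts))  # [] with the stub
```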
Deep: Yeah, and the APIs are constantly coming out. I mean, if you're sitting on OpenAI alone, the API changes are a whirlwind. You've got the Assistants API, you've got model drops constantly. And then every once in a while somebody shows up and says: you really should look at Claude for this particular case.
It just feels like we're all on the bleeding edge.
Lucas: Constant fear of missing out, right?
Deep: The jumps are not subtle; they're really big jumps in terms of performance, so you really need to grab them. I was just in a conversation yesterday with somebody thinking about deploying a virtual concierge service.
They had looked at it a year-plus ago and built a prototype internally, using an earlier version of OpenAI, back when you had to build your own RAG system. There was a lot of room for error, and for whatever reason, the system didn't work.
I was baffled, but assumed they were working with some of these higher-level APIs. As soon as I asked a couple of questions: oh yeah, there's a long list of errors that could have happened in there that could have led to that.
Deep: So let's go to the guardrails question a little bit.
We've all heard about the Chevy Tahoe dealer site that sold a Chevy Tahoe for a dollar, and the Air Canada case, where I think it was a $1,200 plane ticket that the court said they had to refund because the bot said they could. We all know it's easy to spoof these things.
What's the level of sophistication you're seeing, or employing on behalf of your clients, with respect to statistical model validation of your fine-tuning? How are you communicating the risks to your clients, and what kinds of things are you doing about it?
Lucas: So, in some ways, it's just amplifying the software development life cycle: you change your QA process to adapt to output, to adapt to a product that's utilizing these tools. And that's where LLMs create a huge challenge, and that's why they're being deployed in non-risk scenarios.
It's where there's no potential lawsuit. The worst you get is someone giving a thumbs-down on the survey: oh, this wasn't very helpful. I'll answer it this way: I think anything can be tested. It's a question of covering your test cases, and that still applies. If you know how to break something, that becomes part of your test suite. I don't think that approach changes; I think it becomes more complex. The area of more concern, particularly where we're helping companies with their own digital products, and especially SaaS products, is if you're using LLMs to generate your code and you use a suggestion that's under a license that puts you in a noncompliant spot or creates a security risk. That, to me, is more of a chief concern, because for the lion's share of our business we're more of a full software-development-life-cycle company, coming from 20-plus years of platform development.
So we have a gen AI capability, we have a machine learning capability, and when we are building products, it really depends on where that fits into the product. That's not the specific service we offer; therefore, we cannot build something that's going to create risk.
Deep: Isn't that an oxymoron? Like, aren't you always building things that create risk?
Lucas: Well, yes, and we should be very careful about how we're building things. That's where a lot of our thinking goes, in terms of guardrails and best practice.
Anyone could go to Stack Overflow and grab a code snippet, but there's a step involved; it's not something that's directly suggested, and it's not built into the IDE. A lot of tools now are built right into your IDE, so it's easier to accept
things into your code base under the veil of being a good suggestion. Because it seems more standard, it could create problems that aren't easy to detect.
Deep: Is there a risk of developers getting dumber?
Lucas: No, I don't think so.
I think, philosophically, it's like supply and demand. You can look at this from the point of view of scarcity or of growth. I think any tool that allows a human to be more productive ultimately leads to more total factor productivity, which ultimately leads to more growth.
So if there are things we do today that can easily be done with an LLM, it opens up our cognitive space to take on different tasks. It shouldn't shut down the cognitive space. And I also don't believe it's going to replace people altogether. We have the capability of self-driving cars; people like to drive cars. We have tools that help us drive better and more safely. That doesn't replace us.
Deep: Yeah, I hear what you're saying in the large sense, but in the smaller, immediate sense,
let's take a specific example. You're a developer, and you're writing code that has some recursion in it, some exacting thing that has to get figured out. But it's a well-encapsulated program; the whole thing is going to run inside of one function that's fairly short.
That's an example of a problem that anyone who's been writing software for the last 30 years or so runs across all the time. It's the kind of problem that LLMs are pretty good at and can solve. And so I wonder, with this new generation (I used "dumber" a bit hyperbolically), this younger generation of programmers, whether they're just not getting that level of exposure and experience, reliant on the machines to hit it. It's a scenario where you can maybe articulate a test efficiently and the model can nail it.
They're inevitably going to get less good at that kind of thinking. That's not necessarily bad, because maybe they'd spend that mental energy on larger, architecturally more exposed problems, but it does seem like we lose something.
And this isn't only with respect to human-assist capability with coding; any AI-powered assist capability is going to make that human's job more boring. An example I use: cardiologists do a lot of things. They wield a scalpel, but they also just take a stethoscope, listen to your heartbeat, and assess what kind of murmur or heartbeat anomaly you have. If all of that's being done algorithmically, they're inevitably going to get less good at it.
That might not be bad, but it might have unforeseen consequences. I'm curious how you think about that.
Lucas: Well, I think there are plenty of metaphors, just physiologically, right? I mean, humans used to be very good at walking all day long, stalking animals, killing them, and eating them.
Deep: Not many of us are good at that anymore.
Lucas: No. So then you have to think about, in the evolution of humans, which skills are best retained and serve us in the long run, for our adaptability and survivability as a species, and which skills are ancillary, nice to have. You know, it was great to be able to eat raw meat, but we don't really need to worry about that anymore. You could say the same thing about medicine or anything else. I wouldn't want to go back 300 years and have an abscessed tooth; it could kill me. Sometimes when I drive around, it drives my wife crazy that I turn the GPS off and say, you know, I like to figure out my own way.
It's quaint.
Deep: Yeah,
Lucas: I think another example: a few years ago, I went back and got my master's in economics. Part of why I did that was that I really wanted to challenge myself with the hard math. I hadn't done hard math in a while; as a CTO, I'm not going in and solving those problems as cognitive tasks on a day-to-day basis.
The cognitive space changes.
Deep: You're just usually not solving well-formed problems like that, right? You're solving all kinds of problems.
Lucas: Precisely. I think you hit the nail on the head. Well-formed problems are ones that are captured in a function. They're captured in a space where you have certain parameters.
To me, the difference is: you could say chess is a hard problem to solve, but eventually you solve it, because there's only a finite...
Deep: It's very well formed. And this brings up questions about how we educate our populace, at least with respect to engineers and, to some extent, computer science.
We tend to educate mostly off of well-formed problems. Once you enter the working world, you tend to work mostly with ill-formed problems. We're making an assumption in academia that the way to educate somebody is on well-formed problems. I understand why we do that, because assessment is a lot easier on well-formed problems.
A couple of days ago, I wanted to see how good o1 was. The numbers seemed too crazy for me to wrap my head around, so I decided to do this myself. I sat down and thought: what problem can I give this thing that would actually let me know, unambiguously, at least with respect to this problem, whether it's better or worse?
And I finally came across an old one. I was a nerd in high school and was on the math team, and there was this class of question that I always found really interesting: using the number 4 and any mathematical operator you want, they'd give you some integer, like the number 37, and there could be constraints, like you can only use four fours, or you can't use this mathematical operator.
You can see how that's easy to assess efficacy on, because you can just run whatever expression GPT-4o or o1 gives, and it either equals the number or it doesn't, and it either adheres to the rules or it doesn't. I think that's kind of what academia has done, at least with respect to engineers: these well-formed problems.
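That check really is mechanical. A small verifier along these lines (the exact constraints, like requiring exactly four 4s, are just one example rule set) can score any candidate expression a model produces:

```python
# A sketch of a four-fours checker; the constraint choices are illustrative.
import ast

def check_four_fours(expr: str, target: int, fours_required: int = 4) -> bool:
    """Verify a candidate expression: only the digit 4, and it hits the target."""
    tree = ast.parse(expr, mode="eval")
    numbers = [n.value for n in ast.walk(tree) if isinstance(n, ast.Constant)]
    if any(v != 4 for v in numbers) or len(numbers) != fours_required:
        return False
    # eval is fine for a toy; sandbox it if the expression comes from an
    # untrusted model in a real pipeline.
    return eval(compile(tree, "<expr>", "eval")) == target

print(check_four_fours("4 * 4 * 4 / 4", 16))    # True: four 4s, equals 16
print(check_four_fours("(4 + 4) * 4 + 4", 36))  # True
print(check_four_fours("37", 37))               # False: no 4s used
```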
With respect to the liberal arts, I think they do the opposite in many ways; they're almost never well-formed problems. And I'm curious what you think about that. What does the future employee need to be?
Lucas: I'm a liberal arts major, then a self-taught programmer.
It was very hard, as it is for everybody, to...
Deep: ...teach yourself as a programmer.
Lucas: Well, to get the fundamentals right, yes. And I think another way to think about it is: you have people who can pick up instruments and play them by ear, but they can't read music.
Deep: I was academically trained as an electrical engineer. We solved all kinds of problems, but I don't know if you've ever hired an EE; they're horrible programmers, no offense to all the EEs out there. When we came out, at least when I was there, we came out maybe knowing how to deal with MATLAB or some kind of high-level simulation language. But there was this whole world of stuff that I had to self-teach and learn on my own: data structures and other things. I don't think it was easy, even though I had that engineering background.
Lucas: So, bringing it back to the original thought of an LLM tackling that well-bounded task, which for a human could be challenging in either a fun way or a not-fun way...
Deep: Right.
Lucas: What we're coming to is that the problem space becomes bigger and more ambiguous, and people have to navigate that to be successful. Which I feel is a positive trend, because the biggest problems to solve aren't constrained; they are big problems.
Translating between domains, for example. When you build a complex application, you're creating a world; as a software engineer, you're creating all the rules in that world. You're doing that with a team, whether it's one mind or 12 minds coming together, and you're defining names for things.
These all become like the creatures that live in your world. You're defining those things. Then you go to the next world over...
Deep: Yeah.
Lucas: And there's no guarantee...
Deep: ...that the terminology maps, you mean?
Lucas: ...that it has anything in common with the other. So that, by its very nature, kind of points to the fact that we'll go up another layer of abstraction, or change the layer of abstraction, but we're still solving these problems.
And there are still technical problems. Are we going to solve them in a layer that's closer to natural language? Probably. Is that a bad thing? Not necessarily. Natural language has great benefits: it's easy for people to read and understand, but it contains an awful lot of ambiguity and uncertainty.
Deep: Yeah.
And I imagine we're just going to have a new kind of bug, because there is that inherent ambiguity from the natural-language specification to the resultant code. I think that's already happened in programming over the last 30 or 40 years.
Like a C programmer dealing with their own memory management, versus a higher-stack C# or Java programmer, versus a more dynamic-language programmer, Python or whatever. You wind up with different problems in terms of where the problems lie.
Maybe going back to your customers a bit: what do you think are the most exciting problems you help them solve, and what makes them exciting?
Lucas: What makes them exciting is a question of personal preference. I think solving data architecture and data governance problems is really exciting, because it's kind of like how whoever wrote the Constitution did a great job. As a framework, it works really well. It keeps things in check, separates concerns, decouples things. It makes big...
Deep: ...changes hard, but smaller changes easier.
Lucas: Exactly. Having a data architecture in a business with that kind of built-in flexibility, but also ownership and governance over quality, democratized in the sense that it gets used across the business and every business unit is able to make decisions based on data: that gets me really excited.
And it's exciting to help our customers solve those problems, which oftentimes seem very simple.
Deep: Can you walk us through that? What exactly do you mean by data architecture? Maybe use some examples. And what exactly do you mean by data governance?
Lucas: Data governance really just means that someone is concerned about the quality of the data: what the data means, how it's used. There's a common lexicon, there are guardrails, data is private when it should be private. It's a whole bunch of things. Why is that interesting as a concept? Because often engineers are responsible for maintaining an ETL pipeline, and if that pipeline breaks, there's some downstream impact.
Somewhere, a customer is going to get upset. What could break it? It could be a bug. It could be some exception that wasn't handled. It could be something in the data; oftentimes it is, depending on where the data is coming from and how it's structured. All of that process is an engineering problem to solve, but there isn't always the same awareness, inside the engineering unit responsible for that process, of the data and what it means; that's owned by a business unit.
They can define how that data is shaped and what it looks like. So you unify that in a framework where people are responsible for the quality of their data and the engineers have an understanding of what that data means.
That's kind of an output, or a result, of what you could call good data governance.
And what that allows a company to do is utilize that data in a richer way. From an architecture standpoint, I think the simplest way to think about it is: you have a platform, an application interface, and data that gets stored, right?
And you interact with that data. A couple of decades ago, you could think about it in terms of client-server: the client is typing in, and the server is holding the data and running the application. If you want to run a report on that data, and you use the same place where you're storing and processing it, you're going to have a conflict, because you're going to do some operation that locks up that database.
So you start replicating it somewhere else and say: okay, that's where I'm doing my reports. It's not a transactional database. That becomes a warehouse.
Deep: The whole data lake and warehouse thing, the evolution of...
Lucas: ...that over time, yes.
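In miniature, that split looks something like the following, with SQLite standing in for both the transactional store and the warehouse replica; the table and file names are invented:

```python
# A toy sketch of the OLTP/warehouse split; names and data are invented.
import sqlite3

# The transactional "server": writes from the application land here.
oltp = sqlite3.connect(":memory:")
oltp.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
oltp.execute("INSERT INTO orders (amount) VALUES (19.99), (5.00)")
oltp.commit()

# Periodic replication into a separate "warehouse" copy used for reporting.
warehouse = sqlite3.connect(":memory:")
oltp.backup(warehouse)  # sqlite3's built-in copy, standing in for a real ETL job

# The heavy reporting query runs against the replica, not the live database,
# so analytical scans never lock up the transactional store.
total = warehouse.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(f"revenue in report: {total}")
```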
Deep: You're talking about walking into an org and helping them instrument all of their product data exhaust, get it all into a warehouse, and then funnel it to their analytics teams or analysts?
Lucas: Exactly.
Part of it is the need for real-time data access and the ability to propagate changes without losing a source of truth.
To train models and then deploy, retrain, and redeploy in production in an efficient way: helping customers on that journey is exciting. It's hard to decouple that from governance, because the governance dictates those use cases as well.
Deep: Particular data sets are problematic, maybe due to human entry, so they're less clean, absolutely. One of the things I talk about a bunch is this: if you rewind 10 or 15 years, there was this huge buzzword, big data, big data, big data. And it spoke to a lot of this kind of stuff: that entities, companies, organizations really needed to, in essence, instrument everything, collect all of this data, and centralize it, or at least make it easily accessible for analysis and interpretation, product improvement, improving their customer experience, et cetera.
And one of the arguments a lot of them made was: with machine learning and AI, you're going to be able to predict which customers are going to quit before they quit, figure out which ones are your best customers. If I look back, it feels like the last 10 to 15 years were predominantly about instrumenting these orgs to gather the data.
But I saw relatively few orgs where there was a lot of prediction or analysis, where the fancy stuff that was the carrot hung in front of people, to get them to expend all this money and effort, was really being realized. It was relatively minimal. So my hypothesis is that we're in a new phase now, where it's relatively rare that I run into an org that doesn't have a Snowflake instance or something where the data is warehoused and you can generally access everything.
There might be some gaps here and there, and certain sectors are maybe less mature than others because there's some blockage; in manufacturing, for example, there's a lot of old machinery that doesn't quite have easy internet access. But for the most part, a lot of orgs are already there, they're gathering their data, and it's time for the next phase of trying to
build these predictive models. Are you seeing that transition having happened, and that we're in the thick of it? Or do you think we're still more in that big-data era?
Lucas: No, I share the view.
We have to, at some point, be specific, though. What kind of company is it? What industry? Are they a digitally native SaaS company, or a manufacturing company that is mildly tech-enabled? In the latter case, you're building and selling tables.
You have a website, mostly Shopify. And then you have a plant and a lot of manual stuff, and people keep track of things on spreadsheets, and it's okay but not great. It's a huge challenge getting that data into a state where you can start doing these things.
Deep: Most software companies, like a SaaS company, do this from the start nowadays.
Lucas: Yes and no. I think, and I'm going to speak in general terms:
if you're building a SaaS platform, and it's a B2C platform, you're solving something that makes quality of life better by simplifying something for a person. Your first priorities are features that get users to adopt your platform, and data as an asset is, like, the 12-to-18-month part of the roadmap.
But it's never at the top. And by the time you get to the point of, okay, are we going to get all these benefits, there's always a new feature that trumps it in the technology roadmap. I'm being very general, but I think that there are completely...
Deep: I would argue that most product managers will push early on to at least capture the click stream and gather that data, but it's rough, right?
Yeah, exactly: it's not the right thing to have captured. It's rough, technically. Yes, you can derive what you want out of it, but it's a bit of a mess. Are you talking about thinking through what you gather at a higher level, so that it's ready and right for analysis?
Lucas: Well, maybe there's a market for that kind of analysis, maybe there isn't. AI means so many different tools. Is it something that's going to be a revenue stream or not? Is it something that's going to optimize your business and reduce your spend?
Is there a third-party tool that can do it for us, with a bigger data set beyond what we have? Those are all the questions you'd have to ask. Does it make sense to build it ourselves? Where's the benefit?
I think one shift is that there are more companies where it's core to the model, right? With the companies we work with, it's: how do you handle exceptions in any business process, B2B or B2C, where humans getting involved is the 80 percent versus the 20 percent, cost-wise?
So how do you reduce that? You get better at predicting things that way, and you own the data, or at least it flows through your system, so you're able to do that kind of analysis. I think those cases are pretty strong and growing.
Deep: Yeah. You guys do a lot of AI consulting; we do a lot of AI consulting. Why should companies and organizations use an external AI consultant?
Lucas: I think the question is broader: why should you hire a consultant to do anything in your business? It's about specialization. In our business, there's always a cost component, too.
There's been one major tectonic shift in software outsourcing. For example, we're international; we have offices in Eastern Europe and in Latin America. There's obviously some cost arbitrage involved in software outsourcing, and that's how it's grown. But that's shifting, because what makes outsourcing more attractive now is specialization and experience.
We're exposed to a wide variety of domains, and every domain, every industry, every vertical has its own reference architecture to solve very specific problems. If it's easier for us to go from 1 to 10 than for someone else to go from 0 to 1, because we've done it several times, that's where it makes sense.
Going back to your question: AI is a hard expertise to home-grow in your company unless it's deeply ingrained in your business model, your offering, and the product you're building.
Although I would say that, for us, the more common problem to solve is the data behind the AI rather than the AI itself. The AI part is kind of elegant; it's a very expensive fighter jet that requires highly refined fuel and absolutely no pebbles on the runway to take off. That's the metaphor.
Deep: Interesting that you say that, because I would probably argue the opposite, depending on the kind of model. At least with respect to unstructured data, I'd say: give us the mess, tell us what kind of insights you want, and we'll see if the mess can be transformed to get the insights.
Obviously, garbage in, garbage out on some level, but part of the beauty of these systems is that garbage can go in and you can still get something pretty amazing out. There's no shortage of garbage on the internet, and all the LLMs are slurping it in, but something pretty good comes out the other side.
Lucas: But I think you're always going to run into a buy-versus-build question. One thing we've done in a couple of projects goes back to the open-source-or-not question: if I'm going to propose a solution, and part of that solution set is that you're going to use OpenAI for this part of the process, and then I'll build some other tool chain to do the rest, I'm introducing another subscription, and you're at the whim of another provider.
Deep: Privacy issues with your data going to that vendor; all of that gets opened up as well.
Lucas: Yeah. So being able to build something custom has been part of our business since open source came out. You can build a highly functioning web application with free technology, right? And that's a beautiful thing.
Not just from the cost perspective (it can be a higher cost of ownership), but for the control and privacy. The last thing you want, at least in my view, is for a startup to all of a sudden assume all this cost of subscriptions for its business processes. It doesn't scale.
Deep: Well, yeah, it depends. Sometimes it's early in the life cycle of a prototype and you want to just get moving fast. Later on, you want to plug in really sensitive private data; maybe you grab a Llama 3 version and run a local LLM, and you're in a different space.
You sacrifice the efficacy you get with a higher-reasoning model for more control. What blocks companies from delivering AI-powered features and capabilities?
It sounds like what you're saying is that a lack of good data architecture and governance is playing a role there. What do you think blocks AI and machine learning capabilities from being realized inside the orgs you see?
Lucas: A lot of the organizations we work with are kind of brick-and-mortar: technology-enabled, but not tech companies.
I feel like LLMs can almost completely disrupt a business to the point of extinction, right? What is the company that just kicked out Salesforce and Workday? Was it the credit company? Starts with a K. Klarna, I think.
Deep: I don't know.
Lucas: They just automated a whole bunch of stuff with LLMs, and it eliminated the need for a whole bunch of software, right? And again, this is just sort of the feel of it right now: it's either really disruptive and changes everything,
or it's such a small piece of optimization, a little bit of sugar, that it's like: meh, do we want to do it? Do we want to invest in it? The prototype is nice, but it doesn't have the big bang for the buck. That's what I'm seeing: either it's huge and disruptive, or the cost of operationalizing it, of really getting it to a state where it can create measurable impact,
is just higher than the benefit, so you're left with hanging problems. The other thing is that a lot of enterprise companies are still figuring out technology. The question isn't why aren't they using AI; it's how are they using technology in the first place?
And that's not because big companies somehow lack intelligence. It's just a more complex ecosystem to be in. There are more controls across the organization, so it's not something you can easily inject without dependencies on other things.
I think AI has a large number of dependencies in an organization: the ethical guidelines, the technology support, the operational side, which I think is a huge challenge. And then you have the data itself, right? That's the problem space we often shift to: well, we could do it if we had better data to train with, if we had access to the data, if someone hadn't added
50 columns to this table that we don't know the meaning of; then we'd be able to understand what's in there. And you literally see this, right?
Deep: I mean, certainly outside the team, or sadly sometimes even inside of one team. This has been a super fun conversation; it's been really interesting.
I'd like to end with a future-oriented question. If we fast-forward 5 or 10 years: from where you sit in the ecosystem of AI, what do you see happening with respect to your clients and customers and the type of work you do? Paint the picture you expect in 5 to 10 years.
Lucas: The only thing I feel any confidence in is informed speculation, so I'll answer from the practitioner point of view: how we use AI and build things. I think that's going to increase. I think it's going to make platform development more commoditized, faster, easier.
I think programmers will have a different approach, and the job description of a software engineer could change radically because of how we're interacting with the software: a new layer of abstraction. We'll think about tokens instead of bytes. I think this is all known now, and it's just going to grow.
I don't think it's going to reverse. I think core competencies around data will become more important than solving the sort of algorithmic stuff, the basic computer science stuff; that'll be replaced by, maybe, more of a focus on data, because of the amount of data being generated that then has to be handled.
It's a huge quantity, right? So there's a scale question.
Deep: They're already struggling with the fact that so much of the content they're now training against came out of the bots in the first place. So, yeah, a lot of folks are hiring college students to prove that they manually wrote something.
Lucas: So I think that's the main area where it's going to change engineering organizations: the structure of the job descriptions. It's going to change how things are developed, fundamentally. If the last 20 years were about building big transactional platforms to solve these problems, then the next 20 years are really going to be about how to manage data on a large scale.
Those, I think, are trends I'm safe in saying will continue. As far as...
Deep: ...we won't hold you to the singularity predictions.
Lucas: Yeah. I mean, I'm really interested in what we're going to apply our cognitive space and calories to,
Deep: Yeah.
Lucas: if we're shifting that out, the things that I don't have to think about,
Deep: Right. It's like the old Star Trek episodes where they wind up on a new world. Are we all sitting around making pottery, or what are we doing with our mental cycles if a lot of the basics are taken care of? Do we all just become really amazing thinkers, able to sculpt and control this machinery toward the ends that we want?
Lucas: Well, I think we'll ask better questions. And I see this in my kids; like, they forget about it. They're using ChatGPT for everything. They're not doing a scratch of essay writing anymore; they're just refining stuff. If you get better at asking questions, then, as a first principle, you're going to get more precise in terms of your solutions.
This often happens when you're just trying to figure out your product-market fit. There's a lot of learning involved, because maybe you're not asking the right questions.
Deep: Yeah, I think learning how to do some of this. We talked about dealing with well-formed problems,
and I think we were in agreement that we all need to get better at dealing with ill-formed problems. And I think there are a couple of other ingredients, like the ability to ask questions, the desire to ask questions, and to really dig in.
Those are really essential to being able to navigate the more ambiguous intellectual landscapes that I think we're all going to find ourselves in. So anyway, thanks so much for coming on the show. This was a really fun conversation.
Lucas: Yeah. I hope you enjoyed it. I really did too.