Digital Squared
AI's Unseen Evolution: Insights from a Microsoft Veteran
On this episode of Digital Squared, Tom is talking to Luis Vargas, a tech industry veteran with a fascinating journey from his upbringing in Mexico to being on the cutting edge of AI. Luis talks about his 15 years at Microsoft, where he played a key role in the company's AI strategy and its partnership with OpenAI. Now, as co-founder and CTO of his own startup, Evolver, he's at the forefront of applying AI technologies to real-world business challenges. We talk about Luis's career path, delve into Microsoft's lesser-known early work in AI, and discuss his unique insights into both the technical and human aspects of this transformative technology.
Intro 0:00
Welcome to Digital Squared, a podcast that explores the implications of living in an increasingly digital world. We're on a mission to inspire our listeners to use technology and data for good. Your host, Tom Andriola, is the Vice Chancellor for Information Technology and Data and Chief Digital Officer at the University of California, Irvine. Join us as Tom and fellow leaders discuss the technological, cultural, and societal trends that are shaping our world.
Tom 0:30
On this episode, I'm talking with Luis Vargas, a tech industry veteran with a fascinating journey from his upbringing to being on the cutting edge of AI. Luis talks about his 15 years at Microsoft, where he played a key role in the company's AI strategy and its partnership with OpenAI. Now, as co-founder of his own startup, he's at the forefront of applying AI technologies to real-world business challenges. We talk about Luis's career path, delve into Microsoft's lesser-known early work in AI, and discuss his unique insights into both the technical and human aspects of this transformative technology.
Tom 1:08
Luis, welcome to the podcast.
Luis 1:11
Hi Tom, thank you so much for inviting me. Excited to talk.
Tom 1:14
Absolutely, absolutely. I'd like to kick us off starting with you taking us through your upbringing, your education, and how you got to Microsoft. Let's start with the story of Luis and how he came to be the person you are today.
Luis 1:27
Sounds great. My name is Luis Vargas. I grew up in a small city in the middle of Mexico, a couple of hours from Mexico City. My father was a civil engineer and my mother was a teacher. Since I was a kid, I remember them talking about the importance of education. They always told me that my only inheritance was going to be education, so it was going to be important that I made the most of it. I remember, since I was a kid, them always trying to teach me whatever was interesting to them at that point in time. My father would talk to me about how houses are built and the construction process, and he would take me to see them and try to show me the interesting aspects of being an engineer. And my mother would talk to me about all sorts of interesting topics, from history to math to geography and whatever else. So anyway, I grew up in a house with lots of encyclopedias. There was no Internet back in those days, so it was all about whatever you could read at home, or going to the library and learning from that. I was always a very curious young man. My parents remember me opening VCRs and opening TVs, and even opening mattresses, because I was very curious about how the coil and foam mechanism worked. I opened a few mattresses; I'm sure they were not very excited about that.
Tom 2:51
Yeah, sure. So where did computer programming catch your interest?
Luis 2:53
So I guess my first interaction with anything digital was video games. I started playing video games when I was four years old, and back in those days it was the Atari 2600. Since then I have played many more things, but that was my first interaction with digital things. And then it was in high school when I got access to my first computer. It was a 386 PC with a VGA screen; probably even before that, there was a monochrome green-on-black monitor, and it used floppy disks. I do remember, yes, that was the first one. And in class, that was my first programming class, it was very simple. It was Logo, do you remember Logo? It was a simple language where you had to move a little cursor, in the shape of a turtle, across the screen, and you could give it some basic instructions. And then from there it was BASIC. That was the first time I could give the computer some instructions, have it do some basic arithmetic operations and output a response on the monitor. And I was fascinated by that; I was pretty excited about the things I could do. I couldn't wait for my next, I guess, programming class, well, not even programming, my computer class in high school. So that was my first interaction.
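Side note for readers: Python's standard turtle module is a direct descendant of the Logo turtle Luis is describing, so a minimal sketch of that first kind of program might look like the following. The square is just an illustrative choice, and the script needs a graphical display to run.

```python
import turtle

# Logo-style turtle graphics: drive a little turtle-shaped cursor around
# the screen with basic instructions. Drawing a square is the classic
# first exercise.
t = turtle.Turtle()
for _ in range(4):
    t.forward(100)  # Logo: FORWARD 100
    t.left(90)      # Logo: LEFT 90

turtle.done()  # keep the window open until it is closed
```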
Tom 4:19
So then, obviously, you went on to study computer science. And I think your PhD program is really interesting, from when I was doing some background research on you. Can you talk a little bit about that, and then how it led to the career path we want to talk about?
Luis 4:30
Yeah, as I was finishing high school, I had to decide what I wanted to study, and I really enjoyed the computer classes I'd had at that point in high school. Plus, as I said, I was always very curious about video games, and in my mind I had this idea that I wanted to understand how video games were made. A weird reason to go into computer science, but I think the combination of these things made me want to study it. Even at that age, I thought of myself as quite methodical and logical, so I think all of this pushed me in that direction. So I ended up going into computer science in college, at a university close to Mexico City, and that's where I first got engaged with computer science topics: things like programming and compilers and database systems and distributed systems. That's really when I started writing pseudocode and algorithms and implementing them in different languages. Back in those days, my first language in college was COBOL. If you remember COBOL, it's somewhat like writing an English letter, which is funny because now, literally, we're writing programs in English. So yes, the first one was COBOL, and then I moved from there into Pascal and C, and then object-oriented programming, so some Java. And interestingly, as I was finishing college, in my last year and a half, there was a program, and it isn't in every country, where you could participate in the Microsoft Academic Training Program. Between Microsoft and the local government, they had this program to educate college students about Microsoft technologies. So I spent a year and a half learning about Visual Studio, starting with Visual Studio 6.0 and then the beginnings of Visual Studio .NET; .NET had just come out. Plus database systems: I learned about SQL Server, I think it was 7.0, it might have been 6.5, one of the early versions of SQL Server. So I did that, and yeah, I graduated from college with my computer science degree, but also with a bunch of certifications from Microsoft, having learned about these different tools for programming and for back-end enterprise software. Which is interesting because, well, we'll talk about that, I ended up working at Microsoft, so it was pushing in that direction.
Tom 7:00
Luis, back then, very different from today, certificates were few and far between, and you were in a much more select class to have certificates at that point in time. Today, certifications are everywhere; every vendor is doing them. But back then there were only a handful of vendors doing them, and only a handful of certifications you could get. So you were really in an elite class even at that point.
Luis 7:17
Yeah, I got lucky that my university was one of the few in Mexico that had this partnership, both with the Mexican government and with Microsoft, so that I could participate in the program. I think my cohort was probably 25 to 30 students, so not that many, and I participated in the first class that did this. So I actually got lucky: one year later I maybe wouldn't have had access, and one year before I definitely wouldn't have participated. It was just the right time, and I managed to participate in all of that. As I said, I finished college with my degree and a few certifications on these Microsoft technologies, and then I had to decide what to do: go get a job in the real world, or try to delay it. And I decided to delay it. I was interested in just continuing my studies and having the opportunity to learn more about computer science. That's when I applied to a PhD program. I applied to Cambridge University in England, and I decided to go there. I always thought that I was probably going to end up working in the US; that was my thinking, very basic thinking, I was barely 20. I thought, I'm probably going to end up working in the US, that's where the big technology companies are, and I also wanted to be close to Mexico. So my logic at the time was: why don't I go and get to know Europe? And Cambridge had a great scientific history, everything from the creation of calculus to the splitting of the atom and the discovery of DNA, to the notion of the Turing machine and EDSAC, one of the first programmable computers, and even more recent things like peer-to-peer networks and virtualization and some discoveries in encryption. So I was very curious about going to Cambridge, and that's where I ended up spending four and a half years. My PhD was a combination of database systems and distributed systems, because I had enjoyed both of those topics deeply in college. So that's what I was doing: looking at new algorithms for database systems in a network to be able to share information effectively. This was 20-something years ago, 21, 22 years ago.
Tom 9:43
So back in those days... yeah, we date ourselves with these things. When you say floppy disk, we know how to place you, and ourselves. You have to be careful what you reveal when you talk about some of these older pieces of technology. So, you got your PhD, and then you went to work for Microsoft.
Luis 9:56
I went to work, yeah. I had thought about delaying going into the real world as much as I could, but I was like, no, at this point it's time to actually go and work. As I was finishing my PhD, I started talking to Microsoft, and I feel like we very quickly found some interesting synergies between where they thought I could be useful and what I wanted to do. So before I finished my PhD, I already had an offer from Microsoft, and the day I finished my PhD, I moved to Redmond, Washington, to work for Microsoft. My first job at Microsoft was on the database systems team, and, unsurprisingly, I was working on the distributed systems aspects of the database. It started as a more researchy kind of thing. You remember, back in those days, 17 or 18 years ago, we were talking about service-oriented architectures, and how you have these contracts that define the interaction between different systems in an asynchronous way. So how do we implement that in the database itself, with different databases collaborating through triggers and messaging? I worked on a system called Service Broker, one of the features inside SQL Server, the Microsoft database, for having databases communicate like this. That evolved toward high availability and disaster recovery: having multiple standby replicas, local and remote, to be able to recover in case of failures, whether localized to the database system or the server, or failures that impacted the whole site. Which I actually ended up seeing, because one of our customers was a bank in Ecuador, close to Quito, the capital, and there was a volcanic eruption about 40 miles from their data center. I remember working with that bank as they were doing disaster recovery. So anyway, a bunch of interesting examples of companies having to guarantee availability for mission-critical applications, even in the presence of some big fault. There was another customer in the US, in New York, that we helped deal with Hurricane Sandy. You remember Hurricane Sandy; I remember talking to the customer as they were trying to get water out of the basement so the servers didn't get wet. So anyway, I worked on databases, particularly the distributed aspects, and more specifically high availability and disaster recovery, and I was leading that space for SQL Server. I worked there seven years, and after that I moved toward the beginnings of the cloud. That was when cloud was starting and Microsoft was working on Azure, the Microsoft cloud, and I was collaborating with a team specifically on making sure that Azure was enterprise-ready: making sure we could support enterprise mission-critical database systems, starting with SQL Server, but expanding from there to all sorts of enterprise applications, and making sure we had the right support with regard to compute, storage, networking, security, availability, and so on. I did that for another three to four years, working on the beginnings of the Microsoft cloud, and after that I pivoted toward AI. This is a bit more than eight years ago. I was focused on coordinating technical strategy for AI in the company.
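To make the replica idea concrete, here is a toy Python sketch of primary/standby failover. The class names and the simplified replicate-to-every-healthy-copy model are invented for illustration; this is not how SQL Server's availability features are actually implemented, which add quorum, log shipping, and synchronous/asynchronous commit modes.

```python
class Replica:
    def __init__(self, name: str):
        self.name = name
        self.log = []        # replicated transaction log
        self.healthy = True

class HAGroup:
    """Toy primary/standby group with failover across local and remote copies."""
    def __init__(self, names):
        self.replicas = [Replica(n) for n in names]
        self.primary = self.replicas[0]

    def write(self, record: str):
        if not self.primary.healthy:
            self.failover()
        for r in self.replicas:          # naively replicate to every healthy copy
            if r.healthy:
                r.log.append(record)

    def failover(self):
        standbys = [r for r in self.replicas if r.healthy]
        if not standbys:
            raise RuntimeError("no healthy replica: true disaster recovery needed")
        self.primary = standbys[0]       # promote the first healthy standby

group = HAGroup(["local-primary", "local-standby", "remote-dr"])
group.write("txn-1")
group.primary.healthy = False            # simulate the primary site failing
group.write("txn-2")                     # write transparently fails over
print(group.primary.name)                # local-standby
print(group.replicas[2].log)             # ['txn-1', 'txn-2']
```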
So I was reporting to Kevin Scott, the CTO and executive vice president of AI, who reports to Satya Nadella, and my role, working with Kevin, was to define technical strategy for the company. As I say, eight years ago we were already looking at increasingly larger models, some of them to help, for example, Bing with semantics and relevancy, and some of them coming from Microsoft Research. The idea of increasingly larger models with more parameters and more data had already started to become more and more interesting. But it was really seven years ago or so when we looked at the opportunity for self-supervised models: models that just learn by themselves. You don't need humans showing them examples of what is a cat and what isn't a cat, or what is good and bad sentiment; they can learn all of this directly from data. This is where we started training self-supervised models at increasingly larger scale, and in the process we had to figure out all the different angles of this. Starting with infrastructure: working together with Nvidia on the first clusters of computers we were going to put together to be able to train some of these models, then realizing very quickly that you need system software to let you parallelize the training of these models, because the models don't fit in one GPU, or even a whole server of GPUs or accelerators. You need to parallelize the training across different GPUs and different machines in the cluster. So we put together a team to help us do this, and then another team to actually do the work of training the models. We started training some of these models six or seven years ago; we called them Turing. Some fairly large models for those days: we trained a 17-billion-parameter model, and then 50 billion, and then 500 billion together with Nvidia. And then the next phase was: how do we use these models? We started getting the whole portfolio of products across Microsoft, from Bing to Office and Dynamics and LinkedIn and Xbox and GitHub and everything else, to use these big models that were centrally trained inside the company, as opposed to the disjoint, independent models that every product unit had been training before. And we proved across the company that we could take a dependency on a handful of models and achieve better quality than we had achieved before, across many products and hundreds of scenarios. So that was, very roughly, what we were doing, and I was coordinating the strategy across the company.
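The "model doesn't fit on one GPU" problem Luis describes can be illustrated with a toy pipeline split. This PyTorch sketch is a minimal, hypothetical illustration, nothing like the actual training stack he is talking about, but it shows the core trick: place different layers on different devices and move the activations, not the weights, between them.

```python
import torch
import torch.nn as nn

class TwoStageModel(nn.Module):
    """Toy pipeline parallelism: each stage lives on its own device, so a
    model too large for one GPU can still run by passing activations along."""
    def __init__(self, dev0: str, dev1: str):
        super().__init__()
        self.dev0, self.dev1 = dev0, dev1
        self.stage0 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to(dev0)
        self.stage1 = nn.Linear(4096, 1024).to(dev1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.stage0(x.to(self.dev0))
        return self.stage1(x.to(self.dev1))  # ship activations between devices

# Use two GPUs when available; fall back to CPU so the sketch runs anywhere.
devs = ("cuda:0", "cuda:1") if torch.cuda.device_count() >= 2 else ("cpu", "cpu")
model = TwoStageModel(*devs)
print(model(torch.randn(8, 1024)).shape)  # torch.Size([8, 1024])
```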
Tom 16:30
I want to interject here, because I think a lot of people maybe don't understand this about Microsoft. Right now you sit there and it's: OpenAI had all this innovation, Google was doing stuff. I think a lot of people miss that Microsoft was doing things too, but the strategy was inside the house: building capabilities, proving capabilities to support the Microsoft global enterprise. They didn't talk about what was going on as much, and only if you really knew people like you inside the organization, working in the CTO's organization or the research organization, did you know these things were going on just the same as they were going on at Google. I think that's one of the things people didn't understand. Microsoft wasn't sitting around until OpenAI came along and, oh, this partnership happened. No, Microsoft actually had a lot of experience in this. I'm really curious what you can share with us, because I know you were part of the Microsoft team that started the discussions with OpenAI. What was it like at the time, in terms of what you had learned and the capability you had built, and then meeting this small little startup of research engineers and saying, huh, what do we do strategically from here? What can you tell us about those days?
Luis 17:43
Yeah, so OpenAI was interested in training large models, as they had been and would continue to be. And they were very curious about the infrastructure we had built, because the programming paradigm was amenable to what they liked. So they wanted access to the infrastructure, and we gave them access, and they tried it, and they liked it. And in the process, we started discussing the scaling laws of AI. OpenAI has a pretty famous paper on this, and we had observed the same scaling laws: increasingly larger models, with more parameters and more data, and because of that more compute, acquiring a better, more nuanced understanding of language, and not only language but images as well, and being able to support tasks in those spaces in a better way. So we had seen similar things. After they tried the infrastructure and we discussed how we each saw the space, it was clear that we had a bunch of commonalities in how we saw the space evolving, and the missions of the companies aligned really well. That's how the companies decided to start a deep partnership, and the partnership has been going on for many years. Part of the reason Microsoft was so excited about the partnership is that we had seen some of the same things happening inside the company, and we had already put some of these centrally trained models into many scenarios and many products and shown the value of this. So I feel like all of the right pieces were in place by the time we started discussing the partnership.
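For readers unfamiliar with the scaling laws being referenced: the well-known OpenAI result (Kaplan et al., 2020) is that loss falls as a smooth power law as parameters, data, and compute grow. A tiny illustrative sketch follows; the constants are invented purely to give the curve a plausible shape and are not fitted to any real model or taken from the paper.

```python
# Illustrative power-law scaling: loss(N) = a * N^(-alpha) + floor.
# Constants are made up for shape only; they come from no real system.
def predicted_loss(n_params: float, a: float = 6.0,
                   alpha: float = 0.076, floor: float = 1.7) -> float:
    return a * n_params ** (-alpha) + floor

for n in [17e9, 50e9, 500e9]:  # the Turing-scale model sizes Luis mentions
    print(f"{n / 1e9:>5.0f}B params -> illustrative loss {predicted_loss(n):.2f}")
```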
Tom 19:40
One of the things that Microsoft has really been good at over the years, from my perspective as a customer but also as a student of the business, is scaling things. Right? Microsoft is incredibly good at scaling things. What were some of the tension points? Because I'm sure when you were working with OpenAI, scaling was not a strength of theirs; matter of fact, maybe today it still isn't one of their strengths. What were some of the tension points? Obviously really smart people, really ahead of the rest of the pack in their thinking, but there must have been tensions there, in terms of what they brought to the table versus the challenges of scale they couldn't yet see, I would guess.
Luis 20:00
Yeah, just to be fair to OpenAI, I think they have done a fantastic job with ChatGPT, and they have scaled ChatGPT immensely well. But I can talk from the Microsoft side. On the Microsoft side, we obviously had to think about scaling the inferencing of these models across hundreds of millions of people and devices, and how to do that while maintaining reliability and while minimizing cost. We have been very motivated to lower the cost of all of this, which, by the way, we ended up putting into the platform in Azure as well, so that all of the cost savings Microsoft benefited from, developers outside Microsoft could benefit from too. So I would say scalability, performance, security, obviously ensuring that all of the privacy agreements we have with our customers were kept and respected, and then cost. Those were the main aspects. But once again, we had already done a good job of that for the internal models, which now people would call LLMs. Back in those days we didn't have a name for these things; I guess we called them foundation models, or models as a platform. We had already built this platform that products across the company were taking a dependency on, and we had already had to figure out a lot of these scaling aspects in advance. So by the time we had newer and better models, what we had to think about was: what new scenarios can benefit from the increased level of reasoning capability? How do we benefit users and enterprises from these increased capabilities? A lot of those scaling problems we had already dealt with. Obviously the models were bigger and brought additional complexities that we had to handle, but for the earlier, simpler, smaller models we had already done it.
Tom 21:50
Yeah, so you recently left Microsoft and you're on to a new venture. What can you tell us about it?
Luis 21:56
Yeah, last year I co-founded a new startup, Evolver AI. Together with my co-founders, we looked at the AI landscape, and it's changing every day. Every day there are hundreds of new ideas and papers; we are really at the early beginnings of this exponential wave. Clearly the world is rapidly changing, not only from a technology perspective but in the implications of the technology as well. What we noticed is companies in some domains trying to figure out: how do they get concrete, measurable value out of all of this innovation that is happening? And then how do they enable their employees and their customers to collaborate effectively with artificial intelligence, to build these virtuous cycles and feedback loops between humans and AI that bring all of this value to the table? We think there is an opportunity to help organizations better generate intelligence and actions in some of these domains, and we're starting in domains like finance, operations, and compliance. What we did is combine deep domain expertise in these different domains with deep AI and technology expertise; the company is really a combination of both parts. We are working together on building a platform and a set of solutions targeting those domains, to help customers gather the most important and valuable information they need to make intelligent decisions, but also to support increasingly complex tasks and workflows. Part of this is giving you all of the information you need at your fingertips, so you can make the best possible decision at any point in time. But part of it is also helping automate some tasks, so you can achieve them with higher accuracy and a better understanding of the domain, closer to what the end user is going to need, and in a fraction of the time. Some of these tasks traditionally would have taken months; right now we're talking about days, and later we're going to be talking about hours. That means these tasks can happen a lot more frequently than they used to.
Tom 24:35
Yeah, no, to your point, it's changing so fast. I'm sure you could tell a story about the difference between the tools you thought you'd have available to build a platform with when you started the company, versus today. I find it an amazing time in terms of the evolution. Knowing that you don't want to talk too much about the company, I do want to ask you about this accelerating change in the technologies we can put underneath these companies, underneath your platform, underneath my organization. It's funny, I feel like I could do a separate webinar or event every couple of weeks, because there's always something new. A few weeks ago, trying to capture the moment, we did a webinar about DeepSeek. And it wasn't just about DeepSeek; it was really to help our business community here in Orange County, California, to help technology leaders understand the open-source frontier model landscape well enough themselves that they could turn around and talk to their CEOs and their boards about it. Because the moment captured everyone's imagination: oh my gosh, we can now take advantage of AI at a tenth of the cost, which gets every CFO's attention, every CEO's attention, the board's attention, and of course they were looking to the technology leader to come in and say, so should we be doing this? What are your thoughts on trying to build a company and a platform while looking at the frontier model landscape? Nvidia just had their annual conference talking about the future of computing; that's got to be entering into your mindset. How do you think through building something in an environment where a new tool could be better than the one you were using yesterday? How do you think through that as a business strategist building an enterprise?
Luis 26:24
Yeah, I think the very first thing is to think about value to the company and value to the humans working in the company. At the end of the day, companies are just aggregations of humans with some alignment. So it's about looking at what they are trying to do: where do they want to accelerate things, or make them more accurate, or make them faster, or save cost? From there you can derive what you need to do. I think it's great for developers to have access to this incredible amount of innovation happening every day. It can also look confusing when things are changing every day under the covers. So what we do is focus on the task value. We identify different levels of ROI that we believe companies are going to get out of these tasks, and then we define the needs these tasks have with regard to accuracy, cost, generalization, and so on. Based on this, we can start pinpointing the set of solutions that become relevant. And as you say, there are many different aspects from a technology perspective, starting with the foundation models. People call them LLMs, or LMMs, large multimodal models, because this is going beyond language into language and images, or language and video, and so on. There is a variety of these. Some are specialized in particular tasks; some are better at code generation than others; some are better at reasoning; some are better at planning and orchestrating the sub-tasks inside a task. And even within each of those, some have a higher level of capability, but presumably higher cost as well. So you need to look at all of these dimensions and decide, for this type of task, starting from the company's perspective: what do I need to bring them value? I'm going to decompose this task into a set of sub-tasks, and for each sub-task, what are the best tools and capabilities to bring? Then you can start seeing: here I need a model with all of these characteristics; here I don't care as much, I just need a minimum level of capability, but I care more about cost, because this task is going to be super repetitive. And you can start putting all of this together. I think the real trick is expanding from there and supporting the other sides of the innovation happening. Because obviously it's not just the models; it's how you communicate with the models and how you make the most out of them. There's a lot of prompt-engineering work that needs to happen. Plus, how do you infuse domain expertise into the models, both before they get into production, with something like fine-tuning, for example, and once they are in production: how do they continue learning from the humans, and how do you build this feedback loop so that every time a human interacts with the solution, the underlying foundation models get better as well? So you start talking about things like different versions of reinforcement learning from human feedback. And beyond this, there are all the aspects of how you bring the right knowledge into place at the right time; some of this is going to come from inside the company, some of this is going to come from the web.
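A toy sketch of the decompose-then-route idea Luis outlines here: pick, per sub-task, the cheapest model that clears the capability bar. Every model name, capability score, and price below is invented for illustration; none reflects a real catalog or real pricing.

```python
from dataclasses import dataclass

@dataclass
class ModelSpec:
    name: str
    capability: float    # rough quality score in [0, 1], hypothetical
    cost_per_1k: float   # dollars per 1K tokens, hypothetical

CATALOG = [
    ModelSpec("small-fast", capability=0.60, cost_per_1k=0.0005),
    ModelSpec("mid-general", capability=0.80, cost_per_1k=0.0030),
    ModelSpec("large-reasoning", capability=0.95, cost_per_1k=0.0300),
]

def pick_model(min_capability: float) -> ModelSpec:
    """Cheapest cataloged model that clears the capability bar for a sub-task."""
    eligible = [m for m in CATALOG if m.capability >= min_capability]
    if not eligible:
        raise ValueError("no model in the catalog is capable enough")
    return min(eligible, key=lambda m: m.cost_per_1k)

# A super-repetitive extraction sub-task versus a high-stakes planning sub-task:
print(pick_model(0.55).name)  # small-fast (cost matters most)
print(pick_model(0.90).name)  # large-reasoning (capability bar is high)
```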
And then how do you introduce memory? Because knowledge and memory and reasoning, all these things come together as well. So you need to start thinking about what tools you use to implement memory: do I implement short-term memory or long-term memory, or do I need a combination of the two? So anyway, there are a bunch of decisions. But I will say, before you get too complicated, it's all about driving value to customers as soon as possible. Then you can start getting fancy: how do you make it dynamic, how do you accelerate the process, how do you lower cost, and so on.
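On the short-term versus long-term memory question, here is a deliberately naive Python sketch. Real systems would typically back long-term memory with embeddings and a vector store rather than keyword matching, and every name here is hypothetical.

```python
from collections import deque

class AgentMemory:
    """Toy combination of short-term (recent turns) and long-term (durable
    facts) memory. Keyword search stands in for embedding similarity."""
    def __init__(self, short_term_turns: int = 10):
        self.short_term = deque(maxlen=short_term_turns)  # rolls over automatically
        self.long_term = []                               # durable facts

    def remember_turn(self, text: str):
        self.short_term.append(text)

    def remember_fact(self, fact: str):
        self.long_term.append(fact)

    def recall(self, query: str):
        hits = [f for f in self.long_term if query.lower() in f.lower()]
        return list(self.short_term), hits

mem = AgentMemory()
mem.remember_turn("User asked about Q3 compliance deadlines.")
mem.remember_fact("Client fiscal year ends in June.")
print(mem.recall("fiscal"))  # recent context plus matching durable facts
```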
Tom 30:24
Yeah, oh, that's good. I've got to ask you a question, Luis, since we've got you. Just for context on the organization I'm a part of: UC Irvine is a combination of a small city, an educational institution, and a research and discovery organization, and we take care of patients. In the context of my role, in addition to trying to bring these capabilities into place, I'm also the person who has to facilitate the conversation around the ethical use of these tools, the responsible use of these tools. I'm curious, given the journey you've traveled, starting, as you said, several years ago at Microsoft, developing these internally and using them within Microsoft: how do you talk to people about the challenges of having these incredible capabilities now available to us, with words like ethical usage and responsible usage? How do you talk about those things from your unique set of experiences?
Luis 31:17
Yeah. Something that I know Microsoft did pretty well, and that we are doing at Evolver, is having a clear set of principles that we adhere to. Different people call it different things; I think most people have heard of it as responsible AI. It's about having some objective, quantifiable mechanism to measure things like bias, for example, or transparency: is the solution being transparent with the user about what it's doing, how it's doing it, and why it's doing it? Does it enable humans to give feedback and to change that in some way, while at the same time protecting that there are certain ranges that are acceptable? So you need to be careful with all of these things: privacy and security, making sure, as with traditional systems and especially now with AI, that you protect access to data; looking at what systems have access to what data, what pieces of AI look at which pieces of data. That's important. Security and authentication and encryption, all of these pieces have to be in place. So I think it's about having clear principles. We have established these in the company, and we review them; just as we review accuracy and reliability and scalability in the system, we review privacy and security and transparency and fairness. All of these are things we review before we push new changes into the platform. But even before that, all of these things have to be considered at design time: when you are designing the solution, you need to think about how you're going to take these considerations into account, and what the set of things is that, at implementation time, you have to be aware of. And then you need an evaluation mechanism, because just saying that you're doing these things, even if you really are implementing them, is not good enough unless you have some benchmark you can run. Obviously we are learning about the types of benchmarks you need depending on the particular domain or domains you're working in. We are also building the set of benchmarks ourselves, so that every time something changes in the system, or we start supporting new types of tasks and domains, we have the ability to evaluate how well we are doing against those principles. Beyond that, communicate these effectively to stakeholders and customers; they need to understand what principles you are adhering to and respecting, and what benchmarks you're running. And then listen to feedback. The reality is no company is going to have all of the answers, so we put something in place that we think is a good starting point, but then we listen to customer feedback, understand their perspectives, and iterate based on that.
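As an illustration of the benchmark-before-you-push discipline Luis describes, here is a hypothetical release-gate sketch. The metric names and thresholds are invented, not Evolver's or Microsoft's actual checks; the point is only that responsible-AI metrics sit beside accuracy in the same gate.

```python
# Invented thresholds: accuracy is a minimum, the others are maximums.
THRESHOLDS = {
    "accuracy": 0.90,
    "fairness_gap": 0.05,   # max allowed outcome gap across groups
    "pii_leak_rate": 0.00,  # no tolerated leakage of private data
}

def release_gate(metrics: dict) -> bool:
    """Block the release unless every principle's benchmark is in range."""
    return (
        metrics["accuracy"] >= THRESHOLDS["accuracy"]
        and metrics["fairness_gap"] <= THRESHOLDS["fairness_gap"]
        and metrics["pii_leak_rate"] <= THRESHOLDS["pii_leak_rate"]
    )

candidate = {"accuracy": 0.93, "fairness_gap": 0.02, "pii_leak_rate": 0.0}
print("ship" if release_gate(candidate) else "block")
```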
Tom 34:08
As you think about your platform and different customers, do you think those principles, right, let's say you're a 150-year-old established company with a certain risk tolerance, very different from the 150-day-old company that's willing to throw caution to the wind, do you think the models they use, the principles, will be parameterized differently based on things like innovation mentality and risk tolerance? Will those things come through in the models, and will they be programmable for each type of use case they're thrown at?
Luis 34:44
Different companies will have the freedom to decide what particular scenarios or use cases they want to start deferring to platforms, AI or non-AI based, and to what extent. They can decide and say: maybe this particular part of the process is fine, let's see how it works, and after this I'll figure out the next part of the process. And as you said, I assume that, yes, there are going to be companies that want to move faster and explore bigger and broader scenarios and workflows. But obviously all of them should have the flexibility and freedom to decide the specific parts they want to start offloading into the platform, and also the level of engagement they want employees inside the company to have. The feedback loop between humans and AI systems is going to be critical; by default it's going to be there, and they can determine how involved they want the humans to be, and to what level. So yeah, I think it's going to be very interesting to see, particularly in the domains Evolver gets involved in, as I said, finance, operations, compliance, how this is going to play out with different types of customers. We're already seeing it: there's always a lot of excitement about the value and the opportunity, but there are always questions about explainability, for example, repeatability, and all of these responsibility aspects we've been discussing. So it's going to be a very interesting year.
Tom 36:15
It is. It totally is. Okay, the last question I have for you is one that every one of our candidates, excuse me, our guests, gets, which is: I ask you to share, from the last 90 days, because it's moving fast, to your point, what's the coolest or weirdest thing you've seen someone use AI for?
Luis 36:34
I don't know if I'd say it's the coolest or weirdest, but I see a lot of people starting to use AI to bounce ideas off of, which I think is fascinating: always having this artificial entity next to you that can help you improve your ideas, and you can improve on the ideas of the agent, and you start this loop. I do it myself. Every time I have an idea, I start bouncing it off the AI, and it's: maybe I didn't think about all of this, and then the AI will respond with something else, and, oh, that's a good point, but maybe you missed this, AI, and then it's like, oh, you're right. After a few iterations, I think we improve on the ideas. I think that's a cool mechanism. And also, for people who aren't doing it: it makes you look really smart, because you can refine all your ideas before you bring them to other humans. Personally, one of the things I've been doing, and this has been going on for a few years now, is with my daughters. I have a couple of kids, and instead of reading books as bedtime stories, we actually generate stories every night. We use AI to generate stories, and every night is a different story, so it never gets old. And every night it's always about princesses and dragons, always those two components, but they are interrelated in a bunch of different ways. We generate the stories at night, and then they can ask questions about what happened to the princess, and what happened to her friends, and what happened to the dragon and the prince, and so we can continue the story, making it interactive. And they can see images, pictures generated, even videos these days, about what happened to the princess in the story and how the dragon ended up. And then we can decide, in a week, maybe we want to go back to that story and see where else we take it. So it's this continuous creation, which I think my kids really enjoy.
Tom 38:33
I think that's really cool. And as someone who told a lot of stories to kids growing up, the cognitive load you are relieving the storyteller, the story creator, of is amazing, right? The degrees of freedom in which to create new content and new versions of the story. And working in the images and videos, it's now not just a storyteller and a listener; it's a multimedia event. I think that's really cool, and it's something that's probably just getting started, right? Who knows what you'll be doing with your daughters in a couple of years, given the way these tools continue to advance. That's fantastic. Luis, I want to say thank you for joining us and sharing your journey, your path, the things you're working on, and the perspective you have on this incredibly interesting time we're in. So thank you so much for joining us today.
Luis 39:34
Thank you Tom, thanks for inviting me.