Digital Squared

From Classroom to Congress: Examining AI's Societal Impact

Tom Andriola Season 3 Episode 3

On this episode of Digital Squared, Tom is talking to Dr. Jules White, a Professor of Computer Science and Special Advisor to the Chancellor on Generative AI at Vanderbilt University. Dr. White is at the forefront of integrating artificial intelligence into higher education, leading initiatives to democratize AI access across campus and developing popular courses on prompt engineering. With his unique blend of technical expertise and visionary thinking, Dr. White offers invaluable insights into how AI is reshaping education, careers, and society at large.


Intro 0:00  
Welcome to Digital Squared, a podcast that explores the implications of living in an increasingly digital world. We're on a mission to inspire our listeners to use technology and data for good. Your host, Tom Andriola, is the Vice Chancellor for Information Technology and Data and Chief Digital Officer at the University of California, Irvine. Join us as Tom and fellow leaders discuss the technological, cultural and societal trends that are shaping our world.

Tom  0:28  
On this episode of Digital Squared, I'm talking to Dr. Jules White, a Professor of Computer Science and Special Advisor to the Chancellor on Generative AI at Vanderbilt University. Dr. White is at the forefront of integrating artificial intelligence into higher education, leading initiatives to democratize AI access across campus and developing a popular course on prompt engineering. With his unique blend of technical expertise and visionary thinking, Dr. White offers invaluable insights into how AI is reshaping education, careers and society at large.

Tom  1:07  
Jules, welcome to the podcast.

Jules  1:08  
Thank you for having me. 

Tom  1:10  
I'd like for you to start talking about your career journey and how you came into this world that we're going to talk about, which is the world of artificial intelligence.

Jules  1:19  
I've been in computer science for quite a while. I actually started as a visual arts major as an undergraduate, and then I switched to East Asian Studies. But at the same time, I was doing all the assignments that my roommate was doing in computer science, just doing them for fun. And then I was like, it's crazy if I don't switch majors. I've been in cybersecurity and software engineering for a long time, and I've looked at applied AI in that space. And then, really, my career shifted when ChatGPT came out. What I tell people is, on November 1 of 2022, if you'd stopped me on the street and said this thing called ChatGPT is going to come out, and here's what it's going to be able to do, I would have been like, trust me, I'm a professor in computer science. I won't live to see that level of advance in computing. And then a month later, it came out, and basically, I changed my career based on what ChatGPT was capable of, and I think it's going to redefine everything we do in computing.

Tom  2:10  
So I'm going to ask this question now, which is, I was in a conversation about a month ago, and someone referred to this period we're in right now as 'a moment in time', in the way that Netscape Navigator was a moment in time. Would you put what we're going through right now in that category of significant impacts as we look back 10, 20 years from now? 

Jules  2:35  
Yeah, I think it's on a level above the internet, personally. I think that many of the things that we've seen from a technology perspective, you could have anticipated them to some degree. It was like a natural evolution; it made sense. But I really think that this was an unexpected jump in capability. We had science fiction discussions, but now that it's here, it's science fiction, and it's so much more capable than I think we thought through. And for me, this really hit home early, probably December of '22, when I was messing around with ChatGPT, and I had my dad on the phone and we were chatting. And so we asked ChatGPT, what would have to change in the world if we didn't have odd numbers, right? Computers don't answer that question. And it gave this incredibly thoughtful answer. And it was at that moment that I said, wait a minute, this is something different in computing, like computers shouldn't be able to do that. And that's really what I think the last two years have been: computers should not be able to do what it just did, and now it can. And so that means that computing is going to have a much broader impact than ever before, but it's also going to make it much more interdisciplinary, where the innovators aren't necessarily the computer scientists.

Tom  3:49  
Which is interesting, right? Because so much of the innovation of the last 30 years really has been led by this very small set of people who have had these specialized skills, who understood certain languages and tools. This has been much more democratized. As a computer scientist, are you surprised at how fast things have evolved since the ChatGPT moment?

Jules  4:11  
I'm surprised it hasn't evolved faster. But I think that the challenge is that this is a technology, in my opinion, that really shines when paired with human beings. And so it requires people to adapt and evolve in order to begin to use it effectively. So I always talk about how we shouldn't be thinking about artificial intelligence to replace people. We should be thinking of augmented intelligence: generative AI, when you pair it with a human, can allow the human to go and do things that they couldn't do before, like an exoskeleton for the mind. It's not about replacing them; it's allowing them to go explore more, solve bigger and harder problems. Now the challenge with that is that people have to learn to drive this exoskeleton and learn how to use it. The other challenge with this technology is that, to really benefit from it, you have to be creative and start rethinking, because we couldn't do these things with computing before. So somebody has to have the creative inspiration to say, can it do this, and could I figure out a way that it could tackle this problem? And there's a lot of that creative exploration that still has to happen. So I think we're absolutely in that moment in time, but it's actually going slower than I thought, because it's so much more dependent upon human beings figuring it out. As for the technology itself, I think if we had just stopped with ChatGPT from January of 2023, we haven't even come close to hitting the limit on what we can do with it. We're playing catch-up to the technology, but the humans have to figure it out in order to make it really shine.

Tom  5:45  
So if we then talk about what you do for a living, or I should say, one of the hats that you wear, because I'm going to ask you to talk about a few of them. Let's talk about the hat of professor and instructor: how are we going to see educational approaches change, given what you just said? 

Jules  6:08  
That's a great question. I think we can't really even foresee all the different ways. But I'll just start with computer science, right, my discipline. A lot of disciplines say this is going to really reshape our discipline and have this big impact. As a computer scientist, most of what we do for probably the first two of the four years of college is teach people to code, and every one of our classes depends on coding. I think that's going to radically shift, because I'm not sure that we need to teach people to code anymore. And that's really changed for my career. I took a survey of all my PhD students in my software engineering with AI class, and I said, how many of you think that in one, three, five, and ten years, humans are still going to write at least 5% of the code? And I showed them all these examples and capabilities with generative AI, and basically we had universal agreement: within 10 years, humans will write less than 5% of all code. And I think that's a very conservative estimate. So if you look just at computer science, what we're going to have to do is really focus on the things that we do well, which we've always told students are the important things: the principles, the architecture, the critical thinking, the problem solving. All of that is what's going to really matter. And I think we're going to see the same type of thing carry over to other disciplines: a lot of what we thought of as the hands-on way we taught comes back to the fact that we're really teaching people to think and solve problems, and the tools they're going to use to do that are going to be completely different. And the other thing concerns disciplines that didn't think of computing as being a core part of what they do. 
I don't think that there's going to be any discipline that sees computing as not being a totally core part of what they're doing, because there are going to be augmented versions of just about every discipline where they're using generative AI as a partner to solve problems.

Tom  7:56  
You've been one of the real leaders, at least in our industry, in developing prompt engineering courses; I've watched your posts, and I've listened to you speak in front of audiences. For those of our listeners who may not be following this as closely as you and I do: first, why is prompt engineering so important? And why do you see it as a universal skill for everyone?

Jules  8:20  
Yeah, I think that one of the challenges we have right now is we don't have good names for the skills that we need to teach people. And so one of the things I see is that some people view prompt engineering as being about messing around with individual words to try to get it to do something. And I think of it as much broader. It's about learning to converse and solve problems with generative AI. I look at it as the thing that comes after software engineering. It's the democratized version of computing, where we go and build systems through conversation with AI, and we work with generative AI to translate our goals and concepts into computation. And we're trying to figure out how to architect these systems and work with them. So that's what I think of as prompt engineering. But because this is a democratizing moment, prompt engineering is not something that you go and teach just to computer scientists. It's something that you teach everybody, because everybody can now go in and use this to create computation to solve problems in their domain, and computation now can do things that it couldn't do before. You can go in and say, give me three possible ambiguities that the team needs to discuss from this meeting transcript, and it can look at whatever the topic is and say, these are three things that are potentially ambiguous, right? And that's not something we did with computation before, but now it is something we do with computation. And then: okay, based on those ambiguities, what are possible courses of action? Give us five possible courses of action to resolve them, things like that. Now that's a series of computations that we can go and design to follow up on a meeting transcript, but that's something that is going to be a liberal arts domain of thinking: what are the right questions, and how do we approach that? It's not just a computer science focus anymore. 
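The workflow Dr. White describes here is a small prompt chain: one prompt surfaces ambiguities from a transcript, and a second prompt feeds that result back in to ask for courses of action. A minimal sketch follows; the `ask` function is a hypothetical stand-in for a call to any chat model (not a real API), and here it simply returns the prompt so the chain's structure is visible.

```python
# Sketch of the two-step prompt chain described above. `ask` is a
# placeholder for a chat-model call; a real implementation would send
# the prompt to a model and return its reply.

def ask(prompt: str) -> str:
    """Hypothetical stand-in for a chat-model API call."""
    return prompt

def find_ambiguities(transcript: str) -> str:
    # Step 1: ask the model to surface ambiguities in the transcript.
    prompt = (
        "Give me three possible ambiguities that the team needs to "
        f"discuss from this meeting transcript:\n\n{transcript}"
    )
    return ask(prompt)

def propose_actions(ambiguities: str) -> str:
    # Step 2: feed the first step's output back in as context.
    prompt = (
        "Based on those ambiguities, give us five possible courses of "
        f"action to resolve them:\n\n{ambiguities}"
    )
    return ask(prompt)

transcript = "Alice: can we ship Friday? Bob: maybe, if QA signs off."
step_one = find_ambiguities(transcript)
step_two = propose_actions(step_one)
```

The point of the sketch is the shape of the chain, not the wording: each step is an ordinary function whose output becomes the next step's input, which is what makes this feel like designing computation rather than doing a one-off search.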

Tom  10:07  
Yeah, I find when I'm talking to people who don't come from a technical background, I try to describe it this way: we all solve problems in our head, and there's a thought process and a set of reasoning steps that we go through to solve the problem. Think about how you get that out of your head and into the prompt window. And that's how you'll start to see that this tool is not a Google search replacement, but really a thoughtful companion that can think along with you, and in contrast to you, to bring value to what you're looking at. Just as if you were sitting with me at the college while I was thinking about how we were going to increase our market share, and I was thinking about A and B, and what do you think about this? Doing that with these tools, I've found, has really opened up my mind to what's possible and where they could play a role along with me. 

Jules  10:56  
I think you hit on something that's also important. What I see over and over is that the way people approach it, when they don't have any type of training, is they go and approach it like it's another internet search, and that is the most surface-level use of it. It's also not the greatest use of it, and that's why we have to start teaching it. But we also have to start teaching it because there are more thoughtful ways to use it. What I see with students is they go in and look to use it to give them an answer, as opposed to using it to explore different perspectives on an issue, inform their own thinking, decide themselves, and broaden their understanding of the problem and the issues. And so I think we have to not only teach people how to use it as a tool for computation in a much deeper way, but also how to use it in a thoughtful way that doesn't replace their thinking. Because I tell students, the one thing I can guarantee you we can automate is copying and pasting somebody else's question into ChatGPT and then copying and pasting the answer back out. I can automate that right now. And so what you need to do is make sure that you are in the middle of that, providing unique value; your thought and aesthetics and creativity are what need to be in the middle. 

Tom  12:10  
We're in the first phase, I think. I like to use the analogy that this is the Tour de France, right? There are 21 stages. Stage one was really about getting people to experiment with these tools. As I got a little bit more comfortable and capable with these tools, I changed my language from 'you need to get out and experiment with these tools' to 'you need to build a relationship with these tools.' And that really gets people's heads to cock sideways: what do you mean, a relationship? I've been using some of these tools on certain topics for months, and it's like an ongoing dialog, where it continues to hold on to the context of what we talked about two weeks ago, as well as deepening my understanding, and it also allows me to deepen the questions that I pose to it. So I have a relationship with it around a topic. It's not a transaction anymore; it's actually a give and take. And I said, try that, and you'll be surprised at how much more valuable you'll find these tools when you give them the opportunity to develop a relationship with you around something you want to use them for. I've found that's been part of my evolution, and I'm in your camp, which is, I think no matter what field you're in, whether you're in the technology field, the finance field, the supply chain field, or you work for a nonprofit, these tools are going to become ubiquitous in terms of how you go about doing your work and interacting with others around your work, and we just need to get everyone to a higher level of competence. Jules, what do you make of the people who make the argument that prompt engineering is going away?

Jules  13:46  
I think if the definition is that it's messing around with a bunch of words in one prompt, and you look at it at that level, then maybe it will go away. Maybe you won't have to mess around with the structure of your prompt. But that's not what I think prompt engineering is about. I think of prompt engineering like software engineering, which involves understanding the requirements and figuring out how to break down and decompose and express the problem to generative AI, and then how to iterate and improve and test and maintain, and all these other things. People are always like, it's just going to magically understand everything and do everything for you. And as a software engineer who spent my career talking to other human beings and trying to magically create the software that they would like to solve their problem, I can tell you that it's a much harder problem than that, and it's not something that AI is going to solve, because people don't know what they want. They don't know how to express it; they don't know how to break the problem down. They go and ask for something that really doesn't solve their problem, and then we get it, and we misinterpret it because they haven't perfectly communicated it, and we mistranslate. And we're going to have all those same issues with generative AI. People aren't going to know how to express what they want; they're going to ask for the wrong thing. Generative AI is going to misunderstand, because there's not perfect communication, and it's going to mis-implement, because it's not perfect. And so there's always going to be error in it, and that's why there's always going to be a need to teach people to get better at thinking through how to solve the problem with generative AI and how to collaborate with it to arrive at a solution. 

Tom  15:12  
When we come back to your classroom, and what you're trying to do with your students, how much of it is helping them develop the skill, and how much of it is helping them develop a new way of thinking to work with this exoskeleton?

Jules  15:25  
I think it's a bit of both. I'm teaching both people who are computer scientists and people who aren't, and so there are two different approaches. For computer scientists, a lot of what I'm doing is trying to say we need to rethink what it means to do computation and how we get there. Before, it was about teaching people to code, and the way that you design computation and control computing is through code and building systems. And now you can have a prompt that does the same thing as wildly sophisticated software. So the other day, I was talking to my son, because he's always collecting things. Now he's collecting shoes. And I was like, you need inventory management. And he looks at me like I'm crazy. And I'm like, don't worry, we can design it. And so we had one prompt that implemented inventory management, and it was: I'm going to give you pictures of things. Whenever I give you a picture, you're going to extract an item, a description and an approximate price. Keep the inventory up to date as I add pictures. And he could just start taking pictures with his phone and giving them to ChatGPT, and it would keep building the inventory for him and keeping it all up to date. To write that in software and implement it would be wildly difficult, and it's three sentences. And so we're going to be building systems in totally new ways, and as computer scientists, we need to recognize that. We also need to start thinking about how we're going to manage all this complexity: when we democratize computing and everybody can go and innovate, that means it's going to be at a much bigger scale of complexity, and we're going to have to rethink how we approach it. So I teach that on the computer science side, along with all the nuts and bolts of prompt engineering. 
For everybody else, I'm really focused on teaching people to compute, but in natural language, and to do more than just the internet search: to think of it in a deeper way, and to learn the new abstractions of computing through generative AI, like personas ('act as...'), which suddenly give a person power to do things, or templates, and all kinds of things like that.
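The two abstractions mentioned here, personas ("act as...") and reusable templates, can be sketched as plain string-building functions. This is an illustrative sketch, not material from Dr. White's courses; the function names and prompt wording are hypothetical.

```python
# Sketch of two prompt abstractions: the persona ("act as") pattern and
# a reusable prompt template with named placeholders.

def persona(role: str, task: str) -> str:
    """Wrap a task in an 'act as' persona pattern."""
    return f"Act as {role}. {task}"

def fill_template(template: str, **values: str) -> str:
    """Fill a reusable prompt template with per-use values."""
    return template.format(**values)

# A template written once can be reused across many topics and audiences.
SUMMARY_TEMPLATE = "Act as {role}. Summarize this for {audience}: {text}"

prompt = persona("a patient tutor", "Explain recursion with one small example.")
filled = fill_template(
    SUMMARY_TEMPLATE,
    role="an economist",
    audience="a 10-year-old",
    text="inflation",
)
```

The value of both patterns is that non-programmers can treat prompts the way programmers treat functions: a persona sets the model's frame of reference, and a template separates the fixed instructions from the parts that change each time.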

Tom  17:32  
In addition to being a professor at Vanderbilt, you also have this special assignment, right, being the Special Advisor to the Chancellor around this topic. What is that role, or the platform of that role? Because as I think about it, just following you, it's your role at Vanderbilt, but it's also your role for Vanderbilt, where I see you be a spokesperson for things that Vanderbilt is doing, as well as the broader impacts. How have you thought about that special assignment and tried to use it? 

Jules  18:02  
The original pitch was: let's go and explore how to incorporate generative AI into our operations, using the university as a lab model, because everybody's going to need to figure out how to incorporate generative AI across all these different functions. And a university has just a huge array of functions: we have HR, but then we have landscaping, we have the endowment, we have alumni relations, we have communications, and then we have every department. So let's use that as our lab, let's go figure out and pilot and see what works and what doesn't, and then we'll do the research while also transforming our own operations. And so that was what we set out to do. The first part of that was: how do we get generative AI to as many people on campus as possible, as quickly as possible? And so we went and built an open-source, essentially, chat environment, and you can build assistants and everything, but it's an end-user, democratized version, where anybody in any department can go in, they don't have to code, and they can get unlimited access to OpenAI, Anthropic, Meta, Mistral, all of the major providers, and they can go and experiment, build assistants, do all of this. So we did that. We started with an initial pilot of 25 people, and now we have unlimited access for every faculty member, staff member and student on campus. We're still building out the platform and continuing to innovate with it, but we now have a lot of projects starting up that focus on specific operational groups on campus. And one of the things that I'm now excited about, and that we really want to do a lot more of, is letting students reimagine what the campus experience could be like and will be like with generative AI. 
And so I had students in my class this semester building an amazing variety of things, from an assistant that could help you navigate the visa process for international students and answer all those questions, like what is the timeline, and when do I have to pay these fees, and what happens if this happens. We had students building assistants to help them plan their studies: they would take all the faculty syllabi and digest them into one schedule for the semester, and it could answer any question they had about any course, but then also handle student scheduling of courses, all this type of stuff. So I think students are going to be a big part of being really creative and reimagining what we do.

Tom  20:23  
We've thought of it in a similar way, right? Give it to everyone, democratize it. Does that mean that you've held prompt engineering training, or provided the opportunities for people to build their skills around prompt engineering, for everyone? Not just students who are in classes, but also people who are in the finance function, who are in the marketing function? Have you given all of them access to become better prompters, to be able to take advantage of these tools and think about building GPTs for their units?

Jules  20:51  
Yeah, we've done a ton of stuff. So there's a variety of things. Everybody on campus can take any of my Coursera courses for free. So we've had a lot of staff that have gone and taken my Coursera classes; my Prompt Engineering for ChatGPT is probably the main one that most people have taken. And surprisingly, and it was exciting to me, I would go and meet with staff groups, and they would say, oh yeah, we all took your Prompt Engineering for ChatGPT. A group of us took it together, and we had a weekly discussion about the class. So we've done that, but we've got on-campus things as well. The Data Science Institute and Jesse Spencer-Smith, who's our chief data scientist, have held all kinds of trainings. I've done all kinds of talks. We have other groups on campus, like our group that works on supporting teaching, and they've held workshops on how to use generative AI in teaching and learning. So I'm certainly a piece of it, but people all across campus are offering workshops and talks and hands-on training to people.

Tom  21:49  
That's great. I know another really critical opportunity that you've had is to actually speak to the US Congress, advising them on the impacts that these tools are going to have on learning, and how to think about the skills gap, right, the skills that we're going to need to be competitive in the future. Can you give us a little bit of a sense of what that conversation was like, and what message you tried to give to our lawmakers to go home and think about as they build the constructs to support our economy and society around this?

Jules  22:20  
My number one thing was to impress upon them that this isn't just some incremental evolution of what we've been doing. This is really transformative. And if you look at it from a national competitiveness standpoint, it is fundamentally important that we really take advantage of it. We got very lucky that it was invented here and that we are leading in the building of these models, but now we have to make sure that we take advantage of them. And it's an interdisciplinary thing, where we have to teach people to figure out how to go and innovate on top of them. And this is, like you said, a moment in time where we have to really focus. We really have to pay attention to this, and we really have to invest in and support it. A lot of the things that we are fearful of come from, I would say, the old AI, which couldn't do certain things and worked in a different way. We have something that works in a very different way in many respects, and we have to think it through and regulate it in an appropriate way for that. But we also have to really support the educational innovation aspect, because basically, the day ChatGPT was released, everybody on the planet essentially needed education in how to rethink what they do in computing based on this new technology, because it's not just isolated to computer science. And I think higher ed in particular is going to play a really important role in figuring that out, because we have representation across all the different departments and subject matters, it is our job to go and do research and figure these things out, and we have time in the day and space to really go and think through those hard problems. Industry? Yes, they're going to go innovate, but they don't have that concentrated time. People are active in their job roles; they're already at capacity in general. We have the space and time and infrastructure to really think about these hard problems and figure out how to innovate and solve with them. 
So I think higher ed is really well positioned for this, and it's going to play a critical piece in not only figuring out how to innovate and use it, but then teaching everybody else how to do it, and that next generation will be coming out this year and next year and the year after that.

Tom  24:29  
And I'm always curious, because I've been in a couple of these settings, though not at the US level: did you find that the questions were more about trying to put the guardrails in place and make sure that we don't get to unintended consequences, or about opportunity, or the right blend, from your perspective? Because, you know, you talk to 10 people about this, you get 10 different perspectives on where people sit on a continuum, from 'this is a moment in time that we'll look back at some point and say, how did we live without this,' to 'we've got to keep the genie in the bottle as long as possible, or we'll lose our sense of humanity,' to paint both extremes. Where did you find the people that you interacted with in terms of their curiosity around both sides of the continuum? I'm just curious, because it's always very interesting to me to see how those conversations go. 

Jules  25:17  
I thought that there was definitely an awareness of the importance of getting any regulation or policy right. I would say that I went into that not knowing what the perspective was going to be, and I came away feeling much better about everything, because these are people who really care about trying to figure out how to support and promote innovation, while at the same time making sure that we do it in the right way. And I didn't see anybody that was jumping to say, let's put these crazy regulations on, or operating out of an area of fear. I thought everybody was really thoughtfully trying to understand where the impact is going to be, and what policies could make the biggest difference in terms of making sure that we're successful with it. So I thought it was a great experience, and I came away feeling like Congress gets it, and I think that they're going to figure it out. Now, who knows what's being said in the news and all this stuff, but if you actually get on the ground with the congressional staffers and you talk to them, I think that they really get it, and they're trying to figure out the right path forward. 

Tom  26:18  
Yeah, and that's exactly why I wanted to ask the question, right? Because what's reported in the news is not necessarily the way that these conversations play out. The other thing that I think we need to give people some latitude on is that it is incredibly difficult to figure out what the right thing to do is at their level, with the velocity at which this thing is moving in a direction that you can't even anticipate, right? Even for me, and I'm sure you feel the same way at times, it's the number of new companies, new models. You can't get through a week without feeling like there's been some kind of major upping of the game. And then to try to think about how you put a framework around this to ensure that we get the benefits, and that they're well distributed across different types of people in our society, is incredibly challenging. I think sometimes we forget how difficult it is to sit at that level and try to ingest this at the speed it's moving, right? This makes the internet look like a snail's pace, the way this has been moving in the last two years. So next question: I'm a 21-year-old taking advantage of your office hours, assuming you still do in-person office hours. And I'm like, I'm worried about my career. I came here because I wanted to be in whatever field it is. The job that I thought I was going to have, is it going to be there? Is it going to be different? For students, this is one of the top two or three things on their minds, right? They have both healthy concerns and ambitions around this, but a lot of the ones we've interacted with at our school are like, what does this mean for the career that I'm supposed to go out and have? How do you talk to your students about that?

Jules  27:53  
I taught software engineering the age of AI, and we did that poll where everybody, like, decided, yes, within 10 years, humans aren't gonna write 5% of code. And then everybody in there who thought, My life and my job is gonna be to write code as a computer scientist. So I had a lot of people that came up to me afterwards and said, What does this mean? Did I just waste a bunch of money getting a master's degree in computer science? And my answer is no, there's going to be, I believe, particularly in computer science, an explosion in demand for those skills, because at the end of the day, code was something necessary that we had to do in order to go and build and control computation, to build systems that helped people to control computation. But going and writing code is not necessarily going to be necessary, but having the skills and understanding, how do you architect these things, algorithms, data structures, all of the complexities that come in with trying to integrate systems together, all those problems are still going to be there. They're just going to be scaled up. And so your job is going to be less of these probably mundane details about code, and it's going to be more on thinking about these fun, hard problems, and there's going to be a need for a lot of people in these roles. So I think it's going to create demand. I think what we're going to see is that right now, a lot of people are worried, because it's like there are certain job roles that either what they do on a daily basis is clearly going to change, or the job role is not going to be probably necessary, and so there's going to be a shift in labor and where labor is needed. And I think the challenge right now is we're still in the innovation period, where we don't and can't easily predict this is where there's going to be this explosion in growth that we're going to need people. 
I can say one place where I think there's going to be an explosion in growth, and it's where there always has been: computer science. But like all these other areas, it's hard to predict. If I could predict it, I would go invest in those areas and make the money, but it's hard to do that while we're in the process of creating this growth. All we can see are the things we know will be impacted, and I think that's what creates a lot of the fear. But I tell students: look, the most important things right now are your creativity and critical thinking. Go focus on those, and you'll do well regardless of what major you're in. Because if you're smart with these tools, whatever major you pick, even if it isn't computer science, you might be an innovator in computing, because you can use generative AI to take all these creative ideas and translate them into computation in new ways that change things.

Tom  30:30  
Yeah, and if they do that, they're going to be ahead of the curve, right, of the mass of employees. So they'll be the more attractive candidate for a role, or they'll be the one who performs best or most creatively, which puts them in the best light for the promotion. And so it's all about helping them stay ahead of the curve. When you're in a technology field, as we've been, technology is always creating the next curve, right? So we're always thinking about when the right time is for us to jump on the next curve. Career-wise, that's always led to a lot of opportunities for me by being somewhere in that bleeding-edge, leading-edge, early-adopter phase, and over time having enough data points to say that sometimes you want to be a fast follower. You don't want to be on the bleeding edge, but you don't want to be late to the game either, because we've seen what happens to companies that miss the boat, and sometimes they don't survive. All right, last question, just for fun. And I love this one, because it is a fast-changing world. In the last 60 days, what's the coolest or weirdest thing you've seen someone use one of these new tools for?

Jules  31:38  
I think the weirdest one that's top of mind for me is that I'm getting ready to release a Coursera class on generative AI security and privacy, and I have a whole module where I built deepfakes of myself, then used ChatGPT to build out the whole script and generate all the video of myself. So I've just spent about a week staring at a fake version of myself. And it's funny, because I've been re-watching the Battlestar Galactica series, the remake, where they have Cylons that look like human beings, and they can't tell the difference between humans and Cylons, and there's all this worry about it, and they're trying to build a detector for it. And I was like, this sounds exactly like what's happening right now with deepfakes. So I went through all these videos, and visually, I can't tell that's not me. The voice, I'm like, that's not quite me. But boy, it's a wild world. So I just had this weird experience of watching Battlestar Galactica all the way through, with all this worry about Cylons and robots that look like humans, and then I watched myself be turned into a robot. I think that was the wildest experience for me, personally.

Tom  32:43  
I think it's amazing. I made my wife watch Reid Hoffman interviewing Reid Hoffman, just to show her where these things are, because she's not a technology person, she's an interior designer, and sometimes she thinks I'm just embellishing. I'm like, just watch this, and it'll blow your mind. And clearly it was one of those things that's fantastic. I'll have to figure out how to get access to that, because I would love to see the deepfake Professor Jules White. Jules, thank you so much for joining us on the podcast. As always, it's so interesting to hear your perspectives on this, and we'll continue to follow everything you're doing. Thank you for joining us today.

Jules  33:17  
Yeah, thank you so much for having me.