Digital Squared

Decoding Data and the Future of Human Knowledge

November 21, 2023 Tom Andriola Season 2 Episode 4

On this episode, Tom talks with Dr. Christine Borgman, a Distinguished Research Professor and the Presidential Chair in Information Studies, Emerita, at the UCLA School of Education and Information Studies. Borgman has over 250 publications spanning information studies, computer science, and communication, including three award-winning books from MIT Press. Together they discuss the evolution of data and technology, the need for today's students to understand "ground truth," and the importance of open data and open science.


00:00
Welcome to Digital Squared, a podcast that explores the implications of living in an increasingly digital world. We're on a mission to inspire our listeners to use technology and data for good. Your host, Tom Andriola, is the Vice Chancellor for Information Technology and Data and Chief Digital Officer at the University of California, Irvine. Join us as Tom and fellow leaders discuss the technological, cultural and societal trends that are shaping our world.

00:30
On this episode of Digital Squared, I talk with Dr. Christine Borgman, a Distinguished Research Professor and Presidential Chair in Information Studies, Emerita, at the University of California, Los Angeles School of Education and Information Studies. Her expertise in computer science, data governance and policy, and information literacy brings an insightful perspective to the relevant issues around data and the popular topic of AI in society today. Together, we discuss the evolution of data and technology, the need for today's students to understand ground truth, and the importance of open data and open science. Christine has been both a collaborator and mentor to me over the years, and I hope you enjoy her perspective as much as I have. Professor Borgman, thank you for joining us on the podcast today.

 01:19
Tom, it's great to see you again in your new role. And this is a very interesting role to take on, podcasting as well.

01:24
We try to keep it fresh around here. I'm really looking forward to it, and again, to you joining us. You're bringing us a voice that maybe we haven't had, from multiple perspectives that I really want to bring out, Christine, as we talk today. First of all, you have been in this space of digital technology and data for a very long time, before it was big, when it was a thing that happened down in the basement. And you've seen so much change, and so much acceleration of that change, over those decades. Can you take us a little bit through how the journey looked from the vantage point you've had on the train?

02:06
Sure, and I made a few talking points to come back to, but I want to make sure we take a really long view, and start by pointing out that Herman Hollerith patented the punch card in 1884.

 02:19
That far back?

 02:21
Yeah, it was used at the 1890 census, so we've got well over 100 years here to think about; digital data is not just these last couple of decades. Let's bear that in mind. One of the points I want to bring out is that we tend to think this data deluge is a new thing. Coming more from the information world, when people say, oh, we're buried in data, I tend to remind them that Francis Bacon complained about too many journals. So these are actually not new problems. There are maybe new bottles, but a lot of these are actually very old problems to think about. Just briefly, for the bio bit I know you need in here: it's easier, of course, to make sense of a career in retrospect than it was going forward, but I started with a math degree from Michigan State. Then I got a master's degree in library science from Pittsburgh, which was mostly in information retrieval. And then I went to the Dallas Public Library, where I led the automation of the first major public library in the country to go online. We wrote it in assembler language on the city's mainframe. That gave me a sense of not only how difficult it was to build things, but a very early look at human-computer interaction issues. If you think about it, libraries were beginning to automate as early as the late 1960s; we had our own internet, really, built across the country and across the world, long before it was public in many ways. And the automation that was happening, particularly with the circulation systems, was where the computers would face the staff; they wouldn't face the public. So we came online in 1978 with public-facing computers on an IBM mainframe, and the computers were actually networked across the city. Every person in the city of Dallas who had a computer on his or her desktop had access to the library. So arguably it was the first networked system as well.
But you could see people would walk up to this thing and have no idea where to start. And that was what led me to get the PhD in communication from Stanford, which was, again, the rare program at the time that was focused on communication technology. Doug Engelbart came to our class, we hung out at Xerox PARC, and what ensued in those early Silicon Valley days was a very exciting time to be thinking about the future.

 05:02
Christine, I had a question: was communications technology back then focused on moving bits around between two points? Or was it focused on, "How does this become something that the human brain can digest and understand?" What was the conversation like in the classroom and around the lab for things like that?

05:20
Good question. Communication in those days was roughly divided: people did mass media, or they did interpersonal communication. So those who fell in the middle and said, let's talk about how people interact with technology, were already strange birds. And that's what led me to hang out with the computer science and AI people; Feigenbaum was there and McCarthy was there. But when you got over to the psych department, that's also where Steven Pinker got his start. And this was the era of Amos Tversky; I took his classes, and Barbara Tversky, his wife, was on my dissertation committee. They were much more concerned about the mental models of how people were thinking about these things. And they would do experiments comparing regular four-function calculators and reverse Polish notation calculators. So it was, what's going on in the brain, and is the brain like a computer? My dissertation was on mental models of information retrieval systems.

06:23
That's fascinating. And as you saw this go from things that were happening in an academic environment in a research lab to something that was on everyone's desk, and then it's come into something that's in everyone's palm, as you look back at that transition into application and everyday use, what fascinates you about the journey that we've gone through as a society?

06:46
Ah, many things. In those early days, you really had to understand how the computer worked from the inside. We learned to address cells in the computer to be able to write that early assembler language. Computers were expensive and people were cheap, so you had to do it that way to learn; there was just no such thing as a black box. And the early retrieval systems were line-by-line Boolean operators, so you had to spend your time thinking about the problem in ways that you could translate for the computer. Now people think about computers much more as black boxes, and especially as we move toward ChatGPT, they're really thinking about black boxes. And that tends to disguise the very hard problems. That's a thread we'll make sure we carry through here: most things that look like computer problems are human problems or social problems; they're policy problems that are masquerading as technology. Certainly when you and I were working together on the University of California Academic Computing and Communications Committee, where we did the policy work, we kept having people bring us things that really were not technical problems. There was a computer in it, so they decided it belonged on our desk, and we would find ways to push it back to educational policy, push it back here, push it back there. So I think that's part of what is happening: we're back to that black box again, and we're not asking as many people to unpack it. A lot of what I do in my own research is really trying to unpack, what problem are we trying to solve here?

08:34
It's interesting you raise that point; the challenge of the black box continues to move to a higher and higher level. You're talking about the black box of understanding the assembly language, and then the next level. In today's society, we're talking about the black box of the algorithm, not understanding how it got to the answer, the data running through all of these neural network paths, and so on. And this is what drives the existential conversation, or the challenge of the existential threat: if we trust in the black box and don't understand everything that's inside it, which is now several levels deep, we lose control of the narrative and the reality. This is what people are now bringing into the conversation today. It is fascinating how those black boxes have just continued to get larger and larger as we become more complex in our ability to develop and deploy technology. We have titled this podcast Digital Squared: life in an increasingly digital world. With all of this, and you know my background, having played with this much more from an application perspective and a product perspective in my years working in the private sector before coming to the university: as you look forward, what are you most hopeful for? And what concerns you the most on the path that we're on?

09:55
Let's talk about the concerns first; I think it's probably easier to frame it that way. I'm concerned about, probably, that black box issue: society's ability to capture and preserve and utilize human knowledge over decades and centuries. It's much easier to create those bits than it is to keep them in any way that you can go back and make sense of them later. And that's why I chose to live on more of the library and archives side of the world, rather than the straight computer science side, and it's proven to be more and more valuable. Because we think about not just data or information; we try, again, to unpack that and say, what are data, what is information, always pushing back with this notion that there's no such thing as "the data." That tends to be a black box. So you've got to keep on pulling threads and pulling threads, and unraveling as we go. It raises all these different social and policy questions about who controls the data, who has access, who preserves it, to what end. Because the people who have the data, who have the information and are able to exploit it, are the ones who are definitely going to rise up in the digital world in which we're living.

11:12
Data has become a control point for so many organizations, right? For-profit organizations have understood the power that data can provide to drive their businesses and their organizations forward. You don't have to go back that many years to when not many organizations were thinking that way, but now many organizations understand that data might be their most valuable asset, one that is more controllable than people. People should always be the most valuable asset, right? But data is just a lot more predictable than people, at least most days of the week. What are you most hopeful for as we look forward? Because there's a lot of hope and a lot of promise, and hopefully a lot of good we can bring. What are those hopes and aspirations you have for us?

11:55
Let's dive into my favorite topic, which is open science, and use that as a case rather than a lot of high-level hand-waving. The movement toward open science and open data, which is now several decades along, is certainly very hopeful, in that we're spending billions, trillions, of dollars, euros, yen, pick your favorite currency, on conducting research. And the result of that research is generally some kind of data, whether it's astronomy, biomedical, or environmental. What we would like is for those data to be reusable and combinable. But it took a century to build climate models. Paul Edwards has a brilliant book called A Vast Machine that shows how it took that century to get agreement on things like average daily temperature, before you could get enough consensus on even how to record climate data and attempt to use it down the line. We're spending billions on astronomy projects, billions on biomedical projects, without really sitting down and saying, what's the fundamental unit? And to whom should those data be useful? Again, that's where I'm spending my time. So the good news is, we've had these open data policies, we're moving toward agreements, and we have agreements on things like the FAIR principles: findable, accessible, interoperable, reusable. The hard part is actually executing on those, because the expertise that goes into conducting the science is very different from the expertise that's required to be a data scientist. To get down to saying, what should we be capturing? In what form? What are the units? How are we going to label them? How do we match them up? How do we keep track of the software as we change it down the line? There are examples galore, but take our astronomy data, which is where I spend most of my time nowadays. It's coming off a telescope, and it's very expensive to get those data. But once you've got them, then you're running them through many layers of pipelines.
And it can take months or even years to turn those data into something that you can actually get a paper out of. Then they accumulate over time, and you want to aggregate them. But then you realize that over the 25 years you've been collecting these data, on things like orbits around the black hole at the center of the Milky Way (I'll come back to why that's an interesting problem), you started with speckle data in the mid-90s, and now you have adaptive optics data, and you have changed the instruments over time. We're looking at the same object, but we're looking at it with different instruments, with different software, with different tools. And then I come along and start asking questions like, what's the fundamental unit? Should we look at a photon? Should we look at a night? Should we look at a plate? Should we look at one spectrum? One image? Because everybody comes back to those data later and asks very different questions of them than they were collected for in the first place. My concerns are, what does it take to get those kinds of expertise, and to get the next generation of scientists to start thinking about their data as assets that they need to protect and use and exploit? The last two papers we set aside to talk about go from there up the layers: how should the university be thinking about it? How should the educational system be thinking about it? How should the funding agencies be thinking about it? So it ramifies up many layers.

16:01
But don't we have the challenge that this is always going to be a moving target? If I stay with your example of astronomy, when the next, better telescope gets developed, it's going to change all those definitions that we have, let's say, been using with the last generation of telescope. So is it a constantly moving ball for us, to keep data assets relevant as we move forward?

16:21
Of course, absolutely. It's a constantly moving target. But to make sense of the old data, because you want to compare the old data to the new data, you want to be able to say, is this object that we're picking up the object we saw two years ago, five years ago, 15 years ago? Or is it a new object? You need to be able to make sense of the old data. They've been digitizing 100 years' worth of plates at Harvard because, arguably, that was the first all-sky survey. And now they're coming back and saying, was this object there in 1895? Or is this indeed a new object? Or is this a comet that came back again?

17:03
You did a piece with Philip Bourne in Harvard Data Science Review, entitled "Why It Takes a Village to Manage and Share Data." What's the point that you're really trying to get across to the audience you're trying to reach with that? And is there a broader message for the general public in what you're trying to say?

17:21
I think there are several messages in this, and it certainly crosses over into your interests in biomedicine. It came out of the new National Institutes of Health data sharing plan that became effective in January 2023, which is 20 years after the first one in 2003. Phil and I were part of a National Academies panel that put together a two-day workshop on how the community was going to deal with this new set of guidelines: could they really implement them? Phil, for a bit of background, spent most of his career at UC San Diego, and is now the dean of data science at the University of Virginia; in between, he was the first Chief Data Officer for the National Institutes of Health. He knows the biomedical data issues from every angle possible, and what we're both concerned about is, how do you actually get this done within the university? Is it really the sole problem of the PI to make those data available? The labor to take the data from those biomedical projects and process them, preserve them, release them, have a helpdesk answer questions about them, could be as much work and cost as collecting those data in the first place. Saying it's the PI's problem ignores the need for broader infrastructure. And it's that broader infrastructure that Phil and I both think about very deeply, bringing these pieces together. So our argument in that paper is that sharing data is not just the PI's problem. Within the university, we need to have a Vice Chancellor for Research, a Chief Information Officer, and a university librarian, at minimum, thinking together about these data being assets for the university. The Vice Chancellor for Research has an obligation to funding agencies to preserve these. The library is often the one with the expertise. And it's the CIO who has the technical infrastructure to make these things happen. But there's a lot of interplay, and we're not seeing it happen that much at many universities.
So we think if any place can get this right, we hope it's the University of California.

19:41
Certainly at the scale that we operate, the University of California campuses can get this right. That means one out of every nine dollars falls under these principles that you've espoused.

19:51
We're such a research powerhouse, and the way to attract the next generation of researchers, the next generation of students, is to say: we're the ones who get this right, we're the ones who can show the funding agencies how to do open science, so come play in our fabulous playground.

20:08
And effectively saying, we've got better data assets curated here than anyplace else to advance the science of your field; this is where you want to come. That's the value proposition we're trying to create for the next generation of researchers. Let's jump over to the broader institutional challenges of higher education, a place where you've spent most of your career; I've spent about a decade of my career here now. You did a wonderful article with one of your colleagues at UCLA called "Data Blind: Universities Lag in Capturing and Exploiting Data." I want to talk a little bit about this, because having worked with you on the Academic Senate, and then dropping down into this role at Irvine, where I think about and live inside this challenge every day, and then to see this article from you. Tell us a little bit about the article, how you got the idea that you wanted to write about this, and the research that you did. I think it's a fascinating story, as well as being spot on, by the way.

21:02
Thank you. Where that started: my co-author, Amy Brand, she's not at UCLA, she's the director of the MIT Press. I've known Amy since she was a vice provost for scholarly communication at Harvard. Her PhD is actually in cognitive science from MIT, and in between she's played key roles in things like ORCID and Crossref, so she really understands digital science at a very deep level. We were both at a meeting in DC, run by Julia Lane, who did the science of science work at NSF, looking at more public access to government data. Amy and I were both struck that there's interest in research data, there's interest in government data, but not enough people were thinking about the administrative data that actually runs universities. And it was that conversation that led us to work together. That conversation began in December 2019, just ahead of the pandemic; nobody knew what was coming. By the time we had fleshed it out, the pandemic was in full bloom. So we did the entire study by Zoom, using our networks, and it was a mix: we talked to provosts, CIOs, university librarians, heads of academic analytics, and so on. It turned out to be a natural experiment, because every university in the world was scrambling to put their hands on data. As you were, no doubt, as well: how many people are in the building? How many people need access to this? Who needs library access? How are we going to do things remotely? And nobody could put their hands on what they needed. So they were extremely sensitized to what their problem was; it was a great time to talk to them. We were hoping we would find at least a few cases where people said, we've got it knocked, we've got a five-year plan, we're in stage three, we're going here. And we just didn't find that.
What we found was a lot of very big concerns about siloing. We found solutions in particular areas, and we have lots of case examples in a short paper, but people were mostly really stressed out. To summarize briefly a paper into which we put a couple of years of work: we found that, yes, universities are drowning in data. A big piece of the problem is a lack of infrastructure thinking, similar to the science issues, of tending toward silos. If the VP for finance wants a new system and is going to pay for it, they get complete control of their vertical piece. Academic Personnel gets their piece, HR gets their piece, and the library gets their piece. And there's nobody at the top thinking about what that interoperability layer is. So what happens then is you have what Joy Bonaguro, who was until recently the Chief Data Officer for the State of California, called data emergencies, where you send some poor data scientist out trying to pluck data from here, there, and everywhere, and throw it into Excel spreadsheets, or into Google Docs, or whatever, and make all these very expensive, one-off data products. And if they try to go back and update it a year later, they may get very different numbers, because they can't get their hands on the same data again. So what we were really pushing was to think more about what that layer is. We understand the need for outsourcing, but there's a difference between outsourcing in silos and thinking about outsourcing in terms of layers, and what kind of expertise you need within the university or within the organization. Now, you can find the same thing in government, and you can find it in business; we don't want to just say universities are managed incompetently. Rather, these are very hard problems that sit in the executive suite across major organizations of all kinds.

25:22
I want to pivot to talk about the next generation of individuals who are coming to our universities. You get to work with them, I get to work with them. We talk about them, and others use the term, as digitally native, right? They're the generation that is growing up with technology in their hands and things moving at the speed of light. But to me, technology is just a data generation device, right? It's the data that's really interesting. It's the data that we create value propositions around, whether it's a learning outcome or a diagnosis for a patient. How do you see today's students being shaped by the data explosion? And what types of skills do we need to make sure they leave us with as they enter their next chapter, from your perspective, regardless of what field they might be studying?

26:12
Regardless of what field they're studying, they need to learn to think about information as information and data as data, and they need to open up the black box. It's really critical thinking, in many respects. And again, that goes back a long way. We've had this long discussion around Data X at UCLA, which is a broad cross-cutting layer; rather than doing a data science school, the way that Berkeley chose to, UCLA is doing a cross-cut. Different universities are approaching this in different ways. A number of the faculty were saying, let's start off by giving them this dataset; that dataset is a good one to get started on. And my approach was, no, you're just throwing the black box at them. Give them a pencil and paper, and tell them to walk around the campus and count the number of trees. Because by trying to count the number of trees, they'll realize how hard it is to define what's a tree. When is a tree a shrub? When is a double tree a single tree? When does a new blossom become a tree? At what point does something become a tree? And that will get them to back up and say, wait, this is a way harder problem than I thought it was. The first time, they think, stupid problem, but it's rolling them back to the fundamentals. Whether they're going into English, get them to count the words in Shakespeare, or the number of mentions of violence in Shakespeare, or get them to count the number of cells on a plate. That will move them forward. And again, I have loads of examples, if you want them, from some of the different fields we've been in. I think you and I talked briefly about this before: you get students coming into fields that historically have done a lot of literal field research, up to their rear ends in alligators and mosquitoes, who are now being handed datasets. And when they get the dataset, they don't know what's an artifact or not. They don't know how to judge what's good data and what's bad data.
And that's something we've seen; I've worked in geography and energy and sensor networks and biomedicine and astronomy and so on. What you find is that the people who have really been there, who have spent their nights in the cold up in the chair at the telescope, or have really been in the swamp, or have laid the transect for the seismic network across Peru, know what to trust and what not to trust; they know when there's something fishy. Whereas the ones who just get handed the dataset don't. So there's no substitute for being on the ground, getting your hands dirty, getting your feet muddy.

29:04
Is this data literacy? Is that what you're really talking about here? As I listened to your story, this is this generation's version of: they still have to learn long division before we let them rely completely on the calculator, right? You're really saying this generation's version of that goes all the way back to, what's a tree? That thing that has two branches six inches off the ground, do we count it as one tree or two trees? Do you define that as data literacy? I wouldn't have thought of putting data literacy on that, but as I was listening to you talk, that's data literacy.

29:37
It's data literacy or it's information literacy writ broad. It's also critical thinking.

29:42
It's the foundation that, if you don't have it, you miss a piece of the critical thinking.

29:46
You miss a piece of critical thinking, but you're never going to be a good scientist until you've really grappled with the fundamental problem you're trying to solve. Going back, for example, to the Center for Embedded Networked Sensing, the big NSF Science and Technology Center that UCLA won in 2002, which Deborah Estrin ran, where we were building sensor networks. The point of it was to partner computer scientists with people in different scientific fields, primarily in the environmental sciences, where the scientists wanted new technologies to get richer data, and the computer scientists and engineers wanted real-world problems to work on, and to bring them together. Among the many lessons out of this was that it took about the first three years out of those ten to get beyond dealing with sheer technology problems: battery failures, sensors that would reboot, things you couldn't read or make sense of. They came back from working in rice paddies in Bangladesh, looking for arsenic, and found half the data were bad, because they didn't know how to track them in real time, until they brought more data science and statistics people into it. And we would follow them around, my students Jillian Wallis and Matthew Mayernik and others in that period. They were in quicksand, they got altitude sickness dealing with llamas at the top of Peru; they were really out in the field with these people. But you would find things like, they would be tracking sensors, and they wouldn't keep track of which one had tin foil on it, because they were trying to make it work, and which one didn't; they were concerned about sun and not sun. And they were picking up things like, this one's not working because a cow stomped on it in the farm field, or somebody ripped off the metal up in a village in Peru. And interpreting the data requires that.
So that was part of what we were contributing: helping them pay attention to those things, and to think about not just, what's the signal, what's the number, but how do you interpret that signal, and where did it come from?

32:01
That's fascinating. It's information science and data's version of going back to the basic sciences. Yeah, yeah. All right, we're going to finish up with one question here, a question that I love to ask. The field that you're in is a field dominated by men. I'm a huge advocate for diversity, equity, and inclusion; in my industry, we've done a horrible job on this topic in the years that I've been a part of it, and we're always trying to do better. And we espouse here at the university to always do better on this. But can you talk about what those experiences have done to you, and for you, as you have traversed your career?

32:43
Just briefly, because of course, we could do a whole podcast on that.

32:43
I'm sure we could do a whole podcast on that, without a doubt. 

32:46
Having been a female math major in the early 1970s, I was often the only woman in the class, and I never had a single math instructor who was female. Loads of mansplaining, which we finally got a word to put on: assuming that the woman in the room was not getting it, and somebody else had to explain it to her. It took a while to realize that was really what was going on. I was also struck that when I finished the degree, I was offered two career options. One was to be a high school math teacher, and the other was to be a software programmer. There was no encouragement to go on to graduate school in math, and no sense of anything else you could do with a math degree. Nowadays, we realize a math degree can take you almost anywhere; it's just a great background for things. But there certainly was no encouragement then; it was, fine, go be a teacher, you're a woman, that's what women do. And it was my mother, who was a university librarian, who hated computers, was scared to death of them, but said, libraries really need people who understand computers. That's what got me to the master's degree in library science. Of course, they had no idea what to do with somebody with a math degree either, but that's how I ended up there. And it turned out to be a highly unlikely career move, but it was more acceptable to get a library science degree as a woman in the 70s. Then I landed in Dallas with that. Because I did like computer science courses, they let me take data structures rather than the advanced cataloging course, so it was a very strange library science degree. When I landed in Dallas, it was mostly ex-military running computing, and you can think about a 24-year-old, fresh out of a master's degree, walking into the ACM meeting in Dallas as a librarian. I would introduce myself: I'm a systems analyst with the Dallas Public Library. And they're like, what does the library need a computer for?
And of course, 20 years later it was the opposite question: now we have computers, why do we need libraries? Both of which turned out to be teaching opportunities. But aside from the mansplaining, I tried to use it as much as I could to turn it around. It's always been hard to get my computer science colleagues to recognize the very deep expertise about technology that exists in our field. And when I finally was made a fellow of the ACM a few years ago, it was like light bulbs: oh, maybe she does know something. But it took a very long time even to get those kinds of credits. I think I also want to say that I've had the opportunity to work with some absolutely brilliant women professionals over the years who have been icons, starting with Lillian Bradshaw, who was the head of Dallas Public Library. She's very famous in library land; she was the head of the American Library Association. I think her husband was head of the fire department. And the two of them were the ones who could go to the city council and say, we're going to make the library the number one priority for computing, over police and fire. And that's why we got the biggest chunk of time on the mainframe: because of Lillian Bradshaw, and because of Dallas being the pioneer it was in the 1970s. Without Lillian Bradshaw that never would have happened. And similarly, Deborah Estrin, computer scientist, from a multi-generation UCLA family, a history that you probably know, who ran the Center for Embedded Networked Sensing. That was the first Science and Technology Center UCLA ever won. It's one of the few, and I think it was the first the NSF ever awarded in computer science and engineering. Deborah, being a great entrepreneur, got the chair of the State Senate Science Committee on the board for it. She got Vint Cerf, a very well-known UCLA alum and Turing Award winner, to show up and pitch for us. 
In fact, the backing that we had to do it, and the pioneering work, was absolutely amazing. And she was such a good manager that she attracted fabulous people. I think we got $100 million over the course of 10 years for that center, and she was really good at moving things along. And then in the last few years, I've gotten to spend time with Alyssa Goodman, a Harvard astronomer who's a longtime collaborator. That's how I managed to be a visiting scholar at the Center for Astrophysics at the Harvard Smithsonian. I was a nicely weird bird, at home there, working with Alyssa and working on her teams. And in the last few years, I've been working with Andrea Ghez's group here at UCLA. We were working with them for four or five years before she got the Nobel Prize. And so it's been just an absolutely amazing experience to watch how her life has changed, and how she's tried to keep her science going, and her team going, all the way through as the spotlight on her has gotten even bigger. These have all been really important partnerships for me, I would say.

38:03
You've taken on undergraduate and graduate students over the years. How have those relationships and experiences shaped how you've mentored the next generation?

38:13
Good question. I think because I've been very aware of male-female politics, and the various other racial and diversity politics, and, you know, women are at least as sensitive to those things as men are, I've tried to make sure everybody gets a fair shake, make sure the women don't get mansplained to any more than necessary, and call people out if you have to along the way. And I've even found, after a while, some of my students would call it out when they realized I was being mansplained to by other collaborators on projects, saying, why are you putting up with it? Because after a while, it becomes so invisible you don't always realize it's happening. But I do remember times with some of my students, saying, wait, the balance of power is just not right here, and trying to get them to call it out. And also learning to critique each other. I've always done collaborative papers, collaborative writing, and talks, and expected everybody to bring drafts for everybody to critique, including me, and helped the students learn to critique my work in the same respectful, critical tone as theirs. And actually, I miss that right now, because every talk I gave up until pandemic time, my students would hear a dry run first, and they would be straight and honest with me; maybe we can recreate that by Zoom. It's just an openness and a collaboration that you need to move the next generation along.

39:51
Christine, I want to thank you so much for joining us for a fascinating conversation, delving into the black box and the boots-on-the-ground reality that we cannot lose sight of as we march forward. Thank you so much for being with us.

40:05
Enjoyed the conversation very much as always, Tom. Thank you.