What it Means to be Human in the Age of AI

Author: Beverly Wright


In this podcast episode, Dr. Beverly Wright, Vice President of Data Science & AI at Wavicle Data Solutions, and Tamara Lambert, CobiCure MedTech Innovation Fellow at the Ann & Robert H. Lurie Children’s Hospital of Chicago, explore the profound question of what it means to be human as AI and technology continue to reshape our world. From chip-to-chip communication and neural augmentation to the ethical dilemmas of transhumanism, this discussion challenges us to consider the delicate balance between innovation and the essence of humanity.

  

Speaker details:

   

  • Dr. Beverly Wright, Vice President of Data Science & AI at Wavicle Data Solutions    
  • Tamara Lambert, CobiCure MedTech Innovation Fellow at the Ann & Robert H. Lurie Children’s Hospital of Chicago 

 

Watch the full podcast here or keep scrolling to read a transcript of the discussion between Beverly and Tamara:

 

 

Beverly: Hello. I’m Dr. Beverly Wright. Welcome to TAG Data Talk. Today, we have Tamara Lambert, PhD, and we’re talking about being human in the AI era, a very interesting topic.  

 

So, tell us, why are you so cool? Tell us a little bit about your background. I know you have five degrees.  

 

Tamara: Well, I actually have four. 

 

Beverly: Oh wow, just four? 

 

Tamara: Yeah. So, my background is in biological engineering. I got a bachelor’s degree in biological engineering from Cornell University, a master’s degree in public health from Emory, and another master’s in global environmental health. Then, I got another one in bioengineering from UIUC. And then, finally, I got a PhD in biomedical engineering from Georgia Tech and Emory.  

 

Beverly: Georgia Tech and Emory are two places I serve on boards. I love that. So where are you today?  

 

Tamara: So, I am currently in Chicago. I’m doing my postdoctoral fellowship.  

 

Beverly: Okay, great. We’re talking about being human in the AI era, and you obviously have a strong science background. This is a really interesting topic, given that everybody’s talking about AI right now and how it’s going to make our lives so much better, do all these things for us, and automate everything. But when we talk about being human, is there a path of what it used to mean to be human, and has that changed now that we have AI or do we look at it the same way?  

 

Tamara: I think that’s a good question. I think it depends on what perspective you’re looking at it from, whether you’re looking at it from maybe an evolutionary versus a philosophical versus a religious perspective. Back then, or maybe I would say even today, we think about it more as humans typically have emotions, have the ability to reason, maybe can develop more complex structures in societies than perhaps animal species. But I think that definition is definitely going to change as AI and editing technologies come into play.  

 

Beverly: Okay, so let me make sure I’m understanding that. First of all, there are dimensions of what it means to be human. You mentioned a couple, like biological, psychological, and evolutionary, and what does it even mean emotionally. Secondly, do you think that with AI coming into the forefront and being a part of our lives on a regular basis, it could even change what it means to be human? 

 

Tamara: Yeah, I think that with new AI technologies, one of the things that I’ve been thinking about is Neuralink. For example, if you have a technology that you place in the brain, will you still have brain privacy? So, for example, would there be a possibility of one chip in the brain signaling to another chip? Therefore, would you not even be able to conceal your thoughts anymore? I think that once we start really integrating these technologies, and we have that ability to perhaps communicate chip to chip, will you be able to have that thought privacy that we currently have? And that’s one perspective I’m looking at it from. 

 

Beverly: Fascinating. Give us an example. So, someone that has some kind of paralysis and maybe their body actually works, but the neurons are not firing and not telling the leg to move, as an example. Would that be chip-to-chip technology where those chips would talk to each other?  

 

Tamara: I was thinking about it more like, for example, someone who has already experienced injury, but let’s say if it’s someone who wants to improve upon where they are already. So, for example, if you’re a perfectly healthy human being, but you want to augment yourself, could you place a chip into your body that could potentially speed up that processing in your brain so you can perhaps think faster, do things faster?  

 

But then, at the same time, would you have that trade-off of, well, now you have this chip that is potentially reading your thoughts and is connected to the cloud? So then, do you really have that privacy? And, of course, chips and various devices can signal to each other. So then, would it be possible if one person has a chip, another person has a chip? Could you potentially be sharing your thoughts via those chips?  

 

Beverly: So, if I wanted to get to be a better pickleball player, and I never want to miss a line shot, but if I could just see it coming a little bit faster, then what would that look like? I would have a chip-to-chip thought on being able to see things faster and react faster.  

 

Tamara: I mean, perhaps you could do that. You know, you could use that to augment your abilities. But then at the same time, is there a trade-off to that? And I think that’s the angle I’m looking at it from. Yeah, we can use these technologies to perhaps enhance ourselves above what we can currently do. But then, if we do that, are we trading off something else that we also value?  

 

Beverly: I see. To your knowledge, has there been experimentation where these things have been effective but still been able to keep them private?  

 

Tamara: I don’t know if it has gone that far because, personally, I haven’t seen it. Although I’m reading in that space, that was not my specific research area. But these are concerns that are being brought up. So, it’s important that as we develop these technologies, we think about what could potentially happen in the future and what we should mitigate for while we’re developing them.  

 

Beverly: Yeah. And so, there’s a potential of losing your thoughts because it would be really convenient if somebody could just read my mind sometimes, right? So, you lose the privacy of your thoughts. Another potential is that we might become more reliant on the chip or the machine to help us make decisions or do certain things. Once we become reliant, we might give up or not want to worry about doing it ourselves. Is that a possibility, too?  

 

Tamara: Yeah, I think so. Going back to what you said before, things like Neuralink can be helpful if someone, for example, has paralysis, and that’s a good use case for that. But then, if we want to go beyond that and say, “hey, what if I want to integrate this technology just to make my life easier?” There are a couple of things that we have to think about. And if someone does not get this augmentation, can they really compete with a human that does have this augmentation?  

 

Beverly: I see. So, humans are augmented by machines. Are there other scenarios where the machines would get augmentation of human material, like organic material of some sort?  

 

Tamara: I think that’s a good question. I’ve heard of energy harvesting. As humans, we are generating energy via metabolism. So, if we have energy coming from humans, could humans be used to power machines? Perhaps machines can benefit from humans in that way. And of course, we’re also producing a lot of data. Could all the data we produce be used to build better machine learning models?  

 

Beverly: Is it possible that over time, if we exaggerate and keep going and think about several years out, we start getting implementation of this and augmentation of that, and maybe I want to work on this? And next thing you know, I’m like a bionic woman. Does that change? Does that change our mind as well? So far, we’ve been talking about the physical side of it. But what about our thinking and cognition?  

 

Tamara: I think that’s a good question. And I wouldn’t say I’m an expert on mind-body connections, but there is some connection there. So, I don’t know if you’ve seen this, but there was a commercial that came out that basically showed a head being removed from a human because the body is sick but the head is fine, and potentially putting that head on another body, which could be a human or even a robotic body.  

 

But then the question is, given that we have our memories not just stored in our minds but also in our bodies as well, would putting someone’s head just on another body suffice? Would you have those memories to be able to function normally? And so, I think that it’s kind of crazy because we’re going into this area where you could essentially become – have you heard of the term transhumanism?  

 

Beverly: Tell me more. 

 

Tamara: So basically, it’s the idea that we can become better than humans via these technologies. So, are we getting into the area where we can evade death and achieve some level of immortality? And so that’s really kind of sci-fi. As far as I know, I don’t think we’re there yet, but many people are looking into ways to improve longevity. 

 

One of the ways that was proposed wasn’t received well, but basically, there was a scientist-thinker who presented that idea in a video. And so, the question is, would that even work, given the mind-body connection? I would say, at this point, I don’t think so. But then again, science has pushed the boundaries before. So perhaps with enough work, we could get there.  

 

Beverly: Well, we never thought we’d be flying, right? I mean, that was like a dream, but now we’re flying all the time. So, there are a couple of concepts that I feel like, as humans, we tend to strive for, and we can’t reach them, but maybe with machines, we could. 

 

So, as an example, you talked about perfection and youth. Those are two qualities and attributes that we always want to go after and, you know, being stronger and better and smarter. So, let’s take perfection as a scenario here for a second. 

 

If you’re looking at artificial intelligence and you’re saying, okay, I’m using AI to scan these resumes, something really simple, and I’m going to have it emulate what a human does. Except there’s a difference: with humans, we expect there to be errors, but with machines, we kind of require perfection. We don’t want any kind of bias in reviewing resumes. We want them to be perfect, but also human-like, except more like a human 2.0, the new and improved version of what we would want humans to be, and it’s got to be just right. So, is our striving for perfection and excellence drawing us to this? Doesn’t it seem like you should just leave it alone?  

 

Tamara: I feel like that’s a part of it, but I think another part of it has to do with longevity. You know, the human spirit doesn’t want to die, right? And so, I think that many of the pushes, like when I read these articles about people who are innovating in this space, seem to be like, for example, they might miss their loved ones. 

 

So, they would invent something that, for example, you use this natural language processor and these different forms of AI to recapitulate that person almost, even though that person’s already gone. So, you know, using their pictures, videos, voice, 

 

Beverly: And even their smell. 

 

Tamara: Yeah. So, I think that, in a way, we want to basically live almost forever. And so, I feel like that push for eternal life is what’s driving some of these innovations.  

 

Beverly: Interesting. How much do you think biology is tied into it? You know, our human flesh, like being able to have skin and flesh and a heartbeat and blood, like, does that really even matter?  

 

Tamara: From what I’m seeing, people are looking into various ways, whether it’s using flesh, as you mentioned, or not using flesh at all, like using a robotic body or uploading consciousness to the cloud. People are even looking into that.  

 

Beverly: Wow. 

 

Tamara: Honestly, I feel like it’s not going to be the same if you don’t have flesh. Flesh is very much a part of our human experience.  

 

Beverly: Really?  

 

Tamara: Yes. 

 

Beverly: Okay. So, in ways that we don’t even know yet, there may be things about having flesh that we haven’t recognized, and maybe they don’t make mathematical sense. Like, it shouldn’t matter, and perhaps it does. But we only know that once we don’t have it.  

 

Tamara: Right. 

 

Beverly: Interesting. So, as far as what it means to be human, how do we see a shift post-AI implementation? I’m sure the chip in the hand is going to be very soon. I’m sure that tracking devices will be implemented very soon. I’m sure that all this is going to happen in the near future. How will we define what it means to be human post-AI integration?  

 

Tamara: I think that is a good question. And I can’t say I have a great answer. However, what I would say is this: I think that, as you mentioned, there’s going to be a conflict between human 1.0, who we are right now, and human 2.0, that modified version created using gene editing and AI. Those boundaries are going to be pushed. There was some legislation I was reading about in Chile that talked about how they are trying to protect those who have either mutations or alterations to their genome. 

 

This is something that’s being thought about. So, how exactly will we define being human? I don’t know, because I think one of the things that makes us human is our genetic makeup, right? But if you have someone who has integrated a synthetic genome into their genetics, are they still human?  

 

Beverly: And is that going to pass on in some way? Over time, will that somehow get into the babies?  

 

Tamara: Yeah. I would say, especially if you are going to do that to the germ cells or the sperm and the egg, then that’s definitely going to be considered. Like, what are we producing? So, if we do those things, would we still be human? I don’t know. 

 

Beverly: Wow. These are some pretty interesting questions. 

 

So, we don’t know how it’s going to change, but we feel confident that it’s probably going to change because we’re going to start tampering. Do you think there’s a fear that instead of us being in charge of the little machines throughout our body, the little machines in our body will be in charge of us?  

 

Tamara: That is an excellent question. Honestly, I believe that could be possible. Especially if we are making these machines to the point where they have better processing than us. Maybe you can say they’re even smarter than us. Would it be instead of us using AI, AI is using us? So, I don’t know if you’ve heard of Ray Kurzweil, but he basically talked about how he expects that nanobots will be in all of our bodies by 2030. 

 

I don’t know about the timeline or anything like that, but I do think that’s an interesting question. If we do have nanobots in everyone’s body at some point in the future, would they have control over us? Because, after all, they can pick up any information. Basically, they would probably know more about you than you do. And so, at that point, if something knows more about you than you do, is it in control?  

 

Beverly: Yeah. Is there anything I can do that AI can do better? Are there things that AI really can’t do that humans can do? What do you think?  

 

Tamara: At this point, I definitely think that humans are the masters of AI, right? Because what we have is simple AI, narrow AI. It’s not super-intelligent at this point. However, given that we are people who love to innovate, we’re going to continue to innovate. Is there a point where it will outperform us? It’s a very real possibility that that’s going to happen, I don’t know, maybe long after we’re gone. 

 

Beverly: Like 500 years.  

 

Tamara: Yeah. We don’t know. But I think if we continue to innovate, at some point, we have to get to that point where it’s going to be better.  

 

Beverly: Interesting, indeed. What do you think the future holds for us as far as being human and keeping our humanity? We’ve got to strive to keep some level of humanity. Love, caring, and certain emotions are among the only things we hold really precious and dear that are a little bit unique and maybe desired.  

 

Machines can do things we tell them to do, but they don’t have the desire to do things like that. How are we going to hold those things? How are we going to build our thresholds? Will it require legal intervention, policies, and regulation? Or are we going to be able to control ourselves?  

 

Tamara: I don’t ever think that humans control themselves per se when it comes to innovation. Actually, I think we probably should have come up with policies before releasing certain things, and we didn’t. 

 

If you combine humans with machines, does that human combined with a machine have the same ability to love? Maybe not, or maybe that person who is integrated with AI still does. But I would say that it’s up to each person to choose, because society is always going to change, but we can’t force any individual person.  

 

But I think you always have to have one of those personal boundaries of how far you want this to go. Then, the other thing you have to do, hopefully, is discuss as a society how far we want this to go. And what are the boundaries that we’re going to put around human augmentation?  

 

Beverly: Right. A priority. Don’t wait until after. 

 

Tamara: Yeah.  

 

Beverly: What final piece of advice would you give to people when they’re looking at the future of how we define ourselves as human? Would you just recommend that they’re cautious, and be aware of the risks? Or is there some way that you want them to learn more? Or what would you recommend?  

 

Tamara: First, people must be aware that this is happening. And I think many times we just kind of go with the flow and in hindsight, think, “oh, well, maybe we should have thought about that.” 

 

So, I think actively thinking about and talking about these things is very important. Once we really get those conversations going, then we can start deciding what boundaries and limitations we want to have. And although I did value my education, one thing I would like to have seen more of, beyond the few components we do have, is discussion of ethics in the scientific community.  

 

Many times, we focus on development, and we have a lot of great advances, but we never think about 10 or 20 years down the line, or how we can really put some guardrails in place so we can protect people and do things in an ethical manner. 

 

Beverly: Excellent. So, focus on the ethics, think about what you’re doing, be deliberate, and learn more about it so that you’re not blindsided. And don’t just go with the flow; really consider what you’re stepping into. Excellent. Thank you so much to Tamara Lambert for talking to us about being human in the AI era. 

 

As we stand on the brink of transformative technological advances, there needs to be deliberate thought about and ethical consideration of how we use AI. By being informed, setting boundaries, and prioritizing ethics, we can ensure that innovation serves us without compromising the values that make us uniquely human. 

 

Explore the full catalog of TAG Data Talk conversations here: TAG Data Talk with Dr. Beverly Wright – TAG Online.