Relationships at Work - The Guide to Building Workplace Connections and Avoiding Leadership Blind Spots.

How AI will Impact the Employee Experience w/ Alex Schwartz and Nate Thompson

Russel Lolacher - leadership and workplace relationship advocate | Episode 158

In this episode of Relationships at Work, Russel chats with speakers and workforce consultants Alex Schwartz and Nate Thompson on the impact AI will have on the employee experience.

They share their thoughts, stories and experiences with...

  • AI's role and adoption in work environments.
  • AI as a tool, not a threat.
  • The need for continuous adaptation and learning.
  • Cultural readiness and leadership mindset.
  • Early adoption and experimentation.
  • AI and diversity, ethics and privacy concerns.
  • Defining human and AI work roles.

And connect with me for more great content!

Welcome back to Relationships At Work podcast – the leadership mindset guide to creating a workplace we love.  I’m your host Russel Lolacher. 

I’m a communications and leadership nerd with a couple of decades of experience and a heap of curiosity on how we can make the workplace better. If you’re a leader trying to understand and improve your impact on work culture and the employee experience, you’re in the right place. 

A place we constantly hear about is the future of work. Not so much the flying cars side of things, but more the AI of it all. What is it? And I'm not sure we're even getting that right most of the time.

What will it do for us?
How will it affect my job? Will I have a job?
I went on Answer the Public to see what people are searching for… and most of the searches started with "Will AI take over…" my job, the world.

So there’s a lot of misunderstanding. A lot of fear. And that’s something we absolutely have to have a conversation around. 

What I’ve heard most is that AI won’t take your job. Those people who know how to use AI will take your job. 

So what do we do, where do we start and what are we even talking about anyway?

Today my guests will share a lot of this, and a few things that probably aren’t as obvious to how AI will impact the workplace and the employee experience. So I’m pretty happy they’re here.

Hold on. 

Russel Lolacher: And on the show today, we have Alex Schwartz and Nate Thompson, and here's why they are awesome. Together, they are keynote speakers and founders of the Disrupted Workforce, a consultancy and learning platform that helps leaders practically build the workforce of tomorrow, today. They are the faces behind the Disrupted Workforce podcast, which furthers their focus and the message of the need for us to have a future of work mindset.

Both bring decades of transformational leadership experience previously working with organizations like Comcast, Nestle, Marriott, and more, more, more. And they're here. And I heard the word mindset in there, which is a big part of this show. Thank you for being here, gentlemen.

Alex Schwartz: Thank you so much.

Nate Thompson: Thanks for having us!

Alex Schwartz: Russel, it's a real pleasure. We're so excited. We love your work and can't wait to dive in.

Russel Lolacher: All right, gentlemen. Well, I love the panel. I love having the two of you, because then we get to... wow. And then my voice cracks. I'm going through puberty right here on the podcast. So the first question I have out of the gate, because I love having the different perspectives, is: what's your best, Alex and Nate, or worst, Alex and Nate, employee experience?

Alex Schwartz: Alright, I'm gonna jump in and go first, and I am gonna go with my worst, Russel. So, I was an early-20-something trying to make my way in New York in the entertainment industry. And I got hired for a job bigger than my age or experience by a very famous producer who has only become more famous since: movies, television, directing.

I am not going to name their name. And it was a very stark lesson in how things look on the outside versus how they operate on the inside. So he had promised me that they were transitioning offices and that we would only be working out of his and his partner's apartment for a short period of time.

That was not the case. So my entire tenure with this very interesting character was spent working out of his daughter's bedroom while she was at school, sitting on a two-foot wooden stool, talking to the who's who of Hollywood. And on a regular basis, Russel, I would come into the office, and he would say, these are the lies that you need to maintain today for me to keep this ship afloat.

And these were related to his personal life, these were related to his professional life, and it was a very difficult thing to reconcile in my early 20s. Because on one hand, from the outside, everybody thought, oh my God, you've got this amazing job and you're working for so-and-so and you're interacting with everybody and you're moving up so fast. And on the inside it was completely broken, and it was like living through a really strange Stanley Kubrick movie.

And what I realized, the big lesson for me, is that my integrity was worth more than getting ahead.

Nate Thompson: Yeah.

Alex Schwartz: And that some businesses that look highly functional are absolutely not. And that having a leader, not only with integrity, but with good mental hygiene was absolutely paramount for the rest of my career.

Nate Thompson: Mmm.

Russel Lolacher: The idea that the end justifies the means is brutal. Because from hearing that, how did that impact you, mental-health-wise, for the next little bit? Because that's certainly something I've heard from others: they carry that with them weeks, months, years, decades after those kinds of events.

Alex Schwartz: I don't know that I would go so far as to call it trauma, but it approached trauma on a daily basis: being out of alignment with my own value of being truthful, and having somebody that was just vitriolic in how they would respond to things. One minute they were calm, the next minute they were explosive.

Ultimately this person fired me for something I thought didn't make any sense at all. So, add insult to injury, of course. But that was a gift, because when I got out of it... I think it's like being in a dysfunctional relationship of any kind, right? Whether it's a dating relationship or a work relationship, it takes a little while to get out of it and say, oh yeah.

All that weird stuff I felt, that wasn't me.

Nate Thompson: Mmm.

Alex Schwartz: Yeah, I think the term everybody uses nowadays is gaslighting. I was definitely in an early-2000s version of gaslighting, for sure.

Russel Lolacher: And it's one of those things you don't realize is impactful until you're out of it. Like, how many of us have been in relationships that were toxic or didn't work for us, and the minute it was over, thought: oh, that stress, that's where that was coming from. And we didn't know till it was over. In the workplace it's exactly the same. So I love that you brought up that it's like a relationship. Hence the name of the damn podcast. Now I have a question. Nate, you're not off the hook. You got any stories, good or bad, you can share?

Nate Thompson: I do, and I'm not going to give you just one story. I'm going to give you a two-fer. So, the hardest was going through an acquisition. It was the hardest thing I've ever been through in my career, and I was leading through it, not only for my team, but also for the entire culture integration. This was the merger between two large global asset management firms. But here's what made it so hard: imagine all of the beautiful relationships and the culture and the connectivity, and that we grew up together and our kids know each other. All of that just gets torn apart. And so it's really hard. No matter what you do as a leader, you're watching people walk out in tears. You're watching people recognize this is the end of an era, and it hurts, and there's a lot of pain. So that experience was traumatic for sure, not only for me, but for everyone. And it's the beginning of the end. But what made it beautiful?

Well, the, the recognition that, hey, this is going to be one of the hardest things I've ever gone through. What if we do everything we can to deeply care about people? And so we created this process where we'd walk the halls, and we would just go spend time with people. And we called it, Listen - Inform - Support. So we would go Listen by being physically present. When a lot of people just disappeared, we were physically in front of people, every day, talking to them. And then Inform, we would do a live cast. Hey, here's where we're at in the process. This is what's going on. This is what's coming next. We want to hear your questions.

Send them to us. And then Support was, we started bringing in all kinds of support, including helping people transition, including opportunities for people afterwards, making connections, networking and that sort of thing. So there were tears of pain for sure, but there were also tears from people going, the way that you guys showed up during this really hard time meant the world to me.

Thank you for helping me take the next step, and thank you for helping me bridge to my new future. So I just felt like that, for me, was profound. It was traumatic for sure. I was emotionally exhausted and kind of needed some decompression time after that whole thing went down, but it was a defining moment for me as a leader to do the right thing.

Russel Lolacher: Yeah, it's so funny that both your experiences mirror each other, because it really just comes down to the relationship. One being a toxic, horrible one. The other one is going, what kind of relationship do I want to have with the people around me, and what do I need to do to foster that connection? Yeah.

Thank you for your yin and yang, the pros and the cons of the relationship side of things. Thank you, gentlemen, Alex and Nate, appreciate that. It's funny, we're going to be talking about technology, but no less about the human side, because that is so integral to what we are actually talking about.

So the question we're trying to answer today is: how will adopting, or dismissing, AI impact the employee experience and work culture? But before I get into that, digging into the aspects of all of that, I like starting my shows with definitions, because we don't define things ever. We throw out words all the time, like leadership, diversity, but nobody ever defines them.

So most times out of ten, we don't know what we're really talking about. So when we're talking about AI, artificial intelligence, what do we actually mean?

Alex Schwartz: Sure.

Nate Thompson: Alex, I'll jump in on that and then you can tag on. So I think it's just important to make a distinction that needs to be made right now, in the context of the evolution of AI that we're in. AI has been around for 60 years, since the '60s I mean, and there are a lot of versions of it, and the version that people don't know is the hidden version.

So really quickly, that's the version that's behind Netflix, behind Spotify, behind Amazon and your shopping cart and why you get recommended all these things. And it's in your maps and it's in your search and nobody knows it's there, but it's been there forever. And that's machine learning on the back end.

Now what's emerging right now is generative AI, and it's a very different kind of AI because it's an AI that you touch and that you interact with, you talk to it, you type to it, and it responds to you conversationally in human language. This is a very, very different thing. It only started when OpenAI launched it on November 30th of 2022. So I think the AI that we should talk about today is generative AI, because that's the AI that has captured the world.

Russel Lolacher: So why is this so confusing for people? Because we do hear machine learning. We do hear about generative AI. But that's not how a lot of people are talking about it. They're just talking about AI, and that it's going to take my job. That's all we seem to be hearing, over and over again.

And then there are all the tech layoffs happening currently, and AI is going to replace a lot of those roles, as it's being reported. What do we need to do to myth-bust a lot of this?

Alex Schwartz: It's a really good question. Machine learning, of course, is the driving way that AI learns; all AI is powered by machine learning. You input the data, you train the model, and then the machine begins to train itself, right? This is the first time in history that we've had technology that learns on its own.

So that's why we use the term machine learning, and that's very distinctive. When we talk about why is AI going to take my job, or is AI going to take my job, that again goes back to what Nate was talking about, which is this massive shift towards generative AI, and the way that generative AI in particular is performing a lot of tasks and skills that we previously siloed into a very human domain. So the first thing we've got to do is take the fear out of the room, right? And Nate and I talk about this on stage. We talk about this on our show. This is absolutely mission-critical for every individual, for every professional, and for every organization. Okay. There's a very common saying that I imagine some of your listeners have heard.

AI is not going to take your job. Someone who knows how to use AI is going to take your job. Ignoring this would be like ignoring the internet. And by the way, AI is being adopted a lot faster than the internet was. So it's all happening at such a breakneck speed, and that's what creates the fear. Not to mention that we know the news is incentivized to show us fear-based stories at a ratio of ten to one.

So I would encourage your listeners to go out and Google 'good news stories,' and you will see, as you ladder up, some of the amazing ways that AI is going to impact life on this earth, from preventative medicine that helps us live longer, to reimagining clean energy, to all sorts of use cases that are really, really powerful. But let's get back to, you know, this core issue.

How do we explain this? So first, you know, we're talking about this duality of good and bad. And what we need to understand is that on the good side, AI is helping knowledge workers across industries become anywhere from 25 to 40 percent more efficient right now. There are three basic use cases for generative AI, which companies, again, are seeing incredible efficiencies with, and you can think about them in these three buckets: as an assistant, as a strategist, and as a creative partner.

And what we are encouraging everyone to do is start leaning into these technologies, start exploring them, and start finding, you know, that they're actually incredibly helpful. The big, big gift of generative AI is that it can perform more and more of our busywork and create time for what matters most. That is the powerful, optimistic message behind generative AI that people miss when they're solely focused on, oh my goodness, is AI gonna take my job? I definitely have more to add, but I want to turn it over to Nate and see what he has to say.

Nate Thompson: I'll just... Alex, that was fantastic. That was fire. A couple of things that I think are worth adding. Alex mentioned duality. Yes, in the hands of bad actors, artificial intelligence is going to be used in ways that it shouldn't be used. But that's not an AI thing; that's people doing things that they shouldn't be doing.

It's always been the case. The technology isn't inherently bad; it's how you use it. That's one. Two, yes, there are risks. So right now we're seeing an emergent field of legal, where people are talking about what is okay to do with AI and where we start getting into trouble.

And we've seen some situations where people have used AI and it hallucinated and it wasn't actually real data. Or people input data that was proprietary, private, confidential data that shouldn't have been shared. And anything that goes into the model becomes part of the training data, unless you set the settings to say, do not include my data in the training data. So we call it the seatbelt slide when we're talking to audiences: here are some things that you have to do to protect yourself. But we always say, this is so new that tomorrow it's going to be different, right? So you have to stay up with this. Now, I think it's worth saying that a history lesson is valuable. If you look at the history of humanity, we have always been coming up with new technologies that are revolutionary, right?

And those revolutionary technologies allow us to move forward as a species. This is another iteration on that. So that part of the story is very familiar in the history of humanity. The one thing that's different, that Alex touched on, and I just want to make sure we all grab onto this, is this technology learns at scale.

We're talking 200 million users on one generative AI called ChatGPT. 200 million people. And it's teaching the technology at scale, across the world, at the same time. What makes this different is we've never had a technology that's learning the human language, the human interactions, the human experience, at the same time, at this scale, before.

So it is going to be different, and that's why Alex is saying, you have to start using this. Don't think of it like, this is going to replace me. Think of it like a sidekick. Think of it like a co-pilot, a partner. This is the technology that's going to be with you every day, thousands of times a day, helping you do what you do, better.

Russel Lolacher: I can hear in the tone of your voice that you're familiar with talking to people who are not familiar with generative AI, and trying to, and I don't wanna use the word spoon-feed, but it fits. I mean, at some point we have to get to where there's a general understanding. We talked about mindset off the top.

Because as we all know, tactics mean nothing... it doesn't matter what you type into ChatGPT if your mindset's not working and you're not using it in the right way. So what are the biggest roadblocks for you two when it comes to convincing leaders to adopt this in the workplace?

Alex Schwartz: The place that we started... I want to rewind to what mindset is and what our take on it is, because I think that will help convey our perspective really clearly. So I imagine that your listeners are familiar with Carol Dweck's work. She's the godmother of mindset. She wrote Mindset: The Evolution of Success. Am I getting that right? The new evolution of success?

Nate Thompson: New psychology of success.

Alex Schwartz: Yeah, what Nate said. That's exactly it, I said the same thing. So she wrote Mindset: The New Psychology of Success. She painted a very vivid picture of what a fixed mindset is and what a growth mindset is, and that is a really good place to start. We didn't feel that that was fit for purpose for the moment that we're in.

We're living in the most digital and disrupted workforce in human history. Things are moving incredibly fast and it's really hard to keep up and also to lead with humility, right? Because so many leaders, they want to have the answers. They want to say, hey, we got this. We want to tell you how it is, but they don't.

And being vulnerable and saying, I don't know: that is a superpower now, right? Because anybody that tells you that they're an expert on the future of work is full of it. There are no experts. We are writing the future of work day by day. Literally day by day. So what happened in our experience was, we said, hey, a lot of people are coming to us for band-aids.

They want the quick fix. They want, hey Alex, hey Nate: What company should I go work for? What AI tool should I adopt? What skills are rising? What skills are fading? What are my unique human superpowers now? All these sorts of questions. They're not bad questions. We understand that people want quick fixes, but we also understand that in an environment that's moving and breathing and living as fast as this one is, quick fixes aren't going to work.

So we developed our own model that we call The Future of Work Mindset. And we have three pillars that underpin that: explore, expand, and evolve. And we feel leaders, professionals, and just human beings on the planet need to adopt these principles, because it allows us to find a new way of being, get out of the fear, start allowing ourselves to run experiments, to change the way that we learn, to unravel these stories of identity.

We're all so fixed in 'I am what I do.' Especially here in the United States. I am my job, right? You know, so many of us identify in that way, and when you have a technology that is changing the game like generative AI is, you need to start becoming more flexible. That's hard. That's scary. You know, Nate's got a great slide that says 'easy doesn't change you.' We love that.

We love that message. You know, there is an element of resilience in leaning into this, and, you know, anybody that reaches out to us, we can take them through this model in greater detail. But, you know, the first place to start is mindset. And then the second place is for leaders to really look at this and say, how can I avoid making a reactive, impulsive decision?

Typically, large companies do one of two things in a situation like this. And Nate and I have seen this over and over and over again in the digital transformation space. Back when, you know, we used to have a slide at one of the companies that I worked for that said, it's an app-based world, dive in. You know, everybody needs an app.

So there's a lot of people saying like, everybody needs an AI tool. Maybe you do. Maybe you don't. So big companies typically, you know, try to go acquire an AI startup. We're seeing a lot of that. That's like trying to put a firework on the back of a battleship. It's not going to get you very far. Usually that merger and integration doesn't work.

Other folks will say, hey, we need a head of AI. But if the culture isn't ready for it, that may not work. So take the step back to say: where are we in our digital maturity? What does our workforce need? Are we going to be blown out of the water and irrelevant if we don't do a massive transformation with AI?

Or are we better suited to start small and run a few experiments and sort of figure out what is the business case, what is the ROI, and what is the best way to roll this in iteratively to add the most value and allow the culture to acclimate to it?

Russel Lolacher: I can understand this being difficult for many leaders, because they had just gone through the pandemic and had the litmus test of whether they're good or bad leaders through that change. And then at the, well, quote-unquote, end of the pandemic, here's generative AI they also have to figure out. It was just this tsunami of change, challenging them as leaders when they had really not been challenged to this level since, I guess, the industrial revolution, or the internet at this point.

So I can see them being resistant. But I want to flip it more to the human side because I can see leaders going well, competitive advantage, competitive advantage, money, money, money, money, customer, customer, customer. That is probably where their brain is going to go to when it comes to convincing them to adopt things like this.

But my show is about the humans that work within that organization. So how do you convince a leader to understand the impact AI, positive and negative, will have on the culture, on the employee experience?

Nate Thompson: Hmm. I think the interesting thing here is it's less about convincing. Convincing, to me, isn't a great influence strategy. What I think is powerful is a go-and-see. A go-and-see is this idea of, hey, I'm not going to try to convince you of anything. Let's go look at it. Let's go watch other people use it. Let's watch other people talk about it. Let's watch this video of someone using it. And then let's, at the end of the go-and-see, have you use it. And just, you know, notice what you notice. And there's no pressure. There's no expectation. There's no fear about, am I going to look stupid in front of anyone? It's literally just, let's go and see this thing. And what happens when you do that in a group, or you do that with individuals, is the person starts to loosen the model. Loosen the model is a psychological term for, hey, I thought it was this, and it's starting to loosen up, and now it might be something else. And that moment is when you finally have traction. But here is the important distinction: you can't lead what you don't understand. So from a human perspective, any leader who's listening to this, you are not going to be able to lead AI if you don't use it and you don't understand it. Full stop. So it's really important for a human to start to use this technology in the ways that Alex was talking about, as an assistant, you know, as a creator, as a strategist, and start to put your world into this tool, even if it's just personally, even if your company says no one touch this technology. As a human being, you can start to use this so you get comfort, right? You gain awareness, understanding, comfort, and then you start to go, oh, this isn't what I thought it was. This is actually really helpful. This is good. That moment, which is a very human moment, is what it takes to start to transform an organization. Because if people talk about it with abstraction and puffery, like, oh yeah, AI is really good, we're gonna use it, then everybody looks at each other and goes, are we using it? I don't know anyone who's using it. You know, it's just really awkward. No, let's be the curious people. So we say explore, expand, evolve.

Explore generative AI. Start to use it yourself. Ask it about your vacation. Ask it about children's rhymes and dad jokes. Ask it about how you should plan your day, right? Just start with the basic stuff, and then you'll start to get your feet and go, oh, I can see how this could start to add value in my company. Now here's how we'll connect the culture to this, for us. If you create a cross-functional group of people who are not AI experts to start to come together inside of the company to shepherd AI, to foster a culture of AI, these cross-functional people will start to be your anchors in the organization, going, I'm using it and I'm not a coder. I'm not a software engineer. I'm not a developer. I'm using it, and it's really helping me. And what that will do is start to create connectivity around the organization in a powerful way, and then we'll start to build a narrative, a strategic narrative, around: hey, AI is here to stay. We're all using it.

We're getting better at it. Come on, come with us. Let's go on a journey. So I think the most important thing to say here is: transformational change is deeply personal. Generative AI is just a technology. This is about what's going on inside of people, and marrying that to a go-and-see experience.

Russel Lolacher: Totally get cultivating curiosity, really appreciate that. I want to dive back to what Alex said earlier, though, when you were talking about workplaces, cultures, not being ready for generative AI. What does that look like? What are the traits of a culture that maybe should not be doing this tomorrow?

Alex Schwartz: Well, I think some of the things I touched on earlier, just quickly: are you being reactive? Are you being impulsive? Are you putting people in charge of AI programs who know nothing about AI? I think those are some of the warning signals that we see. And are you doing this in a way where you don't truly understand

what is the value that it's going to bring to your organization? Look, one of the things that's interesting about this moment we're in is, if you Google what companies are using AI well and what companies are not, you don't get a lot of results. And you have to ask, well, why is that? The reason is that we are in this early-adopter phase, and a lot of companies are keeping their efforts behind closed doors.

So there's a lot of clandestine activity around AI. That's one of the things you run into if you go out in the wild and say, hey, why can't I find more use cases of how companies are rolling it in? But I do want to go back, because maybe through the lexicon of how we are using it, we can explain this even better.

So we knew that we wanted to start using generative AI to become more efficient at what we do. And we have been experimenting with it and reframing, on a weekly basis, how does this impact our workflow? So we might be using ChatGPT for editing our scripts and helping us find the right clips and create show notes for the podcast.

We might be using MidJourney for our presentations. We're using Canva for our presentations. We're using Perplexity.ai to do deep research faster than we could do it otherwise. We're probably going to start experimenting with Claude pretty soon, 'cause we've heard great things about that product from Anthropic, and that it provides some nuance beyond ChatGPT.

And we are taking immersion courses, and learning, and asking a lot of the other people we interact with, how are you using it? So we have this knowledge share going on. So, I don't want to say that we have this perfectly figured out, but between Nate and myself, we do have a lot of experience with digital transformation.

Nate has a lot of experience with culture transformation, and we're trying to use those lessons learned to purposefully roll it into our own organization.

Russel Lolacher: What do you say to organizations that might be a little bit more nervous on the privacy and ethics side of things? Especially bringing that into an organization that might be feeling a little resistant to this new technology, and they're putting in things where they're like, am I okay to put this in?

Is this telling the, the AI overlords too much about the organization? I'm just trying to understand it from a privacy-and-ethics perspective of adoption.

Nate Thompson: Yeah, there are some really simple things that companies can be doing. One is, you can bring in experts on that topic very, very easily and have them come and talk to the company. But here's the thing. Organizations have risk, they have compliance, and they have legal. So there's already a strong foundation of what we should be doing and what is outside of the boundaries. And then you can start to bring in some of those experts around ethics, privacy, exposure, mitigation, that sort of thing. But here's another simple way to go out and get a framework, even if you don't want to pay for it, even if you don't want to pay for a speaker to come in: the United States government is putting out its framework around generative AI, and AI in general. So when you have the US government, in a bipartisan effort, asking, hey, what are the guardrails that we should put around this, and making that information publicly available, that can be a key resource for a company who may not want to spend as much to go out and learn that sort of thing: grab onto a structure and say, oh, here are the framework recommendations from a bipartisan federal government, right? And here's how we can take some of this thinking and build it into our organization, actually just marry it with a lot of the processes that we already have.

Russel Lolacher: Do you feel AI is helpful on things like diversity, equity, inclusivity, belonging? I mean, it's just a tool. I get that. It's just a tool. It's only as good or as bad as how you use it. But we've got a lot of organizations that are trying to understand themselves, trying to be more inclusive. Is AI a useful tool for that?

Alex Schwartz: I love this question so much, because it is a really loaded question. There is the fact that some AI tools are demonstrating incredible bias in negative ways. And I'm going to let Nate tell a story about a conference he went to to bring this to life, because it's a really great story.

But you need diverse teams to code AI and to program AI and to input the data because when you don't have that, people's inherent biases wind up in the code, and then ultimately in the AI tools themselves. And that is not good for society at large. I want to say that. I want to underline that three times.

That is not good for society at large. And we also must acknowledge that all of us, no matter where we come from, what culture, what race, what gender, what experience, we all have inherent biases. So the only way to get to AI that has a broader sense is to get a lot of different kinds of folks together to drive the data and the learning, and make sure there's truly an inclusive set of principles and opinions put into the programming.

Now, let me take you in a very different direction. I want to talk about AI hiring tools, and this is really cool. When you look at the evolution of AI tools for HR, what we are understanding is that AI actually allows hiring managers and chief people officers to reimagine talent in ways that human beings cannot.

And so this can create a lot of diversity within a company's ecosystem for upskilling, reskilling, moving folks around, considering people from backgrounds that may not seem suited for a particular role on the surface but actually have fantastic skills. You know, looking at someone who's got the resume of a bartender and understanding, hey, this could be a fantastic salesperson.

The HR leader may or may not make that connection, but AI will. AI can look at these skills in a very, very broad and tactical way to reimagine roles. And when you're going through the shift that we're going through and roles and organizations are being reimagined daily, this is a tremendous way to get more diversity by looking very specifically at skill sets.

And that, I think, is agnostic in many ways of gender, race, and other biases.

Russel Lolacher: Now, I want to hear this story, Nate.

Nate Thompson: Yeah, so I think Alex makes an important point that I'll set this story up with, which is: AI is a reflection of humanity. And I think it's interesting when people are like, oh my gosh, AI is biased. It's like, hello? It's trained on our data. We're training it. It's us. It's reflecting it back to us. So it's kind of an uncomfortable moment. Have you ever woken up when you didn't get enough sleep, maybe you were having drinks the night before or stayed up too late, and you look in the mirror and go, I don't look that great today? That's kind of the moment when AI goes, hey, here are all your biases. We look at it and go, oh, I don't really like that. But we need this. We need to learn from this. Now, this story.

I'll do the short version of it. Amy Webb is fantastic. She leads the Future Today Institute, and she is bringing the future to us over and over again. One of the things she did at this conference that was so powerful is she goes, generative AI has biases, and I'm going to show it to you right now. She said, I asked ChatGPT and Midjourney to show me what a CEO looks like in a large company. All white males. Show me what a CEO looks like in midsize companies. All white males. Show me what a CEO looks like in small companies. And in only one of those, it's a four-box layout, only one person was semi-diverse. So now you have it spitting out that only white males are the leaders of an organization. Then she takes it a step further and goes, show me what the CEO of a tampon company looks like. All white males. And everybody in the room is like, oh my gosh, this is the best example ever of bias. But the point isn't to go, AI's bad. The point is to go, AI needs better training. AI needs more diverse people. It needs more thought diversity, cultural diversity, in the training so that this tool can get better.

Alex Schwartz: I want to add one great resource for anybody looking to go deeper into this: the documentary Coded Bias, featuring Dr. Joy Buolamwini. She's fantastic, and I think she really shines a spotlight on what we're talking about here.

Russel Lolacher: That's been in my Netflix queue for a while. So thank you for reiterating that I need to dig into that.

Nate Thompson: It's a great one.

Russel Lolacher: So we've mentioned it sort of in the DNA of our conversation about change that resiliency is something we talk about a lot too. But when it comes to AI, it's constantly changing; as you said, things will change tomorrow.

So you better get on this train now. How do you keep up with that level of change when the organization, the culture, the employee experience is saying, you know what, I've had enough change, and you want me to be in a constant state of disruption? How do you approach that for an organization?

Nate Thompson: You can't out-hustle an algorithm. Alex, you want to take that?

Alex Schwartz: Well, as Nate said, you can't out-hustle an algorithm. So that's for starters. I think first and foremost, there needs to be a massive level of acceptance and humility, right? The acceptance that this is moving faster than any other technology we've experienced, and the humility to say, you know, we're probably not going to be able to keep up with it.

However, AI is a little bit like the wild, wild west right now. You have a lot of startups, a lot of players, a ton of VC investment. And as we've seen in any cycle where something new to the world is being adopted quickly, albeit not as quickly as generative AI, things tend to shake out.

It emerges who the new players are, which are the best platforms, what are the best use cases. And I think it's going to become less chaotic than it feels now. As Nate said earlier, we are in this early adopter phase. And that's the good news and the bad news.

The good news is that people who are leaning in now are going to be able to put their surfboard on the wave faster. It also means they need to try to keep their balance on that surfboard by not getting overwhelmed by all the things that are happening, and just building slow, iterative habits. You know, habit research shows us that we can't build more than three new habits at a time, and that it typically takes about 90 days to ingrain a habit. So using these simple contextualizations that tell us how our human CPU works gives us the advantage to say, all right, what can we reasonably do, what can we accept that we can't do, and how do we let our strategy flow from there?

But resilience and adaptability are the name of the game now. We do want to feel like we're getting a breather, and we're not getting one just yet. I know that feels really hard, and I want to hold space for anybody who's feeling like, oh my god, this feels so overwhelming. But the best wisdom I ever got from a leader... I was mentored by a wonderful man by the name of Jeff Hoffman.

He was one of the founders of the Priceline family of companies, and he gives these great keynotes to startups, entrepreneurs, and business leaders all over the world. One of his core principles is: get off the couch. When we're sitting on the couch, when we're sitting in fear, when we're stewing in that, there's no ability to start to see what possibility looks like, right?

You move a muscle, you change a thought. We must all accept that staying in motion right now is our solution, and that is our safe haven, and that's what's going to get us through.

Russel Lolacher: As we get closer to the end of our conversation, I want to wrap with a question around curiosity and the idea that this is still an early adopter phase. And this is for both of you: what questions do you still have about how AI is going to impact the workplace?

Nate Thompson: Oh, gosh. Yeah. It's a laundry list. We could take this a million directions, but let's just look at work. You've now entered a layer cake workforce. That's how Stephanie Nadi Olson and the We Are Rosie team contextualize it, and generative AI is one of the layers in this layer cake workforce. And it's really starting to raise the question: when you have a human and a piece of intelligent software or a bot doing work, how should that work be broken up? Who should own that work? Who should be compensated for that work? Who should get in trouble if that work goes awry? How do you handle ownership? That's a huge question about this emerging world. And then you can flip that and take the conversation into compensation. How do we compensate for this? If a person is 50 percent more productive than their peers, if a person is 100 percent more productive than their peers because they're good with generative AI, should they be compensated differently? Should that be a specialty role? If it takes someone out of a $50,000 job and launches them into a $200,000 job in the same company, is that okay? There are all these questions emerging around this. Can you be fired for it? Some companies will say this thing is bad, which we don't recommend at all, but there will be leaders who go, generative AI is bad. Do not use it. If you use it, we'll fire you. Okay. Is that okay? Right? Or is that person going to be promoted in a different company for doing the same thing? It's going to unlock an entirely new world of leadership in HR around how we're thinking about characterizing and utilizing this exciting new technology, and whether we're doing that in a healthy way or an unhealthy way.

Alex, I will kick it over to you.

Alex Schwartz: I love what you said, and the way we contextualize this, if you want to pull it into one sentence, is: what is my work versus AI's work? Right? Where does that split, and how do we understand the broader implications of what that means? I'm going to share the negative side first.

The thing that scares me is deepfakes. I'm really freaked out about deepfakes and their evolution, the idea that you could get a video message from your boss telling you to do something, and it's actually not your boss. There's a real question of how we filter out what's real versus what's not real at work and in our lives.

That to me is the scariest thing about AI. I know we will get there, and I know we'll develop better tools, but right now we don't have them yet. So that concerns me. On the optimistic side, what I'm excited about is seeing this plethora of use cases: people using it as individuals at work to become more productive, using it as teams, and all the myriad ways it's going to evolve how we work and, I truly believe, make our lives easier and allow us to focus on what matters most. And I want to go back, Russel, to a question you were really hoping for us to answer concretely, about what bad leaders are doing. If I could sum it up: they're doing nothing. If you're doing nothing about AI, ultimately you are screwed, because, again, ignoring this is like ignoring the internet.

Russel Lolacher: Fair. All right, gentlemen, Alex, Nate, I have the last question to ask both of you, which is what's one simple action people can do right now to improve their relationships at work?

Nate Thompson: Oh my gosh. Listen and be present. Okay, so humans are going through this much change, intersectional change, compounded change. Russ, you said it: can't we get a break, please? Can we have a break from change? Well, it's not coming. We're here to say it, and we're not being mean: you're not going to get a break. So what can we do? We can create a space to listen. What are you going through? What do you need? How can I help? Just being that kind of human connection in a time of heaviness and hardship and challenge is vital. If you look at when humans come together, it's through conversation. And when they diverge, it's when they stop talking and get into isolation and silos and assumptions, and it gets really dangerous.

So I think it's the, it's the human superpower that's critical right now, which is listen and be present for one another.

Alex Schwartz: I love that. My piece of advice is that human interaction is not an app. And I've learned this in my own life, right? In my twenties, I would behave in a more transactional way. I would give to get, and I don't do that anymore. I try to be really, really thoughtful about mutual gifting relationships, showing up as my best self no matter what, and giving from a place of what is genuine and authentic. But we have technology that solves problems on demand. You press a button, you get a car. You press a button, your groceries are delivered. You press a button, it plays the song you want it to play. And there's an infinite number of use cases for all of us that we are using every day on our computers and our phones and our tablets.

And what that creates is a misaligned expectation that we as humans must operate in the same way. And that's not true. The blessing and the glory of human interaction is in our messiness, right? So I'm going to take Nate's point of listen further. A lot of people talk about empathy and the importance of empathy. Empathy is great, and empathy, when done well, requires a certain level of messiness.

It's not just: you tell me your story, I listen, you've been heard and seen, and it's over. It doesn't work that way. But the level up for the moment we're in is compassion, because the difference between empathy and compassion is this: empathy is I see you, I hear you, I respect what you're going through, I'm trying to understand you.

Compassion is all of those things plus a desire to help. And in a moment when we're going through what we're going through as a species, as a workforce, we need to be helping one another more.

Russel Lolacher: That is Alex Schwartz and Nate Thompson. They are the founders of the Disrupted Workforce Consultancy and the co-hosts of the Disrupted Workforce Podcast, which now that you've heard them here, I highly recommend you go over there and listen to them there too. Thank you so much, gentlemen.

Alex Schwartz: Thank you, Russel. It's been fantastic.

 
