In this podcast episode, Dr. Beverly Wright, Vice President of Data Science & AI at Wavicle Data Solutions, and Brian Burns, Senior Manager of AI at Sprout Social, explore how and where to best apply Generative AI in the world of knowledge work. From defining what “knowledge work” really is to unpacking practical applications and risks, this discussion dives into the fast-evolving space where creativity, cognition, and GenAI meet.
Watch the full podcast here or keep scrolling to read a transcript of the discussion between Beverly and Brian:
Beverly: Hello, I’m Dr. Beverly Wright and welcome to TAG Data Talk. With us today, we have Brian Burns, Senior Manager of AI at Sprout Social. Thanks for being here.
Brian: Thanks for having me, Beverly.
Beverly: Excellent. I’m super excited to have you talk to us about exploration of when and where to leverage Gen AI in knowledge work, which is a topic that so many people are talking about right now. But let’s start off with a little bit of background. Tell us, why are you so cool?
Brian: It’s a tough one. Am I cool? I don’t know if I am cool. Let’s see. I’ve got a couple things going on. Like you mentioned, I’m at Sprout Social. I also support the Georgia Tech Master of Science in Analytics program on their Advisory Board, and I’m getting more and more involved with the Chicago Board for INFORMS.
Beverly: So, we’re talking about exploration of when and where to leverage Gen AI in knowledge work specifically. So, let’s talk all about knowledge work. When we use that phrase, a lot of people don’t know what that means exactly. Tell us, what does that really mean, knowledge work?
Brian: Well, it’s a great question. Knowledge work is really any work where creativity or brain power, for lack of a better word, is the main output. So, software development, writers, other types of content creators, that’s what we’re thinking about when we hear the term knowledge workers.
Beverly: It’s fair to say these are educated, kind of white collar. They have to have a certain craft or a skill. Is that fair to call it? Is that knowledge workers? I mean, we’re knowledge workers, right?
Brian: Absolutely, yeah. I’m definitely a knowledge worker. I’d say that’s a fair characterization, though probably not totally encompassing. There are other areas for knowledge work, anywhere where there’s not a script, but kind of a pattern or paradigm that you learn, and that’s what you’ve got to know and reference when you go to do it. That falls under knowledge work as well.
Beverly: Nice. OK. It feels like this is the first time that knowledge workers have felt a little bit threatened. Especially like we talked about robotics in the past, like you and I just on the side talked about robotics and how robotics are taking over. I read a stat that was pretty startling the other day about how a large percentage of South Korea’s workforce has been taken over by machines.
And so, it’s interesting to think, oh yeah, but the degreed people are safe. And you’re saying that’s not necessarily the case. There’s still Gen AI that could be leveraged within the knowledge worker kind of area, right?
Brian: Yeah, absolutely. And I think it’s interesting you bring up machines. When I think of what’s happening now, I draw a parallel to what the principles of Lean did for the manufacturing process back in the day with the advent of machines, machines that could do some of the tasks within the manufacturing space. And that’s what we’re seeing now. We’re seeing some technology that can do some of the tasks that knowledge workers are expected to do.
Beverly: Got it. OK, well, when we talk about Gen AI, that’s another term that could be all over the place. So now that we sort of understand what knowledge workers are, let me go back a little bit and tell me what knowledge work is not. And then we’ll talk about what Gen AI is specifically. So, when you think about what’s not knowledge work, what does that encompass? Can we define that?
Brian: What is not knowledge work? I think some good examples are things that have to be done in person, right? Some emotional, interpersonal work. Think of a teacher. That’s not just knowledge, although there’s a lot of knowledge involved, but it would not be effective without practical hands-on experience within the classroom and being there with the students. I would not classify that as knowledge work. Then of course, anything that you have to do with your hands, you know.
Beverly: Construction, like construction workers, where would you put those?
Brian: Yeah, although, you know, there’s lots that’s learned on the job, but for what we’re talking about today, it’s not something that could be done through the content of images, text and those sorts of mediums, right? You can’t hit a nail with a piece of paper.
Beverly: Got it, got it. OK, got it. OK, so knowledge work. So, let’s talk about Gen AI a little bit and then we’ll talk about it in the context of knowledge workers. So, when we think about that, are we just talking about LLMs? Are we talking about creating images? What kind of references would you give?
Brian: Yes, you hit on some of the main ones that come to mind, especially when ChatGPT hit the market. Anything to generate text or summarize text, that’s definitely what we’re talking about here. But also, faster and faster, we’re growing to cover other mediums such as images, videos, etcetera. It’s generating content in any one of those areas.
Beverly: OK, so my company has this tool called EZ Gen AI and it creates like tables, you know, or it leverages different programs to create something like a summary. And it’s brand new. It’s not just pulling it and then putting it there. It’s brand new, created. So, tables, summaries, any kind of text and even images are what we’re referencing with Gen AI in this particular context, right?
Brian: Yes. And I think it’s helpful also to note where this is distinguishable from machine learning, or arguably unlike the prior wave of advancement here. With generative AI, the key is the word ‘generating’ these things. It’s beyond a decision or a probability. It’s creating content.
Beverly: OK, got it. So, AI can encompass things like machine learning models and tools and create different ways of automating, basically predictive analytics, and coming up with more efficient ways to do that. But with Gen AI, the operative word is generative, and that’s what makes it unique and different. Yeah, I hear this a lot. So, what are some examples of how we could potentially leverage Gen AI in knowledge work?
Brian: Well, we’re seeing a few come out already. One that comes to mind is really part of my day-to-day: software development. We’re seeing some of the more mundane tasks there being automated by generative AI, some of it being unit tests. If you ask any software developer, and many data scientists, they’re not their favourite thing to do, but they are core to developing production software.
Those unit tests, while today they don’t come out exactly perfect, do make it much faster to build test coverage. Another example is queries, right? So, a SQL query to grab data. More and more, generative AI is getting better at generating those from a text prompt. Anyone that’s worked as an analyst has definitely had a certain proportion of their day dedicated to ad hoc data pulls, requests like, can I get data on this, data on that? I think that’s quickly going away and being replaced by generative AI.
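To make the unit-test use case concrete, here is a minimal sketch: a small date-parsing helper and the kind of boilerplate tests a GenAI coding assistant might draft for it. The function and test names are purely illustrative, not from any codebase discussed in the episode.

```python
from datetime import date

def parse_iso_date(text: str) -> date:
    """Parse a 'YYYY-MM-DD' string into a date (the function under test)."""
    year, month, day = (int(part) for part in text.split("-"))
    return date(year, month, day)

# The kind of mundane-but-necessary tests a GenAI assistant might generate:
def test_parses_a_valid_date():
    assert parse_iso_date("2024-03-15") == date(2024, 3, 15)

def test_rejects_malformed_input():
    try:
        parse_iso_date("not-a-date")
    except ValueError:
        pass  # expected: int() fails on non-numeric parts
    else:
        raise AssertionError("expected a ValueError")
```

As Brian notes, generated tests like these aren’t always perfect, but a wrong one fails loudly and is cheap to review and correct, which is what makes this a low-risk use case.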
Beverly: OK, that sounds great. Let’s do all of it, right? What kind of challenges do we face in the middle of all this?
Brian: Beverly, generative AI models, while they are very large, very impressive models, they’re still models. There’s a degree of error in there, and there always will be. It’s important to remember there’s also a degree of human error in every process as well. So, when we’re using generative AI, you’ve got to think about the implications if what you’re using it for is wrong, right?
So, some of those use cases that I just mentioned, if they’re wrong, it’s kind of obvious that they’re wrong and they’re readily correctable, right? Like a table name is wrong and the query’s failing. In the hands of someone that knows how to write SQL, they can easily correct it. If the consequences of being wrong are weightier than that, that’s where I would get more concerned about leveraging Gen AI.
Beverly: So, what about the human in the loop? Even if generative AI is being used in some of these knowledge work areas, if there’s a human in the loop, what do we have to worry about? And I’m being facetious, obviously.
Brian: Yeah, I mean, that’s a core strategy in how they’re being trained with RLHF, reinforcement learning from human feedback, humans providing feedback on their predictions. But usually you’re implementing a model, in general but definitely with generative AI, for some of these things to scale. So sometimes you can’t have a human review all the output.
Beverly: Right, right.
Brian: You got to be conscious of that.
Beverly: Yeah. So, in a way, it sounds like you might have human errors and then, on top of that, you might have machine errors. And so, it could become potentially more prone to errors. Is that what we’re expecting, at least until they get better?
Brian: I’d say it’s a constant area of focus for research, how to get better and how to scale these things. One of the most interesting things that I’ve seen recently is that it’s becoming more prevalent to score your model, kind of judge how your model is performing, with the output of another model. And that has inherent implications, sort of a feedback loop of propagating errors.
Beverly: Yeah, yeah.
Brian: But it’s also very practical, it works, you know.
Beverly: Yeah, yeah. OK. So, it sounds like we’re saying that we can make it happen, not incredibly efficiently or accurately just yet, but that can get better.
Brian: I guess what I’m saying is that it’s not perfect, but perfection isn’t necessarily the goal, right? There are lots of things that we could use generative AI for that have really high utility, and that’s what I’m trying to focus on today. I can’t control where generative AI goes. It’s not my active area of work, but I’m always thinking about how I can leverage it for my team and in the products that I’m producing in a way that’s responsible.
Beverly: Right. And you mentioned something about the, you didn’t say consequences, but the result or what happens if something goes wrong. Is that part of the evaluation process of figuring out the right places to plug in generative AI in knowledge work? How do you know when it’s right and when you should use a type of generative AI within knowledge work?
Brian: Yeah, that’s a great question. And it’s often a challenge when you’re deploying this. The definition of what’s good is something that you should outline at the beginning of your project or your endeavour. There are metrics that are currently used today to judge the quality of LLMs in particular, like ROUGE and BLEU. They’re kind of akin to metrics that we’re familiar with in machine learning, like precision and recall. And so that’s one way that you can judge it. Basically, that’s looking at text that you know is good, that has been human-validated.
Seeing how close you get to those answers, that’s really the main mechanism that we have right now. And second, when you’re scaling this, when you’re putting this into your product, just like you do when you deploy any other machine learning model or ML-based feature, make sure you have that feedback loop in your product. So, did the customer click on this? What choices did they take from this action? Was there an escalation afterwards, like you predicted? Monitor those distributions and see if there are any changes.
That way, you can’t prevent the errors, but you can help mitigate them, or at least have assurance that things haven’t changed since you did your last very thorough evaluation of how these things are performing.
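The ROUGE family of metrics Brian mentions boils down to n-gram overlap between a model’s output and human-validated reference text. A minimal sketch of ROUGE-1 recall, the simplest variant, shown purely for illustration:

```python
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    """ROUGE-1 recall: fraction of the reference's unigrams
    (word tokens) that also appear in the candidate summary."""
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    # Clipped overlap: a reference word only counts as many
    # times as it also appears in the candidate.
    overlap = sum(min(cand_counts[w], n) for w, n in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)

# A candidate covering half the reference's tokens scores 0.5:
score = rouge1_recall("the cat sat", "the cat sat on the mat")
```

Real evaluations typically rely on library implementations and several ROUGE variants (ROUGE-2, ROUGE-L), plus BLEU for a precision-oriented view, but the overlap-with-validated-text idea is the same.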
Beverly: Got it. OK, so it sounds like you want to choose areas of application in knowledge work where a human in the loop is very practical, where there are ways to validate what’s been produced, and where the risk is not crazy high. Maybe you start with something internal, or what would you recommend for how to get started? A lot of people are spending a lot of human hours on knowledge work. They want to be more efficient, but they also want to watch out for the risks.
Brian: Yeah, great question. This is where I turn back to history as a predictor of the future and learning from the past, and I do see a lot of similarities to how lean manufacturing became more prevalent, right? And to be clear, lean has been applied to business processes and other areas outside of manufacturing for a long time now. But that mindset, I think, is driving where we’re seeing initial investment. That’s really looking at what your team’s doing and what’s valuable, right?
What is adding value to the product you’re creating and the services you’re providing, and also what is the team doing that’s just necessary to get it done, but not necessarily something that they need to do themselves? So, phrases like automate the mundane, I think that’s where you can find your initial use cases. Deploy Gen AI in a way that’s lower risk, lower impact, but safe.
Beverly: Yeah. So, any kind of task where you’re leveraging fairly high-level talent and they’re doing something that’s mundane over and over again, and it can be automated but also has a low-ish risk, OK.
Brian: Yeah. So, just a couple of examples: a software developer using it for unit tests, or for summarizing some of the work that’s been done in a certain change to the code. But to divert from software, let’s say you’re creating a strategy document that you’ve been asked to put together. You just wrote 10 pages of excellent content. And then comes, you know, the executive summary.
Sometimes that’s harder to do. You can get some inspiration by saying, summarize what I’ve written, give me 5 bullet points of what’s been done, and take that task, which isn’t the core value of what you’ve done, right? You just laid out the 10 pages of strategy. It makes that much faster, automates it for you, as a smaller example.
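The executive-summary use case is largely a prompting exercise. A hedged sketch of the kind of prompt template you might wrap around a strategy document before sending it to whichever LLM your team uses; the template wording and function name are illustrative assumptions, not anything specified in the episode:

```python
SUMMARY_PROMPT = (
    "You are helping draft an executive summary.\n"
    "Summarize the strategy document below in exactly 5 bullet points,\n"
    "focusing on the decisions made and the actions proposed.\n\n"
    "Document:\n{document}"
)

def build_summary_prompt(document: str) -> str:
    """Fill the template; the result is what gets sent to an LLM."""
    return SUMMARY_PROMPT.format(document=document)
```

As with the unit-test example, the output is inspiration rather than a finished product: the author who wrote the 10 pages is the human in the loop who reviews and edits the bullets.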
Beverly: Yeah. How do you feel about the future of, like, are we going to get this human machine collaboration? Are we going to get this right? How do you feel about the future of leveraging generative AI in knowledge work?
Brian: I think generative AI is going to be a core tool that we use in any process, just something that we think of first when we ask, how are we going to do this? Actually, let me take that back. It’s not something that you’ll think of to do first. It’s something that’s just going to be part of the way that you do it.
Beverly: It’s like using a computer. I mean, obviously you’re not going to sit down and hand write something anymore, right?
Brian: Yeah, most people aren’t going to sit down and get grid paper out to do a table, right? Hand write that out. You pop open Excel or Google Sheets and do it that way. Summarization will be part of your day-to-day the same way.
Beverly: OK. All right. And for those out there listening and want to understand, like, look, just give me one thing. What final piece of advice would you give to people trying to understand more about leveraging Gen AI specifically for knowledge work?
Brian: If I had one piece of advice, and this is biased, completely biased, because I’m an applied person, right? I’m looking for how to derive value from these things. It’s really exciting right now. There’s lots of innovation. It seems like every month there are new announcements of improvements to different generative AI applications and new capabilities unlocked. Don’t get distracted by the new and the shiny.
Really know what drives value for you and your team. I’d start there, right? Start from that process point of view. What can I automate? What can I enhance? And then look at the suite of tools and what you can do. I think that’ll be a good guide moving forward and keep you grounded in what you should use.
Beverly: That’s fantastic advice. Thank you so much. Thank you to Brian Burns, Senior Manager of AI at Sprout Social, for talking to us about exploration of when and where to leverage Gen AI in knowledge work.
As Gen AI becomes more integrated into knowledge work, it offers a unique opportunity to offload the repetitive and focus human talent where it matters most. But its adoption isn’t just about efficiency, it’s about using it wisely. With thoughtful implementation, low-risk experimentation, and a focus on genuine value, we can build smarter workflows without compromising on quality.
Explore the full catalog of TAG Data Talk conversations here: TAG Data Talk with Dr. Beverly Wright – TAG Online.