Expert Insights on Transformative AI Strategies and Measuring Their ROI

Author: Sue Pittacora

In a recent CDO Magazine interview, Sue Pittacora, Chief Strategy Officer at Wavicle Data Solutions, and Himanshu Arora, Chief Data and Analytics Officer of Blue Cross Blue Shield of Massachusetts, sat down for an in-depth discussion of prominent AI use cases in the healthcare industry, the challenges they present, and how organizations can effectively prioritize key AI applications. 


Throughout the discussion, you can explore how AI innovations are transforming healthcare, empowering organizations to navigate the evolving business landscape with confidence.  


Watch the full interview here or scroll down for a detailed transcript of Himanshu’s insights.



Sue: Hello, and welcome to the CDO Magazine interview series. I am Sue Pittacora, with Wavicle Data Solutions, and I am delighted to be joined here today by Himanshu Arora, Chief Data and Analytics Officer of Blue Cross Blue Shield of Massachusetts. Welcome, Himanshu. 


Himanshu: Thank you, Sue. It’s good to be here and good to see you.  


Sue: Thank you. Himanshu, we know organizations everywhere are eager to use AI, advanced analytics, and new tools, but they're really grappling with how to get their data ready and what the right strategy is. So, let's jump right in.


What are some of the innovative use cases you have seen for AI over the past year?  

Himanshu: Before we jump into the specific use cases, I think one framing I like to use is an idea I borrowed from Gartner. It is everyday AI versus game-changing AI, and internal-facing capabilities – those available to our associates and employees – versus external-facing capabilities targeted at our customers, consumers, and anybody who engages with the company from the outside.  


So, when you start to look at things at that scale, I think there is a lot of excitement about the game-changing AI capabilities, particularly in healthcare. Healthcare, in my opinion, has a couple of additional layers of complexity in terms of the players involved – there are health insurers, care providers, pharmaceuticals, labs, and durable equipment manufacturers, so multiple sub-sectors within must work together – and bringing anything game-changing to life means that more than one component of the sector must adapt to it. 


The moment we start to look at healthcare and particularly the health aspect of it, something that comes into pronounced relief is PHI, right? We all value our health information, our data, privacy, and the security that goes with it. When we look at AI more broadly, but particularly generative AI, which has been all the rage over the last year plus, PHI can be a blessing but also something that must be dealt with differently.  


A blessing in that there is so much knowledge embedded, and untapped knowledge embedded, within the health data. It can really be game-changing, but it's challenging because the security and privacy aspects must be kept in mind first and foremost. There's a lot of regulatory development on this front, which is welcome. There is a lot of consumer excitement, but also consumer concern, which is also understandable. So, you know, there's that two-by-two framework, plus the additional complexity.   


When I think about where does AI begin to show up for a single-state, not-for-profit, mission-driven, health insurer like us, our thinking is that these capabilities will start to show up in an internal-facing way. Then over time as we develop confidence with them and the broader ecosystem that we engage with demonstrates readiness, we start to make these capabilities much more available in engaging directly with our customers.  


But there are use cases at the cusp of this that I'll talk about, starting with the purely internal-facing ones. There are things like how we manage the security and privacy of our data, and there are AI capabilities – even generative ones, to a certain extent – that can be applied there. 


Plus anywhere there is knowledge encapsulation internally. As you can imagine, many of our associates spend time and energy answering questions based off a knowledge compendium – whether it's a benefit detail, a plan detail, or a combination of those in a customer service context. For example, a member contacts us with an issue they have contacted us about before. 


I think that's one area where we start to say, "Okay, can we figure out ways?" And that's what we're actively pursuing: can we figure out ways to bring ourselves into the generative AI world while still steering clear of PHI as a major contributing portion of it? Then we look at some other capabilities around internal processes that are highly dependent on knowledge encapsulation, reworking similar paths around knowledge.  


So, our legal and procurement processes are ones that have the potential to benefit from things like generative AI. The thing we're trying to solve for right now is how to engage with these large language models in a cost-efficient way, so that we can use them to drive ROI that can translate into affordability for our members.   


But at the same time, do so in a way where the data we're utilizing isn't fully exposed to the LLMs in the first place. We've all seen – and again, these are very early days, so it's not entirely unexpected – incidents of companies or employees using these capabilities in a way that ended up inadvertently exposing data that wasn't meant to be exposed in some way, shape, or form.  


That's still a space where there is more caution, as there should be. But the excitement about how far we can push on this, and how soon, remains very apparent. As we start to figure those things out, what are the engagement models with LLMs? For example, are we and others like us taking instances of LLMs that exist out there, instantiating them on our own infrastructure, and running them on our own GPUs, so that we can engage with an LLM fully without concern about data going out? The compromise you make there is that the parent LLM is advancing at a speed you are then stepped back from; there's a separation that happens. Or are there methods like vectorization and context setting that can be applied instead of raw data when engaging with a foundational LLM – whether it's LLaMA, OpenAI's ChatGPT, or Gemini – to still protect the data but get the benefit of large language models on the generative AI front?  
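The "vectorization and context setting" pattern Himanshu describes – retrieving only relevant internal knowledge and stripping sensitive fields before anything reaches an external model – can be sketched roughly as follows. This is a minimal illustration, not Blue Cross Blue Shield's actual implementation: the toy bag-of-words embedding and the redaction patterns are assumptions standing in for a real sentence-embedding model and a real PHI scrubber.

```python
import re
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a production system would use a
    # sentence-embedding model instead.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def redact(text):
    # Illustrative scrubbing: strip member IDs and dates of birth
    # before anything leaves the firewall. Real PHI redaction rules
    # would be far more extensive.
    text = re.sub(r"\bmember id:?\s*\S+", "[REDACTED-ID]", text, flags=re.I)
    text = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[REDACTED-DOB]", text)
    return text

def build_prompt(question, knowledge_base, top_k=2):
    # Rank internal knowledge snippets by similarity to the question,
    # then send only the redacted, most relevant context to the LLM.
    q_vec = embed(question)
    ranked = sorted(knowledge_base,
                    key=lambda doc: cosine(q_vec, embed(doc)),
                    reverse=True)
    context = "\n".join(redact(doc) for doc in ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {redact(question)}"

kb = [
    "Physical therapy: 20 visits covered per year. Member ID: 12345",
    "Dental cleaning covered twice per year.",
]
prompt = build_prompt("Is physical therapy covered?", kb, top_k=1)
# The prompt contains the relevant benefit snippet, with the member ID
# replaced by a placeholder before it could reach an external model.
```

The design choice this illustrates is that the foundational model only ever sees derived, scrubbed context rather than raw records, which is one way to keep PHI behind the firewall while still getting generative answers.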


Like I said, generative AI is all the rage, but there are all the previous aspects, which seem a little bit legacy now even though they're not that old: machine learning, deep learning, natural language processing, and the use cases that sit on that side of the house. The difference is that instantiating those capabilities behind a firewall is comparatively easier, and therefore protecting PHI, PII, and related data becomes easier.  


And that’s the difference we are experiencing, at least right now, between generative and everything non-generative.  


Sue: I love your thinking about starting internally versus externally and getting your arms around what are the right use cases and how do you protect PHI and these other things that really have to focus on data security and data privacy. Even more so in the healthcare industry than in other industries.


Do you have specific ideas in mind for the healthcare industry of where you might start, specifically with AI use cases? 

Himanshu: Internally, when we look at how our associates are building up their own knowledge bases and how much training they require up front – especially for forward-facing or member-facing functions like call centers – there's definitely a use case or two.   


That is: what must you train a new employee on from the ground up, versus where can we stand up copilot-related tools – and this is not Microsoft Copilot specifically, but the concept of a copilot – where they have an active assistant available to them? So even as they're on the phone with someone trying to answer a question, or on a representative-assisted chat, they have a knowledge compendium they can rely on.  


Right now, there's a person in the middle who acts as a fail-safe, making sure that any inadvertent hallucinations or other concerns that happen with these LLMs don't pass directly forward. That's one area. I think there are a couple of areas we're trying to play around with. 


One is this idea of benefits. Benefits in healthcare can be pretty involved and complex to understand.  There are baseline benefits that we have as part of our baseline plans, but many employers, in the interest of providing more exceptional, more differentiated healthcare services to their employees, also have variations on top of them. 


If you're a customer service representative, and you're getting a question from an employee about whether a benefit is covered or not, or what boundaries around it they should understand, it can be a pretty long conversation – or one that cycles through short conversations and long hold times. 


So both the experience and the background metrics can be very challenging from our perspective. That's one area where we say, okay, can we overlay our benefit design packages and internal benefit design knowledge?  


Again, there's no member PHI in here yet. These are generic. Even in cases of specialized benefit designs, there is no member PHI or PII in there; that is handled separately. So the representative should be able to leverage this, ask questions, and get answers in a way that helps them assist the member clearly and coherently.   


Healthcare, as an ecosystem, is increasingly dependent on entities relying on each other – vendors, or in the common terminology, business partners. Data exchange between business partners to push a business process forward is an increasingly well-trodden path in how companies engage with each other.  


Anytime you bring data into the picture, and services on top of it, legal and procurement processes tend to become complicated and involved. So one of the areas we're looking at is how to speed up the procurement processes, entity engagement processes, and legal reviews. Again, this is a lot of information where there is either repetition or a natural look-back to a knowledge compendium. There's no PHI or PII in any of this, but there is business confidential and business proprietary information that must still be protected. That said, we have more freedom to figure out how to engage with these large language models.   


So those are a couple of examples which seem pretty conservative, but the idea is we need to get our sea legs under us and learn from a couple of use cases, because we've all seen, on the other side, that there have been missteps when chatbots have taken on a life of their own.  


Sue: Yes, and I've heard that more and more companies are hesitant about chatbots right now because of what has happened in the past, and customers are resistant too. So now we have to retrain our customers to help them understand that chatbots are more intelligent than they were a few years ago.  


I love what you said as you talked about helping improve the process for training employees as well as improving the procurement process, working with vendors, smoothing out the legal process, and those sorts of things. A lot of that relates to business process efficiency.


How are you planning to measure the effectiveness and ROI of your initiatives as you move forward? 

Himanshu: Some of it is a pull forward from how we have measured the impact and effectiveness of innovative capabilities like advanced analytics, data science, machine learning, etc.   


Some of it is still under development, as it is unique to AI, and particularly generative AI. If we really step back two or three steps, the reason there is a two-level problem to solve is that previous revolutions – whether the advent of the internet, the move to desktops, the move to mobile, or even Web 2.0 – were primarily about creating efficient frontiers from a task, work-breakdown-structure perspective. Therefore, the attribution of efficiency can happen much more at the task level. Then, for the ROI a task produces – whether it's fraud prevention, better clinical care and health outcomes, or higher revenues and sales – you measure the two parts: what is the work-breakdown-structure efficiency, and what is the business outcome I'm driving on top of it? What we are doing differently now with generative AI – and it is still a developing frontier – is that efficiency moves to knowledge efficiency. 


It’s the first time we are primarily focused on encapsulating and streamlining the knowledge component.  And now, how do you value knowledge?  How do you value encapsulating knowledge in a way that goes beyond just efficiency but does contribute to business outcomes?  


The traditional measurement framework we are using, we are pulling forward. It is hardwired into business goals and outcomes, in line with the business plan. From an adoption and usability perspective, we look at how far, how wide, and how deep these solutions are being adopted within the company – that's another set of measures. The third is internal measures around how we are improving the capability step by step over time. So, if my team and I can get a unit of work done with ten units of effort today, it needs to go to eight, then six, then four, so that we continually become more efficient internally as well. But that knowledge encapsulation piece, and how we value it, is still TBD. It's going to take some playing around. 


If our audience and viewers have ideas around that, I’m all ears. I’m looking to learn.  


Sue: Himanshu, thank you so much. You shared so many insights today, from thinking about game-changing AI innovations to looking at an internal framework, de-biasing the data, and potentially using AI for recovery from data breaches. There were a lot of rich insights. Thank you so much for your time today. To all, please visit CDO Magazine for additional interviews.  


This is the first part of a two-part interview with Himanshu Arora covering pressing data, analytics, and AI topics. Stay tuned for the release of the second part, coming soon, or read more of our healthcare thought leadership here. 


Ready to get started on your AI journey? Get in touch with Wavicle’s experts.