

PegaWorld iNspire 2024: Elevating Customer Service: How Elevance Health & Virtusa Used Pega VoiceAI to Drive Efficiency and Savings

Elevance Health embarked on a mission to significantly enhance the work environment and job satisfaction of its customer service representatives. Rather than following traditional routes, Elevance Health forged ahead with an innovative approach, revolutionizing customer service through the implementation of VoiceAI. Discover the compelling narrative of how Elevance and Virtusa collaborated to leverage Pega VoiceAI, transforming the healthcare sector and delivering unmatched advantages to both customers and agents.


Transcript:

- Good afternoon everyone. Thank you so much for joining us here today. My name is Donna Sullivan Peck. I'm the Global Alliance Senior Director for the Virtusa relationship here at Pega. So we welcome you all to this session. It's pretty exciting that the partner that I've managed for seven years now, Virtusa, is such a close and trusted partner of Pega. They've been with us 23 years and really started as an engineering partner for us, and then moved into all kinds of solutioning and partnering that help solve the challenges that many of you face today. So it's a long-standing relationship, our executives are very closely aligned, we work together every day, and we're thrilled with the progress we've made. They're a really stellar partner and well thought of within our entire company. Today we're gonna talk to you a little bit about one of our biggest customers, and Virtusa's customers, Elevance, and I have with me today Sameer and Vijay, who will do their own introductions. But first, one housekeeping item: we do welcome your questions toward the end, so please feel free to use the mics. This is being recorded, so we wanna make sure that everybody can hear the questions, and if you gentlemen would just repeat them, that would also help our listeners know exactly what was asked. But without further ado, I will turn over a very exciting session on voice AI. And by the way, I do wanna note that Virtusa is our only Pega partner that has implemented voice AI for our customers. So it's an honor and a privilege to introduce my esteemed colleagues at Virtusa, Sameer and Vijay.

- There's always a time for something to fall down. I couldn't have asked for a better introduction; in fact, she stole a bit of my thunder as well. But thank you, Donna, so much for introducing us in such a kind way. My name is Sameer Menon, I'm vice president of Healthcare for Virtusa. I've been with the company for over 22 years. I specialize in technical and business transformation initiatives, so I've done several of these. I have with me my partner in crime, Vijay Pallapati; he has been driving delivery for us across all of our strategic accounts, and he's also the practice leader for Pega across our company. Thank you so much for joining.

- Good afternoon, thank you.

- So once again, thank you so much for coming here. Elevance Health has asked us to present, so I really think of it as an honor to be able to represent Virtusa, Elevance Health, and Pega here in front of this audience; I appreciate you showing up. I'll start with a quick introduction to Virtusa. For those who have not worked with Virtusa, we are a privately held company, we are about 30,000 employees globally, and we operate out of 25 countries. We are very proud to say our NPS scores are right up there in the nineties, which is very important for us. We have a global customer base of about 250 customers. We take a lot of pride in being engineering-first. We started our journey building products, not necessarily working for enterprises, and through that journey is how we started helping Pega build their products. One of our first steps into the enterprise was really to bring that concept of platforming and engineering to the enterprise. We have tried to stay true to that core value, and that's what we try to take to all our customers, globally, and across all our industries. Talking about partnerships, Pega is a very important partner for us; we have been working with them for over 20, 21 years, and there's a lot we are gonna talk about with Pega and voice AI as we go through this. On Pega, we are about 3,500 people globally in terms of our core strength. We take a lot of pride in doing modernization initiatives for our customers. We have set up several upgrade factories. We have won several awards. If you heard this morning, we also won the Blueprint Award for Innovation; Blueprint is one of the first products that's been launched and there is already an award against it, which just tells you how well integrated we are into the Pega ecosystem. We work very closely with the sales, marketing, and engineering teams; one such effort is voice AI. We were the first, and currently the only partner, to implement voice AI across the Pega ecosystem, and that gives us a very good inside line and strength within our engineering practice. We also won the Healthcare Business Transformation Award of the Year last year, so that puts us right up there with this audience, which is primarily healthcare. Now to the most important group in this trio, alongside Virtusa and Pega: Elevance Health. As you may know, they are a Fortune 20 company, one of the largest out there among the healthcare groups. They operate under three brands: Anthem, Wellpoint, and Carelon. They have about 115 million consumers and members globally, and that alone should give you a sense of the complexity that is there. They operate in around 14 states in terms of commercial products and 22 states in Medicare and Medicaid; that is as complex as it gets from a platform perspective, whether you take customer service, provider lifecycle management, or any of the variations that are there in serving their customers. A little bit about our journey with Elevance Health. We started about eight years back, and one of our first initiatives was really advisory. We were asked to come in and speak about what it would take for Elevance Health to improve the quality of the applications, reduce the tech debt, and figure out what the upgrade paths were.
And we spent a lot of time with the Elevance Health leadership looking across 40-plus applications within Elevance Health, and we used our tool, Triple R, to do a full analysis of their platform. We were able to tell them which platforms needed an upgrade, which would need a complete transformation, and which would need tech debt reduction. That was very well received; it was one of our first journeys with them. As we went along, we realized that there's no upgrade possible without also doing modernization, in terms of adopting the right features that come out, and one of the first projects we did right after the upgrades was modernizing these platforms. That's where our journey in customer service started with Elevance Health. We have taken them through multiple upgrades and multiple modernization exercises, we are engaged with them in user experience transformation, and we also set up the Pega COE across the board, which essentially provides the full governance required across all 40 applications: a plan for which Pega feature will be implemented when across these applications, and from there a multi-year plan for how adoption will happen, not only at the application level but also with the users who are going to adopt these things. Anything else you would like to add, Vijay?

- I think one of the fundamental things we have followed was to promote enterprise reuse across all the Pega applications. How do we build those common layers? How do we build reusable components and promote more reuse across the broader service experience platform applications, across GBD, commercial, and provider lines of business? The user experience part was also very critical, because there was inconsistency across the different applications; every application was implementing its own way of doing it. We have done usability tests and assessments and made sure that the usability is consistent across all the applications, based on the user base and the corporate standards, so the agent can focus on just the information that's needed on the screen at the right time. So a lot of programs have been done as part of the modernization, and UI/UX on Pega's UI is something we take a lot of pride in having done for Elevance.

- That's right, thanks Vijay, and Vijay has been spearheading that for us for a very long time, among a few of us sitting here in this room who started and have taken us through this journey with Elevance Health. So my gratitude and thanks to all of you. Let's get a little deeper into what we're talking about today: the contact center and what the key elements of a contact center essentially are. We spoke a little bit about the complexity of the business, the multiple states, the different products, the number of members, and the complexity that comes along with this particular stack. So we are talking about customer engagement, constant customer engagement, making sure that the members are happy through all the different channels, whether you take email, chat, phone, or web. It also takes into consideration data analytics, in terms of how we constantly bring insights and information back onto the screen of the agent when they're engaging to provide customer service. At the end of the day, it's all about how fast and how well we are keeping our members satisfied through the process. Operational efficiency: we speak a lot about average call handling time, resolution time, and then obviously member and agent satisfaction. There is something unique about today's voice AI discussion. As much as we talk about member satisfaction, Elevance Health has taken a little bit of a different approach. They asked, "Are the agents satisfied? What is the experience for the agents?" Because we do realize that the way an agent improves the quality of service is by actually improving the quality of engagement they have with the applications, and our focus has always been on the friction points we have seen across all the agents who are working. We are talking about 26,000-plus agents at Elevance Health, and there is churn involved; we went through the whole COVID cycle of heavy churn. There's a training aspect associated with that. There's an aspect of how fast we are able to reflect information back to the agents, for them to be able to quickly cater to the members. So that is going to be the second half of the presentation today, where Vijay will show some of the agent experience and how the voice AI capabilities improve on that. Just to take a few examples: access to KM resources. How many times have we been on phone calls where we hear, "I'm sorry, my system is acting up"? At that point in time, they're just looking for a prompt from the system: what's the next step that needs to be done? Script adherence and compliance: we are in the world of healthcare, and HIPAA has very strong guidelines around what questions need to be asked and when, and how we track those questions is very important. These two things together are also a key indicator of how this impacts the learnability of the agent. As we say, somebody's always listening on a recorded line, which means somebody's also looking at how well they're following the scripts and how well they're adhering to compliance. All these things can be automated in the current world, and we'll show you some ways we are able to achieve that through voice AI. Data interpretation and recommendations: this is essentially about what we call, in voice AI, case suggestions.
Because as the member speaks, we are able to present back the likely intent or service request that will be called on, based on the conversation happening between the member and the agent. It is also about the next best action. We talk about data quality: multiple addresses being there, not having a single source of truth; all of these significantly affect the quality of service that is given. And finally, repeated questions: we know that sometimes screens ask a question again and again. How do we look at that? Just having voice AI is not enough; we need to make sure that if a piece of information is already captured, we don't need to ask for it again, and we can reflect it back on any screen without asking questions. So that is the general sense of what we'll cover today. I'll invite Vijay to speak and get into the details of the presentation.

- Thank you, thanks Sameer. Sameer spoke about the key aspects that impact the agents' experience, and this morning we heard in the keynote from Kareem that Pega Customer Service in 2024 has four components around it: Knowledge Buddy, agent trainer, Voice AI, and suggested articles. So let's see how all these things help us elevate the agent experience with Pega Voice AI. So what is Pega Voice AI? Pega Voice AI is an intelligent, AI-based voice recognition system that leverages machine learning and NLP to deliver real-time, actionable insights to an agent on an active call. The agent is on an active call, talking to customers, members, brokers, or providers, and Voice AI is listening to those conversations and providing the right insights and actions during that active call. So it comes with capabilities like speech-to-text, NLP, and real-time insights. Pega introduced this in 8.6, back in 2022, and Voice AI acts as a co-pilot to every agent. Virtusa and Elevance partnered with Pega to implement a first of its kind in a healthcare payer ecosystem. On the next slide, I will try to walk you through some of the use cases that we have implemented at Elevance to help drive efficiency, savings, and improved agent experience. The first one is suggested intents. When an active call is happening between the member and the agent, Voice AI listens to the conversation and has the context of the conversation. It tries to recommend the next best action. For example, there is a conversation about an address change; somebody is calling the agent to make an address change. The system immediately picks up on that and prompts the agent to click on the address change. In the earlier scenario, the agent had to go and find where the address change was, click on it, come back, and wait for it to load. But here the system is ready and prompting the user to click on it based on the active conversation. Form filling: again on the same active call, when the agent is talking to the member, broker, or provider, the system automatically understands the conversation and starts filling the form, while the agent just focuses on the conversation. The system automatically fills the data into the screens based on the active conversation. In that way, the agent just has to accept whether the data the system has prompted is accurate or not; the agent is still in control of the whole process. It's not that the system is driving everything for them; the agent just has to validate that the data entered by the system is accurate, and this improves the overall quality of the data captured in the system while the active call is happening. KM articles: this is another important one. During an active call, when a member asks about certain things the agent is not sure of, the agent has to go back to the knowledge repository and look for the content they need help with. Let's say a member is asking about his COVID-19 benefits. The system hears that conversation and immediately starts populating the COVID-19 benefits article on the screen. In the previous scenario, the agent had to go search the knowledge repository, do 10 or 15 clicks to get to the material, and then come back and read it. So the agent doesn't have to stress about it; they just have to focus on the conversation, and the information is right there on the screen.
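To make the form-filling idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: the regex stands in for Pega's pre-trained entity models, and the field name mailing_address.street is a hypothetical mapping, not a Pega schema.

```python
import re
from dataclasses import dataclass

@dataclass
class FormSuggestion:
    field_name: str         # hypothetical form field identifier
    value: str              # value detected in the live utterance
    accepted: bool = False  # the agent stays in control and must confirm

# Toy address detector; the real feature uses pre-trained NLP models.
STREET_RE = re.compile(
    r"\b(\d{1,5}\s+\w+(?:\s\w+)*\s(?:Street|St|Avenue|Ave|Road|Rd))\b", re.I)

def suggest_form_fields(utterance: str) -> list[FormSuggestion]:
    """Map entities detected in a live utterance to form fields for agent review."""
    suggestions = []
    if match := STREET_RE.search(utterance):
        suggestions.append(FormSuggestion("mailing_address.street", match.group(1)))
    return suggestions

# The agent sees "123 James Street" pre-filled and only has to accept it.
for s in suggest_form_fields("I moved, my new address is 123 James Street"):
    print(f"{s.field_name} -> {s.value!r} (awaiting agent confirmation)")
```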
Script adherence is a very important compliance item: the agent has to be compliant with the script that was given to them, because of the nature of the market conditions or the state the member is calling from, and because of branding, the agent has to follow a certain script. The system validates what the agent is speaking on an active call and shows whether they have adhered to the script or not. This is another important use case where the system is monitoring what the agent is saying to the live member. And the last one is the post-call transcript. Once the call is wrapped up, there's an option to download the transcript of the active call, which can be reviewed by the business call center operations managers for audit and feedback purposes. Let me show you how these use cases look in real time on a mock screen: if an agent takes a call, how do the use cases really reflect on an active screen? The next screen is a mockup of a standard Pega customer service application, where on the left-hand side you see all the intents and service requests, and the right-hand side is the 360-degree view of a member, where you'll see all their information. Script adherence pops up on top, where the agent reads out text that is clearly what they need to speak, with an icon to the left of it as a visual cue: if it's green, the agent has adhered to that part of the script, and if it's gray, they have not. So the system is actively listening to the conversation, validating, and giving feedback then and there. Likewise form filling: let's say a member asks for an address change and mentions an address, like 123 James Street. The system automatically populates it; it knows exactly which field to populate, and the agent just has to click the green icon next to it. With this, the agent is just focusing on the call; they don't have to type, they just have to validate. Likewise the suggested intents: the system knows, based on the active call, the next best action it can recommend to the agent, and likewise the knowledge articles. So this is what the real-time experience of an agent in a live application would be. Now let me go a little deeper into the approach we took to make this a reality and how we implemented this entire program. We started with a proof of concept, because any new capability comes with pros and cons. We needed to understand the product, the feature limitations, and what we could best leverage for the use cases we spoke about on the previous slide. So we did a mapping of product capabilities to those use cases: what is the best thing we can leverage from the product features? Real-time insights can be used for knowledge suggestions and next best action; the speech-to-text feature can be leveraged for script adherence and form filling. So we did some homework in terms of understanding what could be leveraged from the capability. And we also did an assessment of infrastructure: what exactly do we need from an infrastructure support standpoint? Since this was a first of its kind, we really wanted to know what we needed to bring from our ecosystem and from the Elevance ecosystem to keep it ready.
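Here is a minimal sketch of how a script-adherence cue like the one described above could work, assuming a simple text-similarity check over the speech-to-text output; the threshold parameter is hypothetical, though it maps naturally onto the 95% versus 100% adherence targets discussed later in the Q&A.

```python
import re
from difflib import SequenceMatcher

def _words(text: str) -> list[str]:
    # Normalize away punctuation and case before comparing.
    return re.findall(r"[a-z0-9']+", text.lower())

def adherence_ratio(required_line: str, spoken_so_far: str) -> float:
    """Best similarity between a required script line and any spoken window."""
    spoken, need = _words(spoken_so_far), _words(required_line)
    best = 0.0
    for i in range(max(1, len(spoken) - len(need) + 1)):
        window = " ".join(spoken[i:i + len(need)])
        best = max(best, SequenceMatcher(None, " ".join(need), window).ratio())
    return best

def cue(required_line: str, spoken_so_far: str, threshold: float = 0.95) -> str:
    # Green icon = adhered; gray = not yet spoken, or not close enough.
    return "GREEN" if adherence_ratio(required_line, spoken_so_far) >= threshold else "GRAY"

print(cue("Thank you for calling Anthem, this call may be recorded.",
          "thank you for calling anthem this call may be recorded, how can I help"))
```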
So we understood that Voice AI can only use a softphone, and that there is a Voice AI desktop, called VAD, which is software pushed onto agent desktops that connects the agent's microphone and the speaker active on the call. Because this is software that sits on your Windows machine, delivered as a software push, it asks agents for permission to access the microphone, just like when you install Teams for video calls and it asks for permission to access your camera or microphone. So that is the software that connects with Pega Voice AI and is deployed onto the agent's desktop. Voice AI also requires a separate node for real-time data flows and queue processors. And of course there are certain constraints we have seen, things Voice AI cannot do; I'm sure these will be addressed in future releases. Voice AI is compatible only with Windows machines as of today, and the knowledge suggestions feature only works with the Pega KM framework, which is installed across all the contact center applications with a tight integration. Form filling does not support some of the user controls we have, like radio buttons and check boxes, and Voice AI does not support Citrix and VDI; suppose you're using VDI access from a remote location rather than from a laptop, then the setup of the voice and speakerphone for Voice AI is not supported as of today. And it's not workflow-based automation; unlike Alexa, you cannot call out, "Hey, start the case," or "Kick off the service request." It cannot do that kind of voice-activated workflow. And how do we align these capabilities with our ecosystem? What kind of readiness do we need? The applications have to be ready: if you're integrating Voice AI into any of your existing applications, the application code and the design architecture have to be ready, and the infrastructure, the app servers, all of it has to be ready. The VAD integrations have to be verified to ensure the push reaches the agents at the right time. KM configuration has to be done, and entity and topic configurations have to be done. And at the end, we picked the right use case to prove out the concept: what is the use case that can help us navigate all these steps and give us the pros and cons of what to consider for the implementation? We considered form filling the primary use case, because it would help us validate the accuracy and quality of the data across all the screens. Let me give you a high-level view of the architecture, to give you a glimpse of how complex the components across Voice AI are. To start, let me give you a live example. When a member calls an agent, the call comes through an IVR, the Genesys IVR, and goes to the agent's desktop. Once it's there, the agent talks on a softphone, and that's when the VAD, the Voice AI desktop, picks up; it hears the conversation from both the agent and the customer, from the microphone and the speaker, and transports that speech to Pega Cloud. Pega Voice AI resides on Pega Cloud. Your application can be on Pega Cloud or it can be on-prem, but the Voice AI component resides on Pega Cloud.
So when the VAD, the Voice AI desktop app, pushes the voice through the audio router to Pega Cloud, Pega Cloud takes responsibility for converting that voice to text. It uses a speech service with a pre-trained AI model; the combination of the speech service and the AI model produces a transcript, converted from voice to text, which is then pushed to your app server for further processing. So if you look at the journey here, there are a lot of components: from the member to the IVR, the IVR to the desktop; the desktop has a softphone and the VAD, which connects and pushes the voice of the customer and the agent to Pega Voice AI; Pega Voice AI converts the audio to text through the speech service and pre-trained models; and the transcript is finally sent to the app server. The app server then uses the channels and event services and recommends the next best action; the case suggestions, knowledge suggestions, and form filling, all the good stuff we have talked about, get pushed to the agent in the application. And if you look at this view, the green components are the ones that come out of the box with the Pega capability, and the purple ones are the things you have to blend in from your ecosystem to make this entire thing work. We spent a lot of iterations ironing out all these components; things have to flow from the member and come back to the agent to give the response on time. It takes close to a second to get that going live, because it's all happening on an active call, and all these components have to work in place to get the response to the agent within a second. Let me also share some of the best practices we learned as part of this program. Since we were early adopters of this capability, we started with Pega 8.6, and Pega was soon rolling out a lot of new improvements, new capabilities, and upgraded pre-trained AI models for the feature. So we would recommend always trying to use the latest version of Pega; if you are down somewhere on 7 or 8 and you're thinking of leveraging Pega Voice AI, take the latest version, where you have all the newly trained models and better intent detection for KM articles. Second, Pega Voice AI desktop updates. Just like your Windows machine, where a lot of software pushes happen and you have to restart, the VAD also goes through a certain churn of frequent updates. What does that mean? It means you have to do a lot of testing; you have to accommodate these changes during your testing cycle. While it's a small software push, from the QA standpoint you have to ensure that nothing has broken. So both of these are very important to keep in mind; the maturity is improving version by version, and today, in 2024, I'm sure a lot of these things have been addressed.
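As an illustration of the member-to-agent loop just described, here is a sketch of one hop through the pipeline. Every function is a stand-in: speech_to_text() represents the Pega Cloud speech service plus the pre-trained model, and derive_insights() represents the app-server-side processing; neither is a real Pega API.

```python
import time

def speech_to_text(audio_chunk: bytes) -> str:
    # Placeholder for the cloud speech service; returns the utterance text.
    return "I need to change my address to 123 James Street"

def derive_insights(utterance: str) -> dict:
    # App-server side: suggested intent, KM article, form-filling candidates.
    insights = {"intents": [], "articles": [], "form_fields": []}
    if "address" in utterance.lower():
        insights["intents"].append("Address Change")
        insights["form_fields"].append(("mailing_address", "123 James Street"))
    return insights

def on_audio_from_vad(audio_chunk: bytes) -> dict:
    """One hop of the loop: VAD audio in, agent-screen insights out (~1s budget)."""
    start = time.monotonic()
    transcript = speech_to_text(audio_chunk)  # happens on Pega Cloud
    insights = derive_insights(transcript)    # happens on the app server
    insights["latency_s"] = round(time.monotonic() - start, 3)
    return insights

print(on_audio_from_vad(b"\x00" * 320))
```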
Understanding dialects, and entity detection accuracy: there are a lot of dialects out there, and a lot of synonyms have to be created based on the language customers speak; for the voice to be recognized, the synonyms have to be created and the entities have to be detected in the right way to accommodate that diversity of dialects. Pilot rollout: this is a very important aspect we had right from the beginning. How do we take this to the agents incrementally? We didn't want to push it as a one-time thing and say, "Okay, guys, start using this." We wanted to go the incremental way, to test the waters: how can agents start adopting this, how can they start learning to use it? So we started by rolling out to 10 to 15 users in the beginning, just to get some feedback, to get a feel for it, and to get incremental feedback on how we could improve our application. As part of that, we used skill-based routing, as shown in the sketch after this paragraph. It's a control you can deploy so that when a call comes in, if the Voice AI skill is enabled for certain agents, Voice AI kicks in; otherwise the rest of the users continue their work as usual. In that way, we chose the number of users incrementally: when we got good feedback, we added 100 users, and as the feedback stayed good, we kept going. With that approach we rolled it out from zero to 2,000 users across all lines of business. That really helped us go steady and realize the value incrementally. Adoption and training is another important aspect. While we bring all these things in, agents have to adopt them. We ensured that Voice AI is loosely coupled: for example, if for some reason Voice AI is not working, will something break, or will the active call stop? No. It's loosely coupled; the active call will still continue, and Voice AI can be fixed later. It doesn't have a hard dependency on the active call, but it adds value; that's why it's called a co-pilot for the agent. To drive adoption and training, we deployed a lot of our trained teammates across all the call center locations to provide confidence and support, both to the agents and to the contact center operations managers, to help them understand how to use the feature and how Voice AI can help with the use cases we have shown. Let's talk about some of the outcomes we were able to achieve with this program. We were able to reduce manual steps, saving up to 10% of the agent's time, because, like I said, KM articles and intent suggestions save a lot of agent time. The agent is now just focusing on the call; the system is doing its stuff and giving all the recommendations. We reduced system handling time through form filling, so agent efficiency has increased; the agent is not sitting there typing and filling in the forms, the system is doing it for them. They just have to focus on the call and provide the best service to the members or brokers. 80% accuracy in detection and form filling: that is how accurately Voice AI is able to detect the content, hear the speech, convert it for the form, and fill it in; the detection ratio is 80%. And script adherence has also improved on the ground.
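A minimal sketch of the skill-based-routing gate described above, assuming a hypothetical "voice_ai" skill flag on the agent profile; growing the pilot from roughly 15 users to 2,000 is then just a matter of tagging more agents.

```python
PILOT_SKILL = "voice_ai"  # hypothetical skill name used to gate the pilot

def route_call(agent_skills: set[str]) -> str:
    """Decide at call start whether Voice AI kicks in for this agent."""
    return "voice_ai_enabled" if PILOT_SKILL in agent_skills else "standard_desktop"

# Agents keep working as usual unless they carry the pilot skill.
print(route_call({"claims", "voice_ai"}))  # -> voice_ai_enabled
print(route_call({"claims"}))              # -> standard_desktop
```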
Agents know that Voice AI is watching them; it tracks pretty much every line and every word they speak, so they know they have to speak to the script that was given to them. Overall there was a 90% improvement in agent behavior. They're all confident that there is a system backing them up, a buddy, a co-pilot helping them drive some of these things, and they're more focused on the conversation with the member, rather than clicking around the system and figuring things out. We heard a lot of good feedback. The agents liked the way Voice AI works, and they like that when they work from home they can take it with them, because we often see things that only work on-prem or don't work from a remote location. As long as they're using the laptops or desktops they were provided, it works, and there has been a lot of good adoption; they really feel it is helping them rather than adding an extra burden. And the pilot rollout strategy really helped us not put a lot of stress on them in a single go, but slowly guide them, give them some time to understand, and show them that this really helps them excel in the conversations they're having with customers. On that note, I will hand over to Sameer to talk a little bit about how the future of the contact center is going to evolve; Pega Voice AI is just a starting point, and we heard today about the next-gen AI in the keynote from Kareem, so Sameer can say a few words about that.

- Thank you, thank you so much, Vijay. Quite honestly, I don't wanna predict what the contact center is gonna look like, because the possibilities are very, very vast. A lot of us took a flight over from home to Vegas today. Do you know what percentage of commercial airliners have autopilot? Pretty much all of them. One of the first technologies behind the autopilot, back in the 1920s, was the gyroscope, which was used for keeping the aircraft stable; it was later used on railways for monorails. Many years passed; we went through the transistor, electronic, and digital eras, and the second big technology, giving the plane an accurate path, was GPS. So putting all those things together, technically we don't need a pilot, but regulation-wise we need a pilot there. And what does the pilot do? The pilot does three things: makes sure we take off well; takes care of the flight path, monitoring it so that if there's a weather issue we can steer around it; and third, lands the plane. I see the future of the contact center being along those very same lines. We want the agents to spend time on the things that are most valuable for the members and for the company they're serving, and to start looking at calls as a dashboard: this one is going fine, it has taken off, it's on path, no intervention required here; there's an intervention required there, let me jump in. That's the way we see the contact center going forward. And given where we are, I think we are in a beautiful place right now where we can start greenlighting. So hopefully next year, when we come back, we can tell you a little more about the additional progress we have made on voice AI and the AI capabilities that Pega brings together. We will quickly cut over to questions now. Anybody who has questions, I would request you to please come over to the mic and ask, so it doesn't get cut out of the recording.

- [Audience Member 1] Hi, thank you so much for sharing your experience. I have two questions. One is, how did you monitor accuracy? Are there some built-in capabilities within the platform to quantify the quality of the solution's NLP? And the second one: as a product manager who wants to implement voice AI powered use cases for my organization, what roles and expertise should I consider for the team working on the implementation? Should I have machine learning and data science on the team, or is some of that off the shelf so that expertise is not required? Thank you.

- Yeah, I didn't get the first question, but I heard the second question; maybe we'll come back to the first later. I think a lot of it comes out of the box. You don't have to have any data science, because all the AI models are pre-trained and deployed. You have to partner with Pega because, as you have seen, most of the components reside in Pega Cloud. The heavy lifting is done there: the speech is converted using NLP and the trained AI models to understand the right synonyms, and all that work is done there. Obviously the AI has to learn, self-learn, so there are a lot of reports you can look at in terms of accuracy and what has worked. There is still about 20% of the content that is not detected correctly. You can give that feedback to Pega, and Pega will tune their AI models, and the iteration will happen to incrementally get to what you're looking for. In terms of what skills and assets you need in-house, it's primarily understanding the Pega Voice AI ecosystem that I've described: the VAD integrations, how the VAD works, for which there are installation instructions and guidelines provided by Pega, you just have to follow them; and second, from your application standpoint, any refactoring or architecture-level work you have to do to support some of these things, based on what Pega version you are on. But other than that, I don't see big data science or machine learning skills being needed; Pega has done that for you. So you just have to start thinking about how you install it and how you use it.

- [Sameer] The first question, did you want to clarify? If you don't mind, please?

- [Audience Member 1] Yes, it was about accuracy. You mentioned 80%; how do you measure it?

- Yeah, there are a lot of reports that come in the background. For example, we talked about form filling, where the agent has to manually check whether the data was filled accurately into the field. If for some reason the form was not filled correctly, the agent says no. That's recorded in the system: the system prompted something but the agent did not accept it, so that is an inconsistency. Then we have to go back and look at the dialects in the speech. What was missed? Are there additional synonyms we have to add? Or is there a problem in the AI model? We have to go back and fine-tune that. There's a lot of learning happening; like I said, 80% to 85% the system does automatically, and there is a 15% where you have to intervene, train the model, pull that data from behind the scenes, and help it along.
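A sketch of how that accept/reject feedback could roll up into the roughly 80% detection accuracy quoted earlier; the event shape and field names here are illustrative, not a Pega report schema.

```python
from collections import Counter

def detection_accuracy(events: list[dict]) -> float:
    """Share of form-fill suggestions the agents accepted."""
    tally = Counter(e["agent_action"] for e in events)  # "accepted" / "rejected"
    total = tally["accepted"] + tally["rejected"]
    return tally["accepted"] / total if total else 0.0

events = [
    {"field": "mailing_address", "agent_action": "accepted"},
    {"field": "mailing_address", "agent_action": "accepted"},
    {"field": "date_of_birth",  "agent_action": "rejected"},  # goes to the tuning queue
    {"field": "member_id",      "agent_action": "accepted"},
]
print(f"detection accuracy: {detection_accuracy(events):.0%}")  # -> 75%
```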

- [Audience Member 2] So you mentioned at the beginning of your talk that you're the only one to have implemented this, and you can be really proud of that, but why do you think you're the only one and no one else has done it?

- That's a great question, and I think I touched on it a little earlier. Our engineering heritage, our engineering-first approach: we are not shy about picking up the first technology or the first release of a product that's out there. Secondly, and maybe more important, we have a willing customer in Elevance Health, a huge partner who's willing to take that flight with us. Those two things played a very big role for us.

- [Audience Member 3] Hello, hello there. Quick question around accuracy, if you can elaborate a little bit. What was the accuracy approximately when you started, and how did it improve incrementally? How long did it take? And also, what was the implementation timeline? It would be good to get some clarity on that.

- Sure, I'll start with the implementation timeline. We started with 8.6, the version in which Pega introduced Voice AI, and by the time we completed our POC, Pega had made changes for a lot of the things we had found in a higher version. So that meant we had to upgrade our application to that higher version, which we did. There were a lot of learnings for us; now, with Infinity, you don't have to go through that churn. It took us approximately three to four months to complete the POC, with the churn we had on all the components. Then it took another four to six months to get to the finish line and plan the rollouts incrementally, because we were focused on not doing a big-bang rollout in one go; we wanted to go incrementally. Overall it was a nine-month journey from POC to rollout. And can you repeat the first question?

- Accuracy.

- "Accuracy," so accuracy, I think when we started this, the accuracy was approximately like 75% to 80% where some of the things that we have learned was the dialects, the synonyms, that the customer speaks were not picking up, we have to train, we have to work with Pega to include them into their models. So that iterations between us and Pega took a while to improve the accuracy, but it was not like a huge gap. We started maybe a 5% to 6% improvement has happened, but yeah, but it was just constantly learning how system is converting speech to text and most of the time you find some of the things were not being recognized by the system, you go and give feedback to the Pega and then take care of it and then again the next version gets better.

- [Audience Member 3] I see.

- And we got phenomenal support from the Pega engineering team, the product team. Because when you go on your first rodeo, you need that support, right?

- [Audience Member 3] Oh yeah, they're fantastic to work with, for sure. One last question: you've described some of the successes; can you describe some of the challenges you faced and how you overcame them? That'd be great for anybody looking to go into this arena.

- Yeah, one of the challenges was the constant churn in versions and new features coming out; you sometimes have to keep your application on the latest version of Pega. Earlier we used to take our own sweet time to upgrade to higher versions, planning it for next month, and then the next month; now, if you have to get this feature, you have to go and upgrade your application to the higher version. And what does that mean? QA testing, functional testing, performance testing, all those cycles have to be quicker, faster, and better. So some of those challenges were there too, in terms of timeline, but Pega was fast enough to provide us resolutions on any of the product-related challenges or issues we were facing, because we were one of the early adopters when Voice AI was rolled out back in 2022.

- There's a question there.

- [Audience Member 4] Adding on to that, when you think about the iterative pilot you went through and how you rolled it out, you mentioned 15 to 20 contact center agents. Did you do that across all of what I might consider your servicing intents? Or did you hone it in and say, "Right, there's 100 reasons a customer could call us, but we're only implementing this on five of the 100 cases"? Or did you go carte blanche, all 100?

- Yeah, from the use case standpoint, we went all in. On the user side, we wanted feedback on all the work we had done from a targeted set of users, and we also ensured those targeted agents were supported by our teams on the ground to handhold them through the initial launch. Even as part of change management, there was a lot of learning for the L&D team too, because this is a new kind of thing and they needed to understand some of the challenges. So we had to go through change management, train the users, sit with them, and coach them on what was going to happen. We picked only a slice of the users in the beginning, but they got access to everything.

- Yeah, and very important, I think, is how well connected we need to be with the operations team, the training departments, and IT; that triangle is so important to be able to really do this, because there is patience required on all sides.

- [Audience Member 4] And one final question. When you mention knowledge management, may I ask what KM tool you are using? Are you using Pega Knowledge?

- Yes, yes, Pega Knowledge Management is a framework that has been there. We deployed it in all our contact center applications; it's a horizontal application that's there in everything. So it's tightly integrated with the application, and it knows exactly which article it needs to suggest based on the call.

- Okay, I think we're out of time over here.

- I would like to thank, sorry.

- Does anybody have one more question? Okay, sorry.

- [Audience Member 5] No worries. So, a question on script adherence; two questions, actually. The first one is: how strictly do they need to stick to the script? Or can it detect if they're close enough to the script that it'll pass?

- We used to keep it at 95%, somewhere around there. But in all fairness, the expectation is 100%; they have to speak to the script, because there'll be branding that Elevance might use, Blue Cross Blue Shield in certain states, and they have to use that branding based on the local conditions and which state the member is calling from. So right now we're tracking that compliance at 100%, but initially, when we started training them, we left it at a lower percentage; as the maturity curve improved, we now say you need to follow the script 100%.

- [Audience Member 5] Gotcha. And the second question is, does Voice AI have the ability to detect call sentiment or the tone of the call based off certain key phrases or words? And if not, is that-

- Yeah, with the help of NLP, it does a lot of sentiment analysis based on the word usage of the customer and the customer's confidence, whether the customer has high or low confidence, and their tone, so that the agent can give the appropriate attention based on the customer's voice and the words they used. Voice AI picks this up, and NLP tries to interpret some of that sentiment and reflect it to the agent during the active call.
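As a toy illustration of that sentiment cue, here is a keyword-based sketch; the real feature interprets tone and word usage with NLP models, not a word list, so treat every name below as hypothetical.

```python
# Illustrative word lists; a production system would use trained NLP models.
NEGATIVE = {"frustrated", "angry", "upset", "unacceptable", "cancel"}
POSITIVE = {"great", "thanks", "perfect", "appreciate"}

def sentiment_cue(utterance: str) -> str:
    """Rough per-utterance sentiment to surface to the agent during the call."""
    words = set(utterance.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "POSITIVE" if score > 0 else "NEGATIVE" if score < 0 else "NEUTRAL"

print(sentiment_cue("I am really frustrated because this is unacceptable"))  # -> NEGATIVE
```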

- Sounds good. We are out of time today, but thank you so much for showing up, and we are deeply honored that Elevance Health allowed us to come and speak on their behalf. Thanks to Pega as well for giving us this stage. You can always reach out to us if you have any questions. You all have a wonderful PegaWorld.

- Thank you.
