
PegaWorld | 49:25

PegaWorld iNspire 2024: The Agent is Dead, Long Live the Agent: Pega's GenAI™ Quest to Create the Ultimate Customer Service Agent

Embark on an exploration into the future of customer service, where GenAI stands poised to revolutionize the way customer service agents get work done, evolving the entire industry forward.

We will unpack the multitude of ways in which Pega GenAI enhances the experience of your customers and agents, and streamlines operational efficiency. At the end of the session, you will be not only enlightened but also empowered to embrace the possibilities that GenAI offers in shaping your customer care landscape.


Transcript:

- This might be your last PegaWorld session, but I still want to bring the energy. Is that okay with you? Okay. Okay. But if I bring the energy, I am gonna expect some back. Is that a deal? Right, let's test that. PegaWorld make some noise. Nice. Okay. My name is James Dodkins. At Pega, I like to think of myself as the chief content officer. I'm not that. They won't gimme the title. I've asked repeatedly, but the reason I think of myself that way is because I spend most of my time taking sometimes very complex ideas and then creating content about those ideas to teach people about it in fun and interesting ways. And hopefully that's what me and Jeremy are gonna do for you today. Now, you wouldn't know it from his picture, but this is Jeremy.

- I'm Jeremy. I'm nowhere near as fun or interesting as James, which you will see very quickly. I will look after a lot of our strategy and go to market for customer service primarily. And like a lot of people in the room, I've probably spent a lot of time in GenAI for the last, what, 6, 10 months and GenAI hallucinates. And one of the things it does is it puts hair on you. And I figured who's to argue with GenAI? Let's run with it. So we're running with it.

- Okay.

- Cool. And back to you.

- So, "The Agent is Dead, Long Live the Agent." That's a provocative title, isn't it? Is that the only reason you're here, by the way? Okay. I told you.

- It worked. It worked.

- Right, before I talked to you about GenAI and Pega's GenAI quest to create the ultimate customer service agent, I wanna talk to you about a topic that has been described as the world's most dangerous idea. Ooh, any guesses? Thanks. Well, I'll put you all out of your misery. The topic I wanna talk to you about is drum roll, please. Very good. Transhumanism. Now, some of you might ask, what is transhumanism? Some of you might ask what? It's a good question. Good question. Let me, so a transhuman is a being that resembles a human in most respects, but has powers and abilities beyond that of standard humans. A bit like that, probably. So transhumanism is the belief or theory that the human race can evolve beyond its current physical and mental limitations. Ooh, spooky. Transhumanists talk about some fairly tame ideas like using medicine to help us live longer and to help us do more. But they also talk about some more, let's call them innovative ideas like genetic enhancement, digital brain modification, like putting chips in your brain, mechanical augmentation like robot arms, cloning and fun stuff like that. But basically, transhumans think that we should all be using technology to become beyond human. I needed that thing Don had earlier, though, beyond Don, human. It doesn't really work when you do it yourself, but beyond human. Now, I'll give you an example. There's a guy called Mike Shafner. That's not him. That's an A, in case you couldn't tell, that's an AI generated image. Now, all of these images are created by AI. So weird things are gonna happen sometimes like hair on Jeremy or 18 fingers. But Michael Shafner is an interesting dude because he has had lots of weird and wonderful things implanted into his hands. So in one finger he has had a magnet implanted and that allows him to feel electromagnetic waves, you know, so that he knows where they are. In one of his other fingers, he can open his front door 'cause keys are for losers, I guess. 
The other finger, he can open his mobile phone because face ID, it's just inefficient really, isn't it? What else can he do? He has got a finger that allows him to exchange contact detail. I dunno who with, contact details. And with his remaining fingers, he wants a payment chip. I assume that's because every time he puts his hand in his pocket, coins gets stuck to his magnet finger. But let's get a poll. Let's do a poll. Who thinks this stuff is cool and exciting? Okay, who thinks this stuff is scary and weird? Who doesn't care? I mean, you didn't know I was gonna be talking about this. That's fair. Now think about it for a second though. Like primitive humans have always had primitive tools, things like stones and fire and Nokia phones. I was expecting a bigger laugh than that, but that's fine. And Nokia phones, and these are all technologies of a type, and they're all technologies that have shaped human evolution. When we say the word technology, you think of computers and things like that usually, but all technology is, is just the application of knowledge for a practical purpose. I'll give you an example. Oh, actually there's a picture of a, I forgot to press the, there we go. This clicker, this clicker's technology, and I've been holding it the whole time. But the question I wanna ask is, did you actually notice? Well, kind of yes and kind of no. Like subconsciously, you know I've got a clicker. You just kind of accept it. But you didn't come here and sit down and go, "Whew, last session of the day. He's got a clicker." You just didn't do it. But it's not just you, it's me as well. Like I kind of forget that I've got it. The thing is though, I imagine the very first time any presenter was using a clicker, the crowd would've been more focused on the clicker than what the presenter was saying. They would've been sat there going, "How is he progressing those slides? Witchcraft?" But it's true. It's crazy. But I forget that I'm using it. 
I'm passing it from hand to hand. I'm clicking it when I need to, apart from when I forgot to do that one. But what this is called, when a technology gets to a point that we use it so much that we forget we've got it. It's called being semi-transparent. So we'll do a little example. Has anyone in the room, are there anyone wearing fake nails or have ever worn fake nails? I dunno from experience, but I'm reliably informed that you kind of just forget that they're there, but they completely change the way that you interact with the world by how you type and how you do your hair and stuff. I mean, glasses is probably a good example. That's another AI generated image of nails, by the way. That's why there's some extra ones going on. Glasses is a good one. People in the room wearing glasses? I see some of you out there. The very first time you ever put glasses on, it's quite weird, isn't it? But within a very short period of time, you become very used to it. You forget that they're there yet they are completely changing the way that you experience the world. Now, I might be losing some of you at this point, but I am going somewhere with this, I do promise. There was a neurological study done with a monkey and a rake. Let's get some "aws" for the monkey. Aw.

- Aw.

- It's actually not a nice neurological study. So they put the monkey in the cage. They gave the monkey a rake. The only way the monkey could get its food was by using the rake. Now, this is the interesting part. The neurons that fired in the monkey's brain when there was a visual stimulus near the end of the rake were the exact same neurons that fired in its brain when there was a visual stimulus near the end of its hand. The monkey's brain couldn't differentiate between the rake and its own hand. As far as the monkey was concerned, the rake became part of him. Now, there's a philosopher called Bruno Latour, and he says that when a technology becomes so transparent that it becomes incorporated into our sense of self and incorporated into the way that we experience the world, a brand new compound entity is formed. So clicker plus man equals ClickerMan, one of the lesser known Avengers. We've got rake plus monkey equals RakeMonkey. You've got fake nail plus woman equals FakeNailWoman. Well, that, I mean, that's not a very good one. But the point is, FakeNailWoman experiences the world differently due to a piece of technology, a fake nail. We are all augmented by technology every single day, and we barely even notice it. We are all transhumans. Now remember, a transhuman is a being that resembles a human in most respects, but has powers and abilities beyond that of standard humans. You might sit there and say, "I'm not artificial, James. I'll never be artificial." Okay, but the definition of artificial is made or produced by human beings rather than occurring naturally. You were all born, I assume, probably not in the wilderness. From the moment you were born, you were communicated to and taught at least one language. That language, which humans just made up by the way, they weren't naturally occurring. That language informs every single part of your thinking to this day. 
And as a baby, you were placed in an artificial crib wearing artificial clothes, surrounded by artificial toys, looking at artificial pictures on your artificial walls. You guys are about as natural as a helicopter. And yes, some of you will have noticed that AI baby doesn't have any legs. Don't laugh at the no legs baby. That's, it's AI. It's fine. But it's actually speaking of babies, anyone got kids in the room? Yeah, because I've recently decided that I don't want kids. Look, it's for some people, it's great for some people, it's not for me. It was a really big shock for my family. It was a big shock for my friends. It was a massive shock for my kids. He's like, "Well, I dunno if I can laugh at that. Does he have kids at home?" No, I don't. They're in the car. No, I am messing with you. But my mate, last year asked me. He's like, "Would you ever consider adoption?" And every time I've been asked that in the past, I've said, "No, no." Just like a knee jerk reaction. No, no, no. But I actually checked it out and it actually, it looks wicked. It looks like really rewarding, really fulfilling. So, it's a bit personal, but I do have to say, if I ever did have kids, I'd put them up for that. No, I'm just messing with you. What that was was the world smoothest transition to the idea of us all adopting GenAI. Come on, that was good. Thank you. So we all, without even realizing it, have adopted the idea of transhumanism, which has been described as the world's most dangerous idea. That is exactly how I see the future of GenAI, which has also been described as the world's most dangerous technology. But when we do all adopt it, when it becomes the way that we augment our world, we need to ask questions. So, what new beings will it transform us into? What new capabilities will it unlock and how will it change the way that we experience the world? 
It's these exact questions that we ask ourselves at Pega every single day on our quest to create the ultimate customer service agent. Now, some of you out there might ask, what is the ultimate customer service agent? Some of you out there might ask- A fantastic question. Thank you. Great question. I'm not really prepared for it, but I'll answer it. So I'm gonna say something controversial. I'm gonna say something else controversial now. We all know it, but I'm gonna say it out loud. There are some exceptions, but call center agents in the Western world are looked down on. They tend to be the least paid, the least respected, the least trained, the least empowered, the least considered people in any organization. But they are the ones that are interacting with your customers the most. Like it or not, there are some exceptions, but in most cases, the majority of interactions your company is gonna have with your customers, it's gonna come through the contact center. And to try and put that into perspective for you, let's say an agent spends six hours a day dealing with customers. Let's say you've got a thousand agents, okay, that's pretty conservative. Let's say you operate for 50 weeks of the year. That alone is 2.1 million hours of experience being delivered by underpaid, under-respected, under-trained, under-considered and under-empowered entry-level workers. And of course you will have heard this before, but they say that it only takes a minute to make or break a customer relationship. To put that into perspective, 1.2 million, 1, 2, hang on, 2.1 million hours a year is 126 million minutes. 126 million minutes of experience being delivered by these forgotten people. At Pega, we see a completely different future. And it's at this point, I'm gonna hand over to Jeremy to guide us on the next chapter of our quest. You didn't have to clap for me. I'm not finished, yeah, thank you.
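As a quick sanity check on the figures quoted above: the 2.1-million-hour total only works out if those thousand agents are covering seven days a week, which the talk doesn't state explicitly, so treat the seven-day figure below as an assumption on our part.

```python
# Sanity check of the contact-center figures quoted in the talk.
# Assumption (not stated explicitly in the session): 7-day operation.
agents = 1_000
hours_per_day = 6        # hours each agent spends with customers
days_per_week = 7
weeks_per_year = 50

hours_per_year = agents * hours_per_day * days_per_week * weeks_per_year
minutes_per_year = hours_per_year * 60

print(f"{hours_per_year:,} hours")      # 2,100,000 hours
print(f"{minutes_per_year:,} minutes")  # 126,000,000 minutes
```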

- Follow that, right? I think if there's a theme that we're trying to get in front of, it's that it's all gonna be okay, right? This stuff sounds scary. This is kind of a tongue in cheek slide. Different bookends than you maybe normally see. But what does this slide actually represent, right? Every figure in this evolutionary journey is more capable than the one before it, right? It's got new sets of skills that are particularly tuned to excel in a new and changing environment. And we've got a whole bunch of CSRs who are gonna be going through a changing environment. We need to provide them some new skills. We need to provide them some new technologies and the skills that consume those things so that they can become the next figure in this image. It's easy to remember that stuff that feels really, really revolutionary at the moment, in the rear view mirror feels like just another stop along the way, right? So it isn't that long ago that contact centers as a concept came up, right? We all just take it for granted that we've got a thousand people who spend six hours a day sitting in front of some sort of a machine, a virtualized desktop or something. They get a bunch of interactions routed to them, calls, chats, whatever they might be. They've got some amount of customer information, hopefully available to them, maybe some purchase history. This stuff's not that old, right? In the wider arc of an evolutionary diagram, the entire constructs here are pretty new. Everything on this list felt absolutely revolutionary at the time. We now just take it for granted. Look forward just a couple of years ago, right? I'm old enough. See hair again. I'm old enough to remember when it was not even possible and then very weird to send a chat to a brand. That's just a given now, right? These are really, really recent evolutions that felt amazing at the time. Again, we all just take for granted. 
As work gets more and more complex, as these agents are expected to be able to do harder and harder work in a world of automation, guided workflow enters the contact center. Obviously, we at Pega are gonna push this really, really hard. This is kind of our bread and butter. Getting people through from the beginning to the end of a journey, guiding them step by step, giving them the skills to work confidently, even if they're new or working in a domain they're not as familiar with. These are relatively recent innovations in the contact center. I think there's still a ton of room for growth in these. Real-time AI, right? Even before the LLM, GPT sort of explosion in the last year, there's some really, really amazing stuff happening in this world. I hope that if you're in this session, you've been by the customer service booth in the innovation hub. I hope you've gone to see the Voice AI and Messaging AI stuff. It's only a couple years old. Absolutely feels like magic the first time you start playing around with it. I hope you heard the Albans case studies in maybe some of the breakouts, right? Listening in real time to voice conversations, to messaging conversations, extracting information from them, guiding agents through, getting their hands off the keyboard for data collection so they can pay attention to the customer needs and provide the empathy that only they can do. This is really, really brand new, feels revolutionary. And very quickly is gonna feel like just another step along the way. I've been in this space for a while. I think it's reasonable that people are concerned about all the dangerous ideas, right? A lot of the concerns that are being expressed about LLMs are specific to LLMs. A lot of them aren't, right? We've been doing a lot of really interesting AI at Pega for a long time. I spent a lot of time in the world of NLP prior to this gig.
There is a real change in the way that you have to think about probabilistic systems instead of deterministic systems and how they work together. Is the black box impenetrable? Do we know how decisions are being made? Can we, you know, can we re-inform ourselves? Can we audit them if we need to? If you work in a regulated industry, this is a really hard set of things. Predates all the LLM stuff, right? Totally valid set of concerns. These are results, by the way, of our internal surveys to folks just like you. Some of you might have taken this survey, right? Half of the people are rightfully concerned about how to think about that problem space. 28% of people are concerned about the new skills that are required to adapt these things. This feels almost tautological to me. Like of course it's brand new. Of course, new skills are required, right? Used to be data science, probably still is data science. Used to be linguistics, probably still is linguistics. And now we've got prompt management, how to engage with an LLM with intelligence and creatively and iteratively. There are new skills that come into play here, both on the administration side, there are new skills that we need to teach our CSRs on how to engage with this stuff. And we'll get to that in a little bit. And my personal favorite and a little bit more of the James windup. A lot of people are really concerned about the robot overlords, right? So, alright, we're all worried about that. The same people who took the survey are really excited about this stuff. Make sense of that, right? A lot of people think it's gonna drive real value pretty quickly. This is the same survey, the same people. So are we concerned enthusiasts? Are we cautiously optimistic? We're probably all of those things and that's a reasonable place to be. But again, in the rear view, this is all just gonna feel like the next step. All right? So that's sort of pivoting into what is the next step, right? 
The next step is obviously the emergence of the LLMs, right? The GenAI, the GenAI, the GenAI, the GenAI, the crew was talking about, right? Of course. You may have seen a slide like this in some other sessions. The left brain, right brain stuff, AI that's statistically based, obviously even LLMs are really just probabilistic, I cannot say that word, statistical models on the word, the little segment that's gonna come next. But they do very different things. They feel like they're creative, they feel like they're generating things, right? Which they are. They're interacting with us in a way that feels very familiar and human because it's language based. And so we start to use them for things that we previously sort of reserved for, you know, human work types of things that felt more creative, right? No surprises there. As one example we wanted to show you, I believe this is from Sora, OpenAI's latest visual generative AI. I'm gonna hand this to James, who's gonna do a way better Attenborough than I'm gonna do.

- Very cool.

- To lead us in.

- Were you trying to say probabilistic?

- [Jeremy] I was trying to say prob, blah blah ballistic.

- I don't even know what that means. Right? Okay, so imagine with me if you will, that's my David Attenborough voice. Close your eyes please. Several giant wooly mammoths approach treading through a snowy meadow. Their long wooly fur lightly blows in the wind as they walk. Snow covered trees and dramatic snow capped mountains in the distance. Midafternoon light with, who wrote this?

- [Jeremy] I believe Dawn did actually. This is the prompt. This is legitimately the prompt we sent to Sora. Good pacing by the way.

- Thank you. Wispy clouds, keep your eyes closed. Wispy clouds and the sun high in the distance creates a warm glow. The low camera view is stunning, capturing the large furry mammal with beautiful photography, depth of field, Right? So I mean, number one, that was great. Come on, make some noise from my David Attenborough voice. Thank you. That's the biggest clap we're gonna get today. Right? You may have a picture in your mind. It may look a little something like this. Now that was the actual prompt put into Sora to make this video. And it's still very new technology, but in a not too distant future, you are gonna be able to go make an action movie where I'm the hero and I save the world and everybody loves me and JLo's there and or whatever. But it's gonna happen and it's gonna be cool when it does. It's this sort of creativity, it's this sort of imagination that we want to harness, but it's also this sort of imagination that can get us into hot water. At what point does imagination become hallucination? So now you may have heard of this story already, so I'll do just a quick run through. But basically Air Canada put in a GenAI chatbot. Somebody went and asked it a question. It didn't know the answer, so it made up what it thought. It used its imagination. It thought, "This is probably the right answer. I'm gonna tell it that." When the person went to claim the thing that the chatbot told it it could, Air Canada said, "Well that's not the policy. We are not honoring that." They said, "But your chatbot said so." And they said, "Oh, we don't care." So then they got took to court and in court their argument was that they should not be held responsible for what the chatbot has said and it should be classed as a separate legal entity, right? That's pretty bad, isn't it? What's worse is that it was a grieving customer asking about a bereavement policy for moving a ticket. The issue with this is all of the problems that came from it were human problems. 
The chatbot just did what it was told to do. It was the humans that didn't set it up right in the first place. And then it was the humans that decided not to honor what it had said. They were human problems. But yeah, the chatbot did make something up. It did hallucinate. And there are ways to avoid that. I'll tell you a little bit about it. It's called retrieval augmented generation. I'm guessing this isn't the first time you've heard about it at this event, but I'm gonna try and explain it to you in the best way I can think of. So things like ChatGPT are trained on the entirety of the internet. Dunno how much time you've spent on the internet, but there's some pretty mad stuff on there. So it's not really a surprise when sometimes you get some mad answers from these things. Imagine something like ChatGPT, but instead of it being trained on the entirety of the internet, it's trained on all of your company's policies and procedures and your knowledge documents. And then when somebody asks the question, it goes away, takes a little chunk of relevant information, it brings it back and it says, "Answer the question only using this. And if you don't know, say 'I don't know.'" That's how we can start to use RAG or retrieval augmented generation, which is a real technical term that you can tell your friends to make them really impressed with your GenAI knowledge. That's how we can start to use that to try and avoid this hallucination. But, to really understand RAG, and to really understand how it works, we really need to ask Jeremy, 'cause I actually don't know.
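The "answer only using this, and if you don't know, say 'I don't know'" instruction James describes can be sketched as a prompt template. The function name and prompt wording below are illustrative, not Pega's actual implementation.

```python
# Sketch of the grounding instruction described above: retrieved snippets are
# the only material the model may use, with an explicit refusal fallback.
# The wording here is illustrative, not Pega's actual prompt template.
def build_grounded_prompt(question: str, snippets: list[str]) -> str:
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer the question using ONLY the context below.\n"
        "If the context does not contain the answer, reply: I don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_grounded_prompt(
    "Can a bereavement fare be claimed after travel?",
    ["Bereavement fares must be requested before travel begins.",
     "Refund requests are processed within 30 days."],
))
```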

- That was like 90% of it. So I can breeze through this. Well done sir. And I'm gonna start pivoting into like the sort of practical recommendations of what those next figures in the evolutionary slide might look like, right? So we're not just having to evolve the way CSRs will adopt some of these new technologies. We have to start thinking about the way your organizations can consume them, manage some of those concerns and manage some of those risks. And in most of these conversations that I'm having with a lot of folks just like yourselves, maybe some of yourselves are actually here as well. RAG is a really good place to start, right? Because people are really concerned about the hallucination problem of LLMs, but they're really powerful in the way that they can summarize and sort of get really concise answers to you and pretty good responses to you. So when you can do that based upon your own documentation and constrain the dataset so that we're not hallucinating, you can start to get a really, really high value time saver in front of people pretty quickly, pretty low risk as one of the first use cases for your organization to adopt, one of the first use cases for your CSRs to adopt. So you take all your, you know, published knowledge management, everybody know what RAG is? Do I need to go through this? You pay, you take all your published knowledge management, your policy documentation, right? You can chunk this stuff into different datasets for different audiences to manage permissions and roles. It's all sort of built in. This is the first half of this, is basically a semantics search problem. You take these documents, you slice 'em into smaller pieces so you don't have to just evaluate them as 10 pages, as a hundred pages. You evaluate them as little chunks of a thousand characters. You get to control all this, right? You go through a process of semantic mapping, which is sort of converting it all to math vectors. 
And you store it in a very special place that's designed to store this kind of stuff, right? So you have these numeric representations of these individual chunks of all of your policy information or your knowledge. All of those chunks can refer back to their core documents so you can know where they came from. That's the first half. So when someone asks a question, as James was saying, we sort of do it in reverse. We take the question, we map it in that semantic space, we go to the storage and say, "What are the other things that look like this?" It feels a lot like a search, but it's a vector search. It's also language agnostic. So you can ask a question in German and retrieve data in English, doesn't care 'cause it's a numeric representation, which is actually pretty cool. It goes and it gets the most closely related items, the close, most closely related snippets of that content. They might be distributed across a number of the documents or a number of the sources, but it gets the ones that clear a certain threshold of similarity, which of course you can control as well. And only then does it sort of package up and go out to your LLM and it says, "Here's the question," this is a prompt, "Here's the question I got. I would like you to answer this question in this format for this kind of audience, but only use the data sources that I'm about to give you, which is the chunks that we just got out, the snippets that we just got out. Here's the 10 things we think are related to the answer. LLM, you take these 10 things and formulate us a simple, concise answer." And to James's point, "If you don't know, say you don't know. Only use this data." It's a trusted response. It's a concise response. It's based on stuff that you know, right? So if you've got a relatively decent publication and management process and curation process for your content, and I hope you do, and if you don't, go to the knowledge management booth and we'll help you with that. 
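The pipeline Jeremy walks through (chunk the documents, map chunks and question into the same vector space, keep only the snippets that clear a similarity threshold, hand the top ones to the LLM) can be sketched end to end. A real deployment uses a trained embedding model and a vector database; the bag-of-words vectors and cosine scores below merely stand in for those, and all the function names are ours.

```python
# Toy version of the retrieval half of RAG described above. Bag-of-words
# vectors stand in for a real embedding model and vector store.
import math
import re
from collections import Counter

def chunk(text: str, size: int = 1000) -> list[str]:
    # Slice documents into fixed-size pieces, as described in the talk.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> Counter:
    # Stand-in for a semantic embedding: a bag-of-words vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, chunks: list[str], threshold: float = 0.2, k: int = 10) -> list[str]:
    # Return up to k chunks whose similarity to the question clears the threshold.
    q = embed(question)
    scored = sorted(((cosine(q, embed(c)), c) for c in chunks), reverse=True)
    return [c for score, c in scored if score >= threshold][:k]

policy = "Bereavement fares must be requested before travel. Refund requests take 30 days."
print(retrieve("When must a bereavement fare be requested?", chunk(policy, size=60)))
```

The snippets this returns are what would be packaged into the prompt along with the question, exactly as described above.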
LLM is a great layer of sort of getting really, really instantaneous search. You can get concise answers across huge datasets. And you can present that to anybody, but we're here in a CSR session. And so we present that to a customer service agent. Another reason this is a great place to start, most customer service agents, and we hope yours are in this category, are pretty familiar with using knowledge within the context of an interaction or fielding a conversation with a customer. They kind of have the general idea of I need to get an answer. And they might go and search through a knowledge base and read, you know, five, six answers and try to come back with that. This kind of does that all for them, but they're familiar with the idea, right? In our solution, we put it right next to the same widget. It's in the same place that knowledge usually is. And so instead of having to do that search and run through a list of 10 items and read four or five pages of stuff, we just gave you the simple answer, we link you back to the stuff that says, "Hey, if you want to click in and see where we got this from, here's the documents we came from." But it's right there. And if you went to the demo booth, you saw this being deployed proactively in lots of cool scenarios, right? Agents on the phone, Voice AI is listening in. Voice AI recognizes a question, Voice AI passes that question to Knowledge Buddy. Knowledge Buddy returns an answer for the agent. The agent didn't type or do a thing and they have the answer right in front of them, which might have come out of 100-page PDF that was in some system they might have not even known they had access to. Pretty cool place to start. It's low risk, it's an internal employee. You're not doing it straight to, you know, your chatbot. It's on content that you trust and publish and it's in an environment that CSRs are familiar with something quite similar. Great place to start. 
So what are the other tools that will be coming on very quickly as sort of the next iterations? Another lovely thing about presenting with James is I end up like the third callback to a RakeMonkey. I'm not sure if everybody thought that was gonna be our closing bit in the customer service arc, but we hope you remember monkeys with rakes. That's our big bit. It's important to understand where this stuff is probably going to deliver the most value. CSRs have high turnover rates, in most industries. I'm sure that's mostly true of you as well. It is a problem to constantly spin up new agents and get them trained. So this is a study, and again, this is all fresh. This stuff's all really new. GenAI is brand new. But Bureau of Economic Research, I believe it was, 5,000 customer service agents. A mix of experience levels, mix of domain expertise given Generative AI suggestions while they're on the phone with a customer. And you can see how much more efficient the different pools of agents were. The new agents, this kind of fits what you would assume, right? This is a reasonable. New agents, you know, got quite a bit faster. They got two and a half times to three times more value out of these suggestions coming out of an LLM. Hey, here's what you might answer this customer with, right? Two and a half times to three times more value than a tenured agent. To be expected. A lot of this stuff is going to help new people come up to speed quickly, all right? So think about that when you're thinking about where to start. Start with an agent pool that's gonna maybe get the most value out of it, right? Help guide them along. So that agent pool, what are we going to be delivering to them? What are, what have we already delivered to them as Pega? What will we be working more on? And what do we hope you start to roll out to your CSRs, right? We talked about simplified knowledge, right? Knowledge Buddy. 
These RAG solutions that can look across disparate sets of policy guides, answers, and proactively get answers right in front of agents. That's probably a good starting point. We've got a ton of tooling. And again, if you were in the Voice AI booth or the conversational AI booth, you saw a bunch of really interesting stuff that LLMs are also really good at, right? LLMs are really good at summarizing large bodies of text. They're excellent at it, actually. So if you've been on the phone with someone for 10 minutes, Voice AI writes this giant transcript of that entire interaction; it's been helping the agent all along the way. In a normal contact center, that agent's having to take side notes so when they wrap up, they can say, "Here's what happened," right? They spend two minutes in wrap up. I don't know what you guys spend in wrap up, but that's a pretty normal number in a lot of our industries. Sometimes it's quite a bit more. And they've been taking notes the whole way. They've been distracted, they haven't been able to pay attention to the customer all the time, right? We can automatically summarize everything that happened over the course of that call. Because we're Pega, we can also summarize all the work that happened and essentially eliminate wrap up time. We can take a minute off every call. That's a pretty cool thing. There's a lot of value there, right? And that agent doesn't have to be distracted trying to take notes because they know they have to record what they were doing. We're recording the actions through the workflow and now we're also able to summarize the conversation itself. Right, we can do some other stuff related to automating the engagements that the agent is actually having. A lot of what agents do is engagement, right? They're talking to people, they're chatting with people, they're emailing with people, they are engaging with folks. There's content to be generated. LLMs are pretty good at that, right?
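The wrap-up elimination described above combines two things: the workflow's own action log (which Pega already records) and an LLM summary of the conversation transcript. A minimal sketch, where `summarize()` is a stand-in for the LLM call and all names are assumptions for illustration, not a real Pega API:

```python
# Auto wrap-up sketch: merge the recorded workflow actions with a
# conversation summary so the agent never takes manual notes.
def summarize(transcript):
    # Placeholder for an LLM summarization call; we crudely keep the
    # first and last turns as the "summary".
    return f"{transcript[0]} ... {transcript[-1]}"

def wrap_up(transcript, workflow_actions):
    return {
        "conversation_summary": summarize(transcript),
        "work_completed": workflow_actions,  # already captured by the case
    }

note = wrap_up(
    ["Customer: my card was declined", "Agent: the new card ships today"],
    ["Verified identity", "Reissued card"],
)
print(note["work_completed"])  # ['Verified identity', 'Reissued card']
```

The design point is that the work record costs nothing extra: the case already tracked it, so only the conversation side needs the LLM.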
You can sort of combine templates and personalization via an LLM to say, based upon everything we've seen about this interaction so far, we think you should respond with this kind of chat, with this kind of email. Some pretty cool technologies there. And I'll mention Coach in just a minute. So for those of you again who saw Voice AI initially, before the summarization, before the LLM stuff, right? You can listen and guide, you can recommend cases or knowledge. We just talked about that, right? You can extract data from the conversation stream and automatically fill in fields in a case. It's pretty cool. Because we know what the agent is saying, if you're in a regulated industry and you've gotta read a disclosure, we can track: did the agent read the disclosure? Did they read it word for word? You can set thresholds for different phrases. Oh, the welcome message will let people just get close enough to the welcome, but they've gotta read the legal disclosure word for word. And Voice AI will tell you that. So that's all sort of what it was already doing. Now, what can you layer on top of that with the LLMs? This is all available today, right? You can do the summarization stuff, right? So we already talked about this, right? You can summarize the wrap up stuff, but you can also summarize midstream. So when I pick up a new call, when I pick up a new email, I can see what happened the last time in a summary form. I can see what happened over the last several times. If I get transferred for some reason, if I have to bring in and rope in some other agent, that new agent picks up, they can see the summary of everything that's happened up to that moment, right? These are pretty cool time-saving things. This is one of my favorites. This came out in 24.1. As part of that process of looking through the transcript and summarizing it, we can look for phrases that sound like commitments from the agent.
The agent said, "I was gonna call you back," or "I was gonna execute something on your behalf because it was hard for you to do so." It's just something I voiced. I just said I was gonna do it, right? It was outside of the normal flow, it was an exception. We can hear that, Voice AI hears that, the summarization hears that, and because we're Pega, we automatically fire off a task and assign it to someone to do that, right? I call this the keep-your-promises feature, right? That's pretty cool, right? Imagine agents that don't have to worry they're gonna forget what they said they'd do because, again, they're furiously taking notes, they're trying to wrap up, but they've got someone else, you know, in queue that they've gotta pick up. It's really easy to forget that you said you're gonna do something. We track that work, right? We let them relax, frankly, through that work and then we help your business track that that gets completed. And I mentioned before, right? You can do suggested responses across pretty much any channel. Those responses can also source knowledge from the RAG solution we just talked about called Knowledge Buddy, which you saw in a bunch of the other sessions, right? So that you're providing trusted responses based upon, again, your known, curated content. Now you heard about Coach a little bit this morning in some of the keynotes. Coach is sort of the way that you bring all of this together to execute work forward, right? So this is a little bit of a future statement on the screen here for the second half of the year, right? To be able to invoke any of these things at any time with case data, with sort of case awareness, is sort of directionally where this is going. So as opposed to individual features that only get invoked here or there, it's having an assistant that an agent can use to engage any of this stuff. This is where some of these new agent skills are gonna come from.
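The keep-your-promises idea boils down to: scan the transcript for commitment-sounding phrases and fire off a follow-up task. The real feature runs an LLM over the Voice AI transcript; the regex patterns and `create_tasks()` helper below are hypothetical stand-ins, just enough to show the shape of detect-then-assign:

```python
# Sketch: detect commitments in a call transcript and turn them into tasks.
import re

# Illustrative phrase patterns; a production system would use an LLM, not regex.
COMMITMENT_PATTERNS = [
    r"\bI(?:'ll| will) call you back\b",
    r"\bI(?:'ll| will) send\b",
    r"\bI was gonna\b",
]

def find_commitments(transcript_lines):
    return [
        line for line in transcript_lines
        if any(re.search(p, line, re.IGNORECASE) for p in COMMITMENT_PATTERNS)
    ]

def create_tasks(transcript_lines, assignee):
    # In the real system this would create a case task; here we build dicts.
    return [
        {"assignee": assignee, "description": c}
        for c in find_commitments(transcript_lines)
    ]

tasks = create_tasks(
    ["Let me check that for you.", "I'll call you back tomorrow with the rate."],
    assignee="agent-42",
)
print(len(tasks))  # 1
```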
How am I going to work in an environment where I don't have to take notes? That's kind of a superpower. I have something that's gonna do that for me. How am I gonna work in an environment where most of the responses are already available and generated for me, and I'm a little bit more of an editor and a reviewer doing some personalization? That's pretty cool. It's a new skill though, right? You've gotta learn that skill. Some of those concerns are valid. So I want to close with something that I don't know if people saw; it was in the booth. How do you actually teach your agents these new skills? You saw in Parim's speech, oh, I'll go forward one more. You saw in Parim's speech what was called the agent trainer, right? The agent simulator. That was on a voice channel. That's kind of a work in progress. But today, in the previous release actually, in '23, we released in production, you can use this now, over the digital channels, the customer simulator features, right? We use LLMs to create representations of real customers. You can mock up any number of customers, you can assign them the kind of work you actually do in your contact center. Here are the types of cases that we do in our contact center. These are the types of problems I want these customers to represent, right? You give them moods, you give them personas, you make them angry, you make them liars, because you can bind this to simulated data. You can have someone who's pretending they've got $10,000 in the bank account, but you've got a simulated dataset that says they've only got two. You can make this quite real, and you can have many of them, and you can put them all in a queue, and you can put a CSR hands-on in Pega, fielding chats. And on the other side is an LLM representing the customer and the work. So this is in general like a cool training tool. Learn how to use chat. But it is actually much more than that.
It's a way for your agents to start to learn how to use these new tools, how to build these new skills. An agent who's never used Voice AI before, who isn't quite sure what to do when the form is automatically prefilled with the content that the customer was just saying. An agent who isn't quite sure how to use Knowledge Buddy yet. Oh, it's just popping up an answer right there on the side. There's gonna be a bunch of this new amazing functionality that's gonna move the agent up that evolutionary image, and we now have a vehicle, at hire time and incrementally going forward: we're gonna roll out a new feature here, we're gonna make some suggestions with some of these LLMs in the middle of an interaction. Why don't we train you up on how you're gonna deal with that? Why don't we build that skill for you, and not do it in a dark room for six weeks. Just do it on the desktop you've already got. And do it in a virtual way so that they build confidence and they build the skills to survive in that sort of evolving space. I love this thing. It's really, really cool. We're gonna try to make this easier to get hands on, by the way, for everybody. If you didn't see it in the booth, it's gonna be an effort for the second half of the year to make sure it can get in front of you. So we're gonna put a lot of new technology into the hands of agents. We gave you a little preview of pretty much all GA stuff that you can use today. And if you haven't seen the training tools that allow the adoption as well as the skill building, I recommend you reach out and take a look at it 'cause it's a pretty neat way to think about the evolutionary process of the CSR as these things are coming on board over the next couple of years. And I think that was, oh yeah. And then at the end of the interaction, it tells you if you did a good job. And I think that was the end of my bit. So are we doing Q and A? I think we're doing Q and A essentially at this point.
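The customer simulator described above pairs an LLM persona (a mood, a story) with simulated backend data, so a trainee can catch a "customer" whose claim doesn't match the record. A toy sketch, with `respond()` stubbing the LLM and every name invented for illustration rather than taken from Pega's product:

```python
# Sketch of an LLM-backed training persona bound to simulated data.
from dataclasses import dataclass

@dataclass
class SimulatedCustomer:
    name: str
    mood: str             # e.g. "angry", "confused"
    claimed_balance: int  # what the persona says
    actual_balance: int   # what the simulated dataset says

    def respond(self, agent_message):
        # An LLM would generate this in character; we return a canned line.
        return f"({self.mood}) I definitely have ${self.claimed_balance} in that account!"

    def is_lying(self):
        # The trainer can check whether the trainee caught the mismatch.
        return self.claimed_balance != self.actual_balance

liar = SimulatedCustomer("Pat", "angry", claimed_balance=10_000, actual_balance=2_000)
print(liar.is_lying())  # True
```

Queueing many such personas against a CSR desktop gives the same practice loop the talk describes, without a live customer on the other end.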

- Yeah, so if there are any questions, Jeremy would love to answer them.

- Nothing? So, to repeat the question for the recording (we forgot to ask people to come to the mic, by the way, sorry).

- They've been here long enough to know that that's what they have to do. We shouldn't have had to, but it's fine.

- The question was, what are the metrics on the effectiveness of this stuff in the contact center? In most cases they're your standard metrics, right? So you're probably already measuring things like handle time or wrap up time. And so when these things get deployed, you'll see those things go down. Voice AI in particular does a ton of stuff, and I've personally done a lot of ROI calculations with people: when you auto form fill, you'll save two seconds per field; when you do wrap up, you save this amount of time. But they tend to be pretty much the same metrics that you're already working with. So, you know, I'm curious if you have a more specific concern about the metrics that you're measuring, where you think the value comes from, 'cause it's usually gonna be efficiency, it's usually gonna be turning calls faster. Stuff that you're hopefully already measuring.

- [Audience Member 1] Just wondering how much more efficient contact centers are when this is deployed.

- Ah, so the question is how much more efficient when this stuff is fully deployed do they become? So I don't think we're-

- A million percent.

- A million percent. Where I was going is I don't think we have the ability to state publicly some of the efficiency stuff that we've sent out with the Voice AI-

- I'll take that back then.

- So follow up with me and I'll see what I can figure out for you on that. People so far are deploying these things incrementally, right? The wrap up stuff is amazing value, right? And again, I don't know what your wrap up times are, but it is common to cut 30 seconds, 60 seconds, 90 seconds off of every single call, right? And I don't know how many seats you've got or how many calls you do, but that is very quickly millions of dollars or a meaningful percentage of time spent, right? So that's probably the single biggest sort of silver bullet in here. The rest of the stuff is, you know, incremental. But you save 10 seconds here, right? You make a customer slightly happier with the response time here. Those things add up. But the summarization is the real one that by itself is massive dollars. But I'd be happy to follow up. We've got a lot of spreadsheets and stuff we can work through with you.
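The "very quickly millions of dollars" claim is just arithmetic: seconds saved per call, times call volume, times a loaded agent cost per hour. A back-of-the-envelope sketch with illustrative figures (the numbers are assumptions, not Pega benchmarks):

```python
# Wrap-up savings: seconds per call x calls per year x loaded hourly cost.
def annual_savings(seconds_saved_per_call, calls_per_year, cost_per_agent_hour):
    hours_saved = seconds_saved_per_call * calls_per_year / 3600
    return hours_saved * cost_per_agent_hour

# 60 seconds off each of 5 million calls at a $30/hour loaded agent cost:
savings = annual_savings(60, 5_000_000, 30)
print(round(savings))  # 2500000
```

At those assumed volumes, one minute of wrap-up per call is $2.5 million a year, which matches the order of magnitude the speaker describes.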

- Shall I take?

- Oh, oh, we've got one.

- [Audience Member 2] I got the mic too. So the RAG, right? I understand that. And I understand how we want that to control the hallucinations, right? But, and I don't understand it fully, but how much complexity does that add? 'Cause in my mind I'm thinking I have my content, then I have to build a RAG to kind of control or constrain that content. So as content changes, am I constantly tuning that RAG as well, like from an ongoing maintenance perspective?

- Yeah, excellent question. So, you know, you stated it pretty clearly: you go through a process of ingesting, right? You have to sort of absorb the content that you've got, and that introduces a lifecycle question, right? What happens when that content changes? So, you know, the short answer is there is no silver bullet to that unless you use Pega Knowledge Management, in which case it's built into the publication process. Every time you publish a knowledge article in Pega Knowledge Management or attach a new document in there, it automatically updates the vector representation of that. And so that's one great thing. So if you're in an unmanaged content space where you don't really have a publication lifecycle, we might recommend you start using Pega Knowledge for that because it sort of does the RAG stuff by itself, right? So there's really good integrations there. If not, you have to take that into account, and whatever lifecycle you use to update your knowledge or your content or your policy guides, at the point where you're done and press go, you either need to at that moment make a call that says, "Go ahead and re-ingest this," or you need to batch it up. But if you use our stuff, it's prebuilt. We're building some tools, by the way, to help with some of the ingestion in a little bit more of a pull model. Like, oh, go out every night and see if this has changed in this environment. So that stuff's in flight that we're working on. But yeah, you have to manage the lifecycle of it.
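The key point in that answer, tying re-ingestion to the publication step so the vector store never drifts from the published content, can be sketched like this. The classes are toys invented for illustration, not Pega Knowledge Management's API; a real store would chunk and embed the text rather than keep it verbatim:

```python
# Lifecycle sketch: every publish re-ingests the article into the vector store.
class VectorStore:
    def __init__(self):
        self.vectors = {}

    def ingest(self, doc_id, text):
        # Stand-in for chunking + embedding; we store the raw text.
        self.vectors[doc_id] = text

class KnowledgeBase:
    def __init__(self, store):
        self.store = store
        self.articles = {}

    def publish(self, doc_id, text):
        self.articles[doc_id] = text
        self.store.ingest(doc_id, text)  # re-ingest on every publish

store = VectorStore()
kb = KnowledgeBase(store)
kb.publish("refund-policy", "refunds within 30 days")
kb.publish("refund-policy", "refunds within 45 days")  # content update
print(store.vectors["refund-policy"])  # refunds within 45 days
```

Hooking ingestion to publication is what removes the "constantly tuning the RAG" burden the questioner was worried about; without a publication lifecycle you fall back to the on-demand or nightly-pull re-ingestion the answer mentions.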
But yeah, I'll push Pega Knowledge a little bit too 'cause it's pretty cool. I'm amazed by the way, in the last session of the day that you came, so-

- Thank you.

- Yeah, I wanna say-

- That we were, we had a friendly bet going.

- [Audience Member 3] Someone said free beer.

- Yeah, well, so that the bet was will a six pack cover it? And we're glad to say no, it wouldn't have been close. We would've been out a couple bucks.

- I did say free beer. I also said a million percent, can't trust me, so-

- Hallucination.

- No, seriously though, thank you for choosing to spend your last session at PegaWorld with us. That is really cool that you've done that. But just to finish it off, look, GenAI, the possibilities of GenAI are potentially limitless, but there's so much stuff we can do today to get more efficiencies internally, make our employees' lives better, make our customers' lives better. But what if I told you that this entire presentation was written by AI? Well, it wasn't. It was us. We hope you liked it. We'll see you in the bar later.

Related Resource

Product

App design, completely rethought

Streamline your workflow design in a flash with Pega GenAI Blueprint™. Set out your vision and watch your workflow be created instantly.
