PegaWorld | 44:59

PegaWorld iNspire 2024: Prompts, Buddies, Coaches & Agents: The Next Generation of GenAI at Pega

Where is generative AI heading – in research, in business, and at Pega? Breakthroughs that once took years now emerge weekly, making it hard to keep up. But is there a pattern to it all?

We think there is. From dynamic prompt engineering to creating GenAI Buddies with access to textual domain knowledge to fully unleashing the power of GenAI with coaches and agents that can make sense of the environment, make decisions, execute actions, and learn from feedback, there’s a clear throughline to AI’s evolution in the Pega ecosystem.

Join this session to learn more about our product and research strategy, discover what's possible today, and get a preview of what we have brewing in the AI Lab!


Transcript:

- ♪ You bring the screen to life with code ♪ ♪ With code ♪ ♪ Teach me the ways of this AI road ♪ ♪ Do we trust the tech ♪ ♪ Or do we pause ♪ ♪ Do we pause ♪ ♪ Information at our fingertips ♪ ♪ Let's explore ♪ ♪ Let's explore ♪ ♪ With every lesson ♪ ♪ We're learning more ♪ ♪ Guide me through like a mentor ♪ ♪ A mentor ♪ ♪ Questions broad and answers deep ♪ ♪ Answers deep ♪ ♪ The machine thinks while the world's asleep ♪ ♪ But wisdom's not in silicon alone ♪ ♪ Not alone ♪ ♪ You articulate the doubts we've never known ♪ ♪ Never known ♪ ♪ We're discovering a digital dawn ♪ ♪ With every query our minds are reborn ♪ ♪ Reborn ♪

- Good morning, good morning. Thanks for coming to our session here on generative AI, and actually the future of generative AI, and I don't mean just the future of generative AI at Pega. We're also going to talk a little bit about where we think the entire field of generative AI may be heading. My name is Peter van der Putten. I'm the director of Pega's AI Lab, so I'm responsible for AI innovation, and I'm joined here by-

- [Stijn] Stijn Kas. I'm a senior data scientist.

- And Stijn has been working on a little Skunk Works project, so that's why I brought him on. And at the very end of the presentation, he's going to talk a little bit more about it, the Skunk Works. Okay, but let's get started. Let's dive in. Generative AI: we just heard this song, which was completely AI generated from a single prompt. And as consumers we're using generative AI maybe to create videos or maybe cookie recipes or whatever it is. It's pretty amazing what it can do, right? And you could see generative AI as a form of creative intelligence, similar to our right brain. Our right brain is responsible for creative thoughts and ideas, so you could call generative AI a form of right-brain intelligence. But we shouldn't forget that we also have a left brain, the part of our brain where we're making rational decisions to optimize certain goals, maybe to decide the next best action with decisioning or process AI, or using process mining to understand where work gets stuck. Ultimately, we need to use both parts of our brains. But today we're going to focus on the right brain, on creative AI and generative AI. And as I said, generative AI is changing everything. I gave these consumer examples, but the same goes for enterprise AI. And the changes are really very rapid. You can see these gen AI providers, like OpenAI and Google and AWS, putting out new models at breakneck speed. It's very hard to keep up. And you also see more and more use cases popping up where people are using generative AI in an enterprise context. Even for me as an AI geek, it's hard to keep up with all these quick developments. And the key question here is: is there a pattern?
Yeah, if we want to predict the future of generative AI, not whether a new model will come out next week, but more where we are going with generative AI in the next month, or maybe the next year, or maybe even the next two to three years: is there a pattern to this madness? Well, we certainly think that there is, and we're going to talk today about that evolution of generative AI. We have a very dense agenda with a lot of information, so we'll go really quick. In the first part, we're going to talk more about the now: what's trending in the enterprise generative AI market, and how is Pega responding to that? We already saw a lot of that in the keynotes, but I'll summarize it in a cohesive view. But then the more interesting part maybe is where we are going with all of this. So then we're switching from the product to the lab. We wanted to bring lab coats, but couldn't find them in the end. But we're going to take a little sneak peek under the hood and really look at where generative AI might be going. Okay, so let's jump into the first part. What's happening in the enterprise generative AI market now, and how is Pega responding to that? Now, in the generative AI market, or in the AI market overall, we just had a press release this morning about a survey we've done asking enterprise leaders about AI. And you can see that it's a bit of a mix. People are very bullish on outcomes: 93% say they're going to increase their use of AI in the next five years, or that they'll be able to attribute large portions of their profit to the use of AI. An interesting point there as well: in maintaining customer relationships, we trust humans more than AI, but we trust the combination of humans and AI, the cyborgs, even more.
And on the flip side, there are challenges that we need to take into account and make sure that we address. And to back that up with some third-party research as well: McKinsey did an interesting study showing that there's only a handful of functions and use cases that will make up 75% of the benefit that generative AI can drive. And it's not a coincidence that those are the use cases that we also focus on: marketing, sales, operations, service, but also app development, for example; they're all in this category. But we also need to take into account the risks, and make sure that we address the concerns that enterprise leaders have around the accuracy of generative AI, or how to deal with privacy-sensitive data or other proprietary data, et cetera. Now, what do we see with the providers of generative AI, the underlying models and services? It's no secret that up until, let's say, last year, there were many players in the market, but OpenAI had the strongest models. That's changing. We can see that this market is developing very quickly. Competition is developing, and it's no longer the case that there's just one kid on the block in terms of quality. It also means that clients are going to look at other features than just the quality of a model: maybe also the cost, or the speed, or the features, multi-modality, you name it. And we'll be seeing more and more options on the market to use these underlying generative AI models. And as you could see this morning in the keynotes, this is how we set up our gen AI architecture: we have the power of Pega as a low-code platform that we use to build all kinds of out-of-the-box generative AI capabilities. We call them boosters. If 75% of the use cases drive the value, you need to be able to develop those journeys and features really quickly.
But we also allow you to build your own capabilities on top of this architecture. And by having this centralized approach, you can also manage the risks, right? We can filter out privacy-sensitive data, for example, before it gets sent over to the model providers. And you already heard a nice announcement earlier: we're going to support a much wider range of services. You can see here some screenshots from the lab where we've been playing around with different types of models powering Knowledge Buddy, and we're seeing very interesting, consistent results when you look at the better models of the pack. So there's definitely more choice than just going for OpenAI's ChatGPT. All right, so where are we now? Let me talk a little bit more about the various capabilities that we already have today, that you can start using today. Actually, in 2023 we already put out a wide range of capabilities. We put out this gen AI architecture, and we built out these vertical use cases across customer engagement, customer service, and intelligent operations. You can see some examples here. In customer engagement, for instance, generating variants of different types of treatments. In customer service, it could be summarizing a customer service call or providing more guidance, like the agent trainer that we saw in the keynotes. And in operations, there's a similar use case about trying to help end users resolve a particular process or a particular case successfully. But we've added more. We started to develop very specific capabilities to address the needs and concerns of the various target groups that you see here. And I'm going to review a number of those along the lines of designing more, developing more, or doing more. So let's start with designing more.
And I think that also taps into the nice slide about transformation at the end of Kerim's presentation: we're living in an age of transformation. That also means that we shouldn't just use gen AI for small productivity hacks, right? We should really think about how we can, for instance, reimagine how we want to do business. And of course, that's what we position Pega Blueprint for. Now, what does Pega Blueprint tackle? It actually helps the ideation process for new applications, this whole process that's normally hidden, in the sense that it's not visible to the developers; they're waiting until the requirements are shaped up. But obviously you want to make that more of a collaboration, let's say, between business and IT, and that's where Pega Blueprint can help. And indeed, I have similar feelings to Alan: my definition of hell is getting into a meeting room and saying, "Oh, let's design everything from scratch. Here's a stack of post-its." A much better approach, of course, is to say you have a particular business problem, I want to make an application to support an insurance claim, or maybe to apply for a new loan, or maybe to do llama rentals, a little inside joke, and then we write that prompt and out comes an application. So we all know what it looks like, and we just saw it. It looks a bit like this. No, not entirely. This is actually an AI model where you can type in a description of your process and it generates an IKEA manual. No, that would not be very useful. But this is what it looks like. And actually, you already saw that we created an additional experience on top. You just describe, at quite a high level, what your application is about, and then it builds on the best practice that's sitting behind Pega Blueprint, but also the creative powers of generative AI.
It will generate a starting point for your application that you can then further tune to your liking. And from the same application, we can actually take it and start to import it into our own Pega Platform environment. So we can basically pick up the work where the ideation team left off: a seamless integration between the two. Okay, so that's great, we can design more, but of course it would be an issue if we accelerated the design and ideation of these applications but still had the constraint that we don't have sufficient developers to build them out; then we'd still have that bottleneck. So it's really important that we double developer productivity, as Alan was alluding to. And that's also important because demand for developer skills is always growing. Sometimes there are these stories, also out of this survey: am I going to lose my job as a developer? Of course not. Never, ever, because we all know that the demand in the business to build business applications is endless. If we get quicker at building applications, we can just roll out our programs quicker, and there's more business demand that we can actually meet. But it's really important that we help accelerate that productivity. Now, with the capabilities that we already built in 2023, you could have a similar experience to Blueprint, really starting an application from scratch. But there were also, in the developer experience, all kinds of fragmented places where you could use generative AI: generate test cases, fill a form with example data, generate a pick list for a particular field, whatever it is. But now, on top of that, we want to make a bit of a one-stop shop where the developer can go and be helped in creating that application.
Yeah, so that's what Autopilot is for; that's basically generative AI for the developers. It's a very first version of where we ultimately want to go, but we can already do interesting things. We can ask questions about how to do something. For example, I want to build a particular report: how do I do that? It will actually use Knowledge Buddy on top of Pega product documentation to give you the answer of how you can build a particular capability. And if you then say, yes, I want to do this, you can just click on it and it will jump to the right place in the application where you can get this work done. In this example, I was building a report. Okay, awesome. So we can ideate faster, and then we can develop faster to make sure that all the nice ideas we generated can actually get built. But a lot of you here in the audience may be managing end users, people that need to process a claim or process a service request or whatever it is. So then your question will be: what's in it for us, right? That's really the use of generative AI to help end users resolve a particular case or process to successful resolution. And that's not just about efficiency; it's also about effectiveness, about doing the right thing. And that can be challenging enough, because we saw this picture in the keynotes too: "Getting work done in a large enterprise can be very complex." Ideally, we want to make sure that we can make almost every employee like your best employee. So what does your best employee know? Your best employee knows: a particular case comes in, oh, is this simple or is it complex? What should I do about this? What is the best action that I should take in this particular situation? But your best workers can also deal with differences in demand.
If it's really busy, they can work quicker on a particular item, whereas if there's sufficient time, they can spend more time. Or if you change your rules and regulations, they will know instantly what to do next. So it's all about bringing every employee closer to your best employee by leveraging generative AI. Now, we have a range of capabilities that are targeted more at end users. You can see them here. But a key one, again, that's fronting it all is GenAI Coach. So what is GenAI Coach? You could see it a bit as a gatekeeper to these other, more detailed capabilities: Knowledge Buddy, to answer questions based on particular text corpora, or Pega GenAI Analyze, where you can generate a report just using natural language. But Coach is the thing that fronts it all, because that's where, when we're working on a case, we can just ask all kinds of questions that will help me do a better job of resolving the case. Now, you see an example here where we're using GenAI Coach in the context of our sales automation product. So there's a salesperson here, and the salesperson is asking: for this particular account, what are the most important actions that I should take next? So we're coaching, in this case, the sales rep through her or his interaction with a particular client, based on the prompt that actually sits underneath and that you can tweak. So you can put, in this example, your sales methodology inside those prompts, making sure that GenAI Coach is doing the right things. And obviously GenAI Coach has access to all the key information that's embedded in the case, so it's really grounded, in that sense, in the reality of the particular case we're currently dealing with.
Another example, where we saw Knowledge Buddy already, is basically giving you this experience where you can ask questions on the basis of a particular domain or corpus. Maybe all your self-service documents, and then I can ask a question about how to deal with this self-service request. Or maybe you're a KYC department and you have all kinds of internal regulations, or the latest regulations from the regulator, and you can just ask questions and get a reply on how to deal with a particular situation. And we have some of our financial services customers who will talk in our panel later today, quarter past two, a little plug, about how they're using Knowledge Buddy for work instructions within a risk domain, for example. All right, so summing up: how are we responding now? These are the capabilities that will help us design more, develop more, and do more. I listed them all here. I'm not going to read them out again, but they have particular target groups: Blueprint is more for ideation, the development tools are more targeted towards developers, and tools like Coach and Knowledge Buddy are more targeted at the end users of a Pega application. Fantastic. That's what we're doing now. Part of the reason why you're here is probably that you want to learn a little bit more about where we are going with all of this, right? Because there are so many of those new use cases popping up, and on the flip side, there are so many more of these models coming out. You can see that we're supporting Google and AWS services, or planning to support them as well, and I think Google alone has access to 100 different types of models. So where are we going with all of this? How can we see the forest for the trees? Now, I think one way to look at it is not to think about where gen AI is going and whether we can find a use case for gen AI. You need to turn it around. You need to say: what kind of company do I want to become?
And how can gen AI actually help me with that? A very simple idea. So what kind of company do we want to become? What is our North Star enterprise? Or maybe, what is the template North Star enterprise that Pega has for this? Well, that's the autonomous enterprise, right? The idea of an enterprise that's almost like a self-driving car: a self-driving business that can optimize towards certain business goals by combining AI and automation. So we shouldn't be answering the question of where gen AI is going; we should be answering the question of where we want gen AI to go. And obviously, if we want to support the autonomous enterprise, gen AI itself needs to become more autonomous. That's a little bit the answer to where we see gen AI going: not just tomorrow, next week, or next month, but where we might see gen AI going in the next half year, the next year, maybe even the next two or three years. And we're going to explain this evolution with a couple of examples. So I'm abstracting away a little from the underlying gen AI services and all the use cases on top, and basically saying: there's a bit of an evolution in what gen AI is capable of on this path towards more autonomy. Let's start with the first step. The first step, we call it engineered prompts. The idea is, I have an example here of treatment generation. In Customer Decision Hub, we can generate new types of treatments based on certain inputs, in this example from a marketeer. Like, what is this treatment for? The privacy-first, I can't even read it, the privacy-first hesitant buyers, for a particular product, and based on these instructions it will generate these treatments on the fly. Now, how does that work?
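Technically, an "engineered prompt" like the treatment generator amounts to a fixed system message plus a prompt template with dynamic fields, sent to the model in a single call. A minimal sketch of that pattern is below; the template text, field names, and the "Premier Checking" product are all illustrative, not Pega's actual prompts:

```python
# Sketch of the engineered-prompt pattern: one templatized call to a GenAI
# service. All names and wording here are made up for illustration.

SYSTEM_MESSAGE = ("You are a marketing assistant. "
                  "Write short, on-brand email treatments.")

PROMPT_TEMPLATE = ("Generate an email treatment for the audience '{audience}', "
                   "promoting the product '{product}'. Tone: {tone}.")

def build_prompt(audience, product, tone):
    """Fill the template's dynamic fields and package it as one chat request."""
    return [
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user", "content": PROMPT_TEMPLATE.format(
            audience=audience, product=product, tone=tone)},
    ]

messages = build_prompt("privacy-first hesitant buyers",
                        "Premier Checking", "reassuring")
# A single call to a GenAI service would send `messages` and translate the
# reply back into a treatment; the service call itself is omitted here.
```

The point is that the end user never sees the prompt: the low-code platform fills the fields, makes the one call, and turns the answer into an email.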
I already said that enterprise AI is different from consumer generative AI, in the sense that in many cases you don't have this chat box where you're asking questions, and there's this handful of use cases that's driving 75% of the benefits. So we've built these kinds of vertical features or vertical workflows, really built end-to-end together with the low-code platform, where you may not even see the prompt. You get results back, and we do something with the results, to automatically create an email in this example. But purely technically speaking, it's just a prompt template. There's a bit of a system message, like: how do we want the gen AI to behave? There's a particular prompt that we give it, with certain dynamic fields. We send it across, we get an answer back from gen AI, and we translate it back into something that an end user will understand. Just a single call to a gen AI service with a templatized prompt. Fantastic. But how could we move that up a notch? For that, I'll go into a second example, and the second example is about what I would call basic tool usage. So what do I mean by basic tool usage? A good example here is Knowledge Buddy, and Knowledge Buddy is an example of a so-called retrieval-augmented generation system, or RAG for short. The key idea here is: let's say you have a big corpus, for example this self-service corpus on the website, and I have a certain question as an end consumer. For example, I lost my credit card, what do I need to do? Now, we can use search, but then we get tons of search hits, and as an end user, whether I'm an agent or a customer, I need to trawl through all those search results and make sense of it all. Not very nice. The other way is to say: well, I'll use generative AI. But if I just ask the AI a question like, "Oh, I lost my credit card. What do I need to do?"
It may give answers from a different bank than the one I'm a customer of, right? So the best of both worlds is if we combine the strengths of both. Take the question, first do a search on the very specific corpus, the set of documents: it could be self-service documents, work instructions, whatever it is. Get the search results, and then we use gen AI and say: hey, this was my original question, I lost my credit card. Here are 10 hits from the corpus. Now, gen AI, please answer the question. And that's basically how Knowledge Buddy works. So you see an example here, Self-study Buddy; that's an example of a Knowledge Buddy on top of Pega product documentation. We can ask it any type of question, in this example, and I get an answer. What's happening underneath is: we have this particular prompt and a particular original question, and then it searches, gives me an answer, and tells me which search results the answer was actually based on. So underneath there's this prompt. Here's an example which is pretty close to an example prompt in Knowledge Buddy, where we give it some instructions and we say: based on the question, first do a search, and then, based on the search results, give me the end result. Here we see a real example where some of the search results are at the bottom, and based on the combination of the two, gen AI can answer my question. So that's an example of tool usage, in a way. Basically, we gave gen AI some simple tools, a search engine and a set of documents, and in a very scripted manner we tell it what to do: first search, then answer. Now, if you then think about how we could become more autonomous, a logical question pops up: what if we gave way more tools to the gen AI? And what if we didn't really script what the gen AI needs to do? Maybe we give it some vague instruction of what our goal is, and it needs to sort it out itself.
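The scripted search-then-answer flow behind Knowledge Buddy can be reduced to a toy sketch. Everything here is an assumption for illustration: a three-line corpus, a naive keyword-overlap scorer standing in for a real search engine, and a prompt whose wording is invented, not Pega's:

```python
# Toy retrieval-augmented generation (RAG): search a fixed corpus first,
# then ground the model's answer in the hits. Illustrative only.

CORPUS = [
    "If your card is lost, freeze it in the app and call support.",
    "Replacement cards arrive within five business days.",
    "Reset your PIN at any branch ATM with valid ID.",
]

def search(question, top_k=2):
    """Naive keyword-overlap scoring stands in for a real search engine."""
    words = set(question.lower().split())
    ranked = sorted(CORPUS,
                    key=lambda doc: len(words & set(doc.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_rag_prompt(question):
    """Step 1: search the corpus. Step 2: hand question plus hits to the model."""
    hits = search(question)
    return ("Question: " + question + "\n"
            "Search results:\n" +
            "\n".join("- " + h for h in hits) +
            "\nAnswer using only the search results above.")
# A real Knowledge Buddy would now send this prompt to a GenAI service
# and return the answer along with the search results it was based on.
```

The scripting is the point: the flow is fixed (first search, then answer), and the model never chooses its own tools. That is exactly the limitation the next step removes.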
It needs to understand and learn how to use these different tools, in a particular combination, in a particular plan, to achieve a result. People have been working on this for a couple of years, and this is an example of an early paper that looked at whether we can use gen AI to actually figure out when to use what kind of tool. And you could see that as part of a much older area, which is called multi-agent systems, and we have current technologies like LangChain, et cetera, that tap into that. But basically, that's the path that we're on. And I said something about the Skunk Works, and the Skunk Works here is that we're actually building these agent capabilities deep into the platform. The green ones are capabilities that we've already built, the yellow ones are ones that we're working on, and the gray ones are capabilities that we will build out later. And there's a distinction, in the sense that they're built into the platform and we're starting to use these capabilities in the applications that sit on top. Ultimately, we want to enable you to actually leverage those capabilities yourself. But I said Stijn was really involved in this kind of work, so I want to give him the floor to talk a little bit more about what we're doing here.

- Yeah, thanks Peter. And just while we're on this slide, to highlight some of the things: you see things like negotiation here, and you see things like communication, and if you're familiar at all with the world of multi-agent systems, those might be very familiar, but they're not something we see in the generative AI world yet. So we do see a kind of transition from this long-running world of multi-agent systems onto this newer trend of making these agents work with models. Which is also something you don't see here: it's not dependent on just one model. It's also not dependent on training that model, or using your own data to fine-tune models and have big, expensive training jobs or anything like that. No, these are really flows on top of the model, and you can theoretically use any model that's good enough to really get these kinds of use cases working. And actually, you've already seen one such example of a use case which is very agent-based, in a keynote today: the Customer Decision Hub Assistant is our first jump into this world in product. And we've been experimenting and gaining a lot of knowledge and experience in how we actually build this out properly. How do we make this useful? How do we make it reliable? I think you saw a pretty cool example that Kerim showed in the keynote, where the CDH Assistant is the chatbot you talk to, but it can actually do more things in the background for you. And you don't need to be aware of everything it does, but you can interact with this one model, this one agent, that will go run off, do things for you, and then come back in the end. And what I think is interesting, at least when I see keynotes like today's, is that I always think: okay, what's actually happening behind all of this, right? We call it an agent, we talk about tools and things like that, but what's actually happening?
So I thought maybe we can open up the box a little. Let's take this as the first example: if we have the CDH Assistant, ultimately it is of course a model. It's an LLM, an agent that has access to certain tools. And we can build out these tools as their own thing; you can think of each as a function, anything you can run. The tools we give to the model are the things that it can do, and from that it kind of makes sense that if we give it more tools, and if we give it better tools, the model will also be better at doing what you want it to do. So here you see some internals, and what actually happens in the background is: you get a user question, right? The model looks at that question and tries to understand it. So in this case, if it wants to know the accept rate of some name, it first needs to figure out what that is, right? It does some resolution; it first needs to do some reasoning to verify some assumptions. And then we let it generate a plan. So we don't say, okay, just go ahead and do what you think you should do. No, we let it think about: okay, how do we solve this? How do we get to a solution? And you don't have to read the text on the right here, but that's the way we see this being split up. We have a system message: okay, what does the agent do? There's some configuration and data about the current system. And then it comes up with this plan, where it thinks of multiple steps to do in a certain order, then executes it, and then finally comes back to the user.
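That reason-plan-execute loop can be caricatured in a few lines. Here the plan steps, the tool functions, the "WebClickThrough" action name, and the accept-rate value are all invented for illustration, and a stub stands in for the LLM's planning call:

```python
# Caricature of the agent loop: plan first, then execute the steps in order,
# then come back to the user. All names and values here are illustrative.

def make_plan(question):
    """Stub for the LLM planning step: decompose the request into ordered steps."""
    return ["resolve_name", "fetch_accept_rate", "summarize"]

# Each tool reads and extends a shared state dict.
TOOLS = {
    "resolve_name": lambda s: {**s, "action": "WebClickThrough"},
    "fetch_accept_rate": lambda s: {**s, "rate": 0.12},
    "summarize": lambda s: {**s, "answer":
        "Accept rate for %s is %.0f%%" % (s["action"], s["rate"] * 100)},
}

def run_agent(question):
    state = {"question": question}
    for step in make_plan(question):   # execute the plan in order
        state = TOOLS[step](state)
    return state["answer"]             # finally, come back to the user
```

Letting the model commit to an explicit plan before acting, rather than improvising step by step, is the design choice Stijn describes: it makes the agent's behavior inspectable and easier to gate.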
And what I thought was interesting, when I was thinking about this presentation and also thinking back to last year's PegaWorld, is that we already had a very similar example to what we're discussing here. At last year's PegaWorld, Don presented an AutoPilot use case where we asked it: okay, can we optimize a claims processing case in Infinity to reduce our SLA? And you see it already starts running, gives you answers, and asks you: okay, I can optimize this for you. Do you want me to do it? You say yes, it goes on, introduces some changes, and gives you the results there. And when I saw this, of course, the same question came into my mind: okay, well, how? How does that work? So, given all the experience we've had building out these tools in the last year or so, let's actually take the use case we just saw and try to implement it. Don't worry, I won't go into code or anything, but let's conceptually think about how we could actually get this done with the technology we have today. So, you know, we open up, whether it's Coach or AutoPilot, we open up an agent chat window and of course we ask the question, right? And these LLMs are very good at reasoning. So at first it needs to think about: okay, what do I need to do as my next step? In this case, I give it three tools. Let's give it a process mining tool. Let's give it an Infinity tool to actually interact with Infinity. And then, of course, a Buddy tool, right? The Knowledge Buddy: it needs to have access to some knowledge sources, to some documentation, to properly understand your question, but also to embed it in the whole domain. So it thought a bit and considered its options, and the first thing it considered with this question was: okay, if we want to optimize something, let's actually figure out what the problems are, right? What are the bottlenecks? And so it called the process mining tool.
And it, you know, doesn't just activate something, it actually asks the question in natural text. And that's because on the other side of that process mining tool there's also an agent, right? We talk about this multi-agent-systems world. Well, this could be anything, but in this case we could have an agent on the other side of the process mining tool that is very specialized in process mining, and it can actually call out to Pega's process mining tool through APIs or some other method. And what it's really good at is doing analysis and finding bottlenecks on real-time data: analysis, finding root causes, that sort of stuff. But for the main coach or assistant or agent, it's just another tool. So it asks the question to the process mining tool, and it simply comes back with a response. And maybe it found three bottlenecks here, two of which I've hidden, but one could be that 75% of the claims cases are initially misrouted, or not routed optimally, and there's some delay there. So, okay, that's an analysis. We know there's an issue somewhere, but we haven't actually done anything yet. We asked it to optimize, and we're clearly not done. So if we send this response back to the model, it's going to think again: okay, what did I just see? Am I done? Can I go back to the user? And in this case it says, well, no, let's first actually reach out to Knowledge Buddy and see if Knowledge Buddy can help here, right? So it calls the process Best Practices Buddy. It says, okay, we have this problem of the misrouted cases, can you suggest ways of improving this? Can we do something about it? And this just goes out to Knowledge Buddy. It has access to documentation, maybe the client's own documentation, maybe some best practices. And it, well, obviously comes back with an answer, right?
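The point that "for the main agent, it's just another tool" can be made concrete with a structural interface: the caller only sees a text-in, text-out function, while behind that boundary there may be a whole specialist agent. Again, all names here are hypothetical stand-ins, and the analysis is stubbed rather than calling any real process-mining API.

```python
# Sketch: a "tool" that is itself a specialist agent behind a
# natural-language interface. The caller never knows the difference.
from typing import Protocol

class Tool(Protocol):
    def ask(self, question: str) -> str: ...

class ProcessMiningAgent:
    """Specialist agent; in a real system it would call a process-mining
    product over its APIs. Here the findings are hard-coded stubs."""
    def ask(self, question: str) -> str:
        return "; ".join(self._analyze())

    def _analyze(self) -> list[str]:
        return ["75% of claims initially misrouted",
                "approval step waits on a nightly batch"]

def call_tool(tool: Tool, question: str) -> str:
    # The main coach/assistant only ever does this: text in, text out.
    return tool.ask(question)

reply = call_tool(ProcessMiningAgent(), "Find bottlenecks in claims handling")
```

Because `Tool` is a structural protocol, a plain function wrapper, a retrieval buddy, or a nested agent all satisfy it without any shared base class.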
And it tells you, okay, how do we actually introduce Process AI in this process: we can introduce Process AI, which would give us a machine learning model, a self-learning adaptive model with decisioning, to really optimize this one decision point. And of course it can tell you how to do that, right? It knows how you introduce a model in Infinity; it knows how to use Process AI to solve this problem. But the model also has this other tool, the Pega Infinity tool. And you know, this might seem very abstract, but if we go into this tool, it kind of thinks again: okay, let me see if I can actually do this myself, right? Because the Infinity tool can also consider, okay, what kind of options do we have available? What kind of functions can we call to get this done? And it actually starts to create a plan. So the Infinity tool will create a plan on the fly. It will also, of course, tell you if something is not possible, right? So if it's not possible, it won't claim it can do it for you, which is also quite convenient; the whole hallucination thing is a big problem for these kinds of use cases. And of course, it doesn't do anything before approval, right? Because what we get back is a plan. So in this case it says, okay, first we need to know where we introduce Process AI, then we need to create a model, and then we introduce it into the decision, right? A bit of a simplified plan, but let's just say it comes up with these three steps, and it actually has the tools to do this. So then comes the interesting part where the model sees all of these answers, right? It's seen three answers now, and now it needs to reason again, and in this case it's fair to assume it's kind of done for this step. And so it lets you know, okay, I found three bottlenecks. But actually it doesn't tell you everything it's seen. It doesn't tell you how to introduce Process AI to do this yourself.
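The "plan first, act only after approval" pattern described here can be sketched as a tiny state machine: the planner refuses to produce a plan for anything outside its known capabilities (rather than hallucinating that it can), and nothing executes until the plan is approved. The capability names and helper functions below are illustrative, not the real Infinity tool.

```python
# Sketch of plan-before-act with a capability check: out-of-scope
# requests yield no plan, and unapproved plans are never executed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Plan:
    steps: list[str]
    approved: bool = False

CAPABILITIES = {"add decision point", "create adaptive model", "wire model into decision"}

def make_plan(requested_steps: list[str]) -> Optional[Plan]:
    # Refuse rather than hallucinate: any unknown step means no plan.
    if not all(step in CAPABILITIES for step in requested_steps):
        return None
    return Plan(steps=requested_steps)

def execute(plan: Plan) -> str:
    if not plan.approved:
        return "Plan pending approval; nothing executed."
    return f"Executed {len(plan.steps)} steps."

plan = make_plan(["add decision point",
                  "create adaptive model",
                  "wire model into decision"])
```

A usage note: `make_plan(["delete production data"])` would return `None` here, which is the sketch's version of the tool saying "I can't do that" instead of inventing an answer.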
And it doesn't give you instructions, because it knows it can already do it for you, so why would it ask you to do it? And obviously that's what you want. So if you say, okay, please do, I don't think I need to go into all the detail, but it's going to think, it's obviously going to call the Infinity tool, and it's going to ask Infinity to go ahead and execute all three steps of the plan that was previously generated. And then once it's done, it comes back to you with a nice status report: okay, it's done. And in this case, it put the changes in a change request, it's done some testing, it's figured out on a dev system that this works, the model is functional, and nothing breaks. But of course we're not going to deploy it to production straight away. There should be some human in that loop. So it asks you, okay, here's the change request, please have someone look at it to actually promote this to production. So I hope this very brief overview at least gave you some idea of how this could work conceptually. More than happy to go into more detail, but this is probably not the right venue for that. What I do want to close with is that, while we've been experimenting and doing a lot with this technology and have gotten very excited about it (I showed you some screenshots), it's still pretty manual work. And that's actually why in 24.2, the upcoming release, we'd really like to get everyone else hands-on with this technology as well. We would really like you to be able to play around with these autonomous agents, given all the things we've learned while developing our own use cases. And this is where the new Coach rule, the evolution of the Coach rule, comes in for 24.2. So this is just an example of the UI, and it's probably not final, but I've given it some instructions here and I've shown it, okay, these are the tools you have.
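The human-in-the-loop gate described here, where the agent may build and dev-test a change on its own but promotion to production requires explicit human sign-off, can be sketched as a change-request lifecycle. The class, stages, and reviewer field are invented for illustration.

```python
# Sketch of a deployment gate: the agent can move a change to TESTED
# autonomously, but PRODUCTION is unreachable without human approval.
from enum import Enum, auto

class Stage(Enum):
    DRAFT = auto()
    TESTED = auto()
    APPROVED = auto()
    PRODUCTION = auto()

class ChangeRequest:
    def __init__(self, description: str):
        self.description = description
        self.stage = Stage.DRAFT
        self.reviewer: str | None = None

    def run_dev_tests(self) -> None:
        # The agent can do this step without a human.
        self.stage = Stage.TESTED

    def approve(self, reviewer: str) -> None:
        assert self.stage is Stage.TESTED, "only tested changes get reviewed"
        self.reviewer, self.stage = reviewer, Stage.APPROVED

    def promote(self) -> None:
        if self.stage is not Stage.APPROVED:
            raise PermissionError("human approval required before production")
        self.stage = Stage.PRODUCTION
```

Calling `promote()` on a draft or merely tested change raises, which is the code-level version of "please have someone look at it first."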
Well, we can specify here, okay, we have these two kinds of knowledge tools, right? We have Knowledge Buddy, and we might have our process mining tool, which we can ask questions of and get data back. But also actions: in this case, maybe there's an Infinity tool, and maybe we want to split this up into multiple different things. But we can give it some capabilities so it can actually do stuff for you, through a UI that you can configure in Infinity. And in this way, we'd really like everyone to be able to get the power of these autonomous agents. So to get more information on this Coach rule, there is actually a generative AI architecture booth in the Innovation Hub. There are some very good colleagues there who can tell you all about how this is going to help you in the future. And I'll give it back to Peter.
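The distinction just drawn between knowledge tools (question in, data back) and action tools (can change the application) is the part worth modeling in any such configuration. Purely as a hypothetical sketch, since the actual 24.2 Coach rule format is Pega's own, a configuration could separate the two kinds like this:

```python
# Hypothetical tool configuration for a coach-style agent: "knowledge"
# tools only answer questions; "action" tools need approval gates.
coach_config = {
    "instructions": "Help users optimize claims workflows.",
    "tools": [
        {"name": "knowledge_buddy", "kind": "knowledge"},
        {"name": "process_mining",  "kind": "knowledge"},
        {"name": "infinity",        "kind": "action"},   # can modify the app
    ],
}

def action_tools(config: dict) -> list[str]:
    # Only action tools should ever be wired through a human-approval step.
    return [t["name"] for t in config["tools"] if t["kind"] == "action"]
```

Separating the kinds in configuration means the approval machinery can be applied uniformly to every action tool rather than remembered per tool.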

- Okay, thank you Stijn. Really nice. A question that I got when we were showing this to people was, "Oh, well, how far away are we from actually having this?" Right? And I think it's important to reiterate that these core capabilities are already built into the platform, and we gave them to other people within Pega who want to build their own features, like the CDH intelligent assistant. That's good for us because we want to trial-run this a lot, but now we're starting to focus on exposing it so that you can configure it yourself through the gen AI Coach rule, right? So a lot of this is already there, hiding in the platform. We just now need to make it more accessible, provide a nice design-time experience, et cetera. But back to where I started, where I said there are so many changes, new models, new use cases, and it feels very technical. Is there some development where we can say, well, that's a development that will still be valid for the next month, the next six months, the next year, or maybe the next two or three years? And that's where we see this move towards more agentic versions of generative AI. That will be an important evolution, right? And if you look at our current capabilities, many of them fit into the first bucket right now. Anything in Pega 2023 fits in the engineered-prompt bucket. I think Knowledge Buddy is the only one that we would call basic tool use, and maybe the intelligent assistant in CDH fits in that autonomous-agent box. But obviously we're moving more and more of this to the right. There will always be use cases that you can solve, for example, with a simple engineered prompt; you don't need to use these agents for everything. So 2024 is truly the year where we're letting generative AI out of its cage. We're keeping it on a leash, yeah, that's important for enterprise, but we do open the cage and we take generative AI here for a walk.
If you want to know more about our general approach to AI, we wrote this manifesto. So this is just a quick plug: we worked really hard to nail down, in only nine sentences or so, what a good way to generate responsible value with AI looks like. It has two sides of the coin: how can you create benefit, and how can you do that in an ethical, responsible way? As Stijn said, you can visit the gen AI architecture booth in the tech pavilion, but frankly, pretty much all the booths in the tech pavilion have some form of gen AI actually built into them. So with that, we maybe have time for one quick question, and then we'll also stay around for people who want to chat. If there's a question from the audience, just walk up to the mic. If not, then we'll close it down.

- There's one.

- Thanks. Well, is there one? If not, then thank you very much for your time and enjoy PegaWorld.

Related resource

Product

Application design, revolutionized

Optimize workflow design, fast, with the power of Pega GenAI Blueprint™. Set out your vision and see your workflow generated on the spot.
