PegaWorld | 42:45
PegaWorld iNspire 2023: Pega Process Mining 101: Continuously Improve and Transform Your Pega Workflows with Process Mining
Although it started as an academic research field in the late 1990s, process mining has since evolved into a powerful business tool that allows organizations to get true visibility into their end-to-end processes in a way that wasn't possible before. With that visibility comes a superpower: the ability to take a data-driven approach to process improvements, including advanced analyses, simulating the impact of potential improvements, and monitoring the real effect of those improvements over time. Join Kleber Stroeh, VP of Process Mining at Pega, for a crash course in the power of Pega Process Mining and a look at the amazing impact it can have on your Pega workflows.
Transcript:
Hello, good afternoon everyone. Welcome to our Process Mining 101 breakout session. There's some seats here, guys, please feel free, take your time. I know I'm the last presentation before drinks and before Allen playing chess against 15 people. So thank you for being here. I really appreciate your presence. I'll try to make this as light and as insightful as I can, but I need to establish some rules here. I hate talking alone, so please ask questions. It's a small room. Ask questions, raise your hand. Let's make this a big conversation. Is that fine? So, let's warm up before I start, please raise your hand if you've heard of process mining, but don't really know what it is and came here to learn something about it, please. Awesome. Now raise your hand if you've been exposed to process mining but still haven't had a chance to really run a process mining project, please. Cool. Last group now, please raise your hand if you have used process mining in your current position or a previous job, or in some practical experience. I should raise my hand, right? Otherwise I wouldn't be here. Okay, good, awesome. So this is what we're gonna do today. I'm gonna try to answer some questions. First, I'll try to say who I am, why we should care about process mining, what process mining is, how it works, when you should apply it, and if you got interested in it, where you can go next. This is what we're gonna cover. Simple questions, six questions, that's all, okay? So quickly about myself. I'm Kleber Stroeh. I am the VP of Process Mining at Pega. I joined Pega one year and a month ago through the acquisition of EverFlow. So I've been dealing with process mining for some years. Before co-founding EverFlow, I also co-founded Icaro Tech; that was the parent company of EverFlow. I hold a master's in computer science with emphasis in machine learning. I've been working in the IT sector for 25-plus years, emphasis on the plus. It's almost 30. I do some research on process mining.
I'm a proud father of two beautiful girls, devoted husband, spent my Valentine here. Not at home, but that's fine. Valentine's Day in Brazil is on June the 12th, for you to know. And when I'm not doing process mining, I love playing the guitar and singing, and I'm not on the strip doing it right now because I really suck at it. So I'd rather do process mining. Okay, here we go. So why do you need process mining? So I'll tell you a story. It's about your friend. It's not you, it's somebody else, right? Somebody you know. Basically their boss came to them and said hey, I've got a mission for you, special job. I want you to go and exterminate all the inefficiencies in our processes and automate everything that makes sense. Ever heard this before, sound familiar? And then what do you say to your boss, I mean, what does your friend say to your boss? Yes boss, I'm doing it. And then you use the power of the force and try to exterminate it, but you can't see a thing, right? This is a very common problem. How can you fix something you can't see? This should be the first hint why process mining is important. So trying to be a little bit more serious here, right? It's a business conference after all. So this is a prediction by Gartner, that in two years, 80% of all organizations are going to use process mining to reduce cost or to drive automation in at least 10% of their operations. It's becoming serious, it's becoming real, okay? So we should care about that. It's a great ally in that mission of automation and exterminating inefficiencies. So what is process mining? I like to see process mining as a discipline. So you go to college, you take, I don't know, linear algebra, you take algorithms, you take data mining, you take process management. Now you take process mining. It's a discipline that sits in between data mining and process management. And it borrows from data mining the very concept that the data generates a model automatically.
And that model helps you understand the data, helps you understand the process in this case, helps you make decisions and helps you predict the future. But it's not data mining because the type of data we deal with is specific. It's events, things that happen to your process instances. So we use specific data and we provide specific help for things that have to do with process management. Does that make sense? This is yes, this is no, yes. Thank you, awesome. So if I had to explain process mining in some other way, this is what I would say. We all have expectations about our processes. This is natural. If I asked any of you here about one process in your organization, I'm pretty sure you would explain back to me something like that looks like this paved path. But reality seems to be a little bit different. So if we use process mining, got the data, the events that now, you know, that we use, put it in the process mining tool and look for what is actually happening, we would find those shortcuts across the grass. And that's awesome. That's the best thing in the world. Now you ask me why. Because either somebody found a new way of doing the job that is less costly and takes less time. And if there's no side effect to that, that's the next way you wanna do it. You're learning from your own experience. Or somebody decided to open a bank account and not do the know your customer phase. And that's not okay. So either you are reducing cost and time, or you're reducing risk or friction or generating more value, more revenue, okay? Because now you can see what your process really does, not what you think it does. So far so good? Cool.
Well, everybody's talking about process mining. Seems like the latest thing, the newest thing. It's not that new after all. Process mining has been around for over 20 years now. The first work, the first paper on process mining was published still in the previous century. The first algorithm for discovery was published in the year 2000, the first algorithm for conformance checking in the year 2004. The first commercial application, the first company founded on the idea of process mining, was born in 2007. It's no longer amongst us, to be honest. And then in 2011, the first book on process mining, written by van der Aalst, was published. And probably the inflection point was in 2014 when Wil published a course on Coursera on process mining and really made it available to the masses. And then people started paying attention to process mining. 2019, we had the first international conference on process mining. I was there. And last week, this is a big milestone for us, we GA'ed our process mining tool at Pega, okay? Brief history. One thing that people often ask me is, okay, there's process mining, there is task mining. What's the difference? The thing is, with process mining, we use the event logs, I mentioned this before, to understand your process end to end. It's really good at understanding the life cycle of your cases, right? Seeing how they perform, where your bottlenecks are, rework, slow transitions, everything. With task mining, we can analyze what people are doing on their desktops, how they perform their daily operations. So that's really awesome when, for example, somewhere in the execution of a process you have manual steps for which you don't have logs. And in that case, you can rely on task mining to provide extra understanding of what's going on there. So I usually say that with process mining you go wide, with task mining you go deep. And the combination of both is unique, and that's what Pega has, okay?
And very nice, but where does this fit in the autonomous enterprise? Well, it's not difficult to explain. Think about it. Pega is very good, well known, globally known as a leader in the automation space. We've got workflows, we've got robots, we've got everything there. We also have the intelligence, process AI for example, to optimize the execution of that. What process mining and task mining are doing is they're helping us close this circle, where we can now analyze how our processes really work, identify opportunities for further automation, and then use the tools that we have to automate that, and then optimize that, and then analyze again. And if you start doing that fast enough, you're basically fulfilling the vision of the autonomous enterprise. Self-healing, self-optimization, and all that stuff. It's just a matter of speed. So process mining is a key element of the autonomous enterprise. So far so good, makes sense? Yes, yes, yes, no, no, yes, thank you.
So, and why is this so important? Well, we live by this mantra of build for change. And this is so true. Like, when you go implement your workflow, when you put it in production, the very next day it has changed, because people are using it, and people use it in ways that you have not anticipated. And then the market reacts and there are changes and you need to react to that. So the only thing that is certain is change. And the fact that now we can understand what that means is really a very helpful ally in this vision, in this build for change, okay? We understand what process mining is, but how does it work? Basically, process mining relies on events. Some people would say these are footprints in the digital sense. So these are log files, these are log tables, audit tables, history tables. They can come from any system, Pega platform in particular but not only. So we can get those in ERP systems, CRMs or even legacy systems. It doesn't matter, as long as you have those event logs. So this is an example of an event log coming from a real system, doesn't matter which one it is, but we need just three pieces of information: case ID, timestamp, and activity. So if somebody asks me hey, can I do process mining? First question is, do you have logged events? If you have those logged events, do they have these three pieces of info? Case ID being the identification of that process instance, whatever you call it. In Pega we call it case ID. It's a coincidence. But it could be a purchase order number, it could be an invoice number, it could be a ticket, an incident, whatever, right? Timestamp, well, I don't need to explain that, alright? Activity is whatever happened to that case ID at that moment in time. It could be stages and steps in a process, the way we see it at Pega, or it could be statuses in a process. It could be anything. If you have that, you can do three special things. The first is the obvious one, we can do discovery.
I started this conversation with this, right? So basically the event log comes in and you create that model that explains what that process looks like. This is discovery. That model could be a DFG map, it could be BPMN, it could be something like that. The second thing is conformance checking. So basically now somebody provides a model saying this is the way we should be doing our work. And then you get the event log about what has really been done. And you contrast the two, because you want to understand when you did not follow the model that you proposed. Why do we care about it? Well, this is everything that auditors, internal controls, or people who are interested in compliance in general are interested in. And last but not least, you can do enhancement. Basically enhancement is discovery on steroids, because it helps you understand where your slow activities are, where your bottlenecks are, where you have rework and faulty states, and how you can fix those.
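To make the three required fields concrete, here is a minimal sketch of the discovery step, building a directly-follows graph from (case ID, timestamp, activity) events. The data, names, and Python code are purely illustrative; this is not Pega's implementation, just the core idea.

```python
from collections import defaultdict

# A toy event log: each event is (case_id, timestamp, activity),
# the three pieces of information the talk says are required.
events = [
    ("PO-1", "2023-06-01T09:00", "Create"),
    ("PO-1", "2023-06-01T10:30", "Approve"),
    ("PO-1", "2023-06-02T08:00", "Ship"),
    ("PO-2", "2023-06-01T11:00", "Create"),
    ("PO-2", "2023-06-01T11:05", "Ship"),  # skips Approve, a "shortcut across the grass"
]

def discover_dfg(events):
    """Build a directly-follows graph: edge (a, b) -> how often b directly follows a."""
    traces = defaultdict(list)
    # Group events into per-case traces, ordered by timestamp.
    for case_id, ts, activity in sorted(events, key=lambda e: (e[0], e[1])):
        traces[case_id].append(activity)
    dfg = defaultdict(int)
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dict(dfg)

dfg = discover_dfg(events)
# dfg now maps ('Create', 'Approve') -> 1, ('Approve', 'Ship') -> 1,
# and the shortcut ('Create', 'Ship') -> 1, which discovery surfaces.
```

A real tool would also annotate edges with frequencies and average durations, but the grouping-by-case-ID step is the essence of discovery.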
And this is where process mining really helps Luke Skywalker on his mission to understand what he can do, you know, for further automation or to improve the process, okay? Process mining is something we show, we don't talk about. So discovery can be exemplified in the way we show what the process looks like. So we have three examples here. The first drawing on the left is a DFG, a directly-follows graph. It's a very simple graph explaining how activities are executed after each other, how often they are executed, how often you go from one to the other, on average how long it takes to go from one to the other, and so on, okay? This is very easy to understand. Or you can have something more formal like a BPMN notation, that you can use for documentation of your processes or to export it to another system. Or you can do a replay. Basically it's Netflix, you just play it and see your process, how it executed in the last month for example. Okay, this is discovery. Conformance checking is how did I execute compared to a model. So in this case, what it will tell you is that a certain execution path was not expected, and this is how you fix it. So some things should not have happened but did, and some things that should have happened didn't. And it will show you that, so it's easier for you to fix it. This is very unique to what we do. Last but not least, we've got enhancement. And with enhancement we want to show you, you know, where your reworks are, what the root causes of those reworks are, bottlenecks, and even dashboards. Because with dashboards we can connect the story of that workflow with attributes of the business, 'cause I don't only want to see that I have a bottleneck. I want to understand if that bottleneck is associated with product A or B, or with, you know, branch A or B, or customer type C, D or E, whatever.
So with attributes about cases, about the business, we can bring color to the analysis of the process and make it easier to fill in the gap between the technical analysis of a process and the impact it has on the business. Make sense? Awesome. So I have a very short video just to show a little bit how it works. If you wanna see a real demo, at 4:00 PM you go down, get a drink, stop at one of the three booths we have, two for process mining, one for task mining. And you can see it there live. Here, it's just a quick video. Basically this is a claims process, so you can see all the activities that are being run, and there you go. So now we can change the KPI, we can see times, we can see which times take longer. We can replay the process like I mentioned before, you can see the process in execution. You can look for off-the-shelf analyses such as slow transitions: so, reviewing a claim is taking too long. You can check reworks: again, you do reviews more than once. You look for a root cause and you see that not providing information about location is creating the problem, is forcing you to go back. And then you can use your dashboards to verify the root cause analysis and do some extra exploration yourself. Okay? Everybody understands what process mining is, how it works? Awesome. Next question is, when do I use it, right? So here are some possible applications. The first one, the obvious one used in 80% of the use cases: process optimization and automation. Basically you can use process mining to identify where things are going slow, what problems you have, and how you can use either workflow, RPA, whatever technique, to speed it up, right? And you can use numbers to prove that that's the right thing to do, and you can simulate the change before you do it. Okay? Then we have operations management. Basically we are following up the execution of the process over time.
So it's not a one shot thing, but a continuous thing so that we can understand how our processes are evolving over time. If the changes we made got the results that we wanted, it's knowing what's going on based upon the data that we are collecting. The third one is kind of interesting because it's about customer journeys.
Think about it for a moment. If those events that we're using to analyze are coming as a result of the interaction of your client with your systems, whatever system it is, when you put it in this map, what you are analyzing is their experience. You're basically putting yourselves in their shoes and seeing how long it takes to, you know, dial one and then three and then four. And then he gets lost, he goes back. You can retell that story based on the data, really feel their pain, and that makes it so much easier for us to improve their experience. And last we've got compliance, auditing, monitoring, also known in process mining as conformance checking. If there is a certain way or some ways of doing things, now we can contrast all of our execution, not a sample of it like auditors usually do, all of it, against that model to really show you where you're not following the model, and even the root cause. Why? In which situations does it happen more often? So that you know how to fix it. So we GA'ed it last week, some clients have had early access to it, they have partnered with us doing this, and this is what they're saying. So this is how they're envisioning using it. The first, they want to stay as number one in the client service space in their industry. And they have been playing with task mining, and they say that it got them to a certain point. Now they really envision that process mining will be the next leap forward, the next move. Another client, another region, basically saying hey, we're gonna use this to reduce cost, and they have a target for it, 7%. And the last one is saying hey, we always suspected some things in our process. Now we can prove it. Now we can back up our claims with data. So it's much easier to ask for investment, to convince executive levels, because you've got the data to back up what you're saying. Okay. Then well, we're not alone in the market. There are other competitors out there. Why are we different?
So the first thing, it should sound obvious: you should make things easy to use. But I mean it. When I say easy to use, what we mean is that we want process mining to be used by people who are not IT people, but people in the line of business, because they are the ones who understand their business. And now we're trying to take the data science part out of the equation and make it so easy, so accessible, that somebody who is not an IT person can just, you know, drag and drop, click and explore it themselves, and do something with it. Now think about it. If you give it to everyone in the operations, you're basically making everyone an agent, a sentient agent that has knowledge to help make that operation better. It's a whole shift in the paradigm of how we optimize stuff. Second, powerful insights. You've seen it there, you click there, you've got your slow transitions, you've got your reworks, they're all there right after the import of the data. And there's root cause, and it's right there for you. And last but not least, we built this on top of big data technology. So if you wanna analyze millions, dozens of millions of cases, we're good, we can support it. Wanna learn more? Two things. And they are not these things here, because we were supposed to make this presentation before lunch and then we had a meeting at lunch; that meeting already happened, so never mind. So, what you can do now, live here: go to the innovation hub, two booths on process mining, one on task mining, ask them to show you this, okay? Or you've got a QR code here. So there's a page with lots of information there. If you wanna scan it, I invite you to, okay? And I open for questions and expect lots of them, 'cause we have 20 minutes and you're not drinking before the 20 minutes. You're gonna ask questions, yes.
Is it working?
Yes.
I'm probably a bit slow of mind, but I just want to have some confirmation. You mentioned that you use discovery to create a model.
Yes.
And then you were going to use that model to analyze it. But I can imagine that when you use discovery for a model, it's a model that has been created by implementing a process, which could be a different model than the one you would like to have. So I can also imagine that you also have the model that was originally created by the business for reference in the second step.
Yeah.
Is that possible?
Yes, yes, yes. So when we talk about discovery, that's exactly it. Basically you're using the data to create a new model that is based on what really happened, not what you wanted to happen. It's reality. When you do conformance checking, then you get your original model, the one that you intended to use, and use that to contrast with what's going on. So they're different things. You can work with both at the same time. Appreciate it, thanks for the question. Next question.
[Attendee] Yeah, so how is this particular tool supposed to capture the data, when the processes are running in different systems? I mean, it has to collect that data, right?
Yes.
[Attendee] I mean you mentioned those three fields, at least those need to be captured. The case ID, the timestamp, and the activity.
Absolutely.
[Attendee] So, so is it like, is it having like different connectors for different systems to capture those okay?
You just answered the question, different connectors. Yes, yes.
[Attendee] So, okay, so it has inbuilt connectors, okay.
We use different connectors for different things. Of course we have a connector for Pega platform that is off the shelf. You just say, you know, you just say what, what case type you're interested in, pull it in. That's easy. But we've got connectors for JDBC, we've got connectors for rest APIs, we've got connectors for different file formats. Oh, I'll export the CSV and import, that's fine. It works too. Different methods for importing, yes.
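The connector idea described here, mapping whatever a source system exports onto the three canonical fields, can be sketched in a few lines. The CSV content, column names, and field map below are made up for illustration; they are not the format of any actual Pega connector.

```python
import csv
import io

# Hypothetical CSV export from some source system. Column names differ
# per system, so the import maps them onto the three canonical fields.
raw = """OrderNo;EventTime;Status
INV-7;2023-05-01 09:12;Received
INV-7;2023-05-03 14:40;Paid
"""

# Per-system mapping onto case_id / timestamp / activity.
FIELD_MAP = {"OrderNo": "case_id", "EventTime": "timestamp", "Status": "activity"}

def import_csv(text, field_map, delimiter=";"):
    """Read a delimited export and normalize its columns to the canonical schema."""
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return [{field_map[k]: v for k, v in row.items() if k in field_map}
            for row in reader]

events = import_csv(raw, FIELD_MAP)
# events[0] -> {'case_id': 'INV-7', 'timestamp': '2023-05-01 09:12', 'activity': 'Received'}
```

A JDBC or REST connector would do the same normalization, just with a query or an API call in place of the file read.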
[Attendee] Okay, yeah, and I think it's just that it's not gonna put that extra load onto those other systems for like, because every time, every transaction, every event that is capturing those events, right so.
Thank you, you should be up here, man. Yes, thank you, yes. We're not putting any extra load on the transactional system, because we make a copy. So we keep a copy in our system, so that we do all the processing on our infrastructure without hurting the transactional system. Yes, that's true, thank you. We had a question here. Oh here, sorry, you're next. Yes?
[Questioner] So first off, thank you. This has been the best presentation 'cause you've gotten to a level that like I like so appreciate that.
Thank you.
[Questioner] So do we start with, like, any business architecture? Like, do LeanIX inputs or MEGA inputs help start this process capture? Or do we have to load something within our environment, kind of like what he was saying, to capture this, to spell out all the, you know... So if we have a large investment already in business architecture or technical architecture, can we, you know, import that into this, or no?
So, good question. So we will always work out of an event log. Whether that was the result of business architecture, good practices, et cetera, or whether that was done with not so much care, at the end what we will see is the result of that, the event log. And we will do the analysis based on the event log. Of course the conclusions, the analysis and conclusions you get out of that event log, should drive you into better analysis, better architecture in the future. But what we're doing is that we're telling the story backwards. We're getting what was the result of the execution and trying to analyze how we got there. This is a little bit of how we do it. Somebody mentioned to me yesterday reverse engineering. I don't like the term per se, but yeah, it explains a little bit what we do, because you're getting the result and trying to figure out what was happening. Did I answer your question? Thank you.
Hi there, thanks, nice presentation. I wanted to just sort of take a step back for a second and assume that I'm a mortgage application business owner or some other application and try and say okay, I wanna monitor this, this application and understand the process.
Yes.
So there's multiple systems involved, there's time involved, it lapsed time across days perhaps, right?
Yes.
So you mentioned that we use logs as input. The Pega log is one log; there may be mortgage source systems or books of record, systems of record, those kinds of systems. How do you know exactly how many systems are involved in the end-to-end process, how many logs you would need to actually input to the process to get the map, and how challenging is that over time? Just to try and understand that. And there are parts of the process that may not actually have a system. So how do you capture that?
Okay, that's three questions. I'll try to remember them. So, can we connect information from different systems in one view, in one process map? The answer is yes. The only thing that you need to make sure is that you have a common ID. So, to give you an example, we did this for a telco company. So we got alarms coming from the network, opening trouble tickets, and then some of these trouble tickets became work orders to the field. We could tell the story from beginning to end, because the number of the trouble ticket was stamped on the work order and was also stamped on the alarm. So if I use the trouble ticket, even by getting three different logs, I could union them using the tool. You do it all in the tool, just connect the three sources and have one single view that shows the whole life cycle from beginning to end. I think that was question number one. Question number two, how many of those do we need? I'll tell you a story. I was talking to a bank some years ago and they said, this process involves 10 different systems. It's gonna take us forever to get all the logs and import them. My question back to them was okay, from these 10 systems, which ones are really the backbone of your process? They said two, maybe three. I said okay. So we start with those two or three, because what you will see is the backbone of the process. If there are places where you'll see activities that are taking long, maybe they're hiding the complexity of another system that is running there. If they are important for your analysis, then you will go grab that information and put it in. But if they're responsible for, let's say, 2% of your time, we don't care. Right, so using process mining is a pragmatic journey. You really need to know what your objective is about that process, so that you do, you know, the minimum effort that provides the maximum gain. This is the idea. And the third question I forgot now. Oh, systems that are not systems.
So basically you have no logs. Then use task mining. With task mining, what you will have is a piece of software running on the desktops of the people involved, capturing what they're doing, and that will shed light onto those things for which you don't have logs.
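The multi-log union from the telco example above, three sources joined on a common trouble-ticket ID, reduces to a merge-and-sort once every event carries that ID. The logs and event names below are invented for illustration, not the tool's actual data model.

```python
# Three hypothetical logs (network alarms, trouble tickets, field work
# orders). Each event already carries the trouble-ticket number, so they
# can be unioned into one end-to-end event log.
alarms  = [("TT-42", "2023-04-01T02:00", "Alarm raised")]
tickets = [("TT-42", "2023-04-01T02:05", "Ticket opened"),
           ("TT-42", "2023-04-02T16:00", "Ticket closed")]
orders  = [("TT-42", "2023-04-01T09:00", "Work order dispatched")]

def union_logs(*logs):
    """Merge event logs that share a common case ID, ordered per case by time."""
    merged = [event for log in logs for event in log]
    return sorted(merged, key=lambda e: (e[0], e[1]))

timeline = union_logs(alarms, tickets, orders)
# timeline now tells the whole story for TT-42:
# Alarm raised -> Ticket opened -> Work order dispatched -> Ticket closed
```

This is why the common ID is the one hard requirement: without it, the sort key that stitches the three sources together does not exist.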
So an example would be output.
Yes, yes, emails, spreadsheets, yes. They exist, right? It's part of life. Yes, question?
[Attendee] Maybe it's an extension of that question. So it's question number four.
Okay, who's counting?
[Attendee] Okay, so we do also have like multiple application and the request goes to the multiple applications. And in our enterprise we do have an enterprise log where every application writes the log into the same enterprise log.
Okay, you go there.
[Attendee] We did try a similar thing last year, but the challenge, what we had is every application is writing back to the enterprise log. But we don't have a single correlation ID which actually says this log is related to the one case.
So lemme see if I understand. So you have something like a data warehouse where you're storing your logs from different systems.
[Attendee] Yep.
But the logs that are being stored don't have a case ID. Is that what you're saying?
[Attendee] Yes, a single unique ID which relates every log that's coming from the different applications. So how can process mining solve that challenge?
Okay, that's a bit of a challenge. You know, if I were a purist I would say no way Jose, you can't do it, you need a case ID. But there are some tricks that we can sometimes use to have a proxy of a case ID. I'll give an example. We did some work with customer journeys in the past and we didn't have a session ID about the interaction. So we built a proxy at that time that was the customer identification plus the day. So we would have daily journeys of each client, and that was a good proxy for their case. So in your case, what I would say is we can sit and maybe explore a little bit what it is, and some options, to see if we can, you know, compensate for not having a case ID.
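The proxy described here, customer ID plus calendar day standing in for a missing session ID, is a one-line derivation once events at least carry a customer and a timestamp. The data and the `customer@day` format below are illustrative assumptions, not a prescribed convention.

```python
# Events with no session or case ID: only a customer ID and a timestamp.
journey_events = [
    ("cust-9", "2023-03-01T10:00", "Login"),
    ("cust-9", "2023-03-01T10:02", "Transfer"),
    ("cust-9", "2023-03-02T09:00", "Login"),
]

def with_proxy_case_id(events):
    """Derive a proxy case ID as '<customer>@<day>' so each daily journey is one case."""
    # ts[:10] takes the date part of an ISO timestamp, e.g. '2023-03-01'.
    return [(f"{cust}@{ts[:10]}", ts, act) for cust, ts, act in events]

proxied = with_proxy_case_id(journey_events)
# The two March 1st events now share case 'cust-9@2023-03-01',
# while the March 2nd login starts a new case, 'cust-9@2023-03-02'.
```

Any discovery or rework analysis can then run unchanged on the proxied log, at the cost of the assumption that one customer-day equals one journey.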
[Attendee] In that case, how is this process mining different from the data that was already there? Once I have those logs, that gives me a report, something I can visualize. So how is process mining different?
Process mining gives you a different perspective into the execution, because it's aware of the case ID. Doing discovery is not something you can traditionally do without a case ID, or with BI tools. You can get to some level of it, but things like rework, root cause analysis are really hard to do if you're not using a framework like this one. Really hard. I've seen cases of some companies building process mining applications on top of existing BI tools because they want to leverage the engines, but there's always an intelligence that a normal BI wouldn't have. Did I answer the question? Thank you. Question here.
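The case-aware analyses mentioned here, rework for instance, become simple counting once events are grouped into per-case traces. The toy traces below are invented to illustrate the idea, not output from any real tool.

```python
from collections import Counter

# Toy traces: the activity sequence of each case, as produced by grouping
# an event log by case ID. "Review" repeating within one case is rework.
traces = {
    "C-1": ["Submit", "Review", "Approve"],
    "C-2": ["Submit", "Review", "Review", "Review", "Approve"],
}

def rework(traces):
    """Return, per case, the activities executed more than once."""
    return {case: {act: n for act, n in Counter(trace).items() if n > 1}
            for case, trace in traces.items()}

report = rework(traces)
# report shows C-1 is clean, while C-2 ran "Review" three times.
```

This is the sense in which the case ID is the "intelligence" a generic BI view lacks: without grouping rows into cases, there is no trace in which repetition can be counted.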
[Participant] Yeah, I have a question regarding the license. We have a Pega Cloud license. Is it bundled, the process mining tool license, with that license?
So process mining and task mining have a separate license for them. I suggest talking to an AE, your AE about this and we can support them.
[Participant] Is it allowed, is it only available in the cloud, process mining?
On the cloud, it's going to run on the cloud.
[Participant] Right, okay. So we haven't migrated the application to the cloud yet. So the business is actually asking some questions like, you know, there are some work items that take longer to process. We don't know what's happening. So does this process mining tool help?
So the fact that you're not running on the cloud doesn't necessarily prohibit you from using process mining on the cloud. We just need to pull the data from on-prem to the cloud. That's fine. And yes, it can help you understand why it's running slow, so which activities are taking longer, why, you know. That's what it would do.
[Participant] So does the tool give recommendations on how to fix those processes that are taking too long?
To some extent, yes, because it shows you the problems and you can use root cause to understand what's causing them. So now you know where you should fix it, or what's involved with that.
[Participant] Thank you.
Thank you.
[Attendant] No you're good, go ahead.
[Guest] Hi, I have two questions.
Yes.
[Guest] You mentioned about connectors, right? Do you have connectors for mainframe applications? Because these are legacy mainframe applications, right? Are you expecting teams to extract the logs from mainframe and feed it or do you have connectors for mainframe?
We don't have connectors for mainframe right now, so in that case it would need to be some kind of extraction.
[Guest] Okay, got it. The second question.
Having said that, we've had some experience in the past with relational databases running on the mainframe, and we could use JDBC connectors to pull data out. So it depends on what you mean by a connector for mainframe. If it's terminal 3270 emulation, that we don't have, but there are usually options.
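As a sketch of what such an extraction looks like, sqlite3 stands in below for the JDBC/ODBC connection to the mainframe database; the table and column names are invented. The goal is just to project out the three columns a minimal event log needs: case ID, activity, and timestamp:

```python
import sqlite3

# Stand-in for a relational DB on the mainframe; in practice you'd use
# a JDBC/ODBC driver instead of sqlite3. Table/column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit_trail (order_no TEXT, step TEXT, ts TEXT)")
conn.executemany("INSERT INTO audit_trail VALUES (?, ?, ?)", [
    ("A-100", "Created",  "2023-06-01T09:00"),
    ("A-100", "Reviewed", "2023-06-01T11:30"),
])

# Project the minimal event-log shape: case ID, activity, timestamp.
rows = conn.execute(
    "SELECT order_no AS case_id, step AS activity, ts AS timestamp "
    "FROM audit_trail ORDER BY order_no, ts"
).fetchall()
print(rows[0])  # ('A-100', 'Created', '2023-06-01T09:00')
```

The correlation across systems the guest asks about next is exactly the hard part: every system's extract has to resolve to the same case identifier.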
[Guest] Yeah, at the end of the day, a process might involve, say, a mortgage application or a new account opening, right? Many of these systems have existed for a long time and have not been upgraded. So the back office team would use a mainframe screen, and they would be jumping between multiple screens, whether it's by a customer ID or whatever it is. So you need to create that correlation in the flow for the processes. That's where my question was coming from.
Okay, gotcha.
[Guest] The second question: we did use Workforce Intelligence, but the tool looked almost like a personnel monitoring tool. What it was saying is, oh, this user is using Notepad 15% of the time, Pega BPM 20% of the time, and the internet 30% of their time. There was no granular data to interconnect the information. As you mentioned, whether it's the case ID or the customer ID, that granular information was not available in WFI. So has something changed recently where you're saying that now, as part of task mining, it is there?
Things are changing. We're leveraging the infrastructure of WFI and making it more of a task mining solution to match the process mining side. We can do it now. You can define workflows, for example, by identifying initial screens and end screens, and there are other techniques that we're developing. But the idea is, yes, you connect the events, now they become a task, and the task can be analyzed as if it were a process, or as a complement or add-on to a process that is executed.
[Guest] So now, again, I come back to the mainframe screen. If the user is using a mainframe screen, is WFI or WFM going to capture that?
Well, we'll need to talk about it, because we need to understand if the identification of the screen is enough to give us the granularity you need. But if so, yes. To be checked, though.
[Guest] Okay, thank you.
Thank you.
[Spectator] Hi, I'm very excited about this, and I think it's going to change a lot, especially for someone like me who is in marketing but also in technology on the Pega side. My question is about those who are not in the technology space, like my stakeholders. How do you take some of that data, which to us in technology clearly shows the changes needed in the different workflows, and put it into speak that relates to them, that relates to finances and revenue and the pieces that are going to move things forward?
That's a great question. The idea is that now it's easier for you to connect the two worlds, because as you see the journey, you start measuring it: time, cost. If you have costs for events, you can put them here and you will see the cost, right? So that helps bridge the gap. Will it go all the way? I can't promise you that, but it definitely helps bridge that gap, because now it helps tell the story. Your activities are not alone in the world anymore. They're part of a case that has a client behind it, and SLAs, and things that matter to the business. They're not just activities anymore, okay? Question?
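A minimal sketch of that bridge, rolling up elapsed time and cost per case; the per-activity cost figures and names are invented for illustration:

```python
# Hypothetical unit cost per activity, supplied by the business.
COST_PER_ACTIVITY = {"Intake": 5.0, "Review": 40.0, "Approve": 10.0}

# One case's ordered events: (case_id, activity, timestamp_hour).
log = [
    ("c1", "Intake", 0), ("c1", "Review", 2), ("c1", "Approve", 11),
]

def case_summary(events):
    """Total elapsed time and cost for one case's ordered events."""
    elapsed = events[-1][2] - events[0][2]
    cost = sum(COST_PER_ACTIVITY[a] for _c, a, _t in events)
    return {"elapsed_hours": elapsed, "cost": cost}

print(case_summary(log))  # {'elapsed_hours': 11, 'cost': 55.0}
```

Numbers like "11 hours and $55 per case" are the language stakeholders in finance and marketing can act on, which is the gap-bridging described above.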
[Patron] Hi, so my question is: while we take the time to build the connectors, the REST connectors, whatever we need to bring that data into Pega, I'm assuming there's some manual approach, right? Your users are business operators using this platform, so I assume they can upload a spreadsheet with the three indicators on there, right? If they do that, is there any sort of protection against PII data? They can see, oh, my identifier is a social security number, oh, it's a credit card, right? These are business users. So what kind of protections does it have against things like PII?
We're working on introducing some PII checking on the platform for that. It would say, hey, this looks like PII, are you sure you want to do it? So please click here; you're signing off that you know what you're doing. We're doing that.
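A heuristic PII check of that sort could be sketched like this; the patterns, function, and column names are hypothetical, not Pega's actual implementation:

```python
import re

# Heuristic patterns for common PII shapes (hypothetical, not exhaustive).
PII_PATTERNS = {
    "US SSN": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "credit card": re.compile(r"^(?:\d[ -]?){13,16}$"),
}

def flag_pii_columns(rows):
    """Return {column: PII label} for columns whose values all look like PII."""
    flagged = {}
    for col in rows[0].keys():
        values = [row[col] for row in rows]
        for label, pattern in PII_PATTERNS.items():
            if all(pattern.match(v) for v in values):
                flagged[col] = label
    return flagged

sample = [
    {"case_id": "123-45-6789", "activity": "Review"},
    {"case_id": "987-65-4321", "activity": "Approve"},
]
print(flag_pii_columns(sample))  # {'case_id': 'US SSN'}
```

Flagging rather than blocking matches the sign-off flow described: the user is warned that the identifier looks like PII and must explicitly confirm before uploading.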
[Patron] Thank you.
More questions? Okay, you're free to drink now. Thank you so much. It was a pleasure talking to you all.