So... now you have integrated your favorite LLM. Good for you!
A First Look at Our Wins and Challenges
We have finished integrating our solution with a general-purpose LLM service.
Fast fact: LLMs open up a new way for our users to interact with our process analytics platform, letting them ask questions in natural language about different aspects of a process. See the image below for one of our test dialogues.
If you take the time, you’ll see that some answers seem okay, but others do not. In this case, the LLM was able to answer how many cases of the uploaded process meet an SLA, but it failed to provide the right data when asked about the executions that would not meet the expected SLA (the actual case average is 59 hours, but according to the LLM it is almost sixty thousand hours).
So, new challenges arise:
Getting the LLM to understand what kind of data and analytics it is dealing with, which can essentially be achieved through prompt refinement. In other words, the key is getting the context right.
Feeding the LLM the right analytics information, so that it has everything at its disposal to provide answers that are as accurate as possible (beyond the occasional hallucination).
Dealing with potentially long response times from third-party services, which can hurt the user experience.
Related to the latter, assessing what the associated costs will be in the future, as features that are initially appealing might not be financially sustainable in the long run.
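To illustrate the first two points, here is a minimal sketch of the general idea: instead of letting the LLM compute metrics on its own (and hallucinate figures like that sixty-thousand-hour average), the platform precomputes the trusted numbers and injects them into the prompt as context. All names here (`build_prompt`, the toy durations) are hypothetical and not from our actual codebase.

```python
# Hypothetical sketch: ground the LLM with precomputed process analytics
# rather than asking it to derive the numbers itself.
from statistics import mean

def build_prompt(question: str, case_durations_h: list[float], sla_h: float) -> str:
    """Precompute trusted SLA metrics and prepend them as context."""
    met = sum(1 for d in case_durations_h if d <= sla_h)
    missed = len(case_durations_h) - met
    avg_missed = mean(d for d in case_durations_h if d > sla_h) if missed else 0.0
    context = (
        f"Process analytics (ground truth):\n"
        f"- Cases meeting the {sla_h}h SLA: {met}\n"
        f"- Cases missing the SLA: {missed}\n"
        f"- Average duration of cases missing the SLA: {avg_missed:.1f}h\n"
    )
    # The model is instructed to answer only from the injected figures.
    return f"{context}\nAnswer using only the figures above.\nQuestion: {question}"

durations = [40, 55, 59, 63, 70]  # toy case durations, in hours
print(build_prompt("How many cases miss the SLA?", durations, sla_h=60))
```

The prompt produced this way can then be sent to whatever LLM service you are integrating; the point is that the arithmetic happens in the analytics engine, where it is deterministic and cheap, not in the model.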
Ehud Reiter’s blog has a great entry describing these kinds of problems. By the way, you should follow him: with an extensive career spanning both academia and industry, I dare say there is no one more knowledgeable about these topics.
Thankfully, we have some ideas on how to overcome these barriers. More news to come.