I’ll be reading the latest Drools JBoss Rules book over the next few days, and will be posting my two cents about it as soon as I finish it. I’ve read its previous edition and loved it, so I’m pretty sure this one will be even better. Stay tuned to hear more about it 🙂
Hello everyone. I’m glad to let you all know that along with Plugtree’s Public Training in London, we will be having two very special guests giving Drools and jBPM 6 workshops on October 23 and 24: Michael Anstis and Mauricio Salatino (a.k.a. Salaboy). We will be updating the agenda pretty soon to accommodate both, but don’t worry: we still have plenty of time to cover all topics and questions. This is just a huge plus!
The topics of the workshops will be related to Drools and jBPM internals, so it will be a great opportunity to ask really complex questions of the very people driving the projects. We’ll be reserving seats for everyone who comes to the training, and both workshops will be free, but seats are limited, so please reserve before they run out!
Hi there, I’m glad to announce the contents of our Drools and jBPM Training. I will be posting the training material here, along with some speaker notes about the training slides. Please feel free to give us feedback about the content and suggest any missing topics to include.
In this post I’m sharing the roadmap for this training release. Since it’s a work in progress, I need some time to get the material published and ready for the community. Please share some feedback about the proposed topics. I will be glad to improve it in order to build a quality training from everyone’s perspective!
Day 1: Theory introduction: All levels
Part 1: Introduction: A brief introduction about the company, the trainer, our position regarding JBoss projects, and our passions. We intend to cover as quickly as possible the needed knowledge and tools to understand and work through the demos given in this course, as well as present the different technologies and theoretical background we’ll cover in the course.
Part 2: Theoretical Background: In this section we’ll explain all relevant information regarding the AI methodologies and principles on which rule and process engines rely. We’ll cover topics from knowledge generation and gathering, how systems work with such knowledge to emulate domain experts, how rules apply to such mechanisms and in what way rules work. We’ll also cover the theory behind BPM (Business Process Management), its history and life cycle. We’ll continue with topics such as planning problems and event processing, and how all these methodologies work together.
Part 3: Drools coverage: Once the theoretical background is covered, we’ll get an initial glimpse at all the tools that Drools provides to cover them. We’ll make a thorough review of the structure of business rules, events and processes. We’ll also see components of the Drools toolkit (for both versions 5.5 and 6.0) that allow us to handle a knowledge repository and how they interact with the rest of the runtime components.
Part 4: Architectural Overview: Once the technical components of the Drools platform are covered, we’ll take some special time to discuss best practices and known frameworks to handle different Drools based projects architecture. We’ll see how the components interact with each other, as well as how they fit in the overall structure of a system. We’ll discuss different approaches to handling rule and process executions, from embedded architectures to SaaS approaches, covering stateful and stateless executions.
Part 5: Practical quickstart: We’ll cover the basic steps to create your first Drools and jBPM project, using both Drools 5.5 and 6.0. We’ll introduce some “Hands On” projects for you to get familiar with the initial APIs, as well as demonstrations of the existing runtime tools to both create and execute knowledge writing little to no code at all.
Day 2: Drools part 1: Medium technical level onward
Part 1: DRL Syntax: We cover all the details of writing DRL files (the syntax in which Drools technical rules are written). We introduce all the components of a rule and all the syntactic sugar that exists: how to define conditions, consequences, attributes, events, and queries. We also cover the API needed to run rules in both Drools 5.5 and 6.0, along with a Hands On exercise to get familiar with it.
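To give a flavour of the syntax before the course, here’s a minimal DRL rule (the Customer fact and the discount logic are invented purely for illustration):

```
rule "Gold customer discount"
    salience 10  // rule attribute: higher salience rules fire first
when
    // condition (LHS): match a gold customer that spent over 100
    $c : Customer( category == "gold", totalAmount > 100 )
then
    // consequence (RHS): plain Java code
    $c.setDiscount( 0.1 );
    update( $c );
end
```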
Part 2: Drools Internal Mechanisms: We’ll study in detail how Drools transforms DRL files into an executable runtime, gaining huge insight into how to make faster and better rules. We’ll cover the different elements in its execution network, as well as the full mechanism that activates rules in a runtime environment.
Part 3: DSL and Decision Tables: We’ll study the other ways the Drools platform allows us to define rule based knowledge: DSLs (domain specific languages) to write rules in natural language, and Decision Tables to write rules of similar structure in an Excel-based spreadsheet. We’ll also do a Hands On exercise to get familiar with the API needed to run them.
Part 4: Drools Fusion Introduction: We’ll get into the details of the event processing mechanism of the Drools platform. We’ll cover all the different temporal operators, handling data entry points, time correlations, and processing events either as isolated events or as streams. We’ll cover the API to run them in both Drools 5.5 and 6.0.
Part 5: OptaPlanner Introduction: We’ll learn the details of planning problems and how to solve them using this Drools component. We’ll see different example problems solved by the tool, as well as how they were implemented. We’ll cover the configuration tutorial and optimization tools, along with a Hands On exercise to see how to run it using Drools 5.5 and 6.0.
Part 6: Debugging, Logging and Tuning: Once all components are thoroughly covered and we are familiar with the APIs, we’ll start looking at different tools to trace rule behaviour as well as process executions. We’ll see different tricks that will allow us to identify errors and unexpected behaviour in our rules and processes just as if they were in our everyday code.
Day 3: Drools part 2: Medium technical level onward
Part 1: Drools Guvnor – KIE Workbench Definitions: In this module we’ll learn about the different versions of the Knowledge Management System (KMS) provided by the Drools platform. We’ll learn to define a model and to author our own knowledge assets, such as rules, DSLs and Test Scenarios.
Part 2: Drools Guvnor for End Users: We’ll introduce the different components of the KMS from a user perspective. We’ll get familiar with all the components in the action toolbar, metadata and version management, as well as package configurations, to learn how to administer our knowledge.
Part 3: Drools Guvnor Administration Topics: We’ll get familiar with managing the KMS from an administrator perspective. We’ll see how to manage backup strategies, user permissions, logs, and verifications.
Part 4: Rules Design Patterns and Best Practices: We’ll cover best practices for creating and maintaining rules, how to manage the data model, best practices for fact classification, rule extensions and exception handling. We’ll also learn about the life cycle of rules, issues regarding performance and scalability, and how to tackle them.
Part 5: What’s new in Drools 6: We’ll cover the different aspects of the new Drools distribution in more detail. We’ll learn the reasons for the changes that have taken effect in the API, as well as the improvements in both functionality and usability of the platform. We’ll learn about PMML importing and Spring and Camel integration.
Day 4: jBPM part1: Medium technical level onward
Part 1: jBPM Introduction: We’ll get a quick introduction to the jBPM project’s history and its integration with Drools. We’ll cover the different jBPM runtime components and their functions, along with an example of a business process. We’ll compare jBPM with other products to see its advantages. We’ll see the structure of a Business Process, how to define one using BPMN2 and how to run it using the jBPM APIs.
Part 2: BPM For Development: We’ll cover the stages of BPM in which development is most involved, as well as the best way to think about the software development work behind running effective Business Processes. We’ll finally see how this allows for a better fit from an End User perspective.
Part 3: jBPM Components Overview: We’ll cover all the different components of the jBPM platform, and how they fit together. We’ll discuss several practices that allow for effective Business Process Management, and how the jBPM components fit best into those best practices.
Part 4: BPMN2 Writing and Using: We’ll learn in more detail about BPMN2, an industry-wide standard for writing processes, its elements, and how jBPM covers it. We’ll see a code demo of jBPM, and how to handle and test different types of processes and subprocesses.
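As a taste of the standard, here’s a minimal BPMN2 process of the kind we’ll be dissecting (element ids are arbitrary, and the surrounding <definitions> wrapper with its namespace declarations is omitted for brevity):

```xml
<process id="com.sample.hello" name="Hello Process" isExecutable="true">
  <startEvent id="_start"/>
  <sequenceFlow id="_flow1" sourceRef="_start" targetRef="_hello"/>
  <scriptTask id="_hello" name="Say hello" scriptFormat="http://www.java.com/java">
    <script>System.out.println("Hello from jBPM");</script>
  </scriptTask>
  <sequenceFlow id="_flow2" sourceRef="_hello" targetRef="_end"/>
  <endEvent id="_end"/>
</process>
```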
Part 5: jBPM Initial APIs: We’ll cover the different API components for both versions 5.4 and 6.0 of the jBPM platform. We’ll analyze the best way to test processes, how to configure its persistence, and how to work with human tasks.
Day 5: jBPM part 2: Medium technical level onward
Part 1: jBPM Basic Example: We’ll work through a few different processes, with tests showing how to interact with external services, handle rule execution from our processes, create processes that react to external events, and pass and check data between tasks of a process instance.
Part 2: jBPM Domain Specific Processes: We’ll discuss in detail the different extensions that BPMN2 process files and the jBPM runtime provide for interacting with external applications. We’ll see the different parameterizations allowed for that interaction and how they bind to the jBPM runtime. We’ll also learn about the difference between immediate and deferred external system interactions, with some examples and a Hands On exercise.
Part 3: jBPM Human Interaction: We’ll discuss in detail how the jBPM runtime deals with tasks performed by people, instead of external systems. We’ll see the special characteristics of human interaction, and existing standards to integrate it into applications, including security injection, task life cycle, and some examples with a Hands On exercise.
Part 4: jBPM Persistence: We’ll discuss in detail what information jBPM persists and how it is persisted. We’ll see how to configure the persistence and how to use it, with examples and a Hands On exercise.
Part 5: jBPM Advanced Topics: We’ll learn special tricks that will allow us to manage persistence from the session perspective with different strategies from the one used by the standard jBPM persistence. We’ll also learn a few tricks for handling external notifications regarding process execution, and for using the process designer to its full potential. Finally, we’ll learn how to debug process executions, which will allow us to identify errors and unexpected behaviour in our processes just as if they were in our everyday code.
Please feel free to write back and propose more topics that interest you. I’m very flexible and I will do my best to shorten the learning time for this amazing project.
I’m preparing a workshop that introduces Business Process Management and prepares you to be immediately effective in using both Drools and jBPM to improve your applications. You will learn how to utilize the different stages of BPM where development is involved, for both versions 5 and 6 of the Drools and jBPM platform.
We’ll discuss several best practices that allow for effective BPM, and how the jBPM components fit into those best practices. We’ll also cover the best way to think about the software development work behind running effective Business Processes and rules, and see how this allows for a better fit from an End User perspective.
Where? London, England, Number 1 Poultry, EC2R 8JR
When? October 21-25, filled with Q&A sessions and workshops, from 10:00 to 18:00, with the last two hours each day dedicated to specialized questions and workshopping.
What will it cover? Full theoretical and technical overview of Drools and jBPM. You can download the full agenda from here
We offer different options depending on your interest:
|Introduction: October 21. Full theoretical introduction to Drools and jBPM components. USD 500.00|
|Drools: October 21-23. Introduction + full technical coverage of Drools. USD 1350.00|
|jBPM: October 21, 24 and 25. Introduction + full technical coverage of jBPM. USD 1350.00|
|Full: October 21 to 25. USD 1728.00 (USD 1929.00 after 9/21/13). Register now and get the early bird pricing!|
Or send us an email at firstname.lastname@example.org for other payment methods. See you in London!
This is a topic I’ve wanted to discuss for a long time. This post shows you how to use a new component I’ve made called jbpm-rollback-api, a configurable module that allows you to roll back persistent jBPM process instances to a previous step. It works by just adding an environment variable, a process event listener and an extra class to the jBPM persistence unit. I’ll discuss it in as much detail as possible in Plugtree’s next Public Training in London; I invite you to register.
When running process instances, especially during the first runs of a new BPM based project, you might get to a point where you wish you had done something different along the steps of your business process (maybe specifying a different value for a variable, or maybe you ended up in a path you didn’t want in the first place). If you can’t change that aspect of the process instance, you need to drop it altogether. This isn’t an issue when running from a JUnit test case, but if you find this problem in a running system, you might not want to drop the process instance and start again, especially when it involves other people’s work. The possibility of rolling back the tasks of a process allows you to return to a previous state of the process instance without having to start over.
The whole idea revolves around the way process instances are persisted in the database today. Here’s a nice explanation of the database persistence if you wish to go into detail about it. In short, there is a small blob of data marshalled into each ProcessInstance row in the database. Since it is overwritten every time the process instance changes, the rollback module takes a copy of that blob and stores it aside to have it available after the process changes. It can’t just copy it whenever it pleases while inside a database-transacted operation, so it does so whenever the session reaches a safe state (that is, after the transaction is finished and the session method is ready to return). And it can’t do it for all live process instances, as that would be too expensive performance-wise. So it does it only for the processes that changed during the last transaction.
Overall the configuration looks like this:
It works through four simple components:
- ProcessSnapshotLogger: This class plays two roles:
- A process event listener to monitor for any process instance changes within a persistent session. If a process instance changes, we mark it as a candidate to persist a snapshot after the knowledge session transaction is done. Also if a process instance is completed, we mark the process snapshots of that instance for deletion to keep the database runtime at a steady size.
- It also works as an interceptor to wait for all safe states in a persistent session, to persist any changed process instances. The interceptor is added as a step every time a command of the command based knowledge session finishes executing.
- ProcessSnapshot: A database entity designed to store the full blob representation of a process instance every time it changes, to be able to reload it on demand afterwards.
- ProcessSnapshotAcceptor: Taking snapshots of process instances can affect performance, so this class provides a very simple interface to configure which process instances to monitor for rollback (none by default). A few implementations are available that let you select all instances of a given process definition ID or all instances, or you can write your own by implementing the method boolean accept(String processId, long processInstanceId).
- ProcessRollback: A utility class to query old snapshots of a process instance and paste them on top of the preexisting process instance. You can use the static method goBack(KieSession ksession, long processInstanceId) to go back one step, or the overloaded static method goBack(KieSession ksession, long processInstanceId, int numberOfSteps) to go back as many steps as you like.
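The snapshot-and-go-back mechanics can be sketched with a toy, in-memory version of the idea (this is my own simplified stand-in for the ProcessSnapshot entity plus the ProcessRollback utility, not the module’s actual code):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Toy model: one marshalled blob per safe state, kept per process instance.
class SnapshotStore {
    private final Map<Long, Deque<byte[]>> snapshots = new HashMap<>();

    // Called by the interceptor once the session reaches a safe state:
    // push the current marshalled process instance state on top.
    void record(long processInstanceId, byte[] marshalledState) {
        snapshots.computeIfAbsent(processInstanceId, k -> new ArrayDeque<>())
                 .push(marshalledState);
    }

    // Stand-in for ProcessRollback.goBack(ksession, id, steps): discard the
    // newest snapshots and return the older state that should be reloaded
    // on top of the live process instance.
    byte[] goBack(long processInstanceId, int steps) {
        Deque<byte[]> stack = snapshots.get(processInstanceId);
        for (int i = 0; i < steps; i++) {
            stack.pop();
        }
        return stack.peek();
    }
}
```

In the real module, of course, the returned blob would be unmarshalled and pasted over the preexisting process instance row, and its live nodes reactivated from it.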
Internally, the rollback recreates the old process instance and reactivates any nodes that were alive at the moment of the snapshot. At the moment (and in this form) it doesn’t send any signals to external systems that steps taken in the process need to be rolled back. However, that would be a good next step for this module: by creating a RollbackableWorkItemHandler interface with a rollbackWorkItem(WorkItem item, WorkItemManager manager) method that people could implement, the rollback could go one step at a time and notify any external systems about a rollback being effected. This is one of the many subjects we would love to discuss and teach about:
We at Plugtree are organizing a Public Training in London on October 21st to 25th at N 1 Poultry. We’ll cover Drools 5, 6, and jBPM 5 and 6 with as much detail as possible. We’ll introduce both the BPM and AI theory, as well as all technical specifications for the different components, in order to take the most advantage of Drools and jBPM. Here’s an overall agenda:
- Day 1: Introduction to all components, for technical and non-technical folks alike
- Days 2 and 3: Focus on Drools components, both versions 5 and 6, from theory to practice in as much detail as possible. We’ll also cover rule writing in most formats and best practices
- Days 4 and 5: Focus on jBPM components, both versions 5 and 6, from theory to practice in as much detail as possible. We’ll also cover BPMN2 writing in as much detail as possible.
You can click here to register. Take advantage of the early bird pricing!
If you wish to start downloading and playing with the code discussed here, you can download it from here
Hello and welcome. In this post, we will try to explain a very useful idea for using knowledge sessions to perform Complex Event Processing (CEP) in a distributed environment for high availability.
Complex Event Processing is a method of tracking and analyzing streams of data from multiple sources, looking at special occurrences of events in time in order to infer events or patterns that suggest more complicated circumstances. The goal is to identify meaningful events and respond to them as quickly as possible.
This set of functionality is easily implemented in Drools thanks to the Drools Fusion component. However, when thinking of running a CEP system in a production environment, we usually find a few requirements that need special consideration:
We want to handle thousands of events at the same time, as fast as possible (so using a persistent session is not something we want to be doing most of the time)
90% of cases require events to be correlated within a very small window of time, from a few seconds to maybe a day or two
We need to find a way to not lose all data if the server fails. However, because we want to make it as fast as possible, using a persistent session might not be the best solution.
In order to meet all these requirements, one possible solution is to have the knowledge session replicated among different nodes, all receiving events from a unified source (such as a JMS broker), like the following image shows:
This approach has one big flaw: all knowledge sessions are independent of each other. That means that either no common data exists between two or more sessions, or, if it does, rules will be fired as many times as there are nodes in the cluster. Rules often reference outside systems where actions are to be performed, and if a complex event is detected, we usually want the rule to be fired just once.
Another approach taken to solve this issue is using a single persistent session shared between many nodes through the database:
This is the safest way to go. When using a persistent session, all the contents of the knowledge session are stored in a database. Every time an event occurs, the session is reloaded from the database, the event is added, the rules are reevaluated and it is persisted again.
The one problem with this type of configuration is, sadly, performance. When you persist the knowledge session, you basically serialize all (or most) of the data that exists in the knowledge session into a blob field in the session info table. Every time you add a new event, the blob grows (unless you configure an object marshalling strategy, but even then some ID data referencing the object is always stored with the session’s blob, even if the rest of the event is stored elsewhere). And when you have thousands or maybe millions of tiny events, all happening really fast, that recording becomes slow.
Also, in most cases, the information about the primordial events (the very basic events that compose a more complex event) is not of much interest unless they actually fire a rule. Most of the time they’re only relevant as a collection. So we decided to implement the following approach to manage our knowledge sessions: we created a non-persistent knowledge session that:
Notifies a group of sessions when a fact is inserted, deleted or updated
Notifies a group of sessions when it has already fired a rule for a specific group of facts
Checks if anyone else in the sessions group has already fired a rule for a specific group of facts before firing it again.
To do so, our class (called HAKieSession) receives a special registry interface (called HAKieSessionRegistry) that implements all those events. We provide a special implementation of that registry class that works using JMS, so our schematic looks like this:
It’s very similar to the first implementation, except that we don’t just listen for events from the JMS broker; we also send events to it, to notify other sessions when a rule has been fired or when the working memory has been altered.
In this way, we strike a balance between performance and high availability. If one of the nodes fails, the others can continue operating. Only one node will fire the rules at a time, but any of the nodes can fire them if the others fail. Only if all nodes fail would you lose event information. However, backup functionality could be added to this implementation to create a database backup whenever a server is idle.
But how is it made?
The implementation of this HAKieSession was rather simple, thanks to the elegant API that the KieSession provides. We first wrote the HAKieSessionRegistry with the following methods:
ruleAlreadyFired: Checks if a rule has already been fired by another node. Returns true if so
fireRuleFired: Creates a record that this particular node has fired a rule, to prevent other nodes from firing it.
WorkingMemoryEventListener methods: To create notifications of each working memory change to other nodes.
Then, we simply created an extension to the stateful knowledge session that registers the registry class, first as a working memory event listener, and then creates an internal AgendaFilter. The AgendaFilter checks when a rule has been fired or not by other nodes (using the registry class) before actually firing the rules.
The following UML diagram shows all these classes together. It’s just that simple!
The only things the HAKieSession class overrides from the original stateful knowledge session are the fireAllRules and fireUntilHalt methods. It merely decorates them to make sure you always use its internal AgendaFilter. If you call them with an AgendaFilter of your own, it generates a composite agenda filter that uses your AgendaFilter’s conditions AND its internal AgendaFilter’s condition.
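The composite-filter idea is easy to sketch: fire a rule only if the caller’s filter accepts it AND no node has fired it for that group of facts yet. Below is a self-contained toy version (the interfaces are simplified stand-ins for org.kie.api.runtime.rule.AgendaFilter and our HAKieSessionRegistry; in the real JMS-backed registry the check-and-record step is coordinated across nodes, while here a local set stands in for it):

```java
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in for org.kie.api.runtime.rule.AgendaFilter
interface RuleFilter {
    boolean accept(String ruleName, long factGroupId);
}

// Simplified stand-in for HAKieSessionRegistry: remembers which
// rule/fact-group combinations have already been fired in the cluster.
class SessionRegistry {
    private final Set<String> fired = new HashSet<>();

    boolean ruleAlreadyFired(String ruleName, long factGroupId) {
        return fired.contains(ruleName + "#" + factGroupId);
    }

    void fireRuleFired(String ruleName, long factGroupId) {
        fired.add(ruleName + "#" + factGroupId);
    }
}

// The internal HA filter: AND the user's filter (if any) with the
// "has anyone fired this already?" check, then record the firing.
class HAAgendaFilter implements RuleFilter {
    private final SessionRegistry registry;
    private final RuleFilter userFilter; // may be null

    HAAgendaFilter(SessionRegistry registry, RuleFilter userFilter) {
        this.registry = registry;
        this.userFilter = userFilter;
    }

    @Override
    public boolean accept(String ruleName, long factGroupId) {
        if (userFilter != null && !userFilter.accept(ruleName, factGroupId)) {
            return false;
        }
        if (registry.ruleAlreadyFired(ruleName, factGroupId)) {
            return false; // another node got there first
        }
        registry.fireRuleFired(ruleName, factGroupId);
        return true;
    }
}
```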
Perhaps the most complex component is the actual implementation of the HAKieSessionRegistry interface we created (JMSKieSessionRegistry). It handles both the initialization of and connection to a Topic for exchanging all rule-firing and working-memory-change information. It also implements MessageListener to read information sent through JMS by other instances of the JMSKieSessionRegistry class on other nodes.
All the code is available through these URLs to anyone who wishes to download it and try it:
I hope you find it useful. Feel free to contact us to get more information, or to share your experience!
Hello and welcome to a post in which I intend to show you how to create your own implementation of Drools and jBPM persistence. I’ve worked on an Infinispan-based persistence scheme for Drools objects and I learnt a lot in the process. It’s my intention to give you a few pointers if you wish to do something of the sort.
If you’re reading this, you probably already have a “why” for redefining the persistence scheme that Drools uses, but it’s good to go over some good reasons to do something like this. The most important one is that the JPA persistence scheme designed for Drools may not meet your needs. Some of the most common reasons I’ve found are these:
The given model is not enough for my design: The objects created to persist the Drools components (sessions, process instances, work items and so on) are kept as small as possible to allow the best performance on the database, and most of the operational data is stored in byte arrays mapped to blob objects. This scheme is enough for the Drools and jBPM runtime to function, but it might not be enough for your domain. You might want to keep the runtime information in a scheme that is easier to query from outside tools, and to do that you would need to enrich the data model, or even create one of your own.
The persistence I’m using is not compatible with JPA: There are a lot of persistence implementations out there that no longer use databases as we once knew them (distributed caches, key-value stores, NoSQL databases), and the model usually needs extra mappings and special treatment when persisting to such storages. For those cases, JPA is sometimes not our cup of tea.
I need to load special entities from different sources every time a Drools component is loaded: When we have complex objects and/or external databases, sometimes we want new models to relate in a special way to the objects we have. Maybe we want to make sure our sessions are bound to our model in a special way because it makes sense to our business model. To do so we would have to alter the model.
In order to make our own persistence scheme for our sessions, we need to understand clearly how the JPA scheme is built, to use it as a template to build our own. This class diagram shows how the JPA persistence scheme for the knowledge session is implemented:
Looks complicated, right? Don’t worry. We’ll go step by step to understand how it works.
First of all, you can see that we have two implementations of StatefulKnowledgeSession (or KieSession, if you’re using Drools 6). The one that does all the “drools magic” is StatefulKnowledgeSessionImpl, and the one we will be using is CommandBasedStatefulKnowledgeSession. It has nothing to do with persistence itself, but it helps a lot with it by wrapping every method call in a command object and delegating its execution to a command service. So, for example, if you call the fireAllRules method on this type of session, it will create a FireAllRulesCommand object and hand it to another class to execute.
This command based implementation allows us to do exactly what we need to implement persistence in a Drools environment: it lets us run actions before and after every method call made to the session. That’s where the SingleSessionCommandService class comes in handy: this command service contains a StatefulKnowledgeSessionImpl and a PersistenceContextManager. Every time a command has to be executed, this class creates or loads a SessionInfo object and tells the persistence context to save it with all the state of the StatefulKnowledgeSessionImpl.
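The pattern is worth a small sketch: every session method becomes a command object, and the command service wraps each execution with a load step before and a persist step after. Everything below is a deliberately simplified stand-in for the real CommandBasedStatefulKnowledgeSession / SingleSessionCommandService machinery, not the actual Drools classes:

```java
// Stand-in for the command interface: each session operation becomes an object.
interface Command<T> {
    T execute(ToySession session);
}

// Stand-in for StatefulKnowledgeSessionImpl: holds the actual engine state.
class ToySession {
    private int pendingRules = 3;

    int fireAllRules() {
        int fired = pendingRules;
        pendingRules = 0;
        return fired;
    }
}

class FireAllRulesCommand implements Command<Integer> {
    @Override
    public Integer execute(ToySession session) {
        return session.fireAllRules();
    }
}

// Stand-in for SingleSessionCommandService: surrounds every command with
// the persistence steps.
class ToyCommandService {
    private final ToySession session = new ToySession();

    <T> T execute(Command<T> command) {
        // 1. begin transaction, load/refresh state from the SessionInfo blob
        T result = command.execute(session);
        // 2. marshal the session back into SessionInfo, commit transaction
        return result;
    }
}

// Stand-in for CommandBasedStatefulKnowledgeSession: keeps the normal
// session API while every call is routed through the command service.
class ToyCommandBasedSession {
    private final ToyCommandService service = new ToyCommandService();

    int fireAllRules() {
        return service.execute(new FireAllRulesCommand());
    }
}
```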
That’s the most complicated part: the one that implements the session persistence. Persistence of pretty much everything else is done easily through a set of given interfaces that provide methods describing how to load everything else related to a session (process instances, work items and signals). As long as you create a proper manager and its factory, you can delegate to them to store anything anywhere (or do anything you want, for that matter).
So, after seeing all the components, it’s a good time to start thinking of how to create our own implementation. For this example, we’ve created an Infinispan based persistence scheme and we will show you all the steps we took to do it.
Step 1: (re)define the model
Most of the time, when we want to persist Drools objects our own way, we want to do it with a twist of our own. Even if we don’t wish to change the model, we might need to add special annotations to make it work with our storage framework. Another reason might be that we want to store all facts in a special way to cross-query them with some other legacy system. You can literally do this redefinition any way you want, as long as you understand that whatever model you create, the persistence scheme will serialize and deserialize it every time you call a method on the knowledge session, so always try to keep it simple.
Here’s the model we created for this case:
Nothing too fancy, just a flattened model for all things drools related. We weren’t too imaginative with this model, because we just wanted to show you that you can change it if you want to.
One thing to notice in this model is that we are still saving all the internal data of these objects in pretty much the same way as for the JPA persistence. The only difference is that JPA stores it in a Blob, and we store it in a Base64-encoded string. If you wish to change the way that byte array is generated and read, you have to create your own implementations of these interfaces:
org.kie.api.marshalling.Marshaller for knowledge sessions
org.jbpm.marshalling.impl.ProcessInstanceMarshaller for process instances
But providing an example of that would take way too much time and perhaps even a whole book to explain, so we’ll skip it 🙂
Step 2: Implementing the PersistenceContext
For some cases, redefining the PersistenceContext and the PersistenceContextManager is enough to implement all your persistence requirements. The PersistenceContext is an object in charge of persisting work items and session objects, implementing methods to persist them, query them by ID and remove them from a particular storage implementation. The PersistenceContextManager is in charge of creating the PersistenceContext, either once for the whole application or on a per-command basis. The command service will use it to persist the session and its objects when needed.
In our case we implemented a PersistenceContext and a PersistenceContextManager using an Infinispan cache as storage. The different PersistenceContextManager instances have access to all configuration objects through the Environment variable. We reused the keys already defined in Environment to store our Infinispan related objects:
EnvironmentName.ENTITY_MANAGER_FACTORY is used to store an Infinispan based CacheManager
EnvironmentName.APP_SCOPED_ENTITY_MANAGER and EnvironmentName.CMD_SCOPED_ENTITY_MANAGER will point to an Infinispan Cache object.
You can see that code here:
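The wiring of those keys looks roughly like the following sketch (the cache name and the Infinispan configuration file are assumptions of mine, error handling is omitted, and you should check the actual code for the real details):

```java
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.kie.api.KieServices;
import org.kie.api.runtime.Environment;
import org.kie.api.runtime.EnvironmentName;

// Build an Infinispan cache manager and hand it to the KIE Environment
// under the keys normally used for JPA's EntityManagerFactory/EntityManager.
DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");
Cache<String, Object> cache = cacheManager.getCache("jbpm-entities");

Environment env = KieServices.Factory.get().newEnvironment();
env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, cacheManager);
env.set(EnvironmentName.APP_SCOPED_ENTITY_MANAGER, cache);
env.set(EnvironmentName.CMD_SCOPED_ENTITY_MANAGER, cache);
```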
At this point we have completed some very important steps toward redefining our Drools persistence. Now we need to know how to configure our knowledge sessions to work with these components.
Step 3: Creating managers for our work items, process instances and signals
Now that we have our persistence contexts, we need to teach the session how to use them properly. The knowledge session has a few configurable managers that allow you to modify or extend its default behaviour. These managers are:
- org.kie.api.runtime.process.WorkItemManager: It manages when a work item is executed, connects it with the proper handler, and notifies the process instance when the work item is completed.
- org.jbpm.process.instance.event.SignalManager: It manages when a signal is sent to or from a process. Since process instances might be passivated, it needs to load them back from storage before delivering the signal.
- org.jbpm.process.instance.ProcessInstanceManager: It manages the actions to be taken when a process instance is created, started, modified or completed.
The JPA implementations of these interfaces already work with a persistence context manager, so most of the time you won't need to extend them. However, with Infinispan we have to make sure the process instance is persisted more often than with JPA, so we had to implement them differently.
Once you have these implementations, you will need to create a factory for each type of manager. The factory interface names are the same as the managers', plus the suffix "Factory". Each factory receives a knowledge session as a parameter, from which you can get the Environment object and all other configurations.
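That factory pattern can be sketched in a self-contained way (the type names here are simplified stand-ins, not the real Drools/jBPM interfaces):

```java
public class ManagerFactoryDemo {

    // Stand-in for the knowledge session the factories receive.
    static class KnowledgeSession {
        final String environmentFlavour;
        KnowledgeSession(String environmentFlavour) {
            this.environmentFlavour = environmentFlavour;
        }
    }

    interface SignalManager { String describe(); }

    // Factory contract: same name as the manager plus the "Factory" suffix,
    // receiving the session so it can read the Environment and configuration.
    interface SignalManagerFactory {
        SignalManager createSignalManager(KnowledgeSession session);
    }

    // An Infinispan-flavoured implementation would persist more eagerly here.
    static class CacheSignalManagerFactory implements SignalManagerFactory {
        public SignalManager createSignalManager(KnowledgeSession session) {
            return () -> "signal manager for " + session.environmentFlavour;
        }
    }

    public static void main(String[] args) {
        SignalManagerFactory factory = new CacheSignalManagerFactory();
        SignalManager manager =
                factory.createSignalManager(new KnowledgeSession("infinispan"));
        System.out.println(manager.describe());
    }
}
```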
Step 4: Configuring the knowledge session
Now that we have our different managers created, we need to tell our knowledge sessions to use them. To do so, you create a CommandBasedStatefulKnowledgeSession instance backed by a SingleSessionCommandService instance. The SingleSessionCommandService, as its name suggests, is a class that executes commands against one session at a time. Its constructor receives all the parameters needed to create a proper session and execute commands against it in a way that makes it persistent. Those parameters are:
KieBase: the knowledge base with the knowledge definitions for our session runtime.
KieSessionConfiguration: Where we configure the manager factories to create and dispose of work items, process instances and signals.
Environment: A bag of variables for any other purpose, where we will configure our persistence context manager objects.
sessionId (optional): If present, this parameter makes the service look for an already existing session in the storage. Otherwise, a new session is created.
Also, in our example we're using Infinispan, which is not a reference-based storage but a value-based one. This means that when you tell Infinispan to store a value, it stores a copy of it, not a reference to the actual object. Some parts of Drools persistence assume a reference-based storage: you can tell the framework to persist an object, change its attributes, and see those changes stored in the database after committing the transaction. With Infinispan this doesn't happen, so you have to explicitly update the cached values after each command execution finishes. Luckily for us, the SingleSessionCommandService allows us to do this by implementing an Interceptor.
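To make those copy semantics concrete, here's a small self-contained demo (no Infinispan involved; a map that stores copies plays its role) showing why changes made after a put() must be explicitly re-stored:

```java
import java.util.HashMap;
import java.util.Map;

public class ValueBasedStoreDemo {

    static class Counter {
        int value;
        Counter(int value) { this.value = value; }
        Counter copy() { return new Counter(value); }
    }

    // Simulates a value-based store (like Infinispan with marshalling):
    // put() stores a copy, so later mutations of the original are invisible.
    static class ValueStore {
        private final Map<String, Counter> data = new HashMap<>();
        void put(String key, Counter c) { data.put(key, c.copy()); }
        Counter get(String key) { return data.get(key).copy(); }
    }

    public static void main(String[] args) {
        ValueStore store = new ValueStore();
        Counter counter = new Counter(1);
        store.put("session", counter);

        counter.value = 42; // mutate AFTER storing, like a command changing the session

        // The store still holds the old value: the change was not propagated...
        assert store.get("session").value == 1;

        // ...so, as with our interceptors, we must explicitly re-store afterwards.
        store.put("session", counter);
        assert store.get("session").value == 42;
        System.out.println("re-store after mutation required: ok");
    }
}
```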
Interceptors are basically your own command services that wrap the default one, letting you add behaviour before or after each command execution. Here's a couple of diagrams to explain how it works:
As you can see, the SingleSessionCommandService delegates to a command service instance that actually invokes the command's execute method. And because interceptors extend the command service, we can chain as many as we want, giving us something like the following sequence diagram every time a command is executed:
In our case, we created a couple of these interceptors and added them to the SingleSessionCommandService. One makes sure any changes done to a session object are stored after finishing the command. The other one allows us to do the same with process instance objects.
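Here's a minimal self-contained sketch of that interceptor idea (the Command and CommandService interfaces below are simplified stand-ins for the Drools ones):

```java
import java.util.ArrayList;
import java.util.List;

public class InterceptorChainDemo {

    // Trimmed-down stand-ins for Drools' Command / CommandService contracts.
    interface Command<T> { T execute(); }
    interface CommandService { <T> T execute(Command<T> command); }

    // The innermost service actually runs the command,
    // like SingleSessionCommandService does.
    static class BaseCommandService implements CommandService {
        public <T> T execute(Command<T> command) { return command.execute(); }
    }

    // An interceptor wraps another CommandService and adds behaviour
    // before and/or after delegating.
    static abstract class Interceptor implements CommandService {
        protected CommandService next;
    }

    // Records what it does so we can see the chain order; a real interceptor
    // would re-persist the session (or process instances) here instead.
    static class PersistAfterInterceptor extends Interceptor {
        final String name;
        final List<String> log;
        PersistAfterInterceptor(String name, List<String> log) {
            this.name = name;
            this.log = log;
        }
        public <T> T execute(Command<T> command) {
            T result = next.execute(command);   // run the command first
            log.add(name + ": persisted");      // then flush changes to the cache
            return result;
        }
    }

    // Builds the chain: each added interceptor wraps the current head.
    static CommandService chain(CommandService base, Interceptor... interceptors) {
        CommandService head = base;
        for (Interceptor i : interceptors) {
            i.next = head;
            head = i;
        }
        return head;
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        CommandService service = chain(new BaseCommandService(),
                new PersistAfterInterceptor("process-instances", log),
                new PersistAfterInterceptor("session", log));
        Integer result = service.execute(() -> 40 + 2);
        assert result == 42;
        // Both interceptors ran after the command, innermost first.
        System.out.println(log);
    }
}
```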
Overall, this is how we need to create our knowledge sessions at this point to actually use Infinispan as the persistence mechanism:
Complicated, right? Don’t worry. There’s yet another couple of classes to make it easier to configure.
Step 5: Creating our own initialization service
Yes, we could write that ton of code every time we want to create our own customized persistent knowledge sessions. It’s a free world (for the most part). But you can also wrap this implementation in a single class with two exposed methods:
One to create a new session
One to load a previously existing session
This class creates all the configuration internally, merging it whenever you wish to change one or more things. Drools provides an interface to serve as a contract for this, called org.kie.api.persistence.jpa.KieStoreServices.
We created our own implementation of this interface and also a static class to access it, called InfinispanKnowledgeService. This allows us to create the session like this:
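(The actual call isn't shown in this excerpt; the following self-contained sketch just mimics the shape of such a facade. The real InfinispanKnowledgeService also takes a KieBase, a KieSessionConfiguration and an Environment, which are omitted here:)

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

public class KnowledgeServiceDemo {

    // Minimal stand-in for a persisted knowledge session.
    static class Session {
        final long id;
        Session(long id) { this.id = id; }
    }

    // Facade exposing exactly the two methods described above:
    // one to create a new session, one to load an existing one.
    static class KnowledgeService {
        private static final Map<Long, Session> storage = new HashMap<>();
        private static final AtomicLong ids = new AtomicLong(0);

        static Session newStatefulKnowledgeSession() {
            Session s = new Session(ids.incrementAndGet());
            storage.put(s.id, s); // persist the new session immediately
            return s;
        }

        static Session loadStatefulKnowledgeSession(long sessionId) {
            Session s = storage.get(sessionId);
            if (s == null) {
                throw new IllegalArgumentException("No session with id " + sessionId);
            }
            return s;
        }
    }

    public static void main(String[] args) {
        Session created = KnowledgeService.newStatefulKnowledgeSession();
        Session loaded = KnowledgeService.loadStatefulKnowledgeSession(created.id);
        assert created == loaded;
        System.out.println("created and reloaded session " + created.id);
    }
}
```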
Drools persistence can seem complicated to understand and get working, let alone to implement in your own way. However, I hope this post demystifies it a bit for those who need to implement Drools persistence in a special way, or were wondering whether it is even possible to do so with anything other than JPA.
Also, if you wish to see the modifications done to make it work, see these three pull requests:
A feature request to add these features to Drools is specified in this JIRA ticket. Feel free to upvote it if you wish to see it become part of the core Drools project!