BPMN Timers, Red Hat Process Automation Manager, and PostgreSQL 11

Jeffrey Taylor
14 min read · Aug 22, 2020

Introduction:

When Business Process Model and Notation (BPMN) is used to specify business processes, timers are used in two ways: (1) intermediate timer events, used as part of the regular process flow to introduce delays, and (2) boundary timer events, used to move the process flow down an alternative path when the timer expires¹. When deploying Red Hat Process Automation Manager (PAM) on Red Hat JBoss Enterprise Application Platform (JBoss EAP), the timers are implemented using Enterprise Java Beans (EJBs) and stored in a relational database.

Out of the box, PAM uses an ephemeral H2 database, which does not preserve the state of timers across restarts. For many reasons, including persisting timer state, production deployments need to be configured to use a relational database such as PostgreSQL, Oracle, IBM DB2, Microsoft SQL Server, MySQL, or MariaDB. Further, when a kieserver cluster is used, each kieserver needs unique storage so that timer events are delivered to the appropriate kieserver.

This article details how to deploy Red Hat Process Automation Manager 7.8 with a 3-node kieserver cluster and PostgreSQL 11 so that EJB timer events are delivered to the appropriate kieserver.

Potential problems, error messages, and solutions:

There are a number of potential configuration problems. The solutions are documented in this article.

Potential Problem 1: not using unique RDBMS storage for each kieserver in the cluster. In this case, timer events are delivered to the kieservers at random, resulting in errors being thrown by one kieserver:

23:53:29,374 WARN [org.jbpm.services.ejb.timer.EJBTimerScheduler] (EJB default - 1) Execution of time failed due to No scheduler found for com.myspace:MasterProject:1.0.1-SNAPSHOT-timerServiceId: java.lang.RuntimeException: No scheduler found for com.myspace:MasterProject:1.0.1-SNAPSHOT-timerServiceId
at org.jbpm.persistence.timer.GlobalJpaTimerJobInstance.call(GlobalJpaTimerJobInstance.java:74)
at org.jbpm.persistence.timer.GlobalJpaTimerJobInstance.call(GlobalJpaTimerJobInstance.java:48)
at org.jbpm.services.ejb.timer.EJBTimerScheduler.executeTimerJobInstance(EJBTimerScheduler.java:128)
at org.jbpm.services.ejb.timer.EJBTimerScheduler.transaction(EJBTimerScheduler.java:182)
at org.jbpm.services.ejb.timer.EJBTimerScheduler.executeTimerJob(EJBTimerScheduler.java:120)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)

In the meantime, the timer event is never delivered to the proper kieserver, potentially resulting in hung process instances.

Solution: See the section “Use unique network ports and RDBMS storage for EJB Timers”, below.

Potential Problem 2: Using a singleton runtime strategy.

Do not use the Singleton runtime strategy with the EJB Timer Scheduler (the default scheduler in KIE Server) in a production environment. This combination can result in Hibernate problems under load. Per process instance runtime strategy is recommended if there is no specific reason to use other strategies. For more information about this limitation, see Hibernate issues with Singleton strategy and EJBTimerScheduler.

22:50:31,090 WARN [org.jbpm.runtime.manager.impl.SingletonRuntimeManager] (default task-5) Singleton with EJB Timer Service is not recommended as it’s not stable under load

See: https://issues.redhat.com/browse/JBPM-5398

It turned out to be caused by a limitation of the singleton strategy when used with CMT. The problem is a race condition that can happen when executing on the same ksession where one of the threads is managed with CMT. The CMT thread leaves the synchronized execute method of the ksession before completing the transaction (and before cleaning up resources such as the Hibernate entity manager). At the same time, another thread can gain access to the ksession and start working on another command, stealing the first thread's resources and leaving them "corrupted" and no longer usable.

In general, this is a singleton runtime strategy limitation and in most cases it should be avoided, although with limited load it should work as expected. The reason it failed in tests (again, not always) is that the timers fire at very nearly the same rate as the requests coming to the server, which increases the likelihood of the race condition.

It was confirmed that switching to per process instance eliminates the problem.

02:00:09,073 ERROR [org.hibernate.AssertionFailure] (EJB default - 4) HHH000099: an assertion failure occurred (this may indicate a bug in Hibernate, but is more likely due to unsafe use of the session): org.hibernate.AssertionFailure: possible nonthreadsafe access to session

02:00:09,073 ERROR [org.hibernate.internal.ExceptionMapperStandardImpl] (EJB default - 4) HHH000346: Error during managed flush [possible nonthreadsafe access to session]

02:00:09,071 ERROR [org.jbpm.runtime.manager.impl.error.ExecutionErrorHandlerImpl] (default task-21) Unexpected error during processing : org.hibernate.AssertionFailure: possible nonthreadsafe access to session

02:00:09,075 INFO [org.jboss.as.ejb3.timer] (EJB default - 4) WFLYEJB0021: Timer: [id=2fd3c4a2-ad74-4acc-99b2-949b0d7c7d1a timedObjectId=kie-server.kie-server.EJBTimerScheduler auto-timer?:false persistent?:true timerService=org.jboss.as.ejb3.timerservice.TimerServiceImpl@63185c32 previousRun=Thu Aug 20 02:00:08 EDT 2020 initialExpiration=Thu Aug 20 02:00:08 EDT 2020 intervalDuration(in milli sec)=0 nextExpiration=null timerState=CANCELED info=EjbTimerJob [timerJobInstance=GlobalJpaTimerJobInstance [timerServiceId=com.boundarytimers:MasterProject:1.0.2-SNAPSHOT-timerServiceId, getJobHandle()=EjbGlobalJobHandle [uuid=1811-5231-679]]]] will be retried

02:00:09,075 INFO [org.jboss.as.ejb3.timer] (EJB default - 4) WFLYEJB0024: Timer is not active, skipping retry of timer: [id=2fd3c4a2-ad74-4acc-99b2-949b0d7c7d1a timedObjectId=kie-server.kie-server.EJBTimerScheduler auto-timer?:false persistent?:true timerService=org.jboss.as.ejb3.timerservice.TimerServiceImpl@63185c32 previousRun=Thu Aug 20 02:00:08 EDT 2020 initialExpiration=Thu Aug 20 02:00:08 EDT 2020 intervalDuration(in milli sec)=0 nextExpiration=null timerState=CANCELED info=EjbTimerJob [timerJobInstance=GlobalJpaTimerJobInstance [timerServiceId=com.boundarytimers:MasterProject:1.0.2-SNAPSHOT-timerServiceId, getJobHandle()=EjbGlobalJobHandle [uuid=1811-5231-679]]]]

Solution: see “Per process instance” runtime strategy, below.

Potential Problem 3: Not using JBoss EAP Patch 7.3.2

This patch includes the fix for JBEAP-19539 (WFLY-13386): hung process instances and the associated server.log WARN "Failed to reinstate timer 'kie-server.kie-server.EJBTimerScheduler'".

2020-04-15 16:43:57,733 WARN [org.jboss.as.ejb3.timer] (Timer-1) WFLYEJB0161: Failed to reinstate timer 'kie-server.kie-server.EJBTimerScheduler' (id=33170e5f-3b34-4503-8796-9b5e6871c074) from its persistent state: java.lang.NullPointerException
at org.jboss.as.ejb3.timerservice.persistence.database.DatabaseTimerPersistence$RefreshTask.run(DatabaseTimerPersistence.java:851)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)

Solution: Install JBoss EAP Patch 7.3.2 as described below.

Configuration used in this article:

  • 2 servers running Red Hat Enterprise Linux 8.2: one to host the PostgreSQL server and another to run Red Hat Process Automation Manager (acceptable hosts include physical servers or KVM instances)
  • LAN connections
  • A web browser such as Google Chrome
  • In this publication, I refer to the hostnames postgresql11 and rhpam78
  • For the PostgreSQL installation, I refer to the database rhpamdatabase, user rhpamuser, and password rhpampassword

Simple example of a boundary timer event [2]:

PostgreSQL 11 installation:

Install PostgreSQL 11 Server (Database Software) on the PostgreSQL (Hardware) Server

(Thanks to https://computingforgeeks.com/how-to-install-postgresql-11-on-centos-rhel-8/)

sudo dnf -y install \
  https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo dnf module disable postgresql
sudo dnf -y install postgresql11-server postgresql11
sudo /usr/pgsql-11/bin/postgresql-11-setup initdb
sudo systemctl enable --now postgresql-11
sudo systemctl status postgresql-11
sudo passwd postgres
sudo su - postgres
psql -c "alter user postgres with password 'rhpampassword'"

Install PostgreSQL Client on the PAM server

sudo dnf install java-11-openjdk
sudo dnf -y install \
  https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo dnf module disable postgresql
sudo dnf -y install postgresql11

Resolve Network Issues

BEFORE:

# PostgreSQL is listening for connections on localhost (127.0.0.1)
[postgres@postgresql11 ~]$ netstat -anlt | grep 5432
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN
tcp6 0 0 ::1:5432 :::* LISTEN
[user@rhpam78 ~]$ psql -h postgresql11 -U postgres
psql: could not connect to server: No route to host
Is the server running on host "postgresql11" (192.168.0.12) and accepting
TCP/IP connections on port 5432?

FIX:

1. Edit /var/lib/pgsql/11/data/pg_hba.conf
-- change this line from --
host all all 127.0.0.1/32 ident
-- to --
host all all 192.168.0.0/24 md5
# NOTES:
# (1) Review the text in this file and decide what details make
# sense for your deployment. (2) After a reconfiguration, I found
# that I also needed to update the "IPv6 local connections" entry.
2. Edit /var/lib/pgsql/11/data/postgresql.conf
-- Add this line --
listen_addresses = '*'
# NOTE:
# what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
3. Firewall configuration
sudo firewall-cmd --add-port 5432/tcp --permanent
sudo firewall-cmd --reload
sudo systemctl stop postgresql-11.service
sudo systemctl start postgresql-11.service

VERIFY:

[root@postgresql11 ~]# netstat -anlt | grep 5432
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN
tcp6 0 0 :::5432 :::* LISTEN

[root@4-postgresql11 ~]# firewall-cmd --list-ports
5432/tcp
[user@rhpam78 ~]$ psql -h postgresql11 -U postgres
Password for user postgres:
psql (11.8)
Type "help" for help.

postgres=#

Install Red Hat Process Automation Manager

Install PAM

sudo dnf install git
git clone https://github.com/jbossdemocentral/rhpam7-install-demo
cd rhpam7-install-demo

Read and follow the directions at https://github.com/jbossdemocentral/rhpam7-install-demo

$ ls -l installs
total 1296500
-rw-r--r--. 1 user user 206430757 Aug 11 16:46 jboss-eap-7.3.0.zip
-rw-rw-r--. 1 user user 463 Aug 11 16:47 README
-rw-r--r--. 1 user user 784090695 Aug 11 16:46 rhpam-7.8.0-add-ons.zip
-rw-r--r--. 1 user user 229552084 Aug 11 16:46 rhpam-7.8.0-business-central-eap7-deployable.zip
-rw-r--r--. 1 user user 107530273 Aug 11 16:46 rhpam-7.8.0-kie-server-ee8.zip
./init.sh

Install JBoss EAP Patch 7.3.2

This patch contains many fixes, but in the context of this article we are specifically interested in the fix for JBEAP-19539 (WFLY-13386): hung process instances and the associated server.log WARN "Failed to reinstate timer 'kie-server.kie-server.EJBTimerScheduler'".

For more details, see the JBoss Enterprise Application Platform 7.3 Update 2 Release Notes

Install Patch 7.3.2

cd target/jboss-eap-7.3
export EAP_HOME=`pwd`
$EAP_HOME/bin/add-user.sh jbossAdmin jbossPassword
$EAP_HOME/bin/standalone.sh

Download Red Hat JBoss Enterprise Application Platform 7.3 Update 02 from Red Hat's Software Downloads.

$EAP_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] patch apply ~/Downloads/jboss-eap-7.3.2-patch.zip
{
"outcome" : "success",
"response-headers" : {
"operation-requires-restart" : true,
"process-state" : "restart-required"
}
}
[standalone@localhost:9990 /] reload
Failed to establish connection in 6032ms
[disconnected /]

Enable PAM and JBoss EAP access from remote clients

Before this step, the application is only available from the localhost. The “before” configuration makes sense for a personal laptop, but the “after” configuration is more appropriate for an enterprise deployment.

Business Central and the Master kieserver will use ports 8080–9029, the Blue kieserver will use ports 9030–9079, and the Green kieserver will use ports 9080–10029.

Firewall

sudo firewall-cmd --add-port 8080-10029/tcp --permanent
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports

Listener Binding

[user@6-rhpam78 ~]$ $EAP_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /system-property="org.kie.server.location":write-attribute(name="value",\
value="http://${jboss.bind.address:127.0.0.1}:8080/kie-server/services/rest/server")
{
"outcome" => "success",
"response-headers" => {"process-state" => "restart-required"}
}
[standalone@localhost:9990 /] /system-property="org.kie.server.controller":write-attribute(name="value",\
value="http://${jboss.bind.address:127.0.0.1}:8080/business-central/rest/controller")
{
"outcome" => "success",
"response-headers" => {"process-state" => "restart-required"}
}
[standalone@localhost:9990 /] shutdown
[user@6-rhpam78 ~]$ $EAP_HOME/bin/standalone.sh -b rhpam78

Verify

Now, from a remote web browser, you should be able to visit Business Central:

But the JBoss EAP Console is still ONLY available when you visit from localhost. One way to access the console is to use “ssh -X” (-X enables X11 forwarding back to the X-Windows server on your Linux laptop.)

[user@my-linux-laptop] ssh -X rhpam78
[user@rhpam78] firefox
Visit: http://localhost:9990/console
jbossAdmin/jbossPassword
[Recall that we set these JBoss credentials, above.]

The security issues associated with opening access to the JBoss EAP console from remote systems are a subject for another day.

Create PostgreSQL’s rhpamuser and rhpamdatabase

$ psql -h postgresql11 -Upostgres postgres
postgres=# CREATE USER rhpamuser WITH PASSWORD 'rhpampassword';
postgres=# ALTER USER rhpamuser CREATEDB;
postgres=# ALTER USER rhpamuser CREATEROLE;
postgres=# ALTER USER rhpamuser SUPERUSER;
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
rhpamuser | Superuser, Create role, Create DB | {}
postgres=# CREATE DATABASE rhpamdatabase;
postgres=# grant all on database rhpamdatabase to rhpamuser;
postgres=# grant all PRIVILEGES on database rhpamdatabase to rhpamuser;

Verify
The following command should work from either host:


$ PGPASSWORD=rhpampassword psql -h postgresql11 -p 5432 -U rhpamuser rhpamdatabase
psql (11.8)
Type “help” for help.

rhpamdatabase=#

Install the PostgreSQL schema

PostgreSQL schema

$ cd rhpam7-install-demo/installs/

## Extract migration tools from add-ons
$ unzip rhpam-7.8.0-add-ons.zip rhpam-7.8.0-migration-tool.zip

## Extract the postgres files from the migration tools zip
$ unzip rhpam-7.8.0-migration-tool.zip \*postgres\*
$ cd rhpam-7.8.0-migration-tool/ddl-scripts/postgresql
$ PGPASSWORD=rhpampassword psql -h postgresql11 -p 5432 -U rhpamuser rhpamdatabase < postgresql-jbpm-schema.sql
$ echo "\dt" | PGPASSWORD=rhpampassword psql -h postgresql11 -p 5432 -U rhpamuser rhpamdatabase
List of relations
Schema | Name | Type | Owner
--------+--------------------------------+-------+-----------
public | attachment | table | rhpamuser
public | audittaskimpl | table | rhpamuser
public | bamtasksummary | table | rhpamuser
public | booleanexpression | table | rhpamuser
public | casefiledatalog | table | rhpamuser
public | caseidinfo | table | rhpamuser
public | caseroleassignmentlog | table | rhpamuser
...

Enable PostgreSQL visibility from the JBoss EAP server

Download the PostgreSQL JDBC driver from https://jdbc.postgresql.org/download.html

$EAP_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] module add --name=com.postgresql --resources=~/Downloads/postgresql-42.2.14.jar --dependencies=javax.api,javax.transaction.api
[standalone@localhost:9990 /] /subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=com.postgresql,\
driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
[standalone@localhost:9990 /] xa-data-source add --name=rhpamXADS --jndi-name=java:/rhpamXADS --driver-name=postgresql --user-name=rhpamuser \
--password=rhpampassword --validate-on-match=true --background-validation=false \
--valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker \
--exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter \
--xa-datasource-properties={"ServerName"=>"postgresql11","PortNumber"=>"5432","DatabaseName"=>"rhpamdatabase"}

Reference: postgresql_xa_datasource example

Verify: The PostgreSQL Driver and Datasource are now visible in the JBoss EAP console:
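
If you prefer the command line to the console, the datasource can also be checked from jboss-cli; the test-connection-in-pool operation should report success when the driver, credentials, and network path are all correct (a quick sanity check against the rhpamXADS datasource created above):

$EAP_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=datasources/xa-data-source=rhpamXADS:test-connection-in-pool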

Configure the kieserver to use PostgreSQL storage:

$EAP_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /system-property=org.kie.server.persistence.ds:add(value=java:/rhpamXADS)
[standalone@localhost:9990 /] /system-property=org.kie.server.persistence.dialect:add(value=org.hibernate.dialect.PostgreSQLDialect)

EJB Timer configuration:

This step ensures that the EJB Timers are stored in the PostgreSQL database.

[user@6-rhpam78 ~]$ $EAP_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=ejb3/service=timer-service/file-data-store=default-file-store:remove
[standalone@localhost:9990 /] /subsystem=ejb3/service=timer-service/database-data-store=ejb_timer_ds:add(datasource-jndi-name="java:/rhpamXADS",database="rhpamdatabase",partition="ejb_timer_master_part",refresh-interval="30000")
[standalone@localhost:9990 /] /subsystem=ejb3/service=timer-service:write-attribute(name="default-data-store", value="ejb_timer_ds")
[standalone@localhost:9990 /] shutdown
[user@6-rhpam78 ~]$ $EAP_HOME/bin/standalone.sh -b rhpam78

NOTE: refresh-interval sets the refresh interval for the EJB timer datastore service, in milliseconds.
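
To confirm the change, the timer-service configuration can be read back with jboss-cli; the output should show the ejb_timer_ds database-data-store with the ejb_timer_master_part partition and default-data-store set to ejb_timer_ds:

$EAP_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=ejb3/service=timer-service:read-resource(recursive=true)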

Verify EJB Timer configuration:

  1. Create a project with a timer, or import a project with a timer into Business Central. For example, import
    https://github.com/jtayl222/MasterProject.git. Build and deploy.

2. table_size_1.sql: SQL script to show PostgreSQL space usage

SELECT *, pg_size_pretty(total_bytes) AS total
     , pg_size_pretty(index_bytes) AS INDEX
     , pg_size_pretty(toast_bytes) AS toast
     , pg_size_pretty(table_bytes) AS TABLE
FROM (
    SELECT *, total_bytes-index_bytes-COALESCE(toast_bytes,0) AS table_bytes FROM (
        SELECT c.oid, nspname AS table_schema, relname AS TABLE_NAME
             , c.reltuples AS row_estimate
             , pg_total_relation_size(c.oid) AS total_bytes
             , pg_indexes_size(c.oid) AS index_bytes
             , pg_total_relation_size(reltoastrelid) AS toast_bytes
        FROM pg_class c
        LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE relkind = 'r' AND relname like '%timer%'
    ) a
) a;
-- Thanks to https://wiki.postgresql.org/wiki/Disk_Usage

3. Create a bash script named do_1000_master.bash which will make REST API calls to start 1,000 PAM process instances

#!/bin/bash

USER='-ukieserver:kieserver1!'
MASTER_PORT=8080
MASTER_CONTAINER_ID=MasterProject_1.0.1-SNAPSHOT
MASTER_PROCESS_ID="MasterProject.MasterProcess"

# Start $1 process instances via the KIE Server REST API
function start_them () {
    local last=$1
    echo "starting $last loops."
    for i in $(seq 1 $last)
    do
        echo -n "."
        curl --silent $USER -X POST "http://rhpam78:$MASTER_PORT/kie-server/services/rest/server/containers/$MASTER_CONTAINER_ID/processes/$MASTER_PROCESS_ID/instances" \
            -H "accept: application/json" -H "content-type: application/json" -d "{}"
    done
}

start_them 1000

4. Verification:

  • EJB Timer data is being stored in the PostgreSQL database
$ PGPASSWORD=rhpampassword psql -h postgresql11 -p 5432 -U rhpamuser rhpamdatabase  < /tmp/table_size_1.sql 
oid | table_schema | table_name | row_estimate | total_bytes | index_bytes | toast_bytes | table_bytes | total | index | toast | table

-------+--------------+-----------------+--------------+-------------+-------------+-------------+-------------+---------+-------+------------+---------
17985 | public | jboss_ejb_timer | 814 | 2154496 | 90112 | 8192 | 2056192 | 2104 kB | 88 kB | 8192 bytes | 2008 kB
(1 row)

Create Master, Blue and Green kieserver configurations:

cd rhpam7-install-demo/target
mv jboss-eap-7.3 jboss-eap-7.3-master
cp -rp jboss-eap-7.3-master jboss-eap-7.3-blue
cp -rp jboss-eap-7.3-master jboss-eap-7.3-green
vi jboss-eap-7.3-blue/standalone/configuration/standalone.xml
vi jboss-eap-7.3-green/standalone/configuration/standalone.xml

Use unique network ports and RDBMS storage for EJB Timers

When you edit the blue and green standalone.xml files, space the kieserver network ports by 50 and use unique storage for the EJB Timers. My tests indicate that using separate partitions within the datastore is sufficient. Using separate datastores is another viable approach.

Blue:

<property name="org.kie.server.location" value="http://${jboss.bind.address:127.0.0.1}:8130/kie-server/services/rest/server"/>
<property name="org.kie.server.id" value="blue-kieserver"/>
<database-data-store name="ejb_timer_ds" datasource-jndi-name="java:/rhpamXADS" database="rhpamdatabase" partition="ejb_timer_blue_part" refresh-interval="30000"/>

Green:

<property name="org.kie.server.location" value="http://${jboss.bind.address:127.0.0.1}:8180/kie-server/services/rest/server"/>
<property name="org.kie.server.id" value="green-kieserver"/>
<database-data-store name="ejb_timer_ds" datasource-jndi-name="java:/rhpamXADS" database="rhpamdatabase" partition="ejb_timer_green_part" refresh-interval="30000"/>

Bring up the final deployment

./jboss-eap-7.3-master/bin/standalone.sh -b rhpam78
./jboss-eap-7.3-blue/bin/standalone.sh -Djboss.socket.binding.port-offset=50 -b rhpam78
./jboss-eap-7.3-green/bin/standalone.sh -Djboss.socket.binding.port-offset=100 -b rhpam78
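
To confirm that all three servers came up on their expected HTTP ports (8080 for master, 8130 for blue with the offset of 50, and 8180 for green with the offset of 100), a quick check on the PAM host:

# List listening TCP sockets and look for the three kieserver HTTP ports
ss -tln | grep -E '8080|8130|8180'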

“Per process instance” runtime strategy

Do not use the Singleton runtime strategy with the EJB Timer Scheduler (the default scheduler in KIE Server) in a production environment. This combination can result in Hibernate problems under load. Per process instance runtime strategy is recommended if there is no specific reason to use other strategies. For more information about this limitation, see Hibernate issues with Singleton strategy and EJBTimerScheduler.

Use the “Per process instance” Runtime strategy
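
The runtime strategy is typically set in Business Central under the project's settings, but it can also be confirmed directly in the kjar's deployment descriptor. A quick check, assuming the default descriptor location inside the project source tree:

# The runtime-strategy element should read PER_PROCESS_INSTANCE rather than SINGLETON
grep runtime-strategy src/main/resources/META-INF/kie-deployment-descriptor.xml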

Deploy your projects to the appropriate kieservers:

This test used the simplest possible process definitions, which are available on GitHub (MasterProject, BlueProject, and GreenProject). Each process contains a boundary timer which will fire after one minute.
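
As a quick check that deployed processes are actually waiting on their boundary timers, the KIE Server queries endpoint can list active process instances (a sketch; status=1 filters for active instances, and the credentials are the same kieserver user used in the scripts above):

# List active process instances on the master kieserver (port 8080);
# repeat with ports 8130 and 8180 for the blue and green kieservers
curl --silent -ukieserver:kieserver1! \
  "http://rhpam78:8080/kie-server/services/rest/server/queries/processes/instances?status=1&page=0&pageSize=10" \
  -H "accept: application/json"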

Final Stress Test:

As a final verification, a stress test was used to validate the final configuration. A REST API script similar to "do_1000_master.bash", above, was used to round-robin through the 3 kieservers, starting process instances as quickly as possible; a sketch of such a script is shown below. Using my meager home lab equipment and a WiFi connection between PAM and the REST API client, 30,000 process instances were launched in 73 minutes, 10,000 per kieserver. Each process persisted its timer to PostgreSQL, waited for the event when the timer expired, and then exited.
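
The actual "do_10000_each.bash" was not published with the article; the following is a hedged sketch of what such a round-robin driver could look like. The blue and green container and process IDs are assumptions based on the example project names; adjust them to match your own deployments.

#!/bin/bash
# Round-robin driver sketch similar to do_10000_each.bash.
# Ports 8080/8130/8180 correspond to the master/blue/green kieservers configured above.
# The Blue/Green container and process IDs are assumptions; change as needed.

USER='-ukieserver:kieserver1!'

PORTS=(8080 8130 8180)
CONTAINERS=(MasterProject_1.0.1-SNAPSHOT BlueProject_1.0.1-SNAPSHOT GreenProject_1.0.1-SNAPSHOT)
PROCESSES=(MasterProject.MasterProcess BlueProject.BlueProcess GreenProject.GreenProcess)

# Start $1 process instances on each of the three kieservers, round-robin
function start_round_robin () {
    local per_server=$1
    for i in $(seq 1 $per_server)
    do
        for s in 0 1 2
        do
            curl --silent $USER -X POST \
                "http://rhpam78:${PORTS[$s]}/kie-server/services/rest/server/containers/${CONTAINERS[$s]}/processes/${PROCESSES[$s]}/instances" \
                -H "accept: application/json" -H "content-type: application/json" -d "{}" > /dev/null
            echo -n "."
        done
    done
}

# 10,000 iterations x 3 kieservers = 30,000 process instances
start_round_robin 10000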

While the stress test was running

(See table_size_1.sql, above)

$ PGPASSWORD=rhpampassword psql -h postgresql11 -p 5432 -U rhpamuser rhpamdatabase < table_size_1.sql
oid | table_schema | table_name | row_estimate | total_bytes | index_bytes | toast_bytes | table_bytes | total | index | toast | table
-------+--------------+-----------------+--------------+-------------+-------------+-------------+-------------+---------+--------+------------+---------
17985 | public | jboss_ejb_timer | 401 | 2146304 | 163840 | 8192 | 1974272 | 2096 kB | 160 kB | 8192 bytes | 1928 kB
SELECT count(partition_name)
FROM jboss_ejb_timer;
count
-------
414
(1 row)
SELECT partition_name, count(partition_name)
FROM jboss_ejb_timer
GROUP BY partition_name;

partition_name | count
-----------------------+-------
ejb_timer_blue_part | 138
ejb_timer_master_part | 138
ejb_timer_green_part | 138
(3 rows)

SUCCESS: The system processed more than 400 round trips per minute: client to kieserver, PostgreSQL insert row, wait, PostgreSQL delete row, complete process instance.

  • 100% of the process instances had successfully completed within 90 seconds after "do_10000_each.bash" completed.
  • No hung processes
  • There was never any excessive growth in the PostgreSQL table size

After the stress test has completed

(See table_size_1.sql, above)

$ PGPASSWORD=rhpampassword psql -h postgresql11 -p 5432 -U rhpamuser rhpamdatabase < table_size_1.sql
oid | table_schema | table_name | row_estimate | total_bytes | index_bytes | toast_bytes | table_bytes | total | index | toast | table
-------+--------------+-----------------+--------------+-------------+-------------+-------------+-------------+--------+--------+------------+-------
17985 | public | jboss_ejb_timer | 0 | 188416 | 163840 | 8192 | 16384 | 184 kB | 160 kB | 8192 bytes | 16 kB
(1 row)
SELECT count(partition_name)
FROM jboss_ejb_timer;
count
-------
0
(1 row)
SELECT partition_name, count(partition_name)
FROM jboss_ejb_timer
GROUP BY partition_name;

partition_name | count
----------------+-------
(0 rows)

References:

[1] https://github.com/kiegroup/kogito-examples/blob/stable/process-timer-quarkus/README.md
[2]
