All Places > Alfresco Process Services & Activiti (BPM) > Blog

Over the last weekend we refactored our repository structure to make sure that we can evolve each of the services and cloud starters separately. This was a natural step forward to keep our projects aligned with the frameworks that they depend on.


We have moved the activiti-services that were originally created inside the activiti/activiti repository so that the repositories for services are independent from each other. Activiti Cloud Services are the lowest-level link between Activiti and Spring Cloud, and for that reason all the (Java) packages have been renamed. The same refactoring applies to the Maven artifacts for these services, which are now under a new GroupId.

The following diagram shows our current repository structure, which might expand in the future, but the basic structure will probably remain unchanged.




Main (Maven) Artifacts

In the previous diagram there are 4 dashed boxes, each containing several repositories: Activiti, Activiti Cloud, Activiti Cloud Reference and Activiti Cloud Examples. The first layer is Activiti, which includes the Build projects, the Process Engine runtime and examples. At this level we depend only on Java JDK 8. Inside the Activiti repository we have activiti-spring, which serves as the basic integration with Spring 5 (the version is defined inside the Build parent project). Activiti relies on activiti-parent for 3rd party dependency management. We make sure that we deal with 3rd party dependencies and their versions in our parent poms, so it is completely forbidden to add version definitions in submodules.

Inside the Activiti Build repository we also provide the BOM (Bill of Materials - activiti-dependencies) that you can import into your projects so that Maven handles all the dependency versions for you. This will enable us to refactor Activiti’s internal modules without affecting applications that depend on specific modules, so we recommend using activiti-dependencies whenever possible.
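For reference, importing such a BOM into a consumer project's pom.xml looks roughly like this (the groupId and version below are placeholders for illustration, check Maven Central for the exact coordinates of activiti-dependencies):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Illustrative coordinates: verify the groupId and version on Maven Central -->
    <dependency>
      <groupId>org.activiti</groupId>
      <artifactId>activiti-dependencies</artifactId>
      <version>${activiti.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With the BOM imported, dependencies on individual Activiti modules can then be declared without explicit versions.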


Moving down the hierarchy we find Activiti Cloud, which represents our Spring Cloud enabled services. These services were designed to run in a Cloud Native way, and for that reason they use the abstraction layers provided by Spring Cloud. All the services are independent from each other, and we will try to keep base dependencies as decoupled as possible, but they do share the same parent, which specifies the Spring Boot and Spring Cloud versions that they all use.

We also have our BOM for Activiti Cloud that you can include in your projects to deal with dependencies of several starters and services in a centralised way:




There are also some cross-cutting concerns that need to be added to each of our individual services, such as security and utilities for testing. For that reason we have created a repository called activiti-cloud-service-common, which contains these shared cross-cutting features that are likely to be adopted by most of our services.

If you take a look at the following repositories (this list is likely to grow quite a lot):

You will find that in all of them we include two types of projects:

  • Base (Core) Services
  • Spring Starters

The base core services provide the business logic for each of these modules, and the Spring Starters allow you to add a single dependency to your Spring Boot application to enable the functionality provided by the base core services. We usually also provide autoconfigurations and annotations to make sure that our base core services are easy to bootstrap.

We recommend that our community members build Services & Starters at this level. If, for example, you want to tap into the events emitted by the process engine, writing a service that consumes those messages and executes some business logic should be trivial using Spring Cloud.

Everything up to this point (inside Activiti and Activiti Cloud) will be released and made available in Maven Central as Maven artifacts. We will release Early Access builds every month until we have enough for a Beta release. We did our first EA release at the end of August '17, and we will do the next one by the end of October.

Reference Docker Images & Examples

If you go down another level in the previous diagram, you will see a set of repositories which are built using an automated build configuration. These repositories contain reference implementations of each of our services, using our Spring Boot starters. By using these images you can get a basic working setup of all the services.

These Docker images are tagged every month as part of our release process; in our examples we always point to the latest build (which is produced after every commit to these repositories).

These Docker images are used to provide examples of how to deploy all these services using Docker Compose, Kubernetes, and Kubernetes HELM charts. You can find these examples inside the activiti-cloud-examples repository.

You, as a user, are encouraged to generate your own Docker images with your required customizations. We have tried to keep these images as simple as possible.

Deprecated Repositories

We will be deleting for good the activiti-cloud-starters repository, which we created to initially host all our starter projects. If you are working against this repository, please move your changes to the appropriate *-service repository.


Activiti adopts Git Flow

As the team is growing fast, we have adopted Git Flow as our standard flow for working on new features. We are going to use the jgitflow maven plugin to drive the flow.

This means that we will not work on the master branch any more; master will be used for releasing and will always contain a stable state of the project. In other words, master will be updated every month as part of our monthly release process. All work is now based on the develop branch, which you will find in all our repositories.

You can find more information about this plugin at Atlassian JGITFLOW, and the workflow documentation provides a very good and clear explanation of how the workflow works.

In order to contribute and send a Pull Request to any of our repositories, you will need to run the following Maven goal, making sure that you have an up-to-date develop branch:

This will create a feature branch where you can work and push your commits before sending a pull request. The Maven plugin will ask you to provide a name for your feature branch, which should follow the format <github username>-<issue number>-<short desc>. We will then review your PR and use jgitflow:feature-finish to merge it.
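As a quick illustration of that naming convention, a branch name can be sanity-checked with a small regular expression; this helper class is hypothetical and not part of the Activiti tooling:

```java
import java.util.regex.Pattern;

public class BranchNameCheck {
    // <github username>-<issue number>-<short desc>, e.g. "salaboy-1234-fix-docs"
    private static final Pattern FEATURE_BRANCH =
            Pattern.compile("^[A-Za-z0-9]+-\\d+-[a-z0-9-]+$");

    public static boolean isValidFeatureBranch(String name) {
        return FEATURE_BRANCH.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidFeatureBranch("salaboy-1234-fix-docs")); // true
        System.out.println(isValidFeatureBranch("my-branch"));             // false, no issue number
    }
}
```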

If you have questions about these changes or the procedures for contributions, drop us a line in our Gitter Channel:

Are you using any version of the #Activiti project? We would like to invite you to submit a paper to Alfresco DevCon, happening in Lisbon, Portugal, on 16-18 January 2018. If you are using the project or planning to, this is a great opportunity to share your project with a big community of users and a great opportunity to meet the team working on Activiti 7 & Activiti Cloud.

We will be sending some proposals about the new things we are building, so you can expect to see us talking about Activiti Cloud and how we are updating the engine to be Cloud Native. We will be showing examples of our brand new services designed for Kubernetes and Docker, and how you can leverage the new infrastructure to build scalable solutions that fit well with your existing infrastructure. We welcome ideas about topics that you consider worth the trip.

Why is this a very good opportunity to share and learn about the future of Activiti? 

There are several reasons why I want to meet with the large Activiti community and invite people using the project to submit an abstract for DevCon, to share with the audience how they are using the project and what they are looking forward to.

The following list shows what, from my experience, we can all gain from meeting up at a conference like DevCon:

  • We can share our experiences with other people who are already using Activiti
  • We can find out the common and shared pain points that will need to be solved by future versions
  • We can learn from each other how to solve and troubleshoot problems related to implementing BPM solutions in different industries
  • We can define the future roadmap together, and you as an individual or a company can join our open source community and have a say in the future of the project
  • We can use our time together to discuss different implementation techniques and approaches
  • We can have a deep discussion about the Activiti Cloud architectural changes, why modern environments are promoting new architectural patterns, and how that affects BPM implementations and Process Engines.

I offer my personal time to meet with community members and organizations that are interested in the project, and I also want to offer my assistance in submitting a paper for the conference, where we can present your use case and your expectations for the future together.

We don't have much time left, so if you are interested in this opportunity get in touch as soon as possible. You can find me every day in our Gitter Channel:

Or you can post a comment here, I will try to reply as soon as I can. 

See you all in Lisbon! 

Last week the team spent some time working on system-to-system integrations. You can find out more about our Activiti Cloud Connectors strategy here. This new strategy uses some abstraction layers to make sure that when we run inside a Kubernetes enabled environment our connectors can leverage Kubernetes Services. The JHipster integration is also moving forward; we now have clear plans for how it should look and how our users will benefit from it (a blog post is coming about this as well). We are also very excited to see the team expanding with some experienced members from the Activiti 5 and 6 community (@balsarori, @lucianoprea and @constantin-ciobotaru). We held some planning sessions to make sure that we can build all the new generation services in an autonomous way.


@willkzhou was looking at the ElasticSearch integration provided by @gmalanga to polish it and merge it into master.

@daisuke-yoshimoto created a Spring Boot application for the Audit Service with MongoDB and activiti-cloud-examples for the Audit Service with MongoDB. He also refactored the Audit Service with MongoDB to remove the Controller by using Spring Data REST.

@igdianov completed the graphql PoC for query module and started investigating using it to provide a notification mechanism.

@ryandawsonuk created an example Activiti project using the JHipster generator and added an Activiti Cloud example that uses the Spring Cloud Config Server.

@erdemedeiros worked on the message queue implementation for service tasks. All the information necessary to retrigger the execution is now stored in the engine core database. The executionId is no longer sent to the connector, only the correlation id.

@salaboy worked on finishing the connectors blog post and example on GitHub. We also had very interesting meetings with new/old members of the Activiti community who will be joining us on a daily basis: @balsarori, @lucianoprea and @constantin-ciobotaru.

This week we will be finishing another big repository refactoring to make sure that each of our services and starters are independent from each other and that we as a team, along with community members, don't block each other. If you experience issues around these changes, or if you were working against an existing repo that will no longer be supported, get in touch with us via Gitter and we will guide you to the new structure. A blog post will follow with the new structure as soon as it is ready.

This blog is a continuation of my first blog around the work we (Ciju Joseph and Francesco Corti) did as part of the Alfresco Global Virtual Hack-a-thon 2017.


In this blog I’ll be walking you through the aps-unit-test-example project we created, where I’ll be using the features from the aps-unit-test-utils library which I explained in the first blog.

About the Project

This project contains a lot of examples showing:

  • how to test various components in a BPMN (Business Process Model and Notation) model
  • how to test a DMN (Decision Model and Notation) model
  • how to test custom java classes that support your process models.

Project Structure

Before we even get to the unit testing part, it is very important to understand the project structure.

As you can see from the above diagram, this is a maven project. However, if you are a “gradle” person, you should be able to do it the gradle way too! The various sections of the project are:

  1. Main java classes - located under src/main/java. This includes all the custom java code that supports your process/DMN models.
  2. Test classes - located under src/test/java. The tests are grouped into different packages depending on the type of units they test.
    1. Java class tests - This includes test classes for the classes (e.g. Java Delegates, Task Listeners, Event Listeners, Custom Rest Endpoints, Custom Extensions, etc.) under src/main/java.
    2. DMN tests - As you can see from the package name (com.alfresco.aps.test.dmn) itself, I’m writing all the DMN tests under this package. The pattern I followed in this example is one test class per DMN file under the directory src/main/resources/app/decision-table-models.
    3. Process (BPMN) tests - Similar to the DMN tests, the package com.alfresco.aps.test.process contains all the BPMN test classes. I am following the pattern of one test class per BPMN file under src/main/resources/app/bpmn-models.
  3. App models - All the models (forms, bpmn, dmn, data models, stencils, app.json etc.) that are part of the process application are stored under the directory src/main/resources/app. When using the aps-unit-test-utils which I explained in the previous article, all the models are downloaded to this directory from APS. Once the tests pass successfully, we re-build the deployable process artifacts from this directory.
  4. Test resources - As with any standard java project, you can keep all your test resources in the directory src/test/resources. I’ll highlight a couple of files that you will find under this directory in the above project structure image.
    1. - This file contains the APS server configurations, such as the server address, API URL and user credentials, used for downloading the process application into your maven project. Please refer to my previous article for a detailed explanation of this file. You won’t find this file on GitHub under this project; the reason is that this file is intended to be developer specific and local to the workspace of a developer. For this reason it is included in the project’s .gitignore file to prevent it from getting saved to GitHub.
    2. process-beans-and-mocks.xml - the purpose of this file is to mock any project/process specific classes when you run your process tests. The concept is explained in detail in my previous article, where I explained a similar file called common-beans-and-mocks.xml.
  5. Build output - In the above screenshot you can see that there are two files, named and aps-unit-test-example-1.0-SNAPSHOT.jar, under the /target directory. This is the build output that gets generated when you package the app using maven commands such as “mvn clean package”. The “.zip” file is the app package created from the src/main/resources/app directory, which you can version after every build and deploy to higher environments. The “.jar” is the standard jar output including all the classes/resources from your src/main directory.
  6. Maven pom.xml - Since this is a maven based project, you need a pom.xml under the root of the project. Some of the dependencies and plugins used in this pom.xml are:
    • aps-unit-test-utils dependency - the test utils project which I explained in my previous post/blog.
    • maven-compiler-plugin - a maven plugin that helps compile the sources of the project
    • maven-assembly-plugin - a maven plugin that is used to package the “” archive from src/main/resources/app

Unit Test Examples

Now that you have a good understanding of all the project components, let’s take a look at some of the examples available in the project. I have tried my best to keep the test classes and processes as simple as possible, to make them easy for everyone to follow without much explanation.

Process Testing - The AbstractBpmnTest class can be used as a parent class for all the BPMN test classes. To avoid writing the same logic in multiple test classes, I added some common logic to it:

  • Setup of a mock email server
  • Process deployment prior to tests
  • Clean up, such as deleting all deployments after each test
  • Test coverage alerts
     /*
      * Including it in the Abstract Class to avoid writing this in all the Tests.
      * Pre-test logic flow -
      * 1) Download from APS if the system property is set to true
      * 2) Find all the bpmn20.xml's in {@value BPMN_RESOURCE_PATH} and deploy
      *    them to the process engine
      * 3) Find all the elements in the process that is being tested. This set will
      *    be compared with another set that contains the process elements that are
      *    covered in each test (this gets updated after each test).
      */
     @Before
     public void before() throws Exception {

          if (System.getProperty("") != null && System.getProperty("").equals("true")) {
               ActivitiResources.get(appName);
          }

          Iterator<File> it = FileUtils.iterateFiles(new File(BPMN_RESOURCE_PATH), null, false);
          while (it.hasNext()) {
               String bpmnXml = ((File) it.next()).getPath();
               String extension = FilenameUtils.getExtension(bpmnXml);
               if (extension.equals("xml")) {
                    repositoryService.createDeployment().addInputStream(bpmnXml, new FileInputStream(bpmnXml)).deploy();
               }
          }

          processDefinitionId = repositoryService.createProcessDefinitionQuery()
                    .processDefinitionKey(processDefinitionKey).singleResult().getId();
          List<Process> processList = repositoryService.getBpmnModel(processDefinitionId).getProcesses();
          for (Process proc : processList) {
               for (FlowElement flowElement : proc.getFlowElements()) {
                    if (!(flowElement instanceof SequenceFlow)) {
                         flowElementIdSet.add(flowElement.getId());
                    }
               }
          }
     }

     /*
      * Post-test logic flow -
      * 1) Update activityIdSet (Set containing all the elements tested)
      * 2) Delete all deployments
      */
     @After
     public void after() {
          for (HistoricActivityInstance act : historyService.createHistoricActivityInstanceQuery().list()) {
               activityIdSet.add(act.getActivityId());
          }
          List<Deployment> deploymentList = activitiRule.getRepositoryService().createDeploymentQuery().list();
          for (Deployment deployment : deploymentList) {
               activitiRule.getRepositoryService().deleteDeployment(deployment.getId(), true);
          }
     }

     /*
      * Tear down logic - Compare the flowElementIdSet with activityIdSet and
      * alert the developer if some parts are not tested
      */
     @AfterClass
     public static void afterClass() {
          if (!flowElementIdSet.equals(activityIdSet)) {
               System.out.println(
                         "***********PROCESS TEST COVERAGE WARNING: Not all paths are being tested, please review the test cases!***********");
               System.out.println("Steps In Model: " + flowElementIdSet);
               System.out.println("Steps Tested: " + activityIdSet);
          }
     }
Process Example 1

In this example we will test the following process diagram, a simple process containing three steps (Start → User Task → End), using UserTaskUnitTest, the test class associated with this process, which tests the following:

  • A process is started correctly
  • Upon start a user task is created and assigned to the correct user with the correct task due date
  • Upon completion of the user task the process is ended successfully
@ContextConfiguration(locations = { "classpath:activiti.cfg.xml", "classpath:common-beans-and-mocks.xml" })
public class UserTaskUnitTest extends AbstractBpmnTest {

     /*
      * Setting the App name to be downloaded if run with the download property.
      * Also set the process definition key of the process that is being tested
      */
     static {
          appName = "Test App";
          processDefinitionKey = "UserTaskProcess";
     }

     @Test
     public void testProcessExecution() throws Exception {
          /*
           * Creating a map and setting a variable called "initiator" when
           * starting the process.
           */
          Map<String, Object> processVars = new HashMap<String, Object>();
          processVars.put("initiator", "$INITIATOR");

          /*
           * Starting the process using processDefinitionKey and process variables
           */
          ProcessInstance processInstance = activitiRule.getRuntimeService()
                    .startProcessInstanceByKey(processDefinitionKey, processVars);

          /*
           * Once started, assert that the process instance is not null and
           * successfully started
           */
          assertNotNull(processInstance);

          /*
           * Since the next step after start is a user task, doing a query to find
           * the user task count in the engine. Assert that it is only 1
           */
          assertEquals(1, taskService.createTaskQuery().count());

          /*
           * Get the Task object for further task assertions
           */
          Task task = taskService.createTaskQuery().singleResult();

          /*
           * Asserting the task for things such as assignee, due date etc. Also,
           * at the end of it, complete the task. Using the custom assertion
           * TaskAssert from the utils project here
           */
          TaskAssert.assertThat(task).hasAssignee("$INITIATOR", false, false).hasDueDate(2, TIME_UNIT_DAY).complete();

          /*
           * Using the custom assertion ProcessInstanceAssert, make sure that the
           * process is now ended.
           */
          ProcessInstanceAssert.assertThat(processInstance).isComplete();
     }
}

Process Example 2

Let’s now look at a process that is a little more complex than the previous one. As you can see from the diagrams below, there are two units that are candidates for unit testing in this model: the process model and the DMN model.

  • - Similar to the above example, this is the test class associated with this process, which tests the following:
    • A process is started correctly
    • All possible paths in the process, based on the output of the rule step
    • Successful completion of the process
    • Mocking of the rule step - when it comes to the rules/decision step in the process, we do not invoke the actual DMN file associated with the process. From a process perspective, all we care about is that an appropriate variable is set at this step so that the process takes the respective path being tested. Hence the mock.
  • - This is the test class associated with the DMN file that is invoked from this process. More on this in the next section.

DMN Testing - Similar to the AbstractBpmnTest class I explained above, the AbstractDmnTest class can be used as a parent class for all the DMN test classes. To avoid writing the same logic in multiple test classes, I added some common logic to it:

  • DMN deployment prior to tests
  • Clean up, such as deleting all deployments after each test
     /*
      * Including it in the Abstract Class to avoid writing this in all the
      * Tests. Pre-test logic -
      * 1) Download from APS if the system property is set to true
      * 2) Find all the dmn files in {@value DMN_RESOURCE_PATH} and deploy
      *    them to the dmn engine
      */
     @Before
     public void before() throws Exception {

          if (System.getProperty("") != null && System.getProperty("").equals("true")) {
               ActivitiResources.get(appName);
          }

          // Deploy the dmn files
          Iterator<File> it = FileUtils.iterateFiles(new File(DMN_RESOURCE_PATH), null, false);
          while (it.hasNext()) {
               String dmnFile = ((File) it.next()).getPath();
               String extension = FilenameUtils.getExtension(dmnFile);
               if (extension.equals("dmn")) {
                    DmnDeployment dmnDeployment = repositoryService.createDeployment()
                              .addInputStream(dmnFile, new FileInputStream(dmnFile)).deploy();
                    deploymentList.add(dmnDeployment.getId());
               }
          }
     }

     /*
      * Post-test logic -
      * 1) Delete all deployments
      */
     @After
     public void after() {
          for (Long deploymentId : deploymentList) {
               repositoryService.deleteDeployment(deploymentId);
          }
     }
DMN Example 1

In this example we will test the following DMN model which is a very simple decision table containing three rows of rules.

  • - the test class associated with the above DMN model. The test cases in this file test every row in the DMN table and verify that each is executed as expected. The number of rules can grow considerably over time in real life, so it is important to have test cases covering all the possible hit and miss scenarios for healthy maintenance of your decision management and business rules.
@ContextConfiguration(locations = { "classpath:activiti.dmn.cfg.xml" })
public class DmnUnitTest extends AbstractDmnTest {

     static {
          appName = "Test App";
          decisonTableKey = "dmntest";
     }

     /*
      * Test a successful hit using all possible inputs
      */
     @Test
     public void testDMNExecution() throws Exception {
          /*
           * Invoke with input set to xyz and assert output is equal to abc
           */
          Map<String, Object> processVariablesInput = new HashMap<>();
          processVariablesInput.put("input", "xyz");
          RuleEngineExecutionResult result = ruleService.executeDecisionByKey(decisonTableKey, processVariablesInput);
          Assert.assertEquals(1, result.getResultVariables().size());
          Assert.assertSame(result.getResultVariables().get("output").getClass(), String.class);
          Assert.assertEquals(result.getResultVariables().get("output"), "abc");

          /*
           * Invoke with input set to 123 and assert output is equal to abc
           */
          processVariablesInput.put("input", "123");
          result = ruleService.executeDecisionByKey(decisonTableKey, processVariablesInput);
          Assert.assertEquals(1, result.getResultVariables().size());
          Assert.assertSame(result.getResultVariables().get("output").getClass(), String.class);
          Assert.assertEquals(result.getResultVariables().get("output"), "abc");

          /*
           * Invoke with input set to abc and assert output is equal to abc
           */
          processVariablesInput.put("input", "abc");
          result = ruleService.executeDecisionByKey(decisonTableKey, processVariablesInput);
          Assert.assertEquals(1, result.getResultVariables().size());
          Assert.assertSame(result.getResultVariables().get("output").getClass(), String.class);
          Assert.assertEquals(result.getResultVariables().get("output"), "abc");
     }

     /*
      * Test a miss
      */
     @Test
     public void testDMNExecutionNoMatch() throws Exception {
          Map<String, Object> processVariablesInput = new HashMap<>();
          processVariablesInput.put("input", "dfdsf");
          RuleEngineExecutionResult result = ruleService.executeDecisionByKey(decisonTableKey, processVariablesInput);
          Assert.assertEquals(0, result.getResultVariables().size());
     }
}

Custom Java Class Testing

This section is about testing the classes that you may write to support your process models. This includes the testing of Java Delegates, Task Listeners, Event Listeners, Custom Rest Endpoints, Custom Extensions etc., which are available under src/main/java. The naming convention I followed for the test classes is “<ClassName>Test”, and the package name is the same as the package of the class being tested.


Let’s now inspect an example: the testing of a task listener named TaskAssignedTaskListener.

Example 1

The above task listener is used in a process named CustomListeners in the project. From a process testing perspective, this TaskListener is mocked in the process test class via process-beans-and-mocks.xml. That still leaves the task listener class itself to be unit tested. Its test class works the following way:

  1. Set up mocks and inject mocks into classes that are being tested
  2. Set up mock answering stubs prior to execution
  3. Execute the test and assert the expected results
public class TaskAssignedTaskListenerTest {

     static class ContextConfiguration {
          public TaskAssignedTaskListener taskAssignedTaskListener() {
               return new TaskAssignedTaskListener();
          }
     }

     private static TaskAssignedTaskListener taskAssignedTaskListener;

     @Mock
     private DelegateTask task;

     @Before
     public void initMocks() {
          MockitoAnnotations.initMocks(this);
          taskAssignedTaskListener = new TaskAssignedTaskListener();
     }

     /*
      * Testing TaskAssignedTaskListener.notify(DelegateTask task) method using a
      * mock DelegateTask created using the Mockito library
      */
     @Test
     public void test() throws Exception {

          /*
           * Creating a map which will be used during the
           * DelegateTask.getVariable() & DelegateTask.setVariable() calls from
           * TaskAssignedTaskListener as well as from this test
           */
          Map<String, Object> variableMap = new HashMap<String, Object>();

          /*
           * Stub a DelegateTask.setVariable() call
           */
          doAnswer(new Answer<Void>() {
               public Void answer(InvocationOnMock invocation) throws Throwable {
                    Object[] arg = invocation.getArguments();
                    variableMap.put((String) arg[0], arg[1]);
                    return null;
               }
          }).when(task).setVariable(anyString(), any());

          /*
           * Stub a DelegateTask.getVariable() call
           */
          when(task.getVariable(anyString())).thenAnswer(new Answer<String>() {
               public String answer(InvocationOnMock invocation) {
                    return (String) variableMap.get(invocation.getArguments()[0]);
               }
          });

          /*
           * Start the test by invoking the method on the task listener
           */
          taskAssignedTaskListener.notify(task);

          /*
           * Sample assertion to make sure that the java code is setting the
           * correct value
           */
          assertThat(task.getVariable("oddOrEven")).isNotNull().isIn("ODDDATE", "EVENDATE");
     }
}



Check out the whole project on GitHub, where we have created a lot of examples that cover the unit testing of various types of BPMN components and scenarios. We’ll be adding more to it over the long run.



Hopefully this blog, along with unit-testing-part-1 and aps-ci-cd-example, is of some help in the lifecycle management of applications built using Alfresco Process Services powered by Activiti.

This blog is the first part of a two-blog series around the work we (Ciju Joseph & Francesco Corti) did as part of the Alfresco Global Virtual Hack-a-thon 2017.

Hack-a-thon Project Description & Goal

Alfresco Process Services (APS), powered by Activiti, has a standard way to develop custom java logic/extensions in your IDE. Typically, the process models, which often need a lot of collaboration between team members, are developed in the product's web modeler. From a packaging and versioning perspective, the process application should be managed together with your java project. Since the role of unit tests is critical during the lifecycle of these process artifacts, it is important to have good unit test coverage, testing both the process models and the custom java logic/extensions. The goal of this hack-a-thon project was to work on some unit test utilities and samples which can benefit the Alfresco community.


As part of this work we created two java projects (maven based) which are:

  1. aps-unit-test-utils - This is a utils project containing:
    1. Logic to automatically download the process app from an APS environment and make it available in your IDE/workspace.
    2. Activiti BPMN and DMN engine configuration xmls with all the necessary spring beans that you would need for your testing.
    3. Mocks for OOTB (Out of the Box) APS BPMN stencils such as “Publish To Alfresco”, “REST call task” etc. Not all the components are mocked, but this gives you an idea of how to mock OOTB stencil components!
    4. Helper classes and custom assertions, based on the AssertJ library, to help you quickly write tests over your processes and decision tables.
  2. aps-unit-test-example - This project contains a lot of examples showing how to test your BPMN/DMN models, as well as the custom java logic associated with these models.


In this blog I’ll be walking you through the utils project and some of the main features available in it.

Project Features

Utility to fetch Process App from APS

One of the main features of this project is that it allows you to download (optionally) the process models that you have modeled in the APS web modeler to your IDE during your local unit testing. Once you are happy with all your changes and unit test results, you can save those downloaded models into the version control repository. The reason why I highlighted the word “optionally” above is that when you run your unit tests in a proper CI/CD pipeline, you should be unit testing the models in your version control repository, avoiding any other external dependencies.


The package com.alfresco.aps.testutils.resources in the project contains the classes responsible for downloading the process models from an APS environment into your IDE.

It is the method com.alfresco.aps.testutils.resources.ActivitiResources.get(String appName) which does this magic for you! You can invoke this method from your test classes at testing time to download and test your changes from the web modeler. The method logic is:

  1. Read a property file named “” containing APS environment and api details. Sample property file available at
  2. If the app you are requesting is found on the server and you have permission to access it, it is downloaded to your project under the path src/main/resources/app as a zip and then exploded into this directory. All the existing models will be deleted prior to the download and unzip.
  3. From a unit testing perspective of the models, it is important that we have the BPMN and DMN XMLs available in the exploded directory under src/main/resources/app/bpmn-models and src/main/resources/app/decision-table-models respectively. When this method completes successfully, you will have those XMLs in the respective directories, ready for unit testing.
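The explode step above can be sketched with plain java.util.zip. This is an illustrative, self-contained sketch of the unzip part only (the real ActivitiResources.get also performs the REST download, permission checks and cleanup; the class name here is hypothetical):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.zip.*;

public class AppZipExploder {

    /** Explodes an app zip into targetDir, mirroring what the utility does
        after downloading the app from APS (download and cleanup omitted). */
    public static void explode(Path appZip, Path targetDir) throws IOException {
        try (ZipInputStream zin = new ZipInputStream(Files.newInputStream(appZip))) {
            for (ZipEntry e; (e = zin.getNextEntry()) != null; ) {
                Path out = targetDir.resolve(e.getName()).normalize();
                if (!out.startsWith(targetDir)) continue; // guard against zip-slip
                if (e.isDirectory()) {
                    Files.createDirectories(out);
                } else {
                    Files.createDirectories(out.getParent());
                    Files.copy(zin, out, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny fake app zip, then explode it like the utility would.
        Path dir = Files.createTempDirectory("app");
        Path zip = dir.resolve("app.zip");
        try (ZipOutputStream zout = new ZipOutputStream(Files.newOutputStream(zip))) {
            zout.putNextEntry(new ZipEntry("bpmn-models/process.bpmn20.xml"));
            zout.write("<definitions/>".getBytes(java.nio.charset.StandardCharsets.UTF_8));
            zout.closeEntry();
        }
        Path target = dir.resolve("src/main/resources/app");
        explode(zip, target);
        System.out.println(Files.exists(target.resolve("bpmn-models/process.bpmn20.xml")));
    }
}
```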

Configuration XMLs for the BPMN and DMN engines

Since Alfresco Process Services is a Spring-based webapp, we wrote all the utils, helper classes, examples etc. using Spring features. In this section, I’ll explain the two main configuration XMLs, present in the src/main/resources directory, that can be used to bootstrap the process engine and the DMN engine for unit testing.

  1. src/main/resources/activiti.cfg.xml: using this XML configuration, a process engine is created with an in-memory H2 database. As you probably know, there are a lot of configuration options available for the Activiti process engine. If your test setup requires advanced configuration, you should be able to do everything in this XML.
  2. src/main/resources/activiti.dmn.cfg.xml: this XML can be used to start the Activiti rule engine (DMN engine), again with an in-memory H2 database.

Depending on the model that you are testing (BPMN/DMN), you can use one of the above configuration xmls to bootstrap the engine from your test cases.
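For orientation, a minimal engine configuration with an in-memory H2 database typically looks like the following sketch. The property names are standard Activiti options, but this is illustrative only; the project's actual activiti.cfg.xml wires additional beans:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

  <!-- In-memory H2 engine for tests; the schema is (re)created on startup. -->
  <bean id="processEngineConfiguration"
        class="org.activiti.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration">
    <property name="jdbcUrl" value="jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000" />
    <property name="databaseSchemaUpdate" value="true" />
    <property name="asyncExecutorActivate" value="false" />
  </bean>
</beans>
```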

Non-Engine Bean Configuration XML

src/main/resources/common-beans-and-mocks.xml: This XML can be used to configure any mock/real beans that are required for your process testing but are not really part of the process/rule engine configuration. Those mock and non-mock beans are explained in the following subsections.

Mock Beans for OOTB APS BPMN Stencils

Since I think “mocks” are best explained with an example, please find below a process diagram where I’m using an OOTB APS “REST call task” stencil. I have also highlighted some of the other components in the APS editor that fall in this category of OOTB stencils.


In this example, from a unit testing perspective of OOTB components, you need to verify a few things, such as:

  1. This step is invoked upon a successful start of the process
  2. The expected response is set in your unit test so that you can continue with the next steps in the process.
  3. The configurations set on the model are successfully transferred to the BPMN XML upon export/deployment

However, there are things that are not in scope for unit testing of OOTB components:

  1. Test whether the REST API configured is invoked successfully - this is integration testing
  2. Testing of various configurations available on a REST task - this is the responsibility of Alfresco engineering team to make sure the configurations are working as expected


This is where we use mocks instead of the real classes behind these components. In order to create mocks for these components, we first need to look at how these tasks appear inside the deployed bpmn.xml. For example, the BPMN equivalent of the above diagram is shown below:

<serviceTask id="sid-AB2E4A5F-4BF6-48BE-8FF1-CDE01687E69A" name="Rest Call" activiti:async="true" activiti:delegateExpression="${activiti_restCallDelegate}">
    <activiti:field name="restUrl">...</activiti:field>
    <activiti:field name="httpMethod">...</activiti:field>
    ...
</serviceTask>

As you can see from the XML, the bean responsible for the REST call is activiti_restCallDelegate. This bean also has some fields named “httpMethod”, “restUrl” etc. Let’s now look at the mock class (given below) that I created for this bean. Since it is a mock class, all you need to do is create a Java delegate with the field extensions that are present in the bpmn.xml.

package com.alfresco.aps.mockdelegates;

import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.Expression;
import org.activiti.engine.delegate.JavaDelegate;

public class RestCallMockClass implements JavaDelegate {

    Expression restUrl;
    Expression httpMethod;
    Expression baseEndpoint;
    Expression baseEndpointName;
    Expression requestMappingJSONTemplate;

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        // Intentionally empty: the behavior is stubbed per test via Mockito.
    }
}

Now that I have the mock class, the next step is to create a mock bean using this class so that it is resolved correctly at unit test time. Given below is the mock bean configuration in src/main/resources/common-beans-and-mocks.xml corresponding to the above mentioned mock class.

<bean id="activiti_restCallDelegate" class="org.mockito.Mockito" factory-method="mock">
   <constructor-arg value="com.alfresco.aps.mockdelegates.RestCallMockClass" />
</bean>

You may have noticed that I am using a class called org.mockito.Mockito for the creation of the mocks. This is from the Mockito library, which is a great library for mocking!

I have created a few mock classes in this project, which you can find in the com.alfresco.aps.mockdelegates package, and I have included them in the common-beans-and-mocks.xml file too. As you probably know, APS contains a lot of OOTB stencils and this project covers only a very small subset. The point is, you should be able to mock any such OOTB beans using the above approach.

Common Beans

Any common beans (helper classes, utils etc) that you may require in the context of your testing can be added to common-beans-and-mocks.xml. Technically it can be separated from the mock xml, but for the sake of simplicity I kept it in the same xml.

Custom Assertions & Helper Classes

The classes in the packages com.alfresco.aps.testutils and com.alfresco.aps.testutils.assertions are basically helper classes and assertions which can be reused across all your process tests in an easy and consistent way. An approach like this helps reduce unit test creation time and also helps enforce process modelling and unit testing best practices. Highlighting some of the key features:

  1. The project contains an AbstractBpmnTest and an AbstractDmnTest class, which can be used as parent classes to test your bpmn.xml and dmn.xml respectively.
  2. Includes a mock email transport which is set up in the AbstractBpmnTest. This can be used to test any email steps you have in the process.
  3. Custom assertions, using the AssertJ library, on Activiti entities such as Task, ProcessInstance, DelegateExecution etc. Please note, the assertions I have created in this project definitely do not cover all possible assertion scenarios. However, I think there is a decent mix in there to get you started, and you can add many more assertion methods depending on your test cases.
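To illustrate the shape of such custom assertions, here is a stdlib-only sketch in the AssertJ fluent style (all names are hypothetical; the project's real assertions extend AssertJ's AbstractAssert and operate on real engine entities):

```java
public class TaskAssertSketch {

    // Minimal stand-in for an engine Task entity (illustrative only).
    static class Task {
        final String name; final String assignee;
        Task(String name, String assignee) { this.name = name; this.assignee = assignee; }
    }

    /** AssertJ-style fluent assertion; the real classes extend
        org.assertj.core.api.AbstractAssert instead of hand-rolling this. */
    static class TaskAssert {
        private final Task actual;
        private TaskAssert(Task actual) { this.actual = actual; }
        static TaskAssert assertThat(Task task) { return new TaskAssert(task); }
        TaskAssert hasName(String expected) {
            if (!expected.equals(actual.name))
                throw new AssertionError("expected name " + expected + " but was " + actual.name);
            return this;
        }
        TaskAssert isAssignedTo(String expected) {
            if (!expected.equals(actual.assignee))
                throw new AssertionError("expected assignee " + expected + " but was " + actual.assignee);
            return this;
        }
    }

    public static void main(String[] args) {
        Task task = new Task("Approve Invoice", "kermit");
        // Assertions chain fluently and fail with a descriptive message.
        TaskAssert.assertThat(task).hasName("Approve Invoice").isAssignedTo("kermit");
        System.out.println("assertions passed");
    }
}
```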


Check out the next blog, Alfresco Process Services - Unit Testing # II, to see the usage of some of these helper classes and assertions.


There are plenty of articles and blogs out there on unit testing best practices, so I’m not going to repeat them here. However, I just want to stress one point with the help of an example: do not mix integration testing with unit testing.

For example, a process containing the steps Start → DMN (Rules) → Service Task → User Task → End can be tested as three units:

  1. Unit test for the process xml where you mock the “DMN” and “Service Task” steps
  2. Unit test for the DMN xml, testing each rule in the DMN
  3. Unit test for the Service Task class

So, what next?

  • If you already have a good unit testing framework around your processes in APS, great, continue that way. Feel free to provide your feedback and contributions either as comments or as blogs here on community.
  • If you don’t have any unit testing around your processes in APS, I hope this article helps you get started. Feel free to fork it and make it your own at your organization. Good unit tests around your processes, rules, code etc. will definitely help you in the long run, especially when doing upgrades, migrations, change requests, bug fixes etc.


Happy Unit Testing!


RFC: Activiti Cloud Connectors

Posted by salaboy Employee Oct 12, 2017

Open source Java process engines have historically provided a way to create system to system integrations; it is something basic that is expected from a BPMS. The problem begins when these mechanisms impose a non-standard way of doing those integrations. By non-standard I mean something that feels awkward coming from outside the BPM world. Nowadays, with the rise of microservices, there are several alternatives for how system to system integrations are designed and implemented. Projects such as Apache Camel and Spring Integration are quite popular and they solve most of our integration problems for us.

Today’s real life integrations push us to use a lot of pre-baked tools to add fallback mechanisms, circuit breakers, bulkheads, client and cluster side load balancing, content type negotiation, automatic scaling up and down of our services based on demand, etc. Simple REST calls are not enough anymore.

In this blog post I share the approach that we are taking for Activiti Cloud Connectors. This is a request-for-comments blog post, and it describes the underlying technology that we are planning to use. Over time, we will simplify the mechanism shown here to make sure that we don’t push our users to add too much boilerplate.



Java Delegates & Classpath Extensions (The old way)


Java Delegates & classpath extensions are quite a powerful tool to extend the behavior of the process engine and the main entry point for system integrations. Until Activiti 6.x, if you wanted to interact with external services (running outside of the process engine), you were responsible for writing a Java class implementing the org.activiti.engine.delegate.JavaDelegate interface, which exposes a single method:

[code language="java"]void execute(DelegateExecution execution);[/code]

As you can imagine, you can do all sorts of things with this. It’s simple and extremely flexible. However, there are downsides to this approach that are common to any in-process extension model. For example, JavaDelegate classes are directly referenced from within process definitions, which introduces a tighter coupling than is ideal between definition and implementation. Also, if you make an error when coding your JavaDelegate, you run the risk of bringing down the JVM that is also running other processes. Another area that can cause difficulty is managing the Java classpath and ensuring that it is replicated consistently across each member of your cluster of process engines and job executors.
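To make the in-process model concrete, here is a minimal sketch of a typical delegate. The two interfaces are hand-rolled stand-ins for the real org.activiti.engine.delegate types, reduced to the methods used here so the snippet compiles without the engine on the classpath; the delegate and variable names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-ins for the Activiti API (the real interfaces have more methods).
interface DelegateExecution {
    Object getVariable(String name);
    void setVariable(String name, Object value);
}

interface JavaDelegate {
    void execute(DelegateExecution execution) throws Exception;
}

/** A typical in-process integration: called synchronously by the engine,
    so a failure here happens on the thread running the process itself. */
class PriceLookupDelegate implements JavaDelegate {
    @Override
    public void execute(DelegateExecution execution) throws Exception {
        String sku = (String) execution.getVariable("sku");
        // A real delegate would call an external system here.
        execution.setVariable("price", "ABC".equals(sku) ? 42 : 0);
    }
}

public class DelegateSketch {
    public static void main(String[] args) throws Exception {
        Map<String, Object> vars = new HashMap<>();
        DelegateExecution execution = new DelegateExecution() {
            public Object getVariable(String n) { return vars.get(n); }
            public void setVariable(String n, Object v) { vars.put(n, v); }
        };
        vars.put("sku", "ABC");
        new PriceLookupDelegate().execute(execution);
        System.out.println(vars.get("price")); // 42
    }
}
```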

For these and other reasons we want to introduce the concept of Activiti Cloud Connectors and decouple the responsibility of dealing with system to system interactions from the Process Engine.

Just to be clear, we are not removing JavaDelegates, instead we are providing a new recommended out-of-the-box mechanism that will tackle these problems.


Spring Cloud Connectors & Kubernetes


The Spring framework provides a concept and abstraction layer on top of service connectors and how these can be integrated with different cloud platforms. The main idea behind service connectors is that if your service depends on another service, you delegate the lookup of that service to the infrastructure. This is usually referred to as service/resource binding. You know that you want to interact with a very specific type of service, but you don’t know where it is or how to locate it, so you delegate that responsibility to another layer that will look up the available services, locate the right one and give your service a way to interact with it.

Spring Cloud Connectors provides this level of abstraction and you can find more about them here: They provide different implementations for different cloud providers: Cloud Foundry, Heroku, and a local implementation for testing and development.

There is a Kubernetes Cloud Connector that we might want to pick up (the one that I’m using in the example); it looks like a very good start, but it’s not being actively maintained, so we will probably take the lead in pushing that project forward. Don’t get me wrong, the project is great, but we will need to get more juice out of it. We also believe that this project could be moved to the incubator project related to Kubernetes, so versions can be aligned on every front.


Activiti Runtime Bundle Integration Events


As mentioned before, we are pushing the responsibility for executing system to system integrations out of the Process Engine. From the Process Engine's point of view, Service Tasks (and other integrations) will be executed in an async fashion: the Process Engine will only emit an IntegrationRequestEvent and then wait for an IntegrationResultEvent.

By pushing the responsibility to connectors, the Process Engine no longer cares about what technology, practices or language we use to perform system to system integrations. In other words, please welcome “Polyglot Connectors”!


We are aiming to provide an out-of-the-box solution for Service Tasks where you don’t really need to specify which connector is going to be in charge of performing the integration. By doing this, we remove the need to add a reference to a Java class inside the process definition, completely decoupling “what needs to be done” from “how it needs to be done” and following a more declarative approach.

Inside Activiti Cloud Connectors

Activiti Cloud Connectors run outside of our Runtime Bundles, meaning that Runtime Bundles will not fail if a connector fails. It also means that we can scale them independently.

From a very high level perspective Activiti Cloud Connectors will be responsible for:

  • Listen to IntegrationRequestEvents
  • Process IntegrationRequestEvents (Perform the Remote/Local - Sync/Async call)
  • Return IntegrationResultEvents
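The three responsibilities above can be sketched as a simple request/result exchange. This is only an illustration of the message flow: in the real architecture the two queues are Spring Cloud Stream channels on a broker such as RabbitMQ, and the event classes below are simplified stand-ins:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ConnectorSketch {

    // Simplified stand-ins for the events exchanged over the message broker.
    static class IntegrationRequestEvent {
        final String executionId; final Map<String, Object> vars;
        IntegrationRequestEvent(String id, Map<String, Object> v) { executionId = id; vars = v; }
    }
    static class IntegrationResultEvent {
        final String executionId; final Map<String, Object> vars;
        IntegrationResultEvent(String id, Map<String, Object> v) { executionId = id; vars = v; }
    }

    /** What a connector does: consume a request, perform the integration,
        and publish a result correlated by execution id. */
    static IntegrationResultEvent process(IntegrationRequestEvent request) {
        // ... the call to the external system would happen here ...
        return new IntegrationResultEvent(request.executionId,
                Map.of("paymentApproved", true));
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<IntegrationRequestEvent> requests = new LinkedBlockingQueue<>();
        BlockingQueue<IntegrationResultEvent> results = new LinkedBlockingQueue<>();

        // The engine emits a request; the service task execution waits.
        requests.put(new IntegrationRequestEvent("exec-1", Map.of("amount", 99)));

        // The connector consumes, processes, and returns a result.
        results.put(process(requests.take()));

        IntegrationResultEvent r = results.take();
        System.out.println(r.executionId + " " + r.vars);
    }
}
```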

Activiti Cloud Connector projects can bundle any number of system to system interactions that are related and can be managed together. This means that if you recognize a set of services that are often updated together you can bundle inside an Activiti Cloud Connector all these interactions, so your business processes can use them.

Activiti Cloud Connectors will be automatically discovered and registered inside the service registry provided by the infrastructure. This means that we can, at runtime, ask the service registry about the available connectors and their types. Our modelling tools can make use of that registry to present to the user the available connectors as well.


The Proof of Concept


You can find the PoC in my github account here:

This repository contains several pieces that demonstrate how to create connectors. You will find inside the repository:

  • Connector-consumer-app: our connector, which listens for IntegrationRequestEvents and produces IntegrationResultEvents after finishing the external system to system integration.
  • Payments-api: from the client (connector) point of view, we just create a definition of the service that we want to connect to. This helps us create different implementations of the same service for different environments.
  • Payments-local-connector: a local connector that uses Spring Connectors with a local service registry.
  • Payments-kube-connector: a Kubernetes service connector that allows you to discover a Kubernetes service and connect to it.
  • Payments-service: a very simple payment service to connect to as an example. This is a Spring Boot application that works in complete isolation from the connector.

The Connector Consumer App can be packaged using different dependencies and configurations based on the selected profile. We have the “local” and “kube” profiles. By default the “local” profile is active, which means that the local connector will be included.

Even if you are running with the local profile, you need to have RabbitMQ running in order to work with Spring Cloud Stream; for that reason, the project provides a docker-compose file that starts the Payments Service plus RabbitMQ.

Once you have that running, you can start the Connector Consumer App, which also exposes a REST endpoint to trigger integration events:

This is just to make it simple to test for this example; in practice, the Process Engine will produce an event that will be picked up by the @StreamListener here:

The service connector resolution magic happens here:       

[code language="java"] PaymentService payment = cloud.getServiceConnector("payment",PaymentService.class,null); [/code]

Where cloud is an instance of Cloud created using a CloudFactory:

[code language="java"] private Cloud cloud = new CloudFactory().getCloud(); [/code]

This allows us to detect the environment where we are running and then obtain references to the services that we want to interact with, in a decoupled way.

Depending on which (Spring) Service Connector we have on the classpath, a different lookup mechanism will be used to get a reference to the service. We can even have multiple (Spring) Service Connectors inside our Activiti Cloud Connector project, which will be enabled or disabled based on the cloud platform that is detected.

If you now jump to the two connector implementations, you will find that they share a lot of code. The only thing that changes is how the service reference is obtained: one uses a property file as a service registry, and the other uses the Kubernetes service registry, filtering by labels to get hold of the service reference.

There is no magic inside the connectors (local and kube), just a RestTemplate executing an exchange:

Each of these connectors is registered with the specific cloud provider depending on which ServiceInfoCreator we provide.

[code language="java"] public class PaymentServiceInfoCreator extends LocalConfigServiceInfoCreator [/code]


[code language="java"] public class PaymentServiceInfoCreator extends KubernetesServiceInfoCreator [/code]

Spring does the resolution using service loaders, and for that reason you need to register your own implementations by creating a couple of descriptors in META-INF/services/

You can take a look at these descriptors here:
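For reference, a Java service-loader descriptor is a plain text file named after the provider interface and listing implementation classes. For Spring Cloud Connectors the provider interface is org.springframework.cloud.ServiceInfoCreator, so a descriptor could look like this (the package name is an illustrative placeholder):

```
# file: src/main/resources/META-INF/services/org.springframework.cloud.ServiceInfoCreator
com.example.connectors.PaymentServiceInfoCreator
```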


These connectors are using different techniques to identify in which platform we are running.

In order to enable cloud connectors you need to set a property in your file:

[code language="xml"][/code]

In the case of Kubernetes, the connector looks at some environment variables that must be set when you are running inside Kubernetes. For running the local connector, you need to add a new properties file containing the service registry (which services are available and where they can be located). For our local payment service we have:

[code language="xml"][/code]
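As an illustration of the convention only (the values below are invented, not the PoC's actual registry file), the Spring Cloud local-config connector reads a properties file with an application id and one entry per service id:

```properties
spring.cloud.appId=connector-consumer-app
# serviceId -> URI; the custom PaymentServiceInfoCreator decides how this URI is parsed
spring.cloud.payment=http://localhost:8080/payments
```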

If we are running in Kubernetes, the connector will use the Kubernetes registry to find out which services are available and where they are.

The following diagram shows the isolation of connectors from the Process Engine (Runtime Bundle instance) and the external service, adding an abstraction layer that allows developers to use the framework/technology of their choice to implement these connectors.

Screen Shot 2017-10-11 at 18.13.19

It is important to notice that the Payment Activiti Cloud Connector lives inside the Activiti infrastructure. This means that it can leverage all the other components available inside the infrastructure, for example distributed logging, the configuration service, the service registry, etc.

To finish this long post I wanted to mention some important points:

  • This PoC represents the most complex scenario that you might find while planning your integrations, so it should not be taken as the hello world example.
  • I mentioned this before, but Activiti Cloud Connectors will be the perfect place to bundle multiple calls (integrations) against the same type of services. When using JavaDelegates you had two options: 1) create a specific JavaDelegate for each service call; following this approach you end up with very simple JavaDelegates, but a lot of them; or 2) create a very generic JavaDelegate that can be parameterized to make different calls to the same service, ending up with one very complex delegate that becomes complicated to maintain. With the new approach, Activiti Cloud Connectors allow you to define the granularity and the set of filters that you want to apply to the @StreamListeners to process different types of integrations.
  • If you are only targeting Kubernetes, you can use the Kubernetes APIs directly to resolve Kubernetes-exposed services. The Spring Cloud Kubernetes Connector leverages the Spring Cloud Connectors layer to make sure that our services are not tied to Kubernetes and can run on different infrastructures. This is a common case: you will probably want to run the same application both locally outside Kubernetes and in Kubernetes, so the abstraction layer is necessary.
  • Even though I’ve included 5 projects in the PoC repository, you can end up with only one project plus the service that you are trying to integrate with. I split the different connectors into separate modules to make sure that we all understand the boundaries of the services and the requirements from the client perspective.
  • The Activiti Cloud project will provide recommendations on how to create these connectors, but you are free to use any technology inside them, at your own risk :)
  • We are currently finishing the internal bits in the process engine. You can follow the PR here ( )

As soon as we have the PR ready to be merged in the process engine we will create some cloud connectors examples to demonstrate these concepts in action. Stay tuned and get in touch if you want to help or have comments about these topics. 

Last week (02/10/17 - 8/10/17) we covered numerous fronts. System to system integrations were at the core, but Igor and Ryan made huge progress on integrating with GraphQL and JHipster.


@igdianov finalized the GraphQL Pull Requests and improved our existing Query Code.

@ryandawsonuk created the first version of a JHipster generator for Activiti. Currently, this only supports embedding Activiti into an app rather than creating a runtime bundle and Activiti Cloud architecture but we’ve got a plan and have taken initial steps for covering the new Cloud components.

@erdemedeiros worked on message queue implementation for service tasks. The first PoC was successful: the new service task implementation sends a message to a queue and the execution waits for a new message containing the result before continuing. Improvements on the message content and filters are in progress.

@salaboy worked on getting an example with Service Cloud Connectors, Docker and Spring Cloud Streams working in kubernetes using Spring Cloud Connectors Kubernetes.

This week will be all about merging finished pull requests on the GraphQL side, refining the JHipster integration and finalizing the System to System Cloud Connectors (a blog post is coming about this). We are planning to do our last big repository refactoring related to the activiti-services and activiti-build to improve how we build and release all these new artifacts. If you experience any problem during this week, let us know so we can make sure to correct any issue that might arise. We want to welcome back @daisuke-yoshimoto who will be working on finalizing the MongoDB integration for our Audit module. We also want to welcome @willkzhou who just joined our community contributors and will be working on ElasticSearch integration.

The aim of this blog post is to show a working CI/CD example for managing process applications built using Alfresco Process Services (APS) powered by Activiti. Please note that the selection of tools used in this article is my personal choice from an ever-growing list of open source tools in this area! You should be able to swap one or more of these with your preferred tools/technologies. Similarly, things like release steps, versioning etc. used in this article are just one way of doing it; I understand that every organization/team has its own established release processes. Again, the idea is that you should be able to adjust the process to suit your needs!

CI/CD Process Steps

A typical CI/CD process for process applications built using Alfresco Process Services (APS) involves the following steps.

  1. Develop processes, forms, decision tables, data models, stencils etc. in the APS Web UI (App Designer)
  2. Group everything to an "App" and export the app.
  3. Create a java project using your IDE and write custom java extensions and delegates that are used by the process
  4. Add the exported app package into the Java project
  5. Write unit tests against the BPMN xml(s) available in the app package
  6. Configure the project to build an app archive (zip) and an app.jar upon build & package
  7. Add the java project to a version control repository
  8. Integrate the project in version control repository with an automation server and continuously build and unit test upon changes
  9. Version and upload the packages (zip and jar) built by the automation server job to an artifact repository.
  10. Download and deploy the versioned components from artifact repository to higher environments.
  11. Run any automated system integration tests that you may have after deployment.

DevOps Tools Used

The tools used in this example are:

  1. Jenkins  - a leading open-source automation server
  2. JFrog Artifactory - open-source artifact repository manager
  3. GitHub - popular web based Git version control repository
  4. Ansible - open-source deployment automation engine
  5. Apache Maven - open-source software project management tool (used for dependency management & packaging in this demo)

Component Diagram

Configuration Details

Sample Process Project (GitHub)

The first step in the demo is to create a simple process application in Alfresco Process Services. It is assumed that you are familiar with this step; if not, please refer to Get started with APS or the APS User Guide. Once a process and its associated components are built in the APS web modeler, group everything into an "App" and export the “App” archive. As the next step, we save the process app components to GitHub. Please refer to GitHub: super-cool-process-app for a sample Maven project structure. The process models and associated components we modeled in the web modeler are stored in this directory. The pom.xml of this project is configured to build the following two artifacts upon packaging.

  1. app.jar file - this will include all the custom java extensions that you may have to support the processes in the app.
  2. - this is just a zip archive of the app content models built using the maven assembly plugin.
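A zip artifact of this kind is typically produced with the maven-assembly-plugin bound to the package phase; a minimal pom.xml sketch could look like the following (the assembly descriptor path and execution id are illustrative assumptions, not copied from the repo):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <id>make-app-zip</id>
      <phase>package</phase>
      <goals><goal>single</goal></goals>
      <configuration>
        <descriptors>
          <!-- descriptor defines which app content files go into the zip -->
          <descriptor>src/main/assembly/app-zip.xml</descriptor>
        </descriptors>
      </configuration>
    </execution>
  </executions>
</plugin>
```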


Note: If you look at this project, the unit tests and Java classes are just empty placeholders, not really related to process testing or process Java delegates! The reason is that I wanted to focus on the whole lifecycle in this article rather than on the technical aspects. Check out the following two blogs where I have covered unit testing of APS applications in great depth.

JFrog Artifactory Configuration

Artifactory is used for resolving dependencies (including remote artifacts) at build & test time and also to store the output of each build (jar, and build info). Download and install Artifactory if you don’t already have it running. Once Artifactory is installed and running, we will do the following configuration.


In order to build an APS Maven project, you need access to the Alfresco Nexus repo. I like to keep all such remote repository information in one place and use a single URL to download all my dependencies, rather than including various repository definitions in pom files, Maven settings files etc. To do this in Artifactory, we first create a remote repo pointing to the Alfresco Enterprise Repo and then add this remote repo to a virtual repo in Artifactory.

Creation of Remote Repository in Artifactory

  • Repository Key: activiti-enterprise-releases
  • URL: Use the alfresco enterprise repo url
  • Advanced -> Username/Password: use your alfresco nexus repo credentials

Please refer to Artifactory Remote Repositories for more details on remote repositories.

Add Remote Repository to Virtual Repository in Artifactory

Now add the newly created remote repo to the default virtual release repo (named “libs-release”). Please refer to Artifactory Virtual Repositories for more details on virtual repositories.


Ansible Scripts

Ansible is a very simple but powerful tool that can help you automate your application deployment steps! If you are not familiar with Ansible, I recommend you check it out at A typical APS process application deployment involves the deployment of two types of artifacts:

  1. Process application archive (containing process models, form models, decision tables, data models and stencils)
  2. A jar file containing the dependent Java extensions and customizations


The Ansible scripts (Ansible Playbook) that I used to automate the deployment of above artifacts are:

  1. app-deployment playbook - deploys the process application archive using the deployment REST APIs of APS. Note: since this deployment saves the models to the Process Engine database, there is no need to run this playbook on all nodes in a cluster; you can run it against just one node in the cluster or against a load balancer URL.
  2. jar-deployment playbook - deploys/copies the process application jar file to tomcat/webapps/activiti-app/WEB-INF/lib and also deletes any old version of the same jar file. This step needs to be run on all nodes in the APS cluster if you have a clustered deployment. Things that are not covered in the playbook are:
    1. Custom property file deployments - it is also common to have property files per environment that are associated with your jar files.
    2. Restart of the app server, waiting for the server to come back up, making an API call to make sure the app is up and running etc. - after the jar/property file deployment, a restart is often recommended. You should be able to add those steps to your Ansible playbooks.
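For illustration, a minimal playbook of the first kind could look like the following sketch (the host group, variables and the exact APS REST endpoint are assumptions made for this example, not the repo's actual playbook):

```yaml
# app-deployment.yml (sketch)
- hosts: aps_lb            # one node or the load balancer is enough (see note above)
  tasks:
    - name: Publish the process application archive via the APS REST API
      uri:
        url: "http://{{ aps_host }}/activiti-app/api/enterprise/app-definitions/import"
        method: POST
        user: "{{ aps_user }}"
        password: "{{ aps_password }}"
        force_basic_auth: yes
        src: "{{ app_archive }}"   # path to the zip produced by the build
        status_code: 200
```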

Jenkins Configuration

Jenkins can be deployed in a number of ways; please refer to jenkins-download for details. Once it is installed, open the Jenkins web UI and install the following plugins, which we will use in this demo (Jenkins -> Manage Jenkins -> Manage Plugins):

  1. Ansible Plugin
  2. Artifactory Plugin
  3. Git Plugin
  4. GitHub Plugin
  5. Pipeline

Please refer to the respective documentation for the configuration of the plugins. Once all the plugins are correctly configured, let’s create the build and deployment jobs in Jenkins that will pull the above-mentioned sample code from GitHub, create deployable artifacts and deploy the code to the target environments. We’ll be creating the following two jobs:

  1. Build Job: The Jenkins job responsible for testing, building, publishing the build to Artifactory and triggering the deployment job after a successful build. Instructions on creating this job below:
    1. Jenkins -> New Item -> Select “Pipeline” type and give it a name. Eg: “super-cool-process-app” in this demo
    2. GitHub project -> Project url ->
    3. You can configure various build triggers. For this example, I elected to do a poll every minute against GitHub. Poll SCM -> Schedule -> * * * * *
    4. Now we will create a pipeline script as shown in pipeline-script-file. The pipeline is split into 5 stages:
      1. Clone - clone the project from GitHub
      2. Test - maven based testing which will execute all the unit tests in the project
      3. Artifactory config - configuration of Artifactory repo using the artifactory plugin. Various examples of this can be found at artifactory-project-examples
      4. Package and publish to Artifactory - this step will package and publish the build artifacts to Artifactory
      5. Kick off deployment - this step will kick off the downstream deployment job explained in the next section
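As a rough sketch of what such a pipeline script can look like (the repository URL, Artifactory server ID and downstream job name below are placeholders, not the actual pipeline-script-file):

```groovy
// Illustrative sketch -- repository URL, server ID and job name are placeholders.
node {
    stage('Clone') {
        git url: 'https://github.com/<your-org>/super-cool-process-app.git'
    }
    stage('Test') {
        sh 'mvn clean test'
    }
    def server
    stage('Artifactory config') {
        // 'my-artifactory' is the server ID configured via the Artifactory plugin
        server = Artifactory.server 'my-artifactory'
    }
    stage('Package and publish to Artifactory') {
        sh 'mvn clean package -DskipTests'
        server.upload spec: '''{"files": [{"pattern": "target/*.jar",
                                           "target": "libs-release-local/"}]}'''
    }
    stage('Kick off deployment') {
        build job: 'super-cool-process-app-deploy',
              parameters: [string(name: 'ARTIFACT_VERSION', value: env.BUILD_NUMBER)]
    }
}
```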


Screenshots of this config shown below:



  2. Deploy Job: This job takes care of the deployment of the artifacts built by the previous “Build Job” using the above-mentioned Ansible Playbooks. Instructions on creating this job below:
    1. Jenkins -> New Item -> Select “FreeStyle” type and give it a name. E.g. super-cool-process-app-deploy (this name is used in the previous pipeline script in the last stage, “Start Deployment Job”)
    2. Check (tick) the “This project is parameterized” checkbox to configure some input parameters for this job. This is because the Ansible Playbooks I wrote are generic and can work for any process application; hence they require a few input variables to correctly identify the artifact being deployed.
      1. APP_NAME: Default value is set to the name of the process application. In this case “Super Cool App”
      2. ARTIFACT_NAME: Default value is set to the name of the artifact (maven project). In this case “super-cool-process-app”
      3. ARTIFACT_VERSION: We don’t set any default value. The value for this parameter is passed when the job is triggered. For example, if triggered from a build job, pass the build version; if triggered manually, enter it via the UI.
    3. The next item “Source Code Management” is configured to point to the GitHub repository where I saved the Ansible Playbooks. Git -> Repositories -> Repository URL ->
    4. Build Environment -> Check the box “Delete workspace before build starts”
    5. Now we need to configure two “Invoke Ansible Playbooks” in the next section under “Build”.
      1. The first Ansible Playbook will deploy the process app package via the REST APIs of APS. Configuration as below:
        1. Ansible installation -> Select the Ansible configuration available in the dropdown
        2. Playbook path ->  ${WORKSPACE}/ansible-playbooks/app-deployment/site.yml (this will resolve to app-deployment-playbook)
      2. The second Ansible Playbook will deploy the process app jar file. Configuration as below:
        1. Ansible installation -> Select the Ansible configuration available in the dropdown
        2. Playbook path ->  ${WORKSPACE}/ansible-playbooks/jar-deployment/site.yml (this will resolve to jar-deployment-playbook)


Screenshots of this config shown below:


Demo Video

A short video of the whole process in action is available here

Things to consider

Since this is not a production-ready sample, here are some of the factors you may want to consider when implementing this in your production environment:

  1. Extend the pipeline to manage the APS software deployment as well - embedded, standalone in an application server, container deployment, etc.
  2. If using dockerized deployments, creating a Docker image per build is also an option instead of copying things like jars, property files, etc. to an existing image.
  3. If there are breaking changes in the java classes during process modifications, consider a proper deprecation strategy to handle any running process instances.
  4. Securely manage the credential storage using features such as "Vault"
  5. Management of environment variables
  6. Manage re-usable Java libraries/stencil libraries etc. separately from process-specific libraries.
  7. The app package can grow quite big if you have a lot of processes in one app, and this might make the release management and versioning of individual processes difficult. For more granular control of each process, you can create smaller apps containing one or two processes, and deploy each of those applications via its own CI/CD pipeline. However, if you would like to expose these processes in a single app to the end user (especially for user-initiated processes involving start forms), you can lock down the smaller individual apps to just a small set of PVT (Production Verification Test) users and the CI/CD user. Once the processes are verified, they can be made available to the end users via a separate public-facing process app.


So, if you don’t already have a CI/CD process around your process applications in APS, go ahead and set one up and I’m sure it will make your application development experience with Alfresco Process Services much easier!

Third month in and we are making huge progress. We have an initial set of services designed with a distributed approach in mind. Powered by Spring Cloud our services will run natively in Kubernetes + Docker making sure that there is no impedance mismatch between the infrastructure and the way that the services are designed and consumed. After releasing to Maven Central and tagging our initial release in Docker Hub you can start consuming our out of the box artefacts or build your own by using our Activiti Cloud Starters.

You can always find the most up to date roadmap here:


Milestone #0 - July 2017 - Ended

  • Clean up Activiti
  • Domain API + HAL API + Runtime Bundle
  • XML/JSON/SVG for process definitions
  • Audit Service: Event Store for Audit Information
  • Identity Management and SSO (KeyCloak implementation)
  • First Release Process

Milestone #1 - August 2017 - Ended

  • Domain API + HAL API + Runtime Bundle
  • Improvements, refinements and additions
  • Query Service: Event Store for Runtime Information
    • Security Enabled
    • JPA - Reference Implementation
  • Infrastructure Enabled Services
    • Gateway (Zuul)
    • Application Registry (Eureka)
    • SSO & IDM* (Keycloak default implementation)
  • All Services are Docker Enabled
  • All Services can be deployed into Kubernetes
  • Cloud Examples

Milestone #2 - September 2017 - Ended

  • Domain API + HAL API + Runtime Bundle
  • Audit Service Mongo DB alternative
  • GraphQL review in progress
  • Release to Maven Central
  • Infrastructure Enabled Services
    • Helm Charts
    • Cloud Documentation
  • Cloud Examples Improvements
    • Validation Examples
    • AWS
    • Kubernetes / Minikube
    • Docker Compose

Milestone #3 - October 2017 - In Progress

October will be all about making sure that our services’ endpoints provide all the functionality required to create complex applications.
Cloud Connectors will help to make sure that system-to-system integrations are handled in an asynchronous way and that minimal changes (extensions) are required to run business process models. We are working on mechanisms to reduce our Runtime Bundle classpath extension points in order to simplify upgrades and maintenance tasks.

We started planning our Repository Service that will be used to host different types of models such as Process Definitions, Forms, Data Models, Decision Models, Stencils/Extensions, etc.

In parallel with the Repository Service, we will have a new Form Service in charge of providing all the pieces and glue to render forms using ADF components.

  • Refactor Activiti Cloud Services
  • Process Def & Instance Security model
  • Activiti Cloud Connectors
    • Process Engine refactoring to avoid JavaDelegates and classpath extensions
    • Kubernetes Service ready
  • Model Repository Service (Design and Initial Implementation)
  • Form Service (Initial discussions and planning)

Milestone #4 - November 2017

  • Polyglot Connectors Examples
  • Application Context Service - (Design and Initial Implementation)
    • Publish/Deploy Runtime Bundle Service
  • Process Engine Clean ups and refactoring
    • BPMN2 Extensions
    • History Service
    • Job Executor
    • Timers
    • Emails Service

Milestone #5 - December 2017

  • Deployment Service & Pipeline (design and initial implementation)
  • New Decision Runtime Design and Initial Implementation
  • Polyglot MicroProcessRuntime PoC

As always, if you want to participate on the development process of some of these components get in touch. We are willing to collaborate with the open community and mentor people that want to learn about the project.

We look forward to all community involvement, from comments and concerns to help with documentation, tests and component implementations.

You can always find the most up-to-date Roadmap in our Github Wiki.

Remember, we are 24/7 in our Gitter Channel, feel free to join us there.

Stay tuned!

Last week (25/9/17 - 1/10/17) we managed to ship our first Early Access release. We are getting quite close to an initial version of the Activiti Cloud Connectors using Spring Cloud Service Connectors. Also, our GraphQL Query endpoints are looking really good thanks to @igdianov.

@daisuke-yoshimoto is doing great by adding the Mongo DB Audit Service alternative to our cloud examples.

This week, we had the pleasure of meeting some other community members who are looking forward to collaborating on and using this new version of the project.


@igdianov improved Query Endpoints and Models to support GraphQL endpoints.

@daisuke-yoshimoto started on the next issues: making the Spring Boot application for the Audit Service with MongoDB and the activiti-cloud-examples for the Audit Service with MongoDB.

@erdemedeiros is currently investigating how to provide a message-queue-based service task implementation.

@ryandawsonuk further hardened the release process, ensuring it is fully automated, and proved that we could do a hotfix release if we needed/wanted to. Ryan then worked on creating first versions of helm charts for kubernetes deployments.

@salaboy worked on getting an example with Service Cloud Connectors, Docker and Spring Cloud Streams.

This week

Today (2nd October 2017) we are going to do our monthly retrospective to update our roadmap based on our progress. Another blog post is coming about the updated roadmap.

We will be working with our community members to finish the GraphQL endpoints and Cloud Connectors so we can move forward and do some final repository refactorings to make sure that our next release handles the addition of new repositories correctly.

I'm proud to announce that we are now officially in Maven Central and Docker Hub. Our first Early Access release 7-201709-EA is out for the community to start experimenting. But what does this mean? Why Early Access? When is the next one coming? This blog post tries to clarify what is included in this Early Access release and when is the next release coming.

Plan, Build and Release Fast

In order to provide short iteration cycles we are planning to publish one of these Early Access releases every month (we will aim for the last Thursday of each month). In a componentized world each of the modules evolves separately, and we want to make sure that we build, test and release each of these modules every month. We are making a lot of changes to the Activiti project structure and we introduced Activiti Cloud; for this reason we want to make sure that early adopters looking for cloud-native components can get what they are looking for.

Our release plan is directed by our Roadmap planning, which you can find here: Activiti Roadmap. A new update on this roadmap is coming next Monday (October Update).

We will continue to improve our release process and our project structures to be aligned with the architectural decisions that we are making to enable Activiti to run natively on cloud providers.

What's included in 7-201709-EA?

This release includes our first iteration on improving how we handle project dependencies to make our frameworks, services and starters easy to consume. This also allows us to improve our modularity without affecting people's implementations.

The following two BOM (Bill Of Materials *-dependencies) artefacts were included in this release:

  • org.activiti:activiti-dependencies:7-201709-EA
  • org.activiti.cloud:activiti-cloud-dependencies:7-201709-EA

This allows you to include <dependencyManagement> tags in your projects to let these artefacts manage the correct versions of both activiti and activiti-cloud artefacts for you.
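For example, importing the BOM in a Maven project would look roughly like this (a sketch; use the release version that you are actually consuming):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Import the Activiti BOM so submodule versions are managed for you -->
    <dependency>
      <groupId>org.activiti</groupId>
      <artifactId>activiti-dependencies</artifactId>
      <version>7-201709-EA</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With the BOM imported, individual activiti dependencies in the same project can omit their `<version>` tags.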




This release also includes our new set of services that provide the core set of functionality for our Cloud Starters. These services now include:

  • Audit Service
  • Query Service
  • Events-enabled endpoints and Event Emitters
  • Runtime Bundle Services (Stateless & Immutable Process Engine)
  • Identity Services and integrations

Cloud Starters

If you are used to working with Spring Boot, our Cloud Starters provide a quick way to bootstrap different components in a straightforward way. All these starters are powered by Spring Cloud integration to make sure that they can be configured to leverage the ecosystem that surrounds them. For example, if there is a Service Registry, these services will be automatically registered with it. If there is an API gateway, these services will be automatically exposed and routes will be created to access them via the Gateway.

You can find our Cloud Starters here:

Our Cloud Starters are powered by Spring Boot and the following Spring Cloud projects:

  • Spring Cloud Streams
  • Spring Cloud Registry
  • Spring Cloud Gateway
  • Spring Cloud Config (work in progress)
  • Spring Cloud Service Connectors (work in progress)
  • Spring Cloud Contracts (work in progress)
  • Spring Cloud Kubernetes (work in progress)
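As a sketch, adding one of the starters to a Spring Boot project is a single dependency. The coordinates below are an assumed example, not confirmed by this post; check the cloud starters repository for the exact artifactIds:

```xml
<!-- Assumed coordinates for illustration -- verify against the repository -->
<dependency>
  <groupId>org.activiti.cloud</groupId>
  <artifactId>activiti-cloud-starter-runtime-bundle</artifactId>
</dependency>
```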

Docker Images

Using the starters we have created a set of Docker Images so that you can quickly bootstrap the whole infrastructure. You can use Docker Compose or Kubernetes to run all these services. Helm Charts for Kubernetes are in progress, and we, as developers and as the team in charge of the project, have chosen to use MiniKube for our everyday development environment.

All our Docker images can be configured using Environment Variables to allow different environments, set ups and flexibility on the deployment phase. 
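For example, a docker compose fragment overriding such variables might look like the following sketch (service names and values are assumptions; the SPRING_* names rely on Spring Boot's relaxed binding of environment variables to properties):

```yaml
# Illustrative fragment -- service names and values are assumptions.
services:
  rb-my-app:
    image: rb-my-app
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://activiti-postgres:5432/activiti
      SPRING_RABBITMQ_HOST: rabbitmq
```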

Git Book

We are constantly working on our new Git Book to explain how all these projects and components work together. We understand that Cloud Environments are much more complicated than using a Java Framework, and the amount of technology that we will be using in the future might sound overwhelming. For that reason, our Git Book tries to tackle the frequently asked questions about our architectural decisions and technology choices. This Git Book is going to be a common place for several pieces of documentation related to all our frameworks, services and cloud starters, and how all these pieces fit together.

You can access our Git Book here:

Examples & Cloud Examples

We are actively working on examples for our projects, which you can find here:

We welcome community feedback and help to improve these examples, which are probably the best way to get involved in the project and learn about the new features.


Next Steps

There will be some more repository refactoring during the next month to make sure that each project is independent of the others and can evolve separately. We are waiting to review and merge some pull requests before moving these projects to separate repositories. These refactorings will not affect your projects if you start using the BOMs introduced earlier.

New Cloud Starters will be created and new Docker Images will be published related to:

  • Notification Service
  • Application Service
  • Repository Service 



Stay tuned!

Questions, Feedback, Comments?? Find us in Gitter:

Last week (18/9/17 - 24/9/17) we spent most of our time hardening our examples and our release process. @ryandawsonuk spent most of his time fighting with scripts to make releases as simple and flexible as possible. @erdemedeiros reviewed and merged @daisuke-yoshimoto's Pull Request about supporting Mongo DB as an alternative Audit Service Module.


I’ve (@salaboy) spent some time reviewing @igdianov's Pull Request for supporting extra GraphQL endpoints in our Query Service Module. This is a great addition and we are still reviewing the best way to add this functionality as an optional dependency to our Query Services. I will be interested in hearing how our community feels about GraphQL. After some tests we can clearly see how UI components can leverage all the power of GraphQL endpoints.


This week we will be hardening our Kubernetes deployments while we finish some bits that we want to include in our first Maven Central release. I encourage the community to start looking at our GitBook which is slowly shaping up.

We will be also doing some planning for next month based on our progress, feel free to jump into our Gitter channel if you are interested in learning more about our plans.

Last week (11/9/17 - 18/9/17) we (@ryandawsonuk and I) made huge progress validating our new cloud architecture with our partners from Pivotal. We attended Pivotal Partner’s day and discussed with their engineers our plans to run Activiti Cloud on top of Kubernetes and leverage their APIs and integrations. As a result, we will start testing with Kubo, their flavour of Kubernetes which provides integration with their existing CloudFoundry infrastructure. This opens the doors to a myriad of possible integrations with existing apps and their entire marketplace. You will see more about this soon. In the meantime you can give our Kubernetes Example a try here.


@daisuke-yoshimoto reviewed his PR for Mongo DB and @gmalanga is updating his PR based on @erdemedeiros feedback.


At the same time, @igdianov is working on finishing our first approach to GraphQL, so we are looking forward to his PR this week.


We are working hard to get our release process sorted out to make sure that you can consume all of our artifacts from Maven Central. This required a huge effort in setting up several builds and CI pipelines to coordinate all the new repositories and artifacts; you should expect a blog post about this during the week as soon as we finalize some extra details.


We are also structuring our GitBook to make sure that we cover the most important topics about Activiti and Activiti Cloud. We will appreciate your feedback, comments and suggestions about which topics should be covered in such book.

For the last two weeks, we have spent most of our time making sure that we can deploy Activiti Cloud components to Kubernetes. We’ve now shown how we can deploy Activiti cloud in the following environments:

  • Docker Compose (Development)
  • MiniKube (Development Environment - but a little bit more real)
  • Kubernetes (hosted in AWS)

The results can be found inside our cloud examples repository, where you will find docker compose and kubernetes descriptors to start these environments (docker / kubernetes directories). The most interesting bit of this deployment is that it is composed of separate descriptors: one for the “Infrastructure” and another for each “Runtime Bundle” that we want to deploy/publish. Cloud Connector deployments will come next.


This means that in contrast with “normal” kubernetes deployments which are “static”, this deployment needs to deal with new Activiti Cloud Applications (Runtime Bundles, Query, Audit and Cloud Connectors) that will be deployed on demand based on business requirements.

If you remember, from my previous blog post about Activiti Cloud, we have the following components in our environment:

From this diagram, we can see the static components (that can be scaled using a load balancer) and the dynamic Activiti Cloud Applications.

Activiti Cloud Applications and their components will depend on the infrastructure to be running.

For each of these components we have separate Docker Images and some of them might use separate storage (requiring more docker containers for those DBs).

Based on the previous diagram here is the list of our current set of containers, already published for the infrastructure bits in Docker Hub:

  • Activiti-Cloud-Gateway (Zuul implementation)
  • Activiti-Cloud-SSO-IDM (Keycloak implementation)
  • Activiti-Cloud-Registry (Eureka implementation)
  • RabbitMQ  and PostgreSQL are also started as part of the infrastructure

It is important to note that even if the deployment descriptor is “static”, each of these components can be scaled independently.

This infrastructure will enable our Activiti Cloud applications to use common services such as Security and Single Sign On, the Service Registry and the message brokers to produce and consume messages.

Inside this infrastructure we enable our Domain Experts to deploy one or more Activiti Cloud Applications that will contain their business processes. This approach enables the interested parties to have a high level of separation between applications and a self healing network (provided by Kubernetes) that will fix problems when they happen (and they always happen).

Activiti Cloud Applications running in this “static” infrastructure are considered dynamic because:

  • Each application will be different
  • We cannot estimate when these applications are going to be needed
  • We cannot estimate in advance the load of each application
  • We must be able to scale each application separately
  • Each application can have different configurations

The services provided for Activiti Cloud Applications are:

  • Activiti-Cloud-Query (JPA Implementation)
  • Activiti-Cloud-Audit (JPA Implementation)
  • Activiti-Cloud-Notifications (under development)
  • Activiti-Cloud-Runtime-Bundle (Base Image for your runtime bundles)
  • Activiti-Cloud-Connector (Base Image for your cloud connectors)

Activiti Cloud Applications will use the available infrastructure services to handle Service Lookups, Identity lookups, Single Sign On, and the Message Brokers to emit and receive events from other components.

As mentioned before, each runtime bundle can be configured against different databases or share the same instance if needed. For each application, meaning multiple runtime bundles, you can allocate a set of services that will aggregate information about execution and interaction of these runtime bundles. Some examples of these services are:

  • Query Service: you might want to aggregate all the tasks and process instances from all the Runtime Bundles running inside an application.
  • Audit Service: you might want to aggregate all the audit information of all the runtime bundles
  • Notification Service (under development): you might want to push notifications to all your users related to the changes of state inside your runtime bundles.

Getting Started

The first decision that you will need to make is to choose between running all services with:

  • Docker Compose
  • Kubernetes

The main difference between these two options is the degree of similarity between your local setup and a real production environment. I would recommend Minikube if you are planning to run in Kubernetes hosted by a cloud provider. Minikube runs inside a VM, so the applications run inside a different instance of an operating system.

The Docker Compose approach is probably faster, due to the fact that it doesn’t require a VM, but you will need to be careful with configurations. Remember that if you are planning to run your applications in a clustered environment, each of the services might run on a different node.

No matter which option you choose, from a high-level perspective these are the steps that you will need to perform:

  1. Start Infrastructure
  2. New runtime bundle
    1. Configure it
    2. Build it
    3. Deploy it
    4. Repeat 2 for a new Runtime Bundle

Docker Compose

If you choose the Docker Compose approach, you need to install Docker for your OS, and Docker Compose will be ready to use in your terminal.

Then you will need to clone the repository, or copy the contents of the docker/infrastructure.yml file onto your machine, and run:

> docker-compose -f infrastructure-docker.yml up -d

Before you can start interacting with your services you need to add an entry to your "/etc/hosts" file so the SSO component can sign the verification tokens using the same internal name as the services which are behind the gateway. We have learnt a lot about Zuul (Gateway), Keycloak and how to do SSO with microservices, so expect a blog post about that shortly.

sudo vi /etc/hosts
# add for sso       activiti-cloud-sso-idm

This will start up all the infrastructure services + Audit and Query so you can focus on creating your runtime bundles.

Once you have all your infrastructure services started you can create a new Runtime Bundle Docker image with your business processes and required resources by using the project:

This project doesn’t require any Java or Maven; it uses our Activiti-Cloud-Runtime-Bundle Docker image as a base and just includes the business processes placed inside the /processes/ directory. In order to build your Runtime Bundle you build it as you would any Docker image, by running:

docker build -t rb-my-app .

Run this inside the docker-runtime-bundle directory, which contains the Dockerfile specification to build the image.
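That Dockerfile essentially boils down to something like the following sketch (the base image name/tag and the destination path are assumptions; check the repository for the real file):

```dockerfile
# Illustrative sketch -- image name/tag and destination path are assumptions.
FROM activiti/activiti-cloud-runtime-bundle
# Your BPMN process definitions end up where the engine picks them up
COPY processes/ /opt/processes/
```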

If you require more Java magic and customizations, you can use: which is a simple project to create runtime bundle Docker Images by using the Fabric8 Docker Maven Plugin and our Activiti-Cloud-Runtime-Starter project. You can build the docker image by running:

mvn clean install

The obvious advantage of using the Java/Maven approach is that you can include unit tests for your processes and customizations.

You need to make sure to tag your docker image accordingly ( or when building the docker image manually with -t ) so that you can then reference it in the rb-docker-compose.yml ->

Once you have your Runtime Bundle Docker Image ready you should be able to start it with:

docker-compose -f rb-docker-compose.yml up -d


At this point you have the infrastructure and one runtime bundle ready to be used.

To shut everything down run:

docker-compose -f rb-docker-compose.yml down

docker-compose -f infrastructure-docker.yml down

MiniKube / Kubernetes

With Kubernetes we will achieve a more realistic environment to test our services. By using Minikube we will achieve a runtime environment similar to docker compose, with the big difference that kubernetes will take care of self-healing our services in case of failures.

In order to get minikube set up and running you need to first install minikube and start the cluster with:

minikube start --memory 8000

You can change the memory allocation if you want to; Activiti Cloud with the current services uses around 3GB to bootstrap all the services.

You can access the MiniKube dashboard by running:

minikube dashboard

It should look like this:

And that will give you the MiniKube entry point IP address that you need to add to your /etc/hosts file:

sudo vi /etc/hosts
# add for sso       activiti-cloud-sso-idm    activiti-cloud-sso-idm-kub

In my case the minikube IP address is:, but you should replace it with yours.

Once the cluster is ready you can start deploying services to it. You can do that by going to the /kubernetes/ directory inside the activiti-cloud-examples repository and running:

kubectl create -f infrastructure.yml

Look at the Docker Compose section to read more about how to build your Runtime Bundle Docker images, and when you have those ready you can do:

kubectl create -f runtime-bundle.yml

After running these commands you should see something like this in your Kubernetes Dashboard

Take a look at the runtime-bundle.yml ( ) file for customizations regarding your Runtime Bundle image name and how to configure a database for it. Notice that we are creating a single Pod with both the Runtime Bundle + PostgreSQL, but this is not a restriction; you can change your deployments to suit your needs.
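A heavily stripped-down sketch of what such a descriptor contains, just to show the single-Pod arrangement (names, image tags and the API version are illustrative, the real file is in the repository):

```yaml
# Illustrative sketch of runtime-bundle.yml -- names and tags are assumptions.
apiVersion: extensions/v1beta1   # Deployment API group in use at the time
kind: Deployment
metadata:
  name: rb-my-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: rb-my-app
    spec:
      containers:
        - name: rb-my-app
          image: rb-my-app       # your Runtime Bundle image
        - name: postgres         # same Pod as the Runtime Bundle
          image: postgres:9.6
```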

A couple of caveats regarding this deployment:

  • In minikube you can’t create a service of type LoadBalancer; for that reason we are using NodePort. You will need to change this to LoadBalancer in two services (entrypoint and activiti-cloud-sso-idm-kub) for a real Kubernetes environment. You will find comments explaining this in the infrastructure.yml deployments.
  • In minikube we need to use the NodePorts specified in infrastructure.yml in order to interact with the Gateway and SSO services. In a real Kubernetes environment you will need to use the ports specified in the Service instead.
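The LoadBalancer/NodePort change amounts to the type field of the Service definition, roughly as follows (ports and selector are illustrative, see the comments in infrastructure.yml for the real values):

```yaml
# Illustrative sketch -- ports and selector are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: entrypoint
spec:
  type: NodePort        # change to LoadBalancer on a real Kubernetes cluster
  ports:
    - port: 80
      nodePort: 30080   # only meaningful for NodePort / minikube
  selector:
    app: entrypoint
```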

Now that you have everything up and running, you need to know how to shut it down:

kubectl delete -f runtime-bundle.yml

kubectl delete -f infrastructure.yml

minikube stop

Is it working?

How do you test all these services? You have two options for now, with more coming. You can use the (Chrome) Postman Collection to hit the REST endpoints of these services and check that components such as Query and Audit are receiving events from our process executions, or you can use our Demo Client App (using Angular JS), which performs the security web flow, redirecting you to the Keycloak UI for Single Sign On to all of our services. This simple application allows you to test the different endpoints and check that everything is working.

(Chrome) Postman Collection

Install the Postman Chrome plugin, then download the collection located here (if you cloned the GitHub repo for the cloud examples you already have it in your local environment):

Import it into Postman and then you can execute different requests against these services, mostly going through the API Gateway.

Depending on your environment setup you might need to change the IP addresses of the URLs in each request:

Notice that to perform any request you first need to get a token, which will be added to all the following requests that you make against the services. The token is set to expire after 3 minutes, meaning that you might need to request a new one if you wait longer than that between interactions.

Once you have the token you can interact with any of the other services that you have deployed.

Notice also that, based on the name of your runtime bundle, the URL for such requests might change. The API Gateway uses the name of your Runtime Bundle App to register a new route to it when it is deployed, so you will need to adapt the URLs accordingly.
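Under the hood, the token request the Postman collection makes is a standard Keycloak password-grant call. The sketch below only builds and prints the token URL; the host, realm, client and credentials are assumptions taken from the demo defaults, so adjust them to match your collection:

```shell
# Illustrative only -- host, realm, client and credentials are assumptions.
SSO_HOST="activiti-cloud-sso-idm"
REALM="springboot"
TOKEN_URL="http://${SSO_HOST}/auth/realms/${REALM}/protocol/openid-connect/token"

# The actual request, once the infrastructure is up, would be:
#   curl -s -d 'client_id=activiti' -d 'grant_type=password' \
#        -d 'username=testuser' -d 'password=password' "$TOKEN_URL"
# The JSON response carries an access_token (valid for ~3 minutes) that you
# pass as "Authorization: Bearer <token>" on the following requests.

echo "$TOKEN_URL"
```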

Demo UI Client (Angular JS)

The other alternative, which looks more like building a client app, allows you to run an existing app that uses the Keycloak.js library to secure its REST calls and deal with requesting the token.

You can find this app in the demo-ui-client ( ) directory of the activiti-cloud-examples repository.

The README file in that directory guides you through the steps to execute the application.

Once you have started the server you can access:


In your browser and you should see something like this:

This is Keycloak kicking in, asking you for your credentials. You can use testuser/password to login and proceed to the application:

Again, here you will need to replace the IP with your environment's IP depending on which setup you are testing. If you are running with Docker Compose (default URL on localhost) you can go ahead and execute some requests.

This should give you a high level idea about how to interact with these services and how to deploy new Runtime Bundles when you need them.

Sum up

In this very long blog post we have covered how to set up and run the Activiti Cloud Infrastructure and how to get one domain-specific Runtime Bundle up and running. We build on the assumption that the infrastructure provides a common set of services that each of the Activiti Cloud Applications will have available to use. We are also writing a GitBook to make sure that we document these components and these tutorials in a comprehensive way. We are just starting, so we will appreciate any feedback or help that you might want to send our way.