All Places > Alfresco Process Services & Activiti (BPM) > Blog

Last week we started a major clean-up of our repositories to better decouple each project's responsibilities. This will enable us to build and version each module individually, allowing faster development cycles. This week we will be setting up these pipelines to test the whole process, for our libraries first, followed by our service layer. We are still looking for Beta testers; if you want to get involved, we recommend trying the following two tutorials and getting in touch via Gitter if you want to jump into more advanced topics:

@mteodori - holidays -

@lucianoprea - holidays -

@constantin-ciobotaru - holidays - 

@balsarori worked on the initial design of the Forms Router components.

@igdianov worked on pipelines for Activiti Cloud Applications.

@cristina-sirbu continued working on the Activiti Cloud Events Adapter Service. Changed the communication with RabbitMq (using Spring Cloud Stream). A Helm chart for this is in progress. Also explored the new Activiti Full Example app.


@daisuke-yoshimoto is working on applying the new audit service API interface to

@almerico created a presentation cluster based on the new Activiti Cloud version.

@miguelruizdev worked on the ttc workshop, trying it out against the Beta1 version and identifying the changes that need to be implemented for it to work again.

@ryandawsonuk set up acceptance tests to run in a new cluster against a JX-deployed example as part of a review of CI/CD practices. Submitted PRs to Jenkins X draft packs to help keep Activiti charts close to JX default practices. Helped Elias with package name refactoring to refine the API clarity/readability.

@erdemedeiros worked with @Salaboy and @ryandawsonuk to get the Activiti API and Activiti Cloud API extracted to dedicated repositories. Maven modules have been renamed and the package structure has changed.

@salaboy worked on the next round of refactoring with @erdemedeiros and @ryandawsonuk: we aligned package names, Maven module names and repositories to follow the same conventions. This will make the project easier to document and maintain in the long run. These changes also help us move towards Continuous Delivery, and we will keep experimenting with Jenkins X to build each repository.

Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using so they can submit their first Pull Request ;)

Last week we worked on the release of Activiti Core and Activiti Cloud Beta1. We wrote some Getting Started guides and tested the Maven artifacts, example services and deployment descriptors quite extensively. We got the Beta1 release deployed in PKS and also verified that the released example services work in a Minikube deployment using the same Activiti Cloud Full Example HELM charts. Congrats to everyone involved in the release, and thanks a lot to all of you who provided early feedback about this release.

This week we already started some large refactorings of source code organization, and we are improving our working practices to fully switch to Continuous Deployment. We aim to do a Beta2 release quite soon, and we will update the Gitbook and the Roadmap accordingly this week.

@almerico upgraded our Google cluster according to the latest release.

@miguelruizdev worked on the overall testing of the third API refactoring, fixing tests in the acceptance test security module, reporting issues on faulty endpoints for future fixes beyond Beta1, and, along with @erdemedeiros, created new clusters within the company organization in GKE using Jenkins X.

@balsarori took part in the initial Form Router discussions.

@mteodori - holidays - 

@cristina-sirbu worked on the Activiti Cloud Events Adapter Service (a Spring Boot application which gets events from RabbitMq, enriches them into a new format and pushes them to an ActiveMq instance). Communication between RabbitMq and ActiveMq is done; enrichment of the events is still in progress.

@constantin-ciobotaru worked on adding import/export endpoints for applications/models in modeling.

@ryandawsonuk added minikube instructions to the activiti-cloud-full-example chart. Updated the audit-service module to follow the query-service module in using the newly-refactored versions of the services for security and identity. Added tests for an example cloud connector to the serenity-based BDD acceptance tests and updated the tests so that the security-policies tests work together with the deployment conditions for runtime tests. Updated the release scripts for building and tagging the example-level repos using their new structure.

@erdemedeiros worked with @Salaboy to get all the API-related PRs ready to merge. Set up a GKE cluster using Jenkins X to test the latest changes in the APIs.

@salaboy worked on the release process and creating tutorials and examples.

Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using so they can submit their first Pull Request ;)

I am happy to announce that after more than a year of hard work and particularly intense work over the last 4 months we are ready to release the first Beta version of all the Java Artifacts. You can consume them all from Maven Central.

During this release, we focused on providing the first iteration of Activiti Cloud, and from now on we will harden each service incrementally. We are happy with the foundation, and now it is time to test and polish the next generation of Activiti. We are eager to get early feedback, so please get in touch via Gitter if you find any problem or have questions about the released artifacts.

What is included in Activiti Core and Activiti Cloud Beta1


Beta1 is focused on providing the foundation for both our Java (embedded) and Cloud Native implementations.

This release is divided into two big categories: Activiti Core and Activiti Cloud Building Blocks.

Activiti Core

On the Core, we have provided a new set of APIs for the Task and Process Runtime that enable a simple migration from embedded to the Cloud Approach. We believe that this is a fundamental change in order to provide long-term API support focused on Runtimes and following the Command Pattern. We also added security, identity and security policy abstractions to make sure that you can quickly integrate with different implementations when you embed the process & task runtime into your Spring Boot applications.

Activiti Core is now based on top of Spring Boot 2. On the Core, you should expect further integrations with Spring Boot 2. There are plans to adopt all the reactive capabilities included in Spring 5. A big part of refactoring the Core layer was about simplifying the runtimes to make sure that they don’t clash with other frameworks’ responsibilities.

The new set of Runtime APIs was conceived to promote services that are stateless and immutable, which allows us to write testing, verification and conformance tools to guarantee that your business processes are safe and sound to run in production environments. These Runtime APIs take on new responsibilities and don’t deprecate the Activiti 6.x APIs. These new Runtime APIs are replicated at the service level (REST and message-driven endpoints) in the Activiti Cloud layers.

If you want to get started with Activiti Core 7.0.0.Beta1, take a look at the following tutorials, which highlight the ProcessRuntime and TaskRuntime API usage in Spring Boot 2 applications.

Getting Started with Activiti Core Beta1

Activiti Cloud Building Blocks

On the Activiti Cloud front, we are providing the initial Beta1 versions of our foundational building blocks:

  • Activiti Cloud Runtime Bundle
  • Activiti Cloud Query
  • Activiti Cloud Audit
  • Activiti Cloud Connectors

These building blocks were designed with a Cloud Native architecture in mind, and all of them are built on top of Spring Cloud Finchley. Together, these components form the foundation for a new breed of Activiti Cloud Applications which can be distributed and scaled independently. All these building blocks can be used as independent Spring Boot applications, but for large-scale deployments these components have been designed and tested using Docker containers and Kubernetes as the main target platform. We have been using Jenkins X to build components that need to be managed, upgraded and monitored in very dynamic environments.


All these building blocks understand the environment in which they are running, so they are enabled to work with components such as SSO/IDM, Service Registry, Data Streams, Configuration Server, and Gateways. By being aware of the environment, our applications can be managed, configured and scaled independently.

It is important to understand that we have also included some experimental services in this release, including:

  • Application Service
  • Modelling backend services
  • GraphQL support for the Query Service / Notification Service
  • An alternative Mongo DB implementation for Audit Service


If you want to start with the Cloud Native approach we recommend looking at the Activiti Cloud HELM charts for Kubernetes:


These example HELM Charts can be used to get all the services up and running.


Check the following blog post if you want to get started with Activiti Cloud in GKE.

Getting Started with Activiti Cloud Beta1

What is coming next?

In this release, we didn’t include any UI bits; we focused on core services to build Cloud Native applications. The following Beta2 and RC releases will include new backend services to support the new Activiti BPMN2 Modeler application. This new set of modeling services and the new Activiti Modeler application are currently being designed to enable our Cloud deployments by providing a simplified packaging mechanism for our Activiti Cloud Applications.


In future releases we will be hardening our building blocks, improving documentation and examples, and adding new services to simplify implementations. We are actively looking at tools such as Istio and KNative to provide out-of-the-box integration with other services and tools that rely on such technologies.


We are also contributing and collaborating with the Spring Cloud community to make sure that our components are easy to use and adopt rather than clash with any of the other Spring Cloud infrastructure components. This also involves testing our core building blocks in Pivotal Container Service (PKS), which is based on Kubernetes.


We want to provide a continuous stream of community innovation and for that reason, we will change how the Core and Cloud artefacts are versioned and their release lifecycles. We are moving to a Continuous Delivery approach for each of the libraries that we build following the practices mentioned in the Accelerate book. We will, of course, update the structure of our projects but the services and APIs will remain the same.


You should expect some naming changes between Beta and RC artifacts. We are still polishing new service names and configurations. This will all be documented in the release notes for each release. You can always keep an eye on our public Roadmap document (which will be updated in the coming days) for the future Beta, RC and GA releases.


As always, this is a great moment to join the community and provide feedback. If you are looking into building Cloud Native applications, we are more than happy to share and get feedback from people going through the same journey. Feel free to join our daily open standups or ping us in our Gitterchannel.

We have just released Activiti Core and Activiti Cloud 7.0.0.Beta1 to Maven Central and we wanted to highlight the new Process and Task Runtime APIs.

The new APIs were created with a clear purpose to address the following requirements:

  • Provide a clear path to our Cloud approach
  • Isolate internal and external APIs to provide backward compatibility moving forward
  • Provide a future path to modularity by following the single responsibility approach
  • Reduce the clutter of the previous version of APIs
  • Include Security and Identity Management as first-class citizens
  • Reduce time to value for common use cases, where you want to rely on the conventions provided by popular frameworks
  • Provide alternative implementations of the underlying services
  • Enable the community to innovate while respecting well-established contracts

We haven’t deprecated the old API, so you are still free to use it, but we strongly recommend the usage of the new API for long-term support.

This API is in beta review, meaning that we might change and polish it before the GA release. We will appreciate all the feedback we can get from our community users, and if you want to get involved with the project, please get in touch.

Time to get our hands dirty with a couple of example projects.

TaskRuntime API

If you are building business applications, creating Tasks for users and groups in your organisation is something that you might find handy.

The TaskRuntime API is here to help you.

You can clone this example from GitHub:

The code from this section can be found inside the “activiti-api-basic-task-example” maven module.

If you are running inside a Spring Boot 2 application, you only need to add the activiti-spring-boot-starter dependency and a DB driver; you can use H2 for in-memory storage.

[sourcecode language="xml"]
<dependency>
  <groupId>org.activiti</groupId>
  <artifactId>activiti-spring-boot-starter</artifactId>
</dependency>
<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
</dependency>
[/sourcecode]

We recommend using our BOM (bill of materials):

[sourcecode language="xml"]
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.activiti</groupId>
      <artifactId>activiti-dependencies</artifactId>
      <version>7.0.0.Beta1</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>
[/sourcecode]

Now let’s switch to our DemoApplication class:

Then you will be able to use the TaskRuntime:


@Autowired
private TaskRuntime taskRuntime;


Once you have the bean injected into your app you should be able to create tasks and interact with tasks.


public interface TaskRuntime {
  TaskRuntimeConfiguration configuration();
  Task task(String taskId);
  Page<Task> tasks(Pageable pageable);
  Page<Task> tasks(Pageable pageable, GetTasksPayload payload);
  Task create(CreateTaskPayload payload);
  Task claim(ClaimTaskPayload payload);
  Task release(ReleaseTaskPayload payload);
  Task complete(CompleteTaskPayload payload);
  Task update(UpdateTaskPayload payload);
  Task delete(DeleteTaskPayload payload);
}


For example, you can create a task by doing:


taskRuntime.create(TaskPayloadBuilder.create()
    .withName("First Team Task")
    .withDescription("This is something really important")
    .withGroup("activitiTeam")
    .build());

This task will be visible only to users belonging to the activitiTeam group and to the owner (the currently logged-in user).

As you might have noticed, you can use TaskPayloadBuilder to parameterize the information that is going to be sent to the TaskRuntime in a fluent way.
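As a rough illustration of this fluent style, here is a toy sketch of how such a payload builder can be structured. All names below are illustrative only, not the real Activiti classes: the actual TaskPayloadBuilder has a different, richer API.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the fluent-builder pattern used by TaskPayloadBuilder.
// Illustrative names only; the real Activiti API differs.
class CreateTaskPayload {
    final String name;
    final String description;
    final List<String> candidateGroups;

    CreateTaskPayload(String name, String description, List<String> candidateGroups) {
        this.name = name;
        this.description = description;
        this.candidateGroups = candidateGroups;
    }
}

class CreateTaskPayloadBuilder {
    private String name;
    private String description;
    private final List<String> groups = new ArrayList<>();

    // each 'with*' method returns 'this', which is what makes the API fluent
    CreateTaskPayloadBuilder withName(String name) { this.name = name; return this; }
    CreateTaskPayloadBuilder withDescription(String description) { this.description = description; return this; }
    CreateTaskPayloadBuilder withGroup(String group) { groups.add(group); return this; }

    // build() produces the immutable payload object handed to the runtime
    CreateTaskPayload build() { return new CreateTaskPayload(name, description, groups); }
}
```

The key design point is that the builder collects parameters step by step and only materializes an immutable payload object at `build()` time, which the runtime can then process safely.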

In order to deal with security, roles and groups, we rely on the Spring Security modules. Because we are inside a Spring Boot application, we can use the UserDetailsService to configure the available users and their respective groups and roles. We are currently doing this inside a @Configuration class:

Something important to notice here is that in order to interact with the TaskRuntime API as a user, you need to have the role ACTIVITI_USER (Granted Authority: ROLE_ACTIVITI_USER).
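The role/authority distinction comes from Spring Security's convention of prefixing role names with "ROLE_" when they are stored as granted authorities. A minimal sketch of that mapping, assuming the default prefix has not been customized:

```java
// Sketch of Spring Security's default role-name convention: the role
// "ACTIVITI_USER" is stored as the granted authority "ROLE_ACTIVITI_USER".
// This helper is illustrative; Spring Security applies the prefix internally.
class Roles {
    static String toAuthority(String role) {
        // avoid double-prefixing if the caller already passed an authority
        return role.startsWith("ROLE_") ? role : "ROLE_" + role;
    }
}
```

This is why `hasRole("ACTIVITI_USER")` checks and the authority string `ROLE_ACTIVITI_USER` refer to the same thing.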

While interacting with the REST endpoints, the authorization mechanism will set the currently logged-in user, but for the sake of the example we are using a utility class (SecurityUtil) that allows us to set a manually selected user in the context. Note that you should never do this unless you are trying things out and want to change users without going through a REST endpoint. Look into the “web” examples to see more realistic scenarios where this utility class is not needed at all.

One last thing to highlight from the example is the registration of Task Event Listeners:

@Bean
public TaskRuntimeEventListener taskAssignedListener() {
  // 'logger' is assumed to be the surrounding class logger
  return taskAssigned -> logger.info(">>> Task Assigned: '" +
                         taskAssigned.getEntity().getName() +
                         "' We can send a notification to the assignee: " +
                         taskAssigned.getEntity().getAssignee());
}
You can register as many TaskRuntimeEventListeners as you want. This will enable your application to be notified when runtime events are triggered by the services.
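The mechanics behind this are simple: the runtime keeps a list of registered listeners and invokes each one when an event fires. The toy sketch below shows the idea with plain Java; the class names are illustrative, not the real TaskRuntimeEventListener API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy event with the minimum the listeners in this post care about.
class TaskAssignedEvent {
    final String taskName;
    TaskAssignedEvent(String taskName) { this.taskName = taskName; }
}

// Toy registry sketching how a runtime can notify many listeners.
// Illustrative only; Activiti wires @Bean listeners up automatically.
class TaskEventBus {
    private final List<Consumer<TaskAssignedEvent>> listeners = new ArrayList<>();

    void register(Consumer<TaskAssignedEvent> listener) {
        listeners.add(listener);
    }

    void fire(TaskAssignedEvent event) {
        // every registered listener is notified when an event is triggered
        listeners.forEach(listener -> listener.accept(event));
    }
}
```

In the Spring Boot case you don't call a register method yourself: declaring each listener as a `@Bean` is enough for the runtime to pick it up.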

ProcessRuntime API

In a similar fashion, if you want to start using the ProcessRuntime APIs, you need to include the same dependencies as before. We aim to provide more flexibility and separate runtimes in the future, but for now the same Spring Boot Starter provides both the TaskRuntime and the ProcessRuntime API.

The code from this section can be found inside the “activiti-api-basic-process-example” maven module.

public interface ProcessRuntime {
  ProcessRuntimeConfiguration configuration();
  ProcessDefinition processDefinition(String processDefinitionId);
  Page<ProcessDefinition> processDefinitions(Pageable pageable);
  Page<ProcessDefinition> processDefinitions(Pageable pageable, GetProcessDefinitionsPayload payload);
  ProcessInstance start(StartProcessPayload payload);
  Page<ProcessInstance> processInstances(Pageable pageable);
  Page<ProcessInstance> processInstances(Pageable pageable, GetProcessInstancesPayload payload);
  ProcessInstance processInstance(String processInstanceId);
  ProcessInstance suspend(SuspendProcessPayload payload);
  ProcessInstance resume(ResumeProcessPayload payload);
  ProcessInstance delete(DeleteProcessPayload payload);
  void signal(SignalPayload payload);
}

Similarly to the TaskRuntime API, in order to interact with the ProcessRuntime API the currently logged-in user is required to have the role “ACTIVITI_USER”.

First things first, let’s autowire our ProcessRuntime:

[sourcecode language="java"]
@Autowired
private ProcessRuntime processRuntime;

@Autowired
private SecurityUtil securityUtil;
[/sourcecode]

As before, we need our SecurityUtil helper to define on behalf of which user we are interacting with our APIs.

Now we can start interacting with the ProcessRuntime:

[sourcecode language="java"]
Page<ProcessDefinition> processDefinitionPage = processRuntime
        .processDefinitions(Pageable.of(0, 10));
logger.info("> Available Process definitions: " + processDefinitionPage.getTotalItems());
for (ProcessDefinition pd : processDefinitionPage.getContent()) {
    logger.info("\t > Process definition: " + pd);
}
[/sourcecode]

Process definitions need to be placed inside /src/main/resources/processes/. For this example we have the following process defined:


We are using Spring Scheduling capabilities to start a process every second, picking up random values to process from an array:

@Scheduled(initialDelay = 1000, fixedDelay = 1000)
public void processText() {
    String content = pickRandomString();
    SimpleDateFormat formatter = new SimpleDateFormat("dd-MM-yy HH:mm:ss");
    logger.info("> Processing content: " + content + " at "
            + formatter.format(new Date()));
    ProcessInstance processInstance = processRuntime
            .start(ProcessPayloadBuilder
                    .start()
                    .withProcessDefinitionKey("categorizeProcess") // use the id of your BPMN process definition
                    .withProcessInstanceName("Processing Content: " + content)
                    .withVariable("content", content)
                    .build());
    logger.info(">>> Created Process Instance: " + processInstance);
}

As before, we are using the ProcessPayloadBuilder to parameterize, in a fluent way, which process we want to start and with which process variables.

Now if we look back at the process definition, you will find 3 Service Tasks. In order to provide an implementation for these service tasks you need to define Connectors:

[sourcecode language="java"]
@Bean
public Connector processTextConnector() {
    return integrationContext -> {
        Map<String, Object> inBoundVariables = integrationContext.getInBoundVariables();
        String contentToProcess = (String) inBoundVariables.get("content");
        // Logic here to decide whether the content is approved or not
        if (contentToProcess.contains("activiti")) {
            logger.info("> Approving content: " + contentToProcess);
            integrationContext.addOutBoundVariable("approved", true);
        } else {
            logger.info("> Discarding content: " + contentToProcess);
            integrationContext.addOutBoundVariable("approved", false);
        }
        return integrationContext;
    };
}
[/sourcecode]

These connectors are wired up automatically to the ProcessRuntime using the bean name, in this example “processTextConnector”. This bean name is picked up from the implementation property of the serviceTask element inside our process definition:

<bpmn:serviceTask id="Task_1ylvdew" name="Process Content" implementation="processTextConnector">

This new Connector interface is the natural evolution of JavaDelegates, and the new version of Activiti Core will try to reuse your JavaDelegates by wrapping them inside a Connector implementation:

public interface Connector {
  IntegrationContext execute(IntegrationContext integrationContext);
}
Connectors receive an IntegrationContext with the process variables and return a modified IntegrationContext with the results that need to be mapped back to process variables.

In the previous example, the connector implementation receives a “content” variable and adds an “approved” variable based on the content-processing logic.
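Stripped of the framework types, that variable mapping is just a function from inbound variables to outbound variables. A plain-Java sketch of the same logic (the real Connector works on an IntegrationContext rather than raw maps):

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of the connector's variable mapping: read the inbound
// "content" variable, produce an outbound "approved" variable. Illustrative
// only; the real Activiti Connector operates on an IntegrationContext.
class ContentApproval {
    static Map<String, Object> execute(Map<String, Object> inBoundVariables) {
        String contentToProcess = (String) inBoundVariables.get("content");
        Map<String, Object> outBoundVariables = new HashMap<>();
        // same approval rule as the connector bean above
        outBoundVariables.put("approved",
                contentToProcess != null && contentToProcess.contains("activiti"));
        return outBoundVariables;
    }
}
```

Keeping connectors as pure input/output mappings like this is what later makes them easy to extract into separately deployed Cloud Connectors.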

Inside these connectors you are likely to include system-to-system calls, such as REST calls and message-based interactions. These interactions tend to become more and more complex, and for that reason we will see in future tutorials how these connectors can be extracted to run outside of the context of the ProcessRuntime (Cloud Connectors), decoupling the responsibility for such external interactions from the ProcessRuntime scope.

Check the Maven module activiti-api-spring-integration-example for a more advanced example using Spring Integration to kick-start processes based on a file poller.

Full Example

You can find an example using both the ProcessRuntime and TaskRuntime APIs to automate the following process:


The code from this section can be found inside the “activiti-api-basic-full-example” maven module.

As in the ProcessRuntime-only example, this one also categorizes input content, but in this case the process relies on a human actor to decide whether to approve the content or not. We have a scheduled task as before that creates new process instances every 5 seconds, and a simulated user checking whether there are available tasks to work on.


The UserTask is assigned to a group of potentialOwners, in this example the “activitiTeam” group. But in this case we are not creating the task manually as in the first example: the process instance creates the task for us every time a process is started.


<bpmn:userTask id="Task_1ylvdew" name="Process Content"> 


Users belonging to this group will be able to claim and work on the task.

We encourage you to run these examples, experiment with them, and get in touch if you have questions or find issues.


In this blog post we have seen how to get started using the new ProcessRuntime and TaskRuntime APIs from the new Activiti Core Beta1 project.

We recommend checking the Activiti Examples repository for more examples:

Helping us write more of these examples might be a very good initial community contribution. Get in touch if you are interested; we are more than happy to guide you.

As always, feel free to get in touch with us via Gitter if you have questions or feedback about these examples and tutorials.

More blog posts are coming to introduce the Runtime Admin APIs and show how these examples can be adapted to run in our new Activiti Cloud approach.

We have just released Activiti Core and Activiti Cloud 7.0.0.Beta1 to Maven Central and we wanted to highlight the new Cloud Native capabilities of Activiti Cloud. For those who haven’t followed what we have been doing over the last year, Activiti Cloud is a set of Cloud Native components designed from the ground up to work in distributed environments. We have chosen Kubernetes as our main deployment infrastructure and we are using Spring Cloud/Spring Boot along with Docker for containerization of these components.

We have gone through a very valuable journey, meeting very passionate developers, communities, and existing and potential customers who are looking to leverage these technologies (and business automation solutions) to reduce time to market and improve business agility in the Cloud. We have also contributed to these communities, making sure that the Open Source projects we consume receive our feedback and contributions in return.

As part of the first Beta1 release, we are providing 4 foundational Building Blocks:

  • Activiti Cloud Runtime Bundle
  • Activiti Cloud Query
  • Activiti Cloud Audit
  • Activiti Cloud Connectors

These Building Blocks are Spring Boot Starters that can be attached to any Spring Boot (2.x) application. These Building Blocks are enhanced with Spring Cloud functionalities which provide the Cloud Native capabilities.

By using these components you can create Activiti Cloud applications that:

  • Can be scaled independently based on demand
  • Can be managed in completely isolated cycles
  • Can be upgraded and maintained independently
  • Can provide domain specific features using the right tool for the job

You can read more about Activiti Cloud in our Gitbook:

In this blog post, we wanted to show how to get started by deploying an example set of these building blocks to Kubernetes. We strongly recommend using a real Kubernetes cluster such as GKE, PKS or EKS. We have tested the content of this blog post in AWS (using Kops), PKS and GKE, and also with Jenkins X.

Let’s get our hands dirty with Kubernetes, HELM and Activiti Cloud.

Kubernetes Deployment & HELM Charts

The quickest and easiest way to deploy things to Kubernetes is by using HELM charts. HELM, as described by their own documentation, is: “a tool that streamlines installing and managing Kubernetes applications. Think of it like apt/yum/homebrew for Kubernetes.”

As part of the Beta1 release, we have created a set of hierarchical HELM charts that can be used to deploy several components, some related to infrastructure (such as SSO and Gateway) and some application-specific components such as Runtime Bundle, Audit Service, Query Service and a Cloud Connector.

These HELM charts can be found here:

In this blog post, we will be looking more specifically at:

This “Activiti Cloud Full Example” deploys the following components:


One important thing to notice is that each of the Activiti Cloud components can be used independently. This example is intended to show a large-scale deployment scenario. You can start small with a Runtime Bundle (which provides the process and task runtimes), but if you want to scale things up you need to know what you are aiming for, and this chart shows you exactly that.

Now, moving forward you will need to download and install the following tools:

And as mentioned before, having a real-life cluster is recommended.

The Google Cloud Platform offers a $300 free credit if you don’t have a Google Cloud account. See

If you choose GKE, you will also need to install the Google Cloud SDK CLI tool:

Creating and configuring the Cluster

Before we start, make sure that you clone the repository and go to the “activiti-cloud-full-example” directory; we will use some files from there.

If you are using GKE, you can create a cluster by going to your Google Cloud home page and selecting Kubernetes Engine:


Then create a new Cluster:


Enter the cluster name, select the zone based on your location, choose the machine type (I selected 2 vCPUs) and leave the size at the default value (3).


Once the cluster is created, click the Connect button on the right-hand side of the table:


This will open a popup showing how to connect to the cluster. Open a terminal and copy the command displayed in the popup.


Now you have your Cluster configured and ready to be used.

Note: if you are working with an existing cluster, check whether you already have an Ingress Controller installed; you can skip the following steps if that is the case.

Now let's configure HELM to work in the Cluster.

First, we need to give HELM permission to deploy things into the cluster. There are tons of articles on how to do this; in short, run the following commands in a terminal (you can copy/clone/download the helm-service-account-role.yaml file from the repository):

kubectl apply -f helm-service-account-role.yaml

helm init --service-account helm


One more thing we need to do in order to expose our services outside the cluster is to set up an Ingress Controller, which will automatically create routes to the internal services that we want to expose. To do this we just need to run the following command:

helm install stable/nginx-ingress


Now that the NGINX Ingress Controller is deployed, we need to wait for it to expose a public IP. We need this public IP to interact with our services from outside the cluster. You can find it by running the following command:

kubectl get services


Notice that you might need to run kubectl get services several times until you can see the External IP for your Ingress Controller. If you see PENDING, wait a few seconds and run the command again.

We will use nip.io as the DNS service to map our services to this External IP; host names will follow the format <name>.<EXTERNAL-IP>.nip.io.
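In other words, nip.io resolves any name of the form <prefix>.<ip>.nip.io straight to <ip>, so no DNS setup is needed. A minimal sketch of building such a host name (the service name and IP below are made-up examples, not values from this installation):

```java
// Sketch of the nip.io naming scheme: <prefix>.<external-ip>.nip.io resolves
// to <external-ip>. The inputs here are hypothetical examples.
class NipIo {
    static String host(String servicePrefix, String externalIp) {
        return servicePrefix + "." + externalIp + ".nip.io";
    }
}
```

So if your ingress External IP were 35.200.10.5, a gateway exposed under the prefix "gateway" would be reachable at gateway.35.200.10.5.nip.io.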

Deploying Activiti Cloud Full Example

Now that we have our Cluster in place, HELM installed and an Ingress Controller to access our services from outside the cluster we are ready to deploy the Activiti Cloud Full Example HELM Chart.

The first step is to register the Activiti Cloud HELM charts into HELM. We do this by running the following commands:

helm repo add activiti-cloud-charts

helm repo update

The next step is to parameterize your deployment for your cluster. The Activiti Cloud Full Example chart can be customized to turn different features on and off, but there is one mandatory parameter that needs to be provided: the external domain name that will be used by this installation.

In order to do this, you can copy or modify the values.yaml file. You need to replace the string “REPLACEME” with your <EXTERNAL-IP>.

For my cluster, that meant replacing every occurrence of “REPLACEME”.
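The substitution itself is a straight string replacement over the file contents. A small sketch of the idea (the YAML line and IP below are hypothetical examples; in practice you can equally do this with your editor's find-and-replace):

```java
// Sketch of the values.yaml customization: every "REPLACEME" placeholder is
// replaced with the ingress External IP. Inputs are illustrative examples.
class ValuesPatcher {
    static String patch(String yamlContent, String externalIp) {
        return yamlContent.replace("REPLACEME", externalIp);
    }
}
```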


Once you have made all these changes you are ready to deploy the chart by running the following command:

helm install -f values.yaml activiti-cloud-charts/activiti-cloud-full-example

This will trigger the deployment process; wait until the services are up and running. You can check progress by running:

kubectl get pods

Notice the READY column showing 1/1 for all the pods; that means we have 1 pod in Kubernetes running each service. It is also important to notice that HELM created a release of our chart. Because we haven’t specified a name for this release, HELM chose a random one; in my case it was bumptious-yak. This means that we can manage this release independently of other Activiti Cloud applications deployed using the same approach. You can run helm list and then helm delete <release-name> to remove all the Activiti Cloud services for that release.

In order to access your services, you can now run the following command:

kubectl get ingress


Interacting with your Application

We recommend downloading our Activiti Cloud collection for Postman from our Activiti Cloud Examples repository:

You can clone or download the files in that repository and then import the collection into Postman.

Before calling any service you will need to create a new Environment in Postman. You can do that by going to the Manage Environments icon (cog).

Then “Add” a new environment and give it a name. Now you need to configure the variables for the environment: “gateway”, “idm” and “realm”.

For gateway you need to copy the URL associated with your Ingress; the same for idm, which is SSO and IDM using Keycloak. For the realm, enter “activiti”:


Click Save or Update and then you are ready to start using that environment. Make sure that you select the environment in the dropdown on the right:


As shown in the previous screenshot, if you go to the keycloak directory and select “getKeycloakToken for testuser” you will get the token which will be used to authenticate further requests. Notice that this token is time-sensitive and will be invalidated automatically, so you might need to get it again if you start getting unauthorized errors.

Once you get the token for a user, you can interact with all the user endpoints. For example, you can create a request to see which Process Definitions are deployed inside our Example Runtime Bundle:


Now you can also start a new Process Instance from our SimpleProcess:


You can check that the audit service contains the events associated with the just-started process instance.


And that the query service already contains information about the process execution:


You are now ready to start consuming these services to automate your own business processes.

Finally, you can access the Swagger documentation of all services by pointing your browser to:


All our services are using SpringFox to generate this documentation and provide a UI for it.


In this blog post we have seen how to get started with Activiti Cloud, and more specifically with the Activiti Cloud HELM charts in GKE. If you are not familiar with Kubernetes, Docker and GKE this might look like a lot of new information, and it is our mission to simplify all the steps covered in this getting started guide. For that reason, we recommend you check out the Jenkins X project, which greatly simplifies the first two sections about creating clusters and configuring the basic infrastructure for your projects.

As part of the Activiti Cloud initiative we are making sure that we follow best practices coming from the Kubernetes, Docker and Spring Cloud communities and we are contributing back with fixes and feedback to make this technology stack the best for Cloud Native applications.

As always, feel free to get in touch with us via Gitter if you have questions or feedback about these examples and tutorials.

Last week we managed to finish and polish the second round of the API refactoring, and we attended a two-day AWS training for cloud deployments. We already started a third iteration around the Java Runtime APIs related to security, security policies and further refinements on usability. This week we will be moving this work forward to close the Beta1 release. We are also doing some testing on Pivotal PKS to validate that our HELM charts are supported across different cloud providers.


@igdianov worked on a PoC for Activiti Cloud

@almerico worked on running the latest runtime-bundle and Activiti Cloud infrastructure with the latest Jenkins X release.

@daisuke-yoshimoto worked on

@miguelruizdev worked on activiti-cloud-acceptance-tests, adding a new security policies module and refactoring default process definitions and users to specific ones. Attended AWS training.

@balsarori helped with some extensions to the engine APIs to support Owner queries for tasks.

@mteodori fixed Batik vulnerability

@constantin-ciobotaru - holidays -

@ryandawsonuk added a connector process to the example-runtime-bundle and a connector chart to activiti-cloud-charts, then tested the example connector image. Started working with @miguelruizdev to improve the acceptance-tests with some refactoring to support adding tests for security-policies and for the example connector scenario. Attended AWS + EKS training.

@erdemedeiros worked with @salaboy on the next iteration for the Java API. Added the Activiti Spring Boot starter to Activiti/Activiti. Attended AWS + EKS training.

@salaboy worked on the 3rd round of refactorings for the API, now focused on security and security policies. Attended AWS + EKS training.


Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using to submit their first Pull Request ;)

Last week we merged the initial refactoring of the Java Runtime APIs into our develop branches of our projects. We also merged updates on our example projects, quickstart and acceptance tests. The BluePrint TTC project is still due to be migrated to use the latest versions. Now, the Activiti Cloud HELM charts are consuming these images published in Docker Hub and we recommend people to try them out and get in touch via Gitter in case of issues. We will be working on stabilizing all surrounding projects for the Beta release and updating documentation accordingly.


@igdianov worked on improving connectors and async tasks for an internal PoC.

@almerico worked on cluster infrastructure configuration and Helm updates

@daisuke-yoshimoto worked on internal issues and the Audit Mongo DB API upgrade.

@miguelruizdev implemented Awaitility features in the contexts within the acceptance tests that need time orchestration; kept testing Activiti services, running the acceptance tests in a Jenkins X environment.

@constantin-ciobotaru finished and pushed the code in organization for the repository abstraction and the API, and updated the modeling acceptance tests to use only interfaces from the API

@ryandawsonuk added tests to activiti-cloud-audit-service to cover different security policies scenarios and fixed a bug related to hyphens when setting security policies using environment variables. Worked with @miguelruizdev to run the acceptance tests and introduce Awaitility to wait for test conditions to be met. Changed the charts in activiti-cloud-charts to provide a way to switch between H2 and Postgres.

@erdemedeiros - holidays -

@salaboy worked on upgrading the acceptance tests to support the new APIs and on deployments to Kubernetes using our new Activiti Cloud HELM charts.


This week we will be working hard on stabilizing examples and doing final refactorings at the API level. A blog post about the new API is being created, but it will probably see the light next week.


Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using to submit their first Pull Request ;)

Last week we moved the new API refactoring forward to all of our services and extensions. We started validating the new data types and exposed endpoints against our existing acceptance tests, and we are a couple of tests away from finalizing the first round of refactorings, which will enable us to merge all the changes into the develop branch. Our Docker Compose Quickstart was updated and simplified for people who want to start testing our building blocks.


@igdianov worked on upgrading the GraphQL modules to use the new API data types and introduced a new GraphQL query Spring Boot starter

@almerico worked on cluster configurations and chart improvements.

Worked on HTTPS configuration support in the jx environment.

@daisuke-yoshimoto working on the MongoDB upgrade to use the new API data types

@miguelruizdev tested Activiti services following the docker-compose approach as well as the Jenkins X one; updated the Postman Collection with admin endpoints and use case sub-collections.

@constantin-ciobotaru worked on modeling organization: moved the abstract repository layer to a separate module, added an API module with Application and Module abstract interfaces for entities, forced the REST layer to use a JSON deserializer for the Application and Module interfaces, removed the connection between the REST and JPA modules so the REST layer uses only the API and repository modules, and updated the acceptance tests

@ryandawsonuk - holidays -

@erdemedeiros moved commands (StartProcess, ClaimTask, CompleteTask, etc.) to the Java API level. Made sure that acceptance tests are using the customized ObjectMapper which contains all the interface mappings.

@salaboy worked on acceptance tests for the new API refactorings and deployed all the newly refactored services to a Kubernetes cluster to validate the new behaviour.


This week we will finish the missing acceptance tests and work on the complete automation of the execution of these tests including Security Policies scenario checking and new assertions to verify eventual consistency.


Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using to submit their first Pull Request ;)

Last week we started a new testing round for our Kubernetes deployments after the initial refactoring to use the new APIs. We are getting better and better at testing large refactorings for distributed deployments, and we are looking forward to releasing Beta1 after this round of testing is finished. We will start by releasing only the Java artifacts to Maven Central, then we will continue with updating documentation and releasing our example Docker images so they can be consumed from our HELM charts. Acceptance tests will also be part of our released artifacts, so you can run or modify them to test different scenarios. A warm welcome to our community to @almerico, who is working hard to reduce the configuration points for our deployments.


@igdianov - on holidays -

@almerico working on configuration of the Helm charts and the possibility to run everything on one ingress with exposecontroller to reduce manual configuration.

@daisuke-yoshimoto working on updating the MongoDB implementation to support the new APIs

@MiguelRuizDev testing the new API refactoring in Jenkins X

@constantin-ciobotaru worked on organization: removed the group concept and refactored projects to the applications concept, updated the acceptance tests for modeling accordingly, and added acceptance tests for updating the application name and deleting an application.

@ryandawsonuk produced Helm charts published at the Activiti chart repo and configured the JX example to use the charts for application and infrastructure components so that the user doesn’t need to build them. We’re refining this setup for a new quickstart.

@erdemedeiros moved integration related interfaces to the API level. Cleaned up unused classes from the runtime-bundle after the introduction of the new API.

@salaboy worked on updating the new example services to consume the new APIs and troubleshooting new issues and missing bits.

This week we will merge the API refactoring to our develop branch and finalize the testing round to proceed to release early next week. Right after the release, we will update our roadmap and provide more insights about what is coming before the end of the year.

Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using to submit their first Pull Request ;)

Alfresco Process Services v1.9, released a few weeks ago, introduced a new authentication module based on an open source IAM project called Keycloak, which provides a wide range of authentication options! Since Keycloak supports X.509 Client Certificate User Authentication natively, the moment APS 1.9 was announced I purchased a smart card called YubiKey, which supports Personal Identity Verification (PIV) (FIPS 201, a US government standard), to test the X.509 support of Keycloak. This blog is to help Alfresco customers and/or partners looking to implement PIV authentication on the Alfresco platform. The steps involved in getting this to work end to end are:

Please note that the steps in this blog are based on my experiments on macOS. If you are not a macOS user you may want to adjust some of the config steps to match your OS.

Generate SSL Certificates

The first step is to create the following certificates:

  • Server certificate issued and signed by an “Intermediate CA” which will be used to secure both Keycloak and APS apps.
  • Client certificate which can be used to authenticate the user. This client certificate will be loaded into the PIV smart card

In production scenarios, it is recommended to use internationally trusted CAs (e.g. VeriSign) to sign your server and client certificates. Every organization will have best practices in place around certificate issuing and usage, so if you need SSL certificates to secure your apps, check with your security team first! For the purpose of this blog, I’ll be creating the root & intermediate CAs myself. The intermediate CA will be used to sign both the client and server certificates on behalf of the root CA.


Please follow the instructions in my GitHub repo to generate the following certificates which will be required for the subsequent sections of this blog:

  • Root CA pair
    • Root CA certificate - openssl-cert-gen-template/certs/ca.cert.pem
    • Root CA key - openssl-cert-gen-template/private/ca.key.pem
  • Intermediate CA pair
    • Intermediate CA certificate - openssl-cert-gen-template/intermediate/certs/intermediate.cert.pem
    • Intermediate CA key - openssl-cert-gen-template/intermediate/private/intermediate.key.pem
  • Client & server certificate
    • Client certificate - openssl-cert-gen-template/intermediate/certs/admin.cert.pem
    • Client key - openssl-cert-gen-template/intermediate/private/admin.key.pem
    • Server certificate - openssl-cert-gen-template/intermediate/certs/localhost.cert.pem
    • Server key - openssl-cert-gen-template/intermediate/private/localhost.key.pem
  • Certificate keystore - openssl-cert-gen-template/keystore/keystore.jks
  • CA truststore - openssl-cert-gen-template/truststore/truststore.jks
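If you just want to see the shape of such a chain without the full template, here is a minimal, illustrative OpenSSL sketch. The file names mirror the list above, but the subjects, key sizes and validity periods are placeholder values, not the ones used in the repo:

```shell
# Root CA (self-signed); subject and validity are illustrative only
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key.pem \
  -subj "/CN=Demo Root CA" -days 1 -out ca.cert.pem

# Intermediate CA: create a CSR and sign it with the root,
# marking it as a CA via basicConstraints
openssl req -newkey rsa:2048 -nodes -keyout intermediate.key.pem \
  -subj "/CN=Demo Intermediate CA" -out intermediate.csr.pem
printf "basicConstraints=CA:TRUE\n" > ca-ext.cnf
openssl x509 -req -in intermediate.csr.pem -CA ca.cert.pem -CAkey ca.key.pem \
  -CAcreateserial -extfile ca-ext.cnf -days 1 -out intermediate.cert.pem

# Client certificate signed by the intermediate
openssl req -newkey rsa:2048 -nodes -keyout admin.key.pem \
  -subj "/CN=admin" -out admin.csr.pem
openssl x509 -req -in admin.csr.pem -CA intermediate.cert.pem \
  -CAkey intermediate.key.pem -CAcreateserial -days 1 -out admin.cert.pem

# Verify the client certificate against the chain
openssl verify -CAfile ca.cert.pem -untrusted intermediate.cert.pem admin.cert.pem
```

The same pattern extends to the server certificate; the GitHub repo mentioned above does the equivalent with proper CA directories, serial files and extensions.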

Configure Alfresco Process Services

Configure APS for SSL

Enable the HTTPS Connector in tomcat/conf/server.xml using the keystore created in the above mentioned “Generate SSL Certificates” section. My connector config on tomcat version 8.5.11 looks like below:


<Connector port="8443" maxThreads="200"
           scheme="https" secure="true" SSLEnabled="true"
           keystoreFile="<path-to>/openssl-cert-gen-template/keystore/keystore.jks"
           keystorePass="<your-keystore-password>"
           clientAuth="false" sslProtocol="TLS"/>


Configure APS for Keycloak Authentication

APS version 1.9 added a new way of authentication based on Keycloak 3.4.3. I’m not going to go through the configuration details here, so please refer to the APS identity service documentation & APS 1.9 blog for more details on this configuration. Once APS is configured with Keycloak, the authentication flow will be driven by the configuration you make on the Keycloak server. My configuration file in APS is shown below:


# --------------------------
# Keycloak configuration
# --------------------------

# set to false to fully disable keycloak
keycloak.enabled=true


Configure Yubikey (PIV/Smart Card)

I’m using a YubiKey Neo as the PIV smart card where I’ll load my client authentication certificate, which will be used to log in to APS. The smart card configuration steps are based on the YubiKey documentation, which you can find here.

Install Yubico PIV Tool

The Yubico PIV tool allows you to configure a PIV-enabled YubiKey through a command line interface. Download this tool and use the following commands to load the certificate and key into authentication slot 9a of your smart card. You may need to configure the device and set a management key to run the following commands. The device setup instructions can be found here.

Commands to load the client certs into Yubikey

# Set pivtool home and openssl-cert-gen-template directories
pivtool_home=<path-to-yubico-piv-tool>
cert_dir=<path-to-openssl-cert-gen-template>
key=<your-management-key>

# Import Certificate
$pivtool_home/bin/yubico-piv-tool -k $key -a import-certificate -s 9a < $cert_dir/intermediate/certs/admin.cert.pem

# Import Key
$pivtool_home/bin/yubico-piv-tool -k $key -a import-key -s 9a < $cert_dir/intermediate/private/admin.key.pem

Verify certificates using YubiKey PIV Manager

This is an optional step. The YubiKey PIV Manager enables you to configure a PIV-enabled YubiKey through a graphical user interface. Once the certificate and key are imported, you can verify them via this utility. The installer can be found here. When a certificate is successfully loaded into the authentication slot of your YubiKey, the PIV Manager will display it as shown below.


Verify certificates using YubiKey PIV Manager

Browser Configuration

Install OpenSC

As you can see from the OpenSC wiki page, this project provides a set of libraries and utilities to work with smart cards. Since I’m using a Mac, I followed the instructions on this page to get it installed using the DMG file provided by OpenSC.

Configure Browser

Though I am a Chrome user, I used Firefox (version 60.0.2)  for testing the Smart Card Authentication into APS.

If you really want to test this on Chrome, you can use the Smart Card Connector Chrome app to test this. Though this app is intended for Chrome on Chrome OS, it worked for me on my Mac too. However it may prompt you for the Yubikey admin pin too many times which is quite annoying!

My recommendation is to install Firefox and configure Firefox with the OpenSC PKCS11 module as explained below!

Preferences -> Privacy & Security -> Security Devices -> Load

  1. Module name -> “PKCS#11 Module”
  2. Module filename -> “/Library/OpenSC/lib/” (installed as part of the OpenSC installation above) 

Import the root and intermediate CAs openssl-cert-gen-template/certs/ca.cert.pem & openssl-cert-gen-template/intermediate/certs/intermediate.cert.pem respectively via Preferences -> Privacy & Security -> View Certificates -> Authorities -> Import so that the browser will trust servers configured with certificates issued by these CAs

Configure Keycloak

The first step is to install Keycloak 3.4.3 as documented here. The Keycloak documentation is quite detailed, hence I’m not going to repeat it here. In the next few sections, I’ll go through the X.509-specific configuration of Keycloak which is essential to get this working!

Configure Keycloak for two-way/mutual authentication

For more details on X.509 client certificate authentication configuration, please refer enable-x-509-client-certificate-user-authentication.

  • Copy the openssl-cert-gen-template/keystore/keystore.jks & openssl-cert-gen-template/truststore/truststore.jks files to $KEYCLOAK_HOME/standalone/configuration.
  • Open the standalone.xml file and add the following ssl-realm to management/security-realms group in the xml.
<security-realm name="ssl-realm">
    <server-identities>
        <ssl>
            <keystore path="keystore.jks" relative-to="jboss.server.config.dir" keystore-password="keystore"/>
        </ssl>
    </server-identities>
    <authentication>
        <truststore path="truststore.jks" relative-to="jboss.server.config.dir" keystore-password="truststore"/>
    </authentication>
</security-realm>
  • Add a https-listener to the profile/subsystem[xmlns='urn:jboss:domain:undertow:4.0']/server[name="default-server"] in the standalone.xml
<subsystem xmlns="urn:jboss:domain:undertow:4.0">
    <server name="default-server">
        <https-listener name="https" socket-binding="https" security-realm="ssl-realm" verify-client="REQUESTED"/>
        ...
    </server>
</subsystem>
  • Start Keycloak standalone using the command “$KEYCLOAK_HOME/bin/ -Djboss.socket.binding.port-offset=100 -b”, which will start the server by offsetting the default ports by 100. This is helpful to avoid port conflicts on your localhost. With this command, the https port will become 8543 instead of the default 8443.

Configure Keycloak authentication flows

Log in to Keycloak by going to https://localhost:8543/ (admin/admin are the default admin user credentials). Add a new user with a username that matches the certificate attributes: username “admin” and email “” by going to Keycloak -> <your realm> -> Users -> Add user

Configure Direct Grant

Configuring direct grant is the easiest way to verify the configuration. For more details refer to adding-x-509-client-certificate-authentication-to-a-direct-grant-flow. Screenshots below:

Configure Direct Grant Flow


Configure Direct Grant

Configure Browser Flow

The following screenshots will show how to configure the browser flow to use X.509 authentication. For more details, please refer to adding-x-509-client-certificate-authentication-to-a-browser-flow. Screenshots below:

Configure Authentication Bindings


Direct Grant

Use the following command to test the direct grant (please change the certificate path as per your configuration). For more details refer to adding-x-509-client-certificate-authentication-to-a-direct-grant-flow


curl https://localhost:8543/auth/realms/alfresco/protocol/openid-connect/token \
      --insecure \
      --data "grant_type=password&scope=openid profile&client_id=aps&client_secret=5323135f-36bb-46c4-a641-907ad359827a" \
      -E /Users/cijujoseph/openssl-cert-gen-template/intermediate/certs/admin.cert.pem \
      --key /Users/cijujoseph/openssl-cert-gen-template/intermediate/private/admin.key.pem
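The token endpoint replies with a JSON body. Below is a small, illustrative way to pull the access_token field out of that response for use in an Authorization header; the JSON string is a stand-in for the real response, and a tool such as jq would work equally well:

```shell
# Stand-in for the JSON returned by the token endpoint above
response='{"access_token":"eyJhbGciOi...","token_type":"bearer"}'

# Extract access_token using Python's standard json module
token=$(printf '%s' "$response" | python3 -c 'import sys, json; print(json.load(sys.stdin)["access_token"])')

# The token can then be sent as: -H "Authorization: Bearer $token"
echo "$token"
```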


Browser Auth Demo

Insert the smart card into your computer and test the browser authentication flow as shown in the video below.




Special thanks to the following references!

Last week we spent some time closing the final details of the Java Runtime API and improved our support for Events and Base Data Types to make sure that our services can be completely decoupled. We are looking forward to closing this release to continue our journey to continuous deployment of each of our components in an independent way.

@ryandawsonuk, @igdianov and @almerico worked on improving our deployment mechanisms by creating new HELM charts for Applications and Infrastructure. This should enable us to deploy our components to different cloud providers such as: Google Kubernetes Engine (GKE), OpenShift, EKS (Amazon), AKS (Microsoft) and PKS (Pivotal).


@igdianov and @almerico worked on improving HELM charts and deployments of the community bits, as well as adding more flexibility to the GraphQL extension for the Query Service

@daisuke-yoshimoto is working on updating the Audit Service implementation for MongoDB to fit the new interface of the Audit Service API.

@MiguelRuizDev followed the Cloud Native in Kubernetes Workshop, getting a high-level understanding of the technologies taking part in it, especially the structure that holds a cloud native application together.

@constantin-ciobotaru discussions and plans for refactoring the organization service in modeling

@ryandawsonuk added demo-ui to the quickstart based on Jenkins-X and got this working with the gateway and Keycloak, including CORS configuration. Identified an issue with the GraphQL part of query. Did a PoC on publishing Docker images to Docker Hub from Jenkins-X.

@erdemedeiros refactored the Java API event listeners: use generics to specify the event type instead of having methods for all the events in the same interface (no empty methods anymore when implementing an event listener). Covered all the missing events currently supported by the runtime bundle (except integration requests) using the new Java API model.

@salaboy updated the Audit & Query Services to use the new Java Runtime API types for REST controllers and event handlers.

This week, we will move the new Java Runtime APIs to the main branches, and we will start preparing all Java modules to be released to Maven Central.

Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using to submit their first Pull Request ;)

Last week we made considerable progress moving forward on the new API implementation in our core services. @erdemedeiros got the Runtime Bundle and Query Service implementations updated and I took the lead on Audit. Now our core services should use the same data types such as ProcessInstance, Task and Variables. Our Services APIs are now in charge of receiving data in a standard format (CloudRuntimeEvent) and returning data in structures that wrap the previously mentioned types. Our Service API layer is now in charge of transforming these “standard” types into whatever model it needs for storage, decoupling the APIs from the underlying technology used to store and retrieve data.


@igdianov working on a Jenkins X set up for a custom example.

@daisuke-yoshimoto worked on creating examples and acceptance tests about BPMN Cloud Signals.

@ryandawsonuk continued working on porting the quickstart example for Jenkins-X, adding acceptance tests and beginning the integration of Spring Cloud Gateway, including submitting a PR for Spring Cloud Gateway to support x-forwarded-prefix

@erdemedeiros completed work on integrating the new Java API into the query service, which no longer depends on activiti-engine for events reception.

@salaboy working on the new API and updating the Audit Service to use it internally.

This week we will be moving our acceptance tests forward to use the new APIs and making sure that we can easily run them inside our Jenkins X pipelines.

Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using to submit their first Pull Request

Wow, 50 weeks of heavy work! Last week we (the Activiti Team) presented at JDK.IO and JBCNConf. You can find the slides and the workshop instructions here:

We moved the Quickstart clean-up and refactoring forward, plus we made huge progress on the API side.


@MiguelRuizDev continued working with Docker and Docker Compose in order to deploy the Task Manager Service and its dependent components (database and UI) into different containers.

@daisuke-yoshimoto worked on investigating and fixing issue #1902 this week. He will return to creating examples and acceptance tests about BPMN Cloud Signals next week

@constantin-ciobotaru worked on endpoints for exporting models as XML/JSON files

@ryandawsonuk presented with @erdemedeiros at JDK.IO. Contributed to the Keycloak Helm chart with options to load a realm with a start argument and run kcadm scripts. Worked on porting the quickstart example to deploy on Jenkins-X using the Keycloak chart (and other charts).

@erdemedeiros presented at and attended the JDK.IO conference with @ryandawsonuk; reorganized the new Java API Maven modules to be able to add a dependency to the model only (without the dependency to the activiti-engine).

@salaboy did a workshop at JBCNConf about Activiti Cloud, Spring Cloud and Kubernetes. We used Jenkins X to deploy services, configure environments and understand how all the pieces work together.

This week we will finish the API work plus the migration of the Quickstart to use Jenkins X conventions for creating Docker images and HELM charts for deployments. We are getting really excited about the progress that we are making, so keep an eye on the Gitter channel for more updates about our first Beta release.

Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using to submit their first Pull Request ;)

On behalf of the team, I am pleased to announce that Alfresco Process Services 1.9 has been released and is available now. This release includes new features and improvements, as well as important bug fixes. Here are a few notable highlights:

New modern authentication option


Authentication plays an important role in improving user experiences and adhering to your organization’s security standards. Alfresco Process Services (APS) and other parts of Alfresco’s platform, including Alfresco’s Application Development Framework (ADF) and Alfresco Content Services (ACS), have extension points for customizing such authentication needs. Yet, until now, these authentication experiences were unique amongst platform components—including the authentication extensibility model, implementation, and supported authentication standards.


Alfresco now introduces the new Alfresco Identity Service Architecture, an optional solution for customers requiring more advanced authentication services. This architecture features:

  1. Modern and unified open standards for authentication amongst APS, ADF, and ACS via OpenID Connect authentication.
  2. Alfresco Identity Service for brokering authentication to your Identity Provider (IdP) and authentication protocols such as SAML or OAuth2.


Conceptually, this new architecture looks like the following:


Practically, this functionality is available on a limited availability basis for APS 1.9 (and other compatible Alfresco releases), as the Alfresco Identity Service is not generally available at the time of this writing. Customers motivated by the benefits of this new architecture can leverage Keycloak, an open source project freely available under an Apache 2 license, as a stand-in until the Alfresco Identity Service is available, as illustrated below:



Note, Alfresco or Partner assistance for configuring Keycloak in your environment is available as a separate service contract. This is only needed until the Alfresco Identity Service is generally available. Customers using the existing APS authentication mechanisms can continue doing so without change.


This new option:

  1. Simplifies interactions with your identity provider (IdP) by centralizing configuration across Alfresco components including integration across solutions using ADF, APS, and ACS. This includes all ADF-based applications such as the Content App and Process Workspace.
  2. Allows you to leverage a wide array of authentication protocols including SAML, OAuth 2.0, OpenID Connect, and Kerberos as well as user federation with common user databases such as LDAP and Active Directory.
  3. Customers can now configure OAuth2 and leverage the Activiti-Admin console with this new solution by setting up a subset of local users in Keycloak for basic authentication. (Previously, the Activiti-Admin console could only be used if the Activiti-App was configured for Basic Authentication, not OAuth2 or Kerberos. This new architecture removes this restriction whilst adding more authentication protocols.)


A basic example to get going is to ensure the configuration file is in your classpath, usually in .../tomcat/webapps/activiti-app/WEB-INF/classes/META-INF/.


Step 1

Make sure the “keycloak.enabled” stanza is set to true.

Step 2

Then match the name in the properties file with your realm name.

Step 3

Similarly, make sure the resource in your properties file matches your client.

Step 4

And the valid URL needs to match your Activiti-App host name.

Step 5

Lastly, ensure that your users exist in Keycloak (for authentication) and in APS (for authorization). Important: the email must match.
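Putting Steps 1–5 together, the relevant entries of that properties file look roughly like the sketch below. The property names follow the Keycloak adapter conventions, and the realm, resource and URL values are placeholders that must match your own Keycloak setup; check the APS identity service documentation for the exact keys in your version.

```
# Step 1: enable Keycloak authentication
keycloak.enabled=true
# Step 2: must match your realm name in Keycloak
keycloak.realm=<your-realm-name>
# Step 3: must match the client you created in Keycloak
keycloak.resource=<your-client-id>
# Step 4: the auth server / redirect URLs must line up with your Activiti-App host name
keycloak.auth-server-url=https://<identity-service-host>/auth
```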


Once you restart Alfresco Process Services, the configuration will now redirect your authentication to the Alfresco Identity Service Architecture as in steps A, B, and C below:

While this is a simple example, additional configuration can be done for the following use cases:

  1. Configure the Alfresco Process Workspace to use the Alfresco Identity Service Architecture. This can also be done for your own ADF application. This is the preferred interface for end-user applications.
  2. Allow the Alfresco Identity Service Architecture to authenticate with your own IdP for SAML or OAuth2
  3. Automatically sync users to the Alfresco Identity Service / Keycloak
    (note: this is needed for authentication and the traditional user sync of APS is still required for authorization)
  4. Configure the APS Admin Console (activiti-admin) to operate when the activiti-app is configured for a non-basic auth protocol (a feature not available prior to the Alfresco Identity Service Architecture)


Check Ciju Joseph recent blog post about implementing a strong two-factor hardware-based authentication. He used a smart card that supports Personal Identity Verification (PIV) authentication (FIPS 201, a US government standard). 



Process Workspace released independently

The Process Workspace is a new ADF-based user interface for end users to view, act and collaborate on tasks and processes. From Alfresco Process Services 1.9, Process Workspace is no longer shipped with APS but packaged as a separate distribution, giving customers the flexibility to update their environment each time there is a new release. The release cadence will follow ADF releases to keep pace and sync with its innovation cycle. Process Workspace 1.2.0 (based on ADF 2.4.0) was released July 2nd. It includes:

  • ADF 2.4 Upgrade
  • configurable landing page
  • UX Improvements
  • source code and .war file
  • bug fixes


Please check the following documentation for installation instructions: Installing Alfresco Process Services Workspace | Alfresco Documentation.


Dist folder (distribution) in Process Workspace package (.zip)

Advanced documentation generation

It is now possible to use advanced constructs to generate richer documents including information from multiple data sources such as external databases, REST endpoints, JSON objects, etc. The graphical user interface for the generate document task now includes 2 new properties:

  • Additional data source names: a comma-separated list of data source names the document will use as the source of the expressions.
  • Additional data source expressions: a comma-separated list of expressions to be included in the document.


This makes it possible to collect data from multiple data sources, including external ones, to enrich the generated documents. As an example, here is how to collect data from process variables and two custom services. The example is a simple booking app (Github repo link) that allows a user to specify a city to visit and, at the end, generates a document confirming the booking. The document should contain information about the booking plus some extra information, such as the weather and recommended places to visit.


Weather and recommendations information are retrieved from custom services WeatherService and RecommendationService.
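As a sketch of what such custom services could look like, the stubs below return fixed data; the class names come from the post, but the method names, signatures, and returned values are illustrative assumptions, not the actual repo code. In APS these would be registered as Spring beans so the document-generation expressions can reference them by name.

```java
// Hypothetical sketch of the two custom services from the booking example.
// In APS these would be Spring beans (e.g. annotated @Service("weather")),
// making them addressable from data source expressions; here they are plain
// Java stubs so the sketch is self-contained.
import java.util.List;

public class BookingDataSources {

    // Stubbed weather lookup; a real service would call an external API.
    static class WeatherService {
        String currentWeather(String city) {
            return "Sunny, 24C in " + city;
        }
    }

    // Stubbed recommendations lookup backed by a fixed list.
    static class RecommendationService {
        List<String> placesToVisit(String city) {
            return List.of(city + " Old Town", city + " Museum of Art");
        }
    }

    public static void main(String[] args) {
        WeatherService weather = new WeatherService();
        RecommendationService recommendations = new RecommendationService();
        System.out.println(weather.currentWeather("Barcelona"));
        System.out.println(recommendations.placesToVisit("Barcelona"));
    }
}
```

The generate document task would then list the two data source names (e.g. `weather,recommendations`) in the first new property and expressions invoking those beans in the second; the exact expression syntax is version-specific, so check the APS documentation.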



Multiple data sources can be specified in the additional data source expressions

Additional data source names (weather and recommendations) can then be used in the document template to display the information.

The final generated document will look as follows.

New iOS mobile app version

A new version, 1.1, of the Alfresco Process Services iOS app is available on the App Store. It adds offline capabilities and compatibility with iPhone X. Here are the detailed highlights for this update:

  • Access your task lists, downloaded files, and forms without network connectivity
  • Fill out task forms and save progress for later, even when you are offline
  • iPhone X compatibility
  • Date & time form field support
  • Attachment thumbnails
  • Additional enhancements and fixes

The updated app requires iOS 10 or higher and connects to Alfresco Process Services 1.7 or higher.


Sub-groups in task assignments

In Alfresco Process Services 1.9, if you select a candidate group, or add a group of involved people, for a user task, it now includes all users belonging to any sub-group under the selected parent group. As an example, consider the following group hierarchy:

  • TopGroup (user1)
    • SubGroupA (user2)
      • SubGroupA1 (user3)


If you select TopGroup as the candidate group for user task A, all three users (user1, user2 and user3) will see task A in their task queues. The same behavior applies to involved groups on user tasks.
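The behavior described above amounts to resolving a group's effective members transitively. The sketch below is not APS internals, just an illustration of the resolution logic using the example hierarchy; all class and method names are hypothetical.

```java
// Illustrative sketch (not APS code): resolving the effective candidate
// users of a group by walking its sub-group tree, which is the behavior
// APS 1.9 applies to candidate and involved groups on user tasks.
import java.util.ArrayList;
import java.util.List;

public class GroupResolution {

    static class Group {
        final String name;
        final List<String> users = new ArrayList<>();
        final List<Group> subGroups = new ArrayList<>();
        Group(String name) { this.name = name; }
    }

    // Collect the group's own users plus, recursively, every sub-group's users.
    static List<String> effectiveUsers(Group group) {
        List<String> result = new ArrayList<>(group.users);
        for (Group sub : group.subGroups) {
            result.addAll(effectiveUsers(sub));
        }
        return result;
    }

    public static void main(String[] args) {
        Group top = new Group("TopGroup");
        Group a = new Group("SubGroupA");
        Group a1 = new Group("SubGroupA1");
        top.users.add("user1");
        a.users.add("user2");
        a1.users.add("user3");
        a.subGroups.add(a1);
        top.subGroups.add(a);
        System.out.println(effectiveUsers(top)); // [user1, user2, user3]
    }
}
```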


Next steps

If you are interested in trying out this new Process Services version, register for a free enterprise trial here. Check out the Alfresco documentation for detailed product instructions. 

Hi everyone! Yesterday we did a workshop at @JBCNConf about using Spring Cloud, Spring Cloud Kubernetes, Jenkins X and Activiti Cloud. 

It was the first time we did a full two-hour session deploying a set of services that implement the Activiti Cloud BluePrint. Notice that this was not a simple "hello world" exercise: we deployed 7 backend services (forking and cloning 7 repositories from GitHub) and a front-end application, and we configured Jenkins X, Nexus, and a Kubernetes cluster in GKE. This can be done in two hours if you have some Kubernetes experience, but the main focus was to help people go through the journey and get their hands dirty by solving the issues and problems that might appear.

Introductory slides here:


Cloud Native Java in Kubernetes 


You can find a brief description of the scenario that we are implementing here:

The good thing is that you can do the workshop from the comfort of your home by following the instructions provided here:

During the workshop you will learn about:

  • Activiti Cloud
  • Spring Cloud
  • Spring Cloud in Kubernetes
    • Configuration
    • Discovery
    • Client Side Load Balancing
  • Kubernetes Deployments
  • Jenkins X

If you are interested in a webinar or a virtual session covering this content, drop us a message and we will be happy to coordinate one.