
Last week we spent a lot of time preparing for our Core and Cloud Beta2 release. We improved our acceptance tests to cover more scenarios end to end, including the modeling services. While aligning dependencies and how we build our repositories for a release, we found better ways to reduce the interdependencies between services, which enables us to decouple each service's lifecycle.


@daisuke-yoshimoto worked on a critical issue related to Process Variables serialization.

@igdianov helped with critical checks for CI/CD pipelines and looked at the Notifications mechanisms.

@constantin-ciobotaru worked on import/export of applications in modeling with the new zip structure.

@miguelruizdev worked on the update of the Keycloak realm and started to increase coverage of acceptance tests in the runtime bundle.

@ryandawsonuk modified the scripts in activiti-scripts to also support replacing fixed versions from Jenkins-X for the cloud repositories. Ran the scripts to perform a first mock release on the cloud repositories.

@erdemedeiros worked with @salaboy and @ryandawsonuk on CI/CD improvements. Started working on the deployment of processes contained in the zip file generated by the modeler.

@salaboy worked on aligning our modules for the Beta2 release: a lot of work on aligning POMs and finally separating the Modeling services lifecycle from the runtime modules.


Get in touch if you want to contribute:

We are looking forward to mentoring people on the technologies that we are using and to helping them submit their first Pull Request ;)

Last week we managed to import our Cloud repositories into our new CI/CD pipelines in Jenkins X. Now each repository is being built, checked and released by these pipelines. The resulting artifacts are published to a public Alfresco Nexus Activiti Releases repository. We are currently testing our scripts for milestone releases (labeled releases once a month), which will go to Maven Central. I am extremely happy with the results; this will improve the reliability of each service and allow us to evolve them independently. We are now working at full speed to finalize the last bits of Beta2, which should be available in the next couple of weeks.


@balsarori worked on process variable extensions #2049.

@lucianoprea worked on improving the consistency of update operations across core services.

@daisuke-yoshimoto worked on reviewing core engine reported issues and updating the MongoDB audit implementation for Beta1/Beta2 conformance.

@igdianov worked on jx updatebot extensions to facilitate version convergence in builds.

@constantin-ciobotaru worked on the new zip structure for applications exported from the Activiti Modeler application.

@miguelruizdev started experimenting with the Reactive stack for our core services.

@ryandawsonuk modified the scripts in activiti-scripts to support replacing fixed versions from Jenkins-X. Ran the scripts to perform a mock release on the non-cloud repositories.

@erdemedeiros kept working with @ryandawsonuk and @salaboy to get our CI/CD pipelines working. We moved from automatic merge to manual merge for generated PRs in order to get better control over version changes.

@salaboy worked on CI/CD configurations, testing and releasing different artifacts via our new pipelines. A blog post about our experience migrating from a mono repo to distributed services and CI/CD pipelines is in the making; you can check the draft here:


Get in touch if you want to contribute:

We are looking forward to mentoring people on the technologies that we are using and to helping them submit their first Pull Request ;)

Last week we managed to move our CI/CD pipelines forward. We are currently working with fixed versions of our libraries, and we can see light at the end of the tunnel. We hope that by the end of this week we will have our core frameworks ready to be released; then we will work on the service layer (examples) and the TTC blueprint upgrade.


@daisuke-yoshimoto is working on applying the new audit service API interface to

@igdianov working on Kubernetes websockets PoC microservices.

@constantin-ciobotaru worked on dynamic model validators.

@miguelruizdev deployed an environment for testing the new UI modelling app.

@ryandawsonuk continued importing projects into the build cluster for running in Jenkins-X. Started configuring projects to use updatebot for propagating build versions. Added features to updatebot to give us options on how to merge and whether to delete branches after merge. Started mapping out the build graph so the team can decide how to propagate versions through the graph.

@erdemedeiros worked with @ryandawsonuk to get the Jenkins-X pipeline working. Fixed the release commit message; updated ignore files.

@salaboy presented at SpringOnePlatform.io about our journey to Cloud Native applications and the roadmap for the Spring Cloud Kubernetes project. While doing this, I started a PoC on how to adapt Activiti Cloud Connectors to work on top of Knative.

Get in touch if you want to contribute:

We are looking forward to mentor people on the technologies that we are using to submit their first Pull Request ;)

Hello community,


Today shall be the day on which I write my first blog post here. Who am I, actually? Well, I am Dennis Koch, and I work as an Expert Support Engineer for Alfresco. In Support we often get questions or requests from customers and partners where the answer should sometimes go out to a wider audience. So my idea is to share those questions and answers with you from time to time, starting with this first post.


Recently I had a customer asking the following:


When setting a String process variable in Alfresco Process Services (the same applies to the Activiti engine), how long can the String value be?


So I started brainstorming:


I am not the designer/creator of Activiti, nor did I ever help develop or architect it. Still, there should be an easy way to find the answer to this question. While thinking about it, it occurred to me that the limiting factor here can only be the database, where we store process variables and their values along with a lot of other process-related data.


Knowing that process variables for a runtime process are stored in the database table ACT_RU_VARIABLE, I did some "reverse engineering" to find out which columns are created for this table by default, how they are defined and what they are used for. The most interesting columns for answering my customer's question were "TYPE_" and "TEXT_".


When I created a String process variable with the value "Hello World", a row was created in the ACT_RU_VARIABLE table, with "TYPE_" set to "string" and "TEXT_" set to my actual String value, i.e. "Hello World". So I inspected how the "TEXT_" column is actually defined and learned that it is limited to 4000 characters. All I had to do next was create some random text with 4000 characters (there are online tools for that, e.g. a "Lorem ipsum" dummy text generator) and see whether this text actually makes it into the table when I set it as the value of another String process variable. Indeed, this worked, so I could already answer my customer's question.


But before answering, I wanted to find out how Alfresco Process Services actually handles the case where someone tries to create a String process variable with more than 4000 characters. Would the system have issues, would the value be truncated, or would I be able to break my instance altogether? To find out, all I had to do was extend my generated 4000-character text by 2 more characters and set the new text as the value of another String process variable.


Good news! I did not see any errors in my logs, nor did I seem to have broken my instance. Going back to the "ACT_RU_VARIABLE" table I saw that a new row was in fact created for my new process variable, but this time the "TYPE_" value was set to "longString" and my "TEXT_" value was NULL.


Did we just lose the 4002-character text, without any error being thrown?


No, with Alfresco systems there is no data loss of course!


Interestingly, the "ACT_RU_VARIABLE" table also has a column "BYTE_ARRAY_ID" and whereas this ID value was NULL for my String process variables with 4000 or less characters, it now had a value for my variable exceeding the 4000 characters limitation. The ID stored there is indeed referencing a row in the table "ACT_GE_BYTEARRAY", where my text is stored as byte array in the column "BYTES_". "BYTES_" is defined as VARBINARY(2147483647), which means we can store text with a size of 2147483647bytes. That should be long enough to store this blog post and the next one to come :-)


Be prepared to hear back from me soon...


- Dennis

Last week we spent a lot of time configuring our CI/CD pipelines, and that work will continue this week. We are now building our libraries using Jenkins X, publishing to our public Alfresco Nexus and configuring Docker Hub for our Docker images. We are still working on publishing our HELM charts from our CI/CD pipelines via GitHub Pages.

I am currently at, so if you are around and interested in these topics, get in touch! I am presenting the journey of Activiti Cloud to Kubernetes using Spring Cloud on PKS.

@igdianov - planning to add GraphQL WebSockets and Notification services for Kubernetes. Researched Kubernetes Nginx websocket support and configurations to provide session affinity. Will be working on a PoC next week.

@ffazzini created the connector definitions endpoint in RB, created acceptance tests for connector definitions, and implemented all requested changes in the connector definitions PRs.

@constantin-ciobotaru worked on making model types dynamic using beans and on removing the form model type and references to the form service.

@miguelruizdev completed the Keycloak implementation in the ttc-dashboard-ui Angular project.

@ryandawsonuk worked with @erdemedeiros to apply the POM refactoring across repositories. Implemented a PoC to prove we can trigger a Jenkins build from the creation of a tag. Started importing the Activiti projects into a designated build cluster to build using the jx-activiti-cloud user.

@erdemedeiros completed the POM refactoring on the remaining repositories. Worked with @ryandawsonuk to apply it across repositories.

@salaboy prepared for S1P and worked on improving the gateway, plus a PoC with Knative for Cloud Connectors.

Get in touch if you want to contribute:

We are looking forward to mentoring people on the technologies that we are using and to helping them submit their first Pull Request ;)

I’m really excited to be part of the SpringOne Platform conference this year (September 24–27, 2018, Washington, D.C.). The agenda is just amazing, covering topics from Knative and Project Riff to Istio, Spring Cloud Kubernetes and more.

I am delighted to be presenting with no less than the Spring Cloud co-lead, Spencer Gibb (@spencerbgibb). We are going to be presenting about Spring Cloud on PKS and how this journey looks for Java developers (Thursday 27th Sept, 10:30 am - 11:40 am).

We are working on a demo to show the Spring Cloud Kubernetes project in action, and I look forward to hearing the Spring community's feedback. You can find the preliminary work on GitHub:


After the conference I will upload all the materials and probably a video on how to reproduce the demo that we will be presenting. I will also try to update this blog post with the live feed.

I will also be doing a lightning talk, From Spring Boot to Kubernetes, on Monday (24th Sept).

You can use the following code “Attendee_Speaker_100” to save 100 USD on the registration fees.

If you are attending, and interested in the stuff that we are working on, get in touch via twitter or drop me a message here and we can do some hacking together.

See you all there! 

Last week we made a huge step forward in our CI/CD approach. We have validated that we can create environments that connect to services such as Nexus and Docker Hub, and we are ready to start importing all the framework projects into the new CI/CD environment. We are still looking into how to publish our charts from CI/CD so that new versions go out as soon as the underlying libraries change.

There has also been some amazing work on the TTC workshop, which now has acceptance tests validating the basic scenarios. I am also expecting to add some experiments with Telepresence and the new DevPods in JX with the embedded Theia IDE.


@danielpancu added the implementation for getting all users and all roles for Activiti core.

@ffazzini worked on the Java API and the RB Cloud scenario.

@constantin-ciobotaru worked on Activiti Modeling integration with Keycloak.

@miguelruizdev worked on fixing issues in the TTC acceptance tests and collaborated on the PoCs for CI/CD.

@ryandawsonuk worked on and documented PoCs to publish to Maven Central, Alfresco Nexus and Docker Hub from a Jenkins-X cluster.

@erdemedeiros refactored POM files on Activiti Cloud projects to conform to the CI/CD pipelines.

@salaboy PoC on Telepresence, improving platform components such as the Gateway, working on the framework side of CI/CD.

Get in touch if you want to contribute:

We are looking forward to mentoring people on the technologies that we are using and to helping them submit their first Pull Request ;)

Last week we made some progress on the initial implementation of Cloud Connector Definitions (we are also working on an RFC document for this topic), and we moved the TTC acceptance tests projects forward. There were further refinements to the Spring Cloud Kubernetes Discovery implementation, which is being tested with the Spring Cloud Gateway project to filter out unrelated Kubernetes Services that we are not interested in registering as routes.


@cristina-sirbu worked on the custom Spring Boot Starter for the Activiti Cloud Events Adapter.

@ffazzini cloud connector definition and runtime.

@lucianoprea helped @constantin-ciobotaru with the helm chart for activiti-cloud-modeling and logged a bug about task update #1991.

@constantin-ciobotaru worked on deploying the modeling application in GCP using Jenkins-X.

@danielpancu added Keycloak get-users and get-role-names functionality to the activiti identity-keycloak module.

@miguelruizdev worked on the creation of new scenarios for the TTC acceptance tests.

@ryandawsonuk - holidays -

@erdemedeiros - paternity leave -

@salaboy working on Gateway filters and Spring Cloud Kubernetes enhancements for reloading configs.

Get in touch if you want to contribute:

We are looking forward to mentoring people on the technologies that we are using and to helping them submit their first Pull Request ;)

Last week we worked hard on updating the Trending Topics Campaign workshop to work with Keycloak, and we aligned dependencies for Beta1 and the latest snapshots. This week we are pushing our CI/CD approach to move away from fixed versions for all the services and enable fast development cycles for our Cloud Native building blocks. There is some more work around SSO and IDM for our workshop, and there are ongoing discussions about the architecture of a set of UI/operations controllers to report on in-flight operations inside the infrastructure.

@ffazzini cloud connector definition and runtime.

@lucianoprea worked on testing the Activiti Cloud HELM Charts 

@balsarori created a PR which removes all Activiti 5 compatibility code from the Activiti 7 code base.

@igdianov found a solution by customizing Jenkins-X pod templates via the myvalues.yaml configuration.

@cristina-sirbu working on the Spring Boot starter for the Events Adapter.

@constantin-ciobotaru worked on the modeling service, moving an endpoint from APS to community.

@almerico worked on helm deployment to install models to APS in Jenkins X pipelines.

@miguelruizdev kept working on the Keycloak implementation within the ttc-dashboard Angular project (front end of the TTC Workshop), following the keycloak.js API.

@ryandawsonuk did an initial spike on CI/CD for community projects (libs).

@erdemedeiros set up a cluster with TTC examples; started writing acceptance tests to cover TTC examples.

@salaboy worked on Spring Cloud Kubernetes Discovery filters for Services and worked with @constantin-ciobotaru to refine the backend services for the Modeling Application.

Get in touch if you want to contribute:

We are looking forward to mentoring people on the technologies that we are using and to helping them submit their first Pull Request ;)

Hi everyone, it has been a long time since the last roadmap update. This blog post addresses the main reasons why we delayed Activiti Core & Activiti Cloud Beta1, and how the work will proceed moving forward. It is important to note that this update represents the work being done in the open source community on Activiti Core and Activiti Cloud, and we encourage anyone reading this article to join us on our journey.

This blog post is divided into 3 sections: Past, Present and Future. For the sake of transparency, and in addition to our weekly blog posts, we wanted to summarise our (community) work during the last 5 months.

Past: Some background about Beta1

Before Beta1, while we were doing Early Access releases, we were defining the shape of Activiti Cloud. As in every experimentation exercise, we created a set of proofs of concept to demonstrate how Activiti Cloud applications were supposed to be built. We started with Spring Cloud and Docker, and at that point it made a lot of sense to use the Spring Cloud Netflix OSS libraries to build Cloud Native applications. These libraries and services, such as the Eureka Service Registry, Zuul (Gateway) and the Spring Cloud Config Server, helped us demonstrate our points about Cloud Native applications.

While looking at Kubernetes, we knew that there were some overlaps: all of the Early Access releases and examples contained dependencies on client libraries to connect to these infrastructural services. After getting the initial examples running with Docker Compose, we started testing on Minikube, but we quickly realised that we were better off working in real-life clusters than in local VMs. While testing on real-life clusters such as GKE and AWS + Kops, we realised that we didn't need much of the Netflix OSS libraries, so we started moving away from them and began collaborating with the Spring Cloud Kubernetes project.

Once the PoCs were done, we defined a new set of APIs for the Runtime Services. These APIs needed to be defined at all levels: Activiti Core, the Java services, and the Activiti Cloud REST and message-driven endpoints. This was a very interesting exercise, because previous versions of Activiti didn't define public/private APIs that we could reuse. For these new APIs we focused on the runtime aspects first, but more APIs will come to cover other areas (such as data consumption, modeling, etc.).
At the same time we did two other important and related pieces of work, around HELM and Jenkins X. The team presented at a set of conferences and prepared the Trending Topic Campaigns workshop, with a heavy focus on building a real scenario that can be used to demonstrate the power of Activiti Cloud.

These large pieces of work (moving away from Netflix OSS, the new API definitions, HELM, Jenkins X and the workshop examples) took us 5 months to stabilise while we validated our ideas with the Java, containers and Kubernetes communities. During those 5 months it became clear that after Beta1 we couldn't keep working in the same way and remain efficient, so we decided to spend time on our internal workflows in order to accelerate the project's speed and to propagate those working practices to our consumers.

Present: Beta1

Beta1 is focused on the main runtime building blocks for Activiti Cloud applications in Kubernetes. These building blocks can now be consumed using the provided HELM charts as templates for building custom applications. These HELM charts have been tested on AWS + Kops, GKE and PKS (EKS is coming). If you are interested in testing on OpenShift, Azure or any other cloud provider, please get in touch; we are more than happy to help.

As mentioned in the release post, these building blocks are: Runtime Bundle, Audit & Query Services. These services expose REST and message-driven endpoints using Spring Cloud Streams (we test with RabbitMQ as our message broker). They work on top of a common infrastructure composed of a gateway, a Service Registry, Config Maps and Keycloak for SSO and IDM.

If we build for the Cloud, we need to work in the Cloud. In the Cloud not everything is perfect; we have the basics now, but there is a huge road ahead to simplify the experience for our customers. Jenkins X helped a lot on this front, and for that reason we are collaborating with them to improve their tools for all of us.

Because Beta1 targeted Kubernetes deployments, the release is not supposed to support other deployments (local Spring Boot / Docker) as-is. Docker Compose is currently not working due to the fact that we moved away from the Netflix OSS libraries. We can still support a Docker Compose scenario for getting-started experiences, but we will need customised Docker images for that (adding extra Netflix OSS dependencies for such an environment). Get in touch if you want to help us on that front.

As you might have noticed, because Beta1 was focused on the Runtime Services, the Modeling services were included but need more polishing. The Application Service, which is not considered a core building block, was also included, but there is still more work to be done for it to be stable.

If you want to get started with Beta1, we recommend the following blog posts:

Future: Beta2 (work in progress)

We are aiming to have Beta2 released by the end of September, and as mentioned before, we are placing a lot of emphasis on our community process for building these releases using a Continuous Deployment and Continuous Delivery approach. We believe that this change is crucial for 2 big reasons:

  1. It will allow the community to evolve faster, allowing innovation streams to be included in releases sooner, enabling different components to evolve at their own pace, and freeing engineering time to work on features instead of releases.
  2. Similar practices will be needed by whoever wants to consume these tools. We want to make sure that we lead the way (by eating our own dog food and generating recommendations and examples based on our experience) for our community and private customers.

For Beta2 we are aiming to have a set of CI/CD pipelines for all our libraries and building blocks, as well as an environment where we can test, in a continuous deployment fashion, the impact of our changes at every level of our stack. Hence Beta2 will include a larger set of acceptance tests, which will be validated after any pull request gets merged at any level of the chain.
We are aiming to change the versioning strategy for Activiti Core & Activiti Cloud so that each library and building block can evolve independently, without being tied to a single version for all of our components. We will use semantic versioning for all our artefacts (Major.Minor.Patch), and we should be able to generate and test a new release for every Pull Request that gets merged into our develop branches. We are still defining some of these details, so expect a new blog post on this subject soon.
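As a rough illustration of the Major.Minor.Patch scheme, a version bump per merged Pull Request could look like the sketch below. This is illustrative only, not our actual release tooling, and the default patch-bump rule is an assumption:

```python
def bump(version: str, part: str = "patch") -> str:
    """Bump one component of a Major.Minor.Patch version string."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"   # breaking change: reset minor and patch
    if part == "minor":
        return f"{major}.{minor + 1}.0"  # new feature: reset patch
    return f"{major}.{minor}.{patch + 1}"  # default: patch bump per merged PR
```

For example, a merged Pull Request on a library at 7.1.2 would produce 7.1.3, while its consumers stay free to upgrade at their own pace.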

We aim not only to have basic (conformance) examples plus their acceptance tests running in an environment 24/7, but also to have our Trending Topic Campaigns workshop being used to check that new releases can support more complex scenarios.

For Beta2 we are also spending time on our Modeling Services, and we expect to release a new Modeling Application along with our refined runtime building blocks, to demonstrate an initial full cycle of building Activiti Cloud applications from modeling to runtime in Kubernetes.


Summary and Future (Post Beta2)

After Beta2 our focus will be on hardening our core services and simplifying the experience of building Activiti Cloud applications for developers. There are some big topics that we want to cover before going to Release Candidate, including:

  • Further refinements to our CI/CD approach; we know that we cannot get it right the first time
  • Istio is the most natural option in Kubernetes to support cross-cutting concerns in a uniform way, so we want to make sure that we do some initial testing with it for our Cloud deployments
  • Distributed BPMN Timers should be covered and added to our acceptance tests suite
  • Spring Cloud Kubernetes enhancements for Distributed Timers
  • Trending Topics Campaigns example / workshop running 24/7 in an environment.
  • Performance Tests introduction (still under discussion)
  • Pluggable Pipelines (still under discussion)

I hope that this blog post gives you an overview of what is going on in the Activiti Core and Activiti Cloud projects. If there is something you are interested in that is not covered here, drop us a message or join our Gitter channel and we will be more than happy to expand on the information provided here.

Alfresco Process Services (APS) 1.9 provides support for authentication through Keycloak. The integration with Keycloak gives us access to many advanced Identity Provider features, such as SAML 2.0, OAuth, OpenID, Identity Brokering and User Federation. In this article, we will review the integration of APS and AIS (Alfresco Identity Service) in a multi-domain LDAP environment.

With the introduction of AIS into the architecture, APS can now support authentication through multiple LDAP domains, or other federated Identity Providers. However, in order to use the APS integration with AIS, users are required to be preloaded into APS. Currently the APS-AIS integration provided in APS 1.9 does not support user synchronization, so users will need to be loaded into APS by other means, such as the LDAP User Sync feature that APS supports.

In the following article, "User Synchronization in APS from Keycloak," I capture the technical details of synchronizing users and groups into APS from Keycloak. The article also includes a functional APS module that can be installed or adapted to similar use cases.

Start Signal Event with REST example

Using Alfresco Process Services powered by Activiti, this example will demonstrate how to throw a signal in the global scope of the tenant for other processes to catch.


I am going to show one example of how to use a signal to start a process, but there are several ways to use the signal event.

You may choose to send a signal via REST, instead of starting a process directly with a REST call, when there are several processes you wish to start at the same time or when you don't know the ID of the process you want to start.


Configuring the REST call

  1. Define the REST header authentication under "Basic Auths" in the Identity Management > Tenants > Endpoints section.
    Defining the Basic Auth header
  2. Define the REST endpoint under "Endpoints" in the Identity Management > Tenants > Endpoints section. For the Protocol, Host, and Port sections, enter the relevant information for your activiti-app configuration.
    Defining the REST endpoint
  3. In your process, add a REST call task activity.
  4. Configure the REST call task:
  5. Request mapping:

      The JSON of the request mapping should be similar to:

         "value":"This is a string"

      The values are:
         signalName - The name of the signal you are throwing and catching. You will need to define this in the process that you want to catch the signal.
         tenantId - The ID of the tenant in which to throw the signal. In a non-MT environment, the tenantId will be "tenant_1".
         async - Whether the throw should be executed asynchronously.
         variables - Any variables you want to pass to the catch events. You will need to define these in the catch process.

   6. Request header:

         Header name: Content-Type
         Header value: application/json

   7. Endpoint:
         HTTP method: POST
         Base endpoint: http://<aps_host>:<aps_port>
         Rest URL: /activiti-app/api/runtime/signals?tenantId=<your_tenant_id>

Defining the step REST endpoint

REST configuration is complete
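Outside of the APS REST call task, the same signal throw can be sketched as a plain HTTP request. The endpoint, header and body shape below mirror the steps above; the host, tenant id, signal name and variable are placeholders, and this is an illustration rather than a definitive client:

```python
import json
from urllib import request

def build_signal_request(base_url, tenant_id, signal_name, variables,
                         async_=False):
    """Build the POST request that throws a global signal, mirroring the
    endpoint, header and request mapping configured in the steps above."""
    body = {
        "signalName": signal_name,  # must match the catching process' signal
        "tenantId": tenant_id,      # "tenant_1" in a non-MT environment
        "async": async_,
        "variables": variables,     # passed on to the catch events
    }
    url = f"{base_url}/activiti-app/api/runtime/signals?tenantId={tenant_id}"
    return request.Request(url,
                           data=json.dumps(body).encode("utf-8"),
                           headers={"Content-Type": "application/json"},
                           method="POST")

# Placeholder values for illustration only:
req = build_signal_request("http://localhost:8080", "tenant_1", "my-signal",
                           [{"name": "value", "value": "This is a string"}])
```

Actually sending `req` (e.g. via `urllib.request.urlopen`) would additionally require the basic-auth credentials configured under "Basic Auths".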


Configuring the Start Signal Event

  1. In the editor for the process that will catch the signal, open up the signal definition and add a new signal:
    Defining the Signal
  2. The signal name, defined under the "Signal Definitions", needs to match what you defined in the REST call, or vice versa:
    Defining the Signal values
  3. The scope needs to be global
  4. Next, add a Start Signal Event to the process
  5. Under the configuration for the event, select your defined signal from the drop down under Signal Reference
    Defining the Signal reference
  6. If you are passing variables with the signal, as defined in your REST call, define those variables in the global space of the process
  7. Build out the rest of your process



You can find sample apps and processes on my GitHub.

The example files may not work with the community edition of Activiti, but the concepts are the same.




Last week we started a major clean-up of our repositories to better decouple each project's responsibilities. This will enable us to build and version each module individually, allowing faster development cycles. This week we will be setting up these pipelines to test the whole process, for our libraries first and then for our service layer. We are still looking for Beta testers; if you want to get involved, we recommend trying the following two tutorials and getting in touch via Gitter if you want to jump into more advanced topics:

@mteodori - holidays -

@lucianoprea - holidays -

@constantin-ciobotaru - holidays - 

@balsarori worked on the initial design of the Forms Router components.

@igdianov worked on pipelines for Activiti Cloud Applications.

@cristina-sirbu continued working on the Activiti Cloud Events Adapter Service. Changed the communication with RabbitMQ (using Spring Cloud Stream). A Helm chart for this is in progress. Also discovered the new Activiti Full Example app.


@daisuke-yoshimoto is working on applying the new audit service API interface to

@almerico created a presentation cluster based on the new Activiti Cloud version.

@miguelruizdev worked on the TTC workshop, trying it out against the Beta1 version and identifying the changes needed for it to work again.

@ryandawsonuk set up acceptance tests to run in a new cluster against a JX-deployed example as part of a review of CI/CD practices. Submitted PRs to Jenkins-X draft packs to help keep Activiti charts close to JX default practices. Helped Elias with package name refactoring to refine API clarity/readability.

@erdemedeiros worked with @salaboy and @ryandawsonuk to get the Activiti API and Activiti Cloud API extracted to dedicated repositories. Maven modules have been renamed and the package structure has changed.

@salaboy worked on the next round of refactoring with @erdemedeiros and @ryandawsonuk; we aligned package names, Maven module names and repositories to follow the same conventions. This will make things easier to document and maintain in the long run. These changes are also helping us move towards Continuous Delivery, and we will continue our experimentation with Jenkins X to build each repository.

Get in touch if you want to contribute:

We are looking forward to mentoring people on the technologies that we are using and to helping them submit their first Pull Request ;)

Last week we worked on the release of Activiti Core and Activiti Cloud Beta1. We wrote some Getting Started guides, and we tested the Maven artifacts, example services and deployment descriptors quite extensively. We got the Beta1 release deployed on PKS and also verified that the released example services work in a Minikube deployment using the same Activiti Cloud Full Example HELM charts. Congrats to everyone involved in the release, and thanks a lot to all of you who are providing early feedback on this release.

This week we already started some large refactorings of the source code organization, improving our working practices to fully switch to Continuous Deployment. We aim to do a Beta2 release quite soon, and we will update the Gitbook and the Roadmap accordingly this week.

@almerico upgraded our Google cluster according to the latest release.

@miguelruizdev worked on the overall testing of the third API refactoring, fixing tests in the acceptance test security module, reporting issues on faulty endpoints for future fixes beyond Beta1, and, along with @erdemedeiros, creating new clusters within the company organization in GKE using Jenkins X.

@balsarori started initial Form Router discussions.

@mteodori - holidays - 

@cristina-sirbu worked on the Activiti Cloud Events Adapter Service (a Spring Boot application which gets the events from RabbitMQ, enriches them into a new format, and pushes them to an ActiveMQ instance). Communication between RabbitMQ and ActiveMQ is done; enrichment of the events is still in progress.
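The enrichment step in such an adapter can be sketched as a plain transformation function. In the actual service it would be wired through Spring Cloud Stream bindings between the RabbitMQ input and the ActiveMQ output; the class, field names, and added metadata below are illustrative assumptions, not the service's real event format:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the adapter's enrichment step: take a raw engine
// event (as consumed from RabbitMQ) and produce the enriched payload that
// would be published to ActiveMQ. Field names are hypothetical.
public class EventEnricher {

    public static Map<String, Object> enrich(Map<String, Object> rawEvent) {
        Map<String, Object> enriched = new HashMap<>(rawEvent);
        // Add metadata that downstream consumers of the new format need.
        enriched.put("adapterVersion", "Beta1");
        enriched.put("enrichedAt", System.currentTimeMillis());
        // Normalize the event type into the new format.
        Object type = rawEvent.get("eventType");
        enriched.put("eventType", type == null ? "UNKNOWN" : type.toString().toUpperCase());
        return enriched;
    }

    public static void main(String[] args) {
        Map<String, Object> raw = new HashMap<>();
        raw.put("eventType", "taskCreated");
        raw.put("processInstanceId", "proc-1");
        Map<String, Object> out = enrich(raw);
        System.out.println(out.get("eventType"));         // TASKCREATED
        System.out.println(out.get("processInstanceId")); // proc-1
    }
}
```

Keeping the transformation as a pure function like this makes the "enrichment still in progress" part testable independently of either message broker.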

@constantin-ciobotaru worked on adding import/export endpoints for applications and models in the modeling service.

@ryandawsonuk added Minikube instructions to the activiti-cloud-full-example chart. Updated the audit-service module to follow the query-service module in using the newly refactored versions of the services for security and identity. Added tests for an example cloud connector to the Serenity-based BDD acceptance tests and updated the tests so that the security-policies tests work together with the deployment conditions for runtime tests. Updated the release scripts for building and tagging the example-level repositories using their new structure.

@erdemedeiros worked with @Salaboy to get all the API-related PRs ready to merge. Set up a GKE cluster using Jenkins X to test the latest changes in the APIs.

@salaboy worked on the release process and on creating tutorials and examples.

Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using so they can submit their first Pull Request ;)

I am happy to announce that, after more than a year of hard work, and particularly intense work over the last four months, we are ready to release the first Beta version of all the Java artifacts. You can consume them all from Maven Central.

During this release we focused on providing the first iteration of Activiti Cloud, and from now on we will harden each service incrementally. We are happy with the foundation, and now it is time to test and polish the next generation of Activiti. We are eager to get early feedback, so please get in touch via Gitter if you find any problems or have questions about the released artifacts.

What is included in Activiti Core and Activiti Cloud Beta1?


Beta1 is focused on providing the foundation for both our Java (embedded) and Cloud Native implementations.

This release is divided into two big categories: Activiti Core and Activiti Cloud Building Blocks.

Activiti Core

On the Core side, we have provided a new set of APIs for the Task and Process Runtimes that enable a simple migration from the embedded to the Cloud approach. We believe this is a fundamental change needed to provide long-term API support focused on runtimes and following the Command Pattern. We also added security, identity, and security-policy abstractions to make sure that you can quickly integrate with different implementations when you embed the process and task runtimes into your Spring Boot applications.

Activiti Core is now built on top of Spring Boot 2, and you should expect further integrations with it. There are plans to adopt the reactive capabilities included in Spring 5. A big part of refactoring the Core layer was about simplifying the runtimes to make sure that they don't clash with other frameworks' responsibilities.

The new set of Runtime APIs was conceived to promote services that are stateless and immutable, which allows us to write testing, verification, and conformance tools to guarantee that your business processes are safe and sound to run in production environments. These Runtime APIs take on new responsibilities and don't deprecate the Activiti 6.x APIs. They are replicated at the service level (REST and message-driven endpoints) in the Activiti Cloud layers.
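The command-pattern, immutability-first style of these Runtime APIs can be illustrated with a small self-contained sketch: each operation is described by an immutable payload object built through a fluent builder and handed to the runtime as a single command. Activiti 7 does ship payload builders in this spirit, but the class below is a simplified illustration, not the actual Activiti code:

```java
// Simplified illustration of the command-pattern style promoted by the new
// Runtime APIs: an immutable payload carries all the data for one operation
// and is created through a fluent builder. Names are illustrative only.
public final class StartProcessPayload {

    private final String processDefinitionKey;
    private final String businessKey;

    private StartProcessPayload(String processDefinitionKey, String businessKey) {
        this.processDefinitionKey = processDefinitionKey;
        this.businessKey = businessKey;
    }

    public String getProcessDefinitionKey() { return processDefinitionKey; }
    public String getBusinessKey() { return businessKey; }

    public static Builder start() { return new Builder(); }

    public static final class Builder {
        private String processDefinitionKey;
        private String businessKey;

        public Builder withProcessDefinitionKey(String key) {
            this.processDefinitionKey = key;
            return this;
        }

        public Builder withBusinessKey(String key) {
            this.businessKey = key;
            return this;
        }

        public StartProcessPayload build() {
            return new StartProcessPayload(processDefinitionKey, businessKey);
        }
    }

    public static void main(String[] args) {
        // A runtime would receive this payload as one self-describing command,
        // which is what makes the same operation easy to expose as a REST or
        // message-driven endpoint in the Cloud layers.
        StartProcessPayload payload = StartProcessPayload.start()
                .withProcessDefinitionKey("my-process")
                .withBusinessKey("order-42")
                .build();
        System.out.println(payload.getProcessDefinitionKey()); // my-process
    }
}
```

Because the payload is immutable and carries everything the operation needs, the same command object can be serialized over REST or a message broker, which is exactly why the APIs translate cleanly to the service level.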

If you want to get started with Activiti Core 7.0.0.Beta1, take a look at the following tutorials, which highlight ProcessRuntime and TaskRuntime API usage in Spring Boot 2 applications.

Getting Started with Activiti Core Beta1

Activiti Cloud Building Blocks

On the Activiti Cloud front, we are providing the initial Beta1 versions of our foundational building blocks:

  • Activiti Cloud Runtime Bundle
  • Activiti Cloud Query
  • Activiti Cloud Audit
  • Activiti Cloud Connectors

These building blocks were designed with a Cloud Native architecture in mind, and all of them are built on top of Spring Cloud Finchley. Together, these components form the foundation for a new breed of Activiti Cloud applications which can be distributed and scaled independently. All these building blocks can be used as independent Spring Boot applications, but for large-scale deployments they have been designed and tested using Docker containers and Kubernetes as the main target platform. We have been using Jenkins X to build components that need to be managed, upgraded, and monitored in very dynamic environments.


All these building blocks understand the environment in which they are running, so they can work with components such as SSO/IDM, a Service Registry, Data Streams, a Configuration Server, and Gateways. By being aware of the environment, our applications can be managed, configured, and scaled independently.

It is important to note that we have also included some experimental services in this release:

  • Application Service
  • Modeling backend services
  • GraphQL support for the Query Service / Notification Service
  • An alternative MongoDB implementation for the Audit Service


If you want to start with the Cloud Native approach, we recommend looking at the Activiti Cloud HELM charts for Kubernetes:


These example HELM Charts can be used to get all the services up and running.


Check the following blog post if you want to get started with Activiti Cloud on GKE.

Getting Started with Activiti Cloud Beta1

What is coming next?

In this release we didn’t include any UI bits; we focused on the core services for building Cloud Native applications. The following Beta2 and RC releases will include new backend services to support the new Activiti BPMN2 Modeler application. This new set of modeling services and the new Activiti Modeler application are currently being designed to enable our Cloud deployments by providing a simplified packaging mechanism for Activiti Cloud applications.


In future releases we will be hardening our building blocks, improving documentation and examples, and adding new services to simplify implementations. We are actively looking at tools such as Istio and Knative to provide out-of-the-box integration with other services and tools that rely on such technologies.


We are also contributing and collaborating with the Spring Cloud community to make sure that our components are easy to use and adopt, rather than clashing with any of the other Spring Cloud infrastructure components. This also involves testing our core building blocks on Pivotal Container Service (PKS), which is based on Kubernetes.


We want to provide a continuous stream of community innovation, and for that reason we will change how the Core and Cloud artifacts are versioned and released. We are moving to a Continuous Delivery approach for each of the libraries that we build, following the practices described in the Accelerate book. We will, of course, update the structure of our projects, but the services and APIs will remain the same.


You should expect some naming changes between the Beta and RC artifacts, as we are still polishing new service names and configurations. This will all be documented in the release notes for each release. You can always keep an eye on our public Roadmap document (which will be updated in the following days) for the future Beta, RC, and GA releases.


As always, this is a great moment to join the community and provide feedback. If you are looking into building Cloud Native applications, we are more than happy to share and get feedback from people going through the same journey. Feel free to join our daily open standups or ping us on our Gitter channel.