
Last week we spent a lot of time refining examples and upgrading our infrastructure to leverage Kubernetes services by using the Spring Cloud Kubernetes projects. We started working on upgrading the BluePrint: Trending Topic Campaigns in order to use the new set of dependencies, and on improving the Application Service to work in such environments. We are getting quite close to finishing the new Process & Task Runtime Java APIs, and we want to make sure that we refactor all the services to consume this new layer. A lot of refinement is happening in our Spring Boot Starters to make sure that we clean up all the rough edges and provide Auto Configurations where possible to reduce the burden on the consumer side.


@MiguelRuizDev worked on implementing logic in the Resource Assembler in the Task Service and on action buttons dynamically displayed in the corresponding UI.

@daisuke-yoshimoto presented at JJUG CCC Spring 2018 with @salaboy

@constantin-ciobotaru added more test coverage for the organization controllers and the process modeling controllers, and completed the integration between APS process modeling and the Activiti organization service

@pancudaniel7 investigated the Kubernetes deployment setup

@lucianoprea did some investigation into Helm charts

@ryandawsonuk created PRs to add auto-configurations for more of the modules at the common level under #1850. Got the Trending Topics blueprint working with Jenkins X and submitted PRs to make it easier for the modules to be configured from the parent-level values.yaml and to work together with RabbitMQ.

@erdemedeiros made sure that all the REST controllers are using the new Java API: covered the ProcessInstance controller and the variable controllers.

@salaboy presented at JJUG CCC Spring 2018 and improved the Trending Topic Campaigns Blueprint for the Beta1 release. Also managed to sync face to face with our community rock star member @daisuke-yoshimoto about the Cloud approach for BPMN Signals, Messages and Timers.

This week we started migrating our core examples to follow the same approach as our BluePrint, adopting some of the Jenkins X conventions for service descriptors, Helm charts and coding standards.

Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using so they can submit their first Pull Request ;)

We (@daisuke-yoshimoto and myself) presented our lessons learned about Spring Cloud, Docker and Kubernetes while designing, implementing and testing Activiti Cloud.

<pictures coming soon>

Here are the slides and the source code repositories to run the demo that we showed during the presentation, which included Jenkins X and Spring Cloud Kubernetes.

[slideshare id=98864259&doc=springcloud-docker-kubernetes-180526074218]

Github Projects

These simple projects demonstrate the core technology that we are using inside Activiti Cloud to make it Cloud Native.

In the following blog post you can find instructions about how to get these services running in your cluster:

Also, if you are interested in Jenkins X, you can find some instructions to get your cluster up and running here:

Hi everyone, as part of the Activiti Cloud project we have been testing our Cloud Native building blocks extensively on Kubernetes clusters. We started by using MiniKube and that worked well for some time, but now we are moving to real clusters for our development, testing and continuous deployment approach (Activiti Cloud & Jenkins X: Blueprint in AWS and GCE). We believe that as an Open Source project we should demonstrate, share and highlight best practices regarding software evolution, maintenance and testing. For that reason, we started looking into Jenkins X and Spring Cloud Kubernetes to fill some of the gaps in how to maintain the Spring Cloud programming model when we target Kubernetes as our main deployment platform. In this blog post I will share an example that is evolving based on Ryan Dawson’s Minions in Minikube blog post.


The Scenario

As described by Ryan in his blog post, we have minions, and they are workers. Imagine your services here: they will perform a certain job, and each minion is just a simple Spring Boot app. We can have tons of them, and they can be configured in different ways to do different things. Minions can and want to do some work, but they need a boss to tell them what to do.


This means that in this scenario, minions will look for a Boss until they find him/her and then ask for a particular mission to work on. If there is no boss, they will not know what to do.

If we translate this to a microservice architecture, we will have services that use a service discovery mechanism to find another service that has been labeled in a certain way to indicate that it provides certain functionality, in this case a Boss service. We wanted to make sure that if we run in Kubernetes we don’t use Eureka/Consul, mostly because Kubernetes already provides a built-in Service Registry. In most cases, service discovery is handled by just the “serviceId”, which is usually a string that allows your service to connect to another service. For cases when this is not enough, for example finding a service based on its metadata/selector, you will need to query the service registry and filter the available services based on this extra information to get hold of the “serviceId” that you are looking for.
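As a rough sketch of that metadata-based filtering in plain Java (with a hypothetical ServiceEntry stand-in rather than the real Spring Cloud Kubernetes types):

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of filtering registry entries by a metadata label,
// the same idea that Kubernetes service labels enable via the DiscoveryClient.
// ServiceEntry is a hypothetical stand-in, not a Spring type.
public class BossLookup {

    // A minimal registry entry: a serviceId plus its metadata (labels).
    record ServiceEntry(String serviceId, Map<String, String> metadata) {}

    // Return the serviceIds whose metadata contains the given label.
    static List<String> findByLabel(List<ServiceEntry> registry, String key, String value) {
        return registry.stream()
                .filter(e -> value.equals(e.metadata().get(key)))
                .map(ServiceEntry::serviceId)
                .toList();
    }

    public static void main(String[] args) {
        List<ServiceEntry> registry = List.of(
                new ServiceEntry("spring-cloud-k8s-minion", Map.of("type", "minion")),
                new ServiceEntry("spring-cloud-k8s-boss", Map.of("type", "boss")));
        System.out.println(findByLabel(registry, "type", "boss")); // [spring-cloud-k8s-boss]
    }
}
```

In the real services the entries come from a DiscoveryClient backed by the Kubernetes Service Registry; the filtering idea is the same.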

In most scenarios, these services need to be configured, and we want to avoid deploying the Spring Cloud Config Server if we already have ConfigMaps provided by Kubernetes.

Last but not least, if we are deploying multiple minions and bosses, we want to make sure that we provide a single entry point for our clients to interact with these services. For that reason the example includes the new (and shiny) Spring Cloud Gateway, which uses a DiscoveryClient implementation to locate services and automatically register new routes.

Demo Flow

We will start by deploying a single minion service. This service has a scheduled task to look for a Boss that provides something to work on.

If the minion finds a Boss service, it will work on the task provided by the boss for a period of time (another scheduled task finishes the task at hand). Then it will try to find a boss again. In order to find a Boss service we will use the DiscoveryClient to query the Kubernetes Service Registry, using the metadata (labels/annotations) to filter out which services match “type: boss”.
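The lookup loop can be sketched in the same spirit (again with illustrative names, not the project's actual code): on each tick the minion asks the registry for a boss, and either works on the mission or keeps looking.

```java
import java.util.Optional;

// Hypothetical sketch of the minion's polling behaviour; in the real app
// this would be a Spring @Scheduled method using the DiscoveryClient.
// BossRegistry is an illustrative stand-in for the registry lookup.
public class MinionLoop {

    interface BossRegistry {
        Optional<String> findBossServiceId();
    }

    // One scheduled tick: find a boss and work, or report that none was found.
    static String tick(BossRegistry registry) {
        return registry.findBossServiceId()
                .map(id -> "working on mission from " + id)
                .orElse("no boss found, still looking...");
    }

    public static void main(String[] args) {
        BossRegistry empty = Optional::empty;
        BossRegistry withBoss = () -> Optional.of("spring-cloud-k8s-boss");
        System.out.println(tick(empty));    // no boss found, still looking...
        System.out.println(tick(withBoss)); // working on mission from spring-cloud-k8s-boss
    }
}
```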

We can tail the logs of the minion service and see that it keeps looking for a Boss.

The next step is to deploy a Boss service. As soon as this service gets deployed, the minion will find it via service discovery and will be able to request a new mission by executing a POST request to the recently discovered Boss service. The Boss service will return a task for the minion to perform. We can also tail the logs of the boss service to see that it is receiving requests from minions.

At all times, we can access each of the individual services that are deployed and running via the Gateway service. This gateway is also using the Discovery Client to find which services have been deployed, automatically registering them as routes. If a service goes down, its route is automatically unregistered.

You can access all the registered routes by issuing a GET request to:

http://<gateway url>/actuator/gateway/routes

There you should be able to find two routes defined, one for the minion and one for the boss:

http://<gateway url>/spring-cloud-k8s-minion/

http://<gateway url>/spring-cloud-k8s-boss/

The Projects

You can find these projects in my GitHub account:

You can fork these projects and build them locally; they depend on the Spring Cloud Kubernetes projects to enable Service Discovery, Configuration and Client Side Load Balancing with Ribbon.

You can make these projects run in MiniKube, but I recommend trying them out in a real cluster such as AWS + Kops or GCE (where you can get a free account to try stuff out).

You can even use Jenkins X (which is what I'm using in the video) if you want to forget completely about how to build the projects, build the Docker images, and create the Kubernetes descriptors and Helm charts.

In the following video I will be describing how these services will interact and how we can use the same concepts defined in Spring Cloud inside a Kubernetes deployment.



(Important) Notes about Kubernetes Deployments

There are some important things that you will need to understand if you are deploying these services into a Kubernetes Cluster. These notes are separated by projects so you can follow the changes in their own GitHub repository:





When we look at the minion project, it consumes data from the Service Registry and from ConfigMaps.

All the Helm charts and deployment descriptor templates were generated by the Jenkins X import process, so we needed to tweak some of these descriptors in order to allow this service to interact with Kubernetes services.

To leverage the Spring Cloud Kubernetes Config module, we included a ConfigMap configuration inside the Helm chart deployment descriptor templates directory.

This allows us to read configuration from this ConfigMap following the same programming model that we used with the Spring Cloud Config Server, where we can @Autowire an object-oriented representation of the properties that we want to configure.
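As an illustrative sketch (the property name here is hypothetical, not taken from the repository), such a ConfigMap typically looks like this, with metadata.name matching the application's spring.application.name so that Spring Cloud Kubernetes Config can find it:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: spring-cloud-k8s-minion   # must match spring.application.name
data:
  application.properties: |
    # hypothetical property bound to a @ConfigurationProperties bean
    minion.work-duration-seconds=30
```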

This ConfigMap will be deployed alongside our service by the Jenkins X pipeline.

In the same way that we added the ConfigMap, we needed to add some security configurations (service account and role bindings) to enable our service to consume Kubernetes internal services (ConfigMaps and the Service Registry). These services are not open for anyone to consume, so we need to grant Spring Cloud Kubernetes access to some internal resources.
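A minimal sketch of what such a Role and RoleBinding pair can look like (names are illustrative; the real descriptors live in the project's Helm chart templates):

```yaml
# Grant read access to the resources Spring Cloud Kubernetes needs
# (config and discovery); names here are illustrative.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spring-cloud-k8s-reader
rules:
  - apiGroups: [""]
    resources: ["configmaps", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spring-cloud-k8s-reader-binding
subjects:
  - kind: ServiceAccount
    name: minion-service-account   # hypothetical ServiceAccount name
roleRef:
  kind: Role
  name: spring-cloud-k8s-reader
  apiGroup: rbac.authorization.k8s.io
```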

You can look at these configurations here:

(This is very similar for the gateway)

Another small tweak to the service and deployment descriptor templates was made to remove the release name from the published service name in the registry. This keeps the expected service name independent of the environment, following the same approach as working outside Kubernetes with Spring Cloud.



On the Boss side, there are not many changes because the Boss is quite simple and only accepts requests from minions. As an extra exercise, you can try adding configuration using ConfigMaps and connecting back to minions using client side load balancing with Ribbon.

In the previous video you saw how we imported the project using Jenkins X; Helm charts (with descriptor templates), a Skaffold.yaml and a Jenkinsfile were created. Then I just customised the service names to remove the release name (which contains the environment prefix).



The Gateway is also a simple project, but it has some configurations that are important to be highlighted.

We created this Gateway using the new Spring Cloud Gateway implementation and just added the Spring Cloud Kubernetes Discovery implementation. Then we customised some of the properties to enable automatic route registration, as you can see here:


Notice also that we changed the default pattern from "'lb://'+serviceId" to "'http://'+serviceId" to get it working. I’m currently investigating whether this is needed and how to support client side load balancing in the Gateway.
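For reference, the relevant Spring Cloud Gateway properties look roughly like this (an illustrative application.yml fragment, not the exact file from the repository):

```yaml
spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true                          # create a route per discovered service
          url-expression: "'http://'+serviceId"  # default is "'lb://'+serviceId"
```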

Also, we added the serviceAccount, role and rolebinding as explained in the minion project for service discovery.


We are making sure that Activiti Cloud can leverage these capabilities, because we require dynamic route registration for Activiti Cloud Applications and want to provide the option of centralized (and watchable) configuration for our services. It is important to understand that there are several ways of doing Service Discovery, and this one might only be appropriate for certain advanced and well-justified use cases. If you are not trying to solve a use case where you need dynamic (metadata-based) service filtering, or building some kind of router/gateway, you should use the “serviceId” in order to interact safely with other services. This simplified approach reduces the amount of configuration needed in your services. For such simpler use cases we are looking at the Istio Service Mesh to provide, control and monitor our service-to-service interactions.

We are comfortable using ConfigMaps as our mechanism to configure our services and containers, and client side load balancing with Ribbon and Hystrix fallbacks is something that you might want to include in your services. We look forward to contributing further to the Spring Cloud Kubernetes project and making it better for everyone in the community.

If you have questions about this example, feel free to drop me a message here or join our Gitter channel to get in touch.


New website!

Posted by rallegre Employee May 22, 2018

We are happy to announce that the new website for Activiti is now live.


This new version has an updated look and feel that reflects our new cloud native approach to Business Automation. It is now hosted on GitHub so we can iteratively improve it following an open community approach! The source code is publicly available, so you can send your PRs with suggestions and changes.


We want this new website to be the one place for all community members to learn about Activiti, get the latest news, share highlights and award top contributors.


This is just the first step, and very soon there will be more pages. Stay tuned!

We value your feedback, please share it with us on Gitter.

Last week we worked hard on moving our services forward and making sure that they can be deployed to an AWS Kubernetes cluster as well as a GCE Kubernetes cluster. This will let us validate our examples and blueprints more consistently release after release.

We are looking into plugging our Acceptance Tests into our pipelines to start covering more complex scenarios.


@MiguelRuizDev worked on improving the Task Service Incubator project and added a UI using Angular 6.

@mteodori #1894 updated to Spring Boot 2.0.2, #1895 updated to Spring Cloud RC1, and fixed related false positives for vulnerabilities

@igdianov defined the next steps for the GraphQL integration and module dependencies to make it more generic.

@daisuke-yoshimoto working on the Cloud BPMN signal mechanism examples and acceptance tests.

@constantin-ciobotaru worked on the organization controllers

@pancudaniel7 looked more into the concepts behind the Activiti Cloud infrastructure and reviewed the Activiti Cloud GitBook

@lucianoprea used the Activiti Cloud infrastructure inside a template project along with logging and tracing

@ryandawsonuk made it possible to run the apps service in Kubernetes by putting it in a new example app. In the docker-compose example it's embedded in the registry, but both register on the gateway with the same route so that the external API is the same. Switched the Activiti Keycloak integration module to using autoconfiguration. Reviewed and merged changes from @igdianov to make GraphQL notifications with websockets available through the gateway.

@erdemedeiros made sure that the runtime bundle relies on the new Java API: covered part of the process instance and process definition REST APIs.

@salaboy worked on getting the new Spring Cloud Gateway up and running with the Spring Cloud Kubernetes Discovery Client for dynamic route discovery.


This week we look forward to migrating our quickstart to follow our new BluePrint structure. Also, Daisuke (@daisuke-yoshimoto) and myself (@salaboy) will be presenting at the JJUG CCC Spring 2018 conference in Tokyo, Japan. If you are around Tokyo, feel free to drop me a message via Twitter and we can catch up while I’m around.

Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using so they can submit their first Pull Request ;)




Hey there, in this blog post I wanted to touch on using Activiti Cloud with Jenkins X. As mentioned in previous blog posts, we want to make sure that Activiti Cloud provides the right tools for you to build, deploy and scale new Activiti Cloud Applications following standard practices around the technologies required to deploy to public clouds that use Kubernetes.

Having automated pipelines which can:

  1. Build your domain specific project’s source code (Jar)
  2. Publish to Nexus (or Artifactory)
  3. Create a Docker Image
  4. Publish it to a docker registry
  5. Create a set of descriptors (kubernetes and helm charts) following industry best practices
  6. Publish these descriptors into a helm chart repository (Chart Museum)
  7. Promote/provision these projects into an environment (namespace in a running kubernetes cluster)

That sounds way too complicated to build in-house, and at the same time it sounds like a must if you really want to provide a Continuous Integration and Continuous Deployment (CI/CD) approach in a serious way. Luckily for us, Jenkins X provides all this and much more. It also validates the approach that we have adopted in Activiti Cloud, where we build each individual piece in isolation and each piece is designed to behave and operate in a cloud native way, pushing forward our main objective of providing an industry-standard, modern platform to build business applications in a fast and interactive way.

In order to test Jenkins X capabilities I’ve migrated our Activiti Cloud Blueprint to a set of individual repositories so we can benefit from Jenkins X conventions.

Getting Started / Installing JX

Because I was just starting with Jenkins X, I basically needed to create a Kubernetes cluster and install JX. This is quite an easy procedure if you have access to an AWS account to play with. The Jenkins X CLI tools will use Kops to create and set up the cluster and install all the Jenkins X services in a Kubernetes namespace called “jx”.

So first of all download the JX CLI. I did that by using homebrew because I'm using Mac OSX:

brew tap jenkins-x/jx

brew install jx

Note: I evaluated the option of using MiniKube, but this is not even recommended in their docs. You can make it work, but it is unrealistic to try to run all the jx services plus all your own services locally.

The first step was to get hold of some AWS credentials (aka aws_access_key_id and aws_secret_access_key). Once I got those, I created a file called credentials with those values inside ~/.aws.

If you take a look at the JX docs, you will find that they recommend going with the configuration provided in a Cloud9 workshop (AWS's cloud IDE). I didn’t want to go in that direction for now, so I decided to configure the environment to just install JX.

 At this point I executed:

jx create cluster aws -n=activiti-cloud

This downloaded the AWS SDK to my local environment but the installation failed because I needed to set some things first. One important thing to know is that Kops will use an S3 bucket to store the cluster state, so you need to create that bucket first, and you can do this with a very simple command as soon as you have the AWS SDK installed with the right credentials:

export S3_BUCKET=kops-state-store-$(cat /dev/urandom | LC_ALL=C tr -dc "[:alpha:]" | tr '[:upper:]' '[:lower:]' | head -c 32)

export KOPS_STATE_STORE=s3://$S3_BUCKET

aws s3api create-bucket --bucket $S3_BUCKET

aws s3api put-bucket-versioning --bucket $S3_BUCKET --versioning-configuration Status=Enabled

This basically does four things:

  • Generates a bucket name with a random ID at the end (kops-state-store-<random>)
  • Exports a variable called KOPS_STATE_STORE to be used by Kops
  • Creates an S3 bucket with that name (note: outside us-east-1 the create-bucket call also needs a region/LocationConstraint)
  • Enables versioning on the bucket

Next you need to select which availability zones to use to set up your cluster; you should obviously choose the ones closest to your location (you can check them in the AWS Web Console). I used the following line to export the Availability Zones available to my account into an environment variable:

export AWS_AVAILABILITY_ZONES="$(aws ec2 describe-availability-zones --query 'AvailabilityZones[].ZoneName' --output text | awk -v OFS="," '$1=$1')"

After this, you should be able to execute again:

jx create cluster aws -n=activiti-cloud

While this is running you can check in your EC2 Section inside AWS Web Console how your cluster is being created and how your nodes are being restarted as part of the set up.

Once this is done, congrats: your cluster is up, it has Jenkins X inside, and your CLI is configured to work with it.

Some important things happen during the installation:

  • It installs the AWS SDK, Helm and Kops
  • It configures a token to work with your GitHub account
  • It creates two repositories in your GitHub account to configure the staging and production environments. Jenkins X treats these environments following the principles of dealing with infrastructure as code (GitOps), so these repositories will contain everything you need to recreate, roll back or upgrade these environments from scratch
  • Jenkins X by default installs the following services inside the “jx” namespace (also referred to as the dev environment):
    • Jenkins
    • Nexus
    • Docker Registry
    • ChartMuseum and UI
  • It creates two more namespaces called jx-staging and jx-production and sets up all the webhooks needed to monitor these repositories for changes and trigger the corresponding pipelines to apply changes to these environments
  • It binds the AWS external URL to <service>.jenkins.jx.<IP>

At the end of the installation it prints out the password for the admin user. Make sure you copy it and keep it safe somewhere.


Some things that might become handy to know:

  • You can of course delete the cluster and start again if things go wrong; to do that you can use Kops:

kops delete cluster <name> --state=s3://kops-state-store-<random> --yes

Notice that this is really time consuming, and there is no need to delete the cluster unless you want to change something in the cluster itself

  • If you want to completely uninstall Jenkins X you can run:

helm delete jenkins-x --debug --no-hooks --purge

  • The jx uninstall command does something similar, but it leaves the jx-staging and jx-production environments intact; it only uninstalls services from the jx namespace


Activiti Cloud Blueprint Demo

As mentioned in the introduction, the source code of each service inside our Trending Topics Campaigns BluePrint has now been refactored out to individual repositories prefixed with ttc-* in GitHub.

Screen Shot 2018-05-15 at 10.03.30.png

A quick recap of our BluePrint: Trending Topic Campaigns:

Screen Shot 2018-05-10 at 09.30.12

As you might remember (link), we had a global marketing company creating campaigns based on trending topics. They need to be quick and react fast to the current trending topics on different social media streams in order to promote their customers’ products. Different departments all over the world might be creating campaigns at the same time, and not only based on the Twitter feed: other social media feeds can be plugged into the infrastructure, so we need to make sure that our solution scales to support those requirements.

The following video shows how you can import these projects into Jenkins X and how to customise the Staging Environment to provide some of the infrastructural services required for the services to interact with each other, such as RabbitMQ, and PostgreSQL to store the Campaign’s Process Runtime state.

One configuration step that is required, until you can consume Activiti Cloud artifacts from Maven Central, is to add the Alfresco Snapshot Repository to the Jenkins X Nexus. In order to do that:

  • Run jx get urls (in the dev environment, which you can switch to with jx env dev)
  • Open the Nexus URL in your browser
  • Log in with admin and the password generated by Jenkins X during installation
  • Go to Repositories
  • Create a new maven2 proxy
  • Go back to the repository list and edit the one called maven group
  • Make sure that the Activiti repository belongs to the member repositories (meaning it needs to be in the right-hand column; by default a newly created repository is not inside the group).







I’m quite excited to see how easy it is becoming to get an Activiti Cloud Application running in a Cloud Native way on top of Kubernetes in real AWS and GCE clusters. Even though it is still quite early in the journey, we can clearly see the advantages of having these pipelines set up and managing how changes are applied to each of our services. It also makes it dead simple to add new campaigns, so we look forward to having this application running in a continuous deployment approach to test our Beta and future releases.

If you are interested in these projects and want to participate in our community, get in touch. We look forward to improving this BluePrint to exemplify the most common patterns that you will find in real-life scenarios, while we keep improving what Activiti Cloud can do to make business automation modern and as easy as possible.

Feel free to join us in our Gitter channel.

Last week we kept pushing to get our initial version of the Applications Service into our Activiti Cloud Example. We currently have a new repository containing the core logic for this service, and we are looking forward to testing the initial version along with the other components this week.


We are also making progress on the Process & Task Runtime API side, and now we are going through the exercise of updating our services to use this new API layer. This will improve maintenance and enable a future backward compatibility path.


We have also migrated the Blueprint scenario (Trending Topics Campaign) to use Jenkins X so it can be deployed to public clouds such as AWS, AKS and GCE. A blog post is coming soon after this weekly update. We also presented our lessons learned at GeeCon '18 in Krakow, Poland.


@MiguelRuizDev worked on the display of pageable elements in the Angular UI app.

@igdianov finished polishing up the GraphQL Subscriptions PoC, and now @ryandawsonuk is working on merging those changes into our develop branch.

@daisuke-yoshimoto is working on acceptance tests for the Signal Throw Event cloud behavior.

@constantin-ciobotaru worked on adding controllers to the organization service with support for both HAL and Alfresco output, replaced RestTemplate with a Feign client in the organization service so that APS is not forced to import the RestTemplate dependency, and worked on the modeling REST design in APS.

@lucianoprea did some investigation on #1893

@pancudaniel7 read up on the basics of Keycloak and deployed Keycloak in Kubernetes.

@ryandawsonuk updated the Activiti app service to use Kubernetes metadata and reconcile it with an apps descriptor in a ConfigMap. Also attended the London AWS Summit.

@erdemedeiros started using the new Java APIs inside the runtime bundle: this will help us evaluate what is missing there. Also attended the AWS Summit.

@salaboy presented at GeeCon 2018 in Krakow, Poland. You can find the slides and some pictures here:


We keep getting messages from community users who want to try out our new infrastructure and services. We totally recommend checking out our Activiti Cloud Examples GitHub repository and our GitBook’s Quickstart & Overview section:


Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using so they can submit their first Pull Request ;)

I had a great time in Krakow, Poland at GeeCon '18, the 10th edition of this highly recognised conference focused on Java. In general the conference was amazing: excellent venue, great weather, a beautiful city, great talks and an amazing Java community. You cannot ask for anything else.




My presentation was about what we have learnt in the past 9 months of heavy work on Activiti & Activiti Cloud. I covered a brief summary of all the major steps in transforming a big monolith into a set of reusable cloud native building blocks. A lot of the emphasis in the presentation was on the core challenges when targeting Kubernetes from a Java developer's perspective. I closed the presentation with a quick demo of Spring Cloud Kubernetes and Jenkins X to demonstrate that by collaborating and engaging with other communities we can make sure that we do not reinvent the wheel and that we adopt well-known good practices.


Slides here


I will update this blog post with more pics and the video of the session as soon as it is published. 

Next stop: the JJUG CCC Spring 2018 conference in Japan!

Last week we focused on closing Pull Requests that were waiting for review, and pushed the next iteration of the Process & Task Runtime APIs plus the Application Service integration with other components in the infrastructure. We also did a spike to build our Blueprint using Jenkins X in AWS, which helped us split our project into more maintainable units to give our users continuous deployment pipelines.


@igdianov polished configurations, added unit tests and did some refactoring.

@daisuke-yoshimoto is working on acceptance tests for the Signal Throw Event cloud behavior.

@constantin-ciobotaru worked on fixing empty collection deserialization in HAL output for AlfrescoPagedResourceAssembler, and worked on the Modeling Service to support different backend storage.

@lucianoprea (off on vacation)

@pancudaniel7 added configuration setup to the microservice template project.

@ryandawsonuk updated the Activiti app service incubator project to report on app status based on Eureka metadata. Added autoconfiguration for the app service and created PRs to embed it in the registry example component.

@erdemedeiros worked on the new Process Runtime Java APIs; split process and task related objects into different Maven modules. Created an auto-configuration to load the default service task behavior.

@salaboy worked on splitting the Trending Topic Campaigns BluePrint into separate repositories which are now built by Jenkins X. This demonstrates the flexibility of Activiti Cloud configuration and the use of standard tools such as Helm and Kubernetes for continuous deployment pipelines.


There are still two PRs under review, from @igdianov and @daisuke-yoshimoto, which will probably be merged next week.

We keep getting messages from community users who want to try out our new infrastructure and services. We totally recommend checking out our Activiti Cloud Examples GitHub repository and our GitBook’s Quickstart & Overview section:


Get in touch if you want to contribute:

We look forward to mentoring people on the technologies that we are using so they can submit their first Pull Request ;)

This is a basic guide on how to deploy Process Workspace in Tomcat.



Step 1

Download the latest version of Process Workspace from Nexus (APW 1.1.0, based on ADF 2.3.0).



Step 2

Once you have downloaded the zip, unzip the file.


Step 3

Open the unzipped folder in your preferred IDE. You should be able to see the Process Workspace app structure as in the images below:

  • src folder (source code) 
    Alfresco Process Workspace Sourcecode
  • dist folder (distribution)
    Alfresco Process Workspace dist folder


Step 4

Now you are ready to deploy the Process Workspace app in Tomcat.
Rename the dist folder with the name you want to see in the URL of your server (e.g. process-workspace) and paste the renamed folder under the /webapps folder in Tomcat, as in the image below:


Deploy Process Workspace in tomcat webapp folder

Step 5

Start your tomcat server


Step 6

Open your browser and visit the URL localhost:8080/process-workspace. The app is running:


Alfresco Process Workspace up and running


If you have more questions, please reply here or contact us using Gitter.

After 44 weeks of heavy work we are getting closer and closer to our first Beta1 release. We are actively working on getting the first iteration of our building blocks into shape to cover a set of common scenarios for cloud deployments. After Beta1 we will focus on making sure that these components are polished enough for a large set of use cases. There is still a lot of work to do around our two last RFCs (links below), which we want to include in our Beta1 release.

As usual, you can keep track of our public Roadmap here:

and our current milestone here:


All our building blocks are designed from scratch to work seamlessly with the following technology stacks:

  • Spring Boot
  • Spring Cloud
  • Docker
  • Kubernetes

And we are aiming to run in all these cloud platforms:

(screenshot: targeted cloud platforms)

For Beta1 the following building blocks are going to be provided:

  • Runtime Bundle template / Spring Boot Starter
  • Query Service template  / Spring Boot Starter
    • JPA reference implementation
  • Audit Service template / Spring Boot Starter
    • JPA reference implementation - MongoDB alternative
  • Cloud Connectors template / Spring Boot Starter (RFC: Activiti Cloud Connectors)
  • Application Service template / Spring Boot Starter (RFC: Activiti Cloud Application Service)
    • Service Registry integration
    • Config Server integration
    • Gateway integration for Routing

On top of these Building Blocks a set of examples will be provided to demonstrate how these services can be customised and deployed to Kubernetes clusters using Helm Charts.
The Activiti Cloud Blueprint will be updated to follow good practices around Helm deployments and to demonstrate the use of Pipelines and a continuous deployment approach.
Acceptance Tests are provided as well, but these might come after we release Beta1.
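
As a rough sketch of what a Helm-based deployment of one of these examples might look like (the repository URL, chart name, and release name here are assumptions for illustration, not the published coordinates):

```shell
# Add the (assumed) Activiti Cloud chart repository and install an example
# chart into the current Kubernetes cluster (Helm 2 syntax).
helm repo add activiti-cloud-charts https://activiti.github.io/activiti-cloud-charts
helm repo update
helm install activiti-cloud-charts/activiti-cloud-full-example \
  --name activiti-cloud --namespace activiti
```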
Based on our backlog and planned conferences (GeeCon 2018, Krakow & JJUG CCC Spring 2018, Tokyo) we estimate that we will use May to polish and finish these building blocks. We want to make sure that our Beta1 is functional enough to deliver value to community users that are looking for a solid cloud native approach.
As we stand today, these are the proposed dates for the following milestones/releases:

Beta1 - Late May 2018 - In Progress

  • Process Definition based security unification
  • Cloud Native BPMN Signal and Message Behaviours
  • New Process & Task Runtime Java APIs
  • Application Service
  • Refinements and consistency improvements
  • BluePrint repository split improvements to use query & audit
  • Notifications PoC with GraphQL integration at the gateway layer
  • Jenkins X support and alignment

Beta2 - Late June 2018

Final - July 2018

  • BluePrint Trending Topics Campaigns User Interface and Dashboards
  • Security & SSO review (review Apache Shiro Wildcard Permissions)
  • JHipster integration review and articles

In this short post, I’ll outline the steps to launch Alfresco Process Services (APS) version 1.8.1 using the updated Amazon Machine Image (AMI) available on the AWS Marketplace.



Step 0: Register for a 30-day trial license

If you don't have a license to run Process Services, you can get a 30-day trial license by visiting


Step 1: Select the AMI from AWS Marketplace

From the AWS EC2 console in your preferred region, launch a new instance based on the AMI shared on the AWS Marketplace.



Step 2: Choose an instance type

By default, we recommend using the m4.large instance type. Nevertheless, you can use a smaller instance type such as t2.medium.

Step 3: Review and launch

Apart from the instance type, you can simply leave the default settings and click the Review and Launch button. After clicking the Launch button, the EC2 instance will be in the ‘initializing’ state for no longer than 2 to 3 minutes before Alfresco Process Services becomes accessible, denoted by the EC2 instance changing to the ‘running’ state.


To secure and control inbound traffic to your instance, you need to edit the existing inbound rules. To limit the HTTP inbound traffic to your IP for example, add the following custom rule:

  • Type: custom TCP
  • Protocol: TCP
  • Port Range: 8080
  • Source: My IP
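
The same rule can also be added from the AWS CLI. A sketch (the security group ID below is a placeholder for your instance's security group, and your current public IP is substituted into the CIDR):

```shell
# Restrict inbound HTTP traffic on port 8080 to your current public IP.
# sg-0123456789abcdef0 is a placeholder; use your instance's security group.
MY_IP="$(curl -s https://checkip.amazonaws.com)"
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8080 \
  --cidr "${MY_IP}/32"
```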

Step 4: Logging in to Alfresco Process Services


Once the instance state is ‘running’ and all status checks passed, open a browser page using the URL: http://{public_dns_here}:8080/activiti-app.

Login page 

On the login page, use the credentials:


If you need a valid product license, it takes less than 2 minutes to get a 30-day trial license by filling out the form at

Step 5: Try out the new Process Workspace

The Process Workspace is a new out-of-the-box user interface for end users to view, act on, and collaborate on tasks and processes. Read the APS 1.8 blog post for more details. The Process Workspace UI is available at http://{public_dns_here}:8080/activiti-app/workspace/.


Step 6: Logging in to APS Admin console (optional)

Open a browser page using the URL: http://{public_dns_here}:8080/activiti-admin. On the login page, use the default credentials: admin/admin.


Once logged in to the Admin Console, it is recommended to change the default password to something much more secure, such as a combination of four words. Check out this post for details about securing passwords.


Open the Configuration menu to connect the Admin Console to your APS running application. Edit the REST endpoint settings and apply the corresponding values as shown here below.


Save and click on Check REST endpoint. You should get a confirmation message with the endpoint status and engine version.

Bravo! You have successfully launched APS using the AMI published by Alfresco.

Step 7: Start building your first process apps 

You now have a running Alfresco Process Services environment that you can use to explore the Hello World getting started tutorial and try out some of our more advanced community examples.