
On behalf of the team, I am pleased to announce that Alfresco Process Services 1.6.4 has been released and is available now. This release contains some important bug fixes. Here are the notable highlights:



    • Possibility to encrypt sensitive properties (e.g. database user password, LDAP user password, etc.) used in the Alfresco Process Services properties files. Please check the documentation.
    • Whitelisting is enhanced to cover class whitelisting in JavaScript. Please check the documentation.
    • Improved SQL Injection protection.
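As a quick illustration of the first bullet, an encrypted property replaces the plain-text value in the properties file. The ENC() wrapper shown here is an assumption based on the common Jasypt-style convention, and the cipher text is a placeholder:

```
datasource.password=ENC(nq8PZBcKYChKCSas3zlCtQ==)
```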


For a complete list of improvements in 1.6.x, please check the what’s new page in the Alfresco Process Services documentation.

This blog post is a companion to another post I wrote on text sentiment analysis. This post focuses on the process side: how to automatically move a file into a folder based on its metadata. If you want more details about the sentiment analysis, you can read this article.


For those of you familiar with the BPM world, the diagram below is almost self-explanatory, but let's look in detail at what the process we are going to implement does:


  • The process starts by receiving a nodeId as input.

  • In the second block, all the metadata related to this node is fetched from the content service through the API.

  • If the sentiment is < 0.5, the content will be moved to the "Bad Folder".
  • If the sentiment is >= 0.5, the content will be moved to the "Good Folder".
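The gateway logic above can be sketched as a small shell function. Note that `pick_folder` is a hypothetical helper for illustration only, and the folder IDs are placeholders for the real nodeIds created later in this post:

```shell
# Decide the target folder from the sentiment score, mirroring the BPMN gateway.
# Folder IDs below are placeholders for the real nodeIds.
pick_folder() {
  sentiment="$1"
  # awk performs the floating-point comparison (< 0.5 means "bad")
  if awk "BEGIN { exit !($sentiment < 0.5) }"; then
    echo "bad-folder-node-id"
  else
    echo "good-folder-node-id"
  fi
}

pick_folder 0.2   # prints bad-folder-node-id
pick_folder 0.9   # prints good-folder-node-id
```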

How to import the app

To simplify the execution of this example, I have already created the app that implements the flow above. You can download the sentiment app from the following link.

Once you have downloaded the sentiment app, from the main page of the process service:


1. Open the App Designer.

2. Select the Apps tab and click on Import App.

3. Import the downloaded sentiment-app.


Process service Endpoint configuration

Let's see how to configure the content service endpoint in the process service tenant configuration. From the main page of the process service:


1. Open the process service as an admin.

2. Go to Identity management.

3. Select the Tenants tab and, from the Tenant dropdown, select the tenant that will run the app.

4. Press the plus button in the Basic Auth configuration, add your credentials for the content service, and press Save.

5. Click the plus button of the Endpoints table and add your content service endpoint information as in the screenshot below:
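For reference, the endpoint entry for a default local installation typically looks something like the following (field names and values are indicative; adjust the host, port, and path to your content service):

```
Name:     local-content-service
Protocol: http
Host:     localhost
Port:     8080
Path:     alfresco
```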




Now that our app is configured in the process service we need to configure the content service.


Content service Metadata configuration


As you can see from the BPM graph, our Process has a gateway:



This gateway decides whether the next step is to move the content file into the bad folder or the good folder.

How does the process know whether a file is good or bad?

In order to achieve this level of awareness, I have added a new metadata property inside the content service. The process gateway will analyze the value of this property and execute the corresponding action.

Let's see how to add a new metadata property in the content service:


1. Log in to your content service.

2. Click on the admin tab.

3. From the left Tools menu, select Model Manager.

4. Import the model downloaded from the GitHub repository.


Now, as the last step of configuration, we need to create our bad folder and good folder in the content repository.


Curl call to create the bad folder (a reconstruction of the original call; the host and the -root- parent node are placeholders for your content service installation):

curl -X POST 'http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-/children' \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  -d '{ "name": "bad folder", "nodeType": "cm:folder" }'

Curl call to create the good folder:

curl -X POST 'http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-/children' \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  -d '{ "name": "good folder", "nodeType": "cm:folder" }'


Get the two nodeIds returned by the calls above and use them to configure the Move Bad Folder and Move Good Folder steps of the process.
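If you are scripting this, the nodeId can be pulled out of the JSON response with a quick-and-dirty sed expression (a sketch; for real use, a JSON parser such as jq is safer):

```shell
# Sample response shape returned by the create-folder call (abridged)
RESPONSE='{"entry":{"id":"d8f561cc-e208-4c63-9a6f-4580f9602a9c","name":"bad folder"}}'

# Extract the value of the "id" field
NODE_ID=$(printf '%s' "$RESPONSE" | sed -n 's/.*"id":"\([^"]*\)".*/\1/p')
echo "$NODE_ID"   # prints d8f561cc-e208-4c63-9a6f-4580f9602a9c
```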

Click on the Move Good Folder and Move Bad Folder steps to replace your folder IDs in the request calls, as in the gif below:


At this point, the whole process is configured; all you need to do is set the value of the metadata property used by the gateway and pass the nodeId to check as input to the process.

To execute those steps in a nice visual way, I suggest you take a look at this other blog post; otherwise, you can populate the metadata using the REST API of the content service and start the process using the REST API of the process service.
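The REST start can be sketched as follows. The process definition key `sentimentProcess` and the sample nodeId are assumptions for illustration; adjust them, the host, and the credentials to your installation:

```shell
# Build the start-process payload; "nodeId" is the input variable the process expects.
NODE_ID="abc-123"   # placeholder for the node to analyze
PAYLOAD=$(printf '{"processDefinitionKey":"sentimentProcess","variables":[{"name":"nodeId","value":"%s"}]}' "$NODE_ID")

# Against a live installation you would then run:
# curl -X POST 'http://localhost:8080/activiti-app/api/enterprise/process-instances' \
#   -u admin@app.activiti.com:admin \
#   --header 'Content-Type: application/json' \
#   -d "$PAYLOAD"
echo "$PAYLOAD"
```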

On behalf of the team, I am pleased to announce that Alfresco Process Services 1.6.3 has been released and is available now. This release contains a few important bug fixes as well as a couple of improvements. Here are the notable highlights:

  • Kerberos SSO support

Organizations using a Kerberos AD infrastructure can now quickly set up Windows-based SSO to allow secure and seamless access to the Alfresco Process Services application without an explicit login. Kerberos configuration settings need to be defined; please check the dedicated documentation page.

  • Whitelisting

As from version 1.6.3, it is no longer required to whitelist specific beans and classes in order to use the Alfresco, Box and Google Drive out-of-the-box publish tasks. They now work by default. Please check the dedicated documentation page.

  • Supported platform

Red Hat Enterprise Linux version 7.3 is now supported.

  • Getting started with Alfresco Process Services

Unfamiliar with Digital Process Automation and Business Process Management (BPM)? Try out our getting started tutorial and build your first app in 3 steps.


Getting started with Alfresco Process Services

For a complete list of improvements, please check the what’s new page in the Alfresco Process Services documentation.


Activiti 7 Kick Off Roadmap

Posted by salaboy, Jul 5, 2017

If you were looking at the Activiti/Activiti repository, you might have noticed that we are restructuring the project. Activiti is taking a new direction towards microservices architectures and we are planning to make big design upgrades.


This new direction will give users the flexibility to open up the architecture, replace components as needed and to scale the engine and your applications independently. In combination with containerization (Docker), Microservice architectures with event-driven endpoints provide a cloud-native approach to deploy and interact with the engine in a distributed fashion.


However, we are well aware that users are currently embedding Activiti 5.x and 6.x in their applications. As we move toward the final release of the next major version of Activiti, we will provide a set of compatibility and migration tools. Embedding of the Activiti engine will continue to be an important feature for the project. You won’t need to re-architect your application to use this version of Activiti. However, new microservices architectures implementations will be able to take advantage of this new approach.


We understand that not everyone is using microservices yet, but for those that are (and those that are going to), we need to make a real upgrade of the technology behind the engine and how the internal components integrate with each other. The Activiti Process Engine was designed several years ago; this means that there are some components, extension points, and mechanisms that have a high impedance mismatch with modern architectures and environments.


We recognize that most open source process engines out there claim to follow microservices practices, but we haven’t seen the major refactoring of how the process engine works, or how it integrates with cloud providers and other microservices, that would be required for a truly microservices architecture. Most of these other projects simply treat other microservices as REST endpoints that you can interact with. By doing so, you neglect the infrastructure that you are running on, and we want to leverage that infrastructure. We also recognize the need to play nicely with containers and other technology stacks. For that reason, we need to upgrade how the engine and other services deal with versioning and immutability. With microservices architectures, if you want to integrate nicely in fast-evolving environments, you need to make sure that other technology stacks integrate well with your APIs and data types. For that reason, we are designing our new APIs to provide consistent data types and HAL (Hypertext Application Language) APIs to help you build modern applications with cutting-edge technologies.
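As a sketch of what a HAL-style response could look like, a process instance resource would carry hypermedia links alongside its data (the resource shape and link relations here are illustrative, not the final API):

```json
{
  "id": "12",
  "status": "RUNNING",
  "_links": {
    "self":  { "href": "/v1/process-instances/12" },
    "tasks": { "href": "/v1/process-instances/12/tasks" }
  }
}
```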


We have created a short term roadmap for this initial refactoring, which will provide the stepping stones for more advanced services that will come towards the end of the year such as:

  • Distributed Process Execution and Coordination
  • Contextual Event Driven Case Management  
  • Decision Management and Inference Service
  • Blockchain integrated Audit Logs
  • Polyglot Process & Decision Runtime targeting IoT and mobile platforms


In the short term we want to focus on the process engine APIs and data types, as well as our deployment model using containers and orchestrators such as Docker and Kubernetes.


Key points


Below is a list of key points that we are going to cover in the short term:


  • Code Quality, Coding Standards, Code Maturity and Code Modularity
  • Infrastructural changes and tools that we are going to use to keep the project healthy
  • New Rest HAL APIs, new Messaging endpoints with JSON payloads, new Event Listeners with JSON payloads
  • We will decouple all the components that are not part of the core process engine into new repositories and well-focused services
  • We will provide alignment with the Spring Community (Spring Boot 2 / Spring Cloud), targeting AWS + Kubernetes + Docker as our main deployment strategy
  • We will reuse as much of Spring Cloud as possible to make sure that the Process Engine doesn’t overlap with services provided by the infrastructure


By covering all these key points we plan to provide a set of services that you will be able to use as building blocks for your implementations, making sure that the process engine is only responsible for automating your business processes, and the impedance mismatch with your infrastructure is minimal. The following is the proposed roadmap for the next three months. We aim to test our release process by the end of July 2017, so the first one might be delayed a little bit.


Milestone #0 - 31 July 2017


  • Clean up Activiti
    • Repository cleanup & restructuring
    • Dependency upgrades/Alignment with Spring 5 & Spring Boot 2
    • Infrastructure
      • GIT / Travis / Bamboo / Maven Central
      • Daily Snapshots
    • Test Coverage review
    • Testing frameworks review
    • Logging Frameworks review
  • Domain API + HAL API + Runtime Bundle 
    • Process Definitions
    • Process Runtime
    • Task Runtime
    • Source/Producer of Audit Events
  • Event Store for Audit Information - Initial Implementation
    • We should be able to query for all the events generated by multiple process engines
  • Identity Management and SSO (KeyCloak research in progress) - initial integration
  • First Release Process


Milestone #1 - 28th August 2017


  • Domain API + HAL API + Runtime Bundle
    • Source/Producer of Integration Events
    • (Source/Producer of Async Job Executions)
  • Event Store for Runtime Information - initial implementation
  • Gateway (Zuul research in progress) - Initial configuration and infrastructure
  • Application & Service Registry (Eureka research in progress) - Initial configuration and infrastructure
  • Tracer (Zipkin research in progress) - Initial configuration and infrastructure
  • AWS example demonstrating all these services in action


Milestone #2 - 29th September 2017


  • Event Store for Runtime Information - initial version
    • Decoupled query module based on events sourcing and rolling snapshots
    • We should be able to query the current state of processes and tasks without talking with the engine
  • Application Service / Case Management Features - initial version
    • Provide basic case management constructs
    • Provide coordination between different process runtime bundles
    • Deal with versioning, upgrades and case management like behaviors


Feedback is welcome


We encourage all users to start trying our milestones as soon as possible so you can get a sense of how these services are going to fit into your implementations.


We are looking forward to comments, questions, concerns, and collaborations. We will be working in a truly open source way, meaning that we want you to participate in the development of this and future roadmaps. We are looking for contributors of all levels, so if you feel drawn to one of these topics, want to learn new things, or want to participate in the conversation, please get in touch.


Where to find more resources


We are using a Gitter channel to discuss the roadmap, plan, and collaborate every day; feel free to join.


We also have set up the following public services that you might find of interest:

Travis CI for continuous integration.

Codecov for code coverage reports.

Codacy for code style, security, duplication, complexity, and coverage.


These services will keep track of our code quality progress and release cycle.


There will be more announcements and blog posts about the topics mentioned here. We will be sharing progress and engaging community members regularly, stay tuned!


Original blog post: Activiti 7 Kick Off Roadmap – Salaboy (Open Source Knowledge)