
I am happy to announce that Activiti Core and Activiti Cloud Beta4 have been released!

 

You can find the released artifacts in Maven Central, Docker Hub and in our GitHub repository for Helm charts.

The focus of the Beta4 release was to improve data consistency between the different services and the maturity of the new APIs. We improved the way data is handled inside the Query Service (it now also contains Process Definitions, and Variables are kept in different scopes for improved query performance), which led to improved coverage of different scenarios. We are looking forward to leveraging these changes with the GraphQL extensions, which are still experimental but which we plan to add to our release process soon. As mentioned before, the APIs are getting better and better, and as part of our commitment to supporting a wide range of cloud deployments we included the first iteration of our Conformance Tests and Cloud Conformance Tests suites.

These test suites are designed to enable implementors to extend the basic sets to cover their use cases. We believe that this is a “must” if you are planning to leverage a CI/CD approach.

 

Another big part of this release was the upgrade to Spring Boot 2.1 and Spring Cloud Greenwich.M3, which we want to keep in sync with our release train.

 

The fastest way to get started with Activiti Cloud is to use our Helm charts, which allow you to deploy all the services to a Kubernetes cluster by following some simple steps. You can use any tooling that you already use to deploy Helm charts to Kubernetes, or you can try Jenkins X. You can take a look at our Getting Started guides:

You can find the full release notes here.

If you are interested in getting started with Activiti Core you can check our examples repository, which highlights how the new Runtime APIs are used in the context of Spring Boot 2 applications:

If you are interested in getting involved, get in touch via Gitter.

Last week we worked hard to close Beta4, which will be released later today. The full team is focused on making the release process smooth so we can include more fixes and features in every milestone. In Beta4 we managed to close 41 issues and you can check our next milestones here: https://github.com/activiti/activiti/issues?q=is%3Aopen+is%3Aissue+milestone%3ABeta4

@CTI777 worked on finalizing the update process event and task/process variables. Commits in:

@almerico worked on JX pipelines (issue #2224) and DIND Maven install (#2201). PRs:

@daisuke-yoshimoto worked on issue #2075. PRs:

https://github.com/Activiti/Activiti/pull/2229

https://github.com/Activiti/activiti-cloud-runtime-bundle-service/pull/171

@igdianov worked on the following PRs and issues:

@constantin-ciobotaru worked on modeling:

  • send both modelToBeUpdated and newModel to the model repository during update
  • add the full JSON process metadata when exporting applications
  • export only the content (without metadata) for JSON-based models

@miguelruizdev worked on acceptance test coverage and bug fixes for the task updated and process updated events.

@ryandawsonuk created a cluster for running example-level projects, updated the developer guide to explain how to make changes across repositories in the current structure, updated the modeler repos as well as the ttc repos to use master, committed a chart for the trending topics example, and ran a test release.

@erdemedeiros reviewed and merged pending pull requests, fixed setting the process instance name on the query service (https://github.com/Activiti/Activiti/issues/1952), fixed missing info on cloud events (https://github.com/Activiti/Activiti/issues/2248), and fixed the audit handler for the process updated and process suspended events.

@salaboy worked on refining services for the Beta4 release and improving acceptance tests, and merged the initial version of our new API conformance tests (https://github.com/Activiti/Activiti/tree/develop/activiti-spring-conformance-tests), which will help us test our cloud modules.

Get in touch if you want to contribute: https://gitter.im/Activiti/Activiti7

We look forward to mentoring people on the technologies we are using so they can submit their first pull request.

Introduction

Activiti 7 is an evolution of the battle-tested Activiti workflow engine from Alfresco that has been fully adapted to run in a cloud environment. It is built according to Cloud Native application concepts and differs a bit from previous Activiti versions in how it is architected. There is also a new Activiti Modeler that we had a look at in a previous article.

 

In this article we will use the new Activiti 7 Process Runtime and Task Runtime Java APIs to try out the Activiti 7 process engine. We will do this from a Spring Boot 2 application. All the Activiti 7 Java artifacts that we need are available in Alfresco’s Maven Repository (Nexus).  

 

The Spring Boot application will also include the Web component (i.e. Spring MVC) so we can create a little ReST API for starting processes and interacting with processes and tasks. Activiti 7 provides a ReST API, but we are not going to use it here while we just play around with the core libraries. Instead we create our own simple ReST API that uses the Activiti 7 Java libs (i.e. Process Runtime and Task Runtime).

 

The new APIs have been designed to provide a clear path to the Cloud Native approach. They also include security and identity management as first class citizens. Some common use-cases are also simplified with the new APIs.

 

In this article we will actually build a simple Business Process Management (BPM) application/solution using the Activiti 7 Core libraries. This is not normally something you would do, but it is a good exercise to be able to understand the APIs provided by Activiti 7.

 

Activiti 7 Deep Dive Article Series 

This article is part of a series of articles covering Activiti 7 in detail; they should be read in the order listed:

 

  1. Deploying and Running a Business Process
  2. Using the Modeler to Design a Business Process
  3. Building, Deploying, and Running a Custom Business Process
  4. Using the Core Libraries - this article

 

Prerequisites

  • You have read and worked through the "Activiti 7 - Using the Modeler to Design Business Processes" article
  • JDK installed
  • Maven installed

 

Source Code

You can find the source code related to this article here:

 

https://github.com/gravitonian/activiti7-api-basic-process  

 

Generating a Spring Boot 2 App

It’s really easy to get going with a Spring Boot application. Just head over to https://start.spring.io/ and fill in the data for the app as follows:

 

 

Make sure to use Spring Boot version 2.0.x with Activiti 7 Beta1 - Beta3; Beta4 is aligned with Spring Boot 2.1.x.

 

You don’t have to use the same Group (org.activiti.training) and Artifact (activiti7-api-basic-process-usertask-servicetask-events) names as I do; just use whatever you like. However, if you copy code from this article it might be easier if you use the same package names (i.e. the same group). Search for the H2 and Web dependencies so they are included in the Maven POM. Then click the Generate Project button. The finished Spring Boot 2 Maven project will automatically download as a ZIP. Unpack it somewhere.

 

Test the standard Spring Boot App

Let’s make sure that the Spring Boot application works before we continue with the Activiti stuff. This involves two steps. First build the app JAR and then run the app JAR.

 

Building the application JAR:

 

$ cd activiti7-api-basic-process-usertask-servicetask-events/

activiti7-api-basic-process-usertask-servicetask-events mbergljung$ mvn clean package

[INFO] --- maven-jar-plugin:3.0.2:jar (default-jar) @ activiti7-api-basic-process-usertask-servicetask-events---

[INFO] Building jar: /Users/mbergljung/IDEAProjects/activiti7-api-basic-process-usertask-servicetask-events/target/activiti7-api-basic-process-usertask-servicetask-events-0.0.1-SNAPSHOT.jar

 

Running the application JAR:

 

activiti7-api-basic-process-usertask-servicetask-events mbergljung$ java -jar target/activiti7-api-basic-process-usertask-servicetask-events-0.0.1-SNAPSHOT.jar

 

 .  ____          _ __ _ _

/\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \

( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \

\\/  ___)| |_)| | | | | || (_| |  ) ) ) )

 ' |____| .__|_| |_|_| |_\__, | / / / /

=========|_|==============|___/=/_/_/_/

:: Spring Boot ::        (v2.0.4.RELEASE)

 

2018-08-22 15:43:59.992  INFO 8959 --- [ main] cessUsertaskServicetaskEventsApplication : Starting Activiti7ApiBasicProcessUsertaskServicetaskEventsApplication v0.0.1-SNAPSHOT on MBP512-MBERGLJUNG-0917.local with PID 8959 (/Users/mbergljung/IDEAProjects/activiti7-api-basic-process-usertask-servicetask-events/target/activiti7-api-basic-process-usertask-servicetask-events-0.0.1-SNAPSHOT.jar started by mbergljung in /Users/mbergljung/IDEAProjects/activiti7-api-basic-process-usertask-servicetask-events)

2018-08-22 15:43:59.995  INFO 8959 --- [ main] cessUsertaskServicetaskEventsApplication : No active profile set, falling back to default profiles: default

2018-08-22 15:44:00.045  INFO 8959 --- [ main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@64b8f8f4: startup date [Wed Aug 22 15:44:00 BST 2018]; root of context hierarchy

2018-08-22 15:44:00.985  INFO 8959 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)

2018-08-22 15:44:01.011  INFO 8959 --- [ main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]

2018-08-22 15:44:01.011  INFO 8959 --- [ main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/8.5.32

2018-08-22 15:44:01.023  INFO 8959 --- [ost-startStop-1] o.a.catalina.core.AprLifecycleListener   : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/Users/mbergljung/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.]

2018-08-22 15:44:01.112  INFO 8959 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext

2018-08-22 15:44:01.112  INFO 8959 --- [ost-startStop-1] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 1070 ms

2018-08-22 15:44:01.204  INFO 8959 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean  : Servlet dispatcherServlet mapped to [/]

2018-08-22 15:44:01.208  INFO 8959 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'characterEncodingFilter' to: [/*]

2018-08-22 15:44:01.209  INFO 8959 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]

2018-08-22 15:44:01.209  INFO 8959 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'httpPutFormContentFilter' to: [/*]

2018-08-22 15:44:01.209  INFO 8959 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'requestContextFilter' to: [/*]

2018-08-22 15:44:01.348  INFO 8959 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]

2018-08-22 15:44:01.500  INFO 8959 --- [ main] s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@64b8f8f4: startup date [Wed Aug 22 15:44:00 BST 2018]; root of context hierarchy

2018-08-22 15:44:01.563  INFO 8959 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController.error(javax.servlet.http.HttpServletRequest)

2018-08-22 15:44:01.564  INFO 8959 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)

2018-08-22 15:44:01.589  INFO 8959 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]

2018-08-22 15:44:01.590  INFO 8959 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]

2018-08-22 15:44:01.726  INFO 8959 --- [ main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup

2018-08-22 15:44:01.775  INFO 8959 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''

2018-08-22 15:44:01.780  INFO 8959 --- [ main] cessUsertaskServicetaskEventsApplication : Started Activiti7ApiBasicProcessUsertaskServicetaskEventsApplication in 2.288 seconds (JVM running for 2.718)

 

Ctrl-C out of the application to continue with the configuration below.

 

Adding Activiti 7 Dependencies to the App

The Spring Boot app has most of the dependencies that we need, except for the Activiti 7 dependencies. So let’s add them. We can use a BOM (Bill-of-Materials) dependency that will bring in all the needed Activiti 7 dependency management configurations, including the correct versions of all dependencies.

 

Add the following to the pom.xml:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.activiti.cloud.dependencies</groupId>
      <artifactId>activiti-cloud-dependencies</artifactId>
      <version>7.0.0.Beta3</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>

This will import all the dependency management configurations for Activiti 7. Now we just need to add an Activiti 7 dependency that supports running the Activiti process engine, service task implementations (i.e. Cloud Connectors), and event handler implementations (i.e. replacement for process and task listeners). Add the following dependency to the pom.xml:

<dependency>
  <groupId>org.activiti</groupId>
  <artifactId>activiti-spring-boot-starter</artifactId>
</dependency>

This will bring in all the Activiti and Spring dependencies needed to run the Activiti 7 process engine embedded in a Spring Boot application. It will also make it possible to compile our Service Task implementations and our process engine event handlers.

 

We cannot yet run the application with these new dependencies, as it will look for process definitions in the resources/processes directory. If this directory does not exist, an exception is thrown and the app halts.

 

Adding the Process Definition to the App

We will now add the process definition XML file we designed in one of the previous articles to the project. Create a new directory called processes under the src/main/resources directory. Then copy the .bpmn20.xml file into this directory. You should see a directory structure like this now:

 

├── src

│   ├── main

│   │ ├── java

│   │ │   └── org

│   │ │       └── activiti

│   │ │           └── training

│   │ │               └── activiti7apibasicprocessusertaskservicetaskevents

│   │ │                   └── Activiti7ApiBasicProcessUsertaskServicetaskEventsApplication.java

│   │ └── resources

│   │    ├── application.properties

│   │    ├── processes

│   │    │ └── sample-process.bpmn20.xml

│   │    ├── static

│   │    └── templates

 

This is all we need to do; we can now test starting the app.

 

Test the Spring Boot App containing Activiti libs and process definition

We can now package and run the app to see that all the Activiti libs are loaded properly and that the process definition is read correctly without errors.

 

activiti7-api-basic-process-usertask-servicetask-events mbergljung$ mvn clean package

 

activiti7-api-basic-process-usertask-servicetask-events mbergljung$ java -jar target/activiti7-api-basic-process-usertask-servicetask-events-0.0.1-SNAPSHOT.jar

 

 .  ____          _ __ _ _

/\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \

( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \

\\/  ___)| |_)| | | | | || (_| |  ) ) ) )

 ' |____| .__|_| |_|_| |_\__, | / / / /

=========|_|==============|___/=/_/_/_/

:: Spring Boot ::        (v2.0.4.RELEASE)

 

...

2018-08-22 16:08:44.688  INFO 9056 --- [ost-startStop-1] aultActiviti5CompatibilityHandlerFactory : Activiti 5 compatibility handler implementation not found or error during instantiation : org.activiti.compatibility.DefaultActiviti5CompatibilityHandler. Activiti 5 backwards compatibility disabled.

2018-08-22 16:08:44.751  INFO 9056 --- [ost-startStop-1] o.activiti.engine.impl.db.DbSqlSession   : performing create on engine with resource org/activiti/db/create/activiti.h2.create.engine.sql

2018-08-22 16:08:44.864  INFO 9056 --- [ost-startStop-1] o.a.engine.impl.ProcessEngineImpl        : ProcessEngine default created

...

2018-08-22 16:08:47.478  INFO 9056 --- [ main] cessUsertaskServicetaskEventsApplication : Started Activiti7ApiBasicProcessUsertaskServicetaskEventsApplication in 7.213 seconds (JVM running for 7.685)

 

Adding ReST calls to interact with the process engine

We now have the application running with the Activiti 7 process engine runtime libraries available, so we can create some standard Spring MVC ReST calls to interact with the process engine and the available process definition.

 

Add some users and groups and enable web security

To be able to interact with the Process Runtime API we need to be authenticated as a user that has the role ROLE_ACTIVITI_USER. If we were just calling the Process Runtime API directly from Java code, such as from the class with the main method, then we could set the user context directly before making the API call. See here for an example. Note: do this only when playing around with the API, not in real production implementations.
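
As a rough illustration, here is a minimal sketch of what setting the user context from plain Java code could look like, using Spring Security directly (the official Activiti examples use a small helper class for this; the class name below is made up, and the user and role are the ones we configure later in this article):

import java.util.Collections;

import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;

public class SecurityContextExample {

    // Sets the authenticated user for the current thread before calling the
    // ProcessRuntime/TaskRuntime APIs from plain Java code (for experiments only).
    public static void authenticateAs(String username) {
        SecurityContextHolder.getContext().setAuthentication(
                new UsernamePasswordAuthenticationToken(
                        username,
                        null, // no credentials needed, we only populate the security context
                        Collections.singletonList(new SimpleGrantedAuthority("ROLE_ACTIVITI_USER"))));
    }
}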

 

We are going to create our own little ReST API, so we would like authentication to work via the web browser using Basic Auth. Activiti makes use of Spring Security, so we can do this quite easily.

 

Create a Spring configuration class called Activiti7ApplicationConfiguration in the src/main/java/org/activiti/training/activiti7apibasicprocessusertaskservicetaskevents package:

package org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.provisioning.InMemoryUserDetailsManager;

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

/**
 * Set up some users and groups that we can use when interacting with the process engine API.
 * We use the testuser in the process definition so we need to include this user.
 *
 * We also enable Web security so we can build a simple ReST API that uses the Process Engine Java API. We need
 * to be authenticated with a user that has the role ROLE_ACTIVITI_USER to be able to use the API.
 */
@Configuration
@EnableWebSecurity
public class Activiti7ApplicationConfiguration extends WebSecurityConfigurerAdapter {

    private Logger logger = LoggerFactory.getLogger(Activiti7ApplicationConfiguration.class);

    @Override
    @Autowired
    public void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.userDetailsService(myUserDetailsService());
    }

    @Bean
    public UserDetailsService myUserDetailsService() {
        InMemoryUserDetailsManager inMemoryUserDetailsManager = new InMemoryUserDetailsManager();

        String[][] usersGroupsAndRoles = {
                {"mbergljung", "1234", "ROLE_ACTIVITI_USER", "GROUP_activitiTraining"},
                {"testuser", "1234", "ROLE_ACTIVITI_USER", "GROUP_activitiTraining"},
                {"system", "1234", "ROLE_ACTIVITI_USER"},
                {"admin", "1234", "ROLE_ACTIVITI_ADMIN"},
        };

        for (String[] user : usersGroupsAndRoles) {
            List<String> authoritiesStrings = Arrays.asList(Arrays.copyOfRange(user, 2, user.length));
            logger.info("> Registering new user: " + user[0] + " with the following Authorities[" + authoritiesStrings + "]");
            inMemoryUserDetailsManager.createUser(new User(user[0], passwordEncoder().encode(user[1]),
                    authoritiesStrings.stream().map(s -> new SimpleGrantedAuthority(s)).collect(Collectors.toList())));
        }

        return inMemoryUserDetailsManager;
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .csrf().disable()
            .authorizeRequests()
            .anyRequest()
            .authenticated()
            .and()
            .httpBasic();
    }

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}

This sets up some users and groups that we can use when interacting with the process engine API. We use testuser in the process definition (assigned to User Task 1), so we need to include this user. We also enable web security so we can build a simple ReST API that uses the Process Engine Java API.

 

Adding a ReST call to list Process Definitions

First, let’s add a ReST call that lists the deployed process definitions. Create a new sub-package called rest in the org/activiti/training/activiti7apibasicprocessusertaskservicetaskevents package. Then add a new Spring MVC controller called ProcessDefinitionsController in this new package as follows:

package org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.rest;

import org.activiti.api.process.model.ProcessDefinition;
import org.activiti.api.process.runtime.ProcessRuntime;
import org.activiti.api.runtime.shared.query.Page;
import org.activiti.api.runtime.shared.query.Pageable;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

/**
 * ReST controller to interact with deployed process definitions
 */
@RestController
public class ProcessDefinitionsController {

    private Logger logger = LoggerFactory.getLogger(ProcessDefinitionsController.class);

    @Autowired
    private ProcessRuntime processRuntime;

    @GetMapping("/process-definitions")
    public List<ProcessDefinition> getProcessDefinitions() {
        Page<ProcessDefinition> processDefinitionPage = processRuntime.processDefinitions(Pageable.of(0, 10));
        logger.info("> Available Process definitions: " + processDefinitionPage.getTotalItems());

        for (ProcessDefinition pd : processDefinitionPage.getContent()) {
            logger.info("\t > Process definition: " + pd);
        }

        return processDefinitionPage.getContent();
    }
}

Start by injecting the ProcessRuntime Spring bean so we can use the Process Runtime API, which has the following methods:

public interface ProcessRuntime {
    ProcessRuntimeConfiguration configuration();
    ProcessDefinition processDefinition(String var1);
    Page<ProcessDefinition> processDefinitions(Pageable var1);
    Page<ProcessDefinition> processDefinitions(Pageable var1, GetProcessDefinitionsPayload var2);
    ProcessInstance start(StartProcessPayload var1);
    Page<ProcessInstance> processInstances(Pageable var1);
    Page<ProcessInstance> processInstances(Pageable var1, GetProcessInstancesPayload var2);
    ProcessInstance processInstance(String var1);
    ProcessInstance suspend(SuspendProcessPayload var1);
    ProcessInstance resume(ResumeProcessPayload var1);
    ProcessInstance delete(DeleteProcessPayload var1);
    void signal(SignalPayload var1);
    ProcessDefinitionMeta processDefinitionMeta(String var1);
    ProcessInstanceMeta processInstanceMeta(String var1);
    List<VariableInstance> variables(GetVariablesPayload var1);
    void removeVariables(RemoveProcessVariablesPayload var1);
    void setVariables(SetProcessVariablesPayload var1);
}

In this case we just need the processDefinitions() method. The /process-definitions URL path is used for this ReST GET call.

 

Now, package and run the application as described before. When starting the application you should see logs that indicate that everything has been implemented correctly:

...

2018-08-28 15:12:49.742  INFO 21982 --- [ost-startStop-1] .a.t.a.Activiti7ApplicationConfiguration : > Registering new user: mbergljung with the following Authorities[[ROLE_ACTIVITI_USER, GROUP_activitiTraining]]

2018-08-28 15:12:49.869  INFO 21982 --- [ost-startStop-1] .a.t.a.Activiti7ApplicationConfiguration : > Registering new user: testuser with the following Authorities[[ROLE_ACTIVITI_USER, GROUP_activitiTraining]]

2018-08-28 15:12:49.994  INFO 21982 --- [ost-startStop-1] .a.t.a.Activiti7ApplicationConfiguration : > Registering new user: system with the following Authorities[[ROLE_ACTIVITI_USER]]

2018-08-28 15:12:50.113  INFO 21982 --- [ost-startStop-1] .a.t.a.Activiti7ApplicationConfiguration : > Registering new user: admin with the following Authorities[[ROLE_ACTIVITI_ADMIN]]

org.springframework.security.web.authentication.www.BasicAuthenticationFilter@6fdbe764,

...

2018-08-28 15:12:53.119  INFO 21982 --- [  main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/process-definitions],methods=[GET]}" onto public java.util.List<org.activiti.api.process.model.ProcessDefinition> org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.rest.ProcessDefinitionsController.getProcessDefinitions()

...

 

Go to a browser and hit http://localhost:8080/process-definitions; you should be presented with a login dialog. Type in the username and password of a user that has the role ROLE_ACTIVITI_USER, such as testuser/1234. You should then get a response looking something like this:

[
{
id: "c68315b2-fa2a-11e8-9c34-acde48001122",
name: "Sample Process",
version: 1,
key: "sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d"
}
]

In the logs you should see the following:

 

2018-08-28 15:13:09.678  INFO 21982 --- [nio-8080-exec-1] o.a.t.a.r.ProcessDefinitionsController   : > Available Process definitions: 1

2018-08-28 15:13:09.678  INFO 21982 --- [nio-8080-exec-1] o.a.t.a.r.ProcessDefinitionsController   : > Process definition: ProcessDefinition{id='c68315b2-fa2a-11e8-9c34-acde48001122', name='Sample Process', key='sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d', description='null', formKey='null', version=1}
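
If you prefer to call the endpoint from code rather than a browser, here is a minimal stand-alone Java client sketch (not part of the project; it assumes the application is running on localhost:8080 and uses the testuser/1234 credentials configured earlier):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ProcessDefinitionsClient {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/process-definitions");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();

        // Basic Auth header for the testuser/1234 user configured in Activiti7ApplicationConfiguration
        String credentials = Base64.getEncoder()
                .encodeToString("testuser:1234".getBytes(StandardCharsets.UTF_8));
        connection.setRequestProperty("Authorization", "Basic " + credentials);

        // Read and print the JSON response
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            reader.lines().forEach(System.out::println);
        }
    }
}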

 

Adding a ReST call to start a Process Instance

We now know we have one process definition that we can use to start a process instance. Let’s create a ReST call that can be used to do this. It will take one parameter: the process definition key.

 

In the org/activiti/training/activiti7apibasicprocessusertaskservicetaskevents/rest package add a new Spring MVC controller called ProcessStartController as follows:

package org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.rest;

import org.activiti.api.process.model.ProcessInstance;
import org.activiti.api.process.model.builders.ProcessPayloadBuilder;
import org.activiti.api.process.runtime.ProcessRuntime;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.Date;

@RestController
public class ProcessStartController {

    private Logger logger = LoggerFactory.getLogger(ProcessStartController.class);

    @Autowired
    private ProcessRuntime processRuntime;

    @RequestMapping("/start-process")
    public ProcessInstance startProcess(
            @RequestParam(value="processDefinitionKey", defaultValue="SampleProcess") String processDefinitionKey) {
        ProcessInstance processInstance = processRuntime.start(ProcessPayloadBuilder
                .start()
                .withProcessDefinitionKey(processDefinitionKey)
                .withProcessInstanceName("Sample Process: " + new Date())
                .withVariable("someProcessVar", "someProcVarValue")
                .build());
        logger.info(">>> Created Process Instance: " + processInstance);

        return processInstance;
    }
}

 

Start by injecting the ProcessRuntime Spring bean so we can use the Process Runtime API, which has the start() method that we need. The /start-process?processDefinitionKey={processDefinitionKey} URL is used for this ReST call.

 

Now, package and run the application as described before. When starting the application you should see logs that indicate that everything has been implemented correctly:

...

2018-08-29 10:34:34.312  INFO 27844 --- [  main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/start-process]}" onto public org.activiti.runtime.api.model.ProcessInstance org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.rest.ProcessStartController.startProcess(java.lang.String)

...

 

Go to a browser and type in the following URL, using the processDefinitionKey that matches your deployed process definition (you can see the key by calling /process-definitions as done previously): http://localhost:8080/start-process?processDefinitionKey=sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d. You should be presented with a login dialog (or not, if credentials are cached). Type in the username and password of a user that has the role ROLE_ACTIVITI_USER. You should then get a response looking something like this:

{
id: "b0a28a43-fa2b-11e8-9c34-acde48001122",
name: "Sample Process: Fri Dec 07 14:23:38 GMT 2018",
processDefinitionId: "c68315b2-fa2a-11e8-9c34-acde48001122",
processDefinitionKey: "sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d",
startDate: "2018-12-07T14:23:38.878+0000",
status: "RUNNING"
}

In the logs you should see the following:

 

2018-08-29 10:35:18.379  INFO 27844 --- [nio-8080-exec-4] o.a.t.a.rest.ProcessStartController      : >>> Created Process Instance: ProcessInstance{id='b0a28a43-fa2b-11e8-9c34-acde48001122', name='Sample Process: Fri Dec 07 14:23:38 GMT 2018', description='null', processDefinitionId='c68315b2-fa2a-11e8-9c34-acde48001122', processDefinitionKey='sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d', initiator='null', startDate=Fri Dec 07 14:23:38 GMT 2018, businessKey='null', status=RUNNING}

 

Adding a ReST call to list Process Instances

It is useful to be able to list active process instances, and also to get more metadata about a process instance, such as where in the execution flow it is. Let’s create a couple of ReST calls for this that can come in handy.

 

In the org/activiti/training/activiti7apibasicprocessusertaskservicetaskevents/rest package add a new Spring MVC controller called ProcessInstanceController as follows:

package org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.rest;

import org.activiti.api.process.model.ProcessInstance;
import org.activiti.api.process.model.ProcessInstanceMeta;
import org.activiti.api.process.runtime.ProcessRuntime;
import org.activiti.api.runtime.shared.query.Pageable;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

/**
 * ReST controller to get some info about a process instance
 */
@RestController
public class ProcessInstanceController {

    private Logger logger = LoggerFactory.getLogger(ProcessInstanceController.class);

    @Autowired
    private ProcessRuntime processRuntime;

    @GetMapping("/process-instances")
    public List<ProcessInstance> getProcessInstances() {
        List<ProcessInstance> processInstances =
                processRuntime.processInstances(Pageable.of(0, 10)).getContent();

        return processInstances;
    }

    @GetMapping("/process-instance-meta")
    public ProcessInstanceMeta getProcessInstanceMeta(@RequestParam(value="processInstanceId") String processInstanceId) {
        ProcessInstanceMeta processInstanceMeta = processRuntime.processInstanceMeta(processInstanceId);

        return processInstanceMeta;
    }
}

Start by injecting the ProcessRuntime Spring bean so we can use the Process Runtime API, which has the processInstances() and processInstanceMeta() methods that we need. The /process-instances URL is used for the first ReST call, which just returns the first 10 active process instances. The second URL, /process-instance-meta?processInstanceId={processInstanceId}, gives information about the process instance, such as what activity (or activities) it is currently waiting on.

 

Now, package and run the application as described before. When starting the application you should see logs that indicate that everything has been implemented correctly:

...

2018-08-30 10:35:46.192  INFO 36251 --- [  main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/process-instance-meta],methods=[GET]}" onto public org.activiti.runtime.api.model.ProcessInstanceMeta org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.rest.ProcessInstanceController.getProcessInstanceMeta(java.lang.String)

2018-08-30 10:35:46.193  INFO 36251 --- [  main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/process-instances],methods=[GET]}" onto public java.util.List<org.activiti.runtime.api.model.ProcessInstance> org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.rest.ProcessInstanceController.getProcessInstances()

...

 

As we are running an in-memory database, the process instance we previously started will be gone after a restart, so create a new one with http://localhost:8080/start-process?processDefinitionKey=sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d.

 

Now, to list process instances hit the http://localhost:8080/process-instances URL. You should get a response looking something like this:

[
{
id: "b0a28a43-fa2b-11e8-9c34-acde48001122",
name: "Sample Process: Fri Dec 07 14:23:38 GMT 2018",
processDefinitionId: "c68315b2-fa2a-11e8-9c34-acde48001122",
processDefinitionKey: "sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d",
startDate: "2018-12-07T14:23:38.878+0000",
status: "RUNNING"
}
]

We can then query this process instance for more information with the other ReST call. Type in the following URL (use the processInstanceId that was returned in the call above): http://localhost:8080/process-instance-meta?processInstanceId=b0a28a43-fa2b-11e8-9c34-acde48001122. You should then get a response looking something like this:

{
processInstanceId: "b0a28a43-fa2b-11e8-9c34-acde48001122",
activeActivitiesIds:
[
"UserTask_0b6cp1l"
]
}

The first activity in our process definition is User Task 1, so that’s where in the process definition the execution currently is. The process engine is waiting for the following user task to be completed:

<bpmn2:userTask id="UserTask_0b6cp1l" name="User Task 1" activiti:assignee="testuser">
  <bpmn2:incoming>SequenceFlow_0qdq7ff</bpmn2:incoming>
  <bpmn2:outgoing>SequenceFlow_1sc9dgy</bpmn2:outgoing>
</bpmn2:userTask>

Being able to list and inspect process instances is useful when you get stuck and don’t know exactly where a process instance is waiting in the process definition.

 

Adding a ReST call to list available User Tasks

With the process running we should be able to list available tasks and see the user task that is the first activity in the process definition.

 

In the org/activiti/training/activiti7apibasicprocessusertaskservicetaskevents/rest package add a new Spring MVC controller called TaskManagementController as follows:

package org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.rest;

import org.activiti.api.runtime.shared.query.Page;
import org.activiti.api.runtime.shared.query.Pageable;
import org.activiti.api.task.model.Task;
import org.activiti.api.task.model.builders.TaskPayloadBuilder;
import org.activiti.api.task.runtime.TaskAdminRuntime;
import org.activiti.api.task.runtime.TaskRuntime;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

@RestController
public class TaskManagementController {

    private Logger logger = LoggerFactory.getLogger(TaskManagementController.class);

    @Autowired
    private TaskRuntime taskRuntime;

    @GetMapping("/my-tasks")
    public List<Task> getMyTasks() {
        Page<Task> tasks = taskRuntime.tasks(Pageable.of(0, 10));
        logger.info("> My Available Tasks: " + tasks.getTotalItems());

        for (Task task : tasks.getContent()) {
            logger.info("\t> My User Task: " + task);
        }

        return tasks.getContent();
    }
}

 

Start by injecting the TaskRuntime Spring bean so we can use the Task Runtime API. This API works with tasks related to the current user, so when we call the tasks() method it returns the tasks assigned to the currently logged-in user. The API has the following methods:

public interface TaskRuntime {
    TaskRuntimeConfiguration configuration();
    Task task(String var1);
    Page<Task> tasks(Pageable var1);
    Page<Task> tasks(Pageable var1, GetTasksPayload var2);
    Task create(CreateTaskPayload var1);
    Task claim(ClaimTaskPayload var1);
    Task release(ReleaseTaskPayload var1);
    Task complete(CompleteTaskPayload var1);
    Task update(UpdateTaskPayload var1);
    Task delete(DeleteTaskPayload var1);
    List<VariableInstance> variables(GetTaskVariablesPayload var1);
    void setVariables(SetTaskVariablesPayload var1);
}

In this case we just need the tasks() method. The /my-tasks URL path is used for this ReST GET call.

 

We can also add a ReST call to use when we want to see all tasks assigned across all active process instances. This can be useful for support and management purposes, such as when you want to reassign a task or complete a task on behalf of somebody. This call will require admin credentials (i.e. we need to be logged in as a user with ROLE_ACTIVITI_ADMIN). We also need to use a different runtime API called TaskAdminRuntime. Here is the method:

...
@Autowired
private TaskAdminRuntime taskAdminRuntime;
...
@GetMapping("/all-tasks")
public List<Task> getAllTasks() {
    Page<Task> tasks = taskAdminRuntime.tasks(Pageable.of(0, 10));
    logger.info("> All Available Tasks: " + tasks.getTotalItems());

    for (Task task : tasks.getContent()) {
        logger.info("\t> User Task: " + task);
    }

    return tasks.getContent();
}

Now, package and run the application as described before. When starting the application you should see logs that indicate that everything has been implemented correctly:

...

2018-08-30 09:09:47.757  INFO 36063 --- [  main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/my-tasks],methods=[GET]}" onto public java.util.List<org.activiti.runtime.api.model.Task> org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.rest.TaskManagementController.getMyTasks()

2018-08-30 09:09:47.757  INFO 36063 --- [  main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/all-tasks],methods=[GET]}" onto public java.util.List<org.activiti.runtime.api.model.Task> org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.rest.TaskManagementController.getAllTasks()

...

 

As we are running an in-memory database, the process instance we previously started will be gone after a restart, so create a new one with http://localhost:8080/start-process?processDefinitionKey=sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d.

 

I started the process instance logged in as user mbergljung. Because the user task is assigned to user testuser, it will not show up when we call taskRuntime.tasks(). We will have to log out and log in again as testuser before continuing with the ReST call below (the easiest way is to clear the browser cache before the next ReST call).

 

Then type in the following URL: http://localhost:8080/my-tasks. You should get a response looking something like this:

 

[
{
id: "b0a60cb6-fa2b-11e8-9c34-acde48001122",
name: "User Task 1",
status: "ASSIGNED",
assignee: "testuser",
createdDate: "2018-12-07T14:23:38.896+0000",
priority: 50,
processDefinitionId: "c68315b2-fa2a-11e8-9c34-acde48001122",
processInstanceId: "b0a28a43-fa2b-11e8-9c34-acde48001122"
}
]

 

In the logs you should see the following:

 

2018-08-30 09:50:09.753  INFO 36063 --- [nio-8080-exec-9] o.a.t.a.rest.TaskManagementController    : > My Available Tasks: 1

2018-08-30 09:50:09.754  INFO 36063 --- [nio-8080-exec-9] o.a.t.a.rest.TaskManagementController    : > My User Task: TaskImpl{id='b0a60cb6-fa2b-11e8-9c34-acde48001122', owner='null', assignee='testuser', name='User Task 1', description='null', createdDate=Fri Dec 07 14:23:38 GMT 2018, claimedDate=null, dueDate=null, priority=50, processDefinitionId='c68315b2-fa2a-11e8-9c34-acde48001122', processInstanceId='b0a28a43-fa2b-11e8-9c34-acde48001122', parentTaskId='null', formKey='null', status=ASSIGNED}

 

Now, log out by clearing the browser cache and then hit the http://localhost:8080/all-tasks URL. When asked to log in, use the admin/1234 credentials. As a response you should see the same user task.

 

Adding a ReST call to complete a User Task

We are now at a stage where we should be able to implement a ReST call that can be used to complete the user task that is assigned to testuser and that we just listed.

 

In the same controller we just used, called TaskManagementController, implement the following ReST call:  

 

...
@RequestMapping("/complete-task")
public String completeTask(@RequestParam(value="taskId") String taskId) {
    taskRuntime.complete(TaskPayloadBuilder.complete()
            .withTaskId(taskId).build());
    logger.info(">>> Completed Task: " + taskId);

    return "Completed Task: " + taskId;
}

 

Here we use the complete() method of the TaskRuntime API. We need to be logged in with the user that is assigned the task (so testuser) to complete it. The /complete-task?taskId={taskId} URL path is used for this ReST GET call.

 

Now, package and run the application as described before. When starting the application you should see logs that indicate that everything has been implemented correctly:

...

2018-08-30 10:07:36.214  INFO 36167 --- [  main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/complete-task]}" onto public java.lang.String org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.rest.TaskManagementController.completeTask(java.lang.String)

...

 

As we are running an in-memory database, the process instance we previously started will be gone after a restart, so create a new one with http://localhost:8080/start-process?processDefinitionKey=sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d.

 

I started the process instance logged in as user mbergljung. Because the user task is assigned to user testuser, it will not show up when we call taskRuntime.tasks(), and I will not be able to complete it.

 

We will have to log out and log in again as testuser before continuing with the ReST calls below (the easiest way is to clear the browser cache first).

 

Then type in the following URL: http://localhost:8080/my-tasks. You should get a response looking something like this:

 

[
{
id: "b0a60cb6-fa2b-11e8-9c34-acde48001122",
name: "User Task 1",
status: "ASSIGNED",
assignee: "testuser",
createdDate: "2018-12-07T14:23:38.896+0000",
priority: 50,
processDefinitionId: "c68315b2-fa2a-11e8-9c34-acde48001122",
processInstanceId: "b0a28a43-fa2b-11e8-9c34-acde48001122"
}
]

 

Make a note of the Task ID and then use it in the http://localhost:8080/complete-task?taskId=b0a60cb6-fa2b-11e8-9c34-acde48001122 call to complete the task. This will have the process instance transition into the next activity, which in our case is a service task.

 

As we have not yet implemented the service task we will see the following exception in the logs and in the browser:

 

org.activiti.engine.ActivitiException: No bean named 'serviceTask1Impl' available

 

So let’s fix the service task implementation.

 

Implementing Service Tasks and Listeners

Service tasks and listeners are implemented differently in Activiti 7 than in previous versions.

 

Implementing the Service Task Spring Bean

The service task is the last activity in our process definition. Let’s implement it so we can complete the process instance.

 

What we need to do is create a Spring bean with the name serviceTask1Impl that will represent the implementation of the Service Task. The Spring bean needs to implement the org.activiti.api.process.runtime.connector.Connector interface. This new Connector interface is the natural evolution of Java Delegates, and Activiti 7 Core will try to reuse your Java Delegates by wrapping them up inside a Connector implementation.

 

Create a new sub-package called connectors in the org/activiti/training/activiti7apibasicprocessusertaskservicetaskevents package. Then add a new Spring bean named ServiceTask1Connector in this new package as follows:

package org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.connectors;

import org.activiti.api.process.model.IntegrationContext;
import org.activiti.api.process.runtime.connector.Connector;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

@Service(value = "serviceTask1Impl")
public class ServiceTask1Connector implements Connector {

    private Logger logger = LoggerFactory.getLogger(ServiceTask1Connector.class);

    @Override
    public IntegrationContext execute(IntegrationContext integrationContext) {
        logger.info("Some service task logic... [processInstanceId=" + integrationContext.getProcessInstanceId() + "]");

        return integrationContext;
    }
}

The connector is wired up automatically to the ProcessRuntime using the Bean name, in this example “serviceTask1Impl”. This bean name is picked up from the implementation property of the serviceTask element inside our process definition:

 

<bpmn2:serviceTask id="ServiceTask_1wg38me" name="Service Task 1" implementation="serviceTask1Impl">

 

Connectors receive an IntegrationContext with process instance information and the process variables, and return a modified IntegrationContext with the results that need to be mapped back to process variables.
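
For example, if the connector needed to read the someProcessVar variable that we set when starting the process and return a result, the execute() method could be extended along these lines (a sketch only; the serviceTask1Result variable name is made up here, and the IntegrationContext accessor methods should be verified against the Activiti 7 API version you are using):

@Override
public IntegrationContext execute(IntegrationContext integrationContext) {
    // Read a process variable that was set when the process instance was started
    Object someProcessVar = integrationContext.getInBoundVariables().get("someProcessVar");
    logger.info("Service task received someProcessVar=" + someProcessVar);

    // Return a result; out-bound variables are mapped back to process variables
    integrationContext.addOutBoundVariable("serviceTask1Result", "processed: " + someProcessVar);

    return integrationContext;
}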

 

Now, package and run the application as described before. As we are running an in-memory database, the process instance we previously started will be gone after a restart, so create a new one with http://localhost:8080/start-process?processDefinitionKey=sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d. Log in as user testuser so we can keep the same user throughout.

 

Then type in the following URL: http://localhost:8080/my-tasks. You should get a response looking something like this:

[
{
id: "79ce7445-fc4b-11e8-95e6-acde48001122",
name: "User Task 1",
status: "ASSIGNED",
assignee: "testuser",
createdDate: "2018-12-10T07:16:13.109+0000",
priority: 50,
processDefinitionId: "723ee071-fc4b-11e8-95e6-acde48001122",
processInstanceId: "79cb3ff2-fc4b-11e8-95e6-acde48001122"
}
]

Make a note of the Task ID and then use it in the http://localhost:8080/complete-task?taskId=79ce7445-fc4b-11e8-95e6-acde48001122 call to complete the task. This will have the process instance transition into the next activity, which in our case is a service task. You should see the following in the logs:

 

2018-12-10 07:17:05.090  INFO 39199 --- [nio-8080-exec-7] o.a.t.a.c.ServiceTask1Connector          : Some service task logic... [processInstanceId=79cb3ff2-fc4b-11e8-95e6-acde48001122]

 

It would be good now to check if the process instance has completed. We can do that with the http://localhost:8080/process-instances call we developed earlier on. It should return an empty list.

 

Implementing Process Listeners and Task Listeners

Process Listeners and Task Listeners were traditionally implemented in Activiti with proprietary extensions. These extensions also meant that the code ran synchronously with the process execution, which isn’t good in a Cloud deployment. In Activiti 7 the process engine emits events that we can listen to and subscribe to asynchronously.

 

Process Listeners are created by implementing the interface  org.activiti.api.process.runtime.events.listener.ProcessRuntimeEventListener.

 

Create a new sub-package called listeners in the org/activiti/training/activiti7apibasicprocessusertaskservicetaskevents package. Then add a new Spring bean named MyProcessEventListener in this new package as follows:

package org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.listeners;

import org.activiti.api.model.shared.event.RuntimeEvent;
import org.activiti.api.model.shared.event.VariableCreatedEvent;
import org.activiti.api.process.model.events.SequenceFlowTakenEvent;
import org.activiti.api.process.runtime.events.*;
import org.activiti.api.process.runtime.events.listener.ProcessRuntimeEventListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

@Service
public class MyProcessEventListener implements ProcessRuntimeEventListener {

    private Logger logger = LoggerFactory.getLogger(MyProcessEventListener.class);

    @Override
    public void onEvent(RuntimeEvent runtimeEvent) {
        if (runtimeEvent instanceof ProcessStartedEvent)
            logger.info("Do something, process is started: " + runtimeEvent.toString());
        else if (runtimeEvent instanceof ProcessCompletedEvent)
            logger.info("Do something, process is completed: " + runtimeEvent.toString());
        else if (runtimeEvent instanceof ProcessCancelledEvent)
            logger.info("Do something, process is cancelled: " + runtimeEvent.toString());
        else if (runtimeEvent instanceof ProcessSuspendedEvent)
            logger.info("Do something, process is suspended: " + runtimeEvent.toString());
        else if (runtimeEvent instanceof ProcessResumedEvent)
            logger.info("Do something, process is resumed: " + runtimeEvent.toString());
        else if (runtimeEvent instanceof ProcessCreatedEvent)
            logger.info("Do something, process is created: " + runtimeEvent.toString());
        else if (runtimeEvent instanceof SequenceFlowTakenEvent)
            logger.info("Do something, sequence flow is taken: " + runtimeEvent.toString());
        else if (runtimeEvent instanceof VariableCreatedEvent)
            logger.info("Do something, variable was created: " + runtimeEvent.toString());
        else
            logger.info("Unknown event: " + runtimeEvent.toString());
    }
}

The process listener is wired up automatically to the ProcessRuntime by implementing the ProcessRuntimeEventListener interface. Listeners receive a RuntimeEvent with all the information about the event, and we can look at the subclass type to figure out what the event is about, such as ProcessCompletedEvent.

 

In the same way we can create task listeners by implementing the org.activiti.api.task.runtime.events.listener.TaskRuntimeEventListener interface.

 

Add a new Spring bean named MyTaskEventListener in this new package as follows:

package org.activiti.training.activiti7apibasicprocessusertaskservicetaskevents.listeners;

import org.activiti.api.model.shared.event.RuntimeEvent;
import org.activiti.api.task.model.Task;
import org.activiti.api.task.runtime.events.*;
import org.activiti.api.task.runtime.events.listener.TaskRuntimeEventListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

@Service
public class MyTaskEventListener implements TaskRuntimeEventListener {

    private Logger logger = LoggerFactory.getLogger(MyTaskEventListener.class);

    @Override
    public void onEvent(RuntimeEvent runtimeEvent) {
        if (runtimeEvent instanceof TaskActivatedEvent)
            logger.info("Do something, task is activated: " + runtimeEvent.toString());
        else if (runtimeEvent instanceof TaskAssignedEvent) {
            TaskAssignedEvent taskEvent = (TaskAssignedEvent) runtimeEvent;
            Task task = taskEvent.getEntity();
            logger.info("Do something, task is assigned: " + task.toString());
        } else if (runtimeEvent instanceof TaskCancelledEvent)
            logger.info("Do something, task is cancelled: " + runtimeEvent.toString());
        else if (runtimeEvent instanceof TaskCompletedEvent)
            logger.info("Do something, task is completed: " + runtimeEvent.toString());
        else if (runtimeEvent instanceof TaskCreatedEvent)
            logger.info("Do something, task is created: " + runtimeEvent.toString());
        else if (runtimeEvent instanceof TaskSuspendedEvent)
            logger.info("Do something, task is suspended: " + runtimeEvent.toString());
        else
            logger.info("Unknown event: " + runtimeEvent.toString());
    }
}

Now, package and run the application as described before. Create a new Process Instance with http://localhost:8080/start-process?processDefinitionKey=sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d.

 

Already at this point we should see a lot of event logging:

 

2018-12-10 07:22:24.319  INFO 39199 --- [nio-8080-exec-4] o.a.t.a.l.MyProcessEventListener         : Do something, process is created: org.activiti.runtime.api.event.impl.ProcessCreatedEventImpl@3b555ed0

2018-12-10 07:22:24.321  INFO 39199 --- [nio-8080-exec-4] o.a.t.a.l.MyProcessEventListener         : Do something, process is started: org.activiti.runtime.api.event.impl.ProcessStartedEventImpl@45c34e86

2018-12-10 07:22:24.322  INFO 39199 --- [nio-8080-exec-4] o.a.t.a.l.MyProcessEventListener         : Unknown event: org.activiti.api.runtime.event.impl.BPMNActivityStartedEventImpl@70e78827

2018-12-10 07:22:24.323  INFO 39199 --- [nio-8080-exec-4] o.a.t.a.l.MyProcessEventListener         : Unknown event: org.activiti.api.runtime.event.impl.BPMNActivityCompletedEventImpl@2ea9f9f7

2018-12-10 07:22:24.325  INFO 39199 --- [nio-8080-exec-4] o.a.t.a.l.MyProcessEventListener         : Do something, sequence flow is taken: org.activiti.api.runtime.event.impl.SequenceFlowTakenImpl@3f1b812c

2018-12-10 07:22:24.325  INFO 39199 --- [nio-8080-exec-4] o.a.t.a.l.MyProcessEventListener         : Unknown event: org.activiti.api.runtime.event.impl.BPMNActivityStartedEventImpl@60734f37

2018-12-10 07:22:24.328  INFO 39199 --- [nio-8080-exec-4] o.a.t.a.listeners.MyTaskEventListener    : Do something, task is created: org.activiti.runtime.api.event.impl.TaskCreatedEventImpl@2c9945ae

2018-12-10 07:22:24.329  INFO 39199 --- [nio-8080-exec-4] o.a.t.a.listeners.MyTaskEventListener    : Do something, task is assigned: TaskImpl{id='5711101b-fc4c-11e8-95e6-acde48001122', owner='null', assignee='testuser', name='User Task 1', description='null', createdDate=Mon Dec 10 07:22:24 GMT 2018, claimedDate=null, dueDate=null, priority=50, processDefinitionId='723ee071-fc4b-11e8-95e6-acde48001122', processInstanceId='570ffea8-fc4c-11e8-95e6-acde48001122', parentTaskId='null', formKey='null', status=ASSIGNED}

 

Introduction

So far in this article series about Activiti 7 we have just used out-of-the-box deployments with preconfigured business processes and business logic. What you really want to do is define your own business process and business logic and deploy it to a Kubernetes cluster. In this article we will take the process definition we designed in the previous article and deploy it as a custom Runtime Bundle. The service task in the process definition will be implemented with a custom Cloud Connector.

 

The runtime bundle and cloud connector that we develop will be deployed together with the Activiti Full Example that we worked with in the first article. By doing this we will see how several runtime bundles, with different business processes, and several cloud connectors, with different business logic, can be deployed together with Kubernetes and Helm.

 

Activiti 7 Deep Dive Article Series 

This article is part of a series of articles covering Activiti 7 in detail; they should be read in the order listed:

 

  1. Deploying and Running a Business Process
  2. Using the Modeler to Design a Business Process
  3. Building, Deploying, and Running a Custom Business Process - this article
  4. Using the Core Libraries

 

Prerequisites

  • You have read and worked through the "Activiti 7 - Using the Modeler to Design Business Processes" article
  • The Activiti 7 Full Example is running

 

Source Code

You can find the source code related to this article here:

 

Creating a Helm Repository

We will be creating two new services: one Runtime Bundle with the process definition and one Cloud Connector with the service task implementation. They will require their own deployment packages, and these packages need to be stored somewhere so they can be picked up by other Helm Charts. For example, the Full Example that we have been using uses several other Helm packages for things like infrastructure and modeling.

 

We want to make use of the Full Example setup and just add our packages to it. So we would end up with a Full Example that has two Runtime Bundles and two Cloud Connectors. In order to do this our Helm Charts need to be available in some Helm Repository.

 

A Helm Repository is an HTTP server that serves an index.yaml file plus all the chart files. You can use any HTTP server, but the easiest way to do this is to use GitHub Pages. Create a GitHub project that will be used for this (note: you need to have a GitHub account to do this):

 

 

In this case I’m creating a new GitHub project called https://github.com/gravitonian/helm-repo to hold my Helm Repository.

 

Now, clone the GitHub project locally and create a branch called gh-pages:

 

$ git clone https://github.com/gravitonian/helm-repo.git

Cloning into 'helm-repo'...

warning: You appear to have cloned an empty repository.

$ cd helm-repo/

$ git checkout -b gh-pages

Switched to a new branch 'gh-pages'

 

The next step is to add an index.yaml file to the repo; it will contain a list of the Helm chart packages that are available in this repository:

 

$ touch index.yaml

$ git add .

$ git commit -m "Added helm repo index.yaml"

[gh-pages (root-commit) 9d76c1e] Added helm repo index.yaml

1 file changed, 0 insertions(+), 0 deletions(-)

create mode 100644 index.yaml

$ git push --set-upstream origin gh-pages

Counting objects: 3, done.

...

To https://github.com/gravitonian/helm-repo.git

* [new branch]      gh-pages -> gh-pages

Branch 'gh-pages' set up to track remote branch 'gh-pages' from 'origin'.

 

Your GitHub repo should look something like this now:

 

 

Click on the Settings link to the right and scroll down to the GitHub Pages config section; it should look something like this:

 

 

We should not have to change anything here. Just grab the URL where the site is published (i.e. https://gravitonian.github.io/helm-repo/); this will be our Helm Repository URL.

 

Before we can add this new Helm Repo to our local Helm installation there needs to be a chart uploaded so the index.yaml file looks correct. We will do that next when creating a Runtime Bundle.

 

Building a Runtime Bundle

An Activiti 7 Runtime Bundle contains and executes the business processes that we want to run. It has its own ReST API that clients can use to start and interact with those processes. We have an Example Runtime Bundle project here, so it should not be too difficult to replicate it and build a custom Runtime Bundle with our process definition.

 

A Runtime Bundle is the cloud version of the process engine. If you ever expose Activiti (the process engine) as a service, then you are defining a Runtime Bundle. The following are some facts about Activiti Runtime Bundles:

 

  • In the context of Activiti 7 a Runtime Bundle represents a stateless instance of the process engine, which is in charge of executing an immutable set of process definitions.
  • You cannot deploy new process definitions to a Runtime Bundle, instead you will create a new immutable version of your Runtime Bundle for your updated process definitions.
  • Runtime Bundles expose a synchronous ReST API and an asynchronous Message Based API.
  • Runtime Bundles emit events (in a fire & forget fashion) using a default ActivitiEventListener, which listens to the internal Process Engine events and transforms them into messages.
  • When a Service Task (BPMN) is executed the Runtime Bundle will emit Integration Events so we can perform System to System integration. These Integration Events will be picked up by Activiti Cloud Connectors to perform integrations.

 

The following picture illustrates this:

 

 

A Runtime Bundle can be implemented on top of a Spring Boot app. We will deploy the Sample Runtime Bundle as part of the out-of-the-box Full Example that we took for a test drive. So the Example Runtime Bundle and our Sample Runtime Bundle will live next to each other in the same release deployment:

 

 

Generating a Spring Boot 2 App

It’s really easy to get going with a Spring Boot application. Just head over to https://start.spring.io/ and fill in the data for the app as follows:

 

 

Make sure to use Spring Boot version 2.0.x with Activiti 7 Beta 1 - 3; Beta 4 should be aligned with Spring Boot 2.1.x.

 

You don’t have to use the same Group (org.activiti.training) and Artifact (sample-activiti7-runtime-bundle) names as I do; just use whatever you like. However, if you copy code from this article it might be easier if you use the same package names (i.e. the same group). Search for the H2 dependency and add it so it’s included in the Maven POM; the Process Engine needs a database to store process instance and task instance data. Then click the Generate Project button. The finished Spring Boot 2 Maven project will automatically download as a ZIP. Unpack it somewhere.
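As an alternative to the web UI, the Spring Initializr also has an HTTP API, so roughly the same project can be generated from the command line. This is just a sketch; the dependency list (web, h2) is an assumption based on what we select in the UI, and you may want to add -d bootVersion=... to pin a Spring Boot 2.0.x release as noted above:

$ curl https://start.spring.io/starter.zip -o sample-activiti7-runtime-bundle.zip \
    -d type=maven-project \
    -d baseDir=sample-activiti7-runtime-bundle \
    -d groupId=org.activiti.training \
    -d artifactId=sample-activiti7-runtime-bundle \
    -d dependencies=web,h2

$ unzip sample-activiti7-runtime-bundle.zip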

 

Test the standard Spring Boot App

Let’s make sure that the Spring Boot application works before we continue with the Activiti stuff. This involves two steps. First build the app JAR and then run the app JAR.

 

Building the application JAR:

 

$ cd sample-activiti7-runtime-bundle/

sample-activiti7-runtime-bundle mbergljung$ mvn package

[INFO] Scanning for projects...

[INFO]

[INFO] ------------------------------------------------------------------------

[INFO] Building sample-activiti7-runtime-bundle 0.0.1-SNAPSHOT

[INFO] ------------------------------------------------------------------------

[INFO]

...

[INFO] --- maven-jar-plugin:3.0.2:jar (default-jar) @ sample-activiti7-runtime-bundle---

[INFO] Building jar: /Users/mbergljung/IDEAProjects/sample-activiti7-runtime-bundle/target/sample-activiti7-runtime-bundle-0.0.1-SNAPSHOT.jar

[INFO]

[INFO] --- spring-boot-maven-plugin:2.0.4.RELEASE:repackage (default) @ sample-activiti7-runtime-bundle---

[INFO] ------------------------------------------------------------------------

[INFO] BUILD SUCCESS

[INFO] ------------------------------------------------------------------------

 

Running the application JAR:

 

sample-activiti7-runtime-bundle mbergljung$ java -jar target/sample-activiti7-runtime-bundle-0.0.1-SNAPSHOT.jar

 

 .  ____          _ __ _ _

/\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \

( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \

\\/  ___)| |_)| | | | | || (_| |  ) ) ) )

 ' |____| .__|_| |_|_| |_\__, | / / / /

=========|_|==============|___/=/_/_/_/

:: Spring Boot ::        (v2.0.4.RELEASE)

 

2018-09-12 09:57:58.567  INFO 33115 --- [  main] .SampleActiviti7RuntimeBundleApplication : Starting SampleActiviti7RuntimeBundleApplication v0.0.1-SNAPSHOT on MBP512-MBERGLJUNG-0917 with PID 33115 (/Users/mbergljung/IDEAProjects/sample-activiti7-runtime-bundle/target/sample-activiti7-runtime-bundle-0.0.1-SNAPSHOT.jar started by mbergljung in /Users/mbergljung/IDEAProjects/sample-activiti7-runtime-bundle)

2018-09-12 09:57:58.571  INFO 33115 --- [  main] .SampleActiviti7RuntimeBundleApplication : No active profile set, falling back to default profiles: default

2018-09-12 09:57:58.621  INFO 33115 --- [  main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@2db0f6b2: startup date [Wed Sep 12 09:57:58 BST 2018]; root of context hierarchy

2018-09-12 09:57:59.168  INFO 33115 --- [  main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup

2018-09-12 09:57:59.181  INFO 33115 --- [  main] .SampleActiviti7RuntimeBundleApplication : Started SampleActiviti7RuntimeBundleApplication in 0.892 seconds (JVM running for 1.313)

2018-09-12 09:57:59.183  INFO 33115 --- [ Thread-2] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@2db0f6b2: startup date [Wed Sep 12 09:57:58 BST 2018]; root of context hierarchy

2018-09-12 09:57:59.184  INFO 33115 --- [ Thread-2] o.s.j.e.a.AnnotationMBeanExporter        : Unregistering JMX-exposed beans on shutdown

 

The application does not contain much so it will exit by itself.

 

Adding Activiti 7 Runtime Bundle Dependencies to the App

The Spring Boot app has most of the dependencies that we need, except for the Activiti 7 Runtime Bundle dependencies. So let’s add them. We can use a BOM (Bill-of-Materials) dependency that will bring in all the needed Activiti 7 dependency management configurations, including the correct versions of all dependencies.

 

Add the following to the pom.xml:

 

 

<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.activiti.cloud.dependencies</groupId>
<artifactId>activiti-cloud-dependencies</artifactId>
<version>7.0.0.Beta3</version>
<scope>import</scope>
<type>pom</type>
</dependency>
</dependencies>
</dependencyManagement>

 

This will import all the dependency management configurations for Activiti 7 Beta 3. Now we just need to add an Activiti 7 Runtime Bundle dependency that supports running the Activiti process engine, ReST APIs, events, etc. Add the following dependency to the pom.xml:

 

<dependency>
<groupId>org.activiti.cloud.rb</groupId>
<artifactId>activiti-cloud-starter-runtime-bundle</artifactId>
</dependency>

 

This will bring in all the Activiti and Spring dependencies needed to run the Activiti 7 Runtime Bundle embedded in a Spring Boot application.

 

Make the App an Activiti Runtime Bundle

We can now use a so-called Spring Boot Starter to turn this app into an Activiti 7 Runtime Bundle. Add the @ActivitiRuntimeBundle annotation to the sample-activiti7-runtime-bundle/src/main/java/org/activiti/training/sampleactiviti7runtimebundle/SampleActiviti7RuntimeBundleApplication.java class as follows:

 

package org.activiti.training.sampleactiviti7runtimebundle;

import org.activiti.cloud.starter.rb.configuration.ActivitiRuntimeBundle;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
@ActivitiRuntimeBundle
public class SampleActiviti7RuntimeBundleApplication {

    public static void main(String[] args) {
        SpringApplication.run(SampleActiviti7RuntimeBundleApplication.class, args);
    }
}

 

We cannot yet run the application with these new dependencies, as it will look for process definitions in the resources/processes directory; if this directory does not exist, an exception is thrown and the app halts. But even before that, another exception is thrown about a missing spring.application.name property, so we need to add that first.

 

Adding the App Name property

To add the Spring Application Name property open up the sample-activiti7-runtime-bundle/src/main/resources/application.properties file and add the following property:

 

spring.application.name=${ACT_RB_APP_NAME:sample-activiti7-rb-app}

 

The sample-activiti7-rb-app name will be the default one in our case. But we can also override this name by setting the ACT_RB_APP_NAME parameter for the Docker container. This property is very important as it will be used to create the new URL for the Runtime Bundle’s ReST API, for example: {{gatewayUrl}}/sample-activiti7-rb-app/v1/process-definitions

 

The application name also identifies this application in a microservice environment and is used when registering with a service registry such as Netflix Eureka service registry. It is also used to look up <applicationName>[-<profile>].[properties|yml] in Spring Cloud Config Server as well as configuration in other service registries like Hashicorp’s Consul or Apache Zookeeper.

 

Adding the Process Definition to the App

We will now add the custom sample process definition XML file to the project. Create a new directory called processes under the src/main/resources directory. Then copy the .bpmn20.xml file into this directory. You should see a directory structure like this now:

 

├── pom.xml
├── src
│   ├── main
│   │ ├── java
│   │ │   └── org
│   │ │       └── activiti
│   │ │           └── training
│   │ │               └── sampleactiviti7runtimebundle
│   │ │                   └── SampleActiviti7RuntimeBundleApplication.java
│   │ └── resources
│   │    ├── application.properties
│   │    └── processes
│   │        └── sample-process.bpmn20.xml

I renamed the BPMN XML file to sample-process.bpmn20.xml.

 

Test the Spring Boot App containing Runtime Bundle Starter and Process Definition

We can now package and run the app to see that all the Activiti Runtime Bundle stuff is loaded properly and that the process definition is read correctly without errors.

 

sample-activiti7-runtime-bundle mbergljung$ mvn clean package -DskipTests

(you need to skip the tests as they try to connect to RabbitMQ; we just want the App JAR created...)

 

sample-activiti7-runtime-bundle mbergljung$ java -jar target/sample-activiti7-runtime-bundle-0.0.1-SNAPSHOT.jar

2018-11-21 10:24:53.479  INFO [-,,,] 13199 --- [         main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@2b80d80f: startup date [Wed Nov 21 10:24:53 GMT 2018]; root of context hierarchy

2018-11-21 10:24:53.927  INFO [-,,,] 13199 --- [         main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$6f8c6440] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

 

 .  ____          _ __ _ _

/\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \

( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \

\\/  ___)| |_)| | | | | || (_| |  ) ) ) )

 ' |____| .__|_| |_|_| |_\__, | / / / /

=========|_|==============|___/=/_/_/_/

:: Spring Boot ::        (v2.0.4.RELEASE)

 

...

2018-11-21 10:25:00.721  INFO [sample-activiti7-rb-app,,,] 13199 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)

2018-11-21 10:25:00.775  INFO [sample-activiti7-rb-app,,,] 13199 --- [           main] o.apache.catalina.core.StandardService : Starting service [Tomcat]

2018-11-21 10:25:00.776  INFO [sample-activiti7-rb-app,,,] 13199 --- [           main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.32

...

2018-11-21 10:25:11.230  INFO [sample-activiti7-rb-app,,,] 13199 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/v1/process-definitions/{id}/meta],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.activiti.cloud.services.rest.api.resources.ProcessDefinitionMetaResource org.activiti.cloud.services.rest.controllers.ProcessDefinitionMetaControllerImpl.getProcessDefinitionMetadata(java.lang.String)

...

2018-11-21 10:25:16.677  INFO [sample-activiti7-rb-app,,,] 13199 --- [           main] o.s.i.monitor.IntegrationMBeanExporter : Registering MessageChannel signalProducer

2018-11-21 10:25:16.684  INFO [sample-activiti7-rb-app,,,] 13199 --- [           main] o.s.i.monitor.IntegrationMBeanExporter : Located managed bean 'org.springframework.integration:type=MessageChannel,name=signalProducer': registering with JMX server as MBean [org.springframework.integration:type=MessageChannel,name=signalProducer]

...

2018-11-21 10:28:17.366  INFO [sample-activiti7-rb-app,,,] 13199 --- [           main] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [rabbitmq:5672]

 

We can see that it all starts up. An Apache Tomcat 8 server is started on port 8080 to support the ReST API, and we can see ReST API request mappings such as /admin/v1/process-definitions, although we cannot really test them yet. We can also see different message channels being created. And finally we can see that the Runtime Bundle is trying to connect to a RabbitMQ message broker on port 5672, which we don’t have running yet. In fact, there are more things we don’t have running, such as Keycloak, which contains the users and the groups. Before we can actually call any of the ReST APIs we need to be logged into Keycloak and have an access token.
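To give an idea of what that last step involves: once Keycloak is up (as it will be in the Full Example deployment), an access token is requested from Keycloak’s standard OpenID Connect token endpoint. A rough sketch, where the host and the testuser credentials are assumptions that depend on your deployment (the realm and client id match the keycloak.* properties we add below):

$ curl -s -X POST "http://activiti-keycloak:8180/auth/realms/activiti/protocol/openid-connect/token" \
    -d "client_id=activiti" \
    -d "grant_type=password" \
    -d "username=testuser" \
    -d "password=password"

The access_token field in the JSON response is what goes into the Authorization: Bearer header of subsequent ReST calls.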

 

The best way to test the new Runtime Bundle is to create a Docker Image with it, and then deploy it to the existing cluster that we tested in the previous section. But before we go ahead and create a new Docker Image there are a few more things we need to configure.

 

Configure App to use Keycloak for authentication

All the Runtime Bundles that are deployed use Keycloak for user authentication, so we need to tell our Runtime Bundle where it can find the Keycloak server.

 

Open up the sample-activiti7-runtime-bundle/src/main/resources/application.properties configuration file and add the following properties:

 

keycloak.auth-server-url=${ACT_KEYCLOAK_URL:http://activiti-keycloak:8180/auth}
keycloak.realm=${ACT_KEYCLOAK_REALM:activiti}
keycloak.resource=${ACT_KEYCLOAK_RESOURCE:activiti}
keycloak.ssl-required=${ACT_KEYCLOAK_SSL_REQUIRED:none}
keycloak.public-client=${ACT_KEYCLOAK_CLIENT:true}
keycloak.security-constraints[0].authRoles[0]=${ACT_KEYCLOAK_USER_ROLE:ACTIVITI_USER}
keycloak.security-constraints[0].securityCollections[0].patterns[0]=${ACT_KEYCLOAK_PATTERNS:/v1/*}
keycloak.security-constraints[1].authRoles[0]=${ACT_KEYCLOAK_ADMIN_ROLE:ACTIVITI_ADMIN}
keycloak.security-constraints[1].securityCollections[0].patterns[0]=/admin/*
keycloak.principal-attribute=${ACT_KEYCLOAK_PRINCIPAL_ATTRIBUTE:preferred-username}
activiti.keycloak.admin-client-app=${ACT_KEYCLOAK_CLIENT_APP:admin-cli}
activiti.keycloak.client-user=${ACT_KEYCLOAK_CLIENT_USER:client}
activiti.keycloak.client-password=${ACT_KEYCLOAK_CLIENT_PASSWORD:client}

Almost all of these properties can have their values set when the app is initialized via the ACT_* property substitutions.
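For example, since these are standard Spring property placeholders, a default can be overridden with an environment variable without rebuilding the JAR (the URL below is made up):

$ ACT_KEYCLOAK_URL=http://my-keycloak.example.com/auth java -jar target/sample-activiti7-runtime-bundle-0.0.1-SNAPSHOT.jar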

 

Configure App to Produce and Consume Async Messages

All the Runtime Bundles that are deployed use Spring Cloud Stream for producing and consuming asynchronous messages. These messages can come from, for example, RabbitMQ, in which case the Runtime Bundle uses Spring AMQP to talk to the broker. We need to tell our Runtime Bundle what the different message destinations are.

 

Open up the sample-activiti7-runtime-bundle/src/main/resources/application.properties configuration file and add the following properties:

 

spring.cloud.stream.bindings.auditProducer.destination=${ACT_RB_AUDIT_PRODUCER_DEST:engineEvents}
spring.cloud.stream.bindings.auditProducer.contentType=${ACT_RB_AUDIT_PRODUCER_CONTENT_TYPE:application/json}
spring.cloud.stream.bindings.myCmdResults.destination=${ACT_RB_COMMAND_RESULTS_DEST:commandResults}
spring.cloud.stream.bindings.myCmdResults.group=${ACT_RB_COMMAND_RESULTS_GROUP:myCmdGroup}
spring.cloud.stream.bindings.myCmdResults.contentType=${ACT_RB_COMMAND_RESULTS_CONTENT_TYPE:application/json}
spring.cloud.stream.bindings.myCmdProducer.destination=${ACT_RB_COMMAND_RESULTS_DEST:commandConsumer}
spring.cloud.stream.bindings.myCmdProducer.contentType=${ACT_RB_COMMAND_RESULTS_CONTENT_TYPE:application/json}
spring.cloud.stream.bindings.signalProducer.destination=${ACT_RB_SIGNAL_PRODUCER_DEST:signalEvent}
spring.cloud.stream.bindings.signalProducer.contentType=${ACT_RB_SIGNAL_PRODUCER_CONTENT_TYPE:application/json}
spring.cloud.stream.bindings.signalConsumer.destination=${ACT_RB_SIGNAL_CONSUMER_DEST:signalEvent}
spring.cloud.stream.bindings.signalConsumer.group=${ACT_RB_SIGNAL_CONSUMER_GROUP:mySignalConsumerGroup}
spring.cloud.stream.bindings.signalConsumer.contentType=${ACT_RB_SIGNAL_CONSUMER_CONTENT_TYPE:application/json}
spring.jackson.serialization.fail-on-unwrapped-type-identifiers=${ACT_RB_JACKSON_FAIL_ON_UNWRAPPED_IDS:false}
spring.rabbitmq.host=${ACT_RABBITMQ_HOST:rabbitmq}

All of these properties can have their values set when the app is initialized via the ACT_* property substitutions.

 

Configure App to connect to Zipkin for Tracing

All the Runtime Bundles that are deployed use Zipkin, which is a distributed tracing system. It helps gather the timing data needed to troubleshoot latency problems in microservice architectures. It manages both the collection and lookup of this data.

 

Open up the sample-activiti7-runtime-bundle/src/main/resources/application.properties configuration file and add the following properties:

 

spring.zipkin.base-url=http://zipkin:80/
spring.zipkin.sender.type=web
spring.sleuth.enabled=true
spring.sleuth.sampler.probability=1.0

Miscellaneous App Configuration

There are a few more configuration properties that we need to supply. Open up the sample-activiti7-runtime-bundle/src/main/resources/application.properties configuration file and add the following properties:

 

spring.activiti.useStrongUuids=true
activiti.cloud.application.name=default-app

Deploying a Runtime Bundle

In this section we will have a look at one way of deploying Runtime Bundles.

 

Building a Docker Image for our Runtime Bundle

We have now got the runtime bundle completed with our custom process definition. In order to use it we need to have it available as a Docker Image, and to produce a Docker image we need a Dockerfile. Create one in the top directory of the project with the following content:

 

FROM openjdk:alpine
RUN apk --update add fontconfig ttf-dejavu
ENV PORT 8080
EXPOSE 8080
COPY target/*.jar /opt/app.jar
WORKDIR /opt
ENTRYPOINT exec java $JAVA_OPTS -jar app.jar

This will create a runtime bundle image based on Alpine Linux and OpenJDK, which is all we need to run it. The application will be exposed on port 8080.
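If you want to verify the Dockerfile on its own before wiring it into Maven, you can build and run the image manually (the image name and tag below mirror what we configure for the Maven plugin in a moment):

$ docker build -t activiti-runtime-bundle-custom:0.0.1-SNAPSHOT .

$ docker run --rm -p 8080:8080 activiti-runtime-bundle-custom:0.0.1-SNAPSHOT

As when running the JAR directly, the container will complain about RabbitMQ and Keycloak not being reachable, but it confirms that the image starts.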

 

Let’s have our Maven project build the image automatically for us with the help of the Fabric8 Maven plugin. Add it to the sample-activiti7-runtime-bundle/pom.xml as follows:

 

<build>
<plugins>
...


<!-- Build the custom Activiti Runtime Bundle Docker Image -->
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>0.26.1</version>
<configuration>
<images>
<image>
<alias>activiti-rb-custom</alias>
<name>activiti-runtime-bundle-custom:${project.version}</name>
<build>
<dockerFileDir>${project.basedir}</dockerFileDir>
</build>
</image>
</images>
</configuration>
<executions>
<execution>
<id>docker</id>
<phase>install</phase>
<goals>
<goal>build</goal>
</goals>
</execution>
<execution>
<id>registry</id>
<phase>deploy</phase>
<goals>
<goal>push</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>

 

Now, to get the Docker image built we just have to execute the following maven command:

 

sample-activiti7-runtime-bundle mbergljung$ mvn clean install -DskipTests

[INFO] Scanning for projects...

[INFO]

[INFO] ------------------------------------------------------------------------

[INFO] Building sample-activiti7-runtime-bundle 0.0.1-SNAPSHOT

[INFO] ------------------------------------------------------------------------

...

[INFO]

[INFO] --- docker-maven-plugin:0.26.1:build (docker) @ sample-activiti7-runtime-bundle ---

[INFO] DOCKER> Pulling from library/openjdk

8e3ba11ec2a2: Already exists

311ad0da4533: Already exists

df312c74ce16: Already exists

[INFO] DOCKER> Digest: sha256:1fd5a77d82536c88486e526da26ae79b6cd8a14006eb3da3a25eb8d2d682ccd6

[INFO] DOCKER> Status: Downloaded newer image for openjdk:alpine

[INFO] DOCKER> Pulled openjdk:alpine in 3 seconds

[INFO] Building tar: /Users/mbergljung/IDEAProjects/activiti7-sample-runtime-bundle/target/docker/activiti-runtime-bundle-custom/0.0.1-SNAPSHOT/tmp/docker-build.tar

[INFO] DOCKER> [activiti-runtime-bundle-custom:0.0.1-SNAPSHOT] "activiti-rb-custom": Created docker-build.tar in 962 milliseconds

[INFO] DOCKER> [activiti-runtime-bundle-custom:0.0.1-SNAPSHOT] "activiti-rb-custom": Built image sha256:a146b

[INFO] ------------------------------------------------------------------------

[INFO] BUILD SUCCESS

[INFO] ------------------------------------------------------------------------

[INFO] Total time: 50.889 s

[INFO] Finished at: 2018-11-21T10:47:17Z

[INFO] Final Memory: 77M/751M

[INFO] ------------------------------------------------------------------------

 

Make sure we have the image by listing local images:

 

$ docker image ls

REPOSITORY                                                       TAG IMAGE ID CREATED SIZE

activiti-runtime-bundle-custom                                   0.0.1-SNAPSHOT a146b0829d69 About a minute ago 193MB

...

 

We are now ready to include the runtime bundle in an Activiti 7 Deployment.

 

Deploying a custom Runtime Bundle Docker Image

What we want to do now is deploy our runtime bundle together with the necessary services and infrastructure. We can use the previous Full Example Helm Chart setup for this. We just need to add our Sample Runtime Bundle to this solution deployment. To do this we need to create a Helm package for our Runtime Bundle.

 

Creating a Helm Chart for the Sample Runtime Bundle

The easiest way to accomplish this is to copy the Helm Charts for the out-of-the-box Example Runtime Bundle into our Helm Repo project (the GitHub pages project we created earlier on, in my case https://github.com/gravitonian/helm-repo):

 

helm-repo mbergljung$ mkdir charts

helm-repo mbergljung$ cd charts/

charts mbergljung$ mkdir sample-runtime-bundle

charts mbergljung$ cd sample-runtime-bundle/

sample-runtime-bundle mbergljung$ cp -R ../../../activiti-cloud-charts/runtime-bundle/* .

sample-runtime-bundle mbergljung$ ls -l

total 32

-rw-r--r--  1 mbergljung  staff 184 22 Nov 15:27 Chart.yaml

-rw-r--r--  1 mbergljung  staff 18 22 Nov 15:27 README.md

-rw-r--r--  1 mbergljung  staff 251 22 Nov 15:27 requirements.yaml

drwxr-xr-x  7 mbergljung  staff 224 22 Nov 15:27 templates

-rw-r--r--  1 mbergljung  staff 2651 22 Nov 15:27 values.yaml

 

Now we can update these charts to match our Sample Runtime Bundle.

 

Open up the helm-repo/charts/sample-runtime-bundle/Chart.yaml file and update it to look as follows, changing the name and version (note: the name of the chart and the directory it is contained in must match):

 

apiVersion: v1
description: A Helm chart for Kubernetes
icon: https://raw.githubusercontent.com/jenkins-x/jenkins-x-platform/master/images/java.png
name: sample-runtime-bundle
version: 0.1.0

Next open up the helm-repo/charts/sample-runtime-bundle/requirements.yaml file and remove all its content; we are not going to be using PostgreSQL.

 

Then open up the helm-repo/charts/sample-runtime-bundle/values.yaml file and update it to look as follows:

 

# Default values for Maven projects.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
global:
rabbitmq:
host:
value: ""
username:
value: guest
password:
value: guest
keycloak:
url: ""
name: keycloak
service:
type: http
port: 80
## The list of hostnames to be covered with this ingress record.
## Most likely this will be just one host, but in the event more hosts are needed, this is an array
ingress:
hostName: ""
db:
uri: ""
name: activitipostgresql
deployPostgres: false
port: 5432

activitipostgresql:
postgresPassword: activiti

## Allows the specification of additional environment variables
extraEnv: |
# - name: ACT_KEYCLOAK_URL
# valueFrom:
# configMapKeyRef:
# name: {{ .Release.Name }}-keycloak-http
# key: expose-keycloak-service-key

javaOpts:
xmx: 768m
xms: 512m
other: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90
image:
repository: activiti-runtime-bundle-custom
tag: 0.0.1-SNAPSHOT
pullPolicy: IfNotPresent
service:
name: sample-activiti7-rb-app
type: ClusterIP
externalPort: 80
internalPort: 8080
annotations:
fabric8.io/expose: "false"
resources:
limits:
memory: 768Mi
requests:
cpu: 400m
memory: 768Mi
probePath: /actuator/health
livenessProbe:
initialDelaySeconds: 140
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 4
readinessProbe:
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 3
terminationGracePeriodSeconds: 20


ingress:
## Set to true to enable ingress record generation
enabled: false

path: /rb-sample-app

## Set this to true in order to enable TLS on the ingress record
tls: false

## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
tlsSecret: myTlsSecret

## Ingress annotations done as key:value pairs
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-Forwarded-Proto, X-Forwarded-Port, X-Forwarded-Prefix,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,X-CSRF-Token,Access-Control-Request-Headers,Access-Control-Request-Method,accept,Keep-Alive"
nginx.ingress.kubernetes.io/x-forwarded-prefix: "true"

 

The following values have been updated:

 

  • image.repository (new value: activiti-runtime-bundle-custom): the name of the custom runtime bundle Docker Image in our local Docker repository. You can do $ docker image ls to view it.
  • image.tag (new value: 0.0.1-SNAPSHOT): the version of the custom runtime bundle Docker Image in our local Docker repository. You can do $ docker image ls to view it.
  • service.name (new value: sample-activiti7-rb-app): the name of the service representing the new runtime bundle.
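Before packaging the chart you can optionally render the templates locally to confirm that these values are picked up; for example, a quick check of the resulting image reference:

$ helm template charts/sample-runtime-bundle | grep image: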

 

Package and Publish the Helm Chart for the Sample Runtime Bundle

We are now ready to create a Helm package for our Sample Runtime Bundle Helm Chart. Execute the following commands from the helm-repo directory:

 

helm-repo mbergljung$ helm lint charts/sample-runtime-bundle

==> Linting .

Lint OK

 

1 chart(s) linted, no failures

 

helm-repo mbergljung$ helm package charts/sample-runtime-bundle

Successfully packaged chart and saved it to: /Users/mbergljung/DeployProjects/helm-repo/sample-runtime-bundle-0.1.0.tgz

 

Now, let’s upload all the chart files including the package to our Helm Repository (i.e. https://github.com/gravitonian/helm-repo). First generate the index file:

 

helm-repo mbergljung$ helm repo index --url https://gravitonian.github.io/helm-repo .

 

This command generates the index.yaml file. Let’s take a look at it:

 

helm-repo mbergljung$ more index.yaml

apiVersion: v1

entries:

 sample-runtime-bundle:

 - apiVersion: v1

   created: 2018-11-23T13:22:51.768039Z

   description: A Helm chart for Kubernetes

   digest: 8215e08e933dda7372866796027b2aa80f28c0487ddd902d2ec9e27036092905

   icon: https://raw.githubusercontent.com/jenkins-x/jenkins-x-platform/master/images/java.png

   name: sample-runtime-bundle

   urls:

   - https://gravitonian.github.io/helm-repo/sample-runtime-bundle-0.1.0.tgz

   version: 0.1.0

generated: 2018-11-23T13:22:51.764624Z

 

The helm repo GitHub project should now look something like this:

 

$ tree

.
├── charts
│   └── sample-runtime-bundle
│       ├── Chart.yaml
│       ├── README.md
│       ├── requirements.yaml
│       ├── templates
│       │ ├── NOTES.txt
│       │ ├── _helpers.tpl
│       │ ├── deployment.yaml
│       │ ├── ingress.yaml
│       │ └── service.yaml
│       └── values.yaml
├── index.yaml
└── sample-runtime-bundle-0.1.0.tgz

 

Now commit & push the changes:

 

helm-repo mbergljung$ git add .

helm-repo mbergljung$ git commit -m "Sample Runtime Bundle Chart"

[gh-pages 17585d6] Sample Runtime Bundle Chart

...

helm-repo mbergljung$ git push

Counting objects: 18, done.

Delta compression using up to 8 threads.

Compressing objects: 100% (15/15), done.

Writing objects: 100% (18/18), 8.50 KiB | 4.25 MiB/s, done.

Total 18 (delta 1), reused 0 (delta 0)

remote: Resolving deltas: 100% (1/1), done.

To https://github.com/gravitonian/helm-repo.git

  9d76c1e..17585d6  gh-pages -> gh-pages

 

Add Our Helm Repository to the local Helm installation

We can now add this Helm repo to our Helm installation as follows:

 

helm-repo mbergljung$ helm repo add gravitonian-helm-repo https://gravitonian.github.io/helm-repo

"gravitonian-helm-repo" has been added to your repositories

 

Update the repo to make sure we have all the latest charts locally:

 

helm-repo mbergljung$ helm repo update

Hang tight while we grab the latest from your chart repositories...

...Skip local chart repository

...Successfully got an update from the "alfresco-incubator" chart repository

...Successfully got an update from the "gravitonian-helm-repo" chart repository

...Successfully got an update from the "activiti-cloud-charts" chart repository

...Successfully got an update from the "stable" chart repository

Update Complete. ⎈ Happy Helming!⎈

 

helm-repo mbergljung$ helm repo list

NAME                  URL                                              

stable                https://kubernetes-charts.storage.googleapis.com

local                 http://127.0.0.1:8879/charts                     

alfresco-incubator    http://kubernetes-charts.alfresco.com/incubator  

activiti-cloud-charts https://activiti.github.io/activiti-cloud-charts/

gravitonian-helm-repo https://gravitonian.github.io/helm-repo   

 

Update the Full Example Helm Charts to deploy Sample Runtime Bundle

Add a dependency on the Sample Runtime Bundle in the activiti-cloud-charts/activiti-cloud-full-example/requirements.yaml file:

dependencies:
- name: infrastructure
  repository: https://activiti.github.io/activiti-cloud-charts/
  version: 0.4.0
- name: application
  repository: https://activiti.github.io/activiti-cloud-charts/
  version: 0.4.0
- name: activiti-cloud-modeling
  repository: https://activiti.github.io/activiti-cloud-charts/
  version: 0.4.0
  condition: activiti-cloud-modeling.enabled,modeling.enabled
- name: sample-runtime-bundle
  repository: https://gravitonian.github.io/helm-repo/
  version: 0.1.0

For the new dependency on the Sample Runtime Bundle to be picked up we need to update the dependencies as follows:

 

activiti-cloud-full-example mbergljung$ helm dependency update

Hang tight while we grab the latest from your chart repositories...

...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):

Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: getsockopt: connection refused

...Successfully got an update from the "gravitonian-helm-repo" chart repository

...Successfully got an update from the "alfresco-incubator" chart repository

...Successfully got an update from the "activiti-cloud-charts" chart repository

...Successfully got an update from the "stable" chart repository

Update Complete. ⎈Happy Helming!⎈

Saving 4 charts

Downloading infrastructure from repo https://activiti.github.io/activiti-cloud-charts/

Downloading application from repo https://activiti.github.io/activiti-cloud-charts/

Downloading activiti-cloud-modeling from repo https://activiti.github.io/activiti-cloud-charts/

Downloading sample-runtime-bundle from repo https://gravitonian.github.io/helm-repo

Deleting outdated charts

 

You should see a charts directory with the downloaded dependencies:

 

activiti-cloud-full-example mbergljung$ tree

.

├── Chart.yaml

├── README.md

├── charts

│   ├── activiti-cloud-modeling-0.4.0.tgz

│   ├── application-0.4.0.tgz

│   ├── infrastructure-0.4.0.tgz

│   └── sample-runtime-bundle-0.1.0.tgz

├── helm-service-account-role.yaml

├── requirements.lock

├── requirements.yaml

├── upgrade-debug.txt

└── values.yaml

 

Deploy the Sample Runtime Bundle via the Full Example Helm Charts

We are now ready to upgrade our running Full Example solution (assuming it is still running from before) with the new sample runtime bundle. First check what name the Full Example deployment has:

 

helm ls

NAME           REVISION UPDATED                  STATUS   CHART                             NAMESPACE

cloying-possum 1        Tue Nov 20 06:34:44 2018 DEPLOYED activiti-cloud-full-example-0.4.0 activiti7

intent-deer    1        Mon Nov 19 07:45:10 2018 DEPLOYED nginx-ingress-0.19.2              activiti7

 

We can see that the Full Example deployment has the name cloying-possum. Execute the following Helm command to upgrade it:

 

activiti-cloud-charts mbergljung$ cd activiti-cloud-full-example/

activiti-cloud-full-example mbergljung$ helm upgrade cloying-possum -f values.yaml . --namespace=activiti7

 

Note here that we cannot refer to the Full Example helm chart package in the Activiti Helm Repo like we did before when installing with activiti-cloud-charts/activiti-cloud-full-example. This is because we are changing more than just the values.yaml. We are also changing the requirements.yaml (i.e. the dependencies). The dependencies would still be the same in the remote activiti-cloud-charts/activiti-cloud-full-example helm package, so we need to pick up the chart info from the local directory with just a dot (.).
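In other words, the difference is roughly this (the first command is an approximation of how the Full Example was installed in the earlier article; the second is the upgrade we run here):

$ helm install activiti-cloud-charts/activiti-cloud-full-example -f values.yaml --namespace=activiti7

$ helm upgrade cloying-possum -f values.yaml . --namespace=activiti7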

 

Have a look in the Kubernetes Dashboard and make sure the new runtime bundle is deployed properly:
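You can also check from the command line; something like this should show the new runtime bundle pod starting up (the grep pattern is a guess based on the chart name, so check the actual pod name in the output):

$ kubectl get pods --namespace activiti7 | grep sample-runtime-bundle

$ kubectl logs --namespace activiti7 <pod-name-from-the-previous-command>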

 

 

Before moving on let’s make sure we can call the ReST API in the new Sample Runtime Bundle. In Postman, copy the rb-my-app collection so we can update the URLs in it. Then update the URL for the getProcessDefinitions call from {{gatewayUrl}}/rb-my-app/v1/process-definitions to {{gatewayUrl}}/sample-activiti7-rb-app/v1/process-definitions so the call targets the new Sample Runtime Bundle.

 

Now make a getKeycloakToken call to get an auth token, and then make a getProcessDefinitions call in the updated collection:
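From the command line the same check looks roughly like this, where TOKEN holds the access_token obtained from Keycloak (see the earlier token sketch) and GATEWAY_URL corresponds to Postman’s {{gatewayUrl}} variable:

$ curl -H "Authorization: Bearer $TOKEN" "$GATEWAY_URL/sample-activiti7-rb-app/v1/process-definitions"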

 

 

Everything seems to work fine. However, before we can run the new process definition we need to implement the custom cloud connector for Service Task 1.

 

Building a Cloud Connector

The process definition that we have deployed in our custom Runtime Bundle is dependent on a Cloud Connector for Service Task 1. Implementing a Cloud Connector can be done with a Spring Boot project in the same way we did for the runtime bundle. We have an example project for this here, so it should not be too difficult to replicate it and build a custom Cloud Connector.

 

A Cloud Connector contains a Java implementation of the business logic that should be executed when a Service Task is processed by the Activiti 7 process engine. It lives in its own container next to the runtime bundle.

 

The following picture illustrates this:

 

 

The implementation property value (i.e. serviceTask1Impl) on the Service Task 1 definition will be used as a Spring Cloud Stream channel destination name (on the bound middleware RabbitMQ). So we have to make sure the Cloud Connector creates a consumer bound to this name with the spring.cloud.stream.bindings.<channel>.destination property value.

 

Generating a Spring Boot 2 App

It’s really easy to get going with a Spring Boot application. Just head over to https://start.spring.io/ and fill in the data for the app as follows:

 

 

Make sure to use Spring Boot version 2.0.x with Activiti 7 Beta 1 - 3; Beta 4 should be aligned with Spring Boot 2.1.x.

 

You don’t have to use the same Group (org.activiti.training) and Artifact (sample-activiti7-cloud-connector) names as I do; just use whatever you like. However, if you copy code from this article it might be easier if you use the same package names (i.e. the same group). Then click the Generate Project button. The finished Spring Boot 2 Maven project will automatically download as a ZIP. Unpack it somewhere.

 

Test the standard Spring Boot App

Let’s make sure that the Spring Boot application works before we continue with the Activiti stuff. This involves two steps. First build the app JAR and then run the app JAR.

 

Building the application JAR:

 

$ cd sample-activiti7-cloud-connector/

sample-activiti7-cloud-connector mbergljung$ mvn package

[INFO] Scanning for projects...

[INFO]

[INFO] ------------------------------------------------------------------------

[INFO] Building sample-activiti7-cloud-connector 0.0.1-SNAPSHOT

[INFO] ------------------------------------------------------------------------

[INFO]

...

[INFO] --- maven-jar-plugin:3.1.0:jar (default-jar) @ sample-activiti7-cloud-connector---

[INFO] Building jar: /Users/mbergljung/IDEAProjects/sample-activiti7-cloud-connector/target/sample-activiti7-cloud-connector-0.0.1-SNAPSHOT.jar

[INFO]

[INFO] --- spring-boot-maven-plugin:2.1.0.RELEASE:repackage (repackage) @ sample-activiti7-cloud-connector---

...

[INFO] ------------------------------------------------------------------------

[INFO] BUILD SUCCESS

[INFO] ------------------------------------------------------------------------

 

Running the application JAR:

 

sample-activiti7-cloud-connector mbergljung$ java -jar target/sample-activiti7-cloud-connector-0.0.1-SNAPSHOT.jar

 

 .  ____          _ __ _ _

/\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \

( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \

\\/  ___)| |_)| | | | | || (_| |  ) ) ) )

 ' |____| .__|_| |_|_| |_\__, | / / / /

=========|_|==============|___/=/_/_/_/

:: Spring Boot ::        (v2.1.0.RELEASE)

 

2018-11-28 10:42:16.764  INFO 33357 --- [  main] SampleActiviti7CloudConnectorApplication : Starting SampleActiviti7CloudConnectorApplication v0.0.1-SNAPSHOT on MBP512-MBERGLJUNG-0917 with PID 33357 (/Users/mbergljung/IDEAProjects/sample-activiti7-cloud-connector/target/sample-activiti7-cloud-connector-0.0.1-SNAPSHOT.jar started by mbergljung in /Users/mbergljung/IDEAProjects/sample-activiti7-cloud-connector)

2018-11-28 10:42:16.768  INFO 33357 --- [  main] SampleActiviti7CloudConnectorApplication : No active profile set, falling back to default profiles: default

2018-11-28 10:42:17.298  INFO 33357 --- [  main] SampleActiviti7CloudConnectorApplication : Started SampleActiviti7CloudConnectorApplication in 0.866 seconds (JVM running for 1.28)

 

The application does not contain much so it will exit by itself.

 

Adding Activiti 7 Cloud Connector Dependencies to the App

The Spring Boot app has most of the dependencies that we need, except for the Activiti 7 Cloud Connector dependencies. So let’s add them. We can use a BOM (Bill-of-Materials) dependency that will bring in all the needed Activiti 7 dependency management configurations, including the correct versions of all dependencies.

 

Add the following to the pom.xml:

<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.activiti.cloud.dependencies</groupId>
<artifactId>activiti-cloud-dependencies</artifactId>
<version>7.0.0.Beta3</version>
<scope>import</scope>
<type>pom</type>
</dependency>
</dependencies>
</dependencyManagement>

This will import all the dependency management configurations for Activiti 7 Beta 3. Now we just need to add an Activiti 7 Cloud Connector dependency that supports building and running the Service Task Implementation etc. Add the following dependency to the pom.xml:

<dependency>
<groupId>org.activiti.cloud.connector</groupId>
<artifactId>activiti-cloud-starter-connector</artifactId>
</dependency>

This will bring in all the Activiti and Spring dependencies needed to run the Activiti 7 Cloud Connector embedded in a Spring Boot application.

 

Note: you might also need the following dependency if you have ReST endpoints in your Cloud Connector and you want them secured the same way as in the Runtime Bundle:

<dependency>
<groupId>org.activiti.cloud.common</groupId>
<artifactId>activiti-cloud-services-common-security-keycloak</artifactId>
</dependency>

Make the App a Cloud Connector

We can now use a so-called Spring Boot Starter to turn this app into an Activiti 7 Cloud Connector. Add the @EnableActivitiCloudConnector annotation to the sample-activiti7-cloud-connector/src/main/java/org/activiti/training/sampleactiviti7cloudconnector/SampleActiviti7CloudConnectorApplication.java class as follows:

package org.activiti.training.sampleactiviti7cloudconnector;

import org.activiti.cloud.connectors.starter.configuration.EnableActivitiCloudConnector;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;

@SpringBootApplication
@EnableActivitiCloudConnector
@ComponentScan({"org.activiti.cloud.connectors.starter", "org.activiti.training.sampleactiviti7cloudconnector", "org.activiti.cloud.services.common.security"})
public class SampleActiviti7CloudConnectorApplication {

    public static void main(String[] args) {
        SpringApplication.run(SampleActiviti7CloudConnectorApplication.class, args);
    }
}

At the same time we also add several Java package paths that should be scanned for Spring beans; we do this with the @ComponentScan annotation. If you created your project with a different Java package than mine, then you have to change the org.activiti.training.sampleactiviti7cloudconnector path to match your project environment.

 

Adding the App Name property

To add the Spring Application Name property open up the sample-activiti7-cloud-connector/src/main/resources/application.properties file and add the following property:

 

spring.application.name=sample-activiti7-cloud-connector

 

The application name identifies this application in a microservice environment and is used when registering with a service registry such as Netflix Eureka service registry. It is also used to look up <applicationName>[-<profile>].[properties|yml] in Spring Cloud Config Server as well as configuration in other service registries like Hashicorp’s Consul or Apache Zookeeper.

 

Configure App to use Keycloak for authentication

All the Cloud Connectors that are deployed use Keycloak for user authentication, so we need to tell our connector where it can find the Keycloak server.

 

Open up the sample-activiti7-cloud-connector/src/main/resources/application.properties configuration file and add the following properties:

 

keycloak.auth-server-url=${ACT_KEYCLOAK_URL:http://activiti-keycloak:8180/auth}
keycloak.realm=${ACT_KEYCLOAK_REALM:activiti}
keycloak.resource=${ACT_KEYCLOAK_RESOURCE:activiti}
keycloak.ssl-required=${ACT_KEYCLOAK_SSL_REQUIRED:none}
keycloak.public-client=${ACT_KEYCLOAK_CLIENT:true}
keycloak.security-constraints[0].authRoles[0]=${ACT_KEYCLOAK_USER_ROLE:ACTIVITI_USER}
keycloak.security-constraints[0].securityCollections[0].patterns[0]=${ACT_KEYCLOAK_PATTERNS:/v1/*}
keycloak.security-constraints[1].authRoles[0]=${ACT_KEYCLOAK_ADMIN_ROLE:ACTIVITI_ADMIN}
keycloak.security-constraints[1].securityCollections[0].patterns[0]=/admin/*
keycloak.principal-attribute=${ACT_KEYCLOAK_PRINCIPAL_ATTRIBUTE:preferred-username}
activiti.keycloak.admin-client-app=${ACT_KEYCLOAK_CLIENT_APP:admin-cli}
activiti.keycloak.client-user=${ACT_KEYCLOAK_CLIENT_USER:client}
activiti.keycloak.client-password=${ACT_KEYCLOAK_CLIENT_PASSWORD:client}

Almost all of these properties can have their values set when the app is initialized via the ACT_* property substitutions.

 

Configure App to Produce and Consume Async Messages

All the Cloud Connectors that are deployed use Spring Cloud Stream for producing and consuming asynchronous messages. These messages can come from, for example, RabbitMQ, in which case the Cloud Connector uses Spring AMQP to talk to the broker. We need to tell our Cloud Connector what the different message destinations are.

 

Open up the sample-activiti7-cloud-connector/src/main/resources/application.properties configuration file and add the following properties:

 

spring.cloud.stream.bindings.sampleConnectorConsumer.destination=serviceTask1Impl
spring.cloud.stream.bindings.sampleConnectorConsumer.contentType=application/json
spring.cloud.stream.bindings.sampleConnectorConsumer.group=${spring.application.name}    
spring.rabbitmq.host=${ACT_RABBITMQ_HOST:rabbitmq}

The properties have the following meaning:

 

  • spring.cloud.stream.bindings.sampleConnectorConsumer.destination (value: serviceTask1Impl): the target destination of a channel on the bound middleware (i.e. RabbitMQ). This must match the Service Task definition implementation property value.
  • spring.cloud.stream.bindings.sampleConnectorConsumer.contentType (value: application/json): the content type of the channel.
  • spring.cloud.stream.bindings.sampleConnectorConsumer.group (value: ${spring.application.name}): the consumer group of the channel. Applies only to inbound bindings. All groups that subscribe to a given destination receive a copy of published data, but only one member of each group receives a given message from that destination.
  • spring.rabbitmq.host (value: ${ACT_RABBITMQ_HOST:rabbitmq}): the hostname where the RabbitMQ message broker is running. Usually initialized via ACT_RABBITMQ_HOST in the Helm charts.

 

The channel name sampleConnectorConsumer is just internal Spring Cloud Streams wiring.

 

Miscellaneous App Configuration

There are a few more configuration properties that we need to supply. Open up the sample-activiti7-cloud-connector/src/main/resources/application.properties configuration file and add the following properties:

activiti.cloud.application.name=default-app
spring.main.allow-bean-definition-overriding=true

 

Implementing the Input Channel Interface

Our Cloud Connector is not going to do much unless we hook it up to the Spring Cloud Stream channel where the Service Task 1 events will appear. Create an interface called SampleConnectorChannel in the org.activiti.training.sampleactiviti7cloudconnector package:

 

package org.activiti.training.sampleactiviti7cloudconnector;

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.SubscribableChannel;

public interface SampleConnectorChannel {

    String SAMPLE_CONNECTOR_CONSUMER = "sampleConnectorConsumer";

    @Input(SAMPLE_CONNECTOR_CONSUMER)
    SubscribableChannel sampleConnectorConsumer();
}

The SAMPLE_CONNECTOR_CONSUMER value must match the channel name as specified in spring.cloud.stream.bindings.<channel name>.<property> configuration. We can now use this sample consumer in the connector class implementation.

 

Implementing the Connector Class

So, finally, on to implementing the Service Task 1 business logic. Create a class called SampleConnector in the org.activiti.training.sampleactiviti7cloudconnector package:

package org.activiti.training.sampleactiviti7cloudconnector;

import org.activiti.cloud.api.process.model.IntegrationRequest;
import org.activiti.cloud.api.process.model.IntegrationResult;
import org.activiti.cloud.connectors.starter.channels.IntegrationResultSender;
import org.activiti.cloud.connectors.starter.configuration.ConnectorProperties;
import org.activiti.cloud.connectors.starter.model.IntegrationResultBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

import java.util.HashMap;
import java.util.Map;

@Component
@EnableBinding(SampleConnectorChannel.class)
public class SampleConnector {

    @Value("${spring.application.name}")
    private String appName;

    @Autowired
    private ConnectorProperties connectorProperties;

    private final IntegrationResultSender integrationResultSender;

    public SampleConnector(IntegrationResultSender integrationResultSender) {
        this.integrationResultSender = integrationResultSender;
    }

    @StreamListener(value = SampleConnectorChannel.SAMPLE_CONNECTOR_CONSUMER)
    public void execute(IntegrationRequest event) throws InterruptedException {

        System.out.println("Called Cloud Connector " + appName);

        String var1 = SampleConnector.class.getSimpleName() + " was called for instance " +
                event.getIntegrationContext().getProcessInstanceId();

        // Implement your business logic here

        Map<String, Object> results = new HashMap<>();
        results.put("var1", var1);
        Message<IntegrationResult> message = IntegrationResultBuilder.resultFor(event, connectorProperties)
                .withOutboundVariables(results)
                .buildMessage();
        integrationResultSender.send(message);
    }
}

 

 

The class starts off with two annotations, one for making it a Spring bean (@Component) and the other one for binding it to the SampleConnectorChannel (@EnableBinding). Then we wire in the ConnectorProperties bean, which has info about the connector such as service type, service version, application name, and application version. Then the IntegrationResultSender bean is injected to be able to respond with data to the initiating runtime bundle, basically sending events back to the runtime bundle.

 

Then we implement the Service Task business logic in a method with the @StreamListener annotation. We can call the method whatever we like; in this case it is called execute. The implementation does not actually do much in this case; it just sends back a message with a variable called var1 to the runtime bundle.

 

So, the Sample Cloud Connector’s association with the Sample Runtime Bundle is just the stream it is listening to, which needs to match the implementation attribute of the Service Task in the process definition XML in the runtime bundle. There is also another stream that is used for sending the result back to the runtime bundle, but we use the default value from the Spring Boot starter, so nobody has to worry about that unless they hit on a reason to change it.

 

Build and run the Sample Cloud Connector

We are now ready to build and run the Cloud Connector, just to make sure everything compiles and runs properly before we create a Docker image and deploy. Build as follows:

 

sample-activiti7-cloud-connector mbergljung$ mvn package -DskipTests

 

I’m turning off tests as there is no RabbitMQ running on the correct host to do the tests.

Then run the Cloud Connector as follows:

 

sample-activiti7-cloud-connector mbergljung$ java -jar target/sample-activiti7-cloud-connector-0.0.1-SNAPSHOT.jar

2018-11-29 14:36:26.833  INFO 40967 --- [  main] o.s.c.support.GenericApplicationContext  : Refreshing org.springframework.context.support.GenericApplicationContext@345f69f3: startup date [Thu Nov 29 14:36:26 GMT 2018]; root of context hierarchy

2018-11-29 14:36:27.117  INFO 40967 --- [  main] o.s.a.r.c.CachingConnectionFactory       : Attempting to connect to: [rabbitmq:5672]

 

It looks like everything is starting as expected, so on to deploying it.
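As an aside, if you want the connector to actually reach a broker while testing locally, instead of just retrying against the unresolvable rabbitmq host, a rough sketch is to start RabbitMQ in Docker and use the ACT_RABBITMQ_HOST override we configured earlier:

$ docker run -d --name rabbitmq -p 5672:5672 rabbitmq:3-management

$ ACT_RABBITMQ_HOST=localhost java -jar target/sample-activiti7-cloud-connector-0.0.1-SNAPSHOT.jar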

 

Deploying a Cloud Connector

In this section we will have a look at one way of deploying Cloud Connectors.

 

Building a Docker Image for our Cloud Connector

We have now got the cloud connector completed with our custom service task implementation. In order to use it we need to have it available as a Docker Image, and to produce a Docker image we need a Dockerfile. Create one in the top directory of the project with the following content:

 

FROM openjdk:alpine
RUN apk --update add fontconfig ttf-dejavu
ENV PORT 8080
EXPOSE 8080
COPY target/*.jar /opt/app.jar
WORKDIR /opt
ENTRYPOINT exec java $JAVA_OPTS -jar app.jar

This will create a cloud connector image based on Alpine Linux and OpenJDK, which is all we need to run it. The application will be exposed on port 8080.

 

Let’s have our Maven project build the image automatically for us with the help of the Fabric8 Maven plugin. Add it to the sample-activiti7-cloud-connector/pom.xml as follows:

 

 

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
...
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>

      <!-- Build the custom Activiti Cloud Connector Docker Image -->
      <plugin>
        <groupId>io.fabric8</groupId>
        <artifactId>docker-maven-plugin</artifactId>
        <version>0.26.1</version>
        <configuration>
          <images>
            <image>
              <alias>activiti-cc-custom</alias>
              <name>activiti-cloud-connector-custom:${project.version}</name>
              <build>
                <dockerFileDir>${project.basedir}</dockerFileDir>
              </build>
            </image>
          </images>
        </configuration>
        <executions>
          <execution>
            <id>docker</id>
            <phase>install</phase>
            <goals>
              <goal>build</goal>
            </goals>
          </execution>
          <execution>
            <id>registry</id>
            <phase>deploy</phase>
            <goals>
              <goal>push</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

Now, to get the Docker image built we just have to execute the following Maven command:

 

sample-activiti7-cloud-connector mbergljung$ mvn clean install -DskipTests

[INFO] Scanning for projects...

[INFO]

[INFO] ------------------------------------------------------------------------

[INFO] Building sample-activiti7-cloud-connector 0.0.1-SNAPSHOT

[INFO] ------------------------------------------------------------------------

[INFO]

[INFO] --- maven-clean-plugin:3.0.0:clean (default-clean) @ sample-activiti7-cloud-connector---

[INFO] Deleting /Users/mbergljung/IDEAProjects/sample-activiti7-cloud-connector/target

[INFO]

[INFO] --- maven-resources-plugin:3.0.2:resources (default-resources) @ sample-activiti7-cloud-connector---

[INFO] Using 'UTF-8' encoding to copy filtered resources.

[INFO] Copying 1 resource

[INFO] Copying 0 resource

[INFO]

[INFO] --- maven-compiler-plugin:3.7.0:compile (default-compile) @ sample-activiti7-cloud-connector---

[INFO] Changes detected - recompiling the module!

[INFO] Compiling 3 source files to /Users/mbergljung/IDEAProjects/sample-activiti7-cloud-connector/target/classes

[INFO]

[INFO] --- maven-resources-plugin:3.0.2:testResources (default-testResources) @ sample-activiti7-cloud-connector---

[INFO] Using 'UTF-8' encoding to copy filtered resources.

[INFO] skip non existing resourceDirectory /Users/mbergljung/IDEAProjects/sample-activiti7-cloud-connector/src/test/resources

[INFO]

[INFO] --- maven-compiler-plugin:3.7.0:testCompile (default-testCompile) @ sample-activiti7-cloud-connector---

[INFO] Changes detected - recompiling the module!

[INFO] Compiling 1 source file to /Users/mbergljung/IDEAProjects/sample-activiti7-cloud-connector/target/test-classes

[INFO]

[INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ sample-activiti7-cloud-connector---

[INFO] Tests are skipped.

[INFO]

[INFO] --- maven-jar-plugin:3.0.2:jar (default-jar) @ sample-activiti7-cloud-connector---

[INFO] Building jar: /Users/mbergljung/IDEAProjects/sample-activiti7-cloud-connector/target/sample-activiti7-cloud-connector-0.0.1-SNAPSHOT.jar

[INFO]

[INFO] --- spring-boot-maven-plugin:2.0.6.RELEASE:repackage (default) @ sample-activiti7-cloud-connector---

[INFO]

[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ sample-activiti7-cloud-connector---

[INFO] Installing /Users/mbergljung/IDEAProjects/sample-activiti7-cloud-connector/target/sample-activiti7-cloud-connector-0.0.1-SNAPSHOT.jar to /Users/mbergljung/.m2/repository/org/activiti/training/sample-activiti7-cloud-connector/0.0.1-SNAPSHOT/sample-activiti7-cloud-connector-0.0.1-SNAPSHOT.jar

[INFO] Installing /Users/mbergljung/IDEAProjects/sample-activiti7-cloud-connector/pom.xml to /Users/mbergljung/.m2/repository/org/activiti/training/sample-activiti7-cloud-connector/0.0.1-SNAPSHOT/sample-activiti7-cloud-connector-0.0.1-SNAPSHOT.pom

[INFO]

[INFO] --- docker-maven-plugin:0.26.1:build (docker) @ sample-activiti7-cloud-connector---

[INFO] Building tar: /Users/mbergljung/IDEAProjects/sample-activiti7-cloud-connector/target/docker/activiti-cloud-connector-custom/0.0.1-SNAPSHOT/tmp/docker-build.tar

[INFO] DOCKER> [activiti-cloud-connector-custom:0.0.1-SNAPSHOT] "activiti-cc-custom": Created docker-build.tar in 434 milliseconds

[INFO] DOCKER> [activiti-cloud-connector-custom:0.0.1-SNAPSHOT] "activiti-cc-custom": Built image sha256:6b999

[INFO] ------------------------------------------------------------------------

[INFO] BUILD SUCCESS

[INFO] ------------------------------------------------------------------------

[INFO] Total time: 9.614 s

[INFO] Finished at: 2018-11-30T09:13:36Z

[INFO] Final Memory: 56M/803M

[INFO] ------------------------------------------------------------------------

 

Make sure the image was built by listing the local images:

 

$ docker image ls

REPOSITORY                                                       TAG IMAGE ID CREATED SIZE

activiti-cloud-connector-custom                                  0.0.1-SNAPSHOT 6b9999eed20b      51 seconds ago 153MB

activiti-runtime-bundle-custom                                   0.0.1-SNAPSHOT a146b0829d69 8 days ago 193MB

...
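
Optionally, you can also smoke-test the image locally with a plain docker run; just like the jar run earlier it will start up and then try to reach a broker named rabbitmq, which is expected at this point:

$ docker run --rm -p 8080:8080 activiti-cloud-connector-custom:0.0.1-SNAPSHOT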

We are now ready to include the cloud connector in an Activiti 7 Deployment.

 

Deploying a custom Cloud Connector Docker Image

What we want to do now is deploy our cloud connector together with the necessary services and infrastructure. We can use the previous Full Example Helm Chart setup for this. We just need to add our Sample Cloud Connector to this solution deployment. To do this we need to create a Helm package for our cloud connector.

 

Creating a Helm Chart for the Sample Cloud Connector

The easiest way to accomplish this is to copy the Helm Charts for the out-of-the-box Example Cloud Connector into our Helm Repo project (the GitHub pages project we created earlier on, in my case https://github.com/gravitonian/helm-repo):

 

helm-repo mbergljung$ cd charts/

charts mbergljung$ mkdir sample-cloud-connector

charts mbergljung$ cd sample-cloud-connector/

sample-cloud-connector mbergljung$ cp -R ../../../activiti-cloud-charts/activiti-cloud-connector/* .

sample-cloud-connector mbergljung$ ls -l

total 24

-rw-r--r--  1 mbergljung  staff 194 30 Nov 09:18 Chart.yaml

-rw-r--r--  1 mbergljung  staff 18 30 Nov 09:18 README.md

drwxr-xr-x  7 mbergljung  staff 224 30 Nov 09:18 templates

-rw-r--r--  1 mbergljung  staff 2531 30 Nov 09:18 values.yaml

 

Now we can update these charts to match our Sample Cloud Connector.

 

Open up the helm-repo/charts/sample-cloud-connector/Chart.yaml file and update it to look as follows, changing the name and version (note: the name of the chart and the directory that contains it must match):

 

apiVersion: v1
description: A Helm chart for Kubernetes
icon: https://raw.githubusercontent.com/jenkins-x/jenkins-x-platform/master/images/java.png
name: sample-cloud-connector
version: 0.1.0

Then open up the helm-repo/charts/sample-cloud-connector/values.yaml file and update it to look as follows:

# Default values for Maven projects.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
global:
  rabbitmq:
    host:
      value: ""
    username:
      value: guest
    password:
      value: guest
  keycloak:
    url: ""
    name: keycloak
    service:
      type: http
      port: 80
  ## The list of hostnames to be covered with this ingress record.
  ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
  ingress:
    hostName: ""

## Allows the specification of additional environment variables
extraEnv: |
#  - name: ACT_KEYCLOAK_URL
#    valueFrom:
#      configMapKeyRef:
#        name: {{ .Release.Name }}-keycloak-http
#        key: expose-keycloak-service-key

javaOpts:
  xmx: 768m
  xms: 512m
  other: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90
image:
  repository: activiti-cloud-connector-custom
  tag: 0.0.1-SNAPSHOT
  pullPolicy: IfNotPresent
service:
  name: sample-activiti7-cc-app
  type: ClusterIP
  externalPort: 80
  internalPort: 8080
  annotations:
    fabric8.io/expose: "false"
resources:
  limits:
    memory: 768Mi
  requests:
    cpu: 400m
    memory: 768Mi
probePath: /actuator/health
livenessProbe:
  initialDelaySeconds: 140
  periodSeconds: 15
  successThreshold: 1
  timeoutSeconds: 4
readinessProbe:
  periodSeconds: 15
  successThreshold: 1
  timeoutSeconds: 3
terminationGracePeriodSeconds: 20

ingress:
  ## Set to true to enable ingress record generation
  enabled: false

  path: /sample-cloud-connector

  ## Set this to true in order to enable TLS on the ingress record
  tls: false

  ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
  tlsSecret: myTlsSecret

  ## Ingress annotations done as key:value pairs
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-Forwarded-Proto, X-Forwarded-Port, X-Forwarded-Prefix,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,X-CSRF-Token,Access-Control-Request-Headers,Access-Control-Request-Method,accept,Keep-Alive"
    nginx.ingress.kubernetes.io/x-forwarded-prefix: "true"

 

 

The following values have been updated:

 

  • image.repository = activiti-cloud-connector-custom - the name of the custom cloud connector Docker image in our local Docker repository. You can do $ docker image ls to view it.
  • image.tag = 0.0.1-SNAPSHOT - the version (tag) of the custom cloud connector Docker image in our local Docker repository. You can do $ docker image ls to view it.
  • service.name = sample-activiti7-cc-app - the name of the service representing the new cloud connector.

 

Package and Publish the Helm Chart for the Sample Cloud Connector

We are now ready to create a Helm package for our Sample Cloud Connector Helm Chart. Execute the following commands from the helm-repo directory:

 

helm-repo mbergljung$ helm lint charts/sample-cloud-connector

==> Linting charts/sample-cloud-connector

Lint OK

 

1 chart(s) linted, no failures

 

helm-repo mbergljung$ helm package charts/sample-cloud-connector

Successfully packaged chart and saved it to: /Users/mbergljung/DeployProjects/helm-repo/sample-cloud-connector-0.1.0.tgz

 

Now, let’s upload all the chart files including the package to our Helm Repository (i.e. https://github.com/gravitonian/helm-repo). First generate the index file:

 

helm-repo mbergljung$ helm repo index --url https://gravitonian.github.io/helm-repo .

 

This command generates the index.yaml file. Let’s take a look at it:

 

helm-repo mbergljung$ more index.yaml

apiVersion: v1

entries:

 sample-cloud-connector:

 - apiVersion: v1

   created: 2018-11-30T09:33:10.290581Z

   description: A Helm chart for Kubernetes

   digest: 743e4c88dd9368cf2d5555c29f78c6732b428ecae9ec1cf14607955a4e7c55a1

   icon: https://raw.githubusercontent.com/jenkins-x/jenkins-x-platform/master/images/java.png

   name: sample-cloud-connector

   urls:

   - https://gravitonian.github.io/helm-repo/sample-cloud-connector-0.1.0.tgz

   version: 0.1.0

 sample-runtime-bundle:

 - apiVersion: v1

   created: 2018-11-30T09:33:10.291296Z

   description: A Helm chart for Kubernetes

   digest: 8215e08e933dda7372866796027b2aa80f28c0487ddd902d2ec9e27036092905

   icon: https://raw.githubusercontent.com/jenkins-x/jenkins-x-platform/master/images/java.png

   name: sample-runtime-bundle

   urls:

   - https://gravitonian.github.io/helm-repo/sample-runtime-bundle-0.1.0.tgz

   version: 0.1.0

generated: 2018-11-30T09:33:10.288776Z

 

The helm repo GitHub project should now look something like this:

 

helm-repo mbergljung$ tree

.

├── charts

│   ├── sample-cloud-connector

│   │ ├── Chart.yaml

│   │ ├── README.md

│   │ ├── templates

│   │ │   ├── NOTES.txt

│   │ │   ├── _helpers.tpl

│   │ │   ├── deployment.yaml

│   │ │   ├── ingress.yaml

│   │ │   └── service.yaml

│   │ └── values.yaml

│   └── sample-runtime-bundle

│       ├── Chart.yaml

│       ├── README.md

│       ├── requirements.yaml

│       ├── templates

│       │ ├── NOTES.txt

│       │ ├── _helpers.tpl

│       │ ├── deployment.yaml

│       │ ├── ingress.yaml

│       │ └── service.yaml

│       └── values.yaml

├── index.yaml

├── sample-cloud-connector-0.1.0.tgz

└── sample-runtime-bundle-0.1.0.tgz

 

Now commit & push the changes:

 

helm-repo mbergljung$ git add .

helm-repo mbergljung$ git commit -m "Sample Cloud Connector Chart"

[gh-pages 493f63d] Sample Cloud Connector Chart

...

helm-repo mbergljung$ git push

Counting objects: 11, done.

Delta compression using up to 8 threads.

Compressing objects: 100% (10/10), done.

Writing objects: 100% (11/11), 6.20 KiB | 6.20 MiB/s, done.

Total 11 (delta 2), reused 0 (delta 0)

remote: Resolving deltas: 100% (2/2), completed with 1 local object.

To https://github.com/gravitonian/helm-repo.git

  907b594..493f63d  gh-pages -> gh-pages

 

Add Our Helm Repository to the local Helm installation

We can now add this Helm repo to our Helm installation as follows (you can skip this if you already did it when developing the runtime bundle):

 

helm-repo mbergljung$ helm repo add gravitonian-helm-repo https://gravitonian.github.io/helm-repo

"gravitonian-helm-repo" has been added to your repositories

 

Update the repo index to make sure we have all the latest charts locally (important, don’t forget to do this!):

 

charts mbergljung$ helm repo update

Hang tight while we grab the latest from your chart repositories...

...Skip local chart repository

...Successfully got an update from the "alfresco-incubator" chart repository

...Successfully got an update from the "gravitonian-helm-repo" chart repository

...Successfully got an update from the "activiti-cloud-charts" chart repository

...Successfully got an update from the "stable" chart repository

Update Complete. ⎈ Happy Helming!⎈

 

Update the Full Example Helm Charts to deploy Sample Cloud Connector

Add a dependency on the Sample Cloud Connector in the activiti-cloud-charts/activiti-cloud-full-example/requirements.yaml file:

dependencies:
  - name: infrastructure
    repository: https://activiti.github.io/activiti-cloud-charts/
    version: 0.4.0
  - name: application
    repository: https://activiti.github.io/activiti-cloud-charts/
    version: 0.4.0
  - name: activiti-cloud-modeling
    repository: https://activiti.github.io/activiti-cloud-charts/
    version: 0.4.0
    condition: activiti-cloud-modeling.enabled,modeling.enabled
  - name: sample-runtime-bundle
    repository: https://gravitonian.github.io/helm-repo/
    version: 0.1.0
  - name: sample-cloud-connector
    repository: https://gravitonian.github.io/helm-repo/
    version: 0.1.0

For the new dependency on the Sample Cloud Connector to be picked up we need to update the dependencies as follows:

 

activiti-cloud-full-example mbergljung$ helm dependency update

Hang tight while we grab the latest from your chart repositories...

...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):

Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: getsockopt: connection refused

...Successfully got an update from the "alfresco-incubator" chart repository

...Successfully got an update from the "gravitonian-helm-repo" chart repository

...Successfully got an update from the "activiti-cloud-charts" chart repository

...Successfully got an update from the "stable" chart repository

Update Complete. ⎈Happy Helming!⎈

Saving 5 charts

Downloading infrastructure from repo https://activiti.github.io/activiti-cloud-charts/

Downloading application from repo https://activiti.github.io/activiti-cloud-charts/

Downloading activiti-cloud-modeling from repo https://activiti.github.io/activiti-cloud-charts/

Downloading sample-runtime-bundle from repo https://gravitonian.github.io/helm-repo/

Downloading sample-cloud-connector from repo https://gravitonian.github.io/helm-repo/

Deleting outdated charts

 

You should see a charts directory with the downloaded dependencies:

 

activiti-cloud-full-example mbergljung$ tree

.

├── Chart.yaml

├── README.md

├── charts

│   ├── activiti-cloud-modeling-0.4.0.tgz

│   ├── application-0.4.0.tgz

│   ├── infrastructure-0.4.0.tgz

│   ├── sample-cloud-connector-0.1.0.tgz

│   └── sample-runtime-bundle-0.1.0.tgz

├── helm-service-account-role.yaml

├── requirements.lock

├── requirements.yaml

├── upgrade-debug.txt

└── values.yaml

 

Deploy the Sample Cloud Connector via the Full Example Helm Charts

We are now ready to upgrade our running Full Example solution (assuming it is still running from before) with the new sample cloud connector. First check what name the Full Example deployment has:

 

helm ls

NAME           REVISION UPDATED                  STATUS   CHART                             NAMESPACE

cloying-possum 1        Tue Nov 20 06:34:44 2018 DEPLOYED activiti-cloud-full-example-0.4.0 activiti7

intent-deer    1        Mon Nov 19 07:45:10 2018 DEPLOYED nginx-ingress-0.19.2              activiti7

 

We can see that the Full Example deployment has the name cloying-possum. Execute the following Helm command to upgrade it:

 

activiti-cloud-charts mbergljung$ cd activiti-cloud-full-example/

activiti-cloud-full-example mbergljung$ helm upgrade cloying-possum -f values.yaml . --namespace=activiti7

 

Note that here we cannot refer to the Full Example Helm chart package in the Activiti Helm repository like we did before when installing with activiti-cloud-charts/activiti-cloud-full-example. This is because we are changing more than just values.yaml: we are also changing requirements.yaml (i.e. the dependencies), and the dependencies would still be the old ones in the remote activiti-cloud-charts/activiti-cloud-full-example package. So we need to pick up the chart info from the local directory, hence the dot (.).
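
If you run into problems with an upgrade like this, a dry run against the local chart directory is a handy way to debug template or values issues without applying anything (standard Helm 2 flags):

activiti-cloud-full-example mbergljung$ helm upgrade cloying-possum . --namespace=activiti7 -f values.yaml --dry-run --debug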

 

Have a look in the Kubernetes Dashboard and make sure the new cloud connector is deployed properly:

 

 

I got the above error the first time around. To fix it I just added a CPU in the Docker for Desktop advanced settings and restarted:

 

 

I had 4 CPUs allocated before the change.

 

Testing the Custom Process

We should now be in a state where we can test our custom process. We have the process definition deployed in the Sample Runtime Bundle and the Service Task implementation deployed in the Sample Cloud Connector.

 

As usual, to be able to make any ReST calls we need to acquire an access token from Keycloak. If you go to the keycloak directory in the Postman collection and select the getKeycloakToken request you will get an access token:

 

 

The returned access token will be used to authenticate further requests as user hruser. Note that this token is time sensitive and it will be automatically invalidated at some point, so you might need to request it again if you start getting unauthorized errors (you will see a 401 Unauthorized error in Postman to the right in the middle of the screen).
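
If you prefer the command line over Postman, the same token can be requested with a plain OAuth2 password grant against Keycloak. The host name, realm, client id and password below are assumptions based on typical full example defaults, so adjust them to match your deployment:

$ curl -s -X POST "http://activiti-keycloak.<IP>.nip.io/auth/realms/activiti/protocol/openid-connect/token" \
    -d "client_id=activiti" \
    -d "grant_type=password" \
    -d "username=hruser" \
    -d "password=password"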

 

Once we get the token for a user we can interact with all the user endpoints. For example, we can invoke a ReST call to see which Process Definitions are deployed inside our Sample Runtime Bundle. We made a copy earlier of the rb-my-app Postman collection and changed the URL to point to the new Sample Runtime Bundle ({{gatewayUrl}}/sample-activiti7-rb-app/v1/process-definitions):

 

 

Now, let’s start a process instance with the Sample Process Definition contained in our Sample Runtime Bundle. To start a process instance based on this process definition we can use the startProcess Postman request that is in the rb-my-app Copy folder. This request should be updated to be a POST ReST call to {{gatewayUrl}}/sample-activiti7-rb-app/v1/process-instances:

 

 

Note the POST body: it specifies the processDefinitionKey as sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d (taken from the response to the getProcessDefinitions call) to tell Activiti to start a process instance based on the latest version of the Sample Process definition. I also removed the process variables that were specified by default for this call, as the process does not expect to be initialized with any variables. The commandType property has also been changed to payloadType:

{
"processDefinitionKey": "sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d",
"payloadType":"StartProcessPayload"
}

 

 

Click the Send button (but make sure you have a valid access token first, otherwise you will see a 401 Unauthorized status). You should get a response with information about the new process instance:

 

 

The next thing to do would probably be to list the active tasks. But before we do that we need to log in (i.e. get an access token) as the user that is expected to have tasks assigned. Currently we are logged in as user hruser. Our custom process definition has a User Task that is assigned to testuser. So we need to get a new access token for the testuser user before we can make a call and retrieve this task.

 

To do this click on the getKeycloakToken request in the keycloak folder, then click on the Body tab:

 

 

Change the username to testuser. Then click Send to get an access token for this user.

 

We can now call the getTasks request in the rb-my-app Copy folder. Update the URL to {{gatewayUrl}}/sample-activiti7-rb-app/v1/tasks?page=0&size=10, which is associated with the sample-activiti7-rb-app runtime bundle. If we run the getTasks request we should see a response such as this:

 

 

We can see the id of the task here and it is 4f36e024-f6e5-11e8-bbfe-46f0074bbf7d. The task is assigned to the testuser and can be completed by that user using the completeTask request as follows:

 

 

Here I have also updated the URL to include the task id: {{gatewayUrl}}/sample-activiti7-rb-app/v1/tasks/4f36e024-f6e5-11e8-bbfe-46f0074bbf7d/complete.

 

Completing this User Task should automatically make the process transition to Service Task 1. And we should see a log about the Service Task implementation being executed as follows:

 

 

To get to the log, go to the Kubernetes Dashboard overview for the activiti7 namespace (https://localhost:31234/#!/overview?namespace=activiti7) and then click on the Sample Cloud Connector Deployment (i.e. <helm-release-name>-sample-cloud-connector). In the New Replica Set box there are a couple of lines on the right side which, if you click on them, take you to the logs.
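
If you prefer the command line to the Dashboard, the same logs can be tailed with kubectl; cloying-possum is the Helm release name from earlier, so the Deployment name follows the <helm-release-name>-sample-cloud-connector pattern mentioned above:

$ kubectl get pods --namespace=activiti7

$ kubectl logs --namespace=activiti7 -f deployment/cloying-possum-sample-cloud-connector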

Introduction

Activiti 7 Beta 2 comes with a new BPMN modeling application that we will have a look at in this article. It supports the BPMN 2.0 standard. It’s been developed with the help of the Alfresco Application Development Framework (ADF). The source code is available here.

 

Activiti 7 Deep Dive Article Series 

This article is part of a series of articles covering Activiti 7 in detail; they should be read in the order listed:

 

  1. Deploying and Running a Business Process
  2. Using the Modeler to Design a Business Process  - this article
  3. Building, Deploying, and Running a Custom Business Process
  4. Using the Core Libraries

 

Prerequisites

  • You have read and worked through the "Activiti 7 - Getting Started" article and the Activiti 7 Full Example is running
  • You are familiar with Business Process Model and Notation (BPMN) 2.0 and you have designed and created business processes before, maybe even with previous versions of Activiti. This article will not cover too much around how to design business processes with BPMN.

 

Running the Activiti Modeler Application

The Activiti deployment that we did in the previous article already got the modeler up and running. If you don’t have it running, then go back and start it up as per the instructions in that article. You can access the modeler via the http://activiti-cloud-gateway.<IP>.nip.io/activiti-cloud-modeling URL. Log in with modeler/password; this user is part of the ACTIVITI_MODELER role, which has the rights to use the modelling capabilities. You should now see:

 

 

You might see an application if you created one the first time you accessed the modeler in the previous article.

 

Creating a Process Application

A process definition lives inside a so-called Process Application. Let's create one that will contain our process definition. Click Create new | Application and give it a name and description:

 

 

Click the Create button to create the new Process Application:

 

 

Now, click on the Sample App application to start modelling:

 

 

Creating a Process Definition

A Process Application can contain more than one Process Definition, so we need to create one before we start modelling the business process. Click on the Create new | Process menu item:

 

 

That should take you to a form where you fill in the name and description of the Process Definition:

 

 

Click the Create button to create the new Process Definition:

 

 

Now, click on the Sample Process in the left navigation to start BPMN 2.0 modelling:

 

 

We already have a start node on the canvas in the middle. Now let’s continue with the activities that we want for our sample process.

 

Let’s start by adding a new User Task. In the left toolbar select the rectangle with a person icon in it and then drag-and-drop the User Task onto the canvas. Call it User Task 1:

 

 

We want to assign the user task to a specific user (i.e. not the initiator of the process instance). We have the following users in Keycloak that we can choose from:

 

 

Click on the user task on the canvas and then fill in Assignee as testuser in the Properties pane on the right, then click the check button next to the field to save the assignee value:

 

 

Then drag-n-drop a Service Task onto the canvas after the User task and call it Service Task 1:

 

 

To implement the Service Task we use something called Cloud Connectors. They run as their own processes in their own containers, which means they can be scaled and managed separately from the process execution.

 

Cloud Connector implementations are specified with the standard implementation property. Click on Service Task 1 and then fill in the implementation property with the serviceTask1Impl value as follows:

 

 

The implementation property value on the Service Task definition will be used as a Spring Cloud Stream channel destination name (on the bound middleware RabbitMQ). So we have to make sure the custom Cloud Connector that we will build in the next article creates a consumer bound to this name with the spring.cloud.stream.bindings.<channel>.destination property value.

 

Now add an End node and then Save the whole process definition by clicking on the disk icon in the upper right corner. Also, make sure to download the XML for the new process definition by clicking on the download button next to the save button.

 

Important: if the session has timed out and you cannot save, just download the process definition and import it again once you have logged back in.

 

You might be used to working with process listeners and task listeners in previous versions of Activiti. Listeners are an Activiti extension to BPMN 2.0 that implement hook points inside a process definition which are triggered by events during process execution. In Activiti 7 we can implement “listener” functionality by listening to events emitted by the process engine. This is better for Cloud deployments as events are async and Activiti listeners are synchronous. We will have a look at how to implement event handlers in the next article when we implement a custom Cloud Connector.

 

Sample Process Definition BPMN 2.0 XML

The exported BPMN 2.0 sample process definition looks something like this (double-check that you have the User Task set up properly with the assignee and the Service Task with the implementation attribute):

 

 

<?xml version="1.0" encoding="UTF-8"?>
<bpmn2:definitions xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:bpmn2="http://www.omg.org/spec/BPMN/20100524/MODEL"
xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
xmlns:activiti="http://activiti.org/bpmn"
id="sample-diagram"
targetNamespace="http://bpmn.io/schema/bpmn"
xsi:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd">

<bpmn2:process id="sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d" name="Sample Process" isExecutable="true">
<bpmn2:documentation />
<bpmn2:startEvent id="StartEvent_1">
<bpmn2:outgoing>SequenceFlow_0qdq7ff</bpmn2:outgoing>
</bpmn2:startEvent>
<bpmn2:userTask id="UserTask_0b6cp1l" name="User Task 1" activiti:assignee="testuser">
<bpmn2:incoming>SequenceFlow_0qdq7ff</bpmn2:incoming>
<bpmn2:outgoing>SequenceFlow_1sc9dgy</bpmn2:outgoing>
</bpmn2:userTask>
<bpmn2:sequenceFlow id="SequenceFlow_0qdq7ff" sourceRef="StartEvent_1" targetRef="UserTask_0b6cp1l" />
<bpmn2:serviceTask id="ServiceTask_1wg38me" name="Service Task 1" implementation="serviceTask1Impl">
<bpmn2:incoming>SequenceFlow_1sc9dgy</bpmn2:incoming>
<bpmn2:outgoing>SequenceFlow_0t37jio</bpmn2:outgoing>
</bpmn2:serviceTask>
<bpmn2:sequenceFlow id="SequenceFlow_1sc9dgy" sourceRef="UserTask_0b6cp1l" targetRef="ServiceTask_1wg38me" />
<bpmn2:endEvent id="EndEvent_0irytw8">
<bpmn2:incoming>SequenceFlow_0t37jio</bpmn2:incoming>
</bpmn2:endEvent>
<bpmn2:sequenceFlow id="SequenceFlow_0t37jio" sourceRef="ServiceTask_1wg38me" targetRef="EndEvent_0irytw8" />
</bpmn2:process>
<bpmndi:BPMNDiagram id="BPMNDiagram_1">
<bpmndi:BPMNPlane id="BPMNPlane_1" bpmnElement="sampleproc-e9b76ff9-6f70-42c9-8dee-f6116c533a6d">
<bpmndi:BPMNShape id="_BPMNShape_StartEvent_2" bpmnElement="StartEvent_1">
<dc:Bounds x="105" y="121" width="36" height="36" />
<bpmndi:BPMNLabel>
<dc:Bounds x="78" y="157" width="90" height="20" />
</bpmndi:BPMNLabel>
</bpmndi:BPMNShape>
<bpmndi:BPMNShape id="UserTask_0b6cp1l_di" bpmnElement="UserTask_0b6cp1l">
<dc:Bounds x="233" y="98.5" width="100" height="80" />
</bpmndi:BPMNShape>
<bpmndi:BPMNEdge id="SequenceFlow_0qdq7ff_di" bpmnElement="SequenceFlow_0qdq7ff">
<di:waypoint x="141" y="139" />
<di:waypoint x="283" y="139" />
<bpmndi:BPMNLabel>
<dc:Bounds x="212" y="117.5" width="0" height="13" />
</bpmndi:BPMNLabel>
</bpmndi:BPMNEdge>
<bpmndi:BPMNShape id="ServiceTask_1wg38me_di" bpmnElement="ServiceTask_1wg38me">
<dc:Bounds x="422" y="99" width="100" height="80" />
</bpmndi:BPMNShape>
<bpmndi:BPMNEdge id="SequenceFlow_1sc9dgy_di" bpmnElement="SequenceFlow_1sc9dgy">
<di:waypoint x="333" y="139" />
<di:waypoint x="422" y="139" />
<bpmndi:BPMNLabel>
<dc:Bounds x="377.5" y="117" width="0" height="13" />
</bpmndi:BPMNLabel>
</bpmndi:BPMNEdge>
<bpmndi:BPMNShape id="EndEvent_0irytw8_di" bpmnElement="EndEvent_0irytw8">
<dc:Bounds x="611" y="121" width="36" height="36" />
<bpmndi:BPMNLabel>
<dc:Bounds x="629" y="160" width="0" height="13" />
</bpmndi:BPMNLabel>
</bpmndi:BPMNShape>
<bpmndi:BPMNEdge id="SequenceFlow_0t37jio_di" bpmnElement="SequenceFlow_0t37jio">
<di:waypoint x="522" y="139" />
<di:waypoint x="611" y="139" />
<bpmndi:BPMNLabel>
<dc:Bounds x="566.5" y="117" width="0" height="13" />
</bpmndi:BPMNLabel>
</bpmndi:BPMNEdge>
</bpmndi:BPMNPlane>
</bpmndi:BPMNDiagram>
</bpmn2:definitions>

The bpmndi:BPMNDiagram section defines the process definition graphical layout.

 

In the next article we will build a custom Runtime Bundle containing this process definition and a custom Cloud Connector that will provide an implementation for the Service Task.

Introduction

Activiti 7 is an evolution of the battle-tested Activiti workflow engine from Alfresco that has been fully adapted to run in a cloud environment. It is built according to Cloud Native application concepts and differs a bit from the previous Activiti versions in how it is architected. There is also a new Activiti Modeler that we will have a look at in a separate article.

 

The very core of the Activiti 7 engine is still very much the same as in previous versions. However, it has been made much more narrowly focused to do one job and do it amazingly well, and that is to run BPMN business processes. The ancillary functions built into the Activiti runtime, such as servicing runtime API requests for Query and Audit data produced by the engine and stored in the engine's database, have been moved out of the engine and made to operate as Spring Boot 2 microservices, each running in its own highly scalable container.

 

The core libraries of the Activiti engine have also been re-architected for version 7; we will have a look at them in another article.

 

This article dives into how you can easily deploy and run your Activiti 7 applications in a cloud environment using Docker containers and Kubernetes. There are a lot of new technologies and concepts that are used with Activiti 7, so we will have a look at those first.

 

Activiti 7 Deep Dive Article Series 

This article is part of a series of articles covering Activiti 7 in detail; they should be read in the order listed:

 

  1. Deploying and Running a Business Process - this article
  2. Using the Modeler to Design a Business Process
  3. Building, Deploying, and Running a Custom Business Process
  4. Using the Core Libraries

 

Prerequisites

  • You have Docker installed.

 

Concepts and Technologies

The following is a list of concepts (terms) and technologies that you will come in contact with when deploying and using the Activiti 7 product.

 

Virtual Machine Monitor (Hypervisor)

A Hypervisor is used to run other OS instances on your local host machine. Typically it’s used to run a different OS on your machine, such as Windows on a Mac. When you run another OS on your host it is called a guest OS, and it runs in a so-called Virtual Machine (VM).

 

Image

An image is a number of software layers that can be used to instantiate a container. It’s a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. This could be, for example, Java + Apache Tomcat. You can find all kinds of Docker Images on the public repository called Docker Hub. There are also private image repositories (for things like commercial enterprise images), such as the one Alfresco uses called Quay.  

 

Container

An instance of an image is called a container. You have an image, which is a set of layers as described. If you start this image, you have a running container of this image. You can have many running containers of the same image.

 

Docker

Docker is one of the most popular container platforms. Docker provides functionality for deploying and running applications in containers based on images.

 

Docker Compose

When you have many containers making up your solution, such as with Activiti 7, and you need to configure each one of the containers so they work nicely together, then you need a tool for this.

 

Docker Compose is such a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
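
As a minimal illustration (this is not part of the Activiti 7 deployment we do later, which uses Kubernetes and Helm instead), a docker-compose.yml for a hypothetical web application plus a message broker could look like the following, and a single docker-compose up would then start both services:

version: "3"
services:
  my-app:
    image: my-app:1.0.0        # hypothetical application image
    ports:
      - "8080:8080"
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"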

 

Dockerfile

A Dockerfile is a script containing a series of instructions and commands which are executed to form a new Docker image. Each command executed translates to a new layer in the image, forming the end product. It replaces the process of doing everything manually and repeatedly. When a Dockerfile has finished executing, you end up with a new image, which you can then use to start a new Docker container.

 

Difference between Containers and Virtual Machines

It is important to understand the difference between using containers and using VMs. Here is a picture that illustrates:

 

 

The main difference is that when you are running a container you are not kicking off a complete new OS instance. This makes containers much more lightweight and quicker to start. A container also takes up much less space on your hard disk, as it does not have to ship the whole OS.

 

For more info read What is a Container | Docker.

 

Cluster

A cluster forms a shared computing environment made up of servers (nodes with one or more containers), where resources have been clustered together to support the workloads and processes running within the cluster:

 

Kubernetes

Docker is great for running containers on one host, and provides all required functionality for that purpose. But in today’s distributed services environment, the real challenge is to manage resources and workloads across servers and complex infrastructures.

The most used and supported tool for this today is Kubernetes (a Greek word for “helmsman” or “pilot”), which was originally created by Google and then made open source. As the word suggests, Kubernetes undertakes the cumbersome task of orchestrating containers across many nodes, providing many useful features along the way. Kubernetes is an open source platform that can be used to run, scale, and operate application containers in a cluster.

Kubernetes consists of several architectural components:

  • Kubernetes Node - a cluster node running one or more containers
    • Kube Proxy - the Kubernetes network proxy abstracts the service access points, can route to other nodes.
    • Kubelet - the kubelet is the primary node-agent. Makes sure the containers are running properly as per specification in the pod template.
    • Pods - this is the smallest unit in the Kubernetes Object Model (see next section) that can be deployed. It represents a running process in the cluster. A pod encapsulates an application container and Docker is the most used container runtime.
    • Labels - metadata that's attached to Kubernetes objects, including pods.
  • Kubernetes Master
    • API Server - frontend to the cluster's shared state. Provides a ReST API that all components use.
    • Replication controllers - regulates the state of the cluster. Creates new pod "replicas" from a pod template to ensure that a configurable number of pods are running.
    • Scheduler - manages the workload by watching for newly created pods that have no node assigned and selecting a node for them to run on.
  • Kubectl - To control and manage the Kubernetes cluster from the command line we will use a tool called kubectl. It talks to the API Server in the Kubernetes Master Server, which in turn talks to the individual Kubernetes nodes.
  • Services offer a low-overhead way to route requests to a logical set of pod back ends in the cluster, using label-driven selectors.

 

The following picture illustrates:

 

 

You might also have heard of Docker Swarm, which is similar to Kubernetes.

 

Kubernetes Objects

When working with Kubernetes, different kinds of things will be created, deployed, managed, and destroyed. We can call these things objects. There are a number of different types of objects in the Kubernetes Object Model that are good to know about. The following picture illustrates some of the objects that you are likely to come across:

 

 

Some of these objects are already familiar; here is a list explaining the rest (a minimal example manifest follows the list):

 

  • Container - our application is deployed as a Docker container.
  • Volume - our application can store data via a volume that points to physical storage.
  • Pod - contains one or more containers and is short lived; it is not guaranteed to be constantly up.
  • Replica Set - manages a set of replicated pods, making sure the correct number of replicas is always running.
  • Deployment - pods are scheduled using deployments, which provide replica management, scaling, rolling updates, rollback, clean-up, etc.
  • Service - defines a set of pods that provide a micro-service. Provides a stable virtual endpoint for the ephemeral (short lived) pods in the cluster.
  • Ingress - public access point for one or more services. Ingress is the built-in Kubernetes load-balancing framework for HTTP traffic. With Ingress, you control the routing of external traffic.
  • Secret - contains access tokens and provides access to, for example, private images.
  • Config Map - name/value pair property configuration for an application.
  • Namespace - object names must be unique within a namespace, but the same name can be reused in another namespace. Namespaces provide isolation and are ideally suited for addressing multi-tenancy requirements.
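
As a minimal example of how such objects are declared (generic Kubernetes YAML, not something the Activiti 7 deployment itself requires you to write), here is a Deployment that keeps two replicas of an nginx container running, plus a Service that exposes them inside the cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: ClusterIP
  selector:
    app: my-nginx
  ports:
  - port: 80
    targetPort: 80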

 

Minikube

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop. It's primarily for users looking to try out Kubernetes, or to use as a development environment.

 

We will be using the newest Docker for Desktop environment, which comes with Kubernetes, so no need to install Minikube.

 

Helm

So we have a Kubernetes cluster up and running, and it is now ready for container deployments. How do we handle this efficiently? We might have a whole lot of containers for things such as database layer, application layer, web layer, search layer, etc. And they should have different configurations for scalability and failover. Sounds quite complex.

There is help, however, in the form of a tool called Helm. It is a package manager for Kubernetes clusters. With it we can specify exactly how, for example, the database layer deployment should look: 3 containers with MySQL, 2 standby failover containers, autoscale like this, and so on. A Helm package definition is referred to as a Helm Chart. Each chart is a collection of files that lists, or defines:

 

  • The Docker container images that should be deployed (links to them).
  • The configuration of the components.
  • And the configuration of the infrastructure that the solution uses.

 

Other important Helm concepts:

 

  • Release - an instance of a chart loaded into Kubernetes.
  • Helm Repository - a repository of published charts. There is a public repo at kubernetes.io but organisations can have their own repos. Alfresco has its own chart repo for example.
  • Template - a Kubernetes configuration file written with the Go templating language (with addition of Sprig functions)

 

Helm Charts are stored in a Helm repository, much like JARs are stored in for example Maven Central. So Helm is actually built up of both a Client bit and a Server bit. The server bit of Helm is called Tiller and runs in the Kubernetes cluster. The Helm architecture looks something like this:

 

 

Helm is not only a package manager, it is also a deployment manager that can:

 

  • Do repeatable deployments
  • Manage dependencies: reuse and share
  • Manage multiple configurations
  • Update, rollback, and test application deployments (referred to as releases)

 

The Activiti 7 solution is packaged with Helm and Activiti provides a Helm repository with Charts related to the Activiti 7 components.

 

Cloud Native Applications

In order to understand how Activiti 7 is architected, built, and runs it is useful to know a bit about Cloud Native applications. Let’s try to explain with an example. About 10 years ago Netflix was thinking about a future where customers could just click a button and watch a movie online. There would be no more DVDs. Sending DVDs around to people just doesn’t scale globally: you cannot watch what you want, when you want, where you want; there are only so many DVDs you can send via snail mail; DVDs could be damaged; and so on. Netflix also had limited ways of recommending new movies to customers in an efficient way.

 

To implement this new online streaming service Netflix would have to create a new type of online service that would be:

 

  • Web-scale - everything is distributed, both services and compute power. The system is self-healing with fault isolation and distributed recovery. There would be API driven automation. Multiple applications running simultaneously.
  • Global - the movie service would have to be available on a global scale instantly, wherever you are.
  • Highly-available - when you click a button to see a movie it just works, otherwise clients would not adopt the service
  • Consumer-facing - the online service would have to be directly facing the customers in their homes.

 

What Netflix really wanted was speed and access at scale. The movie site needed to be always on, always available, with pretty much no downtime. As a consumer you would not accept that you could not watch a movie because of technical problems. They also knew that they would have to change their product continuously while it was running to add new features based on consumer needs. Basically they would have to get better and better, faster and faster. And this is key to Cloud Native applications.

 

So what is it about Cloud Native that enables you to provide an application that appears to be always online and that can have new features added and delivered continuously? The following are some of the characteristics of Cloud Native applications:

 

  • Modularity - the applications can no longer be monoliths where all the functionality is baked into one massive application. Each application function needs to be delivered as an independent service, referred to as a microservice. In the case of Netflix they would see a transition from a traditional development model with 100 engineers producing a monolithic DVD-rental application to a microservices architecture with many small teams responsible for the end-to-end development of hundreds of microservices that work together to stream digital entertainment to millions of Netflix customers every day.
  • Observability - with all these microservices it is important to constantly monitor them to detect problems and then instantly fire up new instances of the services, so it appears as if the whole application is always working.
  • Deployability - delivering the application as a number of microservices enable you to deploy these small services quickly and continuously. It also means you can upgrade and do maintenance on different parts of the application independently. The services are also deployed as Docker containers, which means that the OS is abstracted.
  • Testability - when doing continuous delivery it is important that all tests are automated, so we can deliver new features as quickly as possible.
  • Disposability - it should be very easy to get rid of a feature or function in an application. This is easy if all the major functionality is independently delivered as microservices.
  • Replaceability - how easy can we replace the features that make up the application. If it is easy to dispose of features, and replace them with new ones, then the application can be very flexible to new requirements from customers.

 

So what are the main differences compared to traditional application development?

 

  • OS abstraction - the different microservices are delivered and deployed as Docker containers, which means we don’t have to worry about what OS we need and other dependent libraries, which is often the case with traditional applications.
  • Predictable - we can adhere to a contract and have predictability on how fast and reliably we can deliver a service. With one big monolith it can be difficult to predict when a new feature will be available.
  • Right size capability - we can scale the individual services up and down depending on the load on each one of them. With traditional applications it is common to oversize the deployment to cater for “future” peak loads.
  • Continuous Delivery (CD) - we will be able to make software updates as soon as they are ready as we can depend on our automated tests to quickly spot any regressions. With one big monolith you can’t easily update one feature as you have to test the whole application before you can deliver the feature update. And there is usually very low automated test coverage.
  • Automated scaling - we can scale the whole application very efficiently and automatically depending on requirements and load. With traditional applications you would often see overscaling, and most of the scaling would have to be done manually.
  • Rapid recovery - how quickly can we recover from failure. With sophisticated monitoring of the individual microservices we can easily detect faults and recover quickly. The application will also be more resistant to complete downtime, as it is often possible to continue servicing customers if just one or a few of the microservices are down momentarily. With one monolithic application it can take hours to figure out which part of the application the problem resides in, resulting in longer downtimes.

 

So, can anyone build these kinds of solutions today, or does it require special knowledge and long experience? Yes, you can develop Cloud Native apps: if you have read up on the concepts around Cloud Native applications, and you know about some of the more prominent frameworks supporting them, then you can definitely do it.

 

Here are some of the building blocks of Cloud Native Applications:

 

  • Service Registry
  • Distributed Configuration Service
  • Distributed Messaging (Streams)
  • Distributed logging and monitoring
  • Gateway
  • Circuit Breakers, Bulkheads, Fallbacks, Feign
  • Contracts

 

A lot of these features can be found in the Spring Cloud framework, which provides a number of tools that can be used to build Cloud Native applications:

 

 

A typical architecture for a cloud native solution looks like this:

 

 

Installing and enabling necessary software

This section walks through how to install and enable the required software, specifically in regards to Kubernetes.

 

Checking Docker Version

As mentioned in the beginning, you need to have Docker installed. But that’s not all: you need a Docker version that comes with Kubernetes. You can check this by looking at the About dialog for your Docker installation; you should see something like this:

 

 

The dialog should list Kubernetes as supported software (i.e. in the bottom right corner). This means that we don’t need to install Kubernetes locally with, for example, Minikube; we are ready to deploy stuff into a Kubernetes cluster as it is provided by our Docker installation.

 

These local Kubernetes deployments will mirror how a production deployment would look in, for example, AWS, which is good.

 

Enabling Kubernetes

Docker for Desktop comes with Kubernetes but we still need to enable it. To do this go into Docker Preferences..., then click on the Kubernetes tab. You should see the following:

 

 

Click on Enable Kubernetes followed by the Kubernetes radio button. It runs Swarm by default if we don't specifically select Kubernetes. Click Apply to install a Kubernetes cluster. You should see the following config dialog after a couple of minutes:

 

 

This means we are ready to deploy stuff into Kubernetes.

 

Now check that you have the correct kubectl context. I have been running minikube on my Mac so I don’t have the correct context:

 

$ kubectl config current-context

minikube

 

To change to Docker for Desktop context use:

 

$ kubectl config use-context docker-for-desktop

Switched to context "docker-for-desktop".

 

Check again:

 

$ kubectl config current-context

docker-for-desktop

 

Now check the kubectl version (no need to install kubectl, it comes with Docker):

 

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}

 

Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

 

You might have noticed that my server and client versions are different. I am using kubectl from a previous manual installation and the server is from the Docker for Desktop installation.

 

Let’s get some information about the Kubernetes cluster:

 

$ kubectl cluster-info

Kubernetes master is running at https://localhost:6443

KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

 

And let’s check out the nodes we got in the cluster:

 

$ kubectl get nodes

NAME                 STATUS ROLES AGE       VERSION

docker-for-desktop   Ready master 1h        v1.10.3

 

We can also find out what PODs we got in the Kubernetes system with the following command:

 

$ kubectl get pods --namespace=kube-system

NAME                                         READY STATUS RESTARTS AGE

etcd-docker-for-desktop                      1/1 Running 0 1h

kube-apiserver-docker-for-desktop            1/1 Running 0 1h

kube-controller-manager-docker-for-desktop   1/1 Running 0 1h

kube-dns-86f4d74b45-4jw6f                    3/3 Running 0 1h

kube-proxy-zwwpx                             1/1 Running 0 1h

kube-scheduler-docker-for-desktop            1/1 Running 0 1h

 

Configure memory for Docker and Kubernetes

The applications that we are going to deploy will most likely require more memory than is allocated by default. I am updating my setting from 4GB to 7GB:

 

 

Updating the memory settings will restart Docker and Kubernetes so you will have to wait a bit before you proceed with the next step.

 

Installing the Kubernetes Dashboard

What we don’t see here is a Kubernetes Dashboard POD, which is a web app that is really handy to have around when working with Kubernetes. We can use the Kubernetes Dashboard YAML that is available online and submit it to the Kubernetes Master as follows:

 

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

secret "kubernetes-dashboard-certs" created

serviceaccount "kubernetes-dashboard" created

role "kubernetes-dashboard-minimal" created

rolebinding "kubernetes-dashboard-minimal" created

deployment "kubernetes-dashboard" created

service "kubernetes-dashboard" created

 

Check the PODs again:

 

kubectl get pods --namespace=kube-system

NAME                                         READY STATUS RESTARTS AGE

etcd-docker-for-desktop                      1/1 Running 0 1h

kube-apiserver-docker-for-desktop            1/1 Running 0 1h

kube-controller-manager-docker-for-desktop   1/1 Running 0 1h

kube-dns-86f4d74b45-4jw6f                    3/3 Running 0 1h

kube-proxy-zwwpx                             1/1 Running 0 1h

kube-scheduler-docker-for-desktop            1/1 Running 0 1h

kubernetes-dashboard-7b9c7bc8c9-d7dcz        0/1 ContainerCreating 0 39s

 

Wait for the Dashboard POD to load until you see STATUS as Running. It could take some time to change from ContainerCreating to Running, so be patient. Once it’s in the running state you can set up a forwarding port to that specific Pod. We are going to do this in a more permanent way by defining a NodePort type of Service, so create a YAML file called k8s-dashboard-nodeport-service.yaml with the following content:

 

apiVersion: v1
kind: Service
metadata:
 labels:
   k8s-app: kubernetes-dashboard
 name: kubernetes-dashboard-nodeport
 namespace: kube-system
spec:
 ports:
 - port: 8443
   protocol: TCP
   targetPort: 8443
   nodePort: 31234
 selector:
   k8s-app: kubernetes-dashboard
 sessionAffinity: None
 type: NodePort

Note the NodePort range, which is 30000-32767.

 

Create the service as follows:

 

$ kubectl create -f k8s-dashboard-nodeport-service.yaml

service "kubernetes-dashboard-nodeport" created

 

Now access the Dashboard from a browser via the https://localhost:31234 URL. You will get some warnings but do proceed until you see the following dialog:

 

 

Click on SKIP and you will be led to the Dashboard as shown below:

 

 

Click on Nodes and you will see the single Kubernetes node as follows:

 

 

Installing Helm

The Helm package manager will be used to deploy container solutions, such as Activiti 7, to the Kubernetes cluster. It consists of both a client and a server. Find the appropriate installation package for your OS here.

I installed the Helm Client on my Mac using the following steps:

 

  1. Downloaded helm-v2.8.0-darwin-amd64.tar.gz.
  2. $ tar -zxvf helm-v2.8.0-darwin-amd64.tar.gz
  3. $ sudo mv darwin-amd64/helm /usr/local/bin/helm

 

Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster. The easiest way to install tiller into the cluster is simply to run helm init. This will validate that helm’s local environment is set up correctly (and set it up if necessary). Then it will connect to whatever cluster kubectl connects to by default. Once it connects, it will install tiller into the kube-system namespace.

 

I did an in-cluster installation as follows:

 

$ helm init

$HELM_HOME has been configured at /Users/mbergljung/.helm.

 

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Happy Helming!

 

To see Tiller running do:

 

$ kubectl get pods --namespace kube-system

NAME                                         READY STATUS RESTARTS AGE

etcd-docker-for-desktop                      1/1 Running 0 2d

kube-apiserver-docker-for-desktop            1/1 Running 0 2d

kube-controller-manager-docker-for-desktop   1/1 Running 0 2d

kube-dns-86f4d74b45-4jw6f                    3/3 Running 0 2d

kube-proxy-zwwpx                             1/1 Running 0 2d

kube-scheduler-docker-for-desktop            1/1 Running 0 2d

kubernetes-dashboard-7b9c7bc8c9-xq6qf        1/1 Running 0 55m

tiller-deploy-664858687b-hrtkw               1/1 Running 0 58s
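You can also verify that the client can talk to Tiller with helm version; the exact output depends on the version you installed, but both a Client and a Server version should be reported:

# Both client and server (Tiller) versions should be printed
$ helm version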

 

Adding the Activiti 7 Helm Repository

Add the Activiti Helm Repository so we can pull Activiti 7 Charts and deploy Activiti 7 applications.

 

$ helm repo add activiti-cloud-charts https://activiti.github.io/activiti-cloud-charts/

"activiti-cloud-charts" has been added to your repositories

 

List the helm repositories that you have access to like this:

 

$ helm repo list

NAME                  URL                                              

stable                https://kubernetes-charts.storage.googleapis.com

local                 http://127.0.0.1:8879/charts                     

activiti-cloud-charts https://activiti.github.io/activiti-cloud-charts/

    

To see what charts are available do a search like this:

 

$ helm search

NAME                                               CHART VERSION      APP VERSION                  DESCRIPTION                                       

activiti-cloud-charts/activiti-cloud-audit         0.2.0                                          A Helm chart for Kubernetes                       

activiti-cloud-charts/activiti-cloud-connector     0.2.0                                          A Helm chart for Kubernetes                       

activiti-cloud-charts/activiti-cloud-demo-ui       0.1.4                                          A Helm chart for Kubernetes                       

activiti-cloud-charts/activiti-cloud-full-example 0.2.1              1.0                          An Activiti Helm chart for Kubernetes             

activiti-cloud-charts/activiti-cloud-gateway       0.2.0                                          A Helm chart for Kubernetes                       

activiti-cloud-charts/activiti-cloud-query         0.2.0                                          A Helm chart for Kubernetes                       

activiti-cloud-charts/activiti-keycloak            0.1.4              1.0                          A Helm chart for Kubernetes                       

activiti-cloud-charts/application                  0.2.0              1.0                          A Helm chart for Kubernetes                       

activiti-cloud-charts/example-runtime-bundle       0.1.0                                          A Helm chart for Kubernetes                       

activiti-cloud-charts/infrastructure               0.2.0              1.0                          A Helm chart for Kubernetes                       

activiti-cloud-charts/runtime-bundle               0.2.0                                          A Helm chart for Kubernetes         

...     

 

We can see in the above listing that we have access to all the Activiti 7 Cloud app charts, including the activiti-cloud-full-example that we will deploy in a bit.
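If you want to preview the configurable options of a chart before deploying it, helm inspect values prints its default values.yaml. For example, for the full example chart we will use later:

# Print the default configuration of the full example chart
$ helm inspect values activiti-cloud-charts/activiti-cloud-full-example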

 

Activiti 7 Overview

Activiti 7 is implemented as a Cloud Native application. Its main abstraction layer for different cloud services, such as message queues and service registries, is the Spring Cloud framework, which means the Activiti team does not have to invent new abstraction layers for these cloud services.

 

To support scaling globally, the Activiti team has chosen the Kubernetes container orchestration engine, which is supported by the main cloud providers, such as Amazon and Google.

 

The Activiti 7 infrastructure can be described with the following picture:

 

 

The different building blocks in the infrastructure can be explained as follows:

 

  • Activiti Keycloak - SSO - Single Sign On to all services
  • Activiti Keycloak - IDM - Identity Management so Activiti knows what the organizational structure looks like, who the groups and users are, and who is allowed to do what.
  • Activiti Modeler - BPMN 2.0 modelling application where developers can define new process applications with one or more process definitions.
  • API Gateway - Gives user interfaces and other systems access to the process applications
  • Service Registry - allows all the services and applications to register themselves and be discovered dynamically.
  • Config Server - centralized and distributed configuration service that can be used by all other services.
  • Zipkin - a distributed tracing system. It helps gather the timing data needed to troubleshoot latency problems in microservice architectures.
  • Activiti Applications - each application represents one or more business process implementations. So one application could be an HR process for job applications while another application could be used for loan applications. Cloud connectors are used to make calls outside of the business process.

 

More specifically, an Activiti 7 Application consists of the following services:

 

  • Runtime Bundles - provide different runtimes for different business models. The first available runtime is for business processes. The Process Runtime is comparable to what was previously referred to as the Activiti Process Engine. A runtime is kept as small and efficient as possible. A Runtime Bundle contains a number of process definitions and is immutable, which means it will only run instances of that fixed set of process definitions.
  • Query Service - aggregates data and makes it possible to consume it in an efficient way. There might be multiple Runtime Bundles inside an Activiti application and the Query Service will aggregate data from each of them.
  • Notification Service - provides notifications about what is happening in the different Runtime Bundles.
  • Audit Service - the standard audit log that you have in BPM systems; it provides a log of exactly what happened when a process was executed.
  • Cloud Connectors - handle system-to-system interaction. Instead of having all the code that talks to external systems inside the process definition implementation and the process runtime, it is now decoupled and implemented as separate services with separate SLAs. A Service Task would typically be implemented as a Cloud Connector.

 

Deploying and Running a Business Process

This section will show you how to deploy and run a business process with the Activiti 7 product using Helm and Kubernetes. We will not develop any new processes, just get a feel for how Activiti 7 works by using a provided example. Note that Activiti 7 does not come with a UI, which means you will have to interact with the system via the REST API.

 

To do this we will use one of the examples that are available out-of-the-box. It's called the Activiti 7 Full Example and it includes all the building blocks that make up an Activiti 7 application:

 

  • API Gateway (Spring Cloud)
  • SSO/IDM (Keycloak)
  • Activiti 7 Runtime Bundle (Example)
  • Activiti 7 Cloud Connector (Example)
  • Activiti 7 Query Service
  • Activiti 7 Audit Service

 

The following picture illustrates this:

 

 

The actual example process definition is contained in what's called a Runtime Bundle. The Service Task implementation(s) and listener implementation(s) are contained in what's called a Cloud Connector. The Client in this case will just be a Postman Collection that is used to interact with the services. Activiti 7 Beta currently doesn't have a Process and Task management user interface.

 

The new Activiti Modeler application can be deployed at the same time as the rest of the example, but you need to enable it manually as it is not enabled by default.

 

Create a Kubernetes namespace for the Activiti 7 App Deployments

We are going to create a separate namespace for the Activiti 7 application deployments. This means that any name we use inside this namespace will not clash with the same name in other namespaces.

 

$ kubectl create namespace activiti7

namespace "activiti7" created

 

Check what namespaces we have now:

 

$ kubectl get namespaces

NAME          STATUS AGE

activiti7     Active 43s

default       Active 2d

docker        Active 2d

kube-public   Active 2d

kube-system   Active 2d
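If you get tired of adding --namespace=activiti7 to every command, you can make it the default namespace for the current context. This is optional and assumes the docker-for-desktop context used in this setup:

# Make activiti7 the default namespace for the docker-for-desktop context
$ kubectl config set-context docker-for-desktop --namespace=activiti7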

 

Installing an Ingress Controller to Expose Services Externally

Before we start installing Activiti 7 components we need to install an Ingress (call it a front-end, proxy, or gateway if you prefer) that can be used to expose all the Activiti 7 Kubernetes Services externally. The Activiti example installation process is simpler if services are exposed with a wildcard DNS and the DNS is mapped to an Ingress in advance of the installation.

 

Before we install the Ingress it’s always a good idea to update the local helm chart repo/cache, so we are not using an older version of the chart:

 

$ helm repo update

Hang tight while we grab the latest from your chart repositories...

...Skip local chart repository

...Successfully got an update from the "alfresco-incubator" chart repository

...Successfully got an update from the "activiti-cloud-charts" chart repository

...Successfully got an update from the "stable" chart repository

Update Complete. ⎈ Happy Helming!⎈

 

To install the Ingress we install something called a Kubernetes Ingress Controller, which will automatically create routes to the internal services that we want to expose. Run the following command:

 

$ helm install stable/nginx-ingress --namespace=activiti7

NAME:   intent-deer

E1119 07:45:11.270800   52554 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:61100->127.0.0.1:61103: write tcp4 127.0.0.1:61100->127.0.0.1:61103: write: broken pipe

LAST DEPLOYED: Mon Nov 19 07:45:10 2018

NAMESPACE: activiti7

STATUS: DEPLOYED

 

RESOURCES:

==> v1/ConfigMap

NAME                                  DATA AGE

intent-deer-nginx-ingress-controller  1 0s

 

==> v1/ServiceAccount

NAME                       SECRETS AGE

intent-deer-nginx-ingress  1 0s

 

==> v1beta1/ClusterRole

NAME                       AGE

intent-deer-nginx-ingress  0s

 

==> v1beta1/PodDisruptionBudget

NAME                                       MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS  AGE

intent-deer-nginx-ingress-controller       1 N/A 0          0s

intent-deer-nginx-ingress-default-backend  1 N/A 0          0s

 

==> v1/Pod(related)

NAME                                                        READY STATUS RESTARTS AGE

intent-deer-nginx-ingress-controller-9b5fb9b4f-4985v        0/1 ContainerCreating 0 0s

intent-deer-nginx-ingress-default-backend-6f7884d594-85dph  0/1 ContainerCreating 0 0s

 

==> v1beta1/ClusterRoleBinding

NAME                       AGE

intent-deer-nginx-ingress  0s

 

==> v1beta1/Role

NAME                       AGE

intent-deer-nginx-ingress  0s

 

==> v1beta1/RoleBinding

NAME                       AGE

intent-deer-nginx-ingress  0s

 

==> v1/Service

NAME                                       TYPE CLUSTER-IP EXTERNAL-IP PORT(S)                     AGE

intent-deer-nginx-ingress-controller       LoadBalancer 10.98.10.209 <pending> 80:30006/TCP,443:30107/TCP  0s

intent-deer-nginx-ingress-default-backend  ClusterIP 10.111.24.254 <none> 80/TCP                      0s

 

==> v1beta1/Deployment

NAME                                       DESIRED CURRENT UP-TO-DATE AVAILABLE AGE

intent-deer-nginx-ingress-controller       1 1 1 0 0s

intent-deer-nginx-ingress-default-backend  1 1 1 0 0s

 

NOTES:

The nginx-ingress controller has been installed.

It may take a few minutes for the LoadBalancer IP to be available.

You can watch the status by running 'kubectl --namespace activiti7 get services -o wide -w intent-deer-nginx-ingress-controller'

 

An example Ingress that makes use of the controller:

 

 apiVersion: extensions/v1beta1

 kind: Ingress

 metadata:

   annotations:

     kubernetes.io/ingress.class: nginx

   name: example

   namespace: foo

 spec:

   rules:

     - host: www.example.com

       http:

         paths:

           - backend:

               serviceName: exampleService

               servicePort: 80

             path: /

   # This section is only required if TLS is to be enabled for the Ingress

   tls:

       - hosts:

           - www.example.com

         secretName: example-tls

 

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

 

 apiVersion: v1

 kind: Secret

 metadata:

   name: example-tls

   namespace: foo

 data:

   tls.crt: <base64 encoded cert>

   tls.key: <base64 encoded key>

 type: kubernetes.io/tls

 

With the Ingress Controller created, the services can later be accessed from outside the cluster. To access anything via the Ingress Controller we need to find out its IP address:

 

$ kubectl get services --namespace=activiti7

NAME                                        TYPE CLUSTER-IP EXTERNAL-IP PORT(S)                      AGE

intent-deer-nginx-ingress-controller        LoadBalancer 10.98.10.209 localhost     80:30006/TCP,443:30107/TCP   35m

intent-deer-nginx-ingress-default-backend   ClusterIP 10.111.24.254 <none> 80/TCP                       35m

 

In this case the external IP address is localhost, which means IP 127.0.0.1. We cannot use localhost, as it would not refer to the same thing inside and outside the Docker containers. We will use the host's IP address and the public nip.io service for DNS.

 

Note that you might need to run kubectl get services... several times until you can see the EXTERNAL-IP for your Ingress Controller. If you see <pending>, wait a few seconds and run the command again.
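Instead of re-running the command manually you can also watch the service list until the EXTERNAL-IP is populated, for example:

# Watch the services; press Ctrl-C once EXTERNAL-IP shows a value
$ kubectl get services --namespace=activiti7 -w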

 

Clone the Activiti 7 Helm Charts source code

We will need to make a few changes to the Helm Chart for the Full Example, so let's clone the source code as follows:

 

$ git clone https://github.com/Activiti/activiti-cloud-charts

Cloning into 'activiti-cloud-charts'...

 

This clones the Helm Chart source code for all Activiti 7 examples.

 

Configure the Full Example

The next step is to parameterize the Full Example deployment. The Helm Chart can be customized to turn on and off different features in the Full Example, but there is one mandatory parameter that needs to be provided, which is the external IP address for the Ingress Controller that is going to be used by this installation.

 

When we are running locally we need to find out the current IP address. We cannot use localhost (127.0.0.1) as then containers cannot talk to each other, such as the Runtime Bundle container talking to the Keycloak container. So, for example, find the IP in the following way:

 

$ hostname

MBP512-MBERGLJUNG-0917.local

MBP512-MBERGLJUNG-0917:activiti-cloud-full-example mbergljung$ ping MBP512-MBERGLJUNG-0917.local

PING mbp512-mbergljung-0917.local (10.244.50.42): 56 data bytes

64 bytes from 10.244.50.42: icmp_seq=0 ttl=64 time=0.068 ms

Make sure this is not a bridge IP address. Look for something like en0 when doing ifconfig.
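On macOS a quick way to check the address bound to the en0 interface is shown below; your interface name and IP address will of course differ:

# Show the IPv4 address bound to en0
$ ifconfig en0 | grep "inet "

  inet 10.244.50.42 netmask 0xffffff00 broadcast 10.244.50.255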

The custom configuration is done in the values.yaml file located here: https://github.com/Activiti/activiti-cloud-charts/blob/master/activiti-cloud-full-example/values.yaml (you can copy this file or change it directly). We can update this file in the Helm Chart source code that we cloned previously.

 

Open up the activiti-cloud-charts/activiti-cloud-full-example/values.yaml file and replace the string REPLACEME with 10.244.50.42.nip.io, which is the external IP of the Ingress Controller. We are using nip.io as a DNS service to map our services to this external IP, using the format <EXTERNAL_IP>.nip.io.
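Instead of editing the file by hand, a one-liner like the following should also do the job (this is just a sketch using macOS/BSD sed syntax; on Linux drop the empty quotes after -i, and run it from the root of the cloned activiti-cloud-charts directory):

# Replace every occurrence of REPLACEME with the external IP plus .nip.io
$ sed -i '' 's/REPLACEME/10.244.50.42.nip.io/g' activiti-cloud-full-example/values.yaml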

Important! Make sure that you can ping 10.244.50.42.nip.io:

 

$ ping 10.244.50.42.nip.io

PING 10.244.50.42.nip.io (10.244.50.42): 56 data bytes

64 bytes from 10.244.50.42: icmp_seq=0 ttl=64 time=0.041 ms

64 bytes from 10.244.50.42: icmp_seq=1 ttl=64 time=0.083 ms

...

 

If this does not work then you might experience a DNS rebind protection problem with your router. See more info here.

As mentioned, the Activiti BPMN Modeler application needs to be enabled manually. So change the following value to true:

 

activiti-cloud-modeling:
  enabled: true

You should end up with a file looking something like this (note: I removed everything that was commented out):

 

global:
  keycloak:
    url: "http://activiti-keycloak.10.244.50.42.nip.io/auth"
  gateway:
    host: &gatewayhost "activiti-cloud-gateway.10.244.50.42.nip.io"

activiti-cloud-modeling:
  enabled: true

application:
  runtime-bundle:
    enabled: true
    image:
      pullPolicy: Always

  activiti-cloud-query:
    image:
      pullPolicy: Always

  activiti-cloud-connector:
    enabled: true
    image:
      pullPolicy: Always

  activiti-cloud-audit:
    image:
      pullPolicy: Always

infrastructure:
  activiti-keycloak:
    keycloak:
      enabled: true
      keycloak:
        ingress:
          enabled: true
          path: /
          proxyBufferSize: "16k"
          hosts:
            - "activiti-keycloak.10.244.50.42.nip.io"
          annotations:
            kubernetes.io/ingress.class: nginx
            nginx.ingress.kubernetes.io/rewrite-target: /
            nginx.ingress.kubernetes.io/configuration-snippet: |
              more_set_headers 'Access-Control-Allow-Methods: "POST, GET, OPTIONS, PUT, PATCH, DELETE"';
              more_set_headers 'Access-Control-Allow-Credentials: true';
              more_set_headers 'Access-Control-Allow-Headers: "DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,authorization"';
              more_set_headers 'Access-Control-Allow-Origin: $http_origin';
            nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
        preStartScript: |
          /opt/jboss/keycloak/bin/add-user.sh -u admin -p admin
          /opt/jboss/keycloak/bin/add-user-keycloak.sh -r master -u admin -p admin
          cp /realm/activiti-realm.json .
          sed -i 's/placeholder.com/*/g' activiti-realm.json

  activiti-cloud-gateway:
    ingress:
      enabled: true
      hostName: *gatewayhost
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/rewrite-target: /
        nginx.ingress.kubernetes.io/enable-cors: true
        nginx.ingress.kubernetes.io/cors-allow-headers: "*"
        nginx.ingress.kubernetes.io/x-forwarded-prefix: true

 

So what are we going to deploy with this configuration? The following picture illustrates this:

 

 

We can see here that everything is accessed via the Ingress controller at 10.244.50.42.nip.io. Then you just add the service name in front of that to get to the API Gateway and the Keycloak service.

 

Deploy the Full Example

Now that we have customized the configuration of the Full Example Helm Chart we are ready to deploy it. Before doing that it's always a good idea to update the local helm chart repo/cache, so we are not using an older version of the chart:

 

$ helm repo update

Hang tight while we grab the latest from your chart repositories...

...Skip local chart repository

...Successfully got an update from the "alfresco-incubator" chart repository

...Successfully got an update from the "activiti-cloud-charts" chart repository

...Successfully got an update from the "stable" chart repository

Update Complete. ⎈ Happy Helming!⎈

 

Now, make sure you are in the local directory where you made the changes to the values.yaml file, then do the installation as follows:

 

activiti-cloud-charts mbergljung$ cd activiti-cloud-full-example/

activiti-cloud-full-example mbergljung$ helm install -f values.yaml activiti-cloud-charts/activiti-cloud-full-example --namespace=activiti7

NAME:   ignorant-bunny

LAST DEPLOYED: Mon Nov 19 09:41:30 2018

NAMESPACE: activiti7

STATUS: DEPLOYED

 

RESOURCES:

==> v1/ConfigMap

NAME                       DATA AGE

ignorant-bunny-rabbitmq    2 2s

ignorant-bunny-keycl       2 2s

ignorant-bunny-keycl-test  1 2s

 

==> v1beta1/Role

NAME                     AGE

ignorant-bunny-rabbitmq  2s

 

==> v1/Service

NAME                               TYPE CLUSTER-IP EXTERNAL-IP  PORT(S)         AGE

activiti-cloud-modeling-backend    ClusterIP 10.102.61.115 <none>     80/TCP         2s

activiti-cloud-modeling            ClusterIP 10.99.1.27 <none>     80/TCP         2s

audit                              ClusterIP 10.99.4.236 <none>     80/TCP         1s

example-cloud-connector            ClusterIP 10.111.163.56 <none>     80/TCP         1s

query                              ClusterIP 10.108.106.152 <none>     80/TCP         1s

ignorant-bunny-rabbitmq-discovery  ClusterIP None <none>     15672/TCP,5672/TCP,4369/TCP,61613/TCP,61614/TCP  1s

ignorant-bunny-rabbitmq            ClusterIP None <none>     15672/TCP,5672/TCP,4369/TCP,61613/TCP,61614/TCP  1s

rb-my-app                          ClusterIP 10.108.116.9 <none>     80/TCP         1s

activiti-cloud-gateway             ClusterIP 10.104.143.117 <none>     80/TCP         1s

ignorant-bunny-keycl-headless      ClusterIP None <none>     80/TCP         1s

ignorant-bunny-keycl-http          ClusterIP 10.96.152.246 <none>     80/TCP         1s

 

==> v1beta1/Ingress

NAME                                   HOSTS     ADDRESS PORTS AGE

ignorant-bunny-activiti-cloud-gateway  activiti-cloud-gateway.10.244.50.42.nip.io  80 1s

ignorant-bunny-keycl                   activiti-keycloak.10.244.50.42.nip.io     80 1s

 

==> v1beta1/Deployment

NAME                                     DESIRED CURRENT UP-TO-DATE AVAILABLE AGE

ignorant-bunny-activiti-cloud-modeling   1 1 1 0 1s

ignorant-bunny-activiti-cloud-audit      1 1 1 0 1s

ignorant-bunny-activiti-cloud-connector  1 1 1 0 1s

ignorant-bunny-activiti-cloud-query      1 1 1 0 1s

ignorant-bunny-runtime-bundle            1 1 1 0 1s

ignorant-bunny-activiti-cloud-gateway    1 1 1 0 1s

 

==> v1beta1/StatefulSet

NAME                     DESIRED CURRENT AGE

ignorant-bunny-rabbitmq  1 1 1s

ignorant-bunny-keycl     1 1 1s

 

==> v1/Pod(related)

NAME                                                     READY STATUS RESTARTS AGE

ignorant-bunny-activiti-cloud-modeling-6f6fb4448c-cpjzf  0/2 ContainerCreating 0 1s

ignorant-bunny-activiti-cloud-audit-db5665849-wrmbc      0/1 ContainerCreating 0 1s

ignorant-bunny-activiti-cloud-connector-8fdc5f57d-7x6n9  0/1 ContainerCreating 0 1s

ignorant-bunny-activiti-cloud-query-f586d6cb-7dc2k       0/1 ContainerCreating 0 1s

ignorant-bunny-runtime-bundle-7bd664f89f-gf276           0/1 Pending 0 1s

ignorant-bunny-activiti-cloud-gateway-78f99df5f7-fv5zw   0/1 Pending 0 1s

 

==> v1/Secret

NAME                         TYPE DATA AGE

ignorant-bunny-rabbitmq      Opaque 2 2s

ignorant-bunny-keycl-db      Opaque 1 2s

ignorant-bunny-keycl-http    Opaque 1 2s

ignorant-bunny-realm-secret  Opaque 1 2s

 

==> v1/ServiceAccount

NAME                     SECRETS AGE

ignorant-bunny-rabbitmq  1 2s

activiti-cloud-gateway   1 2s

 

==> v1/Role

NAME                    AGE

activiti-cloud-gateway  2s

 

==> v1beta1/RoleBinding

NAME                     AGE

ignorant-bunny-rabbitmq  2s

 

==> v1/RoleBinding

activiti-cloud-gateway  2s

 

This installs the full example based on the remote activiti-cloud-charts/activiti-cloud-full-example Helm chart from the Activiti 7 Helm chart repo with the custom configuration from the local values.yaml file.
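If you prefer a predictable release name over a generated one like ignorant-bunny, you can pass --name to helm install (the name activiti7-full-example below is just an example):

# Same installation, but with an explicit release name
$ helm install --name activiti7-full-example -f values.yaml activiti-cloud-charts/activiti-cloud-full-example --namespace=activiti7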

 

Now, wait for all the services to be up and running. Check the pods as follows:

 

$ kubectl get pods --namespace=activiti7

NAME                                                         READY STATUS RESTARTS AGE

ignorant-bunny-activiti-cloud-audit-db5665849-wrmbc          0/1 Running 0 43s

ignorant-bunny-activiti-cloud-connector-8fdc5f57d-7x6n9      0/1 ContainerCreating 0 43s

ignorant-bunny-activiti-cloud-gateway-78f99df5f7-fv5zw       0/1 ContainerCreating 0 43s

ignorant-bunny-activiti-cloud-modeling-6f6fb4448c-cpjzf      0/2 ContainerCreating 0 43s

ignorant-bunny-activiti-cloud-query-f586d6cb-7dc2k           0/1 Running 0 43s

ignorant-bunny-keycl-0                                       0/1 Running 0 43s

ignorant-bunny-rabbitmq-0                                    0/1 Running 0 43s

ignorant-bunny-runtime-bundle-7bd664f89f-gf276               0/1 ContainerCreating 0 43s

intent-deer-nginx-ingress-controller-9b5fb9b4f-4985v         1/1 Running 0 1h

intent-deer-nginx-ingress-default-backend-6f7884d594-85dph   1/1 Running 0 1h

 

Pay attention to the READY column; it should show 1/1 for all the pods before we can proceed. If some pods are not starting it can be useful to look at them in the Kubernetes Dashboard. Select the activiti7 namespace and then click on Pods:

 

 

In this case we can see that there is not enough memory to schedule all the pods. If you see an insufficient memory error then you need to increase the available memory for Kubernetes running in Docker for Desktop (Preferences... | Advanced | Memory). You will also see a lot of readiness probe failed... errors. They will eventually disappear, but it can take 5-10 minutes.
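If a pod stays in Pending or keeps restarting, kubectl describe and kubectl logs are useful from the command line as well (the pod names below are from my deployment; substitute your own):

# Show events and scheduling information for a pod
$ kubectl describe pod ignorant-bunny-runtime-bundle-7bd664f89f-gf276 --namespace=activiti7

# Follow the logs of a container that is starting slowly
$ kubectl logs -f ignorant-bunny-keycl-0 --namespace=activiti7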

 

When everything starts successfully you should see the following after a while:

 

 

We can also check the status of the Kubernetes node by selecting Cluster | Nodes | docker-for-desktop:

 

 

It is important to notice that Helm created a release of our Chart. Because we haven't specified a name for this release, Helm chose a random name, in my case ignorant-bunny. This means that we can manage this release independently of other deployments that we do.

 

You can run helm ls to see all deployed applications:

 

$ helm ls --namespace=activiti7

NAME           REVISION UPDATED                  STATUS   CHART                             NAMESPACE

ignorant-bunny 1        Mon Nov 19 09:41:30 2018 DEPLOYED activiti-cloud-full-example-0.4.0 activiti7

intent-deer    1        Mon Nov 19 07:45:10 2018 DEPLOYED nginx-ingress-0.19.2              activiti7

 

To delete a release do helm delete <release-name>.
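Note that helm delete keeps the release history around so the name stays reserved; if you want to remove the release completely, add --purge:

# Delete the release and its history (release name from my deployment)
$ helm delete ignorant-bunny --purge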

 

Checking what Users and Groups are available

Before we start interacting with the deployed example application, let’s check out what users and groups we have available in Keycloak. These can be used in process definitions and when we interact with the application.

 

Access Keycloak Admin Console on the following URL: http://activiti-keycloak.10.244.50.42.nip.io/auth/admin/master/console  

 

Login with admin/admin.

 

You should see a home page such as:

 

 

From this page we can access users and groups under the Manage section in the lower left corner. But first, note that there is already a security realm set up called activiti. We will be using it in a bit. There is also an Activiti client set up, together with the following Roles:

 

 

The ACTIVITI_ADMIN and ACTIVITI_USER roles are important to understand because they tell Activiti 7 which parts of the core API the user can access. In the next section we will use the modeler user to log in to the Activiti Modeler; it is a member of the ACTIVITI_MODELER role.

 

Now, click on Users | View all users so we can see what type of users we have to work with when accessing the application:

 

 

So there are several custom users configured. We can use, for example, the testuser, as it has the ACTIVITI_USER role applied:

 

 

Now, let’s move on and have a look at how we can interact with the deployed Activiti 7 application. But first, let’s just verify we got the Activiti BPMN Modeler app running.

 

A quick look at the Activiti 7 Modeler

Let's have a quick look at the new Activiti 7 Modeler so we can make sure it has been deployed properly. You can access the BPMN Modeler App via the http://activiti-cloud-gateway.10.244.50.42.nip.io/activiti-cloud-modeling URL. Log in with modeler/password (this user has the ACTIVITI_MODELER role) and you should now see:

 

 

All good so far. Let's create a Process Application just to be sure it works all the way. Click Create new | Application and give it a name and description:

 

 

Cool, all works!

 

Quick test to verify environment setup

We can do a quick test with curl to verify that the Activiti 7 Full Example has been deployed correctly.

 

First, request an access token for testuser from Keycloak:

 

$ curl -d 'client_id=activiti' -d 'username=testuser' -d 'password=password' -d 'grant_type=password' 'http://activiti-keycloak.10.244.50.42.nip.io/auth/realms/activiti/protocol/openid-connect/token' | python -m json.tool

 % Total    % Received % Xferd  Average Speed Time    Time Time Current

                                Dload Upload Total Spent Left  Speed

100  3106 100  3032 100  74 400  9 0:00:08 0:00:07  0:00:01 691

{

   "access_token": "eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJoTUJPYnNyb1BJS2JJeTVOUGdnU0pPSGRDcmZFUWlva2otQUIwQjBaS3U0In0.eyJqdGkiOiJkMDU1ZmZlMy05MzNjLTQzNGUtYmExMC0yMjc5MzA3ODkwMTkiLCJleHAiOjE1MzYzMDM1NjUsIm5iZiI6MCwiaWF0IjoxNTM2MzAzMjY1LCJpc3MiOiJodHRwOi8vYWN0aXZpdGkta2V5Y2xvYWsuMTkyLjE2OC4xLjcxLm5pcC5pby9hdXRoL3JlYWxtcy9hY3Rpdml0aSIsImF1ZCI6ImFjdGl2aXRpIiwic3ViIjoiMzMxYTEyZDEtNzY2ZS00ODk3LWIwYTgtMzA5YWU1Y2FlYjI1IiwidHlwIjoiQmVhcmVyIiwiYXpwIjoiYWN0aXZpdGkiLCJhdXRoX3RpbWUiOjAsInNlc3Npb25fc3RhdGUiOiJlOWM0MDU2Ny04YTA4LTQ3MGEtOWZlNS0wNTgyNGM1YTczNjgiLCJhY3IiOiIxIiwiYWxsb3dlZC1vcmlnaW5zIjpbImh0dHA6Ly9ndy5qeC1zdGFnaW5nLmFjdGl2aXRpLmVudmFsZnJlc2NvLmNvbSIsImh0dHA6Ly9qeC1zdGFnaW5nLXF1aWNrc3RhcnQtaHR0cC5qeC1zdGFnaW5nLmFjdGl2aXRpLmVudmFsZnJlc2NvLmNvbSIsImh0dHA6Ly9hY3Rpdml0aS1jbG91ZC1kZW1vLXVpLmp4LXN0YWdpbmcuYWN0aXZpdGkuZW52YWxmcmVzY28uY29tIiwiaHR0cDovL2xvY2FsaG9zdDozMDAwIl0sInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsIkFDVElWSVRJX1VTRVIiLCJ1bWFfYXV0aG9yaXphdGlvbiJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoiZW1haWwgcHJvZmlsZSIsImVtYWlsX3ZlcmlmaWVkIjpmYWxzZSwibmFtZSI6InRlc3QgdXNlciIsInByZWZlcnJlZF91c2VybmFtZSI6InRlc3R1c2VyIiwiZ2l2ZW5fbmFtZSI6InRlc3QiLCJmYW1pbHlfbmFtZSI6InVzZXIiLCJlbWFpbCI6InRlc3R1c2VyQHRlc3QuY29tIn0.VTiYczKzW5StCVRcva-s9RbWROrzjHDAIGCYID6VERSC3R8lT86s3Qxux6chJDYKTCsNriN1xRQbQrPN3BPeTmB7EDlTFVtLHSA56sqp3ok9o9SoyDahcZ1JzWyKlQW3E63YVcofXv03sOx7yW70ZnIDEq6P5mb-frK_nA5jEoOG-Za6geTHj6z0yYBl0y3ropcT2NNhbofaaW8H6rQqiLHbEPddnds0QIoJvVXSbfUlW7MVlZjCfQS5RA4yg1OdkdWfBPBa3Tn1qM5wHBcZElWXmOYTi2B6bG1MSK7ufkca32S2fvB7vLhjBHzSxk2wzk8p_PUCeDdmTZTW_bTaxA",

   "expires_in": 300,

   "refresh_expires_in": 1800,

   "refresh_token": "eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJoTUJPYnNyb1BJS2JJeTVOUGdnU0pPSGRDcmZFUWlva2otQUIwQjBaS3U0In0.eyJqdGkiOiI1ZGFmNDJmNy0yZmNlLTRjOGEtOTYyMC1kNTA3NGQzZTkxYzciLCJleHAiOjE1MzYzMDUwNjUsIm5iZiI6MCwiaWF0IjoxNTM2MzAzMjY1LCJpc3MiOiJodHRwOi8vYWN0aXZpdGkta2V5Y2xvYWsuMTkyLjE2OC4xLjcxLm5pcC5pby9hdXRoL3JlYWxtcy9hY3Rpdml0aSIsImF1ZCI6ImFjdGl2aXRpIiwic3ViIjoiMzMxYTEyZDEtNzY2ZS00ODk3LWIwYTgtMzA5YWU1Y2FlYjI1IiwidHlwIjoiUmVmcmVzaCIsImF6cCI6ImFjdGl2aXRpIiwiYXV0aF90aW1lIjowLCJzZXNzaW9uX3N0YXRlIjoiZTljNDA1NjctOGEwOC00NzBhLTlmZTUtMDU4MjRjNWE3MzY4IiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwiQUNUSVZJVElfVVNFUiIsInVtYV9hdXRob3JpemF0aW9uIl19LCJyZXNvdXJjZV9hY2Nlc3MiOnsiYWNjb3VudCI6eyJyb2xlcyI6WyJtYW5hZ2UtYWNjb3VudCIsIm1hbmFnZS1hY2NvdW50LWxpbmtzIiwidmlldy1wcm9maWxlIl19fSwic2NvcGUiOiJlbWFpbCBwcm9maWxlIn0.hWiwSCILYBhRJO1-yTYu00zncDTMWVXrJuPPfJw6DpESe-UPr_ny5ffCzmUcql4PWl83VD0NXMIbAKxlJU3X1d7A8CMIfxJFdk_NVyKxB9TWrnUMZvTAT3Mf2y8Qj2UyEciQCpQBBkPyjJV1zS7R51i3dxIcDwDMLvaBBgNNSqnYKqagxMt2yGaC5fJkYt6hqOHuuSEMUTK2yL8kvc21Q-_AR8bdWy6zWD7geelYL3Wlhp83I_3I8-GZJ0iQRFsSOBeLZqhE7lBj7E4DxZKW6UEsTP77R49dFXPtryt4t5xWrDk1f14jIRe4-OWDbzE4mCNjwWTd07nOl5vxMhlrTg",

   "token_type": "bearer",

   "not-before-policy": 0,

   "session_state": "e9c40567-8a08-470a-9fe5-05824c5a7368",

   "scope": "email profile"

 

Now, use this access token to make a call to the Runtime Bundle for deployed process definitions:

 

$ curl http://activiti-cloud-gateway.10.244.50.42.nip.io/rb-my-app/v1/process-definitions -H "Authorization: bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJoTUJPYnNyb1BJS2JJeTVOUGdnU0pPSGRDcmZFUWlva2otQUIwQjBaS3U0In0.eyJqdGkiOiJkMDU1ZmZlMy05MzNjLTQzNGUtYmExMC0yMjc5MzA3ODkwMTkiLCJleHAiOjE1MzYzMDM1NjUsIm5iZiI6MCwiaWF0IjoxNTM2MzAzMjY1LCJpc3MiOiJodHRwOi8vYWN0aXZpdGkta2V5Y2xvYWsuMTkyLjE2OC4xLjcxLm5pcC5pby9hdXRoL3JlYWxtcy9hY3Rpdml0aSIsImF1ZCI6ImFjdGl2aXRpIiwic3ViIjoiMzMxYTEyZDEtNzY2ZS00ODk3LWIwYTgtMzA5YWU1Y2FlYjI1IiwidHlwIjoiQmVhcmVyIiwiYXpwIjoiYWN0aXZpdGkiLCJhdXRoX3RpbWUiOjAsInNlc3Npb25fc3RhdGUiOiJlOWM0MDU2Ny04YTA4LTQ3MGEtOWZlNS0wNTgyNGM1YTczNjgiLCJhY3IiOiIxIiwiYWxsb3dlZC1vcmlnaW5zIjpbImh0dHA6Ly9ndy5qeC1zdGFnaW5nLmFjdGl2aXRpLmVudmFsZnJlc2NvLmNvbSIsImh0dHA6Ly9qeC1zdGFnaW5nLXF1aWNrc3RhcnQtaHR0cC5qeC1zdGFnaW5nLmFjdGl2aXRpLmVudmFsZnJlc2NvLmNvbSIsImh0dHA6Ly9hY3Rpdml0aS1jbG91ZC1kZW1vLXVpLmp4LXN0YWdpbmcuYWN0aXZpdGkuZW52YWxmcmVzY28uY29tIiwiaHR0cDovL2xvY2FsaG9zdDozMDAwIl0sInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsIkFDVElWSVRJX1VTRVIiLCJ1bWFfYXV0aG9yaXphdGlvbiJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoiZW1haWwgcHJvZmlsZSIsImVtYWlsX3ZlcmlmaWVkIjpmYWxzZSwibmFtZSI6InRlc3QgdXNlciIsInByZWZlcnJlZF91c2VybmFtZSI6InRlc3R1c2VyIiwiZ2l2ZW5fbmFtZSI6InRlc3QiLCJmYW1pbHlfbmFtZSI6InVzZXIiLCJlbWFpbCI6InRlc3R1c2VyQHRlc3QuY29tIn0.VTiYczKzW5StCVRcva-s9RbWROrzjHDAIGCYID6VERSC3R8lT86s3Qxux6chJDYKTCsNriN1xRQbQrPN3BPeTmB7EDlTFVtLHSA56sqp3ok9o9SoyDahcZ1JzWyKlQW3E63YVcofXv03sOx7yW70ZnIDEq6P5mb-frK_nA5jEoOG-Za6geTHj6z0yYBl0y3ropcT2NNhbofaaW8H6rQqiLHbEPddnds0QIoJvVXSbfUlW7MVlZjCfQS5RA4yg1OdkdWfBPBa3Tn1qM5wHBcZElWXmOYTi2B6bG1MSK7ufkca32S2fvB7vLhjBHzSxk2wzk8p_PUCeDdmTZTW_bTaxA" | python -m json.tool

 % Total    % Received % Xferd  Average Speed Time    Time Time Current

                                Dload Upload Total Spent Left  Speed

100  7123   0 7123   0 0 856      0 --:--:-- 0:00:08 --:--:--  1535

{

   "_embedded": {

       "cloudProcessDefinitionImpls": [

           {