
New Activiti.org website!

Posted by rallegre Employee May 22, 2018

We are happy to announce the new website (www.activiti.org) for Activiti is now live.

 

This new version has an updated look and feel that reflects our new cloud native approach to Business Automation. It is now hosted on GitHub so we can improve it iteratively, following an open community approach! The source code is available at https://github.com/Activiti/activiti.github.io. You can send your PRs with suggestions and changes.

 

We want this new website to be the one place for all community members to learn about Activiti, get the latest news, share highlights and award top contributors.

 

This is just the first step, and very soon there will be more pages. Stay tuned!


We value your feedback, please share it with us on Gitter.

Last week we worked hard on moving our services forward and making sure that they can be deployed to both an AWS Kubernetes cluster and a GCE Kubernetes cluster. This will enable us to validate our examples and blueprints in a more consistent way, release after release.

We are looking into plugging our Acceptance Tests into our pipelines to start covering more complex scenarios.

 

@MiguelRuizDev worked on improving the Task Service Incubator project and added a UI using Angular 6.

@mteodori updated to Spring Boot 2.0.2 (#1894) and Spring Cloud RC1 (#1895), and fixed related false positives for vulnerabilities.

@igdianov defined the next steps for the GraphQL integration and module dependencies to make it more generic.

@daisuke-yoshimoto is working on the Cloud BPMN signal mechanism examples and acceptance tests.

@constantin-ciobotaru worked on the organization controllers.

@pancudaniel7 looked further into the concepts behind the Activiti Cloud Infrastructure and reviewed the Activiti Cloud GitBook.

@lucianoprea used the Activiti Cloud Infrastructure inside a template project along with logging and tracing.

@ryandawsonuk made it possible to run the apps service in Kubernetes by putting it in a new example app; in the docker-compose example it is embedded in the registry, but both register on the gateway with the same route so that the external API is the same. Switched the Activiti Keycloak integration module to using autoconfiguration. Reviewed and merged changes from @igdianov to make GraphQL notifications with websockets available through the gateway.

@erdemedeiros made sure that the runtime bundle relies on the new Java API: covered part of the process instance and process definition REST APIs.

@salaboy worked on getting the new Spring Cloud Gateway up and running with the Spring Cloud Kubernetes Discovery Client for dynamic route discovery.

 

This week we look forward to migrating our quickstart to follow our new BluePrint structure. Also, Daisuke (@daisuke-yoshimoto) and myself (@salaboy) will be presenting at the JJUG CCC Spring 2018 Conference in Tokyo, Japan. If you are around Tokyo, feel free to drop me a message via Twitter and we can catch up while I’m around.

Get in touch if you want to contribute: https://gitter.im/Activiti/Activiti7

We look forward to mentoring people on the technologies that we are using so they can submit their first Pull Request ;)

 

 

 

Hey there, in this blog post I wanted to cover using Activiti Cloud with Jenkins X. As mentioned in previous blog posts, we want to make sure that Activiti Cloud provides the right tools for you to build, deploy and scale new Activiti Cloud Applications following standard practices around the technologies required to deploy to public clouds that use Kubernetes.

Having automated pipelines which can:

  1. Build your domain specific project’s source code (Jar)
  2. Publish to Nexus (or Artifactory)
  3. Create a Docker Image
  4. Publish it to a docker registry
  5. Create a set of descriptors (kubernetes and helm charts) following industry best practices
  6. Publish these descriptors into a helm chart repository (Chart Museum)
  7. Promote/provision these projects into an environment (namespace in a running kubernetes cluster)
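Spelled out as manual commands, the steps above look roughly like the following sketch. This is illustrative only; the project name (`my-service`), registry namespace (`myorg`), chart repository (`myrepo`) and ChartMuseum URL are hypothetical placeholders, not names from the actual Blueprint:

```shell
# Illustrative manual equivalent of the pipeline steps above (placeholder names).
mvn clean deploy                                    # 1-2: build the jar and publish it to Nexus/Artifactory
docker build -t myorg/my-service:0.0.1 .            # 3: create a Docker image
docker push myorg/my-service:0.0.1                  # 4: publish it to a Docker registry
helm package charts/my-service                      # 5: package the Kubernetes/Helm descriptors
curl --data-binary "@my-service-0.0.1.tgz" \
  http://chartmuseum.example.com/api/charts         # 6: publish the chart to ChartMuseum
helm upgrade --install my-service myrepo/my-service \
  --namespace staging                               # 7: promote/provision into an environment
```

Jenkins X automates exactly this sequence for every commit, which is what makes building it in-house unnecessary.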

That sounds way too complicated to build in-house, and at the same time it sounds like a must if you really want to provide Continuous Integration and Continuous Deployment (CI/CD) in a serious way. Luckily for us, Jenkins X provides all of this and much more. It also validates the approach that we have adopted in Activiti Cloud, where we build each individual piece in isolation and each piece is designed to behave and operate in a cloud native way, pushing forward our main objective of providing an industry-standard, modern platform to build business applications in a fast and iterative way.

In order to test Jenkins X capabilities I’ve migrated our Activiti Cloud Blueprint to a set of individual repositories so we can benefit from Jenkins X conventions.

Getting Started / Installing JX

Because I was just starting with Jenkins X, I basically needed to create a Kubernetes cluster and install JX. This is quite an easy procedure if you have access to an AWS account to play with. The Jenkins X CLI will use Kops to create and set up the cluster and install all the Jenkins X services in a Kubernetes namespace called “jx”.

So first of all, download the JX CLI. I did that using Homebrew because I'm on Mac OS X:

brew tap jenkins-x/jx

brew install jx

Note: I evaluated the option of using Minikube, but this is not even recommended in their docs. You can make it work, but it is unrealistic to try to run all the jx services plus all your own services locally.

The first step was to get hold of some AWS credentials (aka aws_access_key_id and aws_secret_access_key). Once I got those, I created a file called credentials with those values inside ~/.aws.
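For reference, a minimal credentials file can be created like this; the key values below are placeholders, so substitute your own:

```shell
# Create the AWS credentials file that the AWS CLI and Kops will read.
# The key values below are placeholders, not real credentials.
mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF
```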

If you take a look at the JX docs, you will find that they recommend going with the configurations provided in a Cloud9 workshop (AWS's cloud IDE). I didn't want to go in that direction for now, so I decided to configure the environment to just install JX.

 At this point I executed:

jx create cluster aws -n=activiti-cloud

This downloaded the AWS SDK to my local environment, but the installation failed because I needed to set some things first. One important thing to know is that Kops will use an S3 bucket to store the cluster state, so you need to create that bucket first. You can do this with a few very simple commands as soon as you have the AWS SDK installed with the right credentials:

export S3_BUCKET=kops-state-store-$(cat /dev/urandom | LC_ALL=C tr -dc "[:alpha:]" | tr '[:upper:]' '[:lower:]' | head -c 32)

export KOPS_STATE_STORE=s3://${S3_BUCKET}

aws s3 mb $KOPS_STATE_STORE

aws s3api put-bucket-versioning --bucket $S3_BUCKET --versioning-configuration Status=Enabled

These commands do three things:

  • Create an S3 bucket named kops-state-store-<random>, with a random ID at the end of the name
  • Export a variable called KOPS_STATE_STORE to be used by Kops
  • Enable versioning on the bucket

Next you need to select which availability zone you want to use to set up your cluster; you should obviously choose the one closest to your location, which you can check in the AWS Web Console. I then used the following line to export the Availability Zones available to my account into an environment variable:

export AWS_AVAILABILITY_ZONES="$(aws ec2 describe-availability-zones --query 'AvailabilityZones[].ZoneName' --output text | awk -v OFS="," '$1=$1')"

After this, you should be able to execute again:

jx create cluster aws -n=activiti-cloud

While this is running, you can check in the EC2 section of the AWS Web Console how your cluster is being created and how your nodes are being restarted as part of the setup.

Once this is done, congrats: your cluster is up, it has Jenkins X inside, and your CLI is configured to work with it.

Some important things happen during the installation:

  • It installs the AWS SDK, Helm and Kops
  • It configures a token to work with your GitHub account
  • It creates two repositories in your GitHub account to configure the staging and production environments. Jenkins X treats these environments following the principle of infrastructure as code (GitOps), so these repositories will contain everything you need to recreate, roll back or upgrade these environments from scratch
  • By default, Jenkins X installs the following services inside the “jx” namespace (also referred to as the dev environment):
    • Jenkins
    • Nexus
    • Docker Registry
    • ChartMuseum and its UI
  • It creates two more namespaces called jx-staging and jx-production, and sets up all the webhooks needed to monitor these repositories for changes and trigger the corresponding pipelines to apply changes to these environments
  • It uses nip.io to bind the AWS external URL to <service>.jenkins.jx.<IP>.nip.io

At the end of the installation it prints out the password for the admin user. Make sure that you copy it and keep it safe somewhere.

Troubleshooting

Some things that might come in handy to know:

  • You can of course delete the cluster and start again if things go wrong; to do that you can use kops:

kops delete cluster <name> --state=s3://kops-state-store-<random> --yes

Notice that this is really time consuming, and there is no need to delete the cluster unless you want to change something in the cluster itself.

  • If you want to completely uninstall Jenkins X you can run:

helm delete jenkins-x --debug --no-hooks --purge

  • The jx uninstall command does something similar, but it leaves the jx-staging and jx-production environments intact; it only uninstalls services from the jx namespace.

 

Activiti Cloud Blueprint Demo

As mentioned in the introduction, the source code of each of the different services inside our Trending Topics Campaigns BluePrint has now been refactored out into individual repositories prefixed with ttc-* inside the Activiti organization on GitHub (http://github.com/Activiti).


A quick recap of our BluePrint: Trending Topic Campaigns:


As you might remember (link), we had a global marketing company creating campaigns based on Trending Topics. They need to be quick and react fast to the current trending topics on different social media streams in order to promote their customers’ products. Different departments all over the world might be creating campaigns at the same time, and not only based on the Twitter feed: other social media feeds can be plugged into the infrastructure, so we need to make sure that our solution scales to support those requirements.

The following video shows how you can import these projects into Jenkins X and how to customise the Staging Environment to provide some of the infrastructure services required for the services to interact with each other, such as RabbitMQ, and PostgreSQL to store the Campaign’s Process Runtime state.

One configuration step that is required, until Activiti Cloud artifacts can be consumed from Maven Central, is to add the Alfresco Snapshot Repository to the Jenkins X Nexus. In order to do that:

  • Run jx get urls (in the dev environment; switch to it with jx env dev)
  • Open the Nexus URL in your browser
  • Log in with admin and your password, generated by Jenkins X during installation
  • Go to repositories
  • Create a new maven2 proxy
  • Go back to the repository list and edit the one called maven group
  • Make sure that the Activiti repository belongs to the member repositories (meaning it needs to be in the right-hand column; by default, a newly created repository is not inside the group).

 

 

 

 

 

Conclusions

I’m quite excited to see how easy it is becoming to get an Activiti Cloud Application running in a cloud native way on top of Kubernetes, in a real AWS or GCE cluster. Even if it is still quite early in the journey, we can clearly see all the advantages of having these pipelines set up and managing how changes are applied to each of our services. It also makes it dead simple to add new campaigns, so we look forward to having this application running in a continuous deployment approach to test our Beta and future releases.

If you are interested in these projects and want to participate in our community, get in touch. We look forward to improving this BluePrint to exemplify the most common patterns that you will find in real-life scenarios, while we keep improving what Activiti Cloud can do to make business automation modern and as easy as possible.

Feel free to join us in our Gitter channel.

Last week we kept pushing to get the initial version of the Applications Service into our Activiti Cloud Example. We currently have a new repository containing the core logic for this service, and we look forward to testing the initial version along with the other components this week.

 

We are also making progress on the Process & Task Runtime API side, and now we are going through the exercise of updating our services to use this new API layer. This will improve maintenance and enable a future backward compatibility path.

 

We have also migrated the Blueprint scenario (Trending Topics Campaign) to use Jenkins X so it can be deployed to public clouds such as AWS, AKS and GCE. A blog post is coming soon after this weekly update. We also presented our lessons learned at GeeCon '18 in Krakow, Poland.

 

@MiguelRuizDev worked on the display of pageable elements in the Angular UI app.

@igdianov finished polishing up the GraphQL Subscriptions PoC, and now @ryandawsonuk is working on merging those changes into our develop branch.

@daisuke-yoshimoto is working on acceptance tests for the Signal Throw Event Cloud Behavior.

@constantin-ciobotaru worked on adding controllers to the organization service with support for both HAL and Alfresco output, replaced the REST template with a Feign client in the organization service so as not to force APS to import the REST template dependency, and worked on the modeling REST design in APS.

@lucianoprea did some investigation on #1893.

@pancudaniel7 read the basics of Keycloak and deployed Keycloak in Kubernetes.

@ryandawsonuk updated the Activiti app service to use Kubernetes metadata and reconcile this with an apps descriptor in a ConfigMap. Also attended the London AWS Summit.

@erdemedeiros started using the new Java APIs inside the runtime bundle: this will help us to evaluate what is missing there. Attended the AWS Summit.

@salaboy presented at GeeCon 2018 in Krakow, Poland. You can find the slides and some pictures here: https://salaboy.com/2018/05/11/geecon-18-lessons-learned-spring-cloud-in-activiti-oss/

 

We keep getting messages from community users who want to try out our new infrastructure and services. We recommend checking out our Activiti Cloud Examples GitHub repository: https://github.com/activiti/activiti-cloud-examples and our GitBook’s Quickstart & Overview section: https://activiti.gitbooks.io/activiti-7-developers-guide/content/getting-started/quickstart.html

 

Get in touch if you want to contribute: https://gitter.im/Activiti/Activiti7

We look forward to mentoring people on the technologies that we are using so they can submit their first Pull Request ;)

I had a great time in Krakow, Poland at GeeCon '18, the 10th edition of this highly recognised conference focused on Java. In general the conference was amazing: excellent venue, great weather, beautiful city, great talks and an amazing Java community. You cannot ask for anything else.

 

 

Slides

My presentation was about what we have learnt in the past 9 months of heavy work around Activiti & Activiti Cloud. I covered a brief summary of all the major steps in transforming a big monolith into a set of reusable cloud native building blocks. A lot of the emphasis in the presentation was on the core challenges when targeting Kubernetes from a Java developer's perspective. I closed the presentation with a quick demo around Spring Cloud Kubernetes and Jenkins X to demonstrate that by collaborating and engaging with other communities we can make sure that we do not reinvent the wheel, and adopt well-known good practices.

 

Slides here

 

I will update this blog post with more pics and the video of the session as soon as it is published. 

Next stop: the JJUG CCC Spring 2018 Conference in Japan!

Last week we focused on closing Pull Requests that were waiting for review, and pushed the next iteration of the Process & Task Runtime APIs plus the Application Service integration with other components in the infrastructure. We also did a spike to build our Blueprint using Jenkins X on AWS, which helped us split our project into more maintainable units to enable our users to have continuous deployment pipelines.

 

@igdianov polished configurations, added unit tests and did some refactoring.

@daisuke-yoshimoto is working on acceptance tests for the Signal Throw Event Cloud Behavior.

@constantin-ciobotaru worked on fixing empty collection deserialization in the HAL output for AlfrescoPagedResourceAssembler, and worked on the Modeling Service to support different backend storage.

@lucianoprea is off on vacation.

@pancudaniel7 added the configuration setup to the microservice template project.

@ryandawsonuk updated the Activiti app service incubator project to report on app status based on Eureka metadata. Added autoconfiguration for the app service and created PRs to embed it in the registry example component.

@erdemedeiros worked on the new Process Runtime Java APIs; split process- and task-related objects into different Maven modules. Created an auto-configuration to load the default service task behavior.

@salaboy worked on splitting the Trending Topic Campaigns BluePrint into separate repositories, which are now built by Jenkins X. This demonstrates the flexibility of the Activiti Cloud configuration and the usage of standard tools such as Helm and Kubernetes for continuous deployment pipelines.

 

There are still two PRs under review, from @igdianov and @daisuke-yoshimoto, which will probably be merged next week.

We keep getting messages from community users who want to try out our new infrastructure and services. We recommend checking out our Activiti Cloud Examples GitHub repository: https://github.com/activiti/activiti-cloud-examples and our GitBook’s Quickstart & Overview section: https://activiti.gitbooks.io/activiti-7-developers-guide/content/getting-started/quickstart.html

 

Get in touch if you want to contribute: https://gitter.im/Activiti/Activiti7

We look forward to mentoring people on the technologies that we are using so they can submit their first Pull Request ;)

This is a basic guide on how to deploy the Process Workspace in Tomcat.

 

Step 1


Download the latest version of Process Workspace from Nexus APW 1.1.0 based on adf 2.3.0

 

Step 2


Once the zip is downloaded, unzip the file.

 

Step 3

Open the unzipped folder in your preferred IDE. You should be able to see the Process Workspace app structure as in the images below:

  • src folder (source code) 
    Alfresco Process Workspace Sourcecode
  • dist folder (distribution)
    Alfresco Process Workspace dist folder

 

Step 4

Now you are ready to deploy the Process Workspace app in Tomcat.
Rename the dist folder with the name you want to see in the URL of your server (i.e. process-workspace) and paste the renamed folder under the /webapps folder in Tomcat, as in the image below:

 

Deploy Process Workspace in tomcat webapp folder
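The whole sequence can also be scripted. This is a sketch under assumptions: the zip filename and the use of CATALINA_HOME for the Tomcat installation path are placeholders, so adjust them to your actual download and setup:

```shell
# Deploy Process Workspace to Tomcat; filenames and paths are illustrative.
unzip alfresco-process-workspace-app.zip -d apw           # step 2: unzip the download
cp -r apw/dist "$CATALINA_HOME/webapps/process-workspace" # step 4: rename dist and copy into webapps
"$CATALINA_HOME/bin/startup.sh"                           # step 5: start Tomcat
# step 6: open http://localhost:8080/process-workspace in your browser
```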

Step 5

Start your Tomcat server.

 

Step 6

Open your browser and visit the URL localhost:8080/process-workspace. The app is running:

 

Alfresco Process Workspace up and running

 

If you have more questions, please reply here or contact us using Gitter.

After 44 weeks of heavy work we are getting closer and closer to our first Beta1 release. We are actively working on getting the first iteration of our building blocks into shape to cover a set of common scenarios for cloud deployments. After Beta1 we will focus on making sure that these components are polished enough for a large set of use cases. There is still a lot of work to do around our two last RFCs (links below), which we want to include in our Beta1 release.

As usual, you can keep track of our public Roadmap here: https://github.com/Activiti/Activiti/wiki/Activiti-7-Roadmap

and our current milestone here:

https://github.com/activiti/activiti/issues?q=is%3Aopen+is%3Aissue+milestone%3ABeta1

 

All our building blocks are designed from scratch to work seamlessly with the following technology stacks:

  • Spring Boot
  • Spring Cloud
  • Docker
  • Kubernetes

And we are aiming to run in all these cloud platforms:


For Beta1, the following building blocks are going to be provided:

  • Runtime Bundle template / Spring Boot Starter
  • Query Service template  / Spring Boot Starter
    • JPA reference implementation
  • Audit Service template / Spring Boot Starter
    • JPA reference implementation - MongoDB alternative
  • Cloud Connectors template / Spring Boot Starter  RFC: Activiti Cloud Connectors 
  • Application Service template / Spring Boot Starter RFC: Activiti Cloud Application Service
    • Service Registry integration
    • Config Server integration
    • Gateway integration for Routing

On top of these building blocks, a set of examples will be provided to demonstrate how these services can be customised and deployed to Kubernetes clusters using Helm charts.
The Activiti Cloud Blueprint will be updated to follow good practices around Helm deployments and to demonstrate the use of pipelines and a continuous deployment approach.
Acceptance Tests are provided as well, but these might come after we release Beta1.
Based on our backlog and planned conferences (GeeCon 2018 in Krakow and JJUG CCC Spring 2018 in Tokyo), we estimate that we will use May to polish and finish these building blocks. We want to make sure that our Beta1 is functional enough to deliver value to community users that are looking for a solid cloud native approach.
As we stand today, these are the proposed dates for the following milestones/releases:

Beta1 - Late May 2018 - In Progress

  • Process Definition based security unification
  • Cloud Native BPMN Signal and Message Behaviours
  • New Process & Task Runtime Java APIs
  • Application Service
  • Refinements and consistency improvements
  • BluePrint repository split improvements to use query & audit
  • Notifications PoC with GraphQL integration at the gateway layer
  • Jenkins X (jenkinsx.io) support and alignment

Beta2 - Late June 2018

Final - July 2018

  • BluePrint Trending Topics Campaigns User Interface and Dashboards
  • Security & SSO review (review Apache Shiro Wildcard Permissions)
  • JHipster integration review and articles

In this short post, I’ll outline the steps to launch Alfresco Process Services (APS) version 1.8.1 using the updated Amazon Machine Image (AMI) available on the AWS Marketplace.

 

 

Step 0: Register for a 30-day trial license

If you don't have a license to run Process Services, you can get a 30-day trial license by visiting https://www.alfresco.com/platform/process-services-bpm/trial/aws.

 

Step 1: Select the AMI from AWS Marketplace

From the AWS EC2 console in your preferred region, launch a new instance based on the AMI shared on the AWS Marketplace.

 

 

Step 2: Choose an instance type

By default, we recommend using the m4.large instance type. Nevertheless, you can use a smaller instance such as t2.medium.

Step 3: Review and launch

Apart from the instance type, you can simply leave the default settings and click the Review and Launch button. After clicking the Launch button, the EC2 instance will be in the ‘initializing’ state for no longer than 2 to 3 minutes before Alfresco Process Services becomes accessible, denoted by the EC2 instance changing to the ‘running’ state.

 

To secure and control inbound traffic to your instance, you need to edit the existing inbound rules. To limit the HTTP inbound traffic to your IP for example, add the following custom rule:

  • Type: custom TCP
  • Protocol: TCP
  • Port Range: 8080
  • Source: My IP
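If you prefer the CLI to the console, the same rule can be added with a command along these lines. The security group ID is a placeholder you would replace with your instance's actual group, and `checkip.amazonaws.com` is just one way to discover your public IP:

```shell
# Hypothetical sketch: restrict inbound port 8080 to your own IP.
MY_IP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr "${MY_IP}/32"
```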

Step 4: Logging in to Alfresco Process Services

 

Once the instance state is ‘running’ and all status checks passed, open a browser page using the URL: http://{public_dns_here}:8080/activiti-app.

Login page 

On the login page, use the credentials:

 

If you need a valid product license, it takes less than 2 minutes to get a 30-day trial license by filling out the form at https://www.alfresco.com/platform/process-services-bpm/trial/aws

Step 5: Try out the new Process Workspace

The Process Workspace is a new out-of-the-box user interface for end users to view, act and collaborate on tasks and processes. Read the APS 1.8 blog post for more details. The Process Workspace UI is available at http://{public_dns_here}:8080/activiti-app/workspace/.

 

Step 6: Logging in to APS Admin console (optional)

Open a browser page using the URL: http://{public_dns_here}:8080/activiti-admin. On the login page, use the default credentials: admin/admin.

 

Once logged in to the Admin console,  it is always recommended to update the existing password to something much more secure such as a combination of four words. Check out this post for details about securing passwords.

 

Open the Configuration menu to connect the Admin Console to your running APS application. Edit the REST endpoint settings and apply the corresponding values as shown below.

 

Save and click on Check REST endpoint. You will get a confirmation message with the endpoint status and engine version.

Bravo! You have successfully launched APS using the AMI published by Alfresco.

Step 7: Start building your first process apps 

You now have a running Alfresco Process Services environment that you can use to explore the Hello World getting started tutorial and try out some of our more advanced community examples.

Last week we worked on further refinements to our core services. Events emitted by our services now include the Activiti Cloud application name, service type and version. Our new Runtime APIs now cover event listeners and connectors, and we are getting close to starting to move these initiatives from the incubator to their own repositories.

 

@MiguelRuizDev started tinkering with Angular in order to build a basic UI to try out the TaskManagerService.

@igdianov is finishing acceptance tests for GraphQL and the subscriptions mechanism.

@daisuke-yoshimoto is working on acceptance tests for the Signal Throw Event Cloud Behavior.

@constantin-ciobotaru is creating new controllers for the Modelling Service to support different backends for storage.

@lucianoprea worked on some tickets related to Tasks (#1831, #1814).

@pancudaniel7 started looking into Activiti 7 by checking out the activiti-cloud-examples.

@balsarori reviewed PRs and discussions around the Service Task Binding behavior in the engine.

@ryandawsonuk created PRs to add application metadata (service name, version, type, parent application) to the events emitted by the runtime bundle and have these consumed by the other components. Included in the PRs are changes to the security policies for these services so that the service name can be specified in long or short format.

@erdemedeiros worked on the new Process Runtime Java APIs. Polished event listeners; added the first iteration of Java connectors (work in progress).

@salaboy worked on splitting the Activiti Cloud Blueprint into separate repositories to leverage cloud deployments and automated pipelines for continuous delivery.

 

This week we will start reviewing pending PRs from @daisuke-yoshimoto and @igdianov.

We will also start merging code from the incubator into our infrastructure components to make sure that our demos and Blueprints use the new services and features. We will have more news about our Modelling project soon, so keep an eye on this blog.

We keep getting messages from community users who want to try out our new infrastructure and services. We recommend checking out our Activiti Cloud Examples GitHub repository: https://github.com/activiti/activiti-cloud-examples and our GitBook’s Quickstart & Overview section: https://activiti.gitbooks.io/activiti-7-developers-guide/content/getting-started/quickstart.html

 

Get in touch if you want to contribute: https://gitter.im/Activiti/Activiti7

We look forward to mentoring people on the technologies that we are using so they can submit their first Pull Request ;)

Last week we worked hard on moving forward the Application Services and the new Runtime APIs, and on aligning our services and configurations to reflect these changes. This week we will push forward to make sure that we start having examples using these new services and APIs.

Feel free to review our RFC documents and add comments/feedback.

https://salaboy.com/2018/04/18/rfc-java-process-task-runtime-apis/

https://salaboy.com/2018/04/11/rfc-activiti-cloud-application-service/

 

@igdianov submitted Pull Requests related to Notifications and WebSocket integration at the Gateway level for GraphQL endpoints.

@daisuke-yoshimoto created the first implementation of the Signal Throw Event Cloud Behavior, which broadcasts signals to all Runtime Bundles by using Spring Cloud Stream. Pull requests: https://github.com/Activiti/Activiti/pull/1783, https://github.com/Activiti/activiti-cloud-runtime-bundle-service/pull/38, https://github.com/Activiti/activiti-cloud-build/pull/41

@constantin-ciobotaru rebased and resolved conflicts for the “delete task” (#1812) and “delete process instance” (#1809) related PRs.

@ryandawsonuk created PRs to publish application metadata (which reflects which application a service belongs to) to Eureka and to allow the stream names used between runtime bundles and connectors to be overridden. Started work on adding the application metadata to events.

@erdemedeiros worked on the new Process Runtime Java APIs. Added the first iteration of event listeners. Started thinking about connectors.

@salaboy worked on the RFC document for the new APIs and some refinements to the Application Service located in our incubator repositories.

 

We keep getting messages from community users who want to try out our new infrastructure and services. We recommend checking out our Activiti Cloud Examples GitHub repository: https://github.com/activiti/activiti-cloud-examples and our GitBook’s Quickstart & Overview section: https://activiti.gitbooks.io/activiti-7-developers-guide/content/getting-started/quickstart.html

 

Get in touch if you want to contribute: https://gitter.im/Activiti/Activiti7

We look forward to mentoring people on the technologies that we are using so they can submit their first Pull Request ;)

The main goal of this document is to propose a new set of Process & Task Runtime oriented APIs. These APIs will be provided as Maven modules separate from the activiti-engine module, where the interfaces defined in the APIs will be implemented. The main purpose of this new API is to make sure that the path to the cloud approach is smoother, allowing implementations to start by consuming the framework and then move smoothly to a distributed approach. As mentioned by Martin Fowler in his blog post Monolith First (https://martinfowler.com/bliki/MonolithFirst.html), we are going through an evolution towards better isolation to support scalable and resilient architectures. The APIs proposed here are consistent with modern development approaches, compact, and designed to make the transition to a microservices approach faster and conceptually more aligned.

Until now, the Activiti Process Engine was bundled as a single JAR file containing all the services and configurations required for the engine to work. This approach pushes users to adopt all or nothing. Inside this JAR file there are interfaces and implementations which are tightly coupled with the Process Engine internals. This causes maintenance and extension problems, with different implementers not understanding where the public API ends and where the internals begin. It also blocks the Activiti team from moving forward and refactoring the internals into a more modular approach.

It is really important to note that the old APIs will not be removed (for backward compatibility), but the new APIs will always be the recommended and maintained approach.

Process Runtimes are immutable runtimes that know how to execute a given set of business process definitions (concrete versions). Process Runtimes are created, tested, maintained and evolved with these restrictions in mind. Process Runtimes have a lifecycle: they can be created and destroyed. A Process Runtime is likely to be included in a pipeline that makes sure it will work properly when needed. In other words, we need to make sure that Process Runtimes, after their creation, will work properly based on the defined acceptance criteria.

It is also important to note that Process Runtimes give you the granularity to scale them independently. This contrasts with the notion of a Process Engine, where you never know the nature of the Processes it will contain, and hence testing, validating and defining scalability requirements becomes a painful (if not impossible) task.

In the same way, the Task Runtime provides access to Tasks. Once the Task Runtime is created, we can expect the same behavior (and configurations) to apply to every task created by this runtime.

One more important difference between the old notion of Process Engine and Process Runtimes is the data consumption aspect. Process Runtimes are designed to execute Processes. Task Runtimes are designed to execute Tasks. Process & Task runtimes are not designed for efficient querying, data aggregation or retrieval. For that reason the Runtime APIs will be focused on compactness and the execution side. Following the CQRS pattern (Command/Query Responsibility Segregation, https://martinfowler.com/bliki/CQRS.html) the Runtime APIs are focused on the command (execution) side. There will be another set of APIs for efficient querying and data fetching provided outside of the scope of the Runtime APIs. Check the Activiti Cloud Query Service for more insights about these APIs.

Main Constructs (APIs)

While creating the proposal, we had the following architectural diagram in mind. You can consider this the ultimate objective from an architectural and API point of view:

[Diagram: Process & Task Runtime draft architecture]

We want to make sure that there are clear limits and separation of concerns between the different components exposed in our APIs and for that reason, even if we cannot fully decouple the internals right now, we want to make sure that our APIs clearly state our intentions and motivations for future iterations.

ProcessRuntime

  • processDefinitions() -> immutable list
  • processInstances()
    • operations (Create/Suspend/Activate)
  • metadata()
    • signals()
    • users()
    • groups()
    • variables()
  • processInstances() -> get all the process instances that are in this Runtime (quick accessor to bypass the selection of the process definition)
    • Operations
      • Activate
      • Start
      • Delete
      • Suspend
  • Operations (broadcast and batch to all process instances)
    • signal()
    • stats() / info() // the statistics and status of the Runtime
  • configs() -> configurations used to create the Process Runtime
  • destroy()

TaskRuntime

  • Operations
    • create()
    • complete()
    • claim()
    • release()
    • update()
  • configs() -> configurations used to create the Task Runtime
  • destroy()
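As a sketch, the outlines above might translate into Java interfaces along these lines. This is a hypothetical illustration only: the method names come from the lists above, but the signatures and the helper types (ProcessDefinition, ProcessInstance, Task, RuntimeConfiguration) are assumptions, not the final API.

```java
import java.util.List;

// Hypothetical sketch of the proposed runtime surfaces; method names are
// taken from the outlines above, signatures are illustrative only.
interface ProcessDefinition { String key(); }

interface ProcessInstance {
    String id();
    void start();
    void suspend();
    void activate();
    void delete();
}

interface Task {
    void claim(String user);
    void release();
    void update();
    void complete();
}

interface RuntimeConfiguration { }

interface ProcessRuntime {
    List<ProcessDefinition> processDefinitions(); // immutable list
    List<ProcessInstance> processInstances();
    RuntimeConfiguration configs();
    void destroy();
}

interface TaskRuntime {
    Task create(String name);
    RuntimeConfiguration configs();
    void destroy();
}
```

Keeping both runtimes behind small, separate interfaces like these is what allows the two Maven modules described later to evolve independently.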

RuntimeConfiguration

  • eventListeners()
  • connectors() -> system-to-system connectors (old JavaDelegates)
  • dataStore()
  • policies() // security policies for process definitions
  • identity()
    • getGroupsForCandidateUser()

EventListener

Connector

  • health()
  • stats()

Events

  • Events emitted by the process runtime
    • ProcessInstanceCreated
    • ProcessInstanceStarted
    • ProcessInstanceCompleted
    • ProcessInstanceSuspended
    • ProcessInstanceActivated
    • ProcessVariableInstanceCreated
    • ProcessVariableInstanceUpdated
    • ProcessVariableInstanceDeleted
  • Events emitted by the task runtime
    • TaskCreated
    • TaskAssigned
    • TaskCompleted
    • TaskReleased
    • TaskUpdated
    • TaskSuspended
    • TaskActivated
    • TaskVariableInstanceCreated
    • TaskVariableInstanceUpdated
    • TaskVariableInstanceDeleted
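To make the event model concrete, here is a minimal sketch of how these event types could be consumed by an EventListener. The enum constants mirror a subset of the event names above; the RuntimeEventType and listener shapes are illustrative assumptions, not the actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: the event names come from the lists above,
// the RuntimeEventType/EventListener shapes are assumptions.
enum RuntimeEventType {
    PROCESS_INSTANCE_CREATED, PROCESS_INSTANCE_STARTED, PROCESS_INSTANCE_COMPLETED,
    PROCESS_INSTANCE_SUSPENDED, PROCESS_INSTANCE_ACTIVATED,
    TASK_CREATED, TASK_ASSIGNED, TASK_COMPLETED, TASK_RELEASED
}

interface EventListener {
    void onEvent(RuntimeEventType type);
}

// A minimal listener that records every event it receives, e.g. for tests
// or for forwarding events to an audit/query service.
class RecordingListener implements EventListener {
    final List<RuntimeEventType> seen = new ArrayList<>();

    public void onEvent(RuntimeEventType type) {
        seen.add(type);
    }
}
```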

Maven Modules

The separation of concerns between Tasks and Processes at the API level will be kept in separate modules: an activiti-process-runtime-api and an activiti-task-runtime-api. These modules will represent our backward-compatibility layer, with clear deprecation and evolution policies.

[Diagram: new APIs Maven modules]

For Spring Boot users, the Spring Boot Starters will provide all the configuration and wiring needed to quickly get a ProcessRuntime and a TaskRuntime up and running.

Developer Scenario

  1. The developer is working on a Java application with Spring Boot
  2. The developer adds the activiti-all-spring-boot-starter dependency
  3. The developer uses only interfaces from the activiti-process-runtime-api project to interact with Process Runtimes

Maven dependency:

<dependency>
  <groupId>org.activiti</groupId>
  <artifactId>activiti-all-spring-boot-starter</artifactId>
</dependency>

 

Configuration-wise, the following types will be autowired for you (following standard Spring practices, i.e. auto-configuration) while creating the ProcessRuntime instance:

  • EventListener
  • Connector

These auto-configurations should match the available metadata from our process definitions to make sure that our processes have everything required to run.

Then you should be able to do:

 

@Autowired
private ProcessRuntime processRuntime;

 

Now you have a ProcessRuntime available in your application. This ProcessRuntime object will be populated with the process definitions available on the classpath, which means that this ProcessRuntime instance can be heavily tested for that set of process definitions.

With the new processRuntime instance you can access the available processDefinitions:

 

List<ProcessDefinition> processDefs = processRuntime.processDefinitions();

 

Notice that there is no way to change this list of ProcessDefinitions: it is an immutable list.
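One way a runtime could enforce this immutability (a sketch of the idea, not the actual engine implementation) is by handing callers an unmodifiable view of its internal list:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: the holder keeps a private mutable list but only ever exposes
// an unmodifiable view, so callers can read but never alter it.
class DefinitionHolder {
    private final List<String> definitions =
            new ArrayList<>(List.of("myProcessDefinition"));

    List<String> processDefinitions() {
        return Collections.unmodifiableList(definitions);
    }
}
```

Calling add() or remove() on the returned list throws UnsupportedOperationException, so the runtime's set of definitions stays fixed after creation.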

You can get the configurations used to construct this ProcessRuntime by doing:

 

ProcessRuntimeConfiguration conf = processRuntime.configs();

 

Here you can check which connectors and event listeners are currently registered with your runtime. In particular, getting connector information is a key improvement over previous versions:

 

Map<String, Connector> connectors = processRuntime.configs().connectors();

 

This allows Process Runtimes to validate, at creation time, that all the loaded Process Definitions can be executed, because all the required connectors are present.
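A creation-time check of this kind could be sketched as follows. The ConnectorValidator name and shape are hypothetical; the real validation would live inside the runtime's creation logic.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch: given the connector names referenced by the process definitions
// and the connectors actually registered, report any that are missing so
// the runtime can fail fast instead of failing mid-execution.
class ConnectorValidator {
    static Set<String> missingConnectors(List<String> required,
                                         Map<String, Object> registered) {
        return required.stream()
                .filter(name -> !registered.containsKey(name))
                .collect(Collectors.toSet());
    }
}
```

If the returned set is non-empty, the runtime can refuse to start and list exactly which connectors the deployed process definitions still need.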

In a Spring context, all connectors and listeners will be scanned from the classpath and made available to the Process Runtime at creation time.

Creating process instances should be as simple as selecting the Process Definition and creating new instances with our fluent API:

 

ProcessInstance processInstance = processRuntime.processDefinitionByKey("myProcessDefinition").start();

 

And then actions:

 

processInstance.suspend();
...
processInstance.delete();

 

Getting process instances should look like:

List<ProcessInstance> instances = processRuntime.processInstances();

Or

 

ProcessInstance instance = processRuntime.processInstanceById(id); 

 

Task Runtime should follow the same approach as Process Runtime:

@Autowired
private TaskRuntime taskRuntime;
…

Task task = taskRuntime.createWith()...;
task.update();
task.complete();

 

The initial version of these APIs will be restricted to the minimal set of operations needed to cover most use cases. Are you using the Activiti 5 or 6 APIs? We would appreciate community feedback on which other features, not mentioned here, you are using in your projects.

Other Considerations and References

Reasons to introduce a new set of APIs:

  • Making sure that we have a clear line between internal code and the external APIs that we encourage users to rely on for backwards compatibility
  • A reduced technical surface that is nicely crafted and achieves a single purpose
  • A well-defined set of interfaces to avoid an all-or-nothing approach, with improved control over the transitive dependencies of our microservices and a clear separation of concerns
  • Close alignment with Cloud Native practices for an easy transition to distributed architectures
  • More freedom to refactor the internals into a more modular and maintainable approach
  • The ability to leverage new reactive frameworks and programming practices

PoC in Incubator (to track the progress)

https://github.com/Activiti/incubator/tree/develop/activiti-process-runtime-api

Feel free to add comments to our live RFC document here:

https://docs.google.com/document/d/1S9KDOoFTkbjwPq6LWEKVPLP5ySJJgoOX1ZIsZcdPi-M/edit?usp=sharing

Last week we worked hard on getting the Activiti Cloud Application Service abstraction layer and our proposal for a new set of Process & Task Runtime APIs into our Incubator GitHub repository: https://github.com/activiti/incubator. This week we will push forward to get the Application Service up and running in our Kubernetes cluster, along with a set of examples for the new Process & Task Runtime APIs. We are still refining the RFC document for the Process & Task Runtime APIs, but feel free to jump into our Gitter channel if you want to have a chat about this.

 

@MiguelRuizDev polished the Service layer between Repository and Controller, crafting more specific methods. Implemented the right verbs for requests along with UUIDs. Changed the RepositoryRestConfigurerAdapter to expose the resource Id.

@igdianov integrated WebSocket subscriptions and the GraphQL notification mechanism with the gateway.

@daisuke-yoshimoto working on BPMN Signals and Messages.

@constantin-ciobotaru finished "default diagram when no graphic info present in model" #1808 and "delete task" #1812.

@lucianoprea worked on adding Ad-Hoc Task creation to Runtime Bundles.

@mteodori fixed an IllegalArgumentException with illegal-transitive-dependency-check #1866.

@ffazzini trying out new deployment approaches in different namespaces inside Kubernetes.

@ryandawsonuk added support for variable types to the variable-related REST endpoints on runtime bundles. Updated documentation on security restrictions. Started work on adding the application name into services to support fully qualified names.

@balsarori working on correcting the task events order #1854.

@erdemedeiros worked on the new Process Runtime Java APIs. Added support for variable retrieval at process instance and task level.

@salaboy worked on the Application Service RFC and PoC in the incubator, and also worked with @erdemedeiros on defining the new Process & Task Runtime APIs.

 


When you build distributed architectures (with containers involved), applications end up being a set of logically related microservices that you run on top of an infrastructure. This infrastructure allows you to set up isolated environments where you can control resource allocation and security. We believe that inside each of these environments you might want to have a set of these logical applications, which means that you will probably have a lot of containers deployed. Some of these containers will be infrastructural services, such as data stores, message brokers and gateways, and some of them will be very domain specific.

The main responsibility of the Activiti Cloud Application Service is to provide this high-level (Activiti Cloud specific) view of a set of domain-specific applications that are running inside an environment. At some level, Applications also provide isolation from each other, and they usually require different Role Based Access Control and IDM configurations, different message broker configurations, different sets of credentials, etc.

The purpose of this document is to serve as a specification for the initial implementation of the Activiti Cloud Application Service, which acts as a kind of controller/monitor for the Applications that are deployed inside our Cloud Native environments. It can be used by client applications to consume and interact with each Application's services.

 

Notice: even though we are targeting Kubernetes, we need to make sure that the core service is technology agnostic. That is, like any service in a community project, it needs to rely on common abstractions, reusing existing ones or collaborating with existing communities to avoid duplication of effort.

Features

The Application Service endpoints should expose the available applications, the application metadata and the application state. The Application Service should not store this information, but rely on delegating as much as possible to the Service Registry, Configuration Service and other services in the infrastructure.

The Application Service is not responsible for deploying/provisioning new applications, but it should expose the structure and status of applications so clients can understand when applications are provisioned and ready to receive requests. This means that, at least in the initial version of this service, most of the operations will be read-only. The service should react to changes in the environment (new service registration/de-registration) to make sure that it provides data that is up to date and reflects the real status of the deployments; this can be achieved by monitoring the Service Registry.

The Application Service should also interact with the Configuration Service to understand each application and its infrastructural service configurations and dependencies. For example: if a Runtime Bundle (one of our core building blocks) depends on a relational database and a message broker, the Application Service can look into a Configuration Service (or ConfigMaps) for an entry related to the application to understand which infrastructural services are required for the application to run. We can be smart and list these requirements before deployment time, so an administrator can make sure that those infrastructural services are ready.

Applications also have a relationship with IDM and security: because we are using Keycloak as our SSO and IDM provider, Applications might require different realm configurations.

A key feature that must be considered and analyzed is Application versioning. It is important to make sure that we can move applications between different environments and that each Application has a lifecycle. The intention of this document is to describe these aspects, but not to solve them all.

Interactions / Flow

The following diagram shows the components that will interact to provide the features described above:

[Diagram: components interacting with the Application Service]

 

 

The Application Service has relationships with four key components:

  1. Configuration Service (ConfigMaps in K8s and Configuration Service in Spring Cloud)
  2. Service Registry (Eureka outside of K8s and the K8s Service Registry)
  3. Gateway (Spring Cloud Gateway)
  4. Identity Management / SSO (KeyCloak)

In order to provide these high-level abstractions (one application composed of a set of services) we need an Application Deployment Descriptor, which describes the expected structure of the Application. This Deployment Descriptor describes how the application is composed and implicitly defines what is required for the Application to be UP (its state).

For this reason, the first step of the interaction is to create this high-level Deployment Descriptor. It maps the Activiti Cloud Building Blocks (Runtime Bundles, Cloud Connectors, Query & Audit Services, etc.) to an Application structure.

This Deployment Descriptor will live inside the Config Server, which has a structure to store Deployment Descriptors in a directory fashion. In other words, the Deployment Descriptor Directory will hold the list of available Application Deployment Descriptors, and can be queried to obtain references to the available Deployment Descriptors for Applications.

These Deployment Descriptors will be matched against the Service Registry to determine the status of each Application.

Each provisioned Service will require two pieces of metadata that allow correlation against these Deployment Descriptors:

  • Activiti Cloud Application Name
  • Activiti Cloud Service Type

If these two pieces of information are added to the Service Instance information inside the Service Registry, the Application Service will be able to correlate, validate and monitor the relationship between the services that are currently deployed.

Notice that all services that don't belong to an application will be grouped together.

The Application Service will then be in charge of interacting with the Service Registry to answer questions about the number of deployed applications and their respective states.
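The correlation described above can be sketched as a simple grouping over registry entries. Note that the metadata key used below is a hypothetical name for illustration; the document only specifies that an application name and a service type are attached to each service instance.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: correlate Service Registry entries with Applications using the
// application-name metadata described above. The metadata key name is
// a hypothetical placeholder, not a real Activiti Cloud property.
record ServiceEntry(String name, Map<String, String> metadata) {
    String applicationName() {
        // Services without the metadata fall into a catch-all bucket,
        // mirroring the "grouped together" rule above.
        return metadata.getOrDefault("activiti-cloud-application-name", "unassigned");
    }
}

class ApplicationView {
    static Map<String, List<ServiceEntry>> byApplication(List<ServiceEntry> entries) {
        return entries.stream()
                .collect(Collectors.groupingBy(ServiceEntry::applicationName));
    }
}
```

With a grouping like this, answering "how many applications are deployed and which services does each one have?" becomes a pure function over the registry snapshot, with no state stored in the Application Service itself.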

The sequence of interactions is as follows:

[Diagram: Application Service interaction sequence]

It is important to notice that the Application Service stores no state: all state is derived from the Deployment Descriptors in the Config Service and from the services currently deployed in the Service Registry.

Data Types

As a first draft of the service, the following data types are going to be introduced.

These entities and data types should be agnostic to the underlying implementation. These Data Types represent our view of the world when we think about Activiti Cloud Applications, and we shouldn’t assume any particular implementation or technology stack.

  • ApplicationDeploymentDirectory
    • ApplicationDeploymentEntry[]
  • DeploymentDescriptor
    • applicationName
    • applicationVersion
    • serviceDeploymentDescriptors[]
      • Name
      • Version
      • ServiceType
  • Application[]
    • Application
      • Name
      • Version
      • ProjectRelease
      • Realm (Security Group / IDM bindings)
      • Status
      • Services
        • URL (path??)
        • Configurations[]
          • Resources[]
        • ServiceType
        • Name (Descriptive name)
        • ArtifactName (artifact - maven / docker image)
        • Version
        • Status
  • ServiceType -> Enum: (Connector, Runtime Bundle, Audit, Query, Domain Service)
  • ProjectRelease (coming from the Modeling Service)
  • Status (UP, DOWN, PENDING, ERROR)
  • Realm (TBD)
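In Java, these draft data types might be sketched as plain records. The names come from the outline above, but the field shapes are illustrative assumptions, kept deliberately implementation-agnostic.

```java
import java.util.List;

// Illustrative sketch of the draft data types; shapes are assumptions.
enum Status { UP, DOWN, PENDING, ERROR }

enum ServiceType { CONNECTOR, RUNTIME_BUNDLE, AUDIT, QUERY, DOMAIN_SERVICE }

record ServiceDeploymentDescriptor(String name, String version, ServiceType serviceType) { }

record DeploymentDescriptor(String applicationName,
                            String applicationVersion,
                            List<ServiceDeploymentDescriptor> services) { }
```

Because records are immutable value types, a DeploymentDescriptor naturally behaves like the static, queryable document described above rather than mutable runtime state.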

Proposed Endpoints

  • GET /v1/deployments/directory -> list of applications that we want to deploy (static)
  • GET /v1/deployments/{deploymentName} -> get the deployment descriptor for a given app
  • GET /v1/apps/ -> list of app names with a link to each app
  • GET /v1/apps/{appName}/ -> get app info
  • GET /v1/apps/{appName}/services
  • GET /v1/apps/{appName}/services/{serviceName}/url
  • GET /v1/apps/{appName}/services/{serviceName}/config

Evolution, Versions Upgrade and Data Migration

If we consider applications to contain Runtime Bundles, we need to define how these bundles evolve into future versions and how the version of the application relates to the Runtime Bundles in it. Runtime Bundles were conceived as immutable containers for sets of Runtimes, not necessarily and not only Process Runtimes.

We need to define how this is going to work for some common scenarios, making sure that we cover a wide range of them. Traditionally, there are several different scenarios that BPM platforms need to cover:

  1. Migration between different versions of the same process definition: this opens the door to a lot of complications, for example: what happens if two process versions are completely different and migration of in-flight processes is not possible at all? It also introduces the concept of manual migration, where each individual instance needs to be inspected and migrated manually.
  2. Running two different versions of a process definition in parallel until the old version has no more in-flight processes. This is usually a good approach to make sure that things cannot go wrong, but it forces us to think about the routing logic used to create new process instances. *It is interesting to notice that this approach is more aligned with the concept of immutability.*
  3. Sometimes, no matter what you are doing, you just want to deploy a new Application version and forget about the old one, including its data. This is quite common in development cycles: you basically want to throw away the current version of the application and replace it with a new version without really caring about data migration. If you want fast iteration cycles this might be required.

Based on these categories the following strategies are suggested:

  • Maintain: use the same data store and add new versions of process definitions, keeping the old ones in. This will require some enforcement at the time of creating new versions of the existing Runtime Bundle.
  • Migrate: a set of operations will need to be executed when rolling out an upgrade; endpoints should be provided for in-flight process extraction and injection. This strategy will be very useful when we want to move executions from one environment to another, where the underlying data store is not the same and data needs to be moved.
  • Parallel: you run the previous version of the in-flight processes in the same Runtime Bundle where they were defined and start running the new version in parallel in a different Runtime Bundle instance. A routing mechanism will be needed, and a retire policy might be required to decide when to shut down the old version.
  • Destroy: you throw away the old definition and just run the new one. You don't care about data that was generated by the old version.

Notice that these strategies should be defined at application level, not at Runtime Bundle level, because these strategies will impact other components such as Audit and Query. The consistency needs to be maintained at Application Level.

Each strategy will implement a different flow of actions (pipelines) that should be automated or triggered by a DevOps user. Notice that no reference to any pipeline should be included in this service, but hooks should be available.

(TBD Strategies examples and planning for implementation, suggestions and comments are more than welcome)

Other topics / Risks / Open Questions

We are building the Application Service because we have some Activiti Cloud specific requirements, but the Kubernetes, Spring Cloud and other communities (like JHipster) are looking at these requirements from different angles. As part of the development of this service, we commit to monitoring these communities and collaborating with them to make sure that we do not duplicate any functionality; instead we will collaborate on and improve what they propose, so that we all benefit from the same set of core libraries and infrastructure.

Notice that, because of the interaction between the Service Registry and the Configuration Service, for pure Spring Cloud apps we can bundle the Service Registry and the Configuration Service together, as other communities are doing (http://jhipster.tech/). Following the same approach, we can also bundle the Application Service with these two to avoid deploying extra containers.

When we look at K8s, this means that we will need a container for the Application Service which will interact with the K8s Service Registry and ConfigMaps, raising the question of whether we can bundle this service with the Gateway container.

Some research should be done on the relationship between our ApplicationDeploymentDescriptors and real deployment descriptors such as Kubernetes deployment descriptors (YAML files) and Helm charts. It is important to have a trace between the Activiti Cloud view and the real deployment descriptors that are used to provision the services. For this reason, looking into Monocular (https://github.com/kubernetes-helm/monocular) and the relationships between the services explained here is important. It is also important to make sure that, for versioning and releases, we follow practices similar to those of Helm or similar technologies, which are already designed to solve these scenarios in a non-BPM-specific way.

References

Github Issues:

https://github.com/Activiti/Activiti/issues/1692

Incubator Project:

https://github.com/Activiti/incubator/tree/develop/activiti-cloud-application-service

Original Document in Google Docs open for Comments:

https://docs.google.com/document/d/1ZKKJQIXu313ak9lUzsfj46bZbQRzqe1zPrPVpM98hlU/edit?usp=sharing

Last week we made some very good progress on the main topics that we want to cover before Beta1. The Application Service and the new Process Runtime Java APIs are moving forward at a very nice pace. There is a lot of work to do around these two areas, so we want to stay focused on getting the first iteration out the door soon. At the same time, we keep iterating on our core building blocks to make sure that they are consistent.

 

@MiguelRuizDev changed the layout of the Task Manager Incubator Project from Assembler to a Service between Controller and Repository, managing all endpoints from the Controller instead of the Repository. Implemented architectural changes to tests using JUnit and AssertJ.

@igdianov finishing the WebSocket and Gateway integrations for notifications and subscriptions.

@daisuke-yoshimoto working on the BPMN Signal and Message types for cloud native deployments.

@constantin-ciobotaru worked to add a test for "delete task" #1812, plus more changes for "default diagram when no graphic info present in model" #1808.

@lucianoprea worked on standalone tasks and subtasks in Runtime Bundles.

@mteodori visited Iasi, Romania to meet the remote team there, and fixed build issues #1858 and #1857.

@ffazzini had discussions and reviews around the Application Service.

@ryandawsonuk improved the security policies on the runtime bundle service by adding filtering by RB name (therefore making it possible to work from a single access control config across multiple services). Created a PR to add support for the JSON media type, according to Alfresco guidelines, for the audit service.

@erdemedeiros worked on the new Process Runtime Java APIs. Added the basic operations around process definitions and process instances.

@salaboy worked on the Activiti Cloud Application Service PoC to finish the RFC document.

This week we will push harder on the Application Service and the API definitions, and make sure that our Cloud Native Building Blocks are aligned (from a dependencies and APIs point of view).
