
Setting Course

Posted by ragauss Employee Dec 15, 2017

Long ago, perhaps 6 or 7 BC (before children, my oldest is 9), I made the adventurous/stupid decision to buy a very used boat off of eBay.


The 1986 Bayliner’s previous owner was a duct tape kind of guy: quick fix now, real fix later, (but later never comes).


I am not a duct tape kind of guy.


My (beautiful and tolerant) wife and I spent many a weekend getting the boat in shape: redoing electrical systems, replacing engine components, refinishing teak… so much so that the marina staff would tease us, asking if we would ever actually take it out.  “We’ll take it out when it’s ready”, we’d reply.


After many months it was finally ready.


We had a fantastic 5 day trip around the Chesapeake Bay planned.  The first day was absolutely perfect, the boat effortlessly gliding across calm, glassy water to Solomon’s Island for a relaxing evening.


The next day started off with promise, but on the way to our second destination things very quickly took a turn for the worse when the boat broke down, leaving us adrift in a shipping channel.


We later learned that a piece of the transmission in the outdrive had worn down and broken off. It was an internal component not easily inspected, and it likely would not have been recognized as nearing failure even if it had been.


All of our time optimizing other components wasn’t wasted, however; we certainly became much more familiar with them, which gave us more confidence when we finally were underway. But perhaps if we had taken it out on the water sooner, more often, on shorter outings, exercising all of the components working together, we would have identified this issue before it ruined our holiday.


Our family still enjoys boating (albeit on a different boat), particularly actually getting out on the water. But enough of the nautical stories; you’re probably here to read about Alfresco...


As I’m sure you've heard by now, the Alfresco Digital Business Platform (DBP) is quite a compelling story itself: Process, Content, and Governance under one roof powering digital transformation in enterprise organizations across all kinds of industries.  That’s a big ship to deliver, particularly from a software engineering perspective, and as you can imagine something that needs to constantly evolve with the ever-changing technical landscape we’re all a part of.


I’ve mentioned previously that we created a new Platform Services team in engineering and we’ve been primarily focused on common deployment needs via Docker and Kubernetes thus far.  Our team’s original plan was to next use this deployment approach to improve the production deployment of specific components of the Digital Business Platform.  Of course, deployment is just one piece of the DBP puzzle and the Platform Services team is also responsible for the shared infrastructure required to link the DBP components together and provide a consistent external interface to them.


If you haven’t seen it yet, Alfresco’s Activiti project (the engine powering Alfresco Process Services) and the vibrant community around it have been doing some fantastic, innovative work in Activiti 7 to bring process a modern architecture in line with industry trends.  That architecture depends on pieces of infrastructure very similar to what’s needed for the DBP, and since the DBP will include Activiti 7, any duplicated and/or diverging infrastructure could be difficult to resolve in the future.


To accelerate the optimization of the DBP as a whole and across the organization, we’ll bring together the entire DBP as it exists now, get it out on the water, and iterate on the infrastructure, consolidating redundant components and hardening them for release to enterprise customers.


As part of that effort we’ll move some of the existing infrastructure components currently under Activiti’s umbrella to a new home, making them common across all of Alfresco’s ecosystem, still as open source projects and bringing the current active community along with them.



This approach has a number of benefits:

  • We can benefit from Activiti’s momentum without starting from scratch
  • We avoid the potential for duplication and divergence of Activiti’s infrastructure efforts without slowing down its development
  • We continue to foster an open community around infrastructure, and in fact expand it via shared ownership by Alfresco DBP engineering, Activiti, and the community, with the Platform Services team guiding and being responsible for delivery as supportable DBP components
  • Alfresco engineering teams can have a common development and test environment where they can start to leverage new infrastructure components


I hope you’re as excited as I am for this new way of moving forward and I encourage you all to contribute.


Let’s get out on the water, together.



Team Kickoffs

Posted by ragauss Employee Oct 6, 2017

Hello Again


Hi, I’m Ray Gauss.  You may remember me from such blog posts as “DAM Standards” and “Metadata… Stuff”.


I’ve worked in Engineering on many areas of the Alfresco product portfolio and have fairly recently transitioned to the role of Architect for Platform Services, a new team whose self-proclaimed mission is to:


Provide services, in the form of deliverable artifacts and guidelines, for the Alfresco Digital Business Platform.   The purpose of these services is to provide unified, secure and scalable technical frameworks to allow other components and consumers of the Platform to interoperate and focus on delivering integrated process, content and governance capabilities that enable digital transformation.  


The team members have a healthy mix of backgrounds from development, QA, and DevOps and are positioned to do some fantastic work!


In this post I wanted to give a bit of insight into some aspects of our team’s processes, specifically new project kickoffs.




When a development team starts on new work it’s usually helpful to do some form of kickoff to get everyone engaged and gain a shared understanding of, or in some cases help shape, the backlog.


We recently went through kickoff sessions for three different bundles of work ahead of us and chose a different method of team engagement for each.


User Story Mapping


I won’t go into the details of the User Story Mapping process itself here as there is plenty of information on the subject including the video and reference guides on Jeff Patton's website.


In my limited experience, this process works best for projects where there are several user personas involved that need to complete many tasks against the system being designed, particularly when a fair amount of UX is involved (I’d consider any interaction with a system part of UX, not just graphical UIs).


As such, we chose this method for discussing development of Containerized Deployment since we’ll need to account for the experiences that developers, runtime container cluster admins, solutions engineers, IT, etc. might have with the framework.



If you’re following the User Story Mapping process to the letter it can literally take days, so if you’re considering this method and are limited on time I’d suggest that you define your ideal goals for the session but make sure that you try to capture all the user personas and high-level tasks before going too deep on any one story.


Role Play


This is a bit of an odd one that I don’t think I’ve seen used elsewhere, but it has worked well the few times I’ve employed it.


Each person on the team assumes the role of a participant in a system.  That could be a user, browser, service component, database, etc.  Everyone wears a sticky indicating their role.


Common flow scenarios are then acted out, passing data on other stickies from one role to another, perhaps as part of a request, getting back another sticky with response data.


I find this helps solidify both the component model and complex sequences of interactions between the components, but probably works best when there is already some high-level understanding of that component model.


We chose this method for getting our heads into our Identity Service and API Gateway work.



At the end of the session the team felt that while they understood the flow they would have also liked to have seen the sequence written down as they went so they wouldn’t have to keep the whole thing in their head.


As a result, we spent the next morning collaboratively creating a sequence diagram of the same scenario using PlantUML, part of our method of architectural documentation.
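For anyone who hasn’t used it, PlantUML sequence diagrams are just plain text. The snippet below is a purely illustrative sketch; the participants and messages are invented for this example, not our actual Identity Service flow:

```plantuml
@startuml
actor User
participant Browser
participant "API Gateway" as Gateway
participant "Identity Service" as Identity

User -> Browser: open application
Browser -> Gateway: GET /api/resource
Gateway -> Identity: validate access token
Identity --> Gateway: token valid
Gateway --> Browser: 200 OK (resource)
@enduml
```

Because the source is text, it diffs and reviews like any other file, which is part of why it fits our method of architectural documentation.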


Whiteboarding Diagrams


Ah, good old whiteboards... with squares, and rounded squares, and circles, and arrows, there must be arrows, and dashed squares that enclose other stuff!  Draw all the things!


They still work really well for many situations, including general design and introduction to new concepts and systems.


We used this one for our discussion on the Messaging and Event Gateway project.




It is a bit tougher to get total team involvement using this method.  We had a few people wielding dry erase markers but often the components and interactions are more easily drawn serially, and there’s only so much room to physically stand next to the board.  Maybe everyone without a marker should get a laser pointer at the next one (as long as we’re careful not to blind the marker people, marker people can be very sensitive).


Having a bit of fun with some of the components can certainly help to break up the monotony (message wagon FTW!).


Distributed Teams


While Platform Services is a distributed team spanning four time zones (GMT+2 to GMT-7), we were lucky enough to be able to coordinate the above as an in-person gathering in Maidenhead.


However, there are online tools available that target some of these methods.  I’ve used several in the past, and I have a current favorite for both story mapping and whiteboarding that I hope we'll start using more in this team.


I suppose the role play game could be done with our Zoom video chats, but we'd have to think about the logistics of that one.


When you do have an opportunity for a distributed team to be together in-person it can also be very helpful to go through some form of team building exercise.  Our fantastic scrum master planned an escape room outing:


(Cristina Sirbu, Jared Ottley, Christine Thompson, Marco Mancuso, Ray Gauss, Gavin Cornwell, Subashni Prasanna, Sergiu Vidrascu, Greg Melahn, Nic Doye)


What Else?


There you have it, a peek at our kickoffs.


Are there other methods or tools you’d recommend for team engagement?

Having reliable pipelines to build, test, compile and ship your products is, in my opinion, the backbone of a software organisation. Having good developers is one thing, but if you don't have the pipe that gets that product "out there" for the customers to see as soon as humanly possible, then you have yourselves a bottleneck.


Delivering a product quickly and efficiently is the first of the "Three ways of DevOps": increasing the "flow" from left to right. 'Left' being the action of committing your code. 'Right' being the moment a customer can use the product, whether that's live on a production site or as an artefact made available to download. This also ensures that we don't pass defects downstream, that the system is understood by the entire team, and that the performance of the system overall is emphasised.


One of the exciting challenges we have here at Alfresco is the opportunity to improve on our current pipelines and think out-of-the-box. Innovation is key. Just because we have used a central system historically this doesn't mean that we are pinned to only using that system. Teams within Alfresco are now being encouraged to design, implement, manage and maintain their own build pipelines. Whether the pipelines remain on Bamboo or move to a different platform entirely is a decision each team makes together as a team.


The DevOps engineers have recently been reorganised to embed in these teams that require this support and to spread our culture. This change to our structure and daily work has given us the chance and freedom to investigate and work on technologies we may not have had the opportunity to previously. We all still meet weekly to discuss our work just to make sure we all aren't reinventing the wheel or duplicating our efforts.


My recent challenge was to evaluate a pipeline for the team I work in. I needed to look at the throughput and the duration of the builds. When the builds were being triggered they were consuming most of the central build resources, preventing other teams from building any of their products and placing their jobs in a queue. And when these jobs were building, they could sometimes take up to 9 hours to complete!!


This is far too long and contradicts the second of the "Three ways of DevOps"; Amplifying the "feedback" loop, right to left. We need to be able to improve how the system works and make corrections continually. The goal being that all customers, internal and external, are understood and responded to. These actions also further embed knowledge of the working systems in the teams working on them.


As part of my evaluation I discovered some interesting things, the most interesting being that our use of "elastic agents" was flawed. Elastic agents are build agents that are created dynamically on AWS, based on demand from our central build server. Each of the elastic agents is stateless; they all spin up with the same AMI, and that's how it should be. Most if not all of the jobs we run need to download libraries and dependencies before being able to run their main tasks, so we could have nearly 20 minutes of downloads happening to run 2 minutes' worth of unit tests!


To fix this we attempted to use EFS. This allowed all the elastic agents to launch as normal but then mount the EFS volume themselves, using this volume as a single repository for commonly downloaded libraries. This approach seemed to work at first, but then there were issues with artefact and snapshot conflicts, so we reverted. This is still a valid approach, but how we manage and implement the correct levels of segregation of the shared resources and job outputs still needs to be determined.


Another approach I took, which is one of my previously mentioned POCs, was to use a pre-provisioned Docker image that ALREADY contains the common libraries and dependencies in the correct directories. This means that when an instance of the image is running (the container), all the build agent has to do is check out the source code and run the build commands. Sounds great! This method could be used both on our current build system and in conjunction with my second POC. We'll get to that one in a bit.
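As a rough sketch of the idea (the base image, build tool and file names here are assumptions for illustration, not our actual setup), a pre-provisioned build image can be as simple as:

```dockerfile
# Hypothetical pre-provisioned build image: dependencies are resolved once,
# at image build time, and baked into an image layer.
FROM maven:3.5-jdk-8
WORKDIR /build
COPY pom.xml .
# Download all dependencies and plugins into the image's local repository
RUN mvn -B dependency:go-offline
```

After building and tagging that image (e.g. `docker build -t team/build-image .`), a job only needs to mount the source and run the build offline, something like `docker run --rm -v "$PWD":/src -w /src team/build-image mvn -o verify`, so the 20 minutes of downloads happen once per image build instead of once per job.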


The HUGE benefit of having Docker containers build your code is that the teams can manage their own build images. All we need to do is configure the jobs to use the images we produce. Sounds so simple. But with this implementation comes the need to skill up devs who may never have even installed software from the command line, let alone used Docker to manage this. Still, another challenge of ours to face.


Moving on to my second POC now. This was to "prove" that we could use AWS and its services to build, test and deploy our code in a very aggressive manner. We're talking multiple deployments per day. It is very possible to achieve this, but the work, time and cost involved are the greatest of all the approaches taken so far. It also requires skills in multiple areas (AWS, Docker, Lambda coding) that the teams may not necessarily have the time to invest in. We all have deadlines to meet. With this, I was able to create an AWS pipeline that glued together a mirrored repo on AWS CodeCommit and a build project using AWS CodeBuild. Once we have a strategy for deployment of test/staging systems we can start to add AWS CodeDeploy to the pipeline. Getting this up and running took a matter of an hour or two. It was so simple. I was also able to produce an architectural diagram per branch to see the workflows and resources created:



This is still an ongoing piece of work and the decision to use this idea is still up for discussion. Having the freedom to experiment and break from the status quo is a perfect segue to the third of the "Three ways of DevOps": continual experimentation and learning. Within this team, and indeed Alfresco, I was able to explore technical alternatives, experiment with new systems, learn new products and fail rapidly. There's no better way to learn than from your own mistakes. We need to be able to take risks as often as possible. If we didn't, we wouldn't be the business we are today.


Take risks in your day to day duties! Add comments and thoughts to this post to let the community know what you have overcome!

Yesterday we, in Alfresco DevOps, successfully released our new control system for the creation, deployment, allocation and management of Alfresco CS (Content Services) Trials.


These trials last for two weeks, allowing you to experience ACS before committing to a purchase.


All you need to do is fill in the form at Try Alfresco One Online Now | Alfresco, and you should receive an email within minutes with your login details for your isolated environment in the cloud.  All of this has been made possible with the power of AWS!


The source code is open source and can be found at GitHub - Alfresco/online-trials-architecture. For all you techies out there, please fork and contribute!

As of yesterday, we opened up two repositories.


Firstly our in-development Online Trial Serverless Architecture. This can be found at GitHub - Alfresco/online-trials-architecture 

This new system is a replacement for our current deployment method and revolutionizes our present approach to deployments. Its architect, Martin Whittington, is giving a brief talk about the story of how this challenge came about at #beecon2017 in Zaragoza on Friday 28th April. 


Secondly, the Lambdas we have been writing in DevOps that we use for specific products and/or day-to-day tasks. This can be found here: GitHub - Alfresco/devops-lambdas 



*Alfresco Content Services (ACS) is no longer using AWS Quick Start for deployments on AWS. We recommend Amazon EKS as the new reference deployment for ACS on AWS. To learn more, read the following blog post from our Product Manager Harry Peek:


Today we are celebrating the release of the AWS Quick Start for Alfresco One!


What is AWS Quick Start?

A collection of everything you need for a production-ready reference deployment of Alfresco One into your AWS Cloud, including CloudFormation templates and instruction guides.

AWS Quick Starts are written by our own architects, as an official AWS Partner, in conjunction with Amazon Solutions Architects. The Quick Start provides all of the steps to automatically:

  • create and configure all needed infrastructure resources in the AWS cloud
  • deploy Alfresco One in a clustered configuration


The Alfresco One CloudFormation Template enables you to launch a production-ready Alfresco One environment in minutes!


Where can I find the AWS QuickStart for Alfresco One?

The official Quick Start can be launched along with a step-by-step guide containing detailed information to deploy your cluster in minutes.


What's in the AWS QuickStart for Alfresco One?

We put a lot of effort into making this new template an Alfresco One Reference Architecture: not only is it highly available, but we also use our latest automation tools like chef-alfresco, our experience with tuning, and best practices learned during our latest benchmarks, without forgetting architecture security. In addition, to make it faster to deploy, we are using the official Alfresco One AMI published in the AWS Marketplace.


The Reference Architecture deployed:

Some interesting features you will find in this new template are:

  • It takes ~45 minutes with little user intervention
  • All Alfresco and Index nodes will be placed inside a Virtual Private Cloud (VPC).
  • Each Alfresco and Index node will be in a separate Availability Zone (same Region).
  • We use Alfresco One with Alfresco Office Services and the Google Docs plugin, taken from the Alfresco One AMI.
  • All configuration is done automatically using Chef-Alfresco; you don’t need to know Chef to make this work.
  • An Elastic Load Balancer instance with “sticky” sessions based on the Tomcat JSESSIONID.
  • Shared content store is in an S3 bucket.
  • MySQL database on RDS instances in Multi-AZ mode.
  • We use a pre-baked AMI: our official Alfresco One AMI published in the AWS Marketplace, based on CentOS 7.2 and with an all-in-one configuration that we reconfigure automatically to work for this architecture and save time.
  • Auto-scaling rules that will add extra Alfresco and Index nodes when certain performance thresholds are reached.
  • HTTPS access to Alfresco Share is not enabled by default, but everything is set up to enable it.


The 6-minute video below guides you through how to deploy Alfresco One using the AWS Quick Start.


Go to the official Quick Start now to launch Alfresco One into your own AWS Cloud!

This article was originally posted on my personal blog.

SCAP (Security Content Automation Protocol) provides a mechanism to check configurations, manage vulnerabilities and evaluate policy compliance for a variety of systems. One of the most popular implementations of SCAP is OpenSCAP, and it is very helpful for vulnerability assessment and also as a hardening helper.
In this article I’m going to show you how to use OpenSCAP in 5 minutes (or less). We will create reports and also dynamically harden a CentOS 7 server.
Installation for CentOS 7:
yum -y install openscap openscap-utils scap-security-guide 
wget -O /usr/share/xml/scap/ssg/content/ssg-rhel7-ocil.xml
Create a configuration assessment report in xccdf (eXtensible Configuration Checklist Description Format):
oscap xccdf eval --profile stig-rhel7-server-upstream \
--results $(hostname)-scap-results-$(date +%Y%m%d).xml \
--report $(hostname)-scap-report-$(date +%Y%m%d).html \
--oval-results --fetch-remote-resources \
--cpe /usr/share/xml/scap/ssg/content/ssg-rhel7-cpe-dictionary.xml \
/usr/share/xml/scap/ssg/content/ssg-rhel7-xccdf.xml
Now you can see your report, and it will be something like this (hostname.localdomain-scap-report-20161214.html):
See also different group rules considered:
You can go through the fails in red and see how to fix them manually or dynamically generate a bash script to fix them. Take a note of the Score number that your system got, it will be a reference after hardening.
In order to generate a script to fix all needed and harden the system (and improve the score), we need to know our report result-id, we can get it running this command using the results xml file:
export RESULTID=$(grep TestResult $(hostname)-scap-results-$(date +%Y%m%d).xml | \
awk -F\" '{ print $2 }')
Run the oscap command to generate the fix script, which we’ll call fix-script.sh here:
oscap xccdf generate fix \
--result-id $RESULTID \
--output fix-script.sh \
$(hostname)-scap-results-$(date +%Y%m%d).xml
chmod +x fix-script.sh
Now you should have a script to fix all issues; open and edit it if needed. For instance, remember that the script will enable SELinux and make lots of changes to the Auditd configuration. If you have a different setup you can run commands like the ones below after running ./fix-script.sh, to keep SELinux permissive and to change some actions of Auditd.
sed -i "s/^SELINUX=.*/SELINUX=permissive/g" /etc/selinux/config 
sed -i "s/^space_left_action =.*/space_left_action = syslog/g" /etc/audit/auditd.conf
sed -i "s/^admin_space_left_action =.*/admin_space_left_action = syslog/g" /etc/audit/auditd.conf
Then you can build a new assessment report to see how much it improved your system hardening (note I added -after to the files name):
oscap xccdf eval --profile stig-rhel7-server-upstream \
--results $(hostname)-scap-results-$(date +%Y%m%d)-after.xml \
--report $(hostname)-scap-report-$(date +%Y%m%d)-after.html \
--oval-results --fetch-remote-resources \
--cpe /usr/share/xml/scap/ssg/content/ssg-rhel7-cpe-dictionary.xml \
/usr/share/xml/scap/ssg/content/ssg-rhel7-xccdf.xml
Additionally, we can generate another evaluation report of the system in OVAL format (Open Vulnerability and Assessment Language):
oscap oval eval --results $(hostname)-oval-results-$(date +%Y%m%d).xml \
--report $(hostname)-oval-report-$(date +%Y%m%d).html \
/usr/share/xml/scap/ssg/content/ssg-rhel7-oval.xml
The OVAL report will give you another view of your system status and configuration in order to allow you to improve it and follow up, making sure your environment reaches the level your organization requires.
Sample OVAL report:
Happy hardening!

…On AWS, in High Availability, Auto scalable and Multi AZ support.

Back in 2013 we (at Alfresco) released an AWS CloudFormation template that allowed you to deploy an Alfresco Enterprise cluster in Amazon Web Services, and I talked about it here.
Today, I’m proud to announce that we have rewritten that template to make it work with our modern stack and the new version of Alfresco One (5.1). We put a lot of effort into making this new template an Alfresco One Reference Architecture: not only is it highly available, but we also use our latest automation tools like Chef-Alfresco, our experience with tuning, and best practices learned during our latest benchmarks and related to architecture security. In addition, to make it faster to deploy, we are using the official Alfresco One AMI published in the AWS Marketplace.
I’d like to mention some features you will find in this new template:
  • All Alfresco and Index nodes will be placed inside a Virtual Private Cloud (VPC).
  • Each Alfresco and Index node will be in a separate Availability Zone (same Region).
  • We use Alfresco One with Alfresco Office Services and the Google Docs plugin.
  • All configuration is done automatically using Chef-Alfresco; you don’t really need to know about Chef to make this work.
  • An Elastic Load Balancer instance with “sticky” sessions based on the Tomcat JSESSIONID.
  • Shared content store is in an S3 bucket.
  • MySQL database on RDS instances in Multi-AZ mode.
  • We use a pre-baked AMI: our official Alfresco One AMI published in the AWS Marketplace, based on CentOS 7.2 and with an all-in-one configuration that we reconfigure automatically to work for this architecture and save time.
  • Auto-scaling rules that will add extra Alfresco and Index nodes when certain performance thresholds are reached.
  • HTTPS access to Alfresco Share is not enabled by default, but everything is set up to enable it.
As a result of this deployment you will get this environment:
Our CloudFormation Template and additional documentation is available in Github.
In the video below you can see a quick demo of how to deploy this infrastructure with just a few minutes of user intervention. Isn’t it cool? Do you know how much time you save doing it this way? You can also set up production and test environments exactly the same way: faster, easier and cheaper!


This blog post was originally posted on Toni's personal blog.

Delivery pipelines: when you think pipeline, you usually think "I put something in one end and get something out of the other". True, but it's what happens whilst in the "pipe" that's the exciting bit. You can add as many or as few gateways/checkpoints/hurdles as you feel are necessary to ensure the quality and dependability of your software. Maybe you have multiple pipelines, or just one. There isn't a one-size-fits-all pattern, which is great. The more innovative ideas the better.


In this blog I will cover how we created a pipeline for 'Continuous Delivery' involving various steps and technologies/methodologies.


Some of the bits covered are Chef (linting, testing), SPK (AMI creation), AWS CloudFormation and Selenium testing. I've added links so that you can read and digest those technologies in your own time. This blog will not cover what they do, and the assumption is that you have some basic understanding of each of them.


So where does it all start? Typically there has been some form of change to our Chef code: some bug fix or new feature that needs to be tested out in the wilds of the 'net. So our build agent checks out the code and runs some checks before moving on to the next stage. These are the steps taken as part of our "pre-AMI build" checks:

  1. Foodcritic: This tool runs 'linting' checks on our Chef cookbooks and reports back any anomalies. Very useful to maintain high and consistent standards across all of our cookbooks and repos.
  2. RuboCop: A static Ruby code analyser. Chef is based on Ruby, so checking for Ruby quality is also equally important. Additionally custom modules/libraries also need to be checked that are 100% Ruby.
  3. Unit Tests: We use RSpec to run a bunch of unit tests against the recipes we have written. Some people feel this step is overkill. Not me; the more checks the better. What I like about unit-testing Chef recipes is the contractual obligation between the test and the recipe. The test expects the recipe to contain certain steps and will fail if those steps are removed. The downside to this is that if new steps are added to the recipe and not covered by the unit tests, the "contract" becomes weaker. But that's where good old code reviews come into play. Don't be afraid to push back and ask for tests! A coverage report is also generated as part of this step and published as an HTML artefact.


Great! The checks passed! So now we can move onto building our AMI. For this we use an in-house tool called the SPK (link above). We have pre-defined templates that dictate the Chef properties, attributes and packer configurations needed to build our image. Once the AMI has been created, we take a snapshot and return the AMI ID to our build agent.


This is where it gets interesting....


Within our repo we also have the CloudFormation template, which contains, amongst other things, a list of mappings of AMIs available in specific regions. Right now what we do is: once the AMI ID has been returned from AWS, we scrape the SPK log files, grab the ID, parse the template and update the AMI ID. This then triggers a commit on that repo that another build job is watching and waiting for. It will only fire IF a change has been made to the template file, not the Chef code. This part could probably be replaced with a Lambda that returns the latest AMI ID.
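A minimal sketch of that scrape-and-patch step is below. The log format, file names and AMI IDs are invented for illustration; the real SPK log and CloudFormation template are more involved:

```shell
# Fake SPK log and template standing in for the real files
cat > spk.log <<'EOF'
amazon-ebs: Creating the AMI...
amazon-ebs: AMI: ami-0abc1234
EOF
cat > template.json <<'EOF'
{ "Mappings": { "AWSRegionArch2AMI": { "us-east-1": { "HVM64": "ami-9999zzzz" } } } }
EOF

# Grab the last AMI ID from the log and substitute it into the template
NEW_AMI=$(grep -o 'ami-[0-9a-f]*' spk.log | tail -1)
sed -i "s/ami-[0-9a-z]*/${NEW_AMI}/" template.json

grep -o 'ami-[0-9a-z]*' template.json   # ami-0abc1234
```

Committing the patched template is then what wakes up the downstream build job.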


The next build steps actually build the CloudFormation stack on AWS. This is triggered using the AWS command line, passing in the template and any parameters needed. One of the outputs of the template is the publicly accessible URL of the ELB in that stack. Once this is available and all the relevant services have started, we use that URL to run a selection of Selenium tests. If the tests pass, we use the AWS command line again to delete the stack. Once this has returned an expected status, the pipeline is complete.
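Sketched with the AWS command line, that sequence looks roughly like the following (the stack name, template file, parameters and output key are illustrative assumptions, not our real values):

```shell
# Create the stack from the template
aws cloudformation create-stack \
  --stack-name pipeline-test \
  --template-body file://template.json \
  --parameters ParameterKey=KeyName,ParameterValue=build-key

# Block until the stack is fully up
aws cloudformation wait stack-create-complete --stack-name pipeline-test

# Read the public ELB URL from the stack outputs
ELB_URL=$(aws cloudformation describe-stacks --stack-name pipeline-test \
  --query 'Stacks[0].Outputs[?OutputKey==`ElbUrl`].OutputValue' --output text)

# ...run the Selenium suite against $ELB_URL here...

# Tear the stack down again to keep costs low
aws cloudformation delete-stack --stack-name pipeline-test
```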


There are lots more opportunities for improvement here; once the stack is up and running, we could run more thorough tests such as load testing, stress testing and security testing (the list is endless). We also have a whole bunch of integration tests, written using the ServerSpec DSL, that we have configured to output their results as JUnit-formatted XML. Perfect for pipelines that can read these file types, and perfect for reporting on build-over-time statistics. These need to be fed into the pipeline.


This pipeline currently gives us the confidence that:

  • The Chef code is sound and well-tested
  • The CloudFormation template is valid and repeatably deliverable.
  • We are able to login to our delivered App and run some automated tests.
  • We can delete the stack rapidly to reduce costs.


Thanks for reading all! Any comments and feedback always welcome!

By Martin Whittington, Software Engineer DevOps. 7th July 2015

Disclaimer: This work and the technologies are not related to any Alfresco products.


There isn’t a great deal of documentation or “best practice” describing how ReactJS can receive updates via web-sockets.


I trawled the internet for about 10 minutes, which is my typical snapping point before yelling “Fine, I’ll do it myself!”. The first step was to read up on the Observer pattern, and I came across a picture and description that fit the bill. You can look at the pattern and description here:


I love UML diagrams. They are, at least to me, very clear and unambiguous. So this one leaped out at me. I decided to follow this diagram but removed the need for the ConcreteObservers as I had a different idea on how to simplify this even further.




All of the implementation, naturally, is in JS, and my JS skills have room for improvement (whose don’t?). I am happy to receive feedback based on what you see.

So I firstly implemented the Subject module as such:


'use strict';

var _ = require('lodash');

module.exports = {

    observerCollection: [],

    registerObserver: function(observer) {
        // To avoid over-subscription, check the element isn't already in the array
        var index = this.observerCollection.indexOf(observer);
        if (index === -1) {
            this.observerCollection.push(observer);
        }
    },

    unregisterObserver: function(observer) {
        _.remove(this.observerCollection, function(e) {
            return observer === e;
        });
    },

    notifyObservers: function(data) {
        for (var observer in this.observerCollection) {
            if (this.observerCollection.hasOwnProperty(observer)) {
                var subscriber = this.observerCollection[observer];
                if (typeof subscriber.notify === 'function') {
                    subscriber.notify(data);
                } else {
                    console.warn('An observer was found without a notify function');
                }
            }
        }
    }
};
Pretty simple really. Three methods: registerObserver(), unregisterObserver() and notifyObservers(). The last is the only method whose signature I had to change so that it accepted an argument. As you can see, notifyObservers() loops through each observer in the list and calls its notify() method, passing in the data the observer may or may not be interested in.


Next I created the Web-sockets module, which calls the notifyObservers() method as seen below:


'use strict';

var Subject = require('./Subject');

module.exports = {

    connection: new WebSocket('ws://localhost:8080/events'),

    connect: function() {
        var connection = this.connection;

        // Setup events once connected
        connection.onopen = function() {
            connection.send('Connection established');
        };

        connection.onerror = function(error) {
            console.error('WebSocket error: ' + error);
        };

        // This function reacts any time a message is received
        connection.onmessage = function(e) {
            console.log('From server: ' +;
        };
    }
};

So this module utilises the HTML5 web-sockets API, and this is as simple as it gets. Once a connection is established, the events the connection should respond to need configuring. There are three events to be concerned with: onopen, onerror and onmessage.


Onopen is fired only once and is handy for sending a message to the server. This message can then be checked in the logs to ensure a connection has been established.

Currently the onerror method only logs an error if the connection is lost or the connection could not be established.


The onmessage method is a callback method that fires every time a message is received from the server the web-socket is connected to. So from there it made the most sense, obviously, to call the notifyObservers() method passing in the data. The ‘e’ object is the full event object, but I was only concerned with the data received from the server. I followed the tutorial for HTML5 web-sockets at this link:


Finally I had to get a ReactJS component to become the observer. Rather than having a “middle-man” observer class I decided to make the ReactJS component actually care about the messages it will receive and respond accordingly.


ReactJS components have a lifecycle. Methods are called chronologically as the component is either mounted, rendered or destroyed. If you want to read more on this lifecycle follow the link below:


A ReactJS component typically manages its own state. When the state is updated the component is then re-rendered. So the first thing the component needs to do is to register itself with the registerObserver() method as shown below:

componentDidMount: function() {
    if (this.isMounted()) {
        Subject.registerObserver(this);
    }
}
Look at the line with the registerObserver() method. I am passing in the entire component with this. So now the component is added to the list of observers. Next, for the sake of cleanliness, I added the following method:

componentWillUnmount: function() {
    Subject.unregisterObserver(this);
}

When the component is unmounted from the DOM, the component is removed from the list. The final part was to make sure that the component implemented a notify method as such:

notify: function(data) {
    console.log('Receiving ' + data);
}
As part of the notifyObservers() loop, this method in the component will be called. Within each notify method, in every component registered as an observer, there will be filtering of the JSON; a component won't care about every message received. So for example we could filter on type as such:


{ type: "type I care about", action: "action to take" }
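A minimal sketch of what that per-component filter could look like (makeNotify and its callback are my own illustrative helpers, not part of ReactJS or the code above):

```javascript
'use strict';

// Sketch of per-component message filtering: each component ignores
// any message whose type it doesn't care about.
function makeNotify(interestingType, onAction) {
    return function notify(data) {
        var message = typeof data === 'string' ? JSON.parse(data) : data;
        if (message.type !== interestingType) return; // not for this component
        onAction(message.action);
    };
}

var received = [];
var notify = makeNotify('type I care about', function(action) {
    received.push(action);
});

notify({ type: 'type I care about', action: 'update list' });
notify({ type: 'something else', action: 'ignored' });
console.log(received);
```

In a real component this function would live on the component itself, updating state instead of pushing to an array.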


This is how I implemented web-sockets and messaging with ReactJS. We have adopted ReactJS for the development of our new internal DevOps tools.


Next steps will involve further development on what I have shown you. I may update this blog with how we did the filtering of messages and any other improvements but for now, this works well for us.


Thanks all!

Preparing for HTTP/2

Posted by mwhittington Employee Oct 13, 2016

By Martin Whittington, Software Engineer DevOps. 23rd July 2015


Disclaimer: This work and the technologies are not related to any Alfresco products


I claim to be a full-stack developer. I’m happy to work on UI all the way back to server side code. But working on what I call the “OS” layer is something that I don’t get the opportunity to do as often as I’d like. Some of what I’ve done may seem simple or over-complicated to the more trained “Ops” person but I don’t care. If you can do it better, you blog about it!


I was recently challenged to configure our web server to be ready for HTTP/2. Luckily, we are using Jetty for one of our internal projects, which, with its most recent releases, is compatible with and ready for the new protocol.

Here is a summary of the advantages of being ready for HTTP/2:

    • Being able to negotiate between using either HTTP/1.1, HTTP/2 or any other protocol


    • Backwards compatibility with HTTP/1.1


    • Decreased latency, improving page load times through several techniques such as:

      • Data compression of HTTP headers


      • Server push technologies


      • Fixing the head-of-line blocking problem in HTTP 1


      • Loading page elements in parallel over a single TCP connection

All of this information can be found at so I don’t claim to be an expert, just someone who does their research.


These are the steps I took to get us ready for one of our internal applications:


Install HAProxy:


HAProxy is a free open source solution that provides load balancing and proxying for TCP and HTTP-based applications. The OS I installed this on is OS X Yosemite version 10.10.4 so I used the following code with Homebrew (following the guide at ):

$ brew update

$ brew install haproxy

But this can be installed either via download or on the Linux command line. Make sure to get the latest version of HAProxy and that, dependent on your OS, the repos you download from are kept up to date. If you have any doubts you can download HAProxy direct from or call the following command:

$ sudo apt-get install haproxy

Next I had to create a haproxy.cfg file. A very short and sweet one at that. It’s basically a copy from the link above except for a couple of small changes:



global
    tune.ssl.default-dh-param 1024

defaults
    timeout connect 10000ms
    timeout client 60000ms
    timeout server 60000ms

frontend fe_http
    mode http
    bind *:80
    # Redirect to https
    redirect scheme https code 301

frontend fe_https
    mode tcp
    bind *:443 ssl no-sslv3 crt domain.pem ciphers TLSv1.2 alpn h2,http/1.1
    default_backend be_http

backend be_http
    mode tcp
    server domain


I haven’t provided the exact changes I made to my own config file, but I will explain what, at a basic level, needs changing. The first highlighted line needs to be updated to refer to the certificate file of your choice, so domain.pem should be replaced with something.pem.


The second highlighted line just needs updating to whatever IP and port your app is running on. Simples!


Wherever you decide to save this file is up to you, but I decided to check it into our main code branch. Eventually it will become part of our Chef configuration. Now to run HAProxy all I have to type is:

$ haproxy -f haproxy.cfg (or wherever your file is)

Now if you run the following command:

$ netstat -anlt

You should see HAProxy listening on ports *:80 and *:443, usually towards the top of the list.


Install Jetty 9.3.1.v20150714:


I downloaded the zip archive from and extracted it into /opt/jetty. This is also dependent on Java being correctly installed on your system, so make sure you have it installed! To test that Jetty is ready to be configured you can run the following command from the /opt/jetty directory:

$ java -jar start.jar

And you should get a screen like so:

2015-06-04 10:50:44.806:INFO::main: Logging initialized @334ms

2015-06-04 10:50:44.858:WARN:oejs.HomeBaseWarning:main: This instance of Jetty is not running from a separate {jetty.base} directory, this is not recommended.  See documentation at

2015-06-04 10:50:44.995:INFO:oejs.Server:main: jetty-9.3.0.v20150601

2015-06-04 10:50:45.012:INFO:oejdp.ScanningAppProvider:main: Deployment monitor [file:///opt/jetty-distribution-9.3.0.v20150601/webapps/] at interval 1

2015-06-04 10:50:45.030:INFO:oejs.ServerConnector:main: Started ServerConnector@19dfb72a{HTTP/1.1,[http/1.1]}{}

2015-06-04 10:50:45.030:INFO:oejs.Server:main: Started @558ms

Jetty is ready. The next task is to create a new Jetty base directory. It is mentioned throughout the Jetty documentation that the Jetty home directory, in this case /opt/jetty/, should be separated from where the apps are deployed. This is in fact very simple to set up and requires a few lines at the command prompt (again from the directory you have installed Jetty in):


$ mkdir {new name of base, whatever you want it to be; the name of the app is good}
$ cd {new base dir}
$ java -jar ../start.jar --add-to-start=deploy,http,http2c,https,server,ssl,websocket


This initialises the new directory with the modules listed after the --add-to-start parameter. The important one to notice is http2c: this is the module that speaks clear-text HTTP/2. Whether you need the other modules is for you to decide based on your requirements.


Download a Keystore file (optional)


If you have decided to install the SSL module as part of the command to configure your new Jetty base directory, you will need a keystore file. This can be downloaded from and, once downloaded, should be placed in /opt/jetty/{basedir}/etc. That’s it, that part’s simple.


To make the new base directory accessible by IDEs and configurable, it’s best to copy the start.ini file into your new base dir. I achieved this like so:


$ cp ../start.ini .


Then any changes you want to make (port changes etc) should be made in the newly copied ini file.


IDE Setup


There are plenty of IDEs out there that support creating application server configurations. I use IntelliJ Ultimate, so I followed the guide here:


There are obviously plenty of other guides out there based on the IDE you use. Once set up, passing in any JVM options you need, you should have a Jetty server ready for HTTP/2 with HAProxy ready to do its work.


Obviously a lot more work needs to go into configuring HAProxy but this blog covers enough so that you are at the starting line for HTTP/2!

Cloudformation is wonderful. It’s a great way of designing your infrastructure with code (JSON, YAML). But it does have its limitations, and there is feature disparity between the template DSL and the AWS CLI. Considering the work that’s gone into implementing this system I’m not surprised, but it’s still super frustrating.


What is Cloudformation? It’s an AWS feature that allows you to design an infrastructure containing one/some/all of the AWS resources available (EC2, S3, VPC etc.) with code. This "template" then becomes like any other piece of source code: versionable and testable, but more importantly, it gives the designer the ability to repeatably and reliably deploy a stack into AWS. With one template I can deploy ten identical yet unique stacks.


The issue I was challenged to solve was this: every time a stack we created with Cloudformation was deleted via the console (AWS UI), the deletion would fail. This was because an S3 bucket we created still contained objects. Using the UI there wasn't a way of purging the bucket before deletion (unless someone manually emptied the bucket via the S3 console), BUT when deleting a stack in the same state (with an S3 bucket that contains objects) using the aws-cli, it's possible to just pass a --purge flag with the delete bucket command. Obviously we wanted the stacks to behave the same regardless of the approach we took to manage them, and some people using the Cloudformation service may not be technical enough to use CLI commands.


So my solution: create a Lambda-backed Custom Resource that would manage the emptying and deletion of an S3 bucket.


A Custom Resource is similar to the other AWS resources that Cloudformation templates manage; it has a similar set of syntactic sugar. The essential difference with a custom resource is the necessity to manage and implement the communication between the custom resource, the service it's working with (currently Lambda and/or SNS) and the Cloudformation template.


The first step was to add the resource to the template. Here's what it looks like (both JSON and YAML):


"EmptyBuckets": {
  "Type": "Custom::LambdaDependency",
  "Properties": {
    "ServiceToken": {
      "Fn::Join": [
        "",
        [
          "arn:aws:lambda:",
          { "Ref": "AWS::Region" },
          ":",
          { "Ref": "AWS::AccountId" },
          ":function:emptyBucketLambda"
        ]
      ]
    },
    "BucketName": { "Ref": "S3BucketName" }
  }
}

And the YAML version:


EmptyBuckets:
  Type: Custom::LambdaDependency
  Properties:
    ServiceToken:
      Fn::Join:
        - ''
        - - 'arn:aws:lambda:'
          - Ref: AWS::Region
          - ":"
          - Ref: AWS::AccountId
          - ":function:emptyBucketLambda"
    BucketName:
      Ref: S3BucketName


The most important part of the resource is the "Properties" section. A "ServiceToken" element must be declared, and this must be the ARN (Amazon Resource Name) of either an SNS topic or a Lambda function. In the examples above we use references to the region and account the stack is deployed to, as that is where the Lambda function has also been uploaded. After the "ServiceToken" element is declared, any values can be passed in as primitive types, objects or arrays. These are the arguments that the Lambda function will pick up for the code to handle as desired. As seen in the example, we simply pass the "BucketName", as that's all the Lambda needs to perform its tasks. The custom resource will fire upon a state change of the template (Create, Update, Delete), and within the Lambda we can decide which states we are interested in and handle them accordingly.
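To make that concrete, here is roughly what the event delivered to the Lambda looks like for the resource above (all values are invented for illustration):

```javascript
'use strict';

// Sketch of a Cloudformation custom-resource event as the Lambda
// sees it (field values are invented for illustration).
var event = {
    RequestType: 'Delete',
    ResponseURL: '',
    StackId: 'arn:aws:cloudformation:example-stack-id',
    RequestId: 'unique-request-id',
    LogicalResourceId: 'EmptyBuckets',
    ResourceProperties: {
        ServiceToken: 'arn:aws:lambda:example-function-arn',
        BucketName: 'my-example-bucket'   // the extra parameter we declared
    }
};

// The handler reads its arguments straight off ResourceProperties:
console.log(event.ResourceProperties.BucketName);
```

Everything after ServiceToken in the template's Properties block lands in ResourceProperties, which is why the handler below can just read event.ResourceProperties.BucketName.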


Moving onto the Lambda code now, we can see and point to the "BucketName" parameter being passed in:


'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context) => {
    if (!event.ResourceProperties.BucketName) {
        return sendResponse(event, context, "FAILED", null, "BucketName not specified");
    }
    var bucketName = event.ResourceProperties.BucketName;
    var physicalResourceId = `${bucketName}-${event.LogicalResourceId}`;
    if (event.RequestType === 'Delete') {
        console.log(JSON.stringify(event, null, '  '));
        // Is the bucket versioned?
        s3.getBucketVersioning({ 'Bucket': bucketName }, (err, data) => {
            if (err) return sendResponse(event, context, "FAILED", null, err);
            console.log('Versioning status: ', JSON.stringify(data));
            switch (data.Status) {
                case "Enabled":
                    // Initial params without markers
                    return emptyVersionedBucket({
                        'Bucket': bucketName
                    }, event, context, physicalResourceId);
                default:
                    // Initial params without continuation
                    return emptyBucket({
                        'Bucket': bucketName
                    }, event, context, physicalResourceId);
            }
        });
    } else return sendResponse(event, context, "SUCCESS", physicalResourceId);
};


The ResourceProperties value in the event object contains the arbitrary parameters the Lambda needs to function. If, for some reason, the BucketName param isn't present, we send a response back to the pre-signed S3 URL sent with the event to notify the Cloudformation process that the function failed. We create our own physicalResourceId because, if it is not specified, the sendResponse function will use the logStreamName, which can point to more than one resource and cause issues. As we do not have any logic to action when the request type is 'Create', we simply send a response back stating that all was successful. If the request type is 'Delete', we log the event details and call our emptyBucket function as shown below:


function emptyBucket(objParams, event, context, physicalResourceId) {
    console.log("emptyBucket(): ", JSON.stringify(objParams));
    s3.listObjectsV2(objParams, (err, result) => {
        if (err) return sendResponse(event, context, "FAILED", physicalResourceId, err);

        if (result.Contents.length > 0) {
            var objectList = => ({ 'Key': c.Key }));
            console.log(`Deleting ${objectList.length} items...`);
            var obj = {
                'Bucket': objParams.Bucket,
                'Delete': {
                    'Objects': objectList
                }
            };
            s3.deleteObjects(obj, (e, data) => {
                if (e) return sendResponse(event, context, "FAILED", physicalResourceId, e);
                console.log(`Deleted ${data.Deleted.length} items ok.`);

                // If there are more objects to delete, do it
                if (result.IsTruncated) {
                    return emptyBucket({
                        'Bucket': obj.Bucket,
                        'ContinuationToken': result.NextContinuationToken
                    }, event, context, physicalResourceId);
                }
                return checkAndDeleteBucket(objParams.Bucket, event, context, physicalResourceId);
            });
        } else return checkAndDeleteBucket(objParams.Bucket, event, context, physicalResourceId);
    });
}

So first we list the objects in the bucket and, if there was any error, bail out sending a response. The listObjectsV2 method will only return a maximum of 1,000 items per call, so if we have more we need to make subsequent requests with a continuation token. Next we create a list of objects to delete in the format required by the deleteObjects method in the aws-sdk. Again, if there is an error we send a response stating so. Otherwise, the first batch of items was deleted, and we then check whether the listed objects result was truncated. If so, we make a recursive call to the emptyBucket function with the continuation token needed to get the next batch of items.


You may have noticed logic based on whether the S3 bucket has versioning enabled or not. If the bucket has versioning enabled, we need to handle the listing and deletion of objects a little differently as shown below:


function emptyVersionedBucket(params, event, context, physicalResourceId) {
    console.log("emptyVersionedBucket(): ", JSON.stringify(params));
    s3.listObjectVersions(params, (e, data) => {
        if (e) return sendResponse(event, context, "FAILED", physicalResourceId, e);
        // Create the object needed to delete items from the bucket
        var obj = {
            'Bucket': params.Bucket,
            'Delete': { 'Objects': [] }
        };
        var arr = data.DeleteMarkers.length > 0 ? data.DeleteMarkers : data.Versions;
        obj.Delete.Objects = => ({
            'Key': v.Key,
            'VersionId': v.VersionId
        }));

        return removeVersionedItems(obj, data, event, context, physicalResourceId);
    });
}


function removeVersionedItems(obj, data, event, context, physicalResourceId) {
    s3.deleteObjects(obj, (x, d) => {
        if (x) return sendResponse(event, context, "FAILED", null, x);

        console.log(`Removed ${d.Deleted.length} versioned items ok.`);
        // Was the original request truncated?
        if (data.IsTruncated) {
            return emptyVersionedBucket({
                'Bucket': obj.Bucket,
                'KeyMarker': data.NextKeyMarker,
                'VersionIdMarker': data.NextVersionIdMarker
            }, event, context, physicalResourceId);
        }

        // Are there markers to remove?
        var haveMarkers = d.Deleted.some(elem => elem.DeleteMarker);
        if (haveMarkers) {
            return emptyVersionedBucket({
                'Bucket': obj.Bucket
            }, event, context, physicalResourceId);
        }

        return checkAndDeleteBucket(obj.Bucket, event, context, physicalResourceId);
    });
}

Here we list the object versions, which returns a list of objects with their keys and version IDs. Using this data we make a request to delete the objects, similar to the way we deleted objects from an unversioned bucket. Once all versioned files have been deleted we move on to deleting the bucket.


function checkAndDeleteBucket(bucketName, event, context, physicalResourceId) {
    // Bucket is empty, delete it
    s3.headBucket({ 'Bucket': bucketName }, x => {
        if (x) {
            // Chances are the bucket has already been deleted:
            // if we are here, having listed and deleted some objects,
            // the deletion of the bucket has already taken place,
            // so return SUCCESS (the error could be either 404 or 403)
            return sendResponse(event, context, "SUCCESS", physicalResourceId, x);
        }
        s3.deleteBucket({ 'Bucket': bucketName }, error => {
            if (error) {
                console.log("ERROR: ", error);
                return sendResponse(event, context, "FAILED", physicalResourceId, error);
            }
            return sendResponse(event, context, "SUCCESS", physicalResourceId, null, {
                'Message': `${bucketName} emptied and deleted!`
            });
        });
    });
}

The headBucket function is a very useful API for checking that the bucket actually exists. If it does, we call the deleteBucket method, as at this point the bucket should be empty. If all goes well we send a response that the bucket has been emptied and deleted! The sendResponse method is shown here for reference only, as it was taken (with some minor modifications) from The Tapir's Tale: Extending CloudFormation with Lambda-Backed Custom Resources:


function sendResponse(event, context, status, physicalResourceId, err, data) {
    if (err) console.log("ERROR: ", err);
    var json = JSON.stringify({
        StackId: event.StackId,
        RequestId: event.RequestId,
        LogicalResourceId: event.LogicalResourceId,
        PhysicalResourceId: physicalResourceId || context.logStreamName,
        Status: status,
        Reason: "See details in CloudWatch Log: " + context.logStreamName,
        Data: data || { 'Message': status }
    });
    console.log("RESPONSE: ", json);

    var https = require('https');
    var url = require('url');

    var parsedUrl = url.parse(event.ResponseURL);
    var options = {
        hostname: parsedUrl.hostname,
        port: 443,
        path: parsedUrl.path,
        method: "PUT",
        headers: {
            "content-type": "",
            "content-length": json.length
        }
    };

    var request = https.request(options, response => {
        console.log("STATUS: " + response.statusCode);
        console.log("HEADERS: " + JSON.stringify(response.headers));
        context.done();
    });

    request.on("error", error => {
        console.log("sendResponse Error: ", error);
        context.done();
    });

    request.write(json);
    request.end();
}


Now, whenever we build a stack, we are confident that any bucket attached to the stack will be cleared out and deleted. A future improvement could be a scalable queueing system that fires the same Lambda per queue based on how many items there are in the buckets; potentially we could have tens of thousands of objects that need deleting, and we can't be bottlenecked by this.


Anyways, thanks for taking the time to read this and as always I welcome your comments and contributions. Laters!

Docker is really under the spotlight at the moment, so we had a look at it to determine how it could help our Continuous Integration (CI) pipeline.

This article intends to show where we started and what benefits we are getting from Docker.

Our infrastructure:

    • Bamboo CI Server


    • 16 Linux build agents


    • 4 Windows build agents


    • 1 Mac build agent

Most of our builds run on the Linux machines. Mac and Windows are dedicated to specific tasks and projects.

The complex combination of database versions:

We decided to start with a simple yet highly important use case. As part of the automated test jobs that our CI server runs, we test Alfresco against all the Relational Database Management Systems (RDBMS) that we support.

That includes: PostgreSQL, MySQL, MariaDB, IBM DB2, Oracle DB, and Microsoft SQL Server.

We have access to a pool of machines that host different RDBMS/version pairs. Multiple running builds can access the same machine simultaneously, so we use a specific database name per build agent to avoid concurrency issues.
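As a tiny illustration of the idea (the naming scheme here is hypothetical, not our exact convention), a per-agent database name can be derived mechanically:

```javascript
'use strict';

// Sketch: derive a unique database name per build agent so that
// concurrent builds on the shared RDBMS hosts never collide.
// The naming scheme is illustrative only.
function dbNameFor(project, agentId) {
    return project + '_agent' + String(agentId).padStart(2, '0');
}

console.log(dbNameFor('alfresco', 7));
```

Each agent then creates and drops only its own database, never touching a neighbour's.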

As we grow, the number of Alfresco releases increases, and so does the nebula of supported RDBMS. Adding new machines and maintaining the pool raises some interesting concerns:

    • Performance issues and race conditions due to the latency between the build agent and the database


    • The cost of hosting the pool of machines that are not used all the time


    • The process and the time spent to setup a new machine

Complex test environment





Docker makes it easier:


The Docker approach gives us more flexibility and significantly increases our capacity to test Alfresco with new versions of any supported database.


Switching to the test environment based on Docker was fast and fairly easy. All we needed was to install Docker on our build agents and simply change the build command to override all the JDBC connection settings.
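In spirit, that override amounts to building a set of JDBC properties pointing at the local container. A rough sketch (the property names, URL, and credentials are illustrative, not Alfresco's actual build parameters):

```javascript
'use strict';

// Sketch: compose Maven-style property overrides that point the test
// run at a database container on localhost. All names are illustrative.
function jdbcOverrides(driver, url, user, password) {
    var props = {
        'db.driver': driver,
        'db.url': url,
        'db.username': user,
        'db.password': password
    };
    return Object.keys(props).map(function(k) {
        return '-D' + k + '=' + props[k];
    }).join(' ');
}

console.log(jdbcOverrides(
    'org.postgresql.Driver',
    'jdbc:postgresql://localhost:5432/alfresco',
    'alfresco',
    'alfresco'
));
```

The same build command works for every RDBMS; only the driver and URL change per Docker image.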


Not only are we now running the whole stack locally on the build agents, freeing us from any potential network latency and concurrent use of the database, but everything is isolated, and we can now run any version of any RDBMS we want without having to maintain live remote instances.


Last but not least, even though we drop and recreate the database for our tests, this solution ensures that we always run in a fresh, unaltered testing environment.



Simplification with Docker containers




Wherever possible, we use the official images provided on Docker Hub. For Oracle DB and IBM DB2 we had to create our own images and store them in our private registry.


What’s next?


As you may know, native support for Windows Server containers is on the roadmap for Windows Server 2016, but something even better has been recently announced: SQL Server will be released on Linux, and Microsoft will even distribute Docker images!


We are looking forward to migrating that last piece of our testing infrastructure to Docker.