All Places > Alfresco Content Services (ECM) > Blog > 2014 > November
By Dave Draper, Kevin Roast, Erik Winlöf and David Webster


Over the last 4 years (from versions 4.0 through to 5.0) there have been a number of changes in relation to Share development and customization.

From an outside perspective the decisions that have been made might appear confusing or frustrating depending upon your particular use case of Share. If you've written Share extensions or customizations for previous versions then you might have hit breaking changes between major releases and you might be fearful of it happening again.

It might seem that we don't care about these issues. We’re sorry if you have experienced such problems but we can assure you that we try our very best to move Share forwards without breaking what has gone before.

In this post we're going to highlight how things are better than they would have been if different decisions had been made.

Decision #1 - Extensibility model

Back in 2010 we introduced an extensibility model that enabled us to dynamically change a default Share page via an extension module. This was an initially coarse customization approach that enabled Surf Components to be added, removed or replaced as well as our custom FreeMarker directives to be manipulated.

This in turn paved the way for us to refactor the Share WebScripts in 4.2 to remove the WebScript .head.ftl files and push the widget instantiation configuration into the JavaScript controller.

If we hadn't taken this approach then the method of customization would still be copying and pasting complete WebScript components to the 'web-extension' path.

This wouldn't have stopped breaking changes between versions (e.g. the REST API changing, the Component calling a different WebScript, different properties or URL arguments being passed, etc) and would have required constant manual maintenance of those WebScripts with code from service pack fixes as necessary.

Decision #2 - New Java based client

We know that a few customers still use heavily customised versions of Explorer and the fact that we've finally removed it from 5.0 is going to cause pain to a few.

It has been suggested at various points over the last 4 years that we could create a brand new client to replace Share - even though Share does not yet have complete feature parity with Explorer. We recognise now that we need to invest in Share and improve it over time since creating a new client would ultimately introduce more problems than it solves.

However, when calls were strongest for writing a new client, the recommendation was to move to either GWT or Vaadin. We're fairly sure all those people that lament the fact that this week we're not using Angular would be horrified if they were now stuck with a Java based client that doesn't have feature parity with Share (let alone Explorer).

A new Java based client would have guaranteed that all those customizations would have to be re-written from scratch.

Decision #3 - Aikau

In our opinion, some people miss the point of Aikau. We often field questions along the lines of:

  • 'Why do we have to use Dojo?' (you don't)

  • 'Why have you written your own JavaScript framework?' (we haven't)

  • 'Why aren't you using Angular/Ember/React/Web Components?' (many good reasons: customization and configuration requirements, framework stability, performance, etc.)

As was said on the Product Managers' Office Hours last week: 'web technologies change every 3 years'. It's probably even more often than that. In the short time that Share has existed there has been a constant changing of the guard for 'best web development framework'.

Even if we started again tomorrow with Angular (the current populist choice), we'd be doing so in the knowledge that there will be breaking changes when Angular version 2.0 is released next year and that in a few years we'll (allegedly) all be using Web Components anyway.

The long and short of it is that unless you're writing an application that is only going to have the lifespan of the carton of milk in your fridge then binding yourself to a single JavaScript library will be a mistake.

With Aikau we're trying to mitigate that problem through declarative page definition. Yes, you have to write a bit of AMD boilerplate but really the choice of JavaScript framework is entirely in your hands.
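To make the point concrete, here is a hedged sketch of what "declarative page definition" means in practice (the widget names follow the Aikau convention but this is an illustrative model, not a definitive page definition from the product):

```javascript
// A declarative Aikau-style page model: the page is described as data,
// not as imperative DOM code, so the rendering framework behind each
// widget can change without the model ever changing.
var pageModel = {
  services: ['alfresco/services/DocumentService'],
  widgets: [
    {
      name: 'alfresco/layout/VerticalWidgets',
      config: {
        widgets: [
          { name: 'alfresco/html/Label', config: { label: 'Hello' } }
        ]
      }
    }
  ]
};
```

Because the model only names widgets and their configuration, swapping the implementation behind 'alfresco/html/Label' (YUI2, Dojo, or anything else) leaves every page that references it untouched.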

Aikau isn't tied to Share (theoretically it's not even tied to Surf) so if we ever do switch to a new client then the existing widgets and your custom widgets will still be applicable. We're also evaluating breaking Aikau out of the Share development life-cycle so that we can make new widgets and bug fixes available faster.

Will there be more breaking changes?

We've been talking about re-writing the Document Library using Aikau for a while now (and have a pretty good prototype already) along with the other site pages. However, just as the old header Component still exists in Share, the original pages will still remain so you'll always be able to configure Share to use the current YUI2 Document Library with your customizations.

There is also a lot of talk about re-writing the rest of Share in Aikau. Whilst we think this is ultimately a good idea, we don't think it's worthwhile until we've gone to the trouble of evaluating and improving the current design... do you really only want a carbon copy of the current Wiki, Blogs and Data List pages? We'll get there in time, but we're not sure there's any great rush to get this all done for 5.1.


Ultimately, any web interface needs to keep modifying its underlying technology. Aikau gives us a way to do that with the least possible pain. We understand that some developers have gone through some suffering with breaking changes in the last few releases, but through the use of Aikau we expect this pain to decrease and customisations to become more stable, transferable and powerful.

We’re working to add transparency to the development process that will hopefully make what we’re working on more obvious and make it easier for external developers to predict what changes there may be in future Share releases.


This afternoon I saw a Tweet asking if there were any examples of how to use the AlfDocumentPreview widget. Aikau documentation is currently very thin on the ground (as you're no doubt painfully aware) so I thought it would be worth writing up a quick blog post to describe how we use it and how you can too. If there's anything that you want more information on then it's worth Tweeting me @_DaveDraper with a request - I can't guarantee that I'll be able to write it up as a blog post, but I will do my best as time allows!




Most of the Aikau widgets are completely new, some are 'shims' around existing YUI2 based code and a few are direct ports of YUI2 widgets. The AlfDocumentPreview widget (and its associated plugins) is a good example of a ported widget. The original code was copied into an Aikau widget definition and then most of the YUI2 code was replaced, bugs were fixed and thorough JSLinting applied.


You might wonder why we'd go to such lengths when a widget already existed. This essentially gets right to one of the fundamental points of Aikau as a framework. The code inside the widget really isn't important - what's important is defining an interface to a widget that performs a single specific task that can be referenced in a declarative model. The widget becomes an API to a piece of UI functionality - in this case, previewing a document.


Every Aikau page model that references it need never change - even if we decide to completely rewrite the widget to use jQuery, Angular, Web Components or whatever happens to be the current flavour of the month, the pages will function as they always have.


Where is the previewer used?

The rule of thumb that I tell anyone who asks me is that if Alfresco has used an Aikau widget in a product feature then it's fair game for use in your application or extension. There are a number of widgets that are definitely beta quality (and we call these out in the JSDoc) which might be subject to change, but once a widget has been used in a feature then we're obliged to maintain backwards compatibility and fix any bugs with it.


The AlfDocumentPreview is currently being used in the new filtered search feature that is part of the 5.0 release (and you'll also find it used in the Film Strip View that is part of the prototype Aikau based Document Library which is not yet a product feature!). If you click on the thumbnail of any document (that is not an image) then a new dialog is opened that contains a preview of that document. The preview will render the appropriate plugin (e.g. PDF.js, video, audio, etc) for the content type.


The filtered search page in Alfresco Share 5.0 


A preview of a search result 


How it works

Each row in the search results is an AlfSearchResult widget that contains a SearchThumbnail widget. When you click on the thumbnail widget (of the appropriate type) then a payload is published on the 'ALF_CREATE_DIALOG_REQUEST' topic to which the AlfDialogService subscribes. The payload contains a JSON model of widgets to render in the dialog when it is displayed. The model is an AlfDocument widget that contains an AlfDocumentPreview widget.


widgetsContent: [
   {
      name: 'alfresco/documentlibrary/AlfDocument',
      config: {
         widgets: [
            {
               name: 'alfresco/preview/AlfDocumentPreview'
            }
         ]
      }
   }
]

The point of the AlfDocument widget is to ensure that all of the relevant Node data is available to pass to a child widget (in this case the AlfDocumentPreview, but it could be something else) so that it can do something with that data.


One of the key things about the search page is that search requests only return a very limited amount of data about each node (unlike requests from the Document Library which are slower but contain much more information such as all the properties and the actions permitted for the current user).


An additional XHR request is required to obtain all the data required to preview the node. The payload published when clicking on the thumbnail also contains the publication to make once the dialog has been displayed:

publishOnShow: [
   {
      publishTopic: 'ALF_RETRIEVE_SINGLE_DOCUMENT_REQUEST',
      publishPayload: {
         nodeRef: this.currentItem.nodeRef
      }
   }
]


The 'ALF_RETRIEVE_SINGLE_DOCUMENT_REQUEST' is serviced by the DocumentService and the AlfDocument subscribes to successful document loaded publications (note that the SearchThumbnail will have a 'currentItem' attribute set containing the limited data returned by the search request which will contain a 'nodeRef' attribute).
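The important point is that the widgets and the service only ever agree on topic names, never on each other. As a minimal sketch (this is not the real Aikau pub/sub implementation, and the success topic name here is an illustrative assumption - only the request topic is named above), the decoupling works like this:

```javascript
// Minimal topic-based pub/sub sketch showing how the thumbnail, the
// DocumentService and the AlfDocument stay decoupled.
var topics = {};
function subscribe(topic, handler) {
  (topics[topic] = topics[topic] || []).push(handler);
}
function publish(topic, payload) {
  (topics[topic] || []).forEach(function (handler) { handler(payload); });
}

var received = [];

// The "DocumentService" answers document requests. In reality it would
// make an XHR to the Repository; here the response is faked.
subscribe('ALF_RETRIEVE_SINGLE_DOCUMENT_REQUEST', function (payload) {
  publish('DOCUMENT_LOADED_SUCCESS', {
    nodeRef: payload.nodeRef,
    name: 'example.pdf' // hypothetical node data
  });
});

// The "AlfDocument" waits for node data before processing child widgets.
subscribe('DOCUMENT_LOADED_SUCCESS', function (doc) {
  received.push(doc);
});

// The "thumbnail click" kicks everything off.
publish('ALF_RETRIEVE_SINGLE_DOCUMENT_REQUEST', {
  nodeRef: 'workspace://SpacesStore/123'
});
```

Neither subscriber holds a reference to the publisher, which is exactly why a widget can be swapped out without touching the page model.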


The AlfDocument only processes its child widgets once it has some data about a specific node. Once the DocumentService has published the node data, the AlfDocument will process the AlfDocumentPreview widget. From that point on the AlfDocumentPreview will use the data that has been provided to create the appropriate plugin to preview the document.


Other Ways to use AlfDocumentPreview

You don't have to use an AlfDocumentPreview within an AlfDocument, you just need to ensure that you provide it with node data as the 'currentItem' configuration attribute. So if you already have all the data (for example, if you've made a request from within your JavaScript controller, or if you are accessing it from a list generated from the REST API used to service the Document Library) then you can configure it into the widget directly.
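As a hedged sketch of that direct configuration (the shape of the 'currentItem' node data here is an illustrative assumption - in practice it is the JSON returned for the node by the Document Library REST API):

```javascript
// Configuring AlfDocumentPreview directly with node data, bypassing
// the AlfDocument wrapper. The currentItem contents are a hypothetical
// subset of real node data, shown only to illustrate the idea.
var previewWidget = {
  name: 'alfresco/preview/AlfDocumentPreview',
  config: {
    currentItem: {
      nodeRef: 'workspace://SpacesStore/7d829b79-c9ba-4bce-a4df-7563c107c599',
      node: {
        mimetype: 'application/pdf' // used to pick the preview plugin
      }
    }
  }
};
```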


The following is an example of a simple Aikau page model that previews a document (obviously you need to swap in your own nodeRef!):


model.jsonModel = {
   services: ['alfresco/services/DocumentService'],
   widgets: [
      {
         name: 'alfresco/documentlibrary/AlfDocument',
         config: {
            nodeRef: 'workspace://SpacesStore/7d829b79-c9ba-4bce-a4df-7563c107c599',
            widgets: [
               {
                  name: 'alfresco/preview/AlfDocumentPreview'
               }
            ]
         }
      }
   ]
};


You don't need to display it in a dialog either.


Once again this should hopefully demonstrate how you can re-use Aikau widgets to achieve very specific objectives - try using the YUI2 previewer in isolation and you'll understand why it's been ported!



Hopefully this has provided a useful description of how we're currently using the AlfDocumentPreview widget (as well as how we've configured pub/sub in the filtered search page to link widgets and services). If anything isn't clear or you have further questions then please comment below.


The Aikau framework provides a simpler way of creating pages in Share where a page can be declaratively defined as a JSON model in a WebScript. This avoids the necessity to create the XML and FreeMarker files for Surf Pages, Templates and Components.
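As a minimal sketch of what that looks like (the file name is illustrative, and in a real WebScript the 'model' object is supplied by Surf - it is stubbed here so the sketch is self-contained; a matching descriptor XML binds the controller to a URL):

```javascript
// Sketch of an Aikau page WebScript controller (e.g. example.get.js).
// In a real WebScript 'model' is provided by the Surf framework.
var model = {};

model.jsonModel = {
  widgets: [
    {
      name: 'alfresco/html/Label',
      config: { label: 'Example site page' }
    }
  ]
};
```

That single JSON model replaces the separate Page, Template and Component XML files that a classic Surf page would need.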


I've been asked how you would create an Aikau page such that it is available as a site page in Share (e.g. a page that can be added via the Site Customization tooling in Share), so I thought it would be worth capturing this information in a blog post. This is one of those interesting use cases where the old and new approaches of Share development intersect...




In Share we use 'pre-sets' configuration to provide default User and Site dashboards. These are XML configurations that define the Surf objects that can be used to 'cookie-cut' new page instances (which are then stored on the Alfresco Repository).


The pre-sets can be found in the presets.xml file and the 'site-dashboard' pre-set contains a property ('sitePages') that defines the initial set of pages for each site. Once the site is created, a new Surf Page instance is created on the Repository, and when you add or remove pages from the site it is this property that is updated (in the instance, not the pre-set).
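For illustration, the 'sitePages' property is a JSON list embedded in the pre-set XML. As a hedged sketch (the surrounding elements are abbreviated and the exact default page list varies by version):

```xml
<preset id="site-dashboard">
   <properties>
      <!-- JSON list of pageIds that each new site starts with -->
      <sitePages>[{"pageId":"documentlibrary"}]</sitePages>
   </properties>
</preset>
```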


The 'Customize Site' page lists both the available 'Site Pages' and the 'Current Site Pages' and the list of pages to choose from is defined in the 'share-config.xml' file under the 'SitePages' condition, e.g:

<config evaluator='string-compare' condition='SitePages'>
    <page id='calendar'>calendar</page>
    <page id='wiki-page'>wiki-page?title=Main_Page</page>
    <page id='documentlibrary'>documentlibrary</page>
    <page id='discussions-topiclist'>discussions-topiclist</page>
    <page id='blog-postlist'>blog-postlist</page>
    <page id='links'>links</page>
    <page id='data-lists'>data-lists</page>
</config>

It's possible to extend this configuration to include additional pages, however the underlying code currently assumes that each page is mapped to a Surf object. This means that if you want to add in an Aikau page to this list then you need to create a Surf Page object (even though it won't actually be used to render the page at all).



Say you want to add in a new Aikau page called 'Example'. You need to create a Share configuration extension that defines the new page (one way of doing this would be to create a 'share-config-custom.xml' file that you place in the 'alfresco/web-extension' classpath).


The file would contain the following XML:

  <config evaluator='string-compare' condition='SitePages' replace='false'>
      <page id='example'>dp/ws/example</page>
  </config>


But you'd also need to create a Surf Page XML file (placed in the  'alfresco/site-data/pages' classpath) containing:

<?xml version='1.0' encoding='UTF-8'?>
<page>
   <title>Example Site Page</title>
   <description>Example of adding a new site page</description>
</page>


Which would result in the following being shown when customizing a site:


The Customize Site page showing an Aikau page. 

The latest community release of RM 2.3 is now available and provides support for 5.0.b.

There are, however, a couple of new features that are worth mentioning.

Move In-Place Records

It is now possible to move an in-place record once it has been declared.

The movement of an in-place record does not affect the location or management of the record in the file plan; it just allows the user to reorganise their working environment, placing the in-place record wherever makes sense to them.

[vsw id='AAgQ_Lzueps' source='youtube' width='600' height='400' autoplay='no']

Improved Rejected Record Handling

When a record is rejected by the records management team it is now easy for the user to understand why and take action.

Once understood the rejection warning can be removed from the document allowing users to later resubmit the document as a record if the circumstances are right.

[vsw id='tJRQZb-GTNA' source='youtube' width='600' height='400' autoplay='no']

Annotated Behaviours

Support for annotated behaviours has been hiding in RM for a little while now. 2.3.b sees this generally useful feature moved into 5.0.b and made available for everyone to use.

Look out for a separate post with more details.

Is that it?

No, that isn't everything, but the really interesting stuff isn't quite ready and so has been disabled in this build.

I'm not even going to talk about it yet, you will just have to wait for the next community release later this year!


In a previous post I showed how to use an Apache reverse-proxy with a Hazelcast enabled Alfresco Share cluster to load-balance between multiple Share instances in Tomcat. Since moving to Linux as a development platform I thought I would revisit the set-up using the latest version of Apache and also add transparent failover to avoid interruption for users when a node goes down.


Share Cluster

At least two instances of Share are needed - with config modified in tomcat/conf/server.xml so they are using different ports and AJP route names. For each server enable the AJP Connector and Engine:


<!-- Define an AJP 1.3 Connector -->

<Connector port='8010' protocol='AJP/1.3' redirectPort='8444'

connectionTimeout='20000' URIEncoding='UTF-8' />


<!-- You should set jvmRoute to support load-balancing via AJP ie -->

<Engine name='Catalina' defaultHost='localhost' jvmRoute='tomcat1'>


See my earlier blog post for more details on doing this, but it's really just a case of duplicating a working Tomcat+Share instance and changing the port numbers. Of course you can use instances on separate machines or VMs to avoid some of the port twiddling. On each node also enable Hazelcast Share clustering via tomcat/shared/classes/web-extension/custom-slingshot-application-context.xml as per the previous post again as this also hasn't changed since Alfresco 4.0. There is an example custom-slingshot-application-context.xml.sample provided in the Alfresco distribution which includes this config.
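For example, the second node's tomcat/conf/server.xml would differ only in its port numbers and route name (the redirectPort value here is just an illustrative assumption; the AJP port and jvmRoute match the Apache BalancerMember entries used later in this post):

```xml
<!-- Second node: same config as the first, different ports and jvmRoute -->
<Connector port='8011' protocol='AJP/1.3' redirectPort='8445'
           connectionTimeout='20000' URIEncoding='UTF-8' />

<Engine name='Catalina' defaultHost='localhost' jvmRoute='tomcat2'>
```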


Apache 2.4 on Linux

Now install Apache 2.4 - for Ubuntu I used:

sudo apt-get install apache2

This ends up in /etc/apache2. Enable the various Apache2 modules we need to use the reverse proxy via AJP:

sudo a2enmod proxy_balancer

sudo a2enmod proxy_ajp

sudo a2enmod lbmethod_byrequests

Now edit the Apache default site config to add the proxy configuration. Open the /etc/apache2/sites-available/000-default.conf file. Inside the root <VirtualHost *:80> section add the following:



<Proxy balancer://app>
   BalancerMember ajp://localhost:8010/share route=tomcat1
   BalancerMember ajp://localhost:8011/share route=tomcat2
</Proxy>

ProxyRequests Off
ProxyPassReverse /share balancer://app
ProxyPass /share balancer://app stickysession=JSESSIONID|jsessionid



You may need to change the port and 'route' values if you aren't using exactly the same settings as me. You can also add more nodes here if you wish. I also set:

ServerName localhost

to stop the various warnings on starting Apache. Now start Apache:

sudo service apache2 restart

The service should start cleanly; if there is an error then look in the log for info (cat /var/log/apache2/error.log) as it may just be missing module dependencies. Start all the Share Tomcat instances - you will see them connect to each other in the Hazelcast INFO log, e.g.


Nov 07, 2014 12:43:44 PM com.hazelcast.cluster.ClusterManager
INFO: []:5802 [slingshot]
Members [2] {
    Member []:5801
    Member []:5802 this
}


Now you can point your browser(s) at localhost/share directly. Behind the scenes Apache will automatically load balance out to one of the Share instances. The Share clustering magic will keep things in sync - try creating a site and modifying the dashboard configuration. Another user can immediately visit that dashboard and will see the same configuration, lovely.

However, if a node goes down, any users attached to that node are logged out as they are bounced onto another node by Apache. It's great that the users can still access Share, but not so great that they get interrupted and have to log in again. We want to add transparent failover so the user is not aware of a server crash at all! With clustering, all the servers are the same, so the loss of a server should not interrupt the service.


Tomcat Session Replication

Just two steps are needed to enable Session replication between our two Tomcat servers. For all nodes, edit tomcat/webapps/share/web.xml and add the following element into the web-app section:

<distributable/>


Then for all nodes, enable the following section in the tomcat/conf/server.xml config:

 <!--For clustering, please take a look at documentation at:

     /docs/cluster-howto.html (simple how to)

     /docs/config/cluster.html (reference documentation) -->

<Cluster className='org.apache.catalina.ha.tcp.SimpleTcpCluster'/>


Restart the Share nodes and you will now see something like this:

Nov 10, 2014 11:21:45 AM org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded

INFO: Replication member added:org.apache.catalina.tribes.membership.MemberImpl[tcp://{127, 0, 1, 1}:4000,{127, 0, 1, 1},4000, alive=1009, securePort=-1, UDP Port=-1, id={-122 10 86 -116 -50 35 77 111 -70 -58 -34 49 -128 -95 -29 -111 }, payload={}, command={}, domain={}, ]

NOTE: You may need to clear the internet cache, delete cookies and restart browser instances to ensure a clean startup the first time after making these changes. You may see odd behaviour if you don't do this. Log in to Share with a couple of different browsers and examine the cookies (using Chrome Developer Tools or Firebug etc.) to see which node each is currently attached to - you will see something like:




Value: 5ACD598FD19B6C04FE7EECC1664B69C8.tomcat1


Host: localhost


Path: /share/


Then you can terminate tomcat1 and continue to use Share in that browser - the user experience continues without interruption. If you examine the cookies again you will see something like this:




Value: DB984B4A51FB30B5E14B5ED71B65CFD4.tomcat2


Host: localhost


Path: /share/


So Apache has switched over to tomcat2 and, because of the Session replication, no logout occurs, nice!


This is a basic set-up and there are a lot of options to improve Tomcat replication. For high-performance production systems there are better choices than raw Tomcat replication; for our Alfresco Cloud offering we use haproxy and memcached.

This post follows on from the getting started post which you should run through first. It includes some free software that you may need to install before you can proceed.


Earlier I introduced the basic functionality of the testing framework we have created for Aikau. In addition to testing Aikau on a couple of browsers within a predefined virtual machine, we have some options to test locally and to generate a coverage report on the spread of the Aikau testing. Here you will find some of these use cases described.

Local testing - why?

Testing against the browsers in a virtual machine is great because any individual can test against the same known set of browsers, irrespective of the platform they are using themselves. You might want to test a specific browser on a specific platform to analyse a platform-specific error, and doing a local test could be the best way to do that. More importantly, however, the virtual machine testing is a little hidden. Whilst it is possible to interrogate selenium to see what it has been doing during a test, and Intern provides the capability to record a screen capture at a specific moment in a test, it can quite often just be easier to see a test running. Errors are often very easy to see when you are just looking at the browser window that Intern is driving.

Local testing - how

In my previous post about testing Aikau we used the following command to install an array of packages required for the testing:

>> npm install

One of the packages installed by that command is selenium-standalone. The selenium-standalone package installs a local instance of selenium with a wrapper and symbolic link that allow it to be launched from the command line anywhere. You should be able to run the following command in any command prompt and see a small trace as selenium is started on its configured IP and port:

>> start-selenium

Note: All of these commands are run in directory:


Note: Depending on the platform you are using you may need to reinstall selenium-standalone as a global package. If the command above does not work, try reinstalling selenium-standalone globally either by itself:

>> npm install selenium-standalone -g

...or with all of the other dependencies:

>> npm install -g

If you're able to launch selenium using the standalone command shown above and you have the required browsers installed then you should be able to proceed. At time of writing the two browsers referenced by the 'local' intern file are Firefox and Chrome.

Launching a local Intern test is as simple as:

>> g test_local

Note: 'g' is an alias for grunt and 'test_local' is the task that runs the Intern test suite locally.

Now, because this test scenario is going to drive the browsers on your machine, you probably want to leave the mouse and keyboard alone whilst the tests are running. If you're happy to watch the entire suite run then do just that. If, however, you are working on one particular test and want to keep quickly re-running it, open the Suites.js file and comment out all of the tests you're not interested in from the 'baseFunctionalSuites' variable:
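As a sketch of what that edit looks like (the suite paths here are illustrative assumptions - the real Suites.js lists many more suites with its own paths):

```javascript
// Narrowing a local run by commenting out suites in Suites.js.
// Only the uncommented entries will be driven by Intern.
var baseFunctionalSuites = [
  // 'alfresco/menus/AlfMenuBarTest',      // skipped for this run
  // 'alfresco/buttons/AlfButtonTest',     // skipped for this run
  'alfresco/preview/AlfDocumentPreviewTest' // the one test being worked on
];
```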


The output from local testing is identical to that seen with the virtual machine.

Note: The structure of the selenium-standalone package is quite simple and if you have an unusual browser for which there is a selenium driver in existence, you can modify selenium-standalone to support it. All that remains is to add the required browser to the Intern configuration file and you're good to go.

Generating a code coverage report

A code coverage report can be produced by running a suite of tests with the code pre-instrumented to report on its use. Commands that perform all of the required steps for code coverage have already been written into the grunt framework.

Code coverage reports are generated locally, so follow the instructions shown above but use the following command for the testing phase:

>> g coverage-report

Once this has completed, which will take slightly longer than basic testing, you will be able to visit the node-coverage console here:


You should see the report you have just generated with its scoring, and you can look through all of the files touched by the testing. The tool shows areas of code that remain unvisited or through which not all of the pathways have been exercised. This feature can be very useful when writing tests to make sure edge cases have been considered.

Note: A particularly poor test result with numerous failed tests will distort the scoring of the coverage report, sometimes in an overly positive way. Make sure all of your tests are passing reliably before assuming that the scoring from the coverage report is accurate.
