
Unit Testing Aikau

Posted by ddraper Feb 26, 2014

Introduction

In my last blog post I revealed that 'Aikau' was the project name for the updates we've been making to our approach to developing Alfresco Share and I briefly mentioned that we had been working with Intern.io to develop a unit test framework. That unit testing framework is now available in the latest Community source on SVN and I thought that it would be worth explaining how it works.

 

It's still early days so it's a little rough around the edges and we've yet to hit 100% code coverage (more on that later), but hopefully this will provide an insight into our commitment to improving the overall quality of our code and to reducing the number of future bugs that get released into the wild.

 

Evaluating JavaScript Testing Frameworks

We spent quite a bit of time looking at the JavaScript unit testing frameworks that are available and I'm confident that we won't have chosen everyone's particular favourite. However, one thing we did notice is that there is a considerable amount of overlap between the underlying technologies that they use (e.g. WebDriverJS, Chai, etc.).

 

We settled on Intern.io simply because it appeared to be the best fit for our needs - that isn't to say that we think it's the best testing framework available or that it's the right choice for everyone, however it does seem to be serving our needs nicely at the moment and we have had some excellent support via StackOverflow from its developers.

 

Our main goals were to be able to write functional tests for the Aikau widgets in the context of pages rendered by Surf in exactly the same way as they would be rendered in an application. We didn't want to just test the pure JavaScript functionality as that would only be half the story.

 

We also wanted to be able to perform cross-browser testing - and whilst we're still some way from achieving that completely, we are well placed to get there in the future - especially if we decide to make use of a service such as SauceLabs for our testing (in fact we've already had some degree of success using SauceLabs).

 

Getting Our Hipster On

Back in October 2013 the Alfresco UI team went to the Future of Web Apps conference and got an insight into some cool new technologies that we subsequently decided to incorporate into our development practices.

 

We're making use of NPM for managing our packages and Grunt for controlling our development and test environment. The team develops on Windows, OS X and various distributions of Linux so we use Vagrant to provide a common VM to test against. We've got CSS and JS linting ready to go and we use 'node-coverage' to ensure that our unit tests are fully testing our modules.
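
To give a flavour of that setup, here is an illustrative sketch of a Grunt configuration that wires up the linting (this is not our actual Gruntfile - the plugin choices and source paths are assumptions for the example):

module.exports = function (grunt) {
  // Linting plugins installed via NPM (assumption: grunt-contrib-jshint and grunt-contrib-csslint)
  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-csslint');

  grunt.initConfig({
    jshint: {
      // Hypothetical source location - point this at wherever the widget source lives
      all: ['source/web/js/alfresco/**/*.js']
    },
    csslint: {
      all: {
        src: ['source/web/js/alfresco/**/*.css']
      }
    }
  });

  // Convenience task that runs both linters in one go
  grunt.registerTask('lint', ['jshint', 'csslint']);
};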

 

How It Works

Each unit test is run against a specific page defined by a JSON model containing the widget(s) to be tested. The JSON model is defined as a static resource that is loaded from the file system by Node.js. We've created a new test bootstrap page that requires no authentication. This page contains a simple form and we use the Intern functional test capabilities to manually enter the JSON model into the form and then POST it to a WebScript. The JSON model is stored as an HTTP session attribute and the page redirects to a test page that retrieves the JSON model from the session attribute and passes it through the Surf dynamic dependency analysis to render a page containing the Aikau widgets to be tested. The unit test is then able to go about the business of interacting with the widgets to test their expected behaviour.
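
As a rough illustration of what one of these functional tests looks like (this is not taken from our actual suite - the page URL, CSS selector and expected text are invented for the example, and the exact chained command names depend on the version of Intern in use):

define([
  'intern!object',
  'intern/chai!assert'
], function (registerSuite, assert) {
  registerSuite({
    name: 'Example widget test',

    'Check rendering': function () {
      // 'this.remote' is the WebDriver session that Intern provides for functional tests
      return this.remote
        .get('http://localhost:8081/share/page/tp/ws/example-test-page')
        .findByCssSelector('.alfresco-example-Widget')
        .getVisibleText()
        .then(function (text) {
          assert.strictEqual(text, 'Expected label', 'The widget did not render the expected label');
        });
    }
  });
});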

 

As widgets are de-coupled over a publication/subscription communication framework it is very easy to both drive widgets and capture their resulting behaviour through that same framework. Buttons can be clicked that publish on topics that a widget listens to, and we make use of a special logging widget ('alfresco/testing/SubscriptionLog') to check that the widget under test publishes the correct data on the correct topics.

 

The de-coupling also allows us to test data rendering widgets without needing an Alfresco Repository to be present. Aikau widgets never make REST calls themselves - instead they use the publication/subscription model to communicate with client-side 'services' that handle XHR requests and normalize the data. This means that a test page JSON model can include services that mock XHR requests and provide reliable and consistent test data. This makes setup and teardown incredibly straightforward as there is no need for a 'clean' repository for each test.
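
Putting the last two points together, a test page model might look roughly like the following sketch (only 'alfresco/testing/SubscriptionLog' is taken from the description above - the button widget and mock service module names are assumptions for illustration):

{
  "services": [
    "alfresco/testing/MockDocumentService"
  ],
  "widgets": [
    {
      "name": "alfresco/buttons/AlfButton",
      "config": {
        "label": "Test Button",
        "publishTopic": "ALF_TEST_TOPIC",
        "publishPayload": { "value": "test" }
      }
    },
    {
      "name": "alfresco/testing/SubscriptionLog"
    }
  ]
}

The unit test clicks the button and then inspects the entries captured by the SubscriptionLog to assert that the expected topic and payload were published.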

 

Code Coverage

We're making use of the 'node-coverage' project to capture how well our unit tests are driving the Aikau code and we'll admit that at the moment we're not doing a great job (our goal is to ensure that any Aikau widget that gets included in a Share feature gets a unit test that provides 100% code coverage).

 

We're able to get accurate coverage results by 'instrumenting' all of the Aikau code and then loading a special test page that builds a layer containing all of that instrumented code. This is a very elegant way of resolving the issue of only getting coverage results for the widgets that are actually tested and a very useful benefit of using AMD for Aikau.

 

We've written Grunt tasks for instrumenting the Aikau code, running the test suites and gathering the results to make it really easy for us to collect the data - and the coverage results enable us to write truly effective and comprehensive tests (as well as identifying any 'dead' code that we can trim).

 

Local, VM and Cross-Browser Testing

When writing tests we typically run them against a local Selenium instance so that we can actually see the tests running, as it's incredibly useful to be able to see the mouse move and text being typed, etc. However, we make use of a Vagrant VM to run background tests during development so that we are able to continue to use our keyboards and mice - it's also incredibly useful to have a consistent test environment across all the team members.

 

The Vagrant VM is Linux and only allows us to test Chrome and Firefox, but in order to test multiple versions of Internet Explorer we need to use a Selenium grid. We're still working through whether to go with an internal Selenium grid or to outsource to SauceLabs, but when everything is in place we will be able to reliably test the whole of Aikau against all supported browsers.
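
For reference, the set of browsers to run against is declared in the Intern configuration file. A simplified sketch of that configuration is shown below - the property names and values are illustrative of the general shape rather than our actual configuration, and vary between Intern versions:

define({
  // Browsers/versions to request from the Selenium server or grid
  environments: [
    { browserName: 'chrome' },
    { browserName: 'firefox' },
    { browserName: 'internet explorer', version: '10' }
  ],

  // Where the local or VM hosted Selenium server is listening
  webdriver: {
    host: 'localhost',
    port: 4444
  },

  // Switched on when running against SauceLabs rather than a local Selenium instance
  useSauceConnect: false,

  // The functional test suites to load
  functionalSuites: [ 'alfresco/tests/example-test' ]
});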

 

Summary

One of the primary goals of Aikau is for it to be a truly robust framework and a reliable unit testing framework is key to that. Hopefully this blog post will have illustrated our commitment to that goal and also given you something to experiment with. In a future blog post I'll provide a guide on how to get the tests running. In the meantime we'd appreciate any feedback you have on the approach that we've taken so far.

Introducing Aikau

Posted by ddraper Feb 26, 2014

Introduction

For some time now I've been writing blog posts that refer to the 'updated UI framework' describing a new approach that we've been working on for further developing Alfresco Share. This is a fairly ambiguous (as well as lengthy) term to use to describe what we've been up to and we thought it would be sensible to come up with a project name that encapsulates the work that we've been doing. The framework is completely reliant on Surf so we wanted the name to be somehow Surf related, and after throwing a few ideas around we settled on 'Aikau'. This post is going to attempt to describe exactly what Aikau is and why we've been working on it.

 

 

What is Aikau?

Aikau refers to a method of creating new Surf Pages or Components comprised of small atomic widgets. The widgets are referenced in a JSON model that can either be defined in the JavaScript controller of a WebScript or as a document stored on the Alfresco Repository. Each widget is an AMD module that references its own CSS, HTML and i18n resources, and Surf will ensure that all of those resources are imported into the page loaded by the browser when that widget is referenced in the JSON model that defines that page.
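
As a minimal sketch of what this looks like in practice (the page below simply renders the Aikau logo widget; the 'model.jsonModel' assignment follows the convention of building the model in the WebScript's JavaScript controller):

// example-page.get.js - the JavaScript controller of a Surf WebScript
model.jsonModel = {
   widgets: [
      {
         // Each entry names an AMD module; Surf analyses that module (and its
         // dependencies) to work out which CSS, HTML and i18n resources to import
         name: 'alfresco/logo/Logo',
         config: {
            logoClasses: 'surf-logo-large'
         }
      }
   ]
};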

 

The widgets are intentionally de-coupled over publication/subscription communication and event frameworks so that widgets can be easily added/removed/changed within a JSON model without causing any missing reference errors. Each widget is intentionally designed to implement a single piece of functionality, but widgets can themselves define JSON models of child widgets so that it is easy to define re-usable composite widgets.
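
Inside a widget this de-coupling boils down to publishing and subscribing rather than holding direct references to other widgets. A rough sketch is shown below (it assumes the core mixin provides the 'alfPublish'/'alfSubscribe' helpers and uses an invented topic name - treat it as illustrative rather than a definitive API reference):

define(['dojo/_base/declare',
        'dojo/_base/lang',
        'dijit/_WidgetBase',
        'alfresco/core/Core'],
        function (declare, lang, _WidgetBase, AlfCore) {

   return declare([_WidgetBase, AlfCore], {

      postCreate: function () {
         // Listen for a topic published by some other (unknown) widget...
         this.alfSubscribe('ALF_EXAMPLE_TOPIC', lang.hitch(this, this.onExample));
      },

      onExample: function (payload) {
         // ...and respond by publishing on another topic, again without any
         // direct reference to the widgets that will consume it
         this.alfPublish('ALF_EXAMPLE_RESPONSE', { value: payload.value });
      }
   });
});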

 

Each widget should be written with customization in mind - methods are kept short and variables are abstracted to allow configuration, and customization is possible through targeted method overrides in extending widgets.

 

Aikau also provides an Intern.io based unit testing framework and each widget should have its own unit test (although we don't yet have 100% code coverage) to ensure that functionality is not broken during development. We aim to make use of tools such as Grunt and Vagrant to allow continuous testing to run in parallel with development to catch breaking code immediately (although this effort has not yet been completed).

 

 

What Are The Goals?

The primary goals of Aikau are to:

    • maximize code re-use
    • make UI development faster
    • make it incredibly simple to customize pages
    • allow continuous iteration of existing pages.


Aikau leverages the existing Surf extensibility capabilities to allow a default JSON model to be manipulated by 3rd party extensions. This allows pages to be heavily customized without needing to access or work with the source code of the page - it is possible to isolate the customization work to just the specific target areas.
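
As an illustrative sketch of the kind of customization this enables (the 'widgetUtils.findObject' helper and the widget id used here are assumptions made for the example rather than a guaranteed API):

// The JavaScript controller extension of a customization module, processed
// after the default controller has built model.jsonModel
var logo = widgetUtils.findObject(model.jsonModel, 'id', 'HEADER_LOGO');
if (logo !== null)
{
   // Tweak just the targeted widget's configuration - the original page
   // WebScript is never copied or edited
   logo.config.logoClasses = 'my-custom-logo';
}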

 

Performance improvements should also be achieved by only loading the minimum amount of JavaScript and CSS data into the browser and by reducing the number of HTTP requests to an absolute minimum.

 

One of the main benefits of Aikau is that it allows you to quickly prototype pages with existing or simple widgets and then refine them over time. Functional pages can be easily constructed in minutes (and hopefully in the future entirely via a drag-and-drop page creation tool within an application itself).

 

Surf now supports Theme based LESS CSS pre-processing and widgets are written to be completely theme-able. This allows functional pages to have their design modified repeatedly over time without needing to throw away existing work and start over from scratch.

 

 

How Does it Work?

There have been numerous blogs over the past year that have described the work we've been doing - although none of them specifically refer to Aikau by name, the knowledge is already out there!

 

Essentially Aikau is powered by Surf and uses the Dojo AMD loader and widget templating capabilities to provide a basis for dynamic dependency analysis. Each JSON model is dynamically analysed for AMD modules and those modules themselves are then analysed for their dependent modules (and so on, and so forth) until a complete dependency 'layer' has been constructed for that page.

 

Surf caches module dependency relationships as well as page dependencies to ensure that only the minimum amount of analysis is done to maximise performance.

 

JavaScript Framework Ambivalence

Although Aikau makes heavy use of Dojo for the AMD loader and widget templating, it by no means forces the exclusive use of Dojo. It extends the AMD paradigm to allow non-AMD dependencies to be easily referenced by widgets and is completely ambivalent about the JavaScript framework being used.

We have already made use of numerous other JavaScript frameworks and projects within the current suite of Aikau widgets including:

    • 'Legacy' Alfresco JavaScript code
    • YUI2
    • JQuery
    • ACE
    • JSON Editor Online

     

    Summary

    It's still early days, but we now have a framework and an initial set of widgets that make page development using Aikau a reality.  It's unlikely that Aikau will ever completely replace the existing Share implementation but it is intended to work in harmony with it as well as providing the infrastructure to quickly develop alternative Surf based Alfresco clients.

     

    The goal of this post is simply to allow us to talk about Aikau and for you to know what we're referring to!

    by Dave Draper



    Introduction



    This blog post is going to describe a simple but useful new feature that has been added to Surf. It is available in the latest Alfresco Community nightly builds and in the SVN repository. It is now possible for WebScript localization .properties files to include the contents of another file and have its content merged into the resulting ResourceBundle.



    For a long time it has been possible to include 'library' files in WebScript JavaScript controllers and FreeMarker templates but it was not possible to include 'library' localization content. This meant that any localizations performed by the library files would either have to be duplicated in the including WebScript's .properties file or added into a 'global' properties file (such as 'slingshot.properties').



    Enabling library files to be a combination of FreeMarker, JavaScript and localization files means that you can create much more rounded library files.



    Example Use Case



    Currently we render the Share header bar as a Surf Component and are able to 'inject' additional widget models into the page using a hybrid-template approach (where any content is rendered between a standard header and footer). However, there are issues with this in that the Dojo AMD loader ends up importing dynamically generated resources with duplicate paths (this is only a problem in terms of debugging and the number of bytes loaded - the pages will still function). A much more useful approach would be to have a header library file (as indeed we already have: 'share-header.lib.js') that provides methods for creating the header widgets and services and that can be included in page specific WebScripts (e.g. 'share-header.get.js').



    The problem is that this relies on common properties being defined in the slingshot.properties file. For the header this isn't a major problem as it rarely varies, however in other situations (e.g. the Document Library) we want to import a library but vary the localization keys. This means that we either have to duplicate keys across each page's WebScript (e.g. 'Document Library', 'Shared Files', 'My Files', 'Repository') or put all the keys in the core 'slingshot.properties' file (which will then be loaded on pages even when not required).



    A better solution is for the Document Library JavaScript library file to have a partner .properties file that can be imported into each specific WebScript... and that's exactly what this new capability provides.



    Example



    To declare an import simply add the key 'surf.include.resources', e.g.



    surf.include.resources=org/alfresco/share/header/share-header.lib


    ...and you define multiple imports by delimiting each file with a comma, e.g.



    surf.include.resources=org/alfresco/share/header/share-header.lib,org/alfresco/share/documentlibrary/doc-lib.lib


    Note that you should not attempt to define any locale information. The correct locale specific file will be imported based on the locale of the browser (and will gracefully fall-back as you would expect). So in the above example if your browser is set to use the locale 'en_GB' then Surf will first attempt to import the file 'share-header.lib_en_GB.properties' and if it fails to find that file it will fall back first to 'share-header.lib_en.properties' and finally to 'share-header.lib.properties'.



    Caveats



    There are some important caveats:





    1. Imports cannot import (e.g. if you import a properties file that contains the key 'surf.include.resources' then those resources won't be imported) - this is a good thing as it prevents circular dependencies


    2. Imports are processed and cached BEFORE extension modules are applied (this means that an extension can't alter imported properties - however it can still override them)



    Summary



    It's now possible to include library style properties files in WebScripts. This should hopefully make it easier to create more complete collections of library files that don't rely on global or duplicated localization keys.



    by Dave Draper

    Introduction



    At Summit 2013 I briefly mentioned that Surf provided support for simple CSS token substitution via the Surf Theme XML files. We've since integrated LESS for Java directly into Surf to allow dynamic LESS pre-processing to be performed. This capability is currently available in the Alfresco Community nightly builds and in the Alfresco Community SVN trunk. This isn't going to be a blog post on LESS... if you don't know what it is, or how to use it (and to be perfectly honest - I'm no expert myself!) then you can find plenty of information elsewhere on the internet... this post is simply going to explain how you can now make use of the LESS support that is available in Surf.

    Surf Themes



    Hopefully you're aware that Share provides a number of different themes out-of-the-box and that many Community members have created their own. If you look in the 'webapps/share/themes' folder of your installation you will find a number of sub-folders ('default', 'greenTheme', 'greyTheme', 'lightTheme', etc) all of which hold the CSS and associated image files for rendering that theme. In the 'webapps/share/WEB-INF/classes/alfresco/site-data/themes' folder you will find a number of XML files whose names correspond to those theme folders (e.g. 'default.xml', 'greenTheme.xml', etc.). When a Share page is loaded it will load the 'presentation.css' file for the current theme which will customize the appearance of the page (mostly this just affects the colour scheme).

    Basic CSS Tokenization



    The theme XML files now support the element '<css-tokens>' in which you can define any number of custom elements whose values will be substituted into any CSS file containing a token matching that element. For example, if your current theme XML file contains:

    <css-tokens>

      <theme-font-family-1>Open Sans Condensed,Arial,sans-serif</theme-font-family-1>

    </css-tokens>


    ...and you have a CSS source file containing...

    .alfresco-example-CssSelector {

      font-family: $theme-font-family-1;

    }


    ...then the CSS file loaded by the browser would actually contain:

    .alfresco-example-CssSelector {

      font-family: Open Sans Condensed,Arial,sans-serif;

    }


    LESS Pre-Processing



    There is one special token that is reserved for LESS pre-processing - '<less-variables>'. The value of this element is injected into a CSS file and dynamically 'compiled' by LESS for Java when that CSS file is required (Surf caches the compiled CSS file so that this only occurs once for each file).



    This means that if the theme XML file contains:

    <css-tokens>

      <less-variables>

        @toolTipBorderColour: #e5e5e5;

      </less-variables>

    </css-tokens>


    ...and the CSS source file contains:

    .alfresco-example-CssSelector {

    border: 1px solid @toolTipBorderColour;

    }


    ...then the CSS file loaded by the browser would actually contain:

    .alfresco-example-CssSelector {

      border: 1px solid #e5e5e5;

    }


    This is obviously a very simplistic example - you can actually do much, much more with LESS than simple substitution.

    Summary



    As well as the obvious benefits of being able to take advantage of CSS pre-processing we have implemented this so that our new AMD widget based approach for developing Share can be written in a modular way but still support externally defined themes.



    At the time of writing we have not yet adapted all the new widgets that have been written to take advantage of the LESS pre-processing but are starting to make the transition so you will start to see code appearing in SVN soon.



    We have yet to completely identify and define the LESS variables that will be defined by the themes but we will be sure to make that information available as and when we have it so that the community can update their own themes to ensure that the new widgets are rendered as they would wish.



    Share Page Creation Code

    Posted by ddraper Feb 24, 2014

    Introduction

    It's taken a little while due to the convergence of Cloud and Enterprise code lines that has been ongoing for some time, but I can finally now release the source code for the drag-and-drop page creation tool that I demonstrated at Summit 2013. The recent updates to Alfresco in the Cloud marked the final step of the convergence process and allowed some code that had been on a private branch for many months to make its way to the main SVN trunk.

     

    The code I'm providing is simply a couple of WebScripts that define pages made up of widgets that are now available in SVN. The code is packaged up as a JAR file that can be dropped into the 'webapps/share/WEB-INF/lib' folder of the web server that the Share application is running on and the pages will be available on the next server restart. You can download the JAR from here.

     

    Pre-Requisites

    NOTE: You need to have either built the latest Community code or have downloaded a nightly build - this will NOT work against any current Enterprise or Community release. The pages that you create will be stored on the Alfresco Repository and you will need to create a specific location in the Data Dictionary for them to be saved.

      1. As 'admin' log into Share and click on the 'Repository' link in the header menu
      2. Click on the 'Data Dictionary' folder
      3. Select 'Folder' from the 'Create...' menu
      4. Enter 'ShareResources' as the folder name (NOTE: no space, capital 'R') and click 'OK'
      5. Click on the newly created 'ShareResources' folder and repeat steps 3-4 but create a folder called 'Pages' (NOTE: capital 'P')


    Screenshot showing the Repository page in Share

     

    The New Pages

    The two pages provided are the JSON editor (http://<server>:<port>/share/page/hdp/ws/page-editor) and the Drag-and-drop Creator (http://<server>:<port>/share/page/hdp/ws/page-creator).  The JSON editor allows you to create pages by typing out the page model directly and the Drag-and-drop creator allows you to create pages using a simple GUI.

    The JSON models for the pages themselves are defined in the following WebScript JavaScript controller files found in the JAR:

      • alfresco/site-webscripts/org/alfresco/share/page-creation/dnd-creator.get.js
      • alfresco/site-webscripts/org/alfresco/share/page-creation/json-page-editor.get.js

     

    The JSON Editor

    The actual editor used by the JSON Editor page is provided using the JSON Editor Online third party library. The 'alfresco/site-data/extensions/extension.xml' file in the JAR shows an example of adding a new AMD package into Share, which was described in more detail in this previous blog post. The JSON Editor AMD modules are referenced from the 'alfresco/forms/controls/JsonEditor' widget that already exists in Share and which is referenced by the 'json-page-editor.get.js' file.

     

    Try the following simple example to get you started:

    {
      "widgets": [
        {
          "name": "alfresco/layout/ClassicWindow",
          "config": {
            "title": "My Window",
            "widgets": [
              {
                "name": "alfresco/logo/Logo",
                "config": {
                  "logoClasses": "surf-logo-large"
                }
              }
            ]
          }
        }
      ]
    }

     

      1. Copy and paste the JSON into the 'Page Definitions' field
      2. Click on the 'Preview' button (the preview should be drawn at the top of the page)
      3. Enter a name into the 'Page Name' field (e.g. 'FirstPage')
      4. Click the 'Save' button
      5. Go back to the 'Repository' page and check that the page definition has been created.
      6. Open the following URL to view the page: http://<server>:<port>/share/page/hrp/p/FirstPage


      Screenshot showing the JSON editor with page definition and preview

       

      You should see the following:

       

      Screen shot showing remotely loaded page

       

      The DND Page Creator

      The drag-and-drop page creator effectively does exactly the same as the JSON editor under the covers (e.g. it uses exactly the same Repository based REST API for saving the pages) but the key difference is that each widget that you drag from the 'palette' to the 'canvas' contains its own configurable snippet of JSON.

       

      I'd recommend watching both the Share Page Creation Live and Share Widget Library sessions from Summit 2013 to get a better understanding of how to use the page creator because it will be much simpler than trying to explain it in writing!

       

      The Underlying Code

      I'm not going to initially explain the underlying code but will rather rely on any questions in the comments section to provide assistance. You should be able to review the JSON page definitions and match the widget references to the corresponding source files in 'webapps/share/js/alfresco' (on the web server) or 'slingshot/source/web/js/alfresco' (in SVN).

       

      What should be apparent though is that the pages that are used to create pages are defined using the exact same JSON structure as they themselves render. One file that is worth reviewing is 'slingshot/config/alfresco/site-webscripts/org/alfresco/share/imports/widget-palette.lib.js' as this contains all the JSON definitions for the items shown on the palette in the drag-and-drop page creator. The palette only contains a subset of the available widgets but hopefully it should outline how it would be possible to define and include additional widgets for selection.

       

      Summary

      As I promised at Summit 2013 I've made the page creation code available as soon as possible. I appreciate that this blog post does not provide an in-depth discussion of the underlying code, but hopefully it is enough to whet people's appetite and prompt questions for further posts.

      It's ok to store cmis:objectId's in my CMIS client application, right?



      Ah such a simple question, yet hiding a plethora of probable pitfalls!



      Over the last couple of years I've encountered (and held myself!) the misconception that cmis:objectId's are basically synonymous with NodeRef's, Alfresco's native form of identifier.  Unfortunately there is a subtle but significant difference that traps many an unwary CMIS client developer: an Alfresco NodeRef identifies an entire object including that object's version history (if any) - in effect documents and versions are fundamentally different types of 'thing', and versions don't have any independent notion of identity.  In contrast, in CMIS both documents and versions are the same (they're both cmis:documents) and are each uniquely identified by their own cmis:objectId.  From the versioning section of the spec (emphasis added):

      Each version of a document object is itself a document object, i.e. has its own object id, property values, MAY be acted upon using all CMIS services that act upon document objects, etc.


      So going back to our original question, the validity of storing a cmis:objectId for later use depends on what the CMIS client application is storing a reference to; some possibilities include:



      1. An unversioned object type (i.e. something other than cmis:document) => golden!


      2. An unversioned cmis:document => peachy!


      3. A specific version of a versioned cmis:document => good to go!


      4. The latest version of a versioned cmis:document => ruh roh raggy!!



      1 problematic case out of 4 may not seem like too much of an issue, until we recall that versioning is enabled by default for all CMIS-accessible files in Alfresco (i.e. cmis:document and all sub-types).  Add to this that many CMIS client apps, regardless of the server they're connecting to, basically don't care about versioning (and when they do it's often limited to concurrency control via private working copies) - they simply want to treat the CMIS repository as a glorified file/folder store, reading and writing files as if they were flat, unversioned objects - and you start to appreciate the seriousness of the problem.

      In short, cmis:objectId alone cannot satisfy the 80% use case of CMIS client applications i.e. version-agnostic file/folder CRUD!





      So what's the alternative?



      There are at least two approaches that I've come up with for working around this issue (and there may be more):



      1. Store a cmis:objectId and make additional CMIS calls to manually 'fast forward' to the latest version of the object on every subsequent CMIS call.


      2. Store a cmis:objectId for unversioned object types, and a cmis:versionSeriesId for versioned object types, and make subsequent CMIS calls appropriate to each.



      Fast forward



      With this approach, the CMIS client application would store the cmis:objectId as normal, but every single time it accesses the object it identifies, it would look up the cmis:objectId of the latest version of the object first, before continuing with the original operation.  In detail, this involves:



      1. Call the getObject service with the original cmis:objectId.


      2. Look for the cmis:versionSeriesId property in the response.  If the cmis:versionSeriesId property exists in the response:



        1. Call the getPropertiesOfLatestVersion service with the cmis:versionSeriesId.


        2. Pull out the cmis:objectId from the response - this is guaranteed to be the cmis:objectId of the latest version of the object, at the time of the call.


        3. Update the stored cmis:objectId with the retrieved cmis:objectId (optional).





      3. Call the desired CMIS service.



      The advantage of this method is that the logic is reasonably clean and simple, but it has the downside of requiring at least 2, and sometimes 3, CMIS calls for every single 'original' CMIS call the client application wished to make (regardless of whether the object is versioned or not), as well as risking race conditions between steps 2.1 and 3 (i.e. when a new version of the object gets created by some other process between those two calls).
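
      Expressed as code, the fast forward flow looks roughly like the following sketch. The 'cmisSession' object and its method names are purely hypothetical stand-ins for whichever CMIS client library is in use - only the CMIS service and property names come from the description above:

      // Hypothetical client wrapper - substitute the equivalent calls from your CMIS library
      function fastForward(cmisSession, storedObjectId) {
        // Step 1: fetch the object identified by the stored cmis:objectId
        var object = cmisSession.getObject(storedObjectId);

        // Step 2: a cmis:versionSeriesId in the response indicates a versioned object
        var versionSeriesId = object.properties['cmis:versionSeriesId'];
        if (versionSeriesId) {
          // Steps 2.1 and 2.2: resolve the cmis:objectId of the latest version
          var latest = cmisSession.getPropertiesOfLatestVersion(versionSeriesId);
          return latest['cmis:objectId'];
        }

        // Unversioned object - the stored id is already the one to use
        return storedObjectId;
      }

      // Step 3: call the desired CMIS service with the resolved id, e.g.
      // cmisSession.getContentStream(fastForward(cmisSession, storedObjectId));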



      [UPDATE 2014-02-28] A vendor I'm working with mentioned another variation of this strategy that uses the 'cmis:isLatestVersion' property in step 2 to determine whether the cmis:objectId refers to the latest and greatest version or not.  Other than use of a different property, the logic remains much the same (the client application still needs to 'fast forward' to the latest version, using cmis:versionSeriesId).

      Conditionally store either cmis:objectId or cmis:versionSeriesId



      This approach involves storing cmis:objectIds for object types that are not versioned, and cmis:versionSeriesIds for object types that are.  Unversioned object types include everything that isn't a cmis:document (cmis:folder, cmis:relationship, cmis:policy and cmis:item), as well as, on a case-by-case basis, cmis:document and sub-types of cmis:document (whether such object types are versioned or not can be determined by retrieving the 'versionable' property for each cmis:document object type in the system).



      For unversioned objects (i.e. those that have a cmis:objectId stored in the CMIS client application), CMIS service calls can be made directly by the client application, secure in the knowledge that the results will always refer to the latest version of the object (since, by definition, there can only ever be one version of such objects).



      For versioned objects (i.e. those that have a cmis:versionSeriesId stored in the CMIS client application), one of two possible call sequences is required:



      1. If the CMIS client application only requires metadata, it can call one of the 'OfLatestVersion' services (getObjectOfLatestVersion or getPropertiesOfLatestVersion).


      2. For all other use cases:



        1. Call the getPropertiesOfLatestVersion service with the cmis:versionSeriesId.


        2. Pull out the cmis:objectId from the response - this is guaranteed to be the cmis:objectId of the latest version of the object, at the time of the call.


        3. Call the desired CMIS service with the retrieved cmis:objectId.






      The advantage of this method is that it optimises the number of CMIS calls needed to perform such 'version independent' operations - often only requiring a single call.  The disadvantages are that it requires some initial 'discovery' calls to figure out exactly what's versioned vs what isn't, the client application's logic is more complex due to the two different types of CMIS identifier that must be used, and there is the risk of a race condition between steps 2.1 and 2.3 in the event of a concurrent update by another process.
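
      A sketch of the 'what do I store?' decision for this approach is shown below, again using a purely hypothetical 'cmisSession' wrapper (the only names taken from CMIS itself are the getTypeDefinition service, the 'versionable' type attribute and the property ids):

      // Decide which identifier to store when the object is first referenced
      function chooseIdToStore(cmisSession, object) {
        // Look up the object's type definition and check whether it is versionable
        var typeDef = cmisSession.getTypeDefinition(object.properties['cmis:objectTypeId']);
        if (typeDef.versionable) {
          // Versioned cmis:document (or sub-type): store the version series id
          return { kind: 'versionSeriesId', value: object.properties['cmis:versionSeriesId'] };
        }
        // Folders, relationships, policies and unversioned documents: the
        // cmis:objectId is stable, so store that directly
        return { kind: 'objectId', value: object.properties['cmis:objectId'] };
      }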



      You might be wondering why a CMIS client application can't simply store the cmis:versionSeriesId in all cases.  Unfortunately cmis:versionSeriesId is optional (you'll have to manually scroll down to the cmis:versionSeriesId definition in that reference) - a compliant CMIS repository does not have to provide this property for unversioned object types, and in my experience most don't.

      This sux - surely there's something better?



      I've been unable to come up with a better alternative based strictly on the CMIS 1.x specifications, but that doesn't mean others don't exist - I'd love to hear about them if you've come up with one.  That said, having worked fairly extensively with CMIS client application implementers over the last couple of years I'm reasonably certain there isn't a fundamentally better approach.



      The good news is that the issue has been brought to the attention of the CMIS Technical Committee, and there is a proposal from Oracle for something called 'representative copies' that potentially has some overlap with this use case.



      Speaking personally, I would like to see something along the lines of the following, minimally intrusive change:



      • Make cmis:versionSeriesId mandatory for all object types and rename it (e.g. to cmis:id) to showcase its more general utility.


      • Update all services that receive a cmis:objectId to also support the new identifier property as an alternative.  When the new identifier is provided, the semantics would be 'perform the requested service against the latest version of the object'.


      • Remove the 'OfLatestVersion' services, as they would now be redundant.



      Conclusion



      CMIS is a valuable addition to the content management repertoire, but as with version 1s of most products, it has its share of flaws.  This particular flaw happens to be both subtle and of significant impact, which makes it all the more important for CMIS client application developers to understand it and factor it into their designs.



      More generally, it is my opinion that this also reflects the specification's focus on addressing 'hard core ECM' requirements, to the (unintended) detriment of the 80% content management case i.e. simple file/folder CRUD.  I suspect no one on the CMIS TC realised at the time that the intersection of versioning and identity would 'bleed through' the basic file/folder CRUD use case in this way.



      Ultimately the best way for problems like this to be fixed (or better yet, to not surface in the first place!) is community involvement.  I've found the CMIS TC to be an open and welcoming place, and I strongly encourage all CMIS client application implementers to get involved in the committee's good work, at the very least at the level of an observer (as I have).
