This post follows on from the getting started post, which you should run through first; it covers some free software that you may need to install before you can proceed.

Introduction



Earlier I introduced the basic functionality of the testing framework we have created for Aikau. In addition to testing Aikau on a couple of browsers within a predefined virtual machine, we have some options to test locally and to generate a coverage report on the spread of the Aikau testing. Here you will find some of these use cases described.

Local testing - why?



Testing against the browsers in a virtual machine is great because any individual can test against the same known set of browsers, irrespective of the platform they are using themselves. You might want to test a specific browser on a specific platform to analyse a platform-specific error, and a local test could be the best way to do that. More importantly, however, the virtual machine testing is a little hidden. Whilst it is possible to interrogate Selenium to see what it has been doing during a test, and Intern provides the capability to record a screen capture at a specific moment in a test, it can often just be easier to watch a test running. Errors are often very easy to see when you are simply looking at the browser window that Intern is driving.

Local testing - how



In my previous post about testing Aikau we used the following command to install an array of packages required for the testing:

>> npm install


One of the packages installed by that command is selenium-standalone. The selenium-standalone package installs a local instance of Selenium with a wrapper and a symbolic link that allow it to be launched from the command line anywhere. You should be able to run the following command in any command prompt and see a short trace as Selenium starts on 127.0.0.1:4444:

>> start-selenium


Note: All of these commands should be run from the following directory:

{alfresco}/code/root/projects/slingshot


Note: Depending on the platform you are using, you may need to reinstall selenium-standalone as a global package. If the command above does not work, try reinstalling selenium-standalone globally, either by itself:

>> npm install selenium-standalone -g


...or with all of the other dependencies:

>> npm install -g


If you're able to launch Selenium using the standalone command shown above and you have the required browsers installed, then you should be able to proceed. At the time of writing, the two browsers referenced by the 'local' Intern configuration file are Firefox and Chrome.



Launching a local Intern test is as simple as:

>> g test_local


Note: This command is ‘g’ for grunt and ‘test_local’ to run the Intern test suite locally.



Now, because this test scenario is going to drive the browsers on your machine, you probably want to leave the mouse and keyboard alone whilst the tests are running. If you're happy to watch the entire suite run then do just that. If, however, you are working on one particular test and want to keep re-running just that one, open the Suites.js file and comment out the tests you're not interested in from the 'baseFunctionalSuites' array:

{alfresco}/code/root/projects/slingshot/tests/config/Suites.js
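For example, while iterating on a single test the array might end up looking something like this (the suite paths shown here are purely illustrative, so use the entries already present in the file):

var baseFunctionalSuites = [
    // "tests/alfresco/menus/AlfMenuBarTest",
    // "tests/alfresco/charts/ccc/PieChartTest",
    "tests/alfresco/accessibility/AccessibilityMenuTest" // the only suite that will run
];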


The output from local testing is identical to that seen with the virtual machine.



Note: The structure of the selenium-standalone package is quite simple, and if you have an unusual browser for which a Selenium driver exists, you can modify selenium-standalone to support it. All that remains is to add the required browser to the Intern configuration file and you're good to go.
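For reference, browsers are requested through the standard 'environments' array in an Intern configuration file, so adding another one is typically a sketch along these lines (the exact contents of the project's configuration files may differ):

environments: [
    { browserName: "firefox" },
    { browserName: "chrome" },
    { browserName: "opera" } // newly added browser - requires a matching Selenium driver
],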

Generating a code coverage report



A code coverage report can be produced by running a suite of tests with the code pre-instrumented to report on its use. Commands have already been added to the grunt framework that perform all of the required steps for code coverage.



Code coverage reports are generated locally, so follow the instructions shown above but use the following command for the testing phase:

>> g coverage-report


Once this has completed, which will take slightly longer than basic testing, you will be able to visit the node-coverage console here:

http://localhost:8787/


You should see the report you have just generated, complete with scoring, and you can look through all of the files touched by the testing. The tool shows areas of code that remain unvisited, or through which not all of the pathways have been exercised. This can be very useful when writing tests to make sure edge cases have been considered.



Note: A particularly poor test result with numerous failed tests will distort the scoring of the coverage report, sometimes in an overly positive way. Make sure all of your tests are passing reliably before assuming that the scoring from the coverage report is accurate.

Introduction



With the Aikau framework for building Share we have created a large number of widgets, with some underlying code from Dojo’s UI library, Dijit. It makes sense to test the Aikau widgets automatically and in isolation, and so we have created a test framework using Intern (http://theintern.io/) to do just that. The tests run quickly and with predictable results, so it is now easy to see whether a recent addition or an update to an existing widget has caused a bug or regression.



To perform tests we have written a number of test web scripts that contain widget models. The widget models are usually quite simple, but occasionally have a more complex structure to expose edge-case behaviours. Some test web scripts also contain intentionally broken models to test the catching of errors. For testing purposes the test web scripts are run under Jetty, as we do not need any of the more complex overhead of Share.
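As an illustration of the kind of thing an intentionally broken model might contain (this is a sketch rather than a copy of an actual test web script), a widget entry can simply reference a module that does not exist so that the error handling can be exercised:

model.jsonModel = {
    widgets: [
        {
            name: "alfresco/menus/AlfMenuBar" // a valid widget
        },
        {
            name: "alfresco/broken/DoesNotExist" // deliberately unresolvable module
        }
    ]
};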

Process Overview



To run a suite of tests we do the following:



  1. Launch an instance of an Ubuntu virtual machine using Vagrant

    The virtual machine provides a set of test browsers, drivers for those browsers and a running instance of Selenium that Intern can target


  2. Run up the test web scripts using Jetty

    We do not need all of Share so a basic Jetty server is fine here


  3. Run tests against the test web scripts using Intern

    Each of the tests will be run according to the Intern configuration and against the browsers defined and requested therein


Note: One of the tools we install when setting up testing is Grunt (http://gruntjs.com/). Much of the process of running tests is handled for us by commands we have already added using Grunt. In fact, once we have set up the testing framework for the first time, we really only need two commands to start a test run: first launch the virtual machine, then run the tests.

Step 1 - Prerequisites



The testing framework for Aikau makes use of a number of technologies, and you will need to download and install some free software (with the default settings) to proceed.

I am assuming that you can use a command line and have a vague idea of what a virtual machine and Vagrant are. If not, please read about them first: http://www.vagrantup.com/

Step 2 - Installation



Having downloaded and installed a current version of the Alfresco code base, open a command prompt and navigate to the following Alfresco directory:

{alfresco}/code/root/projects/slingshot


Run the following command to install Node dependencies:

>> npm install


If you’re interested to know what is being installed, you can look at the package.json file in the directory above, which contains a list of the dependencies. Once you have installed all of the components required for the test framework, you should be able to launch a virtual machine with Vagrant for the first time. The first run may be slow as there is a lot to download:

>> g vup


Note: This command is ‘g’ for grunt and ‘vup’ for vagrant up.



When the ‘g vup’ command has completed successfully, you should be able to observe the Selenium instance running here:

http://192.168.56.4:4444/wd/hub/static/resource/hub.html



The selenium console should look something like this:



[Screenshot: the Selenium console hub page]

Step 3 - Running the suite of tests



With an instance of Selenium available on the virtual machine, you should now be able to run Intern by issuing the following command:

>> g test


Note: This command is ‘g’ for grunt and ‘test’ to run the Intern test suite against the virtual machine.



This command first checks whether the test application server is running and launches it if not. Once it is satisfied that the server has started completely, it will launch the Intern test runner. You should see the test suite run through a large number of tests (about 110 tests, run twice for two different browsers, at the time of writing) and log them to the console. Hopefully they will all pass.



This is the sort of output you should expect to see once the initialisation steps have been performed:

...
>> Starting 'AccessibilityMenuTest' on chrome
>> Test page for 'AccessibilityMenuTest' loaded successfully
>> AccessibilityMenuTest: Find the menu element
>> AccessibilityMenuTest: Find the heading text
>> AccessibilityMenuTest: Find the menu items
>> AccessibilityMenuTest: Find the first target
>> AccessibilityMenuTest: Find the second target
>> AccessibilityMenuTest: Find the first menu link - which links the first target
>> AccessibilityMenuTest: Hit the browser with a sequence of different accesskey combinations and the letter 's' for a nav skip
>> Starting 'SemanticWrapperMixinTest' on chrome
>> Test page for 'SemanticWrapperMixinTest' loaded successfully
>> SemanticWrapperMixinTest: Check NO_WRAPPER dom is correct
>> SemanticWrapperMixinTest: Check GOOD_WRAPPER dom is correct
>> SemanticWrapperMixinTest: Check BAD_WRAPPER dom is correct
>> SemanticWrapperMixinTest: Check LEFT_AND_RIGHT_WRAPPER dom is correct
>> Starting 'Pie Chart Test' on chrome
>> Test page for 'Pie Chart Test' loaded successfully
>> Starting 'PublishPayloadMixinTest' on chrome
>> Test page for 'PublishPayloadMixinTest' loaded successfully
...



Note: When a test run is complete the Jetty server is left running. If there are any failures, you can immediately check whether something catastrophic has occurred simply by viewing the test web script in a browser.

Useful commands



There are several grunt commands that can be used with Vagrant, the test application server and Intern. Here are the ones we've already seen and a few more, all of which should be run from the directory shown above:

g vup
    Short for ‘vagrant up’, this will launch a virtual machine instance with Vagrant

g vdown
    Short for ‘vagrant down’, this will stop a running instance of a Vagrant virtual machine

g vclean
    Short for ‘vagrant clean’, this will delete an existing instance of a Vagrant virtual machine

g test
    Run up an instance of the test application server and run the Intern test suite against it

g shell:startTestApp
    Start the test application server

g shell:stopTestApp
    Stop the test application server if it is running

g nt
    Short for ‘new test’, this command will restart the Vagrant virtual machine, restart the test application server and finally run the Intern test suite against them

g utd
    Short for ‘update test deployment’, this command will bring down the test application server, rebuild slingshot and then relaunch the test application server with any file modifications that have been made


Adding a test



If you wanted to add a test of your own there are three steps to the process:



  1. Create a test web script


  2. Create an Intern test


  3. Add the test to the Intern Suites.js file


Let's investigate those steps individually:

Create a test web script



Test web scripts all live in this location or a sub-directory of it:

{alfresco}/code/root/projects/slingshot/tests/testApp/WEB-INF/classes/alfresco/site-webscripts/alfresco


Each web script requires three files - a JavaScript file, a FreeMarker template file and an XML description file. There are examples in the directory that can be looked at and copied. As with web scripts in the main applications, the format of the file names is important. With a correctly written test web script you should be able to view the test model in action at a URL such as:

http://localhost:8089/aikau/page/tp/ws/AccessibilityMenu


This example test web script has an AccessibilityMenu widget in its model. It isn't very pretty as rendered here, but it isn't supposed to be.
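As a rough sketch, the JavaScript controller of such a web script simply builds a JSON model for the page; the widget path and empty configuration below are illustrative, so check the existing examples in the directory for the exact conventions:

model.jsonModel = {
    widgets: [
        {
            name: "alfresco/accessibility/AccessibilityMenu",
            config: {
                // widget-specific configuration for the scenario under test
            }
        }
    ]
};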

Create an Intern test



The actual test files are created here or in a sub-directory of it:

{alfresco}/code/root/projects/slingshot/tests/alfresco


Intern tests are written in JavaScript using a promise-based library called Leadfoot, which is provided by SitePen, the company behind Intern itself. The Leadfoot documentation is available online. Strategies for writing Selenium tests are a topic in themselves and I'm not going to investigate them here; needless to say, a test emulates the behaviour of a person using a browser to interrogate the test web script as rendered.



The specific way in which an Intern test addresses a test web script can be seen by looking at any of the existing tests. Pay close attention to this part of most tests, which is the point at which the test web script is loaded:

...
var browser = this.remote;
var testname = 'AccessibilityMenuTest';
return TestCommon.loadTestWebScript(this.remote, '/AccessibilityMenu', testname)
...
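Putting that together, a minimal test suite tends to follow this general shape; this is a hedged sketch based on the snippet above and Intern's object interface, and the module paths, selector and assertion are assumptions rather than copies of a real test:

define(["intern!object",
        "intern/chai!assert",
        "alfresco/TestCommon"],
    function(registerSuite, assert, TestCommon) {

    registerSuite({
        name: 'AccessibilityMenuTest',

        'Find the menu element': function () {
            var browser = this.remote;
            var testname = 'AccessibilityMenuTest';
            // Load the test web script, then drive the page with Leadfoot commands
            return TestCommon.loadTestWebScript(this.remote, '/AccessibilityMenu', testname)
                .findByCssSelector("#AccessibilityMenu") // illustrative selector
                .then(function (element) {
                    assert(element, "The accessibility menu was not rendered");
                })
                .end();
        }
    });
});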



Add the test to the Suites.js file



Tests that should be run are all listed in this file in the array 'baseFunctionalSuites':

{alfresco}/code/root/projects/slingshot/tests/config/Suites.js
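Adding a new test is then just a matter of appending its path to that array; the path format below is illustrative and should simply match the existing entries:

var baseFunctionalSuites = [
    // ... existing suites ...
    "tests/alfresco/accessibility/AccessibilityMenuTest" // the newly added test
];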


Next steps



The testing framework supports testing against a local instance of Selenium rather than a VM instance as described above. It is also possible to run a coverage report on the test framework to indicate the amount of the code base being interrogated. The details of these alternative scenarios will follow in another post.
