
At Alfresco we support a large number of versions of our platform - somewhere over 30. The platform is also big, made up of many moving parts. Naturally, all of these can produce bugs.

 

We also have considerable work on enhancements: our small increments to the features and architecture of the software.

We bring quite a mix across from these sources into our scrum teams. We have to balance across:

  • Items the team discover they need to improve how they work - things that come out of retrospectives
  • Feature items - exciting new features, such as the new REST API we are working on
  • Architecture items - changes to make the platform easier to support and to add features to
  • Maintenance work - issues that our customers, community and internal users find on previous versions - the 29+ branches we are supporting.

 

The first three are relatively easy to size. We can estimate effort in story points, using complexity as the most significant indicator, while recognising that some things are not complex, just hard work.

 

We wanted to ensure we really were reserving capacity for maintenance as well as across the other three. We also faced a challenge: how can we forecast the number of maintenance items we can get done?

 

Everyone knows it is very hard to size a bug! We are lucky to have an excellent support part of our organisation that triages the issues, reproduces them and captures loads of information. Even so, they are hard to estimate.

 

As a team we also do not want our velocity to include fixing issues that we (or previous teams and members) let out the door. It does not seem right.

 

What we have done for a few sprints is use T-shirt sizes for these issues, recorded as labels on our Jira tickets. As we progress we discover more about each issue, and at that stage we resize - in an agile way we re-estimate constantly and revise our plans. So if we find a ticket is taking three-plus weeks to get done, and we originally thought it was a T-shirt size of extra small, we revise it to reflect the actual effort we spent.

 

We have been able to take the T-shirt sizes and equate them to story points. This gives us a relative ratio of the amount of work we are doing across the 4 sources of work, and ensures we are taking care of our support obligation while continuing to innovate in the open ECM marketplace with new architecture and features. A sketch of this mapping follows below.
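
As a minimal sketch of the idea, here is how such a translation could be applied to check the balance of work. The size-to-points mapping and the sample tickets are illustrative assumptions, not our actual figures:

```python
# A minimal sketch: equate T-shirt sizes to story points, then total the
# points per source of work. The mapping and tickets are made-up examples.
TSHIRT_TO_POINTS = {"XS": 1, "S": 2, "M": 5, "L": 13, "XL": 21}

tickets = [
    {"source": "maintenance", "size": "S"},
    {"source": "maintenance", "size": "M"},
    {"source": "feature", "size": "L"},
    {"source": "architecture", "size": "M"},
    {"source": "improvement", "size": "XS"},
]

totals = {}
for ticket in tickets:
    points = TSHIRT_TO_POINTS[ticket["size"]]
    totals[ticket["source"]] = totals.get(ticket["source"], 0) + points

grand_total = sum(totals.values())
for source, points in sorted(totals.items()):
    print(f"{source:<12} {points:>3} points ({points / grand_total:.0%})")
```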

 

At the moment this approach works well for us; however, we will continue to improve it and look at other ways of doing it.

 

How have you gone about it?

by Tristan Bagnall

Following up on my recent post, How to have engaging agile reviews, I wanted to share the experience of one team here at Alfresco.

I would like to call out a few of the aims from the review:

    • Find out how easy it was to consume the information we presented
    • Get people to play with the changes we had made
    • Start an engaging conversation with the consumers
    • Encourage people to part with their time
    • Listen to the stakeholder feedback, and use it to help us pivot


How did it go? What sort of thing are we developing?

The first piece of information that would be consumed by the stakeholders was the title of the invite, so this was carefully worded as a call to action:

'Do you want to guide us on how we are doing with the new REST API?'

Next was the content of the review.

    • We put ourselves in the audience's shoes:
      • we were careful to ensure we had thought through and planned what we were going to show,
      • and what we were going to ask.
    • We made our audience as wide as possible - a few senior managers and plenty of people who would use that part of the product. We socialised the review and got referrals that increased the list.
    • We provided some snack food and booked a big meeting room.
    • In the invite we tried to spark interest by saying it would be interactive, and included instructions on how to get set up for the interactive part of the review.

 

Thinking about our audience, we wanted to

 

    • show them how our APIs could be used - we settled on a 5-minute demonstration of a UI application, currently in development, that uses the new API
    • highlight the APIs that had changed (added, updated, removed)
    • show them how to simulate usage of the API - we chose to show some scenarios using Postman, though we could have used SOAP UI or SmartBear's API tester (a sketch of this kind of call follows below)
    • show them our new API Explorer, based on Swagger UI
    • let them play in their own isolated environment, and get updates as we supply them - we decided to use a Docker image / container
    • provide a Q&A and gather feedback - we decided to use a Skype group chat.
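
To give a flavour of what those scenarios exercised, here is a minimal sketch of the same kind of call made from Python with the requests library. The host, credentials, endpoint path and response shape are illustrative assumptions, not the exact ones from the review:

```python
# Minimal sketch of exercising a REST endpoint the way the Postman
# scenarios did. The base URL, credentials, endpoint path and response
# shape are placeholders for illustration only.
import requests

BASE_URL = "http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1"

response = requests.get(
    f"{BASE_URL}/nodes/-root-/children",  # hypothetical: list children of the root node
    auth=("admin", "admin"),              # placeholder credentials
    timeout=10,
)
response.raise_for_status()

# Print the name of each returned entry (assumed response shape).
for entry in response.json()["list"]["entries"]:
    print(entry["entry"]["name"])
```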


Overall the review went well. Once people started playing with our APIs we got some very valuable feedback over Skype.

A few areas we would look to improve on were:

 

Sections of the review:

 

  1. We had a lot to show in the Postman scripts, and going through them took a while, although the questions asked during that part showed it was valuable.
  2. People loved being able to play - there and afterwards. This led to deeper, more useful and more specific feedback.

 

Tools:

 

  1. Webex - we had difficulty getting it to work on Linux, to the extent that those on Linux were unable to present.
  2. Skype is not a suitable feedback tool, especially at scale, as multiple cross-conversations are forced into one thread.


Next steps... can we do this with the Alfresco Developer Community?

by Tristan Bagnall

I recently started having discussions with teams and stakeholders on what a sprint review, or kanban review milestone should look like.

There seemed to be varied opinions, ranging from 'it is a chance for us to show fellow engineers how we implemented the code' to 'it is a chance for the product people to sign off each story'.

In this post I will cover how to get better engagement with your stakeholders.

Let's head back to core Scrum - the Scrum Guide's Sprint Review event. There is quite a detailed set of steps that those involved must consider, but it boils down to:

'A Sprint Review is held at the end of the Sprint to inspect the Increment and adapt the Product Backlog if needed.' - Scrum Guide, Scrumguides.org


But what does that mean, and is the description in the scrum guide the only way to achieve it?

The review is the first chance for the team to engage with the stakeholders and get a feedback loop in place. The team can then pivot based on that feedback.

How to engage stakeholders


Have you ever sat in a review, confused, waiting to be delighted? Well, it is likely that that team, like all teams, is on a journey. A journey of discovery, of learning how to take a problem, work on it and deliver it as working software, to live.

Wow, you might say, that is idealistic. No team can take a problem and put it live. There are those who can, but the majority cannot... yet. They are on their way there, and given time and support they too can get there.

Here are some examples of how teams may engage stakeholders in sprint reviews, starting with the earlier stages of the journey:

    • A play-by-play description of the team's activities

      'We picked up story 123, then we wrote some tests to cover scenarios xyz. Once those were done we got a red build, as the tests failed. The writing of the application code was quite quick. It passed the tests, but then we realised we needed to add a few more tests as the code was....'
    • A story by story demonstration

      'We did 10 stories this sprint, here is the list. Let me start with story 123. I have a virtual machine set up to show you this story. On the login page we added the cancel button. As you can see it is in the correct branding styling, and is screen-reader ready. Okay, now let me show you story 124. This one needs me to log in. As you can see the user's home page has changed after login, and it now shows their username in the top right. Great, so the next in the list is story 125. Let's close the browser and go back to the login screen. So as you can see we have the cancel button I showed you in 123, but we also have the login button. In this story we corrected the rounded corners on that button. Over to Terry for the next story...'

 

    • A narrative through the system pointing out where the changes are

      'So today we are going to show you the changes we have made to the system. Let me get to the login page. Here you can see we added the cancel button and changed the login button. There were some issues around the curve on the button. Okay, so now we are in, you can see we moved the user's name to the top right of the page....'
    • An experience for the stakeholders

      'Hi, thanks for coming along, did you all bring your machines? Great, so here are a few highlights of the changes we made and where they can be found. I will leave them up for your reference. Naturally we are after feedback on these, but we would like feedback on the general system.

      Please go to http://teamA.stable.greatbrand.com/ and play. The team are all here to answer questions over IM / voice / video. They will also wander around to find out what you think. All feedback is good feedback.'


Where are you? Any other examples? - send them to me and I can add them.

I often find that when teams move from working as a collection of individuals with a shared purpose, but in different silos, they need a helping start on when to communicate with each other.

Naturally there are points in time where Scrum creates the opportunities, but those opportunities are not enough for a performing team.

I don’t intend on going into how to communicate in the Scrum ceremonies, but rather share some thoughts on how to start teams effectively talking.

Here are the guidelines I give teams:

When you are about to pick up a new user story, have a quick chat with the team - all the cross-functional disciplines (coding, testing, documenting, deploying, user experience, etc.) - to:

    • … ask if there is anything you can do on a user story currently in progress, to help get that completed first.
    • … make sure the story is still good, incorporating anything we may have learnt so far in the sprint. Include the PO in this.
    • … make sure the plan for how to complete the user story is still right - it might need updating based on what we have learnt so far in the sprint.
    • … make sure the tasks represent the plan to complete the user story and have enough information to enable anyone to pick them up.
    • … ensure the right people are available to work on the user story. If you have specialisms in the team, make sure there is someone from each cross-functional discipline to work on the user story, such as a tester to ensure a coder knows what to produce to pass the tests.



This conversation would normally take anywhere from 30 seconds to 5 minutes, unless we have learnt something that causes the plan to change considerably.

When you are about to pick up a new task:

    • … ask other team members if they need any help on what they are working on, to help them get their task completed.
    • … check with another team member that your approach is going to be correct for the task at hand. For example:
      • a coder may check with a tester before implementing code, to ensure they understand the tests, edge cases, negative paths, etc. that they need to cover;
      • a tester may check with a coder or user experience on how they are thinking of adding a button to the user interface, to ensure their automated tests would capture it.


This is a start for teams, and once they get into the flow they will adjust and improve, probably without even thinking about it.

Story Points?


by Tristan Bagnall



Recently I have been asked quite a bit about story points; here are some of the answers I have given.

To give some context and scope around this post, here are some quick facts I have learnt about story points:

    • For story points we use an altered Fibonacci sequence: 1, 2, 3, 5, 8, 13, 21, 40, 100. (Some tools use 20 instead of 21.)
    • Story points are abstracted from elapsed or ideal time.
    • They are like buckets of accuracy / vagueness.
    • The larger they are, the more assumptions they contain, and the larger the probable complexity and therefore effort.
    • They are numbers, allowing the use of an empirical forecast (see the sketch after this list).
    • They are used by the Product Owner and enable them to do forecasting - the PO should find themselves being asked, 'When will I be able to have a usable checkout (or other feature)?'
    • They are used on user stories, a.k.a. product backlog items (PBIs).
      • Epics are included as user stories, even though some tools have adopted a taxonomy that suggests epics are different from user stories.
      • They show the relative effort and complexity of a chunk of work.
        • They are on a ratio scale - 8 story points is four times as much effort as 2 story points (2 × 4 = 8).
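
As a minimal sketch of the kind of empirical forecast this enables, assuming a made-up velocity history and backlog total:

```python
# Minimal sketch of an empirical forecast from story points.
# The velocity history and remaining backlog are illustrative numbers.
import math

velocity_history = [21, 18, 24, 20]   # points completed in recent sprints
remaining_points = 160                # points left in the product backlog

average_velocity = sum(velocity_history) / len(velocity_history)
sprints_needed = math.ceil(remaining_points / average_velocity)

print(f"Average velocity: {average_velocity:.1f} points/sprint")
print(f"Forecast: about {sprints_needed} sprints to finish the backlog")
```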

       

There is plenty of literature out there about story points, estimation, etc. This is not meant to be exhaustive, but I would encourage everyone to read about them.

Why not use man days instead?

Everyone has an opinion on what a man day is - it is kind of mythical, as it means so many things to different people.

Man days suggest that there is little complexity and that we are certain about what needs doing - after all, days can be divided into hours (24ths), so they seem very accurate.

Man days also start to give an expectation of a delivery date, even if they are padded out by saying they are ideal man days. However, once you start with ideal man days you get into the confusing realm of what is ideal and what is really happening. For example:

       

    • 1 ideal man day might take 2 elapsed man days, as the person is only spending 50% of their time with the team (a 50:50 split).
    • But in reality they are context switching every 30 minutes, so the time split is really less than 50% - context switching is very expensive and leads to poor-quality work. So the real split might be something like 40:40:20.
    • On that split, 5 man days are really only about 2 ideal man days (40% of 5 days), as worked through in the sketch below.
    • At this point a large debate normally starts, with boasts about how easily someone can context switch and claims that these (or any) figures are wrong.
    • At the end of the debate there is a lack of clarity, and therefore the man days have become meaningless.
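
A tiny worked example of that arithmetic, using the illustrative splits from the list above:

```python
# Worked example of how a time split deflates elapsed man days into
# ideal man days. The 50:50 and 40:40:20 splits are the illustrative
# figures from the list above.
def ideal_days(elapsed_days: float, team_share: float) -> float:
    """Ideal man days delivered, given the share of time spent with the team."""
    return elapsed_days * team_share

print(ideal_days(2, 0.50))  # 50:50 split: 2 elapsed days -> 1.0 ideal day
print(ideal_days(5, 0.40))  # 40:40:20 split: 5 elapsed days -> 2.0 ideal days
```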


It is generally accepted that it is better to work out the effort and then measure how a team progresses through that effort.

Why the sequence of numbers?

As we continue to have conversations about an item of work we get to know more about it: we learn about its complexity, we remove uncertainty and we get an idea of the effort involved to deliver it. While we do this we break the work down into more manageable parts. Through all this we are testing assumptions - removing them, correcting them or validating them.

As we resolve all these moving parts we can become more accurate about how much effort is needed. While an item is big we have a lot of assumptions, and because of that we are pretty vague.

So how does this tie back to the sequence of numbers?

Since we can be more accurate with the smaller items, we need more buckets, closer together, to put those chunks of work into. That is why the first part of the sequence is ideal: 1, 2, 3, 5, 8.

Then we have the large chunks with lots of assumptions - the epics - that need to be broken down before we can work on them: 40, 100.

In between are the chunks we have become more familiar with and partially broken down, but which are still too big: 13, 21.
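
As a minimal sketch of the 'buckets' idea, assuming made-up raw effort guesses, an estimate can be snapped up to the next bucket in the sequence:

```python
# Minimal sketch of the "buckets" idea: a raw effort guess is snapped
# up to the next bucket in the altered Fibonacci sequence.
BUCKETS = [1, 2, 3, 5, 8, 13, 21, 40, 100]

def to_bucket(raw_estimate: float) -> int:
    """Round a raw effort guess up to the nearest story-point bucket."""
    for bucket in BUCKETS:
        if raw_estimate <= bucket:
            return bucket
    return BUCKETS[-1]

print(to_bucket(4))    # -> 5: small items land in nearby, fine-grained buckets
print(to_bucket(30))   # -> 40: big items fall into coarse, vague buckets
```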

How small should a user story be before I start working on it?

Another way of putting the question is:

    • how much uncertainty should remain;
    • how many assumptions should be cleared up;
    • how much effort should there be;
    • can it be finished in the sprint, meeting the Definition of Done?

before I pull a user story into a sprint or into progress on a kanban board?

This depends on several factors:

    • How much uncertainty are you comfortable with?
    • How will the remaining assumptions affect your ability to deliver the chunk of work?
    • What is the mix of sizes you are pulling into a sprint?

             

As with all things agile there are exceptions and generalisations. One observation I have made is that many teams think they can take large chunks of work into a sprint; however, this means there are lots of assumptions still to be worked out, and lots of vagueness and uncertainty. This leads to a lack of predictability and consistency in the sprint's delivery.

             

Therefore I have normally advised that the largest single chunk going into a sprint is 8 story points, but there should always be a mix of sizes going into a sprint.
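
As a minimal sketch of checking a sprint plan against that guideline, assuming a made-up list of candidate items:

```python
# Minimal sketch of the sprint-intake guideline above: no single chunk
# larger than 8 points, and a mix of sizes. The candidates are made up.
MAX_SINGLE_CHUNK = 8

candidates = [2, 3, 5, 8, 8, 1, 3]   # story points of items proposed for the sprint

too_big = [p for p in candidates if p > MAX_SINGLE_CHUNK]
has_mix = len(set(candidates)) > 1

print("all items small enough:", not too_big)
print("mix of sizes:", has_mix)
```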

by Tristan Bagnall

Not sure how to start sizing stories?

A clever agilist once showed me a really useful technique to help teams start with story points and break the initial barrier on estimating. Here it is, with my twist:

             

             

Pick a story that you think is small, perhaps even the smallest on the wall:

    • well understood,
    • not many assumptions,
    • understood by all the team,
    • little effort to get done.

    1. Find all the similar-sized stories and label them all as small.
    2. Look for the next significant size up and label them medium.
    3. Look for the largest stories and label them large.
    4. Now go back to the small stories.
    5. Mark all the small stories on a scale of small, medium and large. Try to think of the medium as about twice as large as the small, and the large as about three times as large as the small.
    6. Move on to the medium-sized stories and mark them all as small, medium and large.
    7. Move to the large stories and mark them as small, medium and large. Try to think of the medium as being half (or less) the size of the large.

               

You should now have user stories labelled and marked:

    • Small - Small
    • Small - Medium
    • Small - Large
    • Medium - Small
    • Medium - Medium
    • Medium - Large
    • Large - Small
    • Large - Medium
    • Large - Large

We can use these to translate to story points:

                Small        Medium   Large
    Small       1            2        3
    Medium      5            8        13
    Large       20 or 21*    40       100

               

* Depending on your tool you may find support for 20 or 21.
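
For teams tracking this in a tool, the same translation can be written down directly. A minimal sketch; the label pairs and points are exactly those from the table above:

```python
# The label-to-points translation from the table above, written as a
# lookup. 21 is used here; swap in 20 if your tool requires it.
LABEL_TO_POINTS = {
    ("Small", "Small"): 1,   ("Small", "Medium"): 2,   ("Small", "Large"): 3,
    ("Medium", "Small"): 5,  ("Medium", "Medium"): 8,  ("Medium", "Large"): 13,
    ("Large", "Small"): 21,  ("Large", "Medium"): 40,  ("Large", "Large"): 100,
}

print(LABEL_TO_POINTS[("Medium", "Large")])  # -> 13
```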
