So now we are at Question 3:
Do the UI stories count as "done", even when no software has been written?
"Count" is the operative word here. Do we mean "count" as in "keep track" or "tick off on a list" or do we mean "count" as in adding up some numbers and coming to a conclusion about actuals vs. estimates?
The purpose of saying a story is Done is to have a means of measuring the team's velocity -- how much working software gets built, on average, per iteration/sprint. Thus, in theory, you only really care about Done if the story is a coding story.
But you still need some way of determining whether the flow of stories out of the backlog will impede the team's velocity. Now we're talking lean. Are stories getting delivered into the "ready-to-build" backlog fast enough that the development team, given their average velocity, can always pull something off the backlog? Or are the developers going to sit around with nothing to do because stories won't be ready? Or, better yet, can we move on to the next phase early because we're going to blow through all the stories originally planned for this sprint/phase?
It is important to see the status of the higher-level analysis and UI research tasks that precede the coding stories. If the UI work was substantial enough to be visible as a "story", then you have to be able to say whether it is not started, in progress, or done, and to have target completion dates, so the team can get some assurance about how and when the backlog may grow or shrink.
You can even assign and track analysis estimates and velocity in the same way that you track development velocity. But don't mix the two together; the velocities are measuring completely different things.
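For illustration, here's a minimal sketch in Python of what keeping the two tallies separate might look like; the track names and numbers are hypothetical, not from any particular tool:

    from collections import defaultdict

    # Points completed per sprint, kept per track -- never blended.
    completed = defaultdict(list)  # track name -> points done each sprint

    def record_sprint(track, points_done):
        # Record the points completed in one sprint for one track.
        completed[track].append(points_done)

    def velocity(track):
        # Average points per sprint for a single track only.
        sprints = completed[track]
        return sum(sprints) / len(sprints) if sprints else 0.0

    record_sprint("development", 21)
    record_sprint("development", 18)
    record_sprint("analysis", 8)

    print(velocity("development"))  # 19.5
    print(velocity("analysis"))     # 8.0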
UI research and the subsequent problem definition and solution finding (design thinking) will either add new coding stories, provide details for existing stories, or remove stories. Where the research provides new details for existing stories, that information will increase or decrease each story's complexity. Likewise, adding and removing stories changes the estimated amount of effort in the backlog.
Being able to see when UI research is likely to conclude gives the team information about future events that could affect the backlog and helps everyone roadmap the project's progress.
The team needs to be able to see it like this: Our velocity is V. The total point value of our backlog is 4xV, so the project should complete in 4 sprints. BUT we still have two UI research tasks to complete, and our experience so far on this project indicates that each of those tasks could add as much as 1xV in changes to the backlog. So we have a total potential story-point increase of 2xV. Therefore, as of today, our best estimate of finishing is in 5 sprints, but we may need as many as 6.
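That arithmetic is simple enough to sketch in Python. The numbers are the hypothetical ones above; reading "best estimate" as assuming roughly half the potential growth materializes is one interpretation, not a rule:

    import math

    velocity = 20                       # V: average points per sprint
    backlog_points = 4 * velocity       # ready-to-build backlog = 4xV
    pending_research_tasks = 2
    max_growth_per_task = 1 * velocity  # each task could add up to 1xV

    potential_growth = pending_research_tasks * max_growth_per_task  # 2xV

    # Assume about half the potential growth actually lands (one reading
    # of "best estimate"); the worst case assumes all of it does.
    best_estimate = math.ceil((backlog_points + potential_growth / 2) / velocity)
    worst_case = math.ceil((backlog_points + potential_growth) / velocity)

    print(f"Best estimate: {best_estimate} sprints; worst case: {worst_case}")
    # Best estimate: 5 sprints; worst case: 6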
(If you are curious about assigning story points for tracking velocity, here's one nice way to do it.)
Maybe this is a good place to introduce something like Cucumber: a story that can be "executed" as software, yet still remains written as clean, human-readable text, and only counts as "done" when the executable version of that text is passing. It could make the story and the code interconnected.
So I would imagine pulling a story from the backlog would mean "write the story so that code can run it", and marking it "done" would mean "does it pass?".
I don't know of many frameworks like Cucumber that can fill this niche, and it's a pity - I would rather see more competition in this field. There are some "runnable specification" behemoths out there, of that I am sure, but nothing agile enough for the described purpose.
And if we go the Cucumber way, then story points could be estimated from the number of "features" or written "scenarios" a story has (speaking in Cucumber terms).
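A toy sketch of that counting idea (in Python rather than Ruby, with a made-up file name) could be as simple as tallying the Scenario lines in a .feature file:

    def count_scenarios(feature_path):
        # Count Gherkin scenarios (plain and outlines) in one feature file.
        count = 0
        with open(feature_path, encoding="utf-8") as f:
            for line in f:
                if line.strip().startswith(("Scenario:", "Scenario Outline:")):
                    count += 1
        return count

    # "checkout.feature" is hypothetical; point this at a real feature file.
    print(count_scenarios("checkout.feature"))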
At ThoughtWorks, teams have used frameworks like FIT or Cucumber to move more easily from analysis to a baseline functional regression test suite. The primary question is whether it makes the people in the analysis or QA test roles more or less efficient. If you have one of each available to pair, it can be efficient.
Of greater concern to me is correlating the number of acceptance tests to story complexity. Story points should represent the development team's assessment of the coding effort. Since acceptance tests generally focus only on the "happy path" from a user's point of view, assigning points based on those test counts would exclude the necessary discussion of coding risks and tasks that developers should have when planning each iteration.