Tuesday, September 25, 2012

Thoughts on Moving Beyond Scrum

Moving Beyond Scrum, a posting by Todd Charron on InfoQ.com, has been making the rounds of our team lead discussion groups. Here's my take on what the writer says and the comments he received:

I’m with the voices that say that Agile is a philosophy and a set of principles, not a practice. When we live those principles, we choose and adapt practices to best fit the current situation. Thus, each project team may do things a little differently from the others, and one project team may change what it does over the life of its project.

The key drivers of being Agile are a focus on business value, delivering working code as frequently as possible while maintaining a sustainable pace, and constant improvement of process and outputs by regularly iterating (review, change) over the results.

Practices that facilitate being Agile include close communication between business representatives and the development team (where UX is part of the development team, not handled as an over-the-wall service); pairing on programming, designing, user research, and testing; test-driven development and continuous integration; user research; and rigorous ETA projection and prioritization based on business value and team velocity.

Attitudes that facilitate being Agile are tolerance for ambiguity and unknowns; flexibility to compromise; interest in hearing and trying new ideas; satisfaction in teamwork (getting things done) over heroics (getting credit for being right).

Shu-Ha-Ri describes three stages of learning: beginning by following fixed patterns, then experimenting with changes to the patterns, and finally acting intuitively, transcending the patterns. Much of the article's discussion was about this, and it explains why the writer and his commenters feel there is a problem: too many people and organizations are stuck in the Shu phase (I read about Agile, I got a Scrum certificate, we always do it like this) and not enough are at the Ri stage.

Scrum, whatever its original intention, has become a defined set of practices. The Ri people I know have always treated Scrum as a project management method, not a description of how software development happens. Scrum in itself is not magically agile. Ditto Lean and Kanban.

Entropy, gravity, and human nature combine to emphasize or depend on predictable patterns and relationships. Difficult things fall apart faster, bodies at rest tend to stay at rest, following directions is easier than making a trail or reading a map. Agile works when you bring together a collection of people who prefer to go against these patterns. Therefore, the first challenge is to collect the right people.

Collecting the right people becomes difficult in large, established organizations unless the organization’s culture already rewards innovators and explorers.

Starting small with selective teams and projects is generally the best way to introduce change, but such change is never sustained if it depends on outside coaches and consultants, or if the organization's higher leadership does not support and exemplify the change through its own behavior. In other words, long-lasting change needs to start simultaneously at both ends and work its way into the middle.

Things that can be done to foster a JIT delivery culture:
· Evangelizing – reading & discussion group(s); brown bag sessions (internal & external presenters); publicizing internal examples; agile game sessions.
· Hiring
· Project staffing – all intentionally “agile” projects should have sufficient staffing at the Ha level and a Ri member or coach.
· Co-location – UX, QA, Dev, and Business sit together and have mixed meetings beyond IPMs (iteration planning meetings) to discover and negotiate project requirements and design (google "Divergent and Convergent thinking" to learn more).
· Shadowing
· Regular process and structure to identify negative patterns, think of solutions, and conduct real-world tests (retrospectives make this happen at the project team level; what’s the equivalent at levels above that? Are the right people participating? Is the right information surfacing?)
· Demonstrative leadership – actions and words are aligned.

Friday, July 13, 2012

How big is a 1?


Nothing sinks a project faster than a poor estimate, or rather, an estimate that sets the wrong expectation. Part of the problem with setting expectations is finding the right words to paint a picture of the development effort.

It usually starts to go wrong when somebody tells a prospective client that the team estimates work in story points and 1 point “is about a day”. Wrong!  And too late.
 
The client will never lose the impression that a point is really just another word for “a developer day,” and they will always be suspicious about why it’s taking weeks to do what was apparently rated as a few days’ work.

A point is not always a day. I’m on teams right now that are only getting 1 or 2 points done per week. I was on a project where it took 3 weeks to complete a one-point story. I’ve also been on a project where the stories were so small, we stopped using points completely and did 3-5 stories a day.

So how big is a 1? Or rather, what should a 1 represent?

All we want is some way to express time and effort relative to everything else that’s on the table.  

A 1 seems like an easy way to say that of all the things there are to do, each story in this pile will likely take about the same effort and time to do and they are all smaller than any other story on the table (or in the list).  From there, 2, 3, and 5 are useful to express increasingly larger scales of work, with anything above 5 or 8 signaling something big enough to require decomposition. 
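
As a rough sketch of that convention (all story names and point values are invented for illustration), the scale boils down to something like this in Python:

```python
# Hypothetical stories sized on the relative 1-2-3-5 scale described above.
sized_stories = {
    "Edit profile field": 1,
    "Reset password": 2,
    "Import contacts file": 3,
    "Sync with external calendar": 5,
    "Rebuild reporting module": 13,
}

DECOMPOSE_ABOVE = 5  # or 8, depending on the team's convention

for story, points in sized_stories.items():
    if points > DECOMPOSE_ABOVE:
        print(f"'{story}' ({points} points): too big -- break it down")
```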

But although it is convenient to use numbers to express relative positions on a sizing scale, numbers tempt people to apply them, without adjustment, to quite unrelated time scales.

Eliminating the numbers and using other mental images may be better for illuminating story sizing and starting a discussion around the development effort that might be involved. Let’s try some different scales:

     Cherry, apple, cantaloupe, watermelon, holiday fruit basket.
     Skateboard, bicycle, Vespa scooter, Ford Escort, 18-wheeler moving van.
     Pebble, rock, boulder, Corinthian column, Stonehenge.
     Stick horse, pony, racehorse, the Budweiser wagon, a circus carousel.

The next time you are sizing stories (and yes, call it sizing, not estimating), it could be interesting to forget about numbers and try using some flashcards with pictures of objects instead. Your team might end up saying, “This week we finished three little ponies.” Or, “The last time we did a beer wagon, it took a week and a half.”

Sunday, June 17, 2012

Lifecycle of a project: What's wrong with this picture?

This weekend, I attended a Design Hack session for CivicLab, a new effort in Chicago inspired by Milwaukee's Bucketworks.org, an institution that fosters member-driven civic and creative projects and activities.

There were several discussion groups taking up various themes. Since it seemed closely related to what I do at work, I joined one about the lifecycle of activist projects and campaigns. Quite a few other software geeks took part, too.

The facilitator kicked things off by putting the following illustration on the board. Take a look, do you see anything wrong with this picture?


I quickly used up my goodwill with the facilitator by pointing out that Inception was a noun and Execute a verb. You don't execute at the end of a process; you execute all the way through a process. Someone else jumped in and observed that a lot of preparation has to happen before you can do an Inception. Another person wondered where feasibility studies fit. And so on.

In the end it was a good discussion, enough for two or more blog posts, but what I've been pondering all day is what kind of diagram would have said it better. This is what I came up with:



Execution happens throughout a process. Throughout a process, research and investigation are happening to detect problems and opportunities. At any time, ideas may pop up, or you may set aside specific brainstorming sessions. Ideas, problems, and opportunities have to be prioritized. Problems and solutions have to be validated and evaluated: are they real problems? Will the solutions work? Will people adopt the solution as designed? Is it cost effective? Prototypes at varying levels of fidelity are in order throughout the process.

Feeding into all of this is an understanding of the context; the bigger environment of the problem and the people you want to assist: politics, money, geography, values, culture, etc.  Understanding the context and where you are in the process determines what tools will be useful in the execution of any phase.  

Tuesday, May 22, 2012

We Don't Estimate Hours!

Now that I'm at Pathfinder, many new clients are coming in through the UX door and they are not familiar with Agile. I often have to explain how estimating and story points work.

If you search the Internet, there are tons of links about Agile estimating and story points, so it seems redundant to add another, but I really wanted to blog about this, if only to organize my favorite references.

So here are four links that cover just about everything on the subject (plus they'll send you to other places anyway). And if you don't have the patience to comb through these references, I've added my basic explanation to customers at the end.

1. I've known Anand Vishwanath for a long time. I value his overview for the thorough job it does of covering all the angles: http://bit.ly/MdH1tS

2. I love Dan North's post because it correctly, and effectively, shoots down that compulsive estimation marathon most companies want to indulge in: http://bit.ly/J9DvxF

3. Jay Fields is another former Thoughtworker who sums up the how-tos concisely: http://bit.ly/LbKy9Q

4. Mike Cohn is the ultimate writer in this area and this link leads to a chapter from his book on Agile estimation and planning, "Chapter 6: Techniques for Estimating": http://bit.ly/K6pvqQ

What I tell customers: We estimate expected effort, not hours. My teams count effort in points -- one is small, two is something more than small, three is something more than a one and a two, five is a large effort, and going above five indicates the story is stated too broadly or contains too many unknowns to be manageable within an iteration.

Effort is either a measure of complexity (figuring out the unknown) or of time spent on something that is naturally slow and tedious, or a combination of the two. Effort doesn't translate directly into hours, but a team can use effort to figure out approximately how much can get done per iteration. It takes a team several iterations (2-5) to get a handle on its velocity (story points completed per iteration), but you can use yesterday's weather to get a starting range for estimating a project.

Yesterday's weather comes from looking at what a similar team (size, experience) accomplished in a similar situation (familiarity with technology and domain). You only need one previous project to get this information. Use the averages - mean, mode, median - to figure out a likely range of outcomes and prioritize stories accordingly. Add 1-4 iterations for ramp-up (because both sides have to get up to speed) and 2-4 weeks for wrap-up (bugs, testing, and the unforeseen). Review and adjust the plan as you complete each iteration.
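
To make the arithmetic concrete, here's a minimal sketch in Python. Every number below is invented for illustration; substitute the velocities from your own comparable project:

```python
import math
import statistics

# Hypothetical per-iteration velocities from a similar team on a
# similar project (size, experience, technology, domain).
past_velocities = [8, 10, 9, 8, 12, 10, 8]

backlog_points = 120   # estimated total effort for the new project
ramp_up = 2            # iterations for both sides to get up to speed (1-4)
wrap_up = 2            # iterations (~2-4 weeks) for bugs, testing, the unforeseen

# Use the averages -- mean, mode, median -- to bracket likely velocities.
averages = [
    statistics.mean(past_velocities),
    statistics.median(past_velocities),
    statistics.mode(past_velocities),
]
low, high = min(averages), max(averages)

# A higher velocity means fewer iterations, so it sets the best case.
best_case = math.ceil(backlog_points / high) + ramp_up + wrap_up
worst_case = math.ceil(backlog_points / low) + ramp_up + wrap_up

print(f"Plan for roughly {best_case}-{worst_case} iterations")
```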



Tuesday, March 13, 2012

Acceptance Criteria and UI Controls

Lately, I've been working with some newer team members who haven't done user stories or acceptance tests before. Figuring out the best way to break up stories or write the tests leads to some good discussions where I have to think carefully about the Whats and Whys.  Like today.

Today, I saw that someone had created a story specifically to say that there would be hover-overs on such-and-such screen. I pointed out that there were already three stories for the functionality the hover-over would present. “But if I don’t do that,” my teammate said, “then I have to write acceptance tests in each story for when the hover-over appears and when it goes away, and I don’t want to do that. This way, I only have to write those tests once.”

Well, no. You usually don't need to write any stories or tests for GUI widgets, technical elements, or UI controls; you only need to focus on the business aspects.

The user may have personal preferences or expectations about how a UI should work, but generally, those are unspoken assumptions. The user really cares about achieving some goal, and the precise set of tools is incidental. Will there be a picklist or radio buttons? A field that you type in or a calendar widget? A single field for a phone number or three fields? In each case, the user doesn't really care as long as the interaction delivers the desired results, reduces the likelihood of mistakes, and minimizes their effort.

Generally, whenever the UI is using a known convention, the user story is about the goal and not how the system lets the user achieve the goal: "As a user, I want to enter a phone number so that ..." "As a user, I want to enter a date range, so that ..." "As a user, I want to select X, ..."

Aside from setting out acceptance criteria for specific business requirements like default selections and required fields, we don't have to write explicit tests for the types and behavior of controls; we assume behaviors like "the radio button shows as selected when you click it" or "the droplist retracts when focus moves away" are already well tested. Of course, a dedicated QA always does a sanity check for correct UI behavior, but acceptance tests are meant to be somewhat less than that -- just enough testing to prove that the business logic is correct so that the application can be deployed to production or passed on for more rigorous testing.

Once an application has established a convention for doing something, a change in UI technology is when we do want to write user stories and acceptance criteria, in order to track that the technical change is planned for and will not adversely affect the business.

For example, when Ajax came along, a long-running application that I worked on underwent a significant facelift on certain pages to take advantage of the technology. In those cases, we had specific user stories that went something like this: "As the system, I want to use Ajax to look up X ..." The acceptance tests spelled out that we would see a spinner before that section of the page refreshed. And no surprise, we did find that some things on those pages were broken after we made the change.

For our project today, we are adding completely new functionality. There is no existing UI to break, and the user will not have to get over any previous training or expectations about the UI. We plan to use hover-overs, but hover-overs (or the alternative of showing/hiding fields when the user clicks into a row) are known conventions. Therefore, we don't need a separate user story that says "... use hover-overs on screen X"; instead, we just need separate stories for the business goals -- edit, delete, shorten URL, etc.

As part of the story details, or as a comment in the screenshot, it is fine to say something like, "Assume the use of hover-overs to show/hide the edit link." or "This story will use hover-overs to show/hide the link."

Within the acceptance tests, we only need a test that says "When I put focus on a list entry, I will see a link to do X."  We don't need a test to say the link is hidden when we move off the row, although the obsessive test-writer is always free to add that or expand the first test to say that.
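
As a sketch of how small that can be, here's a self-contained toy version in Python. The ListPage class is invented for illustration; it stands in for whatever UI driver the project actually uses (a Selenium page object, for example):

```python
class ListPage:
    """Toy stand-in for a real UI driver (e.g., a Selenium page object)."""

    def __init__(self, rows):
        self.rows = rows
        self.focused = None

    def focus_on_row(self, index):
        self.focused = index

    def visible_links(self, index):
        # Hover-over links only appear on the focused row.
        return ["Edit"] if index == self.focused else []


# The one acceptance test we actually need: focus reveals the link.
def test_focusing_a_list_entry_reveals_the_edit_link():
    page = ListPage(rows=["entry 1", "entry 2"])
    page.focus_on_row(0)
    assert "Edit" in page.visible_links(0)


# The optional extra the obsessive test-writer might add.
def test_link_hides_when_focus_moves_off_the_row():
    page = ListPage(rows=["entry 1", "entry 2"])
    page.focus_on_row(0)
    page.focus_on_row(1)
    assert "Edit" not in page.visible_links(0)


test_focusing_a_list_entry_reveals_the_edit_link()
test_link_hides_when_focus_moves_off_the_row()
print("acceptance tests passed")
```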

It's the simplest way we can write the stories to convey the objectives.