Showing posts with label done. Show all posts

Wednesday, August 19, 2009

Quote Series Part 2 - Estimates and DONE

This is part two of the series; to see what it's about, review part 1.

Quotes on "DONE"
One of the goals of the sprint is to measure what you have done - ObjectMentor coach
Burn-up should happen every day to stop scrunch stress (all things shouldn't finish on the last day) - ObjectMentor coach
If we’re not shipping our software when it’s ready, it’s poor business practice. If we’re not sure whether our software is ready, it’s poor software practice. - Ron Jeffries
Make Ship Happen - Todd Little
Quotes on "Estimates"
Ability to estimate correctly is not an ‘ability’ … it is a fluke and lucky guess - Roy Morien
Estimates only help us take action to get to the next step - unknown

Note: where I can, I've credited or linked the source of the quote. Finding the source of a quote is like chasing a ghost. When a mentor says something witty, you might not know they are quoting someone else. If you are aware of a more appropriate source for any quote, PLEASE put a comment on the post and I'll do what I can to validate it. I mean no disrespect to anyone. I believe the risk of an incorrect citation is outweighed by the value of sharing these wonderful nuggets.

Wednesday, July 29, 2009

DONE vs. Complete...

From time to time this topic comes up. Today it was raised in my RSS feed by Boggin of the EscApologist blog. He asks, "When is a task done?" He's on the right track, but I worried his list was too prescriptive. I see DONE criteria as a specific area of coaching and focus for all agile teams, and I think it is more important for people to understand the reasoning behind DONE than to have a solution handed to them. That understanding lets them find the right answer for themselves. So, here is the nugget I left for him:
Simply put, “DONE” is the full list of stuff the team needs to complete to put the story (set of tasks or full work effort) behind them. If there is work still remaining, then it isn’t done, because that work will either become technical/business debt or will delay the next work while it is being completed. This is why done can’t be measured simply by “deployed.”

This is also why teams I work with are allowed to use the word “complete” somewhat freely, but they use the word “DONE” with respect and reservation.

For every team, this list is unique… and every team should manage/evolve it as they mature. It is a critical part of agile success.

Oh yeah… and it should be filtered through the lens of business value (not tech complete), but this isn’t an issue if your team is cross-functional.

Anything you would add?

Thursday, July 9, 2009

Early Kills are an Improvement...

It's been a few days since I read it (so I apologize for the missing reference), but it came up in my mind again today. There was a recent discussion about the definition of 'success' in project research published by Forrester or Gartner. It seems that software projects became more successful as agile adoption hit the mainstream market, but project success metrics have declined in the last few years with widespread enterprise agile adoption. One side of the debate says it is because agile doesn't work in complex environments (that's ignorance, since I learned about agile in one of the largest, most complex Global 500 companies). The other side of the debate focuses on the definition of 'success'.

For example, if you were going to work on your project for 3 years in your old model and then it failed... wouldn't it be an improvement if you realized this way in advance? That's a lot of cost savings. It gets those people on something that they can be excited about sooner. It's less "stuff" to throw in the trash. It's less wasted time.

My point is that I agree with the side of the debate that says Agile can help you be successful by shortening feedback loops and creating transparency ESPECIALLY WHEN it leads to the project failing sooner!

Now, I want to take this a step further. Mike Cottmeyer wrote a great post yesterday about Real Option Theory and how to think about risk management. He reviewed an example where he chose to put a deposit on something to extend the window of time where he could gather more information to make the right decision between 3 options. His points surrounded risk management and how every choice (or non-choice) we make affects the economics of the project.

I took this a step further in my comment:
I think there is another piece to this. Some people make the mistake of treating the deposit as part of the future decision. For example, options 2 and 3 are even choices, but because we paid the deposit... we shouldn't "waste" it, and so we choose #2 over #3.

The deposit was a payment to extend time (as you described). Some people forget this and see it as a partial payment of the total cost and can't let go of it.

This is how companies keep projects going that should be killed, because "they already spent 12 million dollars on it". My response is, "and you want to waste another million?" Instead, they see the project as so close to complete that they can't throw it away.

I was once told that the main rule of an MBA education is you decide what to do moving forward based on costs/profit moving forward, not money already spent.
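That MBA rule can be written down as a one-line decision function. This is a minimal sketch with hypothetical numbers (the $12M/$1M figures are just the ones from the anecdote above); the point is structural: sunk cost is deliberately not a parameter.

```python
def should_continue(future_cost, future_value):
    """Forward-looking rule: continue only if remaining value exceeds remaining cost.

    Note what is *missing*: money already spent is not an argument,
    so it cannot influence the decision.
    """
    return future_value > future_cost

sunk = 12_000_000           # already spent -- irrelevant to the decision
remaining_cost = 1_000_000  # what finishing would still cost
expected_value = 400_000    # what the finished project is now worth

# The $12M never enters the comparison.
print(should_continue(remaining_cost, expected_value))  # -> False: kill the project
```

Anyone arguing "but we've spent 12 million" is trying to smuggle `sunk` into a function that, correctly, has no place for it.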
I encourage you to read Mike's post (he's a much better writer than I) and if anyone knows the reference I mentioned above, place it in a comment and I'll add it to this post for future readers!

Tuesday, June 23, 2009

Are Deadlines Important?

Olga Kouzina posted about deadlines on her blog here. She then posted a related discussion thread on LinkedIn's Agile and Agile Alliance groups. Next thing you know, there are 26 combined comments between the two threads!

She is basically questioning the core concept of deadlines. Do we need them? What value do they add? What negative impact do they carry? Unfortunately, some people mistakenly took this to mean that we should never work with goals, delivery dates, or time-based measurements. I had to clarify my stance later in the thread with the following:
I believe in and agree with groups that leverage timeboxes, iterations, and release dates. I see these as good metrics and motivating goals.

I believe that Olga and I were talking about Deadlines with a capital "D". These are the dates put in the ground before the team has committed to scope, effort, etc. We know the requirements will change as we work the project, why won't the date change also?

I've seen too many projects where you realize pretty quickly that the "Deadline" was picked due to a desire but no real business need or planning validation. These dates are a great way to de-motivate a team... after all, why work hard if you already know you can't make it?
When Olga asked why I might think we put up with deadlines, I responded with:
We stick to them because people with higher authority don't trust a better solution and force them on us.

So... it's our job to start educating the decision makers around or above us. This is a topic I recently blogged about.
Ben Linders summarized the agile point of view well with the following:
If there really has to be a deadline, then it's up to the PO, project managers and stakeholders to do whatever they can to ensure that the teams know what to deliver, and have an environment and the right conditions in which they can be most effective and efficient. IMHO the deadline should never be passed on to the teams! The teams commit to iterative deliveries of the highest priority user stories. That automatically leads to maximum result in the shortest time, which increases the chance that deadlines are met.
And finally, I like Vinay Krishna's points:
Can we achieve anything without a deadline? It looks funny to know that one starts a project and says that he doesn’t know when it will be finished.

I want to quote here the famous Parkinson Law which states that “Work expands to fill (and often exceed) the time allowed.”

This is important to recognize in software development because when we are not restricted by time and/or money, it’s easy to create an endless amount of features or strive for what we perceive as perfection.
Of course, we can all agree there are sometimes market reasons for deadlines. After all, some dates can't be moved (Y2K, elections, tax dates, etc.).

Thursday, June 18, 2009

I'm happy with the Toyota that doesn't work!

I have to expand on yesterday's post. The LinkedIn discussion has had a few more comments added to it and there are some valid viewpoints. But, because of the discussion, I found myself coming up with this metaphor to further answer the original question-
I ask Toyota to build me a cool new car.

They deliver to me a shiny new one-of-a-kind car. I love it. I check it over and every part is there and connected properly... looks great!

... it doesn't start.

If they forgot to put gas into it... that's just bad account management.

If they never started it themselves to prove it worked before delivery, the questions are-
Is the story DONE?
Do I give them credit for their work and pay the bill?
Do I create a new story to "fix" the problem (additional cost)?

Fundamentally, do I get value out of the work they delivered?

I think the metaphor leads to obvious answers, so now I'll play devil's advocate. The car I ordered is going in the showroom of my dealership. It will never get driven anywhere. It's a halo prototype to get people to come to my dealership. In this light... it is DONE. Why pay (in time or money) for the "fix"?

The counter-point is that you can't answer the question unless you understand the value you are trying to achieve.

Wednesday, June 17, 2009

Testing, Estimates and Burndown...

Adam Feldman posted the following question to the Agile Alliance forum on LinkedIn:
In your experience, what do you all do in estimating for testing? Do you include the time expected to test the story in the points allocated for the actual story, or is this normally not included?

The second part of my question regards burn down charts. If we are not allocating points to the actual testing of the story, is the burn down chart really only telling us what is built - not what is actually complete?
The first three responses answered that testing should be part of the team effort and not an after-thought. To help Adam find a way to explain this for the team and management, I focused on the concept of DONE criteria:
The fundamental question here is "what is your DONE criteria?".

If the team's DONE criteria is built (but not tested), then the burndown and estimates do not include testing. This will lead to an optimized development team, but potentially create problems for testing and ultimate delivery. Things will get built faster, but ultimately may be delivered slower (which is not upper management's goal).

If the team's DONE criteria is delivered business value to the user/customer (the true agile definition), then the estimate and burndown includes testing effort. Testers should be part of the team, tests need to be run within the sprint, stories should be completed within the sprint including testing.

This isn't easy, but the second option is the mature one.

There are two pieces to overcoming this. You (the testing group) need to learn to work more quickly and more closely with the people building. They (the developers) need to learn that DONE is defined by business value, not code complete. If "you" and "they" both overcome this, then everyone becomes a "we" or "us". Testers become involved in sprint planning and review too.

I've blogged about this several times if you are curious:
Downstream testing
Tests after story closure
What to do with found bugs
How do testers fit in agile
I'm curious where the discussion thread will go over the next few days on LinkedIn. Anyone have additional thoughts?

Friday, June 5, 2009

Scrum weaknesses and Potentially Shippable...

Jack Milunsky posted another great article on the Agile Software Development blog about the areas where scrum falls short. It's an interesting read and I encourage you to consume it. I'm a strong believer that Scrum is a great starting point, but an agile transition shouldn't end with it.

One point I didn't agree with though was this:

Potential Shippable code - what does this really mean?
Most Agile thought leaders define the output of an iteration as an increment of potentially shippable code. And more often than not, it appears to be acceptable (please tell me if I perceive this incorrectly) that this potentially shippable increment of code is not actually live production code. And therein lies my beef with Scrum.

Here's the response I left:

We have to remember that when "potentially shippable" was created as a concept, we were combating waiting months or years to ship a product. In large enterprise environments, this was an even larger/longer problem. This new concept embodied the idea that we could stop at the end of any 1 month iteration and have something working (and probably of value).

Since then, the industry at large (due to the internet and companies like Google) has embraced eternal betas, imperfect software, and constantly evolving software focused on constant value. Within agile, we've seen a migration to iterations under a month. I believe it is time to rename this to "proven shippable". This is the compromise between Jack's comments and the enterprise's needs. Enterprises build large solutions to large problems and they understand the needs of the market. When a complex domain (like banking or healthcare) already has mature software solutions, you know your product won't displace the competitor's software after the 2nd or 3rd iteration. There is effort in marketing and advertising new products. It is important to have a release cycle and the power that comes with it. You can't just dribble out small changes every week and expect to catch the market's attention.

This is why, in small continuous-flow shops, I agree with Jack: done = production use. But in large shops with large customer bases, you might need to let the business and market determine when to take on the new release rather than force it on them. If your releases are compelling enough, they will migrate (VersionOne has mastered this!). In this case I don't think your done criteria can be production.

Thus, I propose -> "proven shippable". This means you have shipped it to somewhere external to your development group that is a proxy for the customer base.

Tuesday, January 20, 2009

Don't abuse the metrics...

Tim Ross blogged about his concerns on burndowns. He offers some interesting points on how they can hurt including:
  1. too much pressure early on
  2. it doesn't handle the unplanned
  3. it is unforgiving
  4. PMs and customers misread it
  5. it discourages refactoring
  6. it focuses on a predetermined time
Hmm.... my responses:
  1. really? most people feel it at the end
  2. it is a reminder of the reality of the clock
  3. neither is the market, it is changing whether you deliver or not
  4. well... yes, that's true with any data point
  5. not if you refactor constantly
  6. or you could say it focuses on meeting a completion of commitment
Okay... some of that is me playing devil's advocate. I can relate to some of the points that Tim makes. Unfortunately, he proposes a solution where teams focus on the task board and maybe even hide the burndown for only the manager's eyes.

Umm... no. If you are really interested in going agile, then you have to understand that there is a power shift from the managers to the team. Instead of having people above (who aren't doing the work) ask for ridiculous things and make useless statements, the team is now more in charge of the process. But, the idea of a self-empowered, self-managed team is that the team has to be willing to be accountable for the work. This is not just the validation and quality of that work, but also the predictability of its timely delivery.

My view on this matter is two-fold:
  • The burndown is an indicator. Bad trends in burndowns should be a flag for resetting expectations. When expectations are adjusted (be it the developers, customers, or managers), then everyone should know about it (that is why we do sprint planning and sprint review together, right?).
  • ANY METRIC can be abused. If your culture stinks, or there isn't a trusting relationship between the team and the customer or managers... then any metric can be abused to support finger pointing. This is not specific to the burndown.
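The "indicator" view above can be made concrete: a burndown flags trouble when remaining work sits above the ideal straight line from total points to zero. This is a minimal sketch with made-up numbers (the 40-point/10-day sprint is hypothetical); real tools plot this rather than compute a single flag.

```python
def behind_schedule(total_points, sprint_days, remaining_by_day, tolerance=0.0):
    """Return the first day index where remaining work exceeds the ideal
    burndown line, or None if the trend tracks the line.

    remaining_by_day[d] is the points remaining at the end of day d (0-based).
    """
    for day, remaining in enumerate(remaining_by_day):
        # Ideal line: total points burned evenly across the sprint.
        ideal = total_points * (1 - (day + 1) / sprint_days)
        if remaining > ideal + tolerance:
            return day
    return None

# Illustrative sprint: 40 points over 10 days; the team stalls on day 3.
print(behind_schedule(40, 10, [36, 32, 30, 30]))  # -> 2 (expectations need resetting)
```

The `tolerance` parameter is the cultural knob: a team that trusts the metric can afford a generous tolerance, because the chart is a conversation starter, not a weapon.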
This is why many waterfall projects fail. It takes so long to uncover problems that when they are realized by lower management, they can delay the discovery further by gaming the metrics and reports to protect their jobs as long as possible.
"My definition [of agile] is that you accept input from reality, and you respond to it" - Kent Beck
A burndown is one of many measurements of reality! Unfortunately I think the answer in this situation is that the group needs to improve the cultural issues instead of throwing out the burndown.

Thursday, November 13, 2008

Are we there yet?

Are we there yet?
Are we there yet?

Have you ever been driving and heard this from the back seat? I don't have kids that are old enough yet, but I've seen it in the movies plenty of times and am not looking forward to it.

I once found myself saying this when on a flight to Japan. 13 hours on a plane will make you go a little batty. The thing that saved me was the projection on the wall showing the plane relative to the earth with a miles and minutes countdown to landing. Every time I asked myself if we were there yet, I had the answer right in front of me.

When the jetstream shifted and the flight was delayed, this was immediately shown on the screen with the new projection.

Sound familiar? Sounds like a burn-down chart to me (minus the history).

When the plane first took off, a projection was made immediately even though we hadn't made any progress yet. How'd they do that? Sounds like they used yesterday's travels to determine velocity.

When I download software from the internet to my computer, it gives me a sense of how much is complete and how much time remains. This is constantly adjusted as more pieces and packets arrive.

All of these methods make more accurate projections as progress is made.
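All three projections (the flight map, the GPS, the download bar) share one tiny calculation: divide remaining work by the rate observed so far. A minimal sketch, with hypothetical download numbers:

```python
def eta_minutes(total_units, done_units, elapsed_minutes):
    """Project time remaining from the rate observed so far.

    velocity = work completed / time elapsed, so the projection
    automatically improves as more progress accumulates.
    """
    if done_units == 0:
        # No progress yet: seed with a prior rate, i.e. "yesterday's travels".
        raise ValueError("no observed velocity -- use a prior estimate")
    velocity = done_units / elapsed_minutes
    return (total_units - done_units) / velocity

# 300 of 900 MB downloaded in 5 minutes -> 60 MB/min -> 10 minutes left.
print(eta_minutes(900, 300, 5))  # -> 10.0
```

The zero-progress branch is exactly the plane-at-takeoff case: before any progress exists, the only honest projection comes from historical velocity.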

In some form, ever since we've had the ability to provide this type of feedback on planes, gps devices, software downloads, etc... we've been staring the burndown concept in the face. Why did it take me so long to want this in my daily job?

Monday, September 22, 2008

Improve your testing with 5 for 5...

Someone sent me this link on the Search Software Quality site about the differences between agile and waterfall with respect to quality and testing. It's not written in a style that normally lands in my reading list, but there are some interesting tidbits inside.

One great little gem deep inside the article is about building respect on the team around peers and testing. This link led to a great blog post about the "five bugs in five minutes" game that is a fabulous idea.

In case you don't want to follow the last link, here's the summary:
  • I (developer) think I'm done
  • I challenge a peer to review my work
  • If they find 5 bugs in 5 minutes... I buy them lunch

Why?
  • TDD and auto-testing is good, but it's not creative like the human brain
  • 5 minutes is quick
  • I learn from my peers (like pair programming)
  • It's cocky (I challenge the best because I believe there aren't any) and therefore fun
  • It encourages your team to have lunch together and build stronger relationships.
Check it out... I'm going to see if I can convince my team to try it.

Monday, September 15, 2008

Iteration/Sprint != Waterfall...

This discussion about code reviews has triggered a bunch of thoughts in my head. For now, I'll stick to the loudest one.

Sprint boundaries are supposed to provide transparency to the customer and the business, not be a scramble for shippability. The end of the sprint shouldn't create a flurry of check-in activity leading to a pile of new tests to find previously unknown bugs (which can't be fixed in time for review).

Instead, the daily meeting/scrum should provide a heartbeat for the team and every day or so, stories should be completed. Adopting agile should convert you from yearly releases to quarterly releases, to monthly releases, and maybe even weekly releases.

Have you ever flipped a rock in the woods and watched all the critters scramble and disappear? This is not what the day before sprint review is supposed to be like!

Instead of thinking of the sprint review as a point in time to have stuff working for demo and approval; you should be thinking of it as a time to have the customer, product owner, and team assess progress and determine if the upcoming plan, priorities, and communicated delivery dates are still realistic.

Here is another great list you can use to spot an agile team.

Monday, June 30, 2008

What to do with found bugs...

As a customer, I tend to hang out in the VersionOne Google Groups user forums. Sometimes I find myself providing agile coaching feedback unrelated to the tool. Today was one of those days.

The discussion surrounded the tool and what to do when a tester finds a bug.

Option 1: reopen the old story and tasks
Option 2: enter a new story or task to fix the bug.

I pushed the discussion towards the process issues and impact.

My summary point -
If the customer/product owner can live with the bug and it is not critical to release, then option 2 might be acceptable to save time and focus on prioritized business value.
But if delivery can't be reached without the bug being fixed, and especially if the bug was injected while working on that story... THEN THE STORY ISN'T DONE.

It's painful to re-open a story and go back to something the team believes is done and wants credit for. But this is our job. The quality criterion is deliverable and working.

If you buy a burger and it has a fly in it... you expect it to be fixed or replaced. You don't go back to the counter and get charged for a new one. Just like the restaurant doesn't get to charge again, neither do you. Your team's velocity is no different than money in a restaurant.