#NoEstimates: Between Numbers and Reality
One of the challenges today’s agile teams face is estimating the effort and time a task will take. Traditional methods often demand significant effort and have quickly been replaced by complex algorithms or by simpler techniques like Planning Poker.
Over time, even these methods proved inadequate, as they too required significant team involvement. The agile environment became unforgiving, and questions started coming up more often: Is estimation really necessary in this form? Is this an area where we are simply not agile enough?
By the way, there’s nothing more beautiful than questioning the status quo, is there? 🙂
But let’s get back to the topic… Opinions became divided: some stuck with the old methods, while others questioned them more and more. On top of that, the dynamic, constantly changing environment made estimation harder still, reinforcing the sense that it cost a lot and delivered too little in return.
How do I remember it? Statements like “we’re just guessing” or “taking estimates out of thin air” started to appear more frequently, often accompanied by a sense of… hmm… wasted time. Not a great sign, is it? But why was this happening?
Features rarely repeated themselves, and teams became more efficient but also “overworked” to the limit. The constant race against time meant that experts focused solely on their own area, and few people from outside it broadened their knowledge.
At this point, I won’t delve into whether this is good or bad. Let’s just accept for now that it sometimes happens – we have a snapshot of the situation 🙂 Now imagine Planning Poker… (A small digression, what is Planning Poker: each member of the development team votes on the complexity of a given story using Fibonacci numbers.) Let’s say we have a story about creating a new scoring model. The expert knows what it entails and that it won’t be trivial, so they vote an 8. The other people have no idea about this specific area and vote completely different numbers: 2, 2, 2, 5, 13.
How do you pick a sensible estimate from a vote like that? Does it even make sense? In our specific case: it didn’t. It often led to long discussions and debates about how labour-intensive something really was. Over time, team members gave up, explaining that they found it hard to assess rationally.
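To make the problem concrete, here is a minimal sketch (not any real tooling of ours) of such a voting round. The `summarize_votes` helper and its “consensus” rule of thumb are purely illustrative assumptions, but they show why a spread like this is hard to turn into one number.

```python
# A hypothetical Planning Poker round, illustrating how widely spread votes
# resist being collapsed into a single estimate.
FIBONACCI_DECK = [1, 2, 3, 5, 8, 13, 21]


def summarize_votes(votes: list[int]) -> dict:
    """Summarize one voting round; the consensus rule is an arbitrary heuristic."""
    return {
        "min": min(votes),
        "max": max(votes),
        "spread": max(votes) - min(votes),
        "consensus": max(votes) <= 2 * min(votes),
    }


# The round from the story above: one expert votes 8, the rest are guessing.
votes = [2, 2, 2, 5, 8, 13]
print(summarize_votes(votes))
# {'min': 2, 'max': 13, 'spread': 11, 'consensus': False}
```

No consensus, so the round ends the way ours usually did: in a long discussion.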
Is there another way?
And right here… at this very point, standing almost at a crossroads (I even imagined it 😉), we heard about #NoEstimates! Fortunately, we didn’t need much persuasion; we quickly concluded: Let’s try it, we can always go back to the old but proven practices later.
Let me spoil it here: We didn’t go back, and we still use #NoEstimates… and we’re not complaining 🙂
What does it involve? This simple technique proposes a radical departure from conventional estimation practices: instead of trying to predict the future, it’s better to focus on adaptation, continuous value delivery, and decision-making based on current information.
How does it work in practice?
It’s extremely simple…
Instead of estimating, we break the problem into equally sized stories. During refinement, we ask ourselves: Is this story as big as the previous one?
If not… and it’s bigger, we try to break it down as much as possible.
And if it’s smaller, we try to combine it with another small one.
And so on, cyclically…
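For illustration only, here is a rough sketch of that refinement loop, assuming the base unit of about 20 hours described in the next paragraph. The `Story` type and the split/merge logic are hypothetical stand-ins: in reality this is a team conversation during refinement, not an algorithm.

```python
# A sketch of the "compare to the base unit, then split or merge" cycle.
from dataclasses import dataclass

BASE_UNIT_HOURS = 20.0  # ~2-2.5 working days per story


@dataclass
class Story:
    title: str
    rough_hours: float  # gut feel, used only to compare against the base unit


def split(story: Story) -> list[Story]:
    """Split an oversized story into roughly base-unit-sized parts."""
    parts = max(2, round(story.rough_hours / BASE_UNIT_HOURS))
    return [
        Story(f"{story.title} (part {i + 1}/{parts})", story.rough_hours / parts)
        for i in range(parts)
    ]


def refine(backlog: list[Story]) -> list[Story]:
    """One refinement pass: split big stories, merge undersized ones."""
    refined: list[Story] = []
    pending_small: list[Story] = []
    for story in backlog:
        if story.rough_hours > BASE_UNIT_HOURS:
            refined.extend(split(story))
        elif story.rough_hours < BASE_UNIT_HOURS / 2:
            pending_small.append(story)  # candidate for merging
            if sum(s.rough_hours for s in pending_small) >= BASE_UNIT_HOURS:
                refined.append(Story(
                    " + ".join(s.title for s in pending_small),
                    sum(s.rough_hours for s in pending_small),
                ))
                pending_small = []
        else:
            refined.append(story)
    refined.extend(pending_small)  # leftovers stay as-is until the next pass
    return refined
```

The point is not the code but the comparison: every story is held up against the same yardstick instead of being assigned its own number.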
We adopted a base unit of about 2-2.5 working days per story (roughly 20 hours). Then, every sprint, we gather statistics by examining the actual sizes of the completed stories. If the analysis shows that a story turned out larger than the assumed size, we ask ourselves why.
Could it have been divided differently?
Of course, it’s not always possible – you’ll say – sometimes a story simply can’t be divided. True. There are exceptions, but we’re learning to react during the sprint and divide it anyway. And we don’t stop there… stories are under constant analysis, and it often turned out that a split was possible after all, even though we had initially left the story in one piece.
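A minimal sketch of the kind of per-sprint check this implies, assuming actual effort per completed story is available from the team’s tracking. The function name, data, and threshold are illustrative, not our real reports.

```python
# Per-sprint review: compare actual story sizes against the assumed base unit
# and flag the ones worth discussing at the retrospective.
from statistics import mean

BASE_UNIT_HOURS = 20.0


def sprint_review(actual_hours: dict[str, float]) -> None:
    avg = mean(actual_hours.values())
    print(f"average story size this sprint: {avg:.1f}h (target ~{BASE_UNIT_HOURS:.0f}h)")
    for title, hours in actual_hours.items():
        if hours > BASE_UNIT_HOURS:
            # oversized story: ask "could it have been divided differently?"
            print(f"  review: '{title}' took {hours:.1f}h")


sprint_review({
    "new scoring model: data contract": 18.0,
    "new scoring model: calculation engine": 31.5,  # flagged for discussion
    "new scoring model: reporting": 16.0,
})
```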
How does it look from the PO’s perspective?
From my perspective as a Product Owner, I can quickly determine the project’s progress – if I have 10 out of 20 stories completed, we’re halfway there. You might say… Okay, but what if we divided the stories poorly at the beginning?
There’s always a risk! It turns out that even with large, roughly divided projects, some stories dropped out over time and new ones came in as things were clarified, yet surprisingly the totals matched with high accuracy. And that’s great!
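In code, that progress report is nothing more than counting, which is exactly the point; a trivial sketch using the numbers from the example above:

```python
# Equally sized stories turn progress tracking into simple counting.
completed, total = 10, 20
print(f"progress: {completed}/{total} stories = {completed / total:.0%}")
# progress: 10/20 stories = 50%
```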
#NoEstimates – is it good? Is it the only right solution? I don’t think so. There are many ways to reach the same goal, and a single path can branch out in different directions. It all depends on what is really needed, what project we are working on, and what group of people is collaborating.
Is the #NoEstimates approach an antidote to the challenges of traditional estimation? Or could it be just another passing trend that will disappear as quickly as it appeared? I don’t know… but I know that in our case – in our specific case – it worked. Will it work for you? Maybe… try it and go back to the old technique if the attempt fails.
What’s important to remember is not to cling to a single way of working as if it were the only right one. If it doesn’t work, change it, test something else, and if necessary step back one step or as many as needed.
Remember, only a cow doesn’t change its mind and keeps moving forward 🙂