- Continuous improvement is part of the Agile Manifesto. Experimentation is a tool for continuous improvement.
- More often than not, this experimentation process lacks formality, leaving teams unsure whether an experiment succeeded or failed.
- The author proposes applying the well-known scientific method in agile teams.
- The scientific method is illustrated with an example of a common agile-team pain.
- A template is provided to make adopting the scientific method easier.
Continuously and iteratively improving a team or product is at the core of the agile methodology: either by delivering features that serve the needs of the customers, or by changing processes that enable faster and better development of a codebase.
More often than not, we lack the information needed to justify investing time in a change, not knowing whether it will be an improvement or a deterioration. On other occasions, we want to improve something that is already good, or where the pains are not clear. For such situations, we can use experimentation.
The Scientific Method
There is plenty written about the scientific method, so I will not go into detail, but the main idea is: based on an observation of a phenomenon or event, a hypothesis is formulated. From this hypothesis, predictions about its consequences are made. These predictions are then tested with an experiment. Finally, the data gathered in the experiment is analyzed to see whether the predictions (and hence the hypothesis) hold.
From my perspective, one important aspect is that finding a hypothesis to be “right” or “wrong” is a step forward either way: it is just as valuable to learn how something works as to learn how it does not.
But what does this have to do with Agile and software engineering?
Let’s approach this with a story: a software engineer, Teresa, wants to improve a process. Every day, the host of the next daily stand-up needs to be chosen, and Teresa notices that this leads to a loss of time looking around for the next host. So, after having gathered data for a sprint, she raises this pain in a retro, and the team comes up with an action item: “For the duration of the next sprint, we keep a host for the whole week instead of rotating every day. This should make us more efficient, reducing the 1-2 minutes we need every day to find the next host. Then, on the following retro, we evaluate whether this was a good solution or not.”
So, what do we have here?
**Observation:** Loss of 1-2 minutes every day looking around for the next host.

**Hypothesis and prediction:** If we keep the same host for the daily stand-up, then the team will stop spending 1-2 minutes per daily 4 times per week. The fifth time per week, the next host needs to be selected.

**Experiment (including variables to measure):** For a week, the host is selected just once on Monday morning, and they are responsible for moderating the daily stand-up. The following variable is measured: the time lost looking for a host.
So, the sprint passes by, and the retro comes. It is time to evaluate the experiment and validate the hypothesis.
**Analysis of the data:** 2 minutes on Monday; 0 minutes the rest of the week.

**Conclusion:** The hypothesis is correct. The measured variable behaved as predicted: looking for a host was reduced from 10 minutes per week before the experiment to 2 minutes per week during it.
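As a quick sanity check of the arithmetic above, a few lines of Python (assuming five stand-ups per week and taking the 2-minute upper bound of the observed 1-2 minutes):

```python
# Weekly time spent selecting a stand-up host, before and during the experiment.
DAILIES_PER_WEEK = 5
MINUTES_PER_SELECTION = 2  # upper bound of the observed 1-2 minutes

before = DAILIES_PER_WEEK * MINUTES_PER_SELECTION  # a selection before every daily
during = 1 * MINUTES_PER_SELECTION                 # a single selection on Monday

print(f"before: {before} min/week, during: {during} min/week, saved: {before - during}")
```

So the experiment saves roughly 8 minutes of team time per week, matching the 10-to-2 reduction measured above.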
This simple exercise of formalizing and documenting an arguably trivial use case moved the agile team “from screwing around to doing science”.
Of course, the above example is a rather trivial scenario. This formalization and documentation effort is an overhead, so, as usual, it is about trade-offs. My suggested rule of thumb: if the experiment will take a considerable amount of resources (where “considerable” is left to the criteria of the reader), if its impact exceeds the short term, or if its consequences are hard to revert, make the effort and formalize it. This will also help other people trying to gain the same knowledge. After all, that is how science is built.
The template follows. To use it, remove the text in parentheses.
**Observation:** (Include a clear goal or pain, or even a question.)

**Hypothesis and prediction:** (Include what you expect to happen after taking an action. Make it specific, testable, and measurable. This could also be seen as the goal of your experiment.)

**Experiment (including variables to measure):** (Define how you intend to modify the variables based on the hypothesis and prediction.)

**Analysis of the data:** (Gather the measurements of the variables of the experiment.)

**Conclusion:** (Compare the measurements gathered in the experiment with the predictions made by the hypothesis. Prove the hypothesis true or false.)
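For teams that track action items in tooling or scripts, the template can also be captured as a small data structure. This is just an illustrative sketch; the `Experiment` class and its field names are my own, not part of any framework:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """One record following the scientific-method template above."""
    observation: str      # a clear goal, pain, or question
    hypothesis: str       # specific, testable, measurable prediction
    experiment: str       # how the variables will be modified and measured
    analysis: str = ""    # filled in after the experiment has run
    conclusion: str = ""  # hypothesis proven true or false

# The host-rotation example from earlier, captured as a record:
host_rotation = Experiment(
    observation="Loss of 1-2 minutes every day looking for the next host.",
    hypothesis="Keeping one host per week removes the daily search 4 times a week.",
    experiment="Select the host once on Monday; measure time lost looking for a host.",
)

# After the sprint, the results are filled in:
host_rotation.analysis = "2 minutes on Monday, 0 minutes the rest of the week."
host_rotation.conclusion = "True: reduced from 10 to 2 minutes per week."
```

Writing the experiment down in a structured record, whatever the medium, is the whole point: it forces the hypothesis and measurements to be explicit before the retro.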
- Disclaimer: there is nothing novel about this template; it is included here only for convenience.
Using a common scenario in an Agile team, the application of the scientific method was discussed:
1) An observation was made.
2) A hypothesis was formulated.
3) An experiment was designed and carried out.
4) Measurements were taken.
5) The hypothesis was tested and considered to be true.
Also, a template with a short description of each step was provided for convenience.
And, hey! “remember kids, the only difference between screwing around and science is writing it down” 
Send me a tweet with your feedback, I am eager to hear from you!
- Principles behind the Agile Manifesto, https://agilemanifesto.org/principles.html
- The origin of “remember kids, the only difference between screwing around and science is writing it down”, https://www.reddit.com/r/mythbusters/comments/3wgqgv/the_origin_of_the_remember_kids_the_only/