I’ve been measured as a 3.6 out of a possible 5 “performance” points. I’ve been measured on my sales volume per hour. I’ve even had my percentage of overtime hours converted into a percentile representing “efficiency”. From salespeople to service workers to manual laborers, it’s nearly impossible to meet a worker who hasn’t been exposed to metrics as a form of management, motivation, or appraisal.
Deming, Drucker, and a host of other thinkers spoke on the utility of metrics and, over time, business leaders have responded. An obsession with metrics is everywhere, and there is no question that measuring valuable business processes can help improve the outcomes necessary for success. Enter the pursuit of (demand for?) “Agile metrics”.
In the spirit of meaningful metrics that we can use to amplify Agile in our organizations, it may be useful to examine an accepted meaning of process and consider it in the realm of knowledge work, such as software development.
Process
noun pro·cess \ˈprä-ˌses, ˈprō-, -səs\
A series of actions that produce something or that lead to a particular result.
As quality engineering and manufacturing processes became disciplined in the early 1980s, a surface-level knowledge of statistical measurement entered common application. With “best practice” a necessary element of repeatable work, the power of metrics became something Western management could no longer ho-hum over. When the industry of software development began its rapid ascent shortly thereafter, the focus on statistical practices, rather than the context for using them, resulted in a misfire.
In this period, we hadn’t yet realized (or discovered, if you prefer) adaptive systems for complex software work. Therefore, and unsurprisingly, the architecture of a phase-gated SDLC maps quite intentionally to an assembly line, where process metrics have tremendous utility. In a repeatable process, measurement of activity helps us manage behavior within the process to our competitive advantage. In other words, if “coding” is a process, emphasizing the output of code seems useful. If “testing” is a process, it’s not ridiculous to think a measurement of output (say, “bugs found”) might be helpful. Thus, if we look at the act of building software (or organizational transformation, yikes) as a series of actions that lead to a particular result, it’s easy to see how people have come to experience discomfort with measurement.
Ask software developers whether they feel an intuitive connection to the idea that creating software fits into the container of process. Little, if any, of the work is repeatable, let alone predictable. The danger of measuring it as if it were repeatable is compounded significantly as the number of people affected grows, as is the case with an organizational transformation. If we cannot rely on measurements from within the process, it’s understandable that some (many?) may feel on the edge of chaos… and an organization in a state of chaos is potentially catastrophic.
Chaos does not need to be a fearful place, though, if we recognize the nature of the work systems affected. After all, the context of the system was Deming’s true message, and one that modern thinkers are reiterating. Contextual models, such as the popular Cynefin framework, help us recognize that software development likely exists within the Complex domain, perhaps even touching the boundary of Chaotic. In this space, we “probe-sense-respond” and emphasize the discovery of emergent practices and processes (note the connection to Agile principles). And in this emergent space, traditional process-centric metrics, which align with the “sense-analyze-respond” behavior of the Complicated domain, are out of place.
Thus, for software development, Agile transformation, team development, and all complex knowledge work, we begin to meet a place of success when metrics shift from management of behavior to insight into behavior (“evoke conversation”) instead.
In almost all cases, encouraging conversation and insight into behavior is assisted by measuring “up”, away from the interacting “parts”, to the outcome of the collective whole. The goal is simple: turn the attention of people to their impact on results, not solely their individual activity. I will not pretend this is profound or intellectual; this is merely systems thinking restated.
For example, in software development, and regardless of the strategy (phased, iterative, adaptive, etc.), a metric describing the time it takes for “customers” to receive real value is essential; this is typically called lead time. Knowing that a complex relationship exists among the parts, we can see whether the system is improving, or not, in adapting toward the outcome that truly matters.
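As a minimal sketch (not a prescription), lead time can be derived from just two timestamps per work item: when the request entered the system and when value reached the customer. The field names and dates below are assumptions for illustration:

```python
from datetime import datetime
from statistics import median

# Hypothetical work items with two timestamps each: when the request
# entered the system and when value reached the customer.
work_items = [
    {"requested": datetime(2016, 3, 1), "delivered": datetime(2016, 3, 18)},
    {"requested": datetime(2016, 3, 4), "delivered": datetime(2016, 4, 2)},
    {"requested": datetime(2016, 3, 10), "delivered": datetime(2016, 3, 29)},
]

# Lead time per item: elapsed days from request to delivered value.
lead_times = [(i["delivered"] - i["requested"]).days for i in work_items]

# The trend of this distribution over time, not any single number,
# is what invites conversation about whether the system is improving.
print("lead times (days):", lead_times)
print("median lead time (days):", median(lead_times))
```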
As another example, rather than measuring the processes associated with quality, recognize the system’s interdependence and measure “up”: examine the ratio of time spent creating value (needs, wants, opportunities) to time spent fixing software defects and errors. Such a measurement doesn’t emphasize any one behavior; rather, it enables a conversation about which behaviors contribute to the current and desired state.
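One way to sketch that ratio, assuming the team tags its time as value-creating work versus defect fixing (the tags and hours below are invented for illustration):

```python
# Hypothetical time log: hours tagged by the kind of work performed.
# The tags are assumptions; a real team would agree on its own categories.
time_log = [
    ("value", 34.0),       # creating value: needs, wants, opportunities
    ("defect_fix", 12.5),  # fixing defects and errors
    ("value", 21.0),
    ("defect_fix", 8.0),
]

value_hours = sum(h for tag, h in time_log if tag == "value")
fix_hours = sum(h for tag, h in time_log if tag == "defect_fix")

# The ratio describes the system as a whole, not any one person,
# team, or process step.
ratio = value_hours / fix_hours if fix_hours else float("inf")
print(f"value-to-fix ratio: {ratio:.1f} : 1")
```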
With respect to Agile, or any continuous improvement activity, a state of “comfortable conflict” should be present. This indicates that people are exposing problems, identifying dysfunction, questioning, inspecting, and seeking to adapt. In such an environment, where the desire to make forward progress is constant, it makes sense to keep metrics and measurements lightweight: “just barely good enough”. Avoid falling in love with a set of measurements (or building governance around them), as they may need to change (or be discarded) to encourage new journeys over new barriers.
One pathway is to experiment with measurement that is not cemented in absolute numbers, e.g., “hours of…” or “number of…”. In measuring “up”, I’ve found success in creating categorical themes representing the behaviors and outcomes desired. Then, through methods like group/team retrospectives, surveys, and other collaborative assessments, pool the information to create relative comparisons. This principle scales particularly well, too, and is easy to expand as the need for alignment grows (take a look at this very principle on a large scale).
For example, we may wish to amplify leadership, self-organization, and general Agile practices within our teams. By creating workshops designed to teach and assess these themes, we can use any number of visualizations to plot where a team, a group of teams, a program, or an organization is weakest, and use this “metric” to focus our attention for a period of time. This is simply visualization of data rather than a statistical metric… yet its power to elicit improvement in results may be amplified.
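A sketch of what pooling and visualizing such themes might look like, assuming each team scores itself from 1 to 5 per theme during a workshop (the themes, teams, and scores below are invented for illustration):

```python
from statistics import mean

# Hypothetical workshop self-assessments: each team scores each theme
# from 1 (weak) to 5 (strong). All names and numbers are illustrative.
assessments = {
    "leadership":        {"team_a": 4, "team_b": 2, "team_c": 3},
    "self-organization": {"team_a": 3, "team_b": 2, "team_c": 2},
    "agile practices":   {"team_a": 4, "team_b": 4, "team_c": 3},
}

# Pool the scores per theme and show a relative comparison,
# weakest theme first, as a plain-text bar per theme.
pooled = {theme: mean(scores.values()) for theme, scores in assessments.items()}
for theme, score in sorted(pooled.items(), key=lambda kv: kv[1]):
    print(f"{theme:>18} | {'#' * round(score * 4)} {score:.1f}")
```

The weakest theme floats to the top, giving the group something concrete to talk about; the absolute numbers matter far less than the relative comparison they enable.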
In conclusion and with emphasis (also known as tl;dr):
- Measurements of activity for a process where a repeatable “best way” is known may help improve output. At worst, such metrics still provide useful insight.
- For work that fits in the realm of “knowledge work” or other unpredictable activity (the Complex domain of the Cynefin framework, for example), measurement of activity can inhibit improvement. At worst, such metrics cause results to deteriorate without visible insight into the systemic cause.
- Instead, we begin to meet a place of success when metrics shift from management of behavior to insight into behavior (“evoke conversation”).
- Use systems thinking to “measure up”: turn the attention of people to their impact on results, not solely their individual activity.
- In complex work, allow yourself to abandon the assumption that metrics must be statistical. Often, visualization of data helps evoke change across the system.