Friday, 27 August 2021

About Fit-for-Purpose Design

I got to thinking about Product Design the other day when I bought one of those "ab-roller" devices - the latest weapon in my seemingly never-ending battle against my ever-expanding girth.

Anyway, the shop I went to only had this one model and it was very cheap - surprisingly so. I was expecting the device to:

  • Have the wheel made of aluminium, with some ribbing for strength
  • Have the shaft made of steel, since that's where forces would be highest
  • Have a ball bearing based rolling mechanism
  • Have a kind of "tyre" for better grip
  • Be fairly wide for stability

Instead, it was a simple device, pictured below:



So, it was made of plastic and had no ball bearings or rolling mechanism beyond a steel tube through a hole in the plastic. It did at least have a "tyre" of sorts. My first impression was: "WTF is this?". But then I put it together and found that it was surprisingly stable, felt solid and rolled very well. I've been using it for a few days now without any issues.

It occurred to me that, contrary to my initial impression, this could be an example of a good, customer-centric design. Why? Well, it's cheap and does what it says on the box. If you consider most people's experience with home exercise equipment, it is this: you use it for a few days - maybe even months - and then...forget about it. I've seen many elliptical walkers and treadmills repurposed into places for hanging clothes to dry. It's for this reason that I buy cheap home exercise equipment.

So the makers of this product evidently know their market and have designed it to accomplish its purpose at the lowest possible cost.

I thought there's a lesson here for us Process folks: don't over-engineer your solutions. Simplicity is a key factor in the success of any solution - not only because simple solutions are cheaper, but also because they tend to have fewer potential points of failure. For example, if you can automate a task now with a simple RPA bot, why go through the hassle of speccing out a core system change or an integration requirement that would (a) cost more and (b) take far longer to implement? Back in the early days of my career, I was guilty of the opposite - I'd formulate fancy, ideal-state to-be processes that would invariably take months to realise.

The concept of a Minimum Viable Product (MVP) is an important component of the Agile space, and we should adopt it in the Process Optimisation field as well. When solving problems, or trying to mitigate the root causes of process performance issues, always seek the simplest solution. I've often made significant improvements to process performance metrics with actions as simple as adding a checklist to a process step, improving an input form, or putting simple poka-yoke mechanisms into a manufacturing step. Look for, and test, these types of changes before incurring the time and cost impacts of more complex solutions.

Pragmatism, not idealism, drives superior design. There's nothing wrong with "good enough"...or as we call it, fit-for-purpose.

Friday, 20 August 2021

About Making Metrics Meaningful

If I hear one more person say "If you can't measure it, you can't manage it", I may very likely whack them across the head with an angry meerkat*.

The statement is true, of course. But it's become a platitude...an annoying one at that. Walking the floor, I am often mortified - but sadly, not surprised - to learn that Operations staff don't have key metrics for the processes they are performing at their fingertips. They're basically flying blind! Even more worrying is what I see when interviewing: every time I ask a candidate about their experience with quantitative methods, the interview gets a bit awkward, because the candidate typically spouts some Six Sigma jargon but, more often than not, can't demonstrate a deep understanding of the tools available, or any actual application of them.

The data-rich environments we work in these days are a goldmine of insights for process professionals and Ops leaders, so one is tempted to ask why this would be the case. Often, Ops folks point out that they're too busy running things and putting out fires to look at fancy graphs, and certainly the con that is Six Sigma is partly to blame on the practitioner front, but we'll address this in another post. Instead, this week, I'd like to focus on how to rectify this and propose some simple Key Practices for defining and using metrics in Operations.

Practice 1: Use a mix of leading and lagging metrics.

Lagging metrics can be used to understand the behaviour of your processes and clients over time. Here, you would use time-series data - at the level of days, months, or even intra-day. The key thing is that these must contain a sufficient amount of history.

Applying trendlines, forecasts and control lines to these views offers valuable insight into how a particular metric is changing over time. Nowadays, you can also use a tool like Power BI to automatically apply simple Machine Learning algorithms to historic data. Insights from lagging metrics help you understand broad trends and patterns that can inform planning decisions.
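
As a rough illustration of what this can look like in practice, here's a minimal sketch that derives a trendline and basic Shewhart-style control limits from a lagging, time-series metric. The daily-volume data below is made up for the example; you'd substitute your own history.

```python
# A rough sketch: trendline and control limits for a lagging, time-series metric
# (here, hypothetical daily transaction volumes).
import numpy as np
import pandas as pd

# Hypothetical history: 90 days of daily volumes (load your own data in practice).
rng = np.random.default_rng(42)
dates = pd.date_range("2021-05-01", periods=90, freq="D")
volumes = pd.Series(500 + np.arange(90) * 1.5 + rng.normal(0, 25, 90), index=dates)

# Trendline: simple least-squares fit over time.
x = np.arange(len(volumes))
slope, intercept = np.polyfit(x, volumes.values, deg=1)
trend = pd.Series(intercept + slope * x, index=dates)  # you'd plot this alongside the raw series

# Control limits: mean +/- 3 standard deviations (a basic Shewhart-style chart).
mean, std = volumes.mean(), volumes.std()
ucl, lcl = mean + 3 * std, mean - 3 * std

# Flag days that breach the control limits - these are the points worth investigating.
out_of_control = volumes[(volumes > ucl) | (volumes < lcl)]
print(f"Trend: {slope:+.2f} per day; UCL={ucl:.0f}, LCL={lcl:.0f}")
print(out_of_control)
```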

Leading metrics can be used to inform your short-to-medium-term actions: they relate to what you do now to produce the targeted process performance down the line. These metrics are harder to identify, but extremely useful in managing operations. The key is to look for indicators, both inside and outside the organisation, that affect the performance of a process.

Practice 2: User Experience is Important

Ease-of-use is a widely understood concept, and it applies to metrics as well.

Firstly, metrics must be easy to access. Publish them on the company intranet so Ops Managers can go and look for them? No! Metrics must be pushed to the users who need them. Use instant messaging services, or even email, to deliver metrics to their users. In fact, the most effective practice I've come across in this regard is to publish metrics on screens in the Operations environment, where they can't help but be seen.
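
As an illustration of the "push" idea, here's a minimal sketch that posts a one-line metric summary to a chat channel via an incoming webhook. The webhook URL, metric name and target are placeholders - you'd wire in whatever messaging service and data source you actually use.

```python
# A minimal sketch of "pushing" a metric rather than waiting for users to pull it.
# The webhook URL and metric values here are hypothetical placeholders.
import requests

WEBHOOK_URL = "https://hooks.example.com/services/T000/B000/XXXX"  # placeholder

def push_metric(name: str, value: float, target: float) -> None:
    """Send a one-line metric summary to a chat channel via an incoming webhook."""
    status = "OK" if value >= target else "ATTENTION"
    message = f"[{status}] {name}: {value:.1f} (target {target:.1f})"
    resp = requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)
    resp.raise_for_status()

# Example: push today's (hypothetical) first-pass yield to the Ops channel.
push_metric("First-pass yield %", 94.2, target=97.0)
```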

Secondly, metrics must be easy to understand. This is why old-school control charts are a favourite of mine. When using a dashboard, users must be able to tell, at a glance, what's going on in a process. Use tools such as drill-downs, so that users can start with an overview and drill down to details as needed. But for critical metrics, present these directly and clearly. Also, don't go too crazy with the visualisations - some people prefer just seeing the critical numbers...basically, design for your audience.

Finally, the quality of the metrics is critical. By quality, I mean: are they fit for purpose (expected vs. delivered)? All metrics must have maximum utility. To ensure this, they need to be timely, accurate (and hence trusted), pertinent and context-specific. Never publish metrics of little-to-no usefulness, regardless of how fancy the graphs look.

Practice 3: Careful Metric Selection

What metrics to use is a function of the nature of the processes and the business itself. However, some general principles can be applied.

The key thing about metric selection is utility: metrics must be shown to users only if they are useful. Further, on default views, only show the most useful level of detail. I've made dashboards with needless drill-downs before, and it harms the user experience.

Process metrics should provide a view of both inputs and outputs. Of course, things like output quality and turnaround time are important, but metrics related to resources used to create the outputs can also be useful to both Process practitioners and Ops managers.

Stratification of metrics is important when trying to understand a problem, or identify an opportunity. Therefore, it's always good practice to define stratification for key metrics and have this at hand in drill downs, or even alternative views.
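
For what it's worth, here's a tiny sketch of stratification in practice: the same turnaround-time metric, first as a single overall average and then broken down by two hypothetical dimensions. The data and dimension names are made up for the example.

```python
# A small sketch of stratifying a metric: the same turnaround-time figure,
# broken down by a couple of hypothetical dimensions (region, product).
import pandas as pd

cases = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "South"],
    "product": ["Loans", "Cards", "Loans", "Cards", "Cards"],
    "tat_hrs": [26.0, 31.5, 22.0, 48.0, 41.0],   # turnaround time per case, in hours
})

# The overall average hides the problem...
print("Overall TAT (hrs):", cases["tat_hrs"].mean())

# ...the stratified view points to where it actually sits.
print(cases.groupby(["region", "product"])["tat_hrs"].mean())
```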

Finally, perspective is important. You ideally need to define your metrics so that you get a holistic view of the process performance. But be careful to not try to track too many metrics. As much as we would like to get a 360-degree view, we must also ensure that the metrics and associated dashboards are focused.

Practice 4: Define Specific Actions for Critical Levels per Metric

Metrics must be actionable. I'd go so far as to say that if a metric is not actionable, it is not needed. A pretty graph that nobody knows how to act on is simply a pretty picture. And we aren't in the Art business. Users of a given metric must know explicitly what action(s) to take when the metric nears or passes a defined threshold. Further, those thresholds must be clearly visible on the visualisations. Most tools today can send automatic alerts on detection of a particular event or trend in a dataset. This is very useful...but it's important to map these critical events and trends to actions to be taken in operations.
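
To make that mapping concrete, here's a minimal sketch of one way to encode threshold bands and their agreed responses. The metric, the band limits and the actions are all hypothetical and would need to come from your own operating procedures.

```python
# A rough sketch of making a metric "actionable": each threshold band maps to a
# named, pre-agreed action. Metric, thresholds and actions are illustrative only.
from typing import List, NamedTuple

class Band(NamedTuple):
    limit: float   # band applies when the metric value is at or above this limit
    action: str    # the agreed operational response

BACKLOG_BANDS = [  # evaluated from most to least severe
    Band(limit=500, action="Escalate to Ops manager; pull in overtime roster"),
    Band(limit=300, action="Reassign two processors from the low-priority queue"),
    Band(limit=0,   action="No action - monitor on next refresh"),
]

def action_for(value: float, bands: List[Band]) -> str:
    """Return the agreed action for the band the current metric value falls into."""
    for band in bands:
        if value >= band.limit:
            return band.action
    return "No action defined"

print(action_for(340, BACKLOG_BANDS))  # -> "Reassign two processors from the low-priority queue"
```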


There is no need for complex, intimidating metrics and statistics in most environments - the key for both Ops owners and Process practitioners is this: deliver actionable, pertinent, accurate, timely and context-specific metrics, in an easy-to-consume form. Everything else follows from this.

*Not a real meerkat of course, in case any PETA nuts are reading this.
