A demo is an opportunity to showcase an increment of your system to stakeholders in return for feedback. Many teams and organizations run demos, but few reap their real benefit. It’s all too easy to misuse a demo and get negative outcomes, yet not too hard to up your game and get the most out of them.
Back to the Purpose
A demo exists to provide feedback. Feedback uncovers the unknowns your effort naturally contains. The more complex the effort, the more unknowns it has. Development is a game of rapid knowledge acquisition more than anything else, so the more effective you are at acquiring knowledge, the more effective you are at development. There are two important dimensions to uncovering unknowns: speed and depth.
The speed dimension reflects how quickly we can acquire knowledge about our unknowns. It’s better to uncover important knowledge earlier, because doing so gives us more flexibility and minimizes waste. Depth is a little trickier, so let’s explore the notion.
A team prepares a demo of their latest system increment for the stakeholder who defined the requirements in the first place. Stakeholders might be called product owners, product managers, business venture coordinators, or another sophisticated name. These people define the requirements but will not be using the system defined by those requirements. You see the disconnect already, don’t you? This kind of demo gives us surrogate feedback.
Surrogate feedback is extremely dangerous. It’s often worse than no feedback at all, as it can steer the team in a completely wrong direction. When the blind lead the blind, all you are doing is validating your ability to create outputs that satisfy the requirements; you have no idea whether those outputs can produce favorable outcomes. The only case in which this type of feedback is useful is when it’s coupled with outcome feedback.
The most critical type of unknowns relates to achieving outcomes. If only our demo could help us reveal that…
The Demo That Better Approximates Outcomes
Let’s say we are working on new features for the corporate accounting system. As our product manager, Josh defines the requirements, so we are probably in trouble if our only feedback mechanism is the demo to Josh. We won’t realize how much until the new features get exposed to a realistic context, and by then we might have already produced a staggering amount of waste. Luckily for us, Josh understands the problem, and is seeking ways to make his demos more effective for his teams.
His first idea is to invite a real user to the demo. Great idea! Larissa is an accountant, and after playing around with the new features, appears to like them, providing some good insight that Josh would have never figured out on his own. Such is the power of a real user!
Two months later, the software is released to the organization’s accountants, and soon a hailstorm of complaints reveals that the new functionality has serious problems. And these are not defect reports. The main complaint is “wrong functionality”.
How could this possibly happen?
Simply put, now that Larissa and her colleagues have to prepare quarterly reports, the software is finally exposed to a realistic context and real user objectives.
So something is still missing from our demo: it doesn’t represent future reality well enough. Even though it was demonstrated to the right people, the actual user tasks were missing from the demo context. But now we have an idea of how to make a more effective demo:
- Ask the end user to fulfill an actual task, end-to-end, during the demo. This might require some preparation (such as proper datasets in your staging databases, giving users a heads-up so they can gather what they need to fulfill the task ahead of time, and so on).
- Have the end user carefully evaluate the state of the system after fulfilling a given task. Often the system you develop creates inputs to other business processes or systems further along in the chain. If you use older data with outputs that are already known, compare these with the demo outputs.
- Given that your system increment is likely to be incomplete, mock-up the rest of the data in a way that makes sense to the user (consider validating it with them before or at the beginning of the demo).
- Help the user fully immerse in the execution context. Set the stage by specifying the context. For instance, “Today is the day for filing quarterly reports. Data from over 20 accounts payable needs to be validated and consolidated into a report…”
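The data-preparation steps above can be pictured as a tiny seeding script. This is a hypothetical sketch, not the article’s own tooling: it seeds a staging database with a historical dataset whose correct output is already known, so the demo result can be checked against it afterwards (the table, names, and figures are all invented for illustration).

```python
# Hypothetical sketch: seed staging with a known historical dataset,
# then compare the demo's output with the already-known correct result.
import sqlite3


def seed_staging(conn, records):
    """Load a captured historical dataset into the staging database."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS accounts_payable (account TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO accounts_payable VALUES (?, ?)", records)


def consolidate_report(conn):
    """Stand-in for the feature under demo: a quarterly consolidation."""
    row = conn.execute("SELECT SUM(amount) FROM accounts_payable").fetchone()
    return row[0]


conn = sqlite3.connect(":memory:")
seed_staging(conn, [("ACME", 1200.0), ("Globex", 800.0)])

demo_output = consolidate_report(conn)
known_output = 2000.0  # the result last quarter's real run produced
assert demo_output == known_output
```

Seeding with data whose downstream outputs are already known is what makes the comparison in the second bullet possible: any divergence between the demo output and the historical result points at the new functionality, not at the data.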
A lot can be learned by making the demo as close to reality as possible. Most importantly, having a user objectively fulfill their task, instead of asking them for a subjective opinion, is a much better approximation of the immediate outcomes the functionality is supposed to enable.
But What If We Can’t?
There’s a big difference between “can’t” and “don’t want to” (or “don’t know how”). It is often difficult to make your system behave the same way in staging as it ultimately will in production when you are dealing with highly coupled architectures, legacy systems, sophisticated data transmission protocols, and so on. It’s a lot easier to assume that you “can’t” do it and keep developing what you believe is right, but that isn’t a great recipe for successful product development. It also points to another interesting fact: such teams have never seriously wrapped their minds around proper validation in an incomplete ecosystem. Problems with data feeds, or communications with other systems, can be resolved by replaying static datasets captured from the real feeds, generating data in a compatible protocol setting, using data-source flags that feed the data in directly in a more consumable form, and many other techniques. This type of challenge is normal for teams that deliver winning technology solutions. In most cases, “can’t” is simply not true!
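One of the techniques mentioned above, a data-source flag that swaps a live feed for a captured static dataset, can be sketched in a few lines. This is a minimal illustration under assumed names (the environment variable, file path, and functions are all hypothetical), not a prescribed implementation:

```python
# Sketch: a data-source flag lets staging replay a captured dataset
# while the rest of the system behaves exactly as it would in production.
import json
import os


def load_captured_feed(path):
    """Replay a dataset captured earlier from the real feed."""
    with open(path) as f:
        return json.load(f)


def fetch_live_feed():
    """Placeholder for the real integration, unreachable from staging."""
    raise NotImplementedError("live feed not available in staging")


def get_accounts_payable():
    # The flag decides where the data comes from; every consumer of
    # get_accounts_payable() is unaware of the difference.
    if os.environ.get("DATA_SOURCE", "live") == "captured":
        return load_captured_feed("fixtures/accounts_payable.json")
    return fetch_live_feed()
```

The point of the flag is isolation: only the lowest data-access layer knows it is running against a replay, so the demo exercises the same code paths the production release will.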
Meaningful Increments Are the Foundation
It is hard to create a meaningful demo if your increment has no meaningful value. This is a deep topic itself, of course, and deserves a separate, lengthy article. But basically, if we want to approximate outcomes effectively—and immediate outcomes are typically user behavior—we must seek ways to slice the entire initiative so that early slices can support some meaningful user behavior (possibly with some additional data scaffolding). It’s too late to start thinking about this when your demo is approaching: you need to think about it when you plan your increment. Proper slicing is the foundation for productive feedback!
A more effective demo might help you to better approximate outcomes and reveal some important unknowns that would otherwise remain undiscovered. This is key to tactical decision-making from the product management standpoint. But demos have natural limitations. They’re not very good at identifying delayed effects, and they don’t cover a diverse variety of scenarios that emerge in real usage situations. We must also keep in mind that our end user is not the only “actor” on the overall enterprise landscape. In Larissa’s case, her bosses expect a certain type of reporting and might intervene in the process in some (unexpected) way. Manufacturing engineers provide data which Larissa uses in her reports and they might make mistakes or deliberately alter formats for their own convenience (and without telling anyone else). Larissa’s customers have their own intricacies and uncertainties that ultimately impact the accounts payable which Larissa is creating her report for.
It’s not feasible to include any of this as part of the demo. However good the demo is, it can never be good enough. Further exposure is needed in the form of more frequent releases, telemetry, usage feedback from other parties, and so on.
Complex effort requires multiple probing methods!