How to Make the Most of Your Demo

A demo is an opportunity to showcase an increment of your system to stakeholders in return for feedback. Many teams and organizations use demos, but few get to enjoy their real benefit. It’s all too easy to misuse a demo and get negative outcomes, but not too hard to up your game and get the most out of them. 

Back to the Purpose

A demo exists to provide feedback. Feedback uncovers the unknowns your effort naturally contains. The more complex the effort, the more unknowns it has. Development is a game of rapid knowledge acquisition more than anything else, so the more effective you are at acquiring knowledge, the more effective you are at development. There are two important dimensions to uncovering unknowns: speed and depth. 

The speed component measures our ability to acquire knowledge about our unknowns. It’s better to uncover important knowledge earlier, because it gives us more flexibility, and minimizes waste. Depth is a little trickier, so let’s explore the notion. 

A team prepares a demo of their latest system increment for the stakeholder who defined the requirements in the first place. Stakeholders might be called product owners, product managers, business venture coordinators, or another sophisticated name. These people define the requirements but will not be using the system defined by those requirements. You see the disconnect already, don’t you? This kind of demo gives us surrogate feedback.

Surrogate feedback is extremely dangerous. It’s often worse than no feedback at all, as it can guide the team in a completely wrong direction. If the blind lead the blind, all you are doing is validating your ability to create outputs that satisfy the requirements, but you have no idea whether those outputs can produce favorable outcomes. The only case when this type of feedback is useful is when it’s coupled with outcome feedback.

The most critical type of unknowns relates to achieving outcomes. If only our demo could help us reveal that… 

The Demo That Better Approximates Outcomes 

Let’s say we are working on new features for the corporate accounting system. As our product manager, Josh defines the requirements, so we are probably in trouble if our only feedback mechanism is the demo to Josh. We won’t realize how much until the new features get exposed to a realistic context, and by then we might have already produced a staggering amount of waste. Luckily for us, Josh understands the problem, and is seeking ways to make his demos more effective for his teams. 

His first idea is to invite a real user to the demo. Great idea! Larissa is an accountant, and after playing around with the new features, she appears to like them, providing some good insight that Josh would never have figured out on his own. Such is the power of a real user!

Two months later, the software is released to the organization’s accountants, and soon a hailstorm of complaints reveals that the new functionality has serious problems. And these are not defect reports. The main complaint is “wrong functionality”. 

How could this possibly happen?  

Simply put, now that Larissa and her colleagues have to prepare quarterly reports, the software is finally exposed to a realistic context and real user objectives.

So something is still missing from our demo: it doesn’t represent future reality well enough. Even though it is demonstrated to the right people, the actual user tasks are missing from the demo context. But now we have an idea about how to make a more effective demo:

  • Ask the end user to fulfill an actual task, end-to-end, during the demo. This might require some preparation (such as proper datasets in your staging databases, giving users a heads-up so they can gather what they need to fulfill the task ahead of time, and so on). 
  • Have the end user carefully evaluate the state of the system after fulfilling a given task. Often the system you develop creates inputs to other business processes or systems further along in the chain. If you use older data with outputs that are already known, compare these with the demo outputs.
  • Given that your system increment is likely to be incomplete, mock up the rest of the data in a way that makes sense to the user (consider validating it with them before or at the beginning of the demo).
  • Help the user fully immerse in the execution context. Set the stage by specifying the context. For instance, “Today is the day for filing quarterly reports. Data from over 20 accounts payable needs to be validated and consolidated into a report…” 

A lot can be learned by making the demo as close to reality as possible. Most importantly, having a user objectively fulfill their task, instead of asking them for a subjective opinion, is a much better approximation of the immediate outcomes the functionality is supposed to enable.

But What If We Can’t?

There’s a big difference between “can’t” and “don’t want to” (or “don’t know how”). It’s often difficult to have your system behave the same way in staging as it ultimately will in production, when having to deal with highly coupled architectures, legacy systems, sophisticated data transmission protocols, and so on. It’s a lot easier to assume that you “can’t” do it and keep developing what you believe is right, but that isn’t a great recipe for successful product development. It also highlights another interesting fact: the teams have never seriously wrapped their minds around proper validation in an incomplete ecosystem. Problems with data feeds, or communications with other systems, can be resolved by using captured static datasets from feeds, generating data in a compatible protocol setting, using data source flags that feed the data directly in a more consumable manner, and many other possibilities. This type of challenge is normal for teams that deliver winning technology solutions. In most cases, “can’t” is simply not true!

Meaningful Increments Are the Foundation

It is hard to create a meaningful demo if your increment has no meaningful value. This is a deep topic itself, of course, and deserves a separate, lengthy article. But basically, if we want to approximate outcomes effectively—and immediate outcomes are typically user behavior—we must seek ways to slice the entire initiative so that early slices can support some meaningful user behavior (possibly with some additional data scaffolding). It’s too late to start thinking about this when your demo is approaching: you need to think about it when you plan your increment. Proper slicing is the foundation for productive feedback!    

Transcending Demos

A more effective demo might help you to better approximate outcomes and reveal some important unknowns that would otherwise remain undiscovered. This is key to tactical decision-making from the product management standpoint. But demos have natural limitations. They’re not very good at identifying delayed effects, and they don’t cover a diverse variety of scenarios that emerge in real usage situations. We must also keep in mind that our end user is not the only “actor” on the overall enterprise landscape. In Larissa’s case, her bosses expect a certain type of reporting and might intervene in the process in some (unexpected) way. Manufacturing engineers provide data which Larissa uses in her reports and they might make mistakes or deliberately alter formats for their own convenience (and without telling anyone else). Larissa’s customers have their own intricacies and uncertainties that ultimately impact the accounts payable which Larissa is creating her report for. 

It’s not feasible to include any of this as part of the demo. However good the demo is, it can never be good enough. Further exposure is needed in the form of more frequent releases, telemetry, usage feedback from other parties, and so on.

Complex effort requires multiple probing methods!

The Wizards of Short-Term Effects

Meet Bob. Bob is an organizational change agent. Bob has worked as a consultant with quite a few companies over the last couple of years, and his confidence is obvious to new clients. He fearlessly explains the primary essence of his method and gets important stakeholders to come together behind his changes. Implementing change usually involves significantly updating an organization’s teams and processes, sometimes drastically. The leadership has high expectations of the remodeled organizational landscape, and who can blame them for wanting everything faster and at a more reasonable cost? Bob’s system requires initial change, but offers a strong sense of quick and obvious benefits. It’s a win-win every time: Bob has a great sense of accomplishment after every transformation, and each company is happy with what he leaves them with, which is all anyone really wanted. Bob’s confidence and charm inspire the acceptance of drastic changes.

Alice is also an organizational change agent, although she never truly liked that title. Alice doesn’t seem as confident about change as Bob is. Alice doesn’t instantly offer a plan for a big change. She explores, together with the company, exactly where they stand, how people assess current successes and failures, from front-line workers to high-profile stakeholders, and the good and the bad of their organizational reality. She spends even more time trying to correlate diverse opinions with objective measures, when these can be readily discovered. When it isn’t so easy, she might launch an entire expedition with the sole purpose of uncovering the hidden knowledge that will help the organization assess its value delivery landscape. The management gets a little impatient with her exploratory style at times, and even more so with the gradual process of facilitated growth which comes later. Some think Alice lacks confidence, and that’s why she resorts to techniques that seem so different from Bob’s. Maybe there is something to the idea that humans are great at rationalizing behavior that is driven by their personality type.

In time, Bob moves to a new client, and the work starts anew. He is fully absorbed, not having the time to raise his head until he is done with all the meetings, and sessions, and workshops. But however involved, Bob always misses one aspect of his work. He doesn’t know what really happens months, or years, down the road. He doesn’t know that once the novelty has worn off, his clients begin to struggle with the structures and processes that Bob devised. It takes a year or two for them to realize things are not working out, and then they desperately seek help again. It’s very likely that they’ll find another Wizard of Short-Term Effects, like Bob, who might seem to have a different approach, but which is still characterized by charm, confidence, and drastic change. The management can never resist the unmitigated confidence of a Wizard. 

Meanwhile, Alice sticks with her clients a little longer, and even after she has mostly disengaged, it is an integral condition of her contract (as well as her way of thinking) to have multiple checkpoints over an extended period of time. The essence of her approach is that she cares most of all about context, and tries to get a sense of where the organization’s ecosystem is heading at any time, performing experiments and responsive action along the way. Alice is not particularly admired in the beginning, and even doubted, but in time she is recognized as the Queen of Sustainable Advantage, which is exactly what the company was seeking to acquire. 

“But wait,” some may argue, “drastic change has merit, too!”

It does, indeed. But this article is about dealing with complex environments on reality’s terms, not about big change vs. small change. After any drastic change in a complex adaptive system, there is a lengthy period of adaptation, which might take the system far away from its anticipated destination. It takes time, perhaps many months, for a reshuffled system to absorb the shock and produce a response, but only then can you get a sense of the true destination of the system. Sometimes big change is acceptable, but it would be irresponsible and unreasonable to believe that the state achieved right after a drastic reshuffling can be sustained over the long term in its initial form. 

If you are an organizational change agent, it is a matter of taste whether you want to be Bob, the Wizard of Short-Term Effects, or Alice, the Queen of Sustainable Advantage: a taste that will define the fate of the organization. 


New Book: Pursuing Enterprise Outcomes

Hello, Friends. My new book, PURSUING ENTERPRISE OUTCOMES: Maximizing Business Value and Improving Strategy for Organizations and Teams, is out and available for purchase via Amazon.

Enjoy the read, and here’s a brief book description for you:

Today’s enterprises are overwhelmed with complex tasks in marketing, business development, software engineering, IT infrastructure, talent management, and other critical areas. Yet a large number of those tasks fail to deliver desired outcomes, despite tremendous spend.

A must-read for leaders at every level, Pursuing Enterprise Outcomes provides answers that will boost your ability to succeed with your challenging initiatives. You will learn how to identify organizational disconnects and complex bottlenecks that prevent you from succeeding with your mission; discover and progressively refine outcomes and the business value that they deliver; drive the emergence of a complex solution that will help achieve the outcomes; discover and develop leverage points that will offer you a strategic advantage. You will learn to see the opportunity for creating enterprise value where others can’t see it.

Enablement Analysis by Example – Offshore Outsourcing

Today we are going to illustrate some key aspects of enablement chain analysis with a specific example: software development outsourcing, more specifically its offshore variant. This is a particularly useful exercise: besides exploring the power of enablement chains, it will also shed some light on outsourcing-related issues, something many organizations engage in and not so many benefit from. So, hopefully a win-win.

We will start in the simplest imaginable manner by capturing the most primitive picture we have in mind, which could be something as follows:

Let’s try to bring some clarity to the business benefit. One of the key reasons organizations outsource is cost reduction. The other common one is access to a potentially larger labor pool, which provides better scalability in hiring new talent. Let’s stop there and capture what we just learned on our enablement chain:

Usually, the cost advantage organizations are looking for is on the order of 50%–75% of the domestic rate. Reality, however, can paint a much more complicated (and much less encouraging) picture, in which those precious numbers may actually cost us dearly unless focused effort is applied. Let’s explore this a little deeper. For that we are going to focus on the following question:

Does outsourcing inevitably lead to such cost reduction or are there some factors that moderate the connection?

Well, the answer is: no, not inevitably. A big reason is a whole class of phenomena one can characterize as hidden costs of outsourcing: costs that can be very significant and yet hard to put your finger on. Here are some factors that must be in place to realize the desired benefit:

Alignment. Tasking remote team members gets you only so far. Proper alignment on what, how, and why to build requires sustained, significant overlap across the locations involved, which in the case of offshore outsourcing is inherently difficult to attain. Misalignment manifests in both building the wrong thing and building it the wrong way. Despite its superficial simplicity, it is often hard to catch this problem early enough, when the cost of fixing it is still tolerable.
Mitigated delays. Different time zones impose yet another problem: extra wait time. A question that could be answered in two minutes, were all the participants in the same time zone, may take a full 24-hour cycle. This ingrains a multitude of small delays that may have a significant cumulative effect on the group’s productivity. Those delays need to be systemically addressed.
Full devotion to a common goal. That is not always the case with offshore outsourcing. For starters, you are dealing with a different organization and, as such, it has its own priorities. What’s interesting, however, is that even if it’s not a different organization but an offshore subsidiary of the same company, it still manifests behaviors that are not well aligned with the common objective. Namely, remote offices tend to invest in showing that they are of value and that the business relationship should continue or, even better, expand in capacity. In addition to all the non-value-added “we-need-to-look-good” activities that naturally emerge from dealing with a geographically remote entity, there are also contracts that tend to over-emphasize what’s easy to measure (scope and time, for example), often at the cost of the actual value delivered.
Effective risk management. “Effective” means that issues transpire early and there is enough time to address them before more work gets layered on top of an existing problem. For multiple reasons, including cultural factors and customer-vendor pressure, remote groups may end up favoring delivery of good news over bad, thus delaying and amplifying the effect of impediments encountered in the development process.
Different facets of the skill. There is a trick in the statement: “The full cost of an in-house Java developer is $100/hr but we can find a remote one for just $35/hr.” It is tricky because it usually isn’t just knowledge and experience in Java that you are looking for. One has to take into account exposure to the organizational context and understanding of the enterprise architecture, the business domain, specific technology frameworks and their local customizations, and so on. When you hire someone new, expect a learning curve. Furthermore, the offshore format may impose additional challenges: metaphors commonly shared at the HQ may be quite foreign to people in remote locations, simply as a result of fundamental differences in their countries’ economies and lifestyles. A notion that is habitually established in countries like the US (credit scores or social security, for example) may appear unfamiliar or even exotic in some locations in Eastern Europe or APAC, imposing an additional layer of complexity in communications.
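To make the last point concrete, here is a back-of-the-envelope sketch in Python. Every number in it (the ramp-up length, productivity factors, and time horizon) is a made-up assumption for illustration, not data from this article:

```python
# Illustrative back-of-the-envelope only: every number below is a made-up
# assumption, not data from the article.
domestic_rate = 100   # $/hr, fully loaded in-house cost
offshore_rate = 35    # $/hr, quoted remote rate

ramp_up_months = 6          # assumed learning curve before full productivity
ramp_up_productivity = 0.5  # assumed average productivity during ramp-up
steady_productivity = 0.85  # assumed ongoing drag from delays, misalignment
horizon_months = 24         # evaluation horizon

# Effective cost per unit of domestic-equivalent output over the horizon.
productive_months = (ramp_up_months * ramp_up_productivity
                     + (horizon_months - ramp_up_months) * steady_productivity)
effective_rate = offshore_rate * horizon_months / productive_months

print(f"Effective offshore rate: ${effective_rate:.2f}/hr "
      f"vs ${domestic_rate}/hr domestic")
```

Even under these fairly generous assumptions, the effective rate lands noticeably above the quoted $35/hr, which is exactly the kind of hidden cost that a naive rate comparison misses.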

As a result, we are arriving at an enablement chain of the following kind:

All of the added factors moderate the connection between outsourcing and the desired outcomes we envision to attain from it. One may think of them as “valves” on the pipeline. But I wouldn’t recommend going too far with that analogy.

Our next question will be as follows: what needs to be done in order for this enablement chain to actually work as expected? In other words, are those new connections that we’ve added going to just work no matter what or do they need enablement of their own? Well, clearly there are things that are vital for the chain to actually work, such as:
• Regular travel
• Extensive remote communication
• Growing skills locally

These, in turn, need enablement of their own: they require regular investment of time by leadership and various subject matter experts, and additional funding will be needed, too.

Besides what’s been presented, there are some factors that influence the entire chain, such as:
• The chasm between those who make a decision to outsource and those who will have to face direct consequences of this decision (a negative factor, obviously)
• Lack of a proper feedback mechanism that would make it possible to understand whether the outsourcing strategy is working out (also a negative)
• Development leadership that is quite enthusiastic and eager to make this work at all costs (a positive factor)
The first two seem to be reinforced by the traditional stage-gate mentality. Our enablement chain will look as follows:

The chain allows us to conclude a few interesting things. First and foremost, we seem to have a better understanding of what kind of enablement is needed for outsourcing to work in our case. In fact, if it looks a bit complicated, that is for a reason: we are dealing with a complex decision, and the problem is that such decisions are usually approached through unreasonable oversimplification, omitting the bulk of the critical enablement that has to be in place. Now, our thought process, based on this diagram, may inform a decision to continue or to scrap the idea. One way or another, it will be based on a far better approximation than a mere comparison of domestic vs. offshore hourly rates and job pool sizes. We may realize, for example, that as an organization, we are not prepared to invest in regular travel, all the additional communication, extensive virtual knowledge sharing, and so on. Or, on the contrary, that we are totally willing to accept the investment because we have reasons to believe that we can still benefit from the model.
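As a sketch of how a chain like this could be captured in code, here is a hypothetical, minimal model. The node names and the `requires` relation are purely illustrative, not a prescribed notation:

```python
from collections import defaultdict

# Hypothetical minimal model of an enablement chain: each outcome lists
# the enablers it requires; enablers may in turn require enablement.
class EnablementChain:
    def __init__(self):
        self.enablers = defaultdict(list)  # node -> required enablers

    def requires(self, outcome, enabler):
        self.enablers[outcome].append(enabler)

    def missing_enablement(self, outcome, in_place):
        # Walk the chain and collect every direct or transitive enabler
        # that is not yet in place for the outcome.
        missing, seen, stack = set(), set(), [outcome]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            for e in self.enablers[node]:
                if e not in in_place:
                    missing.add(e)
                stack.append(e)
        return sorted(missing)

chain = EnablementChain()
chain.requires("cost reduction", "outsourcing")
chain.requires("cost reduction", "alignment")
chain.requires("cost reduction", "mitigated delays")
chain.requires("alignment", "regular travel")
chain.requires("alignment", "extensive remote communication")
chain.requires("mitigated delays", "extensive remote communication")
chain.requires("regular travel", "leadership time and funding")

print(chain.missing_enablement("cost reduction", in_place={"outsourcing"}))
```

Asking the model what is still missing, given only the decision to outsource, returns every enabler that still has to be put in place. That mirrors the reasoning above: the raw decision alone enables nothing.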

Second, if we were to continue, we would want to know which areas of our enablement chain we should expect to have the highest impact on the outcomes. This brings us to two important elements:
A) leverage points (in blue), and…
B) problem areas (in red)

The logic behind these elements is quite simple: leverage points are where you primarily want to focus your enablement effort, and problem areas are what inhibits good outcomes and thus should be addressed. From both of these, the organization derives action items in pursuit of the desired outcomes.

As a little commentary to the article, have you noticed how quickly the complexity of the “chain” has grown from two connected dots to a whole cobweb of factors and enablement connections between them? Please note that all the diagram does is reflect the relationships that are out there. In other words, every time you look at an enablement chain like this and think “Oh goodness, this is quite complex,” keep in mind that your actual decision domain is complex, and the diagram simply reflects that and helps you navigate through possible choices. This prompts an important takeaway: one of the goals here is to appreciate the complexity of the environment and make a judgement based on a reasonable consideration of important enablement factors and their connections.

Lastly, until proven otherwise, every diagram is a hypothesis and thus requires validation. But this, as well as some de-biasing techniques, is a matter for future articles.

What’s The Business Value of … Anything?

What’s the value of a new software feature, a corporate training, or a piece of marketing collateral? Sounds like something we should be able to answer, if only because one way or another each of those assets has a cost, so we naturally want to know the benefit. And this is where things usually get tricky. The cost is quite easy to figure out; the benefit (or the business value) is not. Here’s why…

A software feature, in and of itself, rarely represents business value. What usually happens is that it enables some new user capabilities that subsequently create new user value. For example, de-duplication of business transaction data (as a feature) enables an analyst to provide a better sales forecast, and thus the organization to plan the manufacturing process more effectively. A corporate training, similarly, does not represent direct value, but through instilling new techniques and behaviors in workers, will result in certain business benefits. A piece of marketing collateral represents value only as much as it enables better brand and product awareness. So, here’s the common trait across all our examples:

An asset (like a software feature, a training, a marketing material, …) may itself not represent direct business value at all; its utility is in how it enables business value.

Today’s organizations are incredibly complex, and often we need to determine the value of an asset that sits so far down a long enablement chain that it takes many “transitions” for it to result in something that is ultimately valuable to the business. No wonder organizations waste precious resources on things of little value when a proper understanding of enablement chains is missing.

In a few subsequent blog posts we will dive deeper into the specifics of analyzing enablement chains, thus advancing our understanding of the value an asset represents to the organization. But as a little teaser, I’d like to outline an example here. It illustrates one specific aspect of enablement, namely understanding the required enabling factors and their connections.

Assume an organization considers buying a training for its salespeople. The training delivery will cost the organization $60,000. By dint of astrology and crystals, the organization has figured that it should expect an $800,000 increase in sales within 12 months following the training. Let’s ask ourselves a question: is the $60,000 investment going to inevitably turn into an $800,000 payoff? The answer is: no, not at all. There is no inevitability in that outcome. The enablement connection contains quite a bit of complexity:

The training has to actually be relevant (i.e. apply to the organization’s environment), the instructor needs to be really good (and that’s never a given), attendees have to be 100% present and engaged (not so easy with sales people)… But even more importantly, the new approaches and behaviors that the training encompasses are going to need to be further adjusted to the organization’s context, some additional software enablement may be required (different functionality of the CRM system, for instance), and over time some additional coaching may be needed to properly adopt the techniques across the diverse customer base. So, all of a sudden, we are talking about a more complex picture.
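One way to see why the payoff is not inevitable is to treat each enabling factor as a probability. The probabilities below are entirely made up for illustration; the point is the multiplication, not the numbers:

```python
# Illustrative only: the probabilities below are made-up assumptions,
# not figures from the article.
cost = 60_000
promised_payoff = 800_000

# For the payoff to materialize, every enabling factor has to hold.
enabling_factors = {
    "training is relevant to our context": 0.8,
    "instructor is actually good": 0.7,
    "attendees are present and engaged": 0.6,
    "techniques adapted to our organization": 0.5,
    "CRM changes are in place": 0.7,
    "follow-up coaching happens": 0.6,
}

p_all = 1.0
for factor, p in enabling_factors.items():
    p_all *= p  # assume independence for simplicity

expected_payoff = p_all * promised_payoff
print(f"P(all enablers hold) = {p_all:.3f}")
print(f"Expected payoff = ${expected_payoff:,.0f} vs. ${cost:,} cost")
```

When every enabler has to hold, and each holds only with some probability, the expected payoff shrinks multiplicatively; with these made-up numbers it actually drops below the $60,000 cost.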

This example shows how multivariate enablement is and how easy it is to commit organizational resources to something that may, despite the great expectations, produce zero business value. It also shows us that astrology and crystals have very limited applicability in an enterprise context (despite the fact that almost every organization uses its own version of “supernatural” means to rationalize desired outcomes).

But there is a better way and we are going to incrementally uncover it in this post series.

Hope you enjoyed this little blurb!

Webinar: Faster Value Delivery

Hi guys,

We’re going to have a webinar on improving value delivery throughput in software and systems development. If you are familiar with the basic concept of throughput, register and you will be surprised how inaccurate conventional notions can be in an environment of complexity and uncertainty. More info regarding the webinar:

Every organization wants to improve its speed of value delivery. Not every company, however, realizes what that actually entails. In this webinar we will talk about the following topics:

  • The actual meaning of value throughput in software and systems development
  • Uncertainty-based flow model
  • Structural changes to improve value throughput
  • Changes in planning and execution
  • Further optimization

The talk is for coaches, facilitators and leaders of all levels.

Insight: Faster Value Delivery

Everybody cares about the speed of value delivery. And why wouldn’t they? It makes so much sense: the higher the speed of value delivery, the more value you can deliver to your customer, and the better the business outcomes. Well, that’s true, but only if you understand what it really means.

First and foremost, it’s not “speed of delivery”, it’s “speed of value delivery”. The difference is everything! You can learn to be faster at developing and deploying new scope (and some organizations do), but what is being delivered may have little to nothing to do with value: the functionality produced may not be something the customer wants, likes, or finds useful. Or, even though the customer likes it, it produces no economic benefit to your organization, and sometimes quite the opposite: it leads to negative economic outcomes. So, takeaway one: speed of value delivery matters only when what is being delivered is valuable. Seems trivial, but it isn’t so in real life. Often it is automatically assumed that “what we deliver is valuable” and the only thing we need is “to speed up”. That’s a big mistake, and it often leads to poor results.

Two… Once we’ve realized that value comes first, we naturally begin seeking faster ways to create and deploy it. And this is where things get a little tricky. We are used to this common perspective: stickies (that represent valuable backlog items) moving left-to-right on the board. All you need to do is make them move faster, right? Well, actually, not in the software and systems development world. To understand why, we need to look a little deeper into what is contained in those items. But first, let’s begin with a different case, where moving them fast left-to-right makes a lot of sense. That would be the case of a predictable environment. There, your backlog items are containers of value and your job is to drive them left to right through the process as fast as you can.

Repetitive processes, or processes composed of a relatively small set of basic variations, are usually like that, which is often the case in manufacturing and logistics.

What’s different about software and systems development is the significant amount of uncertainty involved. By definition it’s not a repetitive process composed of a small set of basic variations. Instead, it’s an inherently complex process that inevitably contains impactful unknowns. The “speed of value delivery” is now not all about moving backlog items left-to-right quickly. That may actually be detrimental. Different thinking is needed. Let’s start by updating our model to one more adequate for this type of environment:

Backlog items now contain not only value, where we have strong reasons to believe that such exists, but also unknowns. Unknowns may relate to any aspect of development, such as whether the requirements are right, whether the architecture will work, the implementation, and so on. Importantly, unlike in the previous example, the actual content of a container changes throughout the process itself! New unknowns transpire, and existing ones resolve, producing value or damage, respectively:

A simple example of such progression could be as follows:

Our backlog item is a product feature: “horizontal browsing of a media collection on mobile devices”. There is a strong belief at the beginning that it has some value to the customer, as it creates a more engaging user experience. But there’s one big unknown: this hasn’t been implemented yet and nobody knows what to expect. Over time, it turns out that the unknown resolves into damage: it doesn’t appear possible to make the movement smooth; besides, other problems occur, like an automatic shift of the slider after a media item is opened, watched, and closed. And so forth.

Now it is obvious that our goal is not to get a backlog item across the finish line as fast as we can. What we would like to do is 1) achieve high business value for an item and 2) do it fast. Generally speaking, the speed with which backlog items are moving through the system is no longer a key indicator of value throughput, as the actual content of those “containers” changes. Therefore, high value throughput requires effective progression of backlog items’ “content”, rather than just quick execution of the initially planned work. Trying to make it faster may actually have an adverse effect on value.

Managing unknowns becomes critical to maximizing value throughput. “Managing an unknown” means devising a strategy for dealing with it in a way that produces a better outcome. This suggests an important perspective: the flow of value can instead be thought of as a flow of unknowns, from identification to effective resolution in the form of business value.
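The container model above can be sketched in code. Here is a minimal, deliberately toy illustration; all names, numbers, and the `Unknown`/`BacklogItem` structures are hypothetical assumptions for the sketch, not part of the article. It shows why cycle time alone says little about value: two items can finish equally “fast” while their unknowns resolve very differently.

```python
from dataclasses import dataclass, field

@dataclass
class Unknown:
    """An open question inside a backlog item; resolving it yields value or damage."""
    description: str
    value_if_resolved: int  # positive = value, negative = damage

@dataclass
class BacklogItem:
    name: str
    believed_value: int                      # what we believe the item is worth up front
    unknowns: list = field(default_factory=list)
    realized_value: int = 0

    def resolve_all(self) -> int:
        """The item's 'content' changes as each unknown resolves, one by one."""
        self.realized_value = self.believed_value
        while self.unknowns:
            self.realized_value += self.unknowns.pop().value_if_resolved
        return self.realized_value

# Two items with identical cycle time but very different unknown content:
report = BacklogItem("simple report", believed_value=3,
                     unknowns=[Unknown("are these the right columns?", +1)])
browsing = BacklogItem("horizontal media browsing", believed_value=5,
                       unknowns=[Unknown("can the movement be made smooth?", -4),
                                 Unknown("slider auto-shift after viewing", -2)])

print(report.resolve_all())    # 4: the unknown resolved into value
print(browsing.resolve_all())  # -1: delivered "fast", yet net damage
```

Counting both items as “two items done this iteration” would report identical throughput, while the realized value differs by five; this is the sense in which managing the unknowns, not the motion of the containers, drives value throughput.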

By Alex Yakyma, Org Mindset.

Webinar Recording and Downloads: Enabling Dynamic Interactions and Flow with Integrated Footprint

Hello folks,

The webinar took place earlier today. Great discussion, good questions from the audience. This was the first time I presented the concept of Integrated Footprint in a systematic manner. Much of this material goes into our Enterprise Coach course (OMEC) v2.2 and Enterprise Facilitator course (OMEF). See our course schedule and the course descriptions for OMEC and OMEF.

Here’s the presentation download in PDF format:

…And the webinar recording itself:



Mile High Agile – Presentation Download and More

Hello folks.

Mile High Agile, even though just a two-day conference, felt like a whole week of focused effort. As you probably know, Org Mindset was a platinum sponsor of the conference, and I gave a presentation on day 2: “Addressing the Reductionist Mindset in Your Lean-Agile Transformation”. A great many people came by our booth to discuss our trainings, and we talked about their challenges and plans for their transformation efforts. Many old friends came to the conference, and we met many new people.

At the end of day one, our impatience took over and we decided to run an experiment: a little transformation self-assessment handout that would let people quickly perform a self-assessment with just a pencil, right on the spot. That seemed to work really well. Plus, what could be more Agile than creating a deliverable overnight (including the print job; thanks to the local Kinko’s for the quick turnaround)?

BTW, you can get the electronic version of the same in our Products menu. It’s a method-agnostic (like everything we do) 50,000-foot view of your organization. We hope you’ll find some valuable insights in it.

The room was full for the talk I mentioned above, with plenty of people standing along the back wall. Good interaction with the audience, good questions! I really enjoyed presenting to that audience!

Here’s the presentation file:

Lastly, I think the conference was really well organized. We interacted with the committee for quite a while (as a sponsor, we had a ton of things to do leading up to the conference itself), and that collaboration was really effective (thanks to Lissette Wells, among the other people who helped us in the process). The conference was fully run by volunteers, and they did a very good job!

A quick reminder on a side topic: we have a webinar on May 30 on “Enabling Dynamic Interactions and Flow of Value”. You are very welcome to join!

Stay tuned and have a great Lean-Agile rest of your day,


Webinar: Enabling Dynamic Interactions and Flow of Value


Hello folks, please join us for our next webinar: “Enabling Dynamic Interactions and Flow of Value”.

When: May 30, 2018, 11 am MDT.

How to register: Follow this registration link.


Modern organizations are complex organisms, and one aspect of that complexity lies in the dynamic nature of the work to be executed. It is due to an insufficient understanding of this important fact that facilitators, coaches, and leaders struggle to effectively implement Lean and Agile in their environments. As a result, team structures, interaction patterns, specific processes, and success indicators become obsolete, and further improvement ends up being impossible.

In this webinar we will talk about solving this problem and will touch on the following topics:

  • Dynamic work footprint and team formations
  • Planning that preserves flexibility
  • Facilitating dynamic team interactions and swarming
  • Applying contextual success indicators to measure and improve flow

This talk will be useful to Lean-Agile coaches, leaders, and facilitators.

See you there,

-Alex Yakyma