Measure What Matters for Business Agility with SAFe Metrics – Safe Agile

SAFe Business Agility

In direct response to customer feedback, the Framework team recently released a significant update to its guidance on SAFe Metrics to help enterprises measure business agility more effectively. In this episode, Andrew Sales, a member of the Framework team and SAFe Fellow, joins us to explain what’s changed, why these changes are important, and where enterprises can find key resources to help them apply the new guidance.


Learn About the New SAFe Metrics

We heard your feedback. The Scaled Agile Framework team recently released a significant update to its guidance on SAFe Metrics. In this episode, Andrew Sales, member of the Framework team and SAFe Fellow, joins us to explain what’s changed, why these changes are important, and where enterprises can find key resources to help them apply the new guidance. 

In their discussion, Andrew and Melissa dive into details including:

  • The story behind the changes
  • How the measurement model works
  • Examples of each measurement domain
  • What enterprises can do with the data


Hosted by: Melissa Reeve

Melissa Reeve is the Vice President of Marketing at Scaled Agile

Melissa Reeve is the Vice President of Marketing at Scaled Agile, Inc. In this role, Melissa guides the marketing team, helping people better understand Scaled Agile, the Scaled Agile Framework (SAFe), and its mission.

Guest: Andrew Sales

Andrew Sales is a SAFe Fellow

Andrew Sales is a SAFe Fellow, principal consultant, and SPC at Scaled Agile. He regularly contributes to the Framework, and has many years of experience in delivering SAFe implementations in a wide range of industries. Andrew is also passionate about continuous improvement and supporting teams and leaders in improving outcomes for their business and customers. Connect with Andrew on LinkedIn.

Practical Tips for the New RTE – Business Agility

SAFe Business Agility

Release Train Engineers (RTEs) play an important role in aligning the organization during PI Planning and maintaining that alignment throughout the PI. In this episode, we talk to Kimberly Lejonö and Carl Starendal, both former RTEs and experienced Agile coaches, who share their tips for RTEs just getting started in their role. And we’ll dive into some questions we hear from RTEs in the field around inspiring change across teams and Agile Release Trains (ARTs) and managing the flow of value.


Release Train Engineers (RTEs) play an important role in aligning the organization. In this episode, we talk to Kimberly Lejonö and Carl Starendal, both former RTEs and experienced Agile coaches, who share their tips for RTEs just getting started in their role. We’ll also dive into some questions we hear from RTEs in the field around inspiring change across teams and Agile Release Trains (ARTs) and managing the flow of value.

 Topics that Kimberly and Carl touch on include:

  • PI Planning preparation and execution
  • Maintaining alignment during the PI
  • Supporting cultural change
  • Metrics, and what not to measure

Hosted by: Melissa Reeve

Melissa Reeve is the Vice President of Marketing at Scaled Agile

Melissa Reeve is the Vice President of Marketing at Scaled Agile, Inc. In this role, Melissa guides the marketing team, helping people better understand Scaled Agile, the Scaled Agile Framework (SAFe) and its mission.

Guest: Kimberly Lejonö

RTE, project leader, and scrum master

Tapping into her background working as an RTE, project leader, and scrum master, Kimberly brings a high-energy and curious mindset to effect change in others. She loves connecting with the people around her and unlocking their potential to help organizations move in their desired direction. Connect with Kimberly on LinkedIn.

Guest: Carl Starendal

Carl Starendal

With a background in game development and a decade of hands-on experience at the center of global Lean-Agile transformations across multiple industries, Carl co-founded We Are Movement, an Agile and Lean advisory team based in Stockholm. A highly regarded trainer, advisor, and facilitator, he is a passionate advocate and resource for organizations throughout all stages of the Agile journey. Carl is recognized internationally as a speaker on leadership, Agile, and product development. Find Carl on LinkedIn.

AgilityHealth Insights: What We Learned from Teams to Improve Performance – Agility Planning

This post is part of an ongoing blog series where Scaled Agile Partners share stories from the field about using Measure and Grow assessments with customers to evaluate progress and identify improvement opportunities.

At AgilityHealth®, our team has always believed there’s a correlation between qualitative metrics (defined by maturity) and quantitative metrics (defined by performance or flow). A few years ago, we moved to gather both qualitative and quantitative data. Once we felt we had a sufficient amount to explore, we partnered with the University of Nebraska’s Center for Applied Psychological Services to review the data through our AgilityHealth platform. The main question we wanted to answer was: What are the top competencies driving teams to higher performance? 

Before we jump into the data, let’s start by reviewing what metrics make up “performance.” Below are the five quantitative metrics that form the Performance Dimension within the TeamHealth® radar: 

  • Time-to-market 
  • Quality
  • Predictable Delivery
  • Responsiveness (cycle time)
  • Value Delivered

During the team assessment, we ask the team and the product owner about their happiness and their confidence in their ability to meet the current goals. We consider these leading indicators for performance, so we were curious to see what drives the qualitative metrics of Confidence and Happiness as well. 

Methodology 

We analyzed both quantitative and qualitative data from teams surveyed between November 2018 and April 2021. There were 146 companies representing a total of 4,616 teams (some of which took the assessment more than once), which equates to more than 46,000 individual survey responses.

We used stepwise regression to explore and identify the top five drivers for each outcome. Stepwise regression is one approach in building a model that explains the most predictive set of competencies for the desired outcome. 
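The article doesn’t publish AgilityHealth’s actual model, but a minimal forward-stepwise sketch in Python shows the general idea: greedily add the competency that most improves the fit until nothing else helps. All column names and the DataFrame are illustrative assumptions, not the real dataset.

# Minimal forward-stepwise selection sketch (illustrative only; not AgilityHealth's actual model).
# Assumes a pandas DataFrame whose columns are competency scores plus one outcome column.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df: pd.DataFrame, outcome: str, max_drivers: int = 5) -> list[str]:
    """Greedily add the competency that most improves adjusted R-squared."""
    candidates = [c for c in df.columns if c != outcome]
    selected: list[str] = []
    best_adj_r2 = float("-inf")
    while candidates and len(selected) < max_drivers:
        scores = {}
        for c in candidates:
            X = sm.add_constant(df[selected + [c]])
            scores[c] = sm.OLS(df[outcome], X).fit().rsquared_adj
        best = max(scores, key=scores.get)
        if scores[best] <= best_adj_r2:
            break  # no candidate improves the model any further
        best_adj_r2 = scores[best]
        selected.append(best)
        candidates.remove(best)
    return selected

# Hypothetical usage:
# drivers = forward_stepwise(team_df, outcome="time_to_market")
# print(drivers)  # e.g. the top five competencies, in ranked order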

The results of our analysis identified the top five input drivers for each of the performance metrics in the TeamHealth assessment, along with the corresponding “weight” of each driver. We also uncovered the top five drivers of Confidence and Happiness for teams and product owners. These drivers are the best predictors for the corresponding metrics. All drivers are statistically significant, and the drivers are listed in ranked order for each metric.

By focusing on increasing these top five predictors, teams should see the highest gain on their performance metrics. 

Results

After analyzing the top drivers for each of the performance metrics, we noticed that a few kept showing up as repeat drivers across the performance metrics.


When analyzing the drivers for Confidence and Happiness, we found these additional predictors:


We know from experience that shorter iterations, better planning and estimating, and T-shaped skills all lead to better performance—but we now have data to prove it. It was a welcome surprise to see self-organization and creativity take center stage, as it did in our analysis. We’ve always coached managers to empower teams to solve problems, but for the first time, we have the data to back it up. 

Recommendations

Pulling these patterns together, it’s clear that if a team wants to impact its performance in an efficient way, it should focus on weekly iterations, T-shaped team members, effective planning and estimating, enabling creativity and self-organization, role clarity, and right-sizing and skilling. Teams that invested in these drivers saw a 37 percent performance improvement over teams that didn’t. So when in doubt, start here!

We’re excited to share that you can now see the drivers for each competency inside the AgilityHealth platform. We hope it helps you make informed decisions about where to invest your time and effort to improve your performance.

Visit the AgilityHealth page on the SAFe® Community Platform to learn more about these assessment tools and get started!

About Sally

Sally – Agile and business agility space

Sally is a thought leader in the Agile and business agility space. She’s passionate about accelerating the enterprise business agility journey by measuring what matters at every level and building strong leaders and strong teams. She is an executive advisor to many Fortune 500 companies and a frequent keynote speaker. Learn more about AgilityHealth at https://www.agilityhealthradar.com.


How Do We Measure Feelings? – SAFe Transformation

This post is part of an ongoing blog series where Scaled Agile Partners share stories from the field about using Measure and Grow assessments with customers to evaluate progress and identify improvement opportunities.

As business environments feature increasing rates of change and uncertainty, agile ways of working are becoming the dominant way of operating around the globe. The reason for this dominance is not that agile is necessarily the “best” way of working (agile, by definition, embraces the idea that you don’t know what you don’t know) but because businesses have found agile better-suited to addressing today’s challenges. Detailed three-year plans, extensive Gantt charts, and work breakdown structures simply have less relevance in today’s world. Agile, with its emphasis on fast learning and experimentation, has proven itself to be more appropriate for today’s unpredictable business environment.

Agility Requires Data You Can Trust

Whereas a plan-driven approach requires an extensive analysis phase, today’s context demands frequent access to high-quality data and information to facilitate quick course correction and validation. One of these critical sources of data is targeted assessments. The purpose of any assessment is to gather information. And the quality of the information collected is a direct result of the quality of the assessment. 

Think of an assessment as a measuring tool. If we were studying a physical object, we might use measuring devices to assess its length, height, mass, and so on. Scientists have developed sophisticated definitions of many of these physical characteristics so we can have a shared understanding of them.

However, people—especially groups of people—are not quite so straightforward to measure, particularly if we’re talking about their attitudes and feelings. It’s not really possible to directly measure concepts like culture and teamwork in the same way we can measure mass or length. Instead, we have to look to the discipline of psychometrics—the field of study dedicated to the construction and validation of assessment instruments—to assist us in measuring these complex topics.

Survey researchers often refer to an assessment or questionnaire as an “instrument,” because the purpose is to measure. We measure to learn, and we learn to apply our knowledge in pursuit of improvement. This is one reason why assessment is such an integral part of the educational system. Properly designed, assessments can be a powerful tool to help us validate our approach, understand our strengths, and identify areas of opportunity.

Ensuring Quality is Built into the Assessment

Since meaningful information is so critical to fast inspection and adaptation, it’s important to use high-quality assessments. After all, if we’re going to leverage insights from the assessments to inform our strategy and guide our decisions, we need to be confident we can trust the data.

How do we know that an assessment instrument is measuring what it purports to? We need to take care when designing the assessment tool, and then use data to provide evidence of both its validity (accuracy) and reliability (precision). Here’s how we ensure quality is built into our assessment.

Step 1: Prototype

All survey instrument development starts with a measurement framework. When Comparative Agility partnered with SAFe® to design the new Business Agility assessment, subject matter experts leveraged their experience from the original Business Agility survey to explore enhancements. 

The original Business Agility survey had generated a variety of important insights and proved to be incredibly popular among SAFe customers. But one area of potential improvement was the language used in the assessment itself. Customers wanted to leverage a proven SAFe survey to understand an organization’s current state, without first requiring the organization to have gone through comprehensive training. With the former Business Agility survey, this proved difficult, since the survey instrument often referred to SAFe-specific topics that many had not been exposed to yet.

To address this issue, subject matter experts (SPCTs, SAFe Fellows) teamed up with data scientists from Comparative Agility to craft SAFe survey items that would be meaningful at the start of a SAFe implementation, while avoiding terms that would require prior knowledge. This work resulted in a prototype survey or “minimum viable product.” 

Step 2: Test and Validate

Once the new Business Agility survey instrument was developed, we released it to beta and began to collect data. Several people in the SPCT community were asked to participate in a pilot. In follow-up interviews, respondents were asked about their experience with the survey. Together with respondents, the survey design team, and additional subject matter experts, we examined the results. (We also received external feedback from a Gartner researcher to help improve the nomenclature of some of the survey items.) Only once the team has been satisfied with the reliability and validity of the beta survey instrument will it be ready for production.
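The post doesn’t name the specific statistics the data science team uses, but one common way to gauge the internal-consistency reliability of a survey scale is Cronbach’s alpha. Here is a minimal, purely illustrative sketch, assuming responses are stored as a respondents-by-items table of numeric scores.

# Minimal Cronbach's alpha sketch (illustrative; the source doesn't name the exact statistics used).
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """responses: 2-D array, rows = respondents, columns = survey items (numeric scores)."""
    k = responses.shape[1]                          # number of items in the scale
    item_vars = responses.var(axis=0, ddof=1)       # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example with randomly generated data: 100 respondents, 8 items scored 1-5.
rng = np.random.default_rng(0)
data = rng.integers(1, 6, size=(100, 8)).astype(float)
print(f"alpha = {cronbach_alpha(data):.2f}")  # values above roughly 0.7 are usually considered acceptable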

Step 3: Deploy and Monitor

Even after the Business Agility survey instrument reaches the production phase, the data science team at Comparative Agility and Scaled Agile continuously monitor the assessment for data consistency. A rigorous change management process ensures that any post-deployment tweaks to survey language are tested so they don’t negatively impact its accuracy.

Integrating Flow and Outcomes

Although validated assessments are a critical component of a data-driven approach to continuous improvement, they’re not sufficient. To gain a holistic perspective and complete the feedback loop, it’s also important to measure Flow and Outcomes.

Flow

Flow metrics express how efficient an organization is at delivering value. When operating in complex environments characterized by uncertainty and volatility, flow metrics reveal performance across the end-to-end value stream, so you can identify impediments to agility. A more comprehensive overview of Flow metrics can be found in the SAFe knowledge article, Metrics.

Outcomes

Flow metrics may help us deliver quickly and effectively, but without understanding whether we’re delivering value to our customers, we risk simply “delivering crap faster.” Outcome metrics address this challenge by ensuring that we’re creating meaningful value for the end-customer and delivering business benefits. Examples of outcome metrics include revenue impact, customer retention, NPS scores, and Mean Time to Resolution (MTTR).

Embracing a Culture of Data-Driven, Continuous Improvement

It’s important to note that although data and insights help inform our strategy and guide our decisions, to make change stick and ultimately to drive sustainable cultural change, we need to appreciate that data is a means to an end.

That is, data—even though it’s validated, statistically significant, and of high quality—should be viewed not as a source of answers, but rather as a means to ask better questions and uncover new insights in our interactions with people. By having data guide us in our conversations, interactions, and how we define hypotheses, we can drive a culture of inquiry and continuous improvement. 

Just like when a survey helps us better understand how we feel, the assessment provides us with an opportunity to interact in a more meaningful way and increase our understanding. The data itself is not the goal but a way to help us learn faster, adapt quicker, and remove impediments to agility.

Start Improving with Your Own Data

As 17 software industry professionals noted some twenty years ago at a resort in Snowbird, Utah, becoming more agile is about “individuals and interactions over processes and tools.” 

To start your own journey of data-driven, continuous improvement today, activate your free Comparative Agility account in the Measure & Grow area of the SAFe Community Platform.

About Matthew

Matthew Haubrich is the Director of Data Science at Comparative Agility.

Matthew Haubrich is the Director of Data Science at Comparative Agility. Passionate about discovering the story behind the data, Matt has more than 25 years of experience in data analytics, survey research, and assessment design. Matt is a frequent speaker at numerous national and international conferences and brings a broad perspective of analytics from both public and private sectors.


Honest Assessments Achieve Real Insights

In this post, I share my experience of running a series of Measure and Grow assessments at a government agency in the UK I’m working with—including the experiments that we decided to run and our learnings during the SAFe transformation process.

The last year has been a voyage of discovery for all of us at Radtac. First, we had to figure out how to deliver training online and still make it an immersive learning experience. Then, we needed to figure out how to do PI Planning online with completely dispersed teams. Once that was sorted, we entered a whole new world of ongoing, remote consulting that included how to run effective Measure and Grow assessments.

In this post, I share my experience of running a series of Measure and Grow assessments at a government agency in the UK I’m working with—including the experiments that we decided to run and our learnings. The agency has already established and runs 15 Agile Release Trains (ARTs). We agreed that we wouldn’t run assessments for all 15 ARTs because we wanted to start small and test the process first. Therefore, we picked four ARTs to pilot the assessments, limiting the pilot to the Team and Technical Agility and Agile Product Delivery assessments.

Pre-assessment Details

It was really important that each ART we had selected had an agility assessment pre-briefing where we set the context with the following key messages:

  1. This is NOT a competition between the ARTs to see which one has the best assessment results.
  2. The assessments will support the LACE in identifying the strengths and development areas across the ARTs.
  3. The results will be presented to leadership in an aggregated form. Each ART will see only their results; no individual ART results will be shared with other ARTs.
  4. The results will identify where leadership can remove impediments that the teams face.
  5. We need an honest assessment to achieve real insight into where leadership and the LACE can help the teams.

In addition, prior to the assessments, we asked the ARTs to:

  1. Briefly review the assessment questions.
  2. Prioritise attendance by core team members, ensuring a cross-section of the team.

Conducting the Assessment

The assessment was facilitated by external consultants to provide some challenge to the responses. We allotted 120 minutes for both the Team and Technical Agility and Agile Product Delivery assessments, but most ARTs completed them within 90 minutes. We used Microsoft Teams as our communication tool and Mentimeter.com (Menti) to poll the responses.

Each Menti page had five to six questions that the team members were asked to score on a scale of 1 to 5, with 1 being false, 3 being neither false nor true, and 5 being true. To avoid groupthink, we didn’t show the results until all members had scored all the questions. Because Menti shows a distribution of scores, where there was a range in the scoring, we explored the extremes and asked team members to explain why they thought it was a 1 while others thought it was a 5. On the rare occasion that there was any misunderstanding, we ran the poll again for that set of questions.

Some results from the Team and Technical Agility poll.
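A small script can make the “explore the extremes” step systematic. This is only a sketch: the data shapes are hypothetical (a Mentimeter export would need to be mapped into this form first), and it also treats skipped questions as missing rather than false, which anticipates the learning described below.

# Sketch: flag assessment questions with widely spread scores for follow-up discussion.
# Data shapes and question text are hypothetical.
from statistics import mean

# scores[question] -> list of member scores on the 1-5 scale; None means "no experience, skipped"
scores = {
    "Teams are cross-functional": [5, 4, 1, 5, 2],
    "We integrate at least every iteration": [3, 3, 4, None, 3],
}

for question, raw in scores.items():
    answered = [s for s in raw if s is not None]   # skipped answers don't drag the average down
    spread = max(answered) - min(answered)
    flag = "DISCUSS" if spread >= 3 else ""        # a wide spread is worth talking through
    print(f"{question}: avg={mean(answered):.1f} spread={spread} {flag}")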

What we found after the first assessment was that there was still a lot of SAFe® terminology that people didn’t understand. (Based on this and similar feedback, Scaled Agile recently updated its Business Agility assessment with simpler, clearer terminology. This is helpful for organizations that want to use it before everyone has been trained or even before they’ve decided to adopt SAFe.) So, for the next assessment, we created a glossary of definitions, and before each set of questions was scored, we reminded everyone of the key terminology definitions.

The other learning was that for some of the questions, team members didn’t have relevant experience and therefore scored a 1 (false), which distorted the assessment. Going forward, we asked team members to skip the question if they had no experience. We also took a short break between the assessments. And of course, no workshop would be complete without a feedback session at the end, which helped us improve each time we completed the assessments.

Here is a quote from one of the ARTs:

“As a group, we found the Agile Assessment a really useful exercise to take part in. Ultimately, it’s given our supporting team areas to focus on and allowed us to pinpoint areas where we can drive improvements. The distributed scores for each question are where we saw a great deal of value and highlighted differences in opinion between roles. This was made more impactful by having a split of engineers and supporting team roles in the session. The main challenge we had about the session was how we interpreted the questions differently. To overcome this, we had a discussion about each question before starting the scoring, and although this made the process a little longer, it was valuable in ensuring we all had the same understanding.”

Post-assessment Findings

We shared each ART’s individual results with its team members so that they could consider what they, as an ART, could improve themselves. As a LACE, we aggregated the results and looked for trends across the four ARTs. Here’s what we presented to the leadership team:

  1. Observations—what did we see across the ARTs?
  2. Insights—what are the consequences of these observations?
  3. Proposed actions—what do we need to do as a LACE and leadership team? We used the Growth Recommendations to provide some inspiration for the actions.

We then made a commitment to the teams that we would provide feedback from the leadership presentations.

Next Steps

We need to run the assessments across the other 11 ARTs and then repeat the assessments every two to three Program Increments.

You can get started with Measure and Grow, including the updated Business Agility assessment and tools on the SAFe® Community Platform.

About Darren

Darren is a director at Radtac, a global agile consulting business

Darren is a director at Radtac, a global agile consulting business based in London that was recently acquired by Cprime. As an SPCT and SAFe® Fellow, Darren is an active agile practitioner and consultant who frequently delivers certified SAFe courses. Darren also serves as treasurer of BCS Kent Branch and co-authored the BCS book, Agile Foundations—Principles, Practices and Frameworks.


Measuring Agile Maturity with Assessments – Agile Adoption

Stephen Gristock is a PMO Leader and Lean Agile Advisory Consultant for Eliassen Group, a Scaled Agile Partner. In this blog, he explores both the rationale and potential approaches for assessing levels of Agility within an organization.

A Quick Preamble


For every organization that embarks upon it, the road to Agile adoption can be long and fraught with challenges. Depending on the scope of the effort, it can be run as a slow-burn initiative or a more frenetic and rapid attempt to change the way we work. Either way, like any real journey, unless you know where you’re starting from, you can’t really be sure where you’re going.

Unfortunately, it’s also true that we see many organizations go through multiple attempts to “be Agile” and fail. Often, this is caused by a lack of understanding of the current state or a conviction that “we can get Agile out of a box.” This is where an Agile Assessment can really help, by providing a new baseline that can act as a starting point for Agile planning or even just provide sufficient information to adjust our course.

What’s in a Word?

We often hear the refrain that “words matter.” Clearly, that is true. But sometimes humans have a tendency to overcomplicate matters by relabeling things that they aren’t comfortable with. One example of this within the Agile community is our reluctance to use the term “Assessment.” To many Agilists, this simple word has a negative connotation. As a result, we often see alternative phrases used such as “Discovery,” “Health-check,” or “Review.” Perhaps it’s the uncomfortable proximity to the word “Audit” that sends shivers down our spines! Regardless, the Merriam-Webster dictionary defines “assessment” as:

“the act of making a judgment about something”

What’s so negative about that? Isn’t that exactly what we’re striving to do? By providing a snapshot of how the existing organization compares against an industry-standard Agile framework, an Assessment can provide valuable insight into what is working well, and what needs to change.

The Influence of the Agile Manifesto

When the Agile movement was in its infancy, thought leaders sought to encapsulate the key traits of true agility within the Agile Manifesto. One of the principles behind the manifesto emphasizes the importance of regular reflection:

“At regular intervals, the Team reflects on how to become more effective, then tunes and adjusts its behavior accordingly”

Of course, this is key to driving a persistent focus on improvement. In Scrum this most obviously manifests itself in the Retrospective event. But improvement should span all our activities. If used appropriately, an Agile Assessment can be a very effective platform for identifying broad sets of opportunities and improvements.

Establishing a Frame of Reference

Just like Agile Transformations themselves, all Assessments need to start with a frame of reference upon which to shape the associated steps of scoping, exploration, analysis, and findings. Otherwise, the whole endeavor is likely to reflect the subjective views and perspectives of the Assessor(s), rather than a representation of the organization’s maturity against a collection of proven best practices. We need to ensure that our Assessments leverage an accepted framework against which to measure our target organization. The selected framework then provides us with a common set of concepts, practices, roles, and terminology that everyone within the organization understands. Simply put, we need a benchmark model against which to gauge maturity.

Assessment Principles


In the world of Lean and Agile, intent is everything. To realize its true purpose, an Assessment should be conducted in observance of the following overriding core principles:

  • Confidentiality: all results are owned by the target organization
  • Non-attribution: findings are aggregated at an organizational level, avoiding reference to individuals or sub-groups
  • Collaboration: the event will be imbued with a spirit of openness and partnership; this is not an audit
  • Action-oriented: the results should provide actionable items that contribute toward building a roadmap for change

Also, to minimize distraction and disruption, Assessments are often designed to be lightweight and minimally invasive.

Assessment Approaches

It goes without saying that Assessments need to be tailored to fit the needs of the organization. In general, there are some common themes and patterns that we use to plan and perform them. The process for an archetypal Assessment event will often encompass these main activities:

  • Scoping and planning (sampling, scheduling)
  • Discovery/Info gathering (reviewing artifacts, observing events, interviews)
  • Analysis/Findings (synthesizing observations into findings)
  • Recommendations (heatmap, report, debrief)
  • Actions/Roadmap

Overall, the event focuses on taking a sample-based snapshot of an organization to establish its level of Agile Maturity relative to a predefined (Agile) scale. Often, findings and observations are collected or presented in a Maturity Matrix which acts as a tool for generating an Agile heatmap. Along with a detailed Report and Executive Summary, this is often one of the key deliverables which is used as a primary input to feed the organization’s transformation Roadmap.

Modes of Assessment

Not all Assessments need to be big affairs that require major planning and scheduling. In fact, once a robust baseline has been established, it often makes more sense to institute periodic cycles of lighter-weight snapshots. Here are some simple examples of the three primary Assessment modes:

  • Self-Assessment: have teams perform periodic self-assessments to track progress against goals
  • Peer Assessments: institute reciprocal peer reviews across teams to provide objective snapshots
  • Full Assessment: establish a baseline profile and/or deeper interim progress measurement

Focus on People—Not Process and Tools


Many organizations get seduced into thinking that off-the-shelf solutions are the answer to all their Agile needs. However, even though a plethora of methods, techniques, and tools exist for assessing, one of the most important components is the Assessor. Given the complexities of human organizations, the key to any successful assessment is the ability to discern patterns, analyze, and make appropriate observations and recommendations. This requires that our Assessor is technically experienced, knowledgeable, objective, collaborative, and above all, exercises common sense. Like almost everything else in Agile, the required skills are acquired through experience. So, choosing the right Assessor is a major consideration.

Go Forth and Assess!

In closing, most organizations that are undergoing an Agile transformation recognize the value of performing a snapshot assessment of their organization against their chosen model or framework. By providing a repeatable and consistent measurement capability, an Assessment complements and supports ongoing Continuous Improvement, while also acting as a mechanism for the exchange and promotion of best practices.

We hope that this simple tour of Assessments has got you thinking. So what are you waiting for? Get out there and assess!

For more information on assessments in the SAFe world, we recommend checking out the Measure and Grow article.

About Stephen Gristock

Stephen Gristock - Specializing in Agile-based transformation techniques

Specializing in Agile-based transformation techniques, Stephen has a background in technology, project delivery and strategic transformations acquired as a consultant, coach, practitioner, and implementation leader. Having managed several large Agile transformation initiatives (with the scars to prove it), he firmly believes in the ethos of “doing it, before you teach/coach it.” He currently leads Eliassen Group’s Agile advisory and training services in the NY Metro region.


Understanding Leading Indicators in Product Development and Innovation

It’s quite common for people to nod knowingly when you mention leading indicators, but in reality, few people understand them. I believe people struggle with leading indicators because they are counterintuitive, and because lagging indicators are so ingrained in our current ways of working. So, let’s explore leading indicators: what they are, why they’re important, how they’re different from what you use today, and how you can use them to improve your innovation and product development.

What Are Leading and Lagging Indicators?


Leading indicators (or leading metrics) are a way of measuring things today with a level of confidence that we’re heading in the right direction and that our destination is still desirable. They are in-process measures that we think will correlate to successful outcomes later. In essence, they help us predict the future.

In contrast, lagging indicators measure past performance. They look backwards and measure what has already happened.

Take the example of customer experience (CX). This is a lagging indicator for your business because the customer has to have the experience before you can measure it. While it’s great to understand how your customers perceive your service, by the time you discover it sucks it might be too late to do anything about it.

ROI is another example of a lagging indicator: you have to invest in a project ahead of time but cannot calculate its returns until it’s completed. In days gone by you might have worked on a new product and spent many millions, only to discover the market didn’t want it and your ROI was poor.

Online retailers looking for leading indicators of CX might look instead at page load time, successful customer journeys, or the number of transactions that failed and ended up with customer service. I often tell clients that if these leading indicators are positive, we have reason to believe that CX, when measured, will also be positive.

Don Reinertsen shares a common example of leading vs. lagging indicators: the size of an airport security line is a leading indicator for the lagging indicator of the time it takes to pass through security screening. This makes sense because if there is a large line ahead of you, the time it will take to get through security and out the other side will be longer. We can only measure the total cycle time once we’ve experienced it.
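The security-line example is essentially Little’s Law: average time in the system equals average queue length divided by average throughput. A back-of-the-envelope sketch, with made-up numbers, shows how the leading indicator you can see now predicts the lagging one you can only measure later.

# Little's Law sketch: use queue length (leading) to predict time through security (lagging).
# Numbers are illustrative only.
people_in_line = 120          # leading indicator: what you can see right now
throughput_per_minute = 8     # average people screened per minute

predicted_wait_minutes = people_in_line / throughput_per_minute
print(f"Expected time to clear security: ~{predicted_wait_minutes:.0f} minutes")  # ~15 minutes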

If you operate in a SAFe® context, the success of a new train’s PI planning (which is a lagging indicator) is predicated on leading indicators like identifying key roles, training people, getting leadership buy-in, refining your backlog, socializing it with the teams, etc.

Simple Examples of Successful Leading Indicators

The Tesla presales process is a perfect example of how to develop leading indicators for ROI. Tesla takes refundable deposits, or pre-orders, months if not years before delivering the car to their customers. Well before the cars have gone to production, the company has a demonstrated indicator of demand for its vehicles.

Back in the 90s, Zappos was experimenting with selling shoes online in the burgeoning world of e-commerce. They used a model of making a loss on every shoe sold (by not holding stock and buying retail) as a leading indicator that an online shoe-selling business would be successful, before investing in the infrastructure you might expect to need to operate in this industry.

If you are truly innovating (versus using innovation as an excuse for justifying product development antipatterns, like ignoring the customer) then the use of leading indicators can be a key contributor to your innovation accounting processes. In his best-selling book, The Lean Startup, Eric Ries explains this concept. If you can demonstrate that your idea is moving forward by using validated learning to prove problems exist, then customers will show interest before you even have a product to sell. Likewise, as Dantar P. Oosterwal demonstrated in his book, The Lean Machine, a pattern of purchase orders can be a leading indicator of product development and market success.

Leading Indicators Can Be Near-term Lagging Indicators

Let’s loop back and consider the definitions of leading and lagging indicators.

  • Lagging: Measures output of an activity. Likely to be easy to measure, as you’ve potentially already got measurement in place.
  • Leading: Measures inputs to the activity. Often harder to measure as you likely do not do this today.

Think about the process of trying to lose weight. Weight loss is a lagging indicator, but calories consumed and exercise performed are leading indicators, or inputs to the desired outcome of losing weight.


While it’s true that both calories consumed and exercise performed are activities that cannot be measured until they’re completed, and therefore might be considered near-term lagging indicators, they become leading indicators because we’re using them on the path to long-term lagging indicators. Think back to the CX example: page load time, successful customer journeys, and failed transactions that end up with customer service can all be considered near-term lagging indicators. Yet we can use them as leading indicators on a pathway to a long-term lagging indicator, CX.

How to Ideate Your Leading Indicators

The most successful approach I’ve applied with clients over many years is based on some work by Mario Moreira, with whom I worked many moons ago. I’ve tweaked the language and application a little and recommend you create a Pathway of Leading to Lagging Indicators. To demonstrate this, I will return to the CX example.


If we walk the pathway, we can estimate that an acceptable page load time will lead to a successful user journey, which—if acceptable—will then lead to fewer failed transactions that revert to customer service, which ultimately will lead to a positive customer experience metric.

Work Backwards from Your Lagging Indicator

To create your Leading to Lagging Pathway, start from your lagging indicator and work backwards looking at key successful elements that need to be true to allow your lagging indicator to be successful.

At this stage, these are all presuppositions; as in, we believe these to be true. They stay this way until you’ve collected data and can validate your pathway. This is similar to how you need to validate personas when you first create them.

Add Feedback Loop Cycle Times

Once you have your pathway mapped out, walk the pathway forward from your first leading indicator and discuss how often you can and should record, analyze, and take action for that measure. You should make these feedback loops as short as possible because the longer the loop, the longer it will take you to learn.


All that’s left is to implement your Leading to Lagging Pathway. You may find a mix of measures, some of which you measure today and some you don’t. For those you already measure, you may not be measuring them often enough. You also need to put in place business processes to analyze and take action. Remember that if measures do not drive decisions, then your actions are a waste of resources.

Your leading indicator might be a simple MVP. Tools like QuickMVP can support the implementation of a Tesla-style landing page to take pre-orders from your customers.

Applying Leading Indicators in Agile Product Management

A common anti-pattern I see in many product management functions is a solution looking for a problem. These are the sorts of pet projects that consume huge amounts of R&D budget and barely move the needle on profitability. Using design thinking and Lean Startup techniques can help you to validate the underlying problem, determine the best solution, and identify whether it’s desired by your potential customers and is something you can deliver profitably.

In SAFe, leading indicators are an important element of your epic benefit hypothesis statement. Leading indicators can give you a preview of the likelihood that your epic hypothesis will be proven, and they can help deliver this insight much earlier than if you weren’t using them. Insight allows you to pivot at an earlier stage, saving considerable time and money. By diverting spending to where it will give a better return, you are living by SAFe principle number one, Take an economic view.

Let’s look at some working examples demonstrating the use of leading indicators.


I hope you can now see that leading indicators are very powerful and versatile, although not always obvious when you start using them. Start with your ideation by creating a Leading to Lagging Pathway, working back from your desired lagging indicator. If you get stuck, recall that near-term lagging indicators can be used as leading indicators on your pathway too. Finally, don’t feel you need to do this alone: pair up or get a group of people together to walk through this process. The discussions will likely be valuable in creating alignment, in addition to the output.

Let me know how you get on. Find me on the SAFe Community Platform and LinkedIn.

About Glenn Smith

Glenn Smith is a SAFe Program Consultant Trainer (SPCT), SPC, and RTE

Glenn Smith is a SAFe Program Consultant Trainer (SPCT), SPC, and RTE working for Radtac as a consultant and trainer out of the UK. He is a techie at heart, now with a people and process focus, supporting organizations globally to improve how they operate in a Lean-Agile way. You will find him regularly talking at conferences and writing about his experiences to share his knowledge.


Agility Fuel – Powering Agile Teams


One of my favorite analogies for agile teams is to compare them to an F-1 race car. These race cars are the result of some of the most precise, high-performance engineering on the planet, and they have quite a bit in common with high-functioning agile teams. Much like F-1 cars, agile teams require the best people, practices, and support that you can deliver in order to get the best performance out of them.

And just like supercar racing machines, agile teams need fuel in order to run. That fuel is what this post is about. In the agile world, the fuel of choice is feedback. I would like to introduce a new ‘lens’ or way of looking at feedback. I’ll leverage some learning from the art of systems thinking to provide a better understanding of what various metrics are and how they impact our systems every day.

Most often, this feedback is directly from the customer, but there are other types as well. We have feedback regarding our processes and feedback from our machinery itself. In broad terms, the feedback in an agile world falls into three different categories:

  1. Process: Feedback on how the team is performing its agility.
  2. DevOps: This is feedback on the machinery of our development efforts.
  3. Product: The so-called ‘Gemba metrics.’ This segment of feedback is where we learn from actual customer interaction with our product.

Thinking in Feedback

Every agile framework embraces systems thinking as a core principle. In this exercise, we are going to use systems thinking to change how we see, interact with, and make predictions from our feedback. If you want to go deeper into systems, please pick up “Thinking in Systems,” by Donella Meadows or “The Fifth Discipline,” by Peter Senge. Either one of these books is a great introduction to systems thinking, but the first one focuses solely on this topic.

For the purposes of this post, we will be thinking about our feedback in the following format:

Metric

This is the actual metric, or feedback, that we are going to be collecting and monitoring.

Category

Every feedback loop will be a process-, DevOps-, or product-focused loop.

Stock

Each feedback metric will be impacting some stock within your organization. In each case, we will be talking about how the stock and the feedback are connected to each other.

Type

Balancing: Think of the thermostat in a room; it drives the temperature of the room (the stock) to a specific range and then holds it there. These are balancing feedback loops.

Reinforcing: Because the interest on a savings account is based on how much is in the account, whenever you add that interest back in, there is more stock (the amount in the account) and more interest will be deposited next time. This is a reinforcing feedback loop.

Delay

Feedback always reports on what has already happened. We must understand the minimum delay that each system has built into it; otherwise, system behavior will oscillate as we react to the way things used to be.

Limits

We will talk about the limits for each stock/feedback pair so that you can understand them and know when a system is operating correctly but has simply hit a limit.
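Before we get to the examples, the Balancing and Reinforcing types are easy to see in a tiny simulation. This is only a sketch with arbitrary numbers: a thermostat pulls a stock toward its set point, while compound interest feeds a stock back into itself.

# Tiny simulation of the two feedback-loop types described above (arbitrary numbers).
temperature = 15.0   # stock driven by a balancing loop (thermostat set to 21)
savings = 1000.0     # stock driven by a reinforcing loop (5% interest per period)

for period in range(1, 11):
    temperature += 0.5 * (21.0 - temperature)   # balancing: the correction shrinks as we near the set point
    savings += savings * 0.05                   # reinforcing: the stock feeds its own growth
    print(f"period {period:2d}: temperature={temperature:5.2f}  savings={savings:8.2f}")

# The balancing loop converges on 21; the reinforcing loop grows faster every period.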

A Few Examples

Let’s look at one example metric from each category so that you can see how to look at metrics with this lens.

ART Velocity


Discussion:

ART velocity impacts two stocks: Program Backlog and Features Shipped, both of which are metrics themselves. In both cases, ART Velocity is a balancing loop since it is attempting to drive those metrics in particular directions. It drives Program Backlog to zero and Features Shipped steadily upward. In neither case will the stock add back into itself like an interest-bearing savings account.

The upper limit is the release train’s sustainability. So, things like DevOps culture, work-life balance, employee satisfaction, and other such concerns will all come into play in dictating the upper limit of how fast your release train can possibly go. The lower limit here is zero, but of course, coaches and leadership will intervene before that happens.

Percent Unit Test Coverage


Discussion:

Percent Unit Test Coverage is a simple metric that encapsulates the likelihood of your deployments going smoothly. The closer this metric is to 100 percent, the less troublesome your product deployments will be. The interesting point here is that the delay is strictly limited by your developers’ integration frequency, or how often they check in code. Your release train can shorten this delay simply by architecting for a faster check-in cadence.

Top Exit Pages


Discussion:

This list of pages will illuminate which ones are the last pages your customers see before going elsewhere. This is very enlightening because any page other than proper logouts, or thank-you-for-your-purchase pages, is possibly problematic. Product teams should be constantly aware of top exit pages that exist anywhere within the customer journey before the value is delivered.

This metric directly impacts your product backlog, but it is less concerned with how much is in that backlog and more with what is in there. This metric should initiate conversations about how to remedy any potential problem that the top exit pages might be a symptom of.
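As a sketch of how a product team might derive this metric from raw session data (web analytics platforms report it directly, and the page names and data here are hypothetical):

# Sketch: compute top exit pages from ordered page views per session.
# Field names and data are hypothetical; analytics tools expose this metric out of the box.
from collections import Counter

# Each session is an ordered list of pages the visitor viewed.
sessions = [
    ["/home", "/product/42", "/cart", "/checkout", "/thank-you"],
    ["/home", "/product/42", "/cart"],
    ["/search", "/product/7", "/cart"],
    ["/home", "/logout"],
]

exit_pages = Counter(session[-1] for session in sessions if session)

EXPECTED_EXITS = {"/thank-you", "/logout"}  # healthy ways to leave the site
for page, count in exit_pages.most_common():
    note = "" if page in EXPECTED_EXITS else "  <- investigate"
    print(f"{page}: {count}{note}")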

Caution

Yes, agility fuel is in fact metrics. Actual, meaningful metrics about how things are running in your development shop. But here is the thing about metrics … I have never met a metric that I could not beat, and your developers are no different. So, how do we embrace metrics as a control measure without the agile teams working the metric to optimize their reward at the cost of effective delivery?

The answer is simple: values. In order for anything in this blog post to work, you need to be building a culture that takes care of its people, corrects errors without punitive measures, and where trust is pervasive in all human interactions. If the leadership cannot trust the team or the team cannot trust its leadership, then these metrics can do much more harm than good. Please proceed with this cautionary note in mind.

Conclusion

This blog post has been a quick intro to a new way of looking at metrics: as agility fuel. In order to make sense of how your high-performance machine is operating you must understand the feedback loops and stocks that those loops impact. If this work interests you, please pay attention to our deep-dive blog posts over on AllisonAgile.com. Soon, we’ll be posting much more in-depth analysis of metrics and how they impact decisions that agile leaders must make.

About Allison Agile

Lee Allison is a SAFe 5.0 Program Consultant

Lee Allison is a SAFe 5.0 Program Consultant who implements Scaled Agile across the country. He fell in love with Agile over a decade ago when he saw how positively it can impact people’s work lives. He is the CEO of Allison Agile, LLC, which is central and south Texas’ original Scaled Agile Partner.


Deep Dive: Measuring Business Agility with Scaled Agile Assessments

SAFe Business Agility

In this episode of the SAFe Business Agility podcast, Melissa Reeve, SPC, and Inbar Oren, SAFe® Fellow and principal contributor to the Scaled Agile Framework®, take a deep dive into how organizations can measure their progress toward business agility. Listeners will learn what business agility is, why measuring an organization’s business agility is so important, and how they can use Scaled Agile’s seven business agility assessments.



Hosted by: Melissa Reeve

Melissa Reeve is the Vice President of Marketing at Scaled Agile

Melissa Reeve is the Vice President of Marketing at Scaled Agile, Inc. In this role, Melissa guides the marketing team, helping people better understand Scaled Agile, the Scaled Agile Framework (SAFe) and its mission.

Guest: Inbar Oren

Inbar Oren is a SAFe Fellow

Inbar Oren is a SAFe Fellow and a principal contributor to the Scaled Agile Framework. He has more than 20 years of experience in the high-tech market, working in small and large enterprises in a range of roles, from development and architecture to executive positions. For over a decade, Inbar has been helping development organizations—in both software and integrated systems—improve results by adopting Lean-Agile best practices. Previous clients include Cisco, Woolworths, Amdocs, Intel, and NCR.

Working as a Scaled Agile instructor and consultant, Inbar currently focuses on helping leaders at the Program, Value Stream, and Portfolio levels get the most out of their organizations and build new processes and culture.

A martial arts aficionado, Inbar holds black belts in several arts. He also thinks and lives the idea of “scale,” raising five kids—including two sets of twins—with his beautiful wife, Ranit.

Unlocking Data’s Value and Quantifying Impact of Choices in SAFe – Monte Carlo Simulations

SAFe Business Agility

In this podcast episode, learn how assuming variability and preserving options help unlock the value of data, and how to quantify the impact of choices in SAFe. We’ll also answer questions about how deep to go with WSJF and use Monte Carlo simulations to predict epic completion rates.


SAFe in the News

Unlocking the true value of data: Choosing the right project delivery approach is key for big data project success

By Windsor Gumede, Director Technology, Kaiser Permanente

Full article

SAFe in the Trenches

Hear Joe share insights from his talk at this year’s Global SAFe Summit. The talk, Strengthening SAFe’s Use of CoD and WSJF, suggested ways to improve economic choices by quantifying impacts in dollars.

To watch Joe and Don’s presentations, as well as other presentations from the Global SAFe Summit, visit global.safesummit.com/presentations (Videos will be available after Nov. 15)

Audio CoP

The Audio Community of Practice section of the show is where we answer YOUR most frequently asked and submitted questions. If you have a question for us to answer on air, please send it to podcast@scaledagile.com

The two questions we answer in this episode are:

  • When standing up a team, the entire backlog of features is WSJF’d in order to determine priority. But the SAFe materials don’t seem to have a conclusive approach for subsequent scoring. Do your organizations do a full WSJF of all features on the backlog as part of PI prep? Or is a WSJF score given to each feature as part of the ongoing grooming at the program level (if so, how is effort defined? By the EAs?)? Is reviewing WSJF for every feature on every team board realistic?
  • Does anyone have insight into the logic/calculations used for Monte Carlo simulation, in relation to the teams’ historical velocity, to predict the completion of epics based on the forecasted points by team? (A minimal sketch of one such approach follows this list.)
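The episode answers these questions on air. Purely as an illustration of the second question, here is a minimal Monte Carlo sketch that resamples a team’s historical velocities to forecast how many PIs an epic’s remaining points might take. All numbers are hypothetical, and this is not the calculation used by any specific tool.

# Minimal Monte Carlo sketch: forecast epic completion from historical velocity (illustrative only).
import random

historical_velocity = [62, 58, 70, 55, 66, 60, 72, 59]  # points per iteration for one team (hypothetical)
epic_remaining_points = 400
iterations_per_pi = 5
runs = 10_000

pi_counts = []
for _ in range(runs):
    remaining, iterations = epic_remaining_points, 0
    while remaining > 0:
        remaining -= random.choice(historical_velocity)  # resample a past iteration's velocity
        iterations += 1
    pi_counts.append(-(-iterations // iterations_per_pi))  # ceiling division: iterations -> PIs

pi_counts.sort()
p50 = pi_counts[runs // 2]
p85 = pi_counts[int(runs * 0.85)]
print(f"50% chance of finishing within {p50} PIs; 85% chance within {p85} PIs")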

Hosted by: Melissa Reeve

Melissa Reeve is the Vice President of Marketing at Scaled Agile

Melissa Reeve is the Vice President of Marketing at Scaled Agile, Inc. In this role, Melissa guides the marketing team, helping people better understand Scaled Agile, the Scaled Agile Framework (SAFe) and its mission.

Hosted by: Joe Vallone

Joe Vallone is an experienced Agile Coach and Trainer

Joe Vallone is an experienced Agile Coach and Trainer and has been involved in the Lean and Agile communities since 2002. Mr. Vallone has helped coach several large-scale Agile transitions at Zynga, Apple, Microsoft, VCE, Nokia, AT&T, and American Airlines. Prior to founding Agile Business Connect, Joe Vallone served as an Agile Coach at Ciber, CTO/CIO of We The People, and the VP of Engineering for Telogical Systems.