However, this doesn’t directly answer who writes user stories. Does managing and prioritizing include writing stories? Should POs write most or all of the stories? Or should the Agile Teams?
This question arose when a new PO asked me for ideas for her team. She asked me, “Who writes the stories on your team?”
Although I gave the new PO an answer about my team and how we worked, and shared who touches stories, this basic question stayed with me.
Who should write the user stories?
Here are three potential answers to consider:
Agile teams should write the user stories
POs should write the user stories
It depends on the situation
I’ll explain my reasoning and how I drew on SAFe® guidance in considering each possible answer. I also hope you’ll check out the user story-writing resources at the end of this post.
When Agile teams should write the user stories
Lean-Agile leaders have acknowledged a game-changing truth: attempting to ‘manage’ knowledge workers with traditional task management is counterproductive. Management visionary Peter Drucker was one of the first to point this out: “That [knowledge workers] know more about their job than anybody else in the organization is part of the definition of knowledge workers.”
The importance of unlocking the intrinsic motivation of knowledge workers was never more apparent to me than when I became a PO. Many of my teammates had more experience in their areas of expertise, and with our courses, than I did.
This applies across many fields, products, and enterprises: your teammates become experts in what they are doing because most of us want to succeed at work. Thinking about how to help your team individually and collectively find motivation is key, and recognizing team members’ expertise is a crucial way to build trust and respect to lead the work effectively.
The language in Principle #8 reminds me to define my role and sphere of influence. As a PO, I don’t manage people. However, for my team to produce great learning content, I must care about their knowledge, ideas, experiences, and expertise at each juncture of planning, refining, and iterating on the work they’re doing.
On my team, Agile teammates usually write user stories. Every teammate has expertise that I don’t have. It would be foolish of me as a PO to write every story when someone else on my team understands the details about accomplishing the work more than I do.
If I tried to represent their work by defining and writing every story, it could lead to too much rewriting. Rewriting a story is fine as we refine it and understand more details; it can often be done as a team activity alongside Backlog Refinement, estimation, and even Iteration Planning, and it is streamlined when the right expertise is applied as the story is discussed.
If I wrote every story, I’d probably have to consult on and rewrite plenty, if not most, of them. Worse, it could demotivate my teammates if every story was dictated by their PO.
That said, what I do as my team refines stories is:
1. Provide a strong voice in crafting acceptance criteria
2. Remind the team of our definition of done for work
3. Share who the customer is, and what they want
4. Work with the team (collectively and individually) on what counts as a “minimum viable story” for our backlog
You’ll find an example of a minimum viable story at the end of this post.
Why POs should manage, but not always write, user stories
While any team member can write stories at any time, it is the PO’s responsibility to ensure that they are well-formed and aligned with product strategy. The PO clarifies story details, applies user-story voice, ensures ‘INVEST’ characteristics are present, assists with story splitting, defines enablers, and incorporates behavior-driven development (BDD) to ensure stories support continuous value flow. The PO also allows space for ‘local’ stories and spikes that advance product design but are not derived explicitly from ART-level features.
POs “manage” the team’s backlog and have content authority. This can mistakenly turn into an expectation that POs write the stories.
POs may write all the stories if:
1. Agile teams are not yet feeling the benefits of transformation. Therefore, they may be slow to embrace the work of writing stories themselves. It can become “another thing to do” or “taking time away from doing the actual work.”
2. They want to ensure the ART and team backlogs are aligned, and stories in the team’s backlog meet the definition of done and support ART progress.
However, if the PO writes every story, they will have little time to perform all the functions POs are otherwise busy with!
For me, it’s important to note the Framework talks about management, not authorship. In fact, the PO article talks about guiding story creation rather than authoring stories.
We don’t find SAFe specifying that POs author the team backlog. What a PO does in order to manage the backlog and guide story creation is both different and deeper than simply writing everything in it:
Strategizing across the ART to meet ART-level objectives
Working with Product Managers, Business Owners, and the RTE to deeply understand the matrix of metrics the ART is using, strategic themes, and how both are applied
Working closely with Product Managers, who own the ART Backlog, to refine features
Working closely with the team and stakeholders to decompose features into stories
Ensuring stories meet user needs and satisfy the team’s definition of done
Being the go-to person to share decisions the team is making on “how” to complete the work and how it may show up in meeting objectives
Prioritizing which work to do when so the team can accomplish its goals and contribute to the ART goals while maintaining flow
Here is the guidance I shared with another PO on the idea of the PO writing every story.
Becoming a story-writing PO will:
Create demonstrable work for the PO
Codify the (damaging) idea that writing stories is a bureaucratic task
Lead to future disagreements or dissatisfaction about what should be included in the scope of this work
Set POs up to be the target for those disagreements and dissatisfactions
Here’s what it won’t do.
It won’t remove the need for teams to plan their work together to achieve flow, or to avoid rework and misunderstandings about what value is being delivered and how the team is delivering it.
For these reasons, I refuse to become a story-writing PO. I insist my team come together to discuss work and decide who is best informed to write a story. I further drive us to consider all of our stories in refinement so there are multiple sets of teammate eyes on each one.
The immediate result of this process with some of my teammates early on was frustration: “I’m so busy that asking me to write about my work instead of doing it feels like you’re wasting my time.”
Over time, it has borne other, much more nourishing fruit for the team, including:
More paired work and team stories
Better flow and processes to manage flow
Growth of T-shaped skills
Improved understanding and thought about customer centricity across the team
An understanding of each other’s areas of expertise and how work connects on a cross-functional team
If you have teammates who resist writing stories, I recommend you surface this conflict sooner rather than later and work through it directly.
The most knowledgeable people write the user stories
I believe the best answer to who writes user stories is the answer to “Who knows the most about this work?”
Sometimes the answer may be the PO because they’ve gathered the most information.
In this case, it’s best for the PO to write the story.
I have been in org-wide or ART-level meetings and learned about work affecting every team. This has included tooling updates, requests to prepare specific demos of our work for different kinds of audiences, work around specific milestones, or professional development requests.
It is work that came out of my meetings with other teams to address dependencies.
It is walk-up work coming from changes or needs that weren’t surfaced before.
It is work uncovered by talking with an internal or external customer, which helped me understand a need we had not previously written stories to meet.
It is rare I would write a story for work I am doing. In the above cases, the work would be handled by the team. By writing the story, I am capturing the need as I understand it but not carving the story in stone.
When thinking of how stories enter the backlog, I find it useful to remember the three Cs of story writing: Card, Conversation, and Confirmation.
The C for conversation is most relevant to my mindset on this. The story is a promise our team will discuss this need and decide how to deliver value on it.
The conversation could include:
Refining our understanding of the user and their need
Revising acceptance criteria
Discussing who might start the work on it or how the team will deliver the work’s value
These are the minimum viable story criteria my team uses:
1. Story title: Provide a name that accurately describes the work
2. What: Give a description of the work
3. Activities or tasks: Break down what it will take to complete this work
4. Acceptance criteria: This answers the question, “What can we or our customers do now that we or they could not before?” or “How will we know this is done?”
5. Parent: Connect it to a feature where possible
6. Tags: We note if this is a team story (more than one person tagged) or an individual contributor’s work (a single person tagged)
7. PI and Iteration: When we expect to start and complete this work
I encourage you to use this template as a jumping-off point.
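Purely as an illustration, the checklist nature of this template can be sketched in code. The field names below simply mirror the template above; they are not the schema of any particular ALM tool, and the draft story is hypothetical:

```python
# Illustrative sketch: field names mirror the minimum viable story
# template above, not any particular ALM tool's schema.

# Fields every story needs before it enters the backlog.
REQUIRED_FIELDS = ["title", "what", "tasks", "acceptance_criteria"]

def is_minimum_viable(story: dict) -> bool:
    """A story is minimally viable when every required field is filled in."""
    return all(story.get(field) for field in REQUIRED_FIELDS)

# A hypothetical draft story for a learning-content team.
draft = {
    "title": "Update course glossary for renamed framework terms",
    "what": "Revise glossary entries so they match the new terminology",
    "tasks": ["Audit current entries", "Draft revisions", "Peer review"],
    "acceptance_criteria": "Learners can look up every renamed term",
    "parent": "Feature: Terminology refresh",  # optional: connect to a feature
    "tags": ["team-story"],                    # team story vs. individual work
    "pi_iteration": "PI 3, Iteration 2",       # expected start/finish window
}

print(is_minimum_viable(draft))  # True
```

The point of the sketch is that the template is a short, concrete checklist a teammate can run through while drafting, not heavyweight documentation.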
Look through your backlog and think about who writes your user stories today, and who could write them with the most thoughtful details.
Now that I’ve shared my answer to who writes user stories, here are some resources to help write them.
If you want to receive helpful content like this for your role, don’t forget to set your role in SAFe Studio. This allows us to bring the best content to your SAFe Studio homepage. Set your role in SAFe Studio today.
About Christie Veitch
As a writer and education nerd who loves processes, Christie seeks to move the needle on what learners can do and what educators and trainers will try with learners. She designs and delivers compelling content and training and builds communities of avid fans using these resources as a Scaled Agile, Inc. Product Owner. Connect with Christie on LinkedIn.
We started this series on SAFe® team metrics with the simple acknowledgment that contextualizing SAFe metrics for a non-technical Agile team can be difficult. To continue learning how other teams adapt SAFe metrics to their specific context, I sat down with Amber Lillestolen, the Scrum Master of the Corporate Communications (Comms) team at Scaled Agile, Inc. Here’s what she shared about applying SAFe metrics for a ‘traditional’ communications team.
Applying SAFe® Metrics in Marketing for a Communications Team
Scaled Agile has a few small cross-functional marketing teams to support different business areas. Corporate Comms is one of these small teams. Their purpose is to propel market growth through a cohesive, compelling brand experience that inspires delight, confidence, and loyalty in current and prospective customers.
The team works across the company—from public relations to customer stories and brand work for new product launches. Rather than debugging and deploying, communications professionals help simplify and communicate complex messaging, refine product value propositions, develop thought leadership content, and much more. This work requires significant cross-functional skill, research capabilities, collaboration, qualitative reasoning, and the ability to build alignment while planning for future releases.
Their common work items include:
Company-wide brand reviews
Auditing and updating brand guidelines
Developing product messaging and value propositions
Understanding customer needs and building messaging frameworks with other marketing teams
Thought leadership content development with executives
Public relations strategy and management
Material preparation for events and conferences
Recruiting and curating event customer stories
Naming and messaging standardization across the organization
Amber is Corporate Comms’ first Scrum Master, about four months into serving the team. Corporate Comms is a unique team because they are a shared service across the organization. This means they receive a significant amount of walk-up work from other teams. Since this type of work consistently (though not always predictably) consumes a portion of the team’s capacity, it’s important to track it using metrics.
Below, Amber shares her process for tracking the team’s performance, including which metrics she uses to coach and guide the team. We separated these team metrics into the three measurement domains outlined in the metrics article: outcomes, flow, and competency.
Question: What metrics do you use to measure outcomes?
Outcome metric #1: PI Objectives
The Corporate Comms team reviews PI objectives throughout the PI to ensure that their related features are progressing. This helps the team determine if they are ahead, on track, or behind on the outcomes they promised to deliver in the PI.
Here’s a basic example of a Corporate Comms PI objective:
In support of SAFe’s evolving brand, provide enablement resources for applying new communication messaging and naming to team-level assets.
Outcome metric #2: Iteration Goals
The team creates goals every iteration and tracks their progress. They create goals related to the high-priority stories planned for the iteration.
Question: What metrics do you use to measure flow?
For Amber, flow is about delivering value, so she also included PI objectives under the flow metrics category. The Corporate Comms team uses PI objectives to measure its flow as part of the Operations Agile Release Train (ART). She reviews the team’s objectives during iterations to help the team understand what value they’re bringing to the organization.
If you don’t remember seeing PI objectives in the flow section of the SAFe Metrics article, you’re not mistaken; they aren’t listed there. Flow metrics, like predictability, show how well teams achieve outcomes such as PI objectives.
But for the Corporate Comms team, tracking the degree to which an objective is completed functions as a handy ‘pseudo-flow’ metric. Ultimately, their PI objectives will roll up into broader Program Predictability Measurements, but this is a good way to track flow predictability on a smaller scale at the team level.
This is a good example of adapting metrics to meet team needs while keeping the measurement process simple and usable. If objectives are continually missed over several PIs, value isn’t flowing. And value flow is the purpose of flow metrics.
Amber combines PI Objective progress with a review of the team’s velocity to understand the flow of value and completed work each iteration.
Flow Distribution and Flow Velocity
Flow distribution measures the proportion of each work type in a system over time, which is important for the Corporate Comms team to track.
As mentioned above, Corporate Comms is a shared services team. This means the entire organization is an internal customer of their work. As a result, the team has frequent walk-up work from other groups. Some examples of the team’s walk-up work include:
Booth designs for domestic and international events
Market research and analysis when a change occurs
Reviewing other teams’ slide decks and presentation materials
Because this walk-up work is a regular occurrence for the team, they reserve some of their capacity for it each iteration. It’s important for the team to see how their work is distributed across planned and unplanned work so they know how much capacity on average to reserve for walk-up work each iteration. They track their capacity using dedicated views in an ALM tool.
The team looks at their capacity and velocity metrics during iteration planning to see if they are over capacity.
Flow velocity measures the number of backlog items completed over a period of time; in Corporate Comms’ case, this period of time is an iteration.
They review these metrics at iteration review to see if they finished all planned work. Amber also uses the team planning board in an ALM tool to show if the team is over capacity and discuss what items need to move at iteration planning to adjust their capacity.
Using capacity metrics to move stories
If the team discovers they’re over capacity, it’s usually for one of two reasons:
1) Long review cycles
2) Undefined work
A lot of the team’s work is tied to cross-functional changes across the business, and those carry unknowns. As strategy evolves, sometimes the work changes to match what’s new.
Marketing teams are prone to getting buried in last-minute requests and repeated context switching. SAFe provides a shared language and approach to work that teams can use to define:
What work can be done
How long that work will take
What other work needs to move or change to accommodate priority requests
This is a helpful level-set on expectations for how other teams can protect the time and resources needed to deliver planned value.
Reserving capacity for walk-up work
Amber also tracks incoming work, and the team adds stories for walk-up work. They use this data to measure unplanned work requested during the PI compared to the work planned during PI Planning.
During the last PI, which was the team’s first PI Planning with a Scrum Master, Amber encouraged the team to plan to only 80 percent capacity to allow for any walk-up work. She arrived at this number based on the 20 percent of walk-up work from previous capacity and velocity metrics.
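The arithmetic behind that figure can be sketched as follows. The iteration numbers are invented for illustration; a real team would pull them from its ALM tool’s capacity and velocity reports:

```python
# Hypothetical history: completed points per iteration, and how many of
# those points were unplanned walk-up work (numbers invented).
history = [
    {"total_points": 40, "walkup_points": 8},
    {"total_points": 35, "walkup_points": 7},
    {"total_points": 45, "walkup_points": 9},
]

# Average share of capacity consumed by walk-up work across the history.
walkup_share = (sum(it["walkup_points"] for it in history)
                / sum(it["total_points"] for it in history))

# Plan only the remaining share of the next iteration's expected capacity.
expected_capacity = 40
plannable = expected_capacity * (1 - walkup_share)

print(f"Walk-up share: {walkup_share:.0%}")   # 20%
print(f"Plan up to: {plannable:.0f} points")  # 32 points
```

With a 20 percent historical walk-up share, the team plans to roughly 80 percent of expected capacity, matching the guidance Amber gave her team.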
The team also uses ‘bucket’ user stories every iteration for team members who run office hours and complete other routine work. Office hours are blocks of time reserved for people from other teams in the organization to bring items to Corporate Comms for review. These brand office hours occur three times a week for an hour.
Any work brought to office hours that will take over two hours becomes a story with points and is added to the iteration based on urgency and capacity. Tracking their work this way creates a reliable record of the capacity needed to address business-as-usual items, planned new work each iteration, and walk-up requests.
Velocity Chart and Cycle/Lead Time
As the Corporate Comms team grows, Amber wants to track the team’s flow better and share more data with the team during iteration reviews. Specifically, she plans to use the velocity chart to see how the team’s velocity changes throughout each iteration.
She also plans to use the cycle/lead time report to see how long it takes the team’s work to flow through the iterations.
Because dependencies impact the speed at which the team can do their work, Amber would like to start tracking how dependencies impact the team’s flow. Currently, the team uses daily standups and retrospectives to discuss work going through the kanban and when things don’t get finished in an iteration.
The Corporate Comms team recently completed a Team and Technical Agility assessment.
Amber is reviewing the organization-wide results with the LACE team and discussing how to analyze the results to determine next steps. But she has shared initial results with her team. Corporate Comms identified growth areas and strengths based on their results and retrospectives:
They need to continue to work on their capacity/velocity to help their flow.
They are great at collaboration and peer review as a team.
Since their formation less than a year ago, the Corporate Comms team has learned a lot about how to work as a shared service for the whole company. They’ve created new outlets of communication and coordination to increase the value they can deliver.
About Amber Lillestolen
Amber is a Scrum Master at Scaled Agile. For years, she has used empathy and understanding to coach teams to reach their full potential. She enjoys working in a collaborative environment and is passionate about learning. Connect with Amber on LinkedIn.
About Madi Fisher
Madi is an Agile Coach at Scaled Agile. She has many years of experience coaching Agile teams in all phases of their journeys. She is a collaborative facilitator and trainer and leads with joy and humor to drive actionable outcomes. She is a true galvanizer! Connect with Madi on LinkedIn.
Even when metrics are captured, improvement areas might remain blurry without a shared understanding of what the metrics mean and show.
How do teams with different skill sets, business objectives, and types of work all use the same measurement domains to gauge success? How do they know if they are on track to achieve PI Objectives and Iteration Goals?
We asked Scrum Masters from five different teams to answer five questions about the metrics they use to measure flow, outcomes, and competency. Their answers were illuminating, and we’re excited to share the results in a new article series titled “SAFe Metrics for Teams.” Each article will highlight one or more teams and take a close look at how they use SAFe metrics in their own domains.
METRICS SURVEY QUESTIONS
What metrics does your team use to track outcomes?
How do these metrics help your team define and plan work?
What flow metrics does your team use?
How has your team used metrics to drive continuous improvement?
How does your team self-assess competency?
Let’s get started!
Applying SAFe® Metrics for a Sales Operations Team
Sales ops teams are charged with enabling the sales team to hit their growth targets. This means sales ops should do work that creates a better, faster, and more predictable sales process, including:
Lead management and routing
Sales team onboarding
Continued learning and development
Contract management solutions
Sales process optimization
By nature, sales ops work is often routine and process-heavy. It’s a lot of “run the business” or “business as usual” (BAU) work. Despite its sometimes repetitive nature, the pace and volume of this work will affect sales team objectives. Agile marketing teams are often in a similar position.
Plus, enterprise-level sales are getting more complicated. There’s more technology, data, tools, and systems than ever. For sales leaders, this complexity means spending more time managing systems and less time working with teams and growing territories. As sales teams face increasing pressure to become more customer-centric and responsive, the demand for Agility also grows.
How can sales ops teams embrace the same Agile spirit and practice that drives other business areas like development and IT? More specifically, how can a sales ops department employ measurements like flow, outcomes, and competency in the same way as other SAFe teams?
For Kate Quigly, a senior Scrum Master with the sales ops team, it starts with the right mindset and clear goals focused on helping sales teams embrace Agility in their planning and operations. We asked Kate to lift the lid on her team’s process for using metrics, pursuing improvement, and applying SAFe measurement domains. Below, Kate explains how the sales ops team makes SAFe guidance work for them.
Question: What metrics does your team use to track outcomes? How do these metrics help your team define and plan work?
Answer: Outcomes are the perfect opportunity to assess which work is worth doing and what value is delivered. Here are the metrics we use to measure outcomes:
The nature of our work makes it fairly simple to craft SMART PI objectives. Here’s an example of a good PI objective for a typical sales ops team:
Example: “To grow strategic accounts by five percent in Q1 2023, create three account plans per region, and complete five live training sessions with sales by the end of Q4 2022.”
This objective would align with our team’s mission to bolster the sales team’s performance and operations in several ways:
It empowers sales to develop their account plans in the future. With this capability, sales can immediately take action to capture new opportunities in expanding regions.
It’s written to help sales achieve growth targets, which could be a strategic theme and/or PI Objective for the entire ART.
It enables future work that will contribute clear business value.
Team Business Value
Business value assignments are where we learn whether the right work was planned and completed correctly. We can use business value scores to understand the following:
Did we plan the right mix of work? For us, this could mean uncovering the wrong balance of BAU work vs. “user-facing” projects.
Were we able to complete our committed objectives? If not, why?
Were our objectives clear and measurable?
Are we delivering value?
Iteration Goals
Iteration goals keep us focused on the most important work. Because iteration goals are not always reflected in a single story, we can check each iteration to see if visible work is actually contributing to our iteration goals (and PI Objectives). We want to discover if completed stories actually support the iteration goals. If there’s misalignment here, trouble could be around the corner even when we’re planning proper capacity.
Team Purpose Statement
While objective metrics are critical, qualitative assessments also matter. We often refer to our team purpose statement to check whether planned work needs to be rephrased, reexamined, or re-scoped to align with our core purpose.
Question: What flow metrics does your team use?
Answer: Since we’re a relatively new team, we’re still honing the best set of metrics. I recommend using a variety of metrics as each one can highlight valuable insights. Right now, we use the following metrics most often:
Team Velocity
Team velocity should be a stable, predictable measurement that helps the team forecast capacity for future work. Velocity metrics should never be compared across teams or used as a productivity measurement. Too much emphasis on achieving the right number can cause teams to “game” the system.
I use some standard questions to spark conversations, including:
Is our team velocity significantly dropping? Let’s discuss why.
Is our team velocity significantly increasing? What’s the cause of this increase?
In particular, we look at the number of rollover stories from one iteration to the next. The root cause of these rollover stories can reveal issues with prioritization, role competency, and dependencies. This analysis helped our team discover a missing toolset needed for data management work.
Cycle and Lead Time
We use cycle time scatterplot charts to show lead time and cycle time. These charts capture when an item starts and finishes. Low cycle time means the team is performing well and value flows fast to customers. Items with high cycle time are easy to identify and retrospect on.
Measuring flow time helped us find a recurring delay within the legal department. The delay caused an issue with generating new contracts, which are critical for our internal customers (sales) to finalize their work within a fixed timeline.
This data helps us talk about problems in a new way by asking the following:
Why did some items take so long to complete? What could we have done differently? Are there action items we can do to improve?
Our average time to finish an item is X amount of days. What improvements would lower this for even more efficiency?
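As a sketch of what feeds a cycle time scatterplot, cycle time is simply an item’s finish date minus its start date. The items and dates below are invented for illustration:

```python
from datetime import date

# Invented sample data: (item, start date, finish date).
items = [
    ("Contract template update", date(2023, 3, 1), date(2023, 3, 3)),
    ("CRM de-duplication rules", date(2023, 3, 1), date(2023, 3, 15)),
    ("Onboarding deck refresh", date(2023, 3, 6), date(2023, 3, 8)),
]

# Cycle time in days for each item: finish minus start.
cycle_times = {name: (done - start).days for name, start, done in items}

avg = sum(cycle_times.values()) / len(cycle_times)
print(f"Average cycle time: {avg:.1f} days")  # 6.0 days

# Items taking far longer than average (here, more than twice the average)
# are easy to spot and make good retrospective topics.
outliers = [name for name, days in cycle_times.items() if days > 2 * avg]
print("Worth retrospecting on:", outliers)
```

The threshold of twice the average is an arbitrary choice for the sketch; a real team would eyeball the scatterplot and retrospect on whatever stands out.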
Flow Distribution
Flow distribution is crucial for understanding whether we’re spending time on the right mix of work types. The team believes (rightly) that BAU work should be shown and recognized. Common BAU work includes:
Analyze closed lost/won deals
Training, onboarding, and enablement
However, I’ve had to coach the team on how to incorporate this type of work into Scrum, and how to balance BAU with “new development” work, which we affectionately call “special projects.” For our team, special projects would include work like:
New system implementation for key account management
For us, all of these metrics roll up into two primary dashboards. We frequently use the following charts to check our progress:
Iteration and Release cumulative flow diagrams help visualize the work. These diagrams show cycle time, work in progress, and throughput.
These diagrams surfaced an interesting “stair step” pattern, which could indicate rushing at the end of the iteration. Once identified, we were able to find some key bottlenecks that are unique to the sales environment.
Burndown charts. These charts can predict the team’s likelihood of completing the committed work in a given time period. By visualizing the work in this way, you can see if delivery is on track, will complete early, or will be delayed.
Question: How has your team used metrics to drive continuous improvement?
Answer: These metrics help us identify mistakes, whether it’s a missed dependency, unclear acceptance criteria, or errant planning. For example, I’ve seen our team continually battle context switching, indicated by a large amount of work in progress (WIP).
The context switching was causing us to miss more meaningful opportunities. For sales ops, this could mean putting new dashboards on hold and instead prioritizing new account research. During team retros, we can use questions like the following to identify improvement areas:
Are there too many items “in progress” which can result in costly context switching?
Are there too many items “in progress” which can signal a bottleneck problem in our workflow?
Is there a large amount of work that is “not started” that may cause us to delay or miss upcoming milestones?
These conversations lead us to improvement actions such as:
Stop starting and start finishing by limiting WIP
Swarming or pairing to complete work faster
Whether teams identify with and embrace metrics depends on how you set the stage and approach these conversations. Again, for us, priorities are clarified by focusing on the most valuable work instead of giving all planned work the same priority level. We must continually ask: “What’s the most value we can deliver this week, this iteration, and this PI?”
Question: How does your team self-assess competency?
Answer: As a new team, we’re focused on the Team and Technical Agility assessment, which was just distributed to the team. I am very excited to use this as another tool to start team conversations about improvement areas.
Kate shared a few other key ways that sales ops teams can “think” Agile and adapt SAFe practices to their business domain:
Will the planned work allow sales team members to be more decentralized, productive, and empowered in their job?
Is the planned work only planned because “that’s the work we do”?
What work would support cross-functional capabilities for sales teams/members?
Example: A standardized sales deck with editable fields to eliminate dependence on graphic design support.
What processes can be automated to create economies of scale?
Example: An automated, repeatable de-duplication process for faster and more accurate CRM data management.
Overall, changing their way of thinking about work and measuring value has helped the sales ops team embrace Agile principles and improve alignment across the ART. As a result, they have better tools for visualizing, categorizing, and prioritizing BAU work with critical projects while also seeing the real value they deliver.
About Kate Quigly
Kate is a Senior Scrum Master at Scaled Agile, Inc. She has many years of experience coaching Agile teams with high energy and creativity. She is passionate about lifelong learning, experimenting with teams, and creating a collaborative culture. Connect with Kate on LinkedIn.
About Madi Fisher
Madi is an Agile Coach at Scaled Agile. She has many years of experience coaching Agile teams in all phases of their journeys. She is a collaborative facilitator and trainer and leads with joy and humor to drive actionable outcomes. She is a true galvanizer! Connect with Madi on LinkedIn.
The title of my post may read like acronym soup, but all of these concepts play a critical role in SAFe, and understanding how they’re connected is important for a successful SAFe implementation. After exploring some connections, I will suggest some actions you can take while designing, evaluating, or accelerating your implementation.
KPIs and OKRs
The SAFe Value Stream KPIs article describes Key Performance Indicators (KPIs) as “the quantifiable measures used to evaluate how a value stream is performing against its forecasted business outcomes.” KPIs can reflect two distinct concerns:
The health of day-to-day performance
Work to create sustainable change in performance
Objectives and Key Results (OKRs) are meant to be about driving and evaluating change rather than maintaining the status quo. Therefore, they are a special kind of KPI. Objectives point towards the desired state. Key results measure progress towards that desired state.
But how do these different concepts map to SAFe’s Operational Value Streams (OVSs) and Development Value Streams (DVSs)? And why should you care?
Changing and Improving the Operation
Like Strategic Themes, most OKRs point to the desired change in business performance. These OKRs would be the ones that company leadership cares about. And they would be advanced through the efforts of a DVS (or multiple ones).
For example, if the business wants to move to a subscription/SaaS model, that’s a change in the operating model—a change in how the OVS looks and operates. That change is supported by the development of new systems and capabilities, which is work that will be accomplished by a DVS (or multiple ones).
This view enables us to recognize the wider application of the DVS concept that we talk about in SAFe 5. Business agility means using Agile and SAFe constructs to develop any sort of change the business needs, regardless of whether that change includes IT or technology.
Whenever we are trying to change our operation, there’s a question about how much variability we’re expecting around this change. Is more known than unknown, or vice versa? Are we making this change in an environment of volatility, uncertainty, complexity, and ambiguity? If yes, then using a DVS construct that employs empiricism to seek the right answers about how to achieve the OKR is essential, regardless of how much IT or technology is involved. We might have an OKR that requires business change involving mainly legal, marketing, procurement, HR, and so on, that would still benefit from an Agile and SAFe DVS approach.
These OKRs would then be elaborated and advanced through the backlogs and backlog items of the various ARTs and teams involved in achieving them.
In some cases, an OKR would drive the creation of a focused DVS. This is the culmination of the Organize around Value Lean-Agile SAFe Principle. This is why Strategic Themes and OKRs should be an important consideration when trying to identify value streams and ARTs (in the Value Stream and ART identification workshop). And a significant new theme/OKR should trigger some rethinking of whether the current DVS network is optimally organized to support the new value creation goals set by the organization.
Maintaining the Health of the Operation
As mentioned earlier, maintaining the health of the operation is also tracked through KPIs. Here we expect stability and predictability in performance. It’s crucial work but it’s not what OKRs or Strategic Themes are about.
This work can be simple, complex, or even chaotic depending on the domain. The desire of any organization is to bring its operation under as much control as possible and minimize variability as it makes sense in the business domain. What this means is that in many cases, we don’t need Agile and empiricism in order to actually run the operation. Lean and flow techniques can still be useful to create sustainable, healthy flow (see more in the Organizational Agility competency).
Whenever people working in the OVS switch to improving the OVS (in other words, working on versus in the operation), they are, in essence, implicitly moving over to a DVS.
Some organizations make this duality explicit by creating a DVS that combines people who split their time between working in the OVS and working on it with people who are focused entirely on working on the OVS. For example, an orthopedic clinic network in New England created a DVS comprising clinicians, doctors, PAs, and billing managers (who work the majority of their time in the OVS) together with IT professionals. Major improvements to the OVS happen in this DVS.
Improving the Development Value Stream
The DVS needs to relentlessly improve and learn as well. Examples of OKRs in this space could be: improving time-to-market, measured by improved flow time; improving the predictability of business value delivered, measured by improved flow predictability; or organizing around value, measured by a reduction in the number of dependencies and in the number of Solution Trains required.
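Flow-based key results like these are straightforward to compute once the underlying data exists. Here's a minimal sketch in Python; the work items, dates, and PI value numbers are entirely hypothetical:

```python
from datetime import date

# Hypothetical flow records: (work item, start date, completion date)
items = [
    ("feature-a", date(2023, 1, 2), date(2023, 1, 20)),
    ("feature-b", date(2023, 1, 5), date(2023, 2, 1)),
    ("feature-c", date(2023, 1, 9), date(2023, 1, 25)),
]

# Flow time: elapsed days from start to completion, averaged across items
flow_times = [(done - start).days for _, start, done in items]
avg_flow_time = sum(flow_times) / len(flow_times)

# Flow predictability: actual business value delivered vs. planned, per PI
planned_value, delivered_value = 50, 42   # hypothetical PI objective points
predictability = delivered_value / planned_value

print(f"average flow time: {avg_flow_time:.1f} days")
print(f"flow predictability: {predictability:.0%}")
```

Tracking these two numbers PI over PI would show whether a time-to-market or predictability OKR is actually moving.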
This is also where the SAFe transformation or Agile journey lives. There are ways to improve DVSs or the overall network of DVSs, creating a much-improved business capability to enhance its operation and advance business OKRs.
Implementing OKRs in this space relates more to enablers in the SAFe backlogs than to features or capabilities. Again, these OKRs change the way the DVS works.
Running the Development Value Stream
Similar metrics can be used as KPIs that help maintain the health of the DVS on an ongoing basis. For example, if technical debt is currently under control, a KPI monitoring it might suffice and hopefully will help avoid a major technical debt crisis. If we weren’t diligent enough to avoid the crisis, an objective could be put in place to significantly reduce the amount of technical debt. Achieving a certain threshold for a tech debt KPI could serve as a key result (KR) for this objective. Once it’s achieved, we might leave the tech debt KPI in place to maintain health.
It’s like continuing to monitor your weight after you’ve gone on a serious diet. During the diet, you have an objective of achieving a healthy weight with a KR tracking BMI and aiming to get below 25. After achieving your objective, you continue to track your BMI as a KPI.
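To make the dual role of such a metric concrete, here's a minimal sketch in which the same hypothetical tech-debt measure serves first as a KR target while the objective is active, then as an ongoing health KPI afterwards (the thresholds are illustrative only):

```python
# Hypothetical tech-debt metric, e.g. percentage of code flagged by static analysis
KR_TARGET = 10.0          # key result: reduce tech debt below 10%
HEALTH_CEILING = 12.0     # ongoing KPI threshold once the objective is met

def evaluate(tech_debt_pct: float, objective_active: bool) -> str:
    """Treat the metric as a KR while the objective is active,
    then as a plain health KPI afterwards."""
    if objective_active:
        return "KR met" if tech_debt_pct < KR_TARGET else "KR in progress"
    return "healthy" if tech_debt_pct <= HEALTH_CEILING else "investigate"

print(evaluate(14.2, objective_active=True))   # KR in progress
print(evaluate(9.5, objective_active=True))    # KR met
print(evaluate(11.0, objective_active=False))  # healthy
```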
Taking Action to Advance Your Implementation Using OKRs
In this blog post, we explored the relationship between operational and development value streams and Strategic Themes and OKRs. We’ve seen OVS KPIs and OKRs as well as DVS OKRs and KPIs.
A key step in accelerating business agility is to continually assess whether you’re optimally organized around value. OKRs can provide a very useful lens to use for this assessment.
Start by reviewing your OKRs and KPIs and categorize them according to OVS/DVS/Change/Run.
You can use the matrix below.
If you find some OKRs on the left side of the matrix, it’s time to rethink.
Run-focused OKRs should actually be described as KPIs. Discuss the difference and whether you’re actually looking for meaningful change to these KPIs (in which case it really can be an OKR—but make sure it is well described as one) or are happy to just maintain a healthy status quo.
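If your OKRs and KPIs live in a spreadsheet or tool export, the categorization step and the Run-focused check can be sketched in a few lines of Python. All names, labels, and categories below are purely illustrative:

```python
# Hypothetical labelled measures: (name, value stream, change/run, current kind)
measures = [
    ("Move 40% of revenue to subscriptions", "OVS", "change", "OKR"),
    ("Monthly churn rate",                   "OVS", "run",    "KPI"),
    ("Cut flow time by 30%",                 "DVS", "change", "OKR"),
    ("Deployment frequency",                 "DVS", "run",    "OKR"),  # suspicious!
]

# Bucket into the 2x2 (OVS/DVS x change/run) matrix
matrix = {}
for name, stream, mode, kind in measures:
    matrix.setdefault((stream, mode), []).append((name, kind))

# Run-focused items labelled as OKRs are candidates for reclassification as KPIs
suspects = [name for name, stream, mode, kind in measures
            if mode == "run" and kind == "OKR"]
print(suspects)  # ['Deployment frequency']
```

The `suspects` list is exactly the set of measures worth discussing: is real change wanted, or just a healthy status quo?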
You can then consider your DVS network/ART/team topology. Is it sufficiently aligned with your OKRs/KPIs? Are there interesting opportunities to reorganize around value?
This process can also be used in a Value Stream Identification workshop for the initial design of the implementation or whenever you want to inspect and adapt it.
Find me on LinkedIn to learn more about making these connections in your SAFe context via an OKR workshop.
About Yuval Yeret
Yuval is a SAFe Fellow and the head of AgileSparks (a Scaled Agile Partner) in the United States, where he leads enterprise-level Agile implementations. He’s also the steward of The AgileSparks Way and of the firm’s SAFe, Flow/Kanban, and Agile Marketing practices. Find Yuval on LinkedIn.
Assessing your team’s agility is an important step on the path to continuous improvement. After all, you can’t get where you want to go if you don’t know where you are. But you probably have questions: How do you measure a team’s agility, anyway? Who should do it, and when? What happens with the data you collect, and what should you do afterwards?
To bring you the answers, we interviewed two of our experienced scrum masters, Lieschen Gargano and Sam Ervin. Keep reading to learn their recommendations for running a Team and Technical Agility Assessment successfully.
Q: How does SAFe help teams measure their agility, and why should I care?
The SAFe® Business Agility Assessment measures an organization’s overall agility across seven core competencies: team and technical agility, agile product delivery, enterprise solution delivery, Lean portfolio management, Lean-agile leadership, organizational agility, and continuous learning culture.
The SAFe Core Competency Assessments measure each of these core competencies on a deeper level. For example, the Team and Technical Agility (TTA) Core Competency Assessment helps teams identify areas for improvement, highlight strengths worth celebrating, and baseline performance against future growth. It asks questions about how your team operates. Do team members have cross-functional skills? Do you have a dedicated PO? How are teams of teams organized in your agile release trains (ARTs)? Do you use technical practices like test-driven development and peer review? How does your team tackle technical debt?
For facilitators, including scrum masters, the Team and Technical Agility Assessment is a great way to create space for team reflection beyond a typical retrospective. It can also increase engagement and buy-in for the team to take on actionable improvement items.
Q: Who should run a Team and Technical Agility Assessment?
Running assessments can be tricky. Teams might feel defensive about being “measured.” Self-reported data isn’t always objective or accurate. Emotions and framing can impact the results. That’s why SAFe recommends that a scrum master or other trained facilitator run the assessment. A scrum master, SPC, or agile coach can help ensure that teams understand their performance and know where to focus their improvement efforts.
Q: When should I do this assessment?
It’s never too early or too late to know where you stand. Running the assessment for your team when you’re first getting started with an agile transformation will help you target the areas where you most need to improve, but you can assess team performance at any time.
As for how frequently you should run it … it’s probably more valuable to do it on a cadence—either once a PI or once a year, depending on the team’s goals and appetite for it. There’s a lot of energy in seeing how you grow and progress as a team, and it’s easier to celebrate wins that are demonstrated through documented change over time than through general sentiment.
Before you start the Team and Technical Agility Assessment, define your team’s shared purpose. This will help you generate buy-in and excitement. If the team feels like they’re just doing the assessment because the scrum master said so, it won’t be successful. They have to see value in it for them, both as individuals and as a team.
Some questions we like to ask to set this purpose include, “What do we want it to feel like to be part of this team, two PIs from now?” And, “How will our work lives be improved when we check in one year from now?”
There are two ways you can approach running this assessment. Option #1 is to have team members take the assessment individually, and then get together to discuss their results as a group. Option #2 is to discuss the assessment questions together and come to a consensus on the group’s answers.
When we’ve run this assessment, we’ve had team members do it individually so we could focus our time together on review and actions. If you do decide to run it asynchronously, it’s important as a facilitator to be available to team members in case they have questions before you review your answers as a team.
Q: What else should I keep in mind?
We like to kick off the assessment with a meeting invitation that includes a draft agenda. Sending this ahead of time gives everyone a chance to prepare. You can keep the agenda loose so you have flexibility to spend more or less time discussing particular areas, depending on how your team chooses to engage with each question.
Q: Is the assessment anonymous?
Keeping the answers anonymous is really helpful if you want to get more accurate results. We like to be very clear upfront that the assessment will be anonymous, so that team members can feel confident about being honest in their answers.
For example, with our teams, we not only explained the confidentiality of individuals’ answers but demonstrated in real time how the tool itself works so that the process would feel open and transparent. We also made it clear that we would not be using the data to compare teams to each other, or for any purpose other than gaining a shared understanding of where we are and selecting improvement items based on the team’s stated goals.
Q: Then what? What do I do with the results?
Once you’ve completed the assessment using one of the two approaches, you’ll want to review the sections one by one, showing the aggregate results and allowing the team to notice their top strengths and top areas for improvement. Your job as facilitator is NOT to tell them what you think based on the results; it’s to help guide the team’s own discussion as they explore the answers. This yields much more effective outcomes!
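One practical way to support that review is to aggregate the anonymous per-statement scores and flag statements where answers diverge widely, since those are often the richest discussion topics. A minimal sketch, with hypothetical statements and scores:

```python
from statistics import mean, pstdev

# Hypothetical anonymous scores (1-5) per assessment statement
scores = {
    "Teams execute standard iteration events": [5, 1, 2, 5, 1],
    "Team has a dedicated Product Owner":      [4, 4, 5, 4, 4],
    "Technical debt is actively managed":      [2, 2, 3, 2, 2],
}

for statement, values in scores.items():
    spread = pstdev(values)  # population standard deviation as a disagreement signal
    flag = "  <-- discuss: high disagreement" if spread >= 1.5 else ""
    print(f"{statement}: avg {mean(values):.1f}, spread {spread:.2f}{flag}")
```

A low average with low spread suggests a shared improvement area; a wide spread, as in the first statement, suggests the team doesn't yet share an understanding of what the statement means.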
One thing one of us learned in doing the assessment was how much we disagreed on some things. For example, even with a statement as simple as, “Teams execute standard iteration events,” some team members scored us a five (out of five) while others scored us a one. We treated every score as valid and sought to understand why some team members scored high and others low, just like we do when estimating the size of a user story. During this conversation, we learned an important fact. The product owner thought the iteration was executed in a standard way because she was the one executing it. But team members gave that statement a low score because they weren’t included in much of the decision-making. There was no consensus understanding for what “standard iteration events” meant to the team.
This prompted a conversation about why the team isn’t always included in how the iteration was executed. We talked about the challenge of aligning schedules to share responsibility for decision-making in meetings. And we talked about the impact of team members not having the opportunity to contribute.
As a result, the assessment did more than help us see where we needed to improve; it showed us where we had completely different perspectives about how we were doing. It prompted rich conversations that led to meaningful progress.
Q: Okay, I ran the assessment; now what? What are the next steps?
With your assessment results in hand, it’s now time to take actions that help you improve. For each dimension of the Team and Technical Agility Assessment, SAFe provides growth recommendations to help teams focus on the areas that matter most and prioritize their next steps. You should:
Review the team growth recommendations together to generate ideas
Select your preferred actions (you can use dot voting or WSJF calculations for this; SAFe® Collaborate has ready-made templates you can use)
Capture your team’s next steps in writing: “Our team decided to do X, Y, and Z.”
Follow through on your actions, so that you’re connecting them to the desired outcome
Check in on your progress at the beginning of iteration retrospectives
Finally, you’ll want to use these actions to set a focus for the team throughout the PI, and check in with business owners at PI planning on how these improvements have helped the organization make progress toward its goals.
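If you opt for WSJF to select actions, SAFe's formula divides the cost of delay (user-business value + time criticality + risk reduction/opportunity enablement) by the job size. A minimal sketch with hypothetical improvement actions and relative scores:

```python
# Hypothetical improvement actions with Fibonacci-style relative estimates:
# (name, user-business value, time criticality, risk reduction, job size)
actions = [
    ("Adopt peer review",          8, 3, 5, 3),
    ("Automate regression tests",  5, 8, 8, 8),
    ("Clarify definition of done", 3, 5, 2, 1),
]

def wsjf(value, criticality, risk, size):
    """WSJF = cost of delay / job size."""
    return (value + criticality + risk) / size

# Highest WSJF first: do the smallest, most valuable job next
ranked = sorted(actions, key=lambda a: wsjf(*a[1:]), reverse=True)
for name, *estimates in ranked:
    print(f"{name}: WSJF {wsjf(*estimates):.1f}")
```

Because the scores are relative, the absolute numbers don't matter; only the ranking does, which is why dot voting and WSJF both work for this step.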
About Lieschen Gargano
Lieschen is a product owner and former scrum master at Scaled Agile. She’s also an agile coach and conflict guru—thanks in part to her master’s degree in conflict resolution. Lieschen loves cultivating new ideas and approaches to agile to keep things fresh and exciting. And she’s passionate about developing best practices for happy teams to deliver value in both development and non-technical environments. Fun fact? “I’m the only person I know of who’s been a scrum master and a scrum-half on a rugby team.”
About Sam Ervin
Sam is a certified SAFe® 5.0 Program Consultant (SPC) and serves as the scrum master for several teams at Scaled Agile. His recent career highlights include entertaining the crowd as the co-host of the 2019 and 2020 Global SAFe® Summits. A native of Columbia, South Carolina, Sam lives in Denver, CO, where he enjoys CrossFit and Olympic weightlifting.
We all know that any time you start something new in an organization it takes time to make it stick, and if teams and leaders find value, they will work to keep a program flourishing. The same is true when you implement a Measure and Grow Program within your organization. It takes planning and effort to get it started, but the rewards will definitely outweigh the efforts in the end.
At AgilityHealth®, our Strategists work with organizations every day to help them set up Measure and Grow programs that will succeed based on their individual needs. Through their experiences, they have noticed some consistent patterns across our customers, both commercial and government, for- and non-profit. Understanding these patterns can help you set up a program that’s right for your organization.
Before we jump into the patterns, let’s review what a Measure and Grow program is. Simply stated, it’s how you will measure your progress toward business agility. When you look at how Enterprise Business Agility was defined by Sally Elatta, founder of AgilityHealth, and Evan Leybourn, founder of the Business Agility Institute, you can see why this is important.
The ability to adapt to change, learn and pivot, deliver at speed, and thrive in a competitive market.
Sally Elatta, CEO AgilityHealth and Evan Leybourn, Founder, Business Agility Institute
We need to maintain our competitive edge, and in the process, make sure that healthy teams remain a priority—especially as we start to identify common patterns across teams.
Define how you will measure success.
Bertrand Duperrin said, “Tell me how you will measure me, and I will tell you how I will behave.” This is true of our teams, our team members, and our leaders. After the success criteria have been defined, allow team members to measure themselves in a safe environment, with a neutral facilitator, where they can be open and honest about their maturity. The process of acting on the data is very powerful for teams.
Provide a way to help teams grow after you measure them.
“Measurement without action is worthless data.” (Thanks, Sally, for another great bit of wisdom.) When you set up your Measure and Grow program, make sure it includes a way for teams to learn and mature.
Some of the common ones we see are:
Dojo teams—high-performing teams paired with new or immature teams to help them learn
Pre-defined learning paths for teams using instructor-led or virtual learning
Intentional learning options for teams through Communities of Practice or other options
Tie the results to the goals.
“Why are we taking the time to do this?” This is a common question that teams and leaders ask when we are starting Measure and Grow programs. They feel that the time reserved for an Inspect and Adapt session might be better spent tying up those last few story points or test cases, when in reality there is a corporate objective to mature the teams. Be sure to share these kinds of goals with your teams and managers so they understand that this is important to the organization.
Provide a maturity roadmap that takes the subjectivity out of the questions.
We all have an idea of what “good” looks like, but without a shared understanding of “good”, my “good” might be a 3, my teammate’s might be a 4, someone else’s might be a 2, and so on. When you share a common maturity roadmap to provide context for your assessment, your results will be less subjective.
Measure at multiple levels so that you can correlate the results.
When we just look at maturity from the team perspective, we get one view of an organization. When we look at maturity from the leadership and stakeholder perspectives, we get another view. When we look at both together—the sandwich model—we get a three-dimensional view and can start to surmise cause and effect. This gives a clearer picture of how an organization is performing.
Minimize competing priorities and platforms.
Almost all teams, regardless of organization, share that there are too many systems, too many priorities, too many everything (except maybe pizza slices …). Be sure to schedule your measurement and retrospective time when the team is taking a natural break in their work. Teams should take the time to do a strategic retrospective on how they are working together at the end of every PI during their Inspect and Adapt, so use that time wisely.
Engage the leaders in the process.
When this becomes a “we” exercise and not a “you” exercise, trust is built between the teams and their leaders. Inevitably, the teams are going to ask the leaders for assistance in removing obstacles. If the leaders are on board from the start, expect this, and begin removing those obstacles, it creates an atmosphere of psychological safety where teams can be honest about what they need and leaders can be honest about what they expect.
Remember, this is all change, and change takes time.
Roy T. Bennett said, “Change begins at the end of your comfort zone.” It takes time, perseverance, and some uncomfortable conversations to change an organization and help it to grow. But in the end, it’s worth doing.
Setting up a Measure and Grow program isn’t without its struggles, but for the organizations and teams that put the time and effort into doing it right, the rewards far outweigh the work that goes into it. If you would like to chat with us about what it would take to set up your Measure and Grow program, we’re ready to help.
About Trisha Hall
Trisha has been part of AgilityHealth’s Nebraska-based leadership team since 2014. As VP of Enterprise Solutions, she taps into her 25 years of experience to help organizations bring Business Agility to their companies and help corporate leaders build healthy, high-performing teams. Find Trisha on LinkedIn.
This post is part of an ongoing blog series where Scaled Agile Partners share stories from the field about using Measure and Grow assessments with customers to evaluate progress and identify improvement opportunities.
One of our large financial services clients needed immediate help. It was struggling to meet customer demands and industry regulations and needed to align business priorities to capacity before it was outplayed by competitors. The company thought the answer would be to invest in business agility practices. But so far, that strategy didn’t seem to be paying off.
Teams were in constant flux and the ongoing change was causing unstable, unpredictable performance. The leading question was, “How can we get more output from existing capacity?”
Among the client’s key challenges:
No visibility into common patterns across teams
Inspect-and-adapt data was stuck in PowerPoint and Excel
Output expectations didn’t match current capacity
Teams weren’t delivering outcomes aligned to business value
Getting a baseline on team health
We introduced the AgilityHealth® TeamHealth Radar Assessment to the continuous improvement leadership team, which decided to pilot the assessment across the portfolio. Within a few weeks of launching the assessment, the organization got a comprehensive readout that identified the top areas of improvement and key roadblocks for 90+ teams.
These baseline results showed a lack of a backlog, not to mention a lack of clarity around the near-term roadmap. Teams were committing to work that wasn’t attached to any initiatives and the work wasn’t well-defined. Dependencies and impediments weren’t being managed. And the top areas of improvement matched data collected during inspect and adapt exercises over the previous two years. Even though the organization had previously identified these issues, nothing had been done to resolve them, as leaders did not trust the data until it came from the voice of the teams via AgilityHealth.
The ROI of slowing down to speed up
Equipped with this knowledge, leaders took the time to slow down and ensure teams had what they needed to perform their jobs efficiently. Leaders also developed a better understanding of where they needed to step in to help the teams. The organization re-focused efforts on building a sufficient backlog, aligned with a roadmap, so teams could identify dependencies earlier in the development lifecycle.
By leveraging the results of the AgilityHealth assessment, leaders now had the data they needed to take action:
A repeatable process for collecting and measuring continuous improvement efforts at the end of every planning increment (PI)
Clear understanding of where teams stood in their Agile journey and next steps for maturity
Comprehensive baseline assessment results showing where individual team members thought improvement was needed, both from leaders and within their teams
An enterprise transformation doesn’t stop with the first round of assessments. Like other Fortune 500 companies, this client plans to continue scaling growth and maturity across the enterprise, increasing momentum and building on what it’s learned.
The company plans to introduce the AgilityHealth assessment for individual roles, so it can measure role maturity and accelerate the development of Agile skills across defined competencies. It will continue to balance technical capacity with an emphasis on maintaining stable, cross-functional teams since these performance metrics correlate to shipping products that delight customers and grow the business. And to better facilitate “structural agility” (creating and tracking Agile team structures that support business outcomes), it will focus on ensuring the integrity of its data.
You too can leverage AgilityHealth’s Insights Dashboard to get an overall view of your organization’s Agile maturity: baseline where you are now, discover how to improve, and get to where you want to be tomorrow. Get started by logging into the SAFe Community and visiting the Measure and Grow page.
About Sally Elatta
Sally is a thought leader in the Agile and business agility space. She’s passionate about accelerating the enterprise business agility journey by measuring what matters at every level and building strong leaders and strong teams. She is an executive advisor to many Fortune 500 companies and a frequent keynote speaker. Learn more about AgilityHealth here.
Why do some people find SAFe® to be helpful in empowering teams, while others find implementing the Framework painful? To be honest, both scenarios are equally valid.
As I was beginning to refocus my career on transforming the operating models and management structures of large enterprises, I found that the behavioral patterns of Agile and the operational cadence of Scrum shined a spotlight on an organization’s greatest challenges. As a byproduct of working faster and focusing on flow, impediments became obvious. With the issues surfaced, management had a choice: fix the problems or don’t.
As we scale, the same pattern repeats, though the tax of change is compounded because change is hard. Meaningful change takes time, and the journey isn’t linear. Things get better, things get worse, then they get better again.
Consultants will often reference the Dunning-Kruger curve when selling organizational change.
The Dunning-Kruger curve illustrates change as a smooth journey. One that begins with the status quo, dips as the change is introduced, and then restores efficiency as organizations achieve competence and confidence in the new model. Unfortunately, that’s not how change works, and depicting organizational change this way is misleading.
When I’d spend time doing discovery work with a prospective client, I’d instead cite a more accurate picture of change: the Satir curve. The Satir image depicts the chaos of change and better prepares people for the journey ahead. Change is chaotic, and achieving successful change requires a firm focus on the reason why the change is important—not simply the change itself. Why, then, can a SAFe transformation (or any other change) feel painful? Here are the patterns of SAFe transformation that I observed pre-COVID.
The Silver Bullet
An organization buys ‘the thing’ (SAFe) thinking it’s a silver bullet that will solve all of their problems: the inability to deliver, poor quality, dissatisfied customers, unhappy teammates, and crummy products. SAFe can help address these issues, but not by simply using the Framework. The challenge we often face is that leaders just want ‘the thing.’ Management is too busy to learn what it is that they bought. That’s OK though. They did an Agile transformation once and read the article on Wikipedia.
How can you lead what you don’t know? How can you ask something of your team that you don’t understand yourself? Let’s explore.
Start with Why
Leaders don’t take the time to understand what SAFe is, what problems it intends to help organizations solve, or the intent with which SAFe is best used. Referencing the SAFe Implementation Roadmap, its intent is to avoid some of this pain. We begin by aligning senior leaders with the problems to solve. After all, we’re seeking to solve business problems. As Kotter points out, all change must start with a compelling vision for change.
With the problem identified, we then discuss if SAFe is the best tool to address those concerns. We continue the conversation by training leaders in the new way of working, and more importantly, the new way to think to succeed in the post-digital economy.
Middle management, sometimes distastefully referred to as the ‘frozen middle,’ is the hardest role to fill in an organizational hierarchy. Similar to how puberty serves as the awkward stage between childhood and adulthood, middle management is the first time many have positional responsibility, but not yet the authority to truly change the system.
Middle managers are caught in a position where many are forced to choose between doing what’s best for the team and doing what’s best to get the next position soon. Often, when asked to embrace a Lean and Agile way of working, these managers will recognize that being successful in the new system is in contrast to what senior leaders (who bought the silver bullet but could not make time to learn it) are asking of them.
This often manifests in a conversation about outputs versus outcomes, in which success had traditionally been determined by color-coded status reports instead of working product increments and business outcomes. Some middle managers will challenge the old system and others will challenge the new system, but in either context, many feel the pain. This is the product of a changing system and not the middle manager’s fault. But it is the reason why many transformations will reset at some point. The pain felt by middle management can be avoided by engaging the support of the leadership community from the start, but this is often not the case.
Misaligned Agile Release Trains
Many transformations begin somewhere after the first turn on the SAFe Implementation Roadmap. Agile coaches will often engage after someone has, with the best of intentions, decided to launch an Agile Release Train (ART), but hasn’t understood how to do so successfully.
As a result, the first Program Increment, and SAFe, will feel painful. Have you ever seen an ART that is full of handoffs and is unable to deliver anything of value? This pattern emerges when an ART is launched within an existing organizational silo, instead of being organized around the flow of value. When ARTs are launched in this way, the same problems that have existed in the organization for years become more evident and more painful.
For this reason, many Agile and SAFe implementations face a reboot at some point. Feeling the pain, an Agile coach will help leaders understand why they’re not getting the expected results. Here’s where organizations will reconsider the first straight of the Implementation Roadmap, find time for training, and re-launch their ARTs. This usually happens after going through a Value Stream and ART Identification workshop to best understand how to organize so that ARTs are able to deliver value.
Moving Fast Makes Problems More Obvious
Moving fast (or trying to) shines a big spotlight on our problems and forces us to confront them. Problems like organizational silos, toxic cultural norms, bad business architecture, nightmarish tech architecture, cumbersome release management, missing change practices, and the complete inability to see the customer typically surface when we seek to achieve flow.
The larger and older an organization is, the more problems there are, and the longer it takes to get to a place where our intent can be realized. Truly engaged leadership helps, but it still takes time to undo history. For example, I’ve been working with one large enterprise since 2013. It’s taken eight years since initial contact for the organization to evolve to a place that allowed them to respond to COVID confidently and in a way that actively supports global recovery. Eight years ago, the organization would have struggled to achieve the same outcome.
When I first started working with this organization, it engaged in multi-year, strategic planning, and only released new value to its customers once every three years. The conceptual architecture diagram resembled a plate of spaghetti—people spent more time building consensus than building products. And the state of the organization’s operations included laying people off with a Post-it note on their monitor and an escort off-campus.
Today, the organization is much healthier in every way imaginable. It’s vastly better than it was, but not nearly as good as it will be. The leadership team focuses on operational integrity, and how maintainable, scalable, and stable the architecture is—and recognizes that the team is one of the most important assets.
Embracing Lean and Agile ways of working at scale begins with the first ART launch. It continues with additional ART launches, a reconsideration of how we approach strategy, technology, and customers. And it accelerates as we focus on better applying the Lean-Agile mindset, values, and principles on a daily basis. This is the journey to #BecomingAgile so that we can best position the team and our assets to serve customers.
Change Is Hard
Change takes time, and all meaningful change is painful because the process challenges behavior norms. The larger the organization, the richer the history, and the longer it may take to achieve the desired outcome. There will be good days, days when things don’t make sense, and days when the team is frustrated. All of that is OK, and so is feeling frustrated during the change. It’s important to focus on why the change is taking place.
A pre-pandemic pattern (that I suspect may shift) is that change in large organizations often comes through evolution instead of revolution. With the exception of a very few clients, change begins with a team and expands as that team gains success and the patterns begin to reach adjacent areas of the operation. The change will reach a point where supporting organizational structures must also change to achieve business agility.
As mentioned, moving fast with a focus on flow and customer-centricity exposes bottlenecks in the system. At some point, it will become obvious that structures such as procurement, HR, incentive models, and finance are bottlenecks to greater agility. And, when an organization begins to tackle these challenges, really cool things start to happen. People behave based on how they are incentivized, and typical compensation and performance models are at odds with the mindset, values, and principles that are the foundation of SAFe.
Let’s Work Together
SAFe itself is not inherently painful. The Framework is a library of integrated patterns that have proven successful when paired with the intent of a Lean-Agile mindset, set of core values, and guiding principles. Organizations can best mitigate the pain associated with change by understanding what’s changing, the reason why the change is being introduced, and a deliberate focus on sound change-management practices. If you’re working in a SAFe ecosystem that feels challenging, share your experience in the General Discussion Group forum on the SAFe Community Platform. Our community is full of practitioners who represent all stages of the Satir change curve, and who can offer their advice, suggestions, and empathy. Together, we’ll make the world a better place to work.
About Adam Mattis
Adam Mattis is a SAFe Program Consultant Trainer (SPCT) at Scaled Agile with many years of experience overseeing SAFe implementations across a wide range of industries. He’s also an experienced transformation architect, engaging speaker, energetic trainer, and a regular contributor to the broader Lean-Agile and educational communities. Learn more about Adam at adammattis.com.
My career path shifted at the beginning of 2021 when I became a full-time scrum master. I knew right away that to be the best scrum master I could be, I’d need to do continuous professional development.
One disadvantage I noticed right away was that I‘d only seen Agile, Scrum, and SAFe® in action in the context of one unique, midsize organization. To be more well-rounded in my ability to lead and coach, I needed to experience companies of different sizes, in different industries, and with different company cultures to see how these principles and practices played out—as well as how their SAFe transformations took shape. The more I saw, the less I would view my company’s, my teams’, and my own routines as the only perspective. The more I saw, the more I could view innovation as possible because it worked over there.
That’s when the idea came to me to be my own hero and think of something tailored to my professional development needs. With support from my leaders and peers, I created a Scrum Master Exchange Program. I invited interested scrum masters from Scaled Agile and from Travelport, and we paired and connected. From there, pairs self-organized and scheduled several sessions:
Introduction—in this session, pairs introduced themselves and shared their professional background, strengths, weaknesses, and goals. They also talked about their current context: their company, their teams, a typical day/iteration, current conflicts, and recent successes.
Shadow—in these sessions, one scrum master silently sat in on the other’s scrum events or even parts of their ART’s PI planning events. The silent scrum master noted group dynamics, facilitation techniques, or anything interesting.
Debrief—pairs scheduled debrief sessions soon after shadow sessions to share observations, relay positive and constructive feedback, and ask questions.
We closed the program with a retrospective for all participants and a summary email to participants and their people managers.
What I Learned
So, how did it go? When we came together to review the program and its benefits, we all agreed that we deeply valued the new perspectives, experiences, and lessons learned. Connecting with our partners and problem-solving together was empowering and often resulted in us taking action toward solving our challenges.
For me personally, as a new scrum master, I gained confidence in my knowledge and abilities. While my partner was extremely experienced, I could empathize with her problems. And I could even inspire her to consider something new, which made me feel competent and affirmed.
I also took away a new mindset: pursuing simpler, more effective, tried-and-true methods and focusing on the purpose. For example, I would get really creative with my iteration retrospectives, but they could be time-consuming to ideate and build, and the results easily became disorganized. My partner had a very simple, organized, centrally located method and kept things predictable. Though mine still isn’t perfect, I continue to take steps to bring my style a bit closer to hers (I can’t abandon all flair!).
Last but not least, I was reminded once more that my professional development, my teams’ development, and my company’s development are all journeys. I know what you’re thinking: “How cliché.” But the truth is, you can’t do everything, so you might as well do something. By maintaining a relentless improvement mindset and taking small steps, both you and your teams can get better.
Was the exchange program perfect? No. But we all met someone new in our same role and got a peek behind the curtain at their respective organizations. And, if we decide to implement the program again, we know how to improve it. I’m proud of the fact that I noticed a hole in my professional development and took action, learned a ton, and brought some of my peers with me along the way. I’d call that a successful experiment.
I’d strongly encourage you to try out an exchange in whatever role you have. Here’s how:
Float the idea with your peers to find people to join you.
Reach out to your networks, your coworkers’ networks, and your company’s networks to select a potential organization with which to work.
Pitch the idea, gain buy-in, and connect with legal for any necessary paperwork.
Finalize the participant lists, pair them up, and send them off.
Don’t forget to run a retro and spot ways to improve.
Let us know how it goes!
About Emma Ropski
Emma is a Certified SAFe 5 Program Consultant and scrum master at Scaled Agile, Inc. As a lifelong learner and teacher, she loves to illustrate, clarify, and simplify to keep all teammates and SAFe students engaged. Connect with Emma on LinkedIn.
The circular economy offers opportunities for better growth through an economic model that is resilient, distributed, diverse, and inclusive. It tackles the root causes of global challenges such as climate change, biodiversity loss, and pollution, creating an economy in which nothing becomes waste, and which is regenerative by design.
Many enterprises are committed to making their products more eco-friendly and participate in global coalitions such as The Plastics Pact. Nevertheless, due to a lack of global standards, dialogue, and collaboration, they can end up creating fragmented, small-scale, and sub-optimal solutions. For example, an enterprise might design a product that contains recyclable materials, is built with mono-material components, and is easy to disassemble. Still, it would only maximize its recycling value when embedded in a functioning collection system and treated in proper recycling facilities.
What Is the Solution, Then?
Circularity is a property of a system, not of individual products. It depends on how different actors, products, and information interact with each other. Improving the whole system would require that a group of loosely coupled actors combine their business models to achieve a better collective outcome. The proposed solution is a virtual organization that aligns the strategy and execution of all the stakeholders, creating a solution ecosystem.
Let’s look at one example. I will illustrate a management framework to improve the packaging plastics system shown below.
Applying SAFe Principles to the Circular Economy
SAFe principle #10, Organize around value, recommends creating a virtual organization that would maximize the flow of value. It involves eliminating silos and barriers to collaboration, including the people, the processes, and the tools, across all relevant stakeholders trying to achieve the same outcome.
This organization would be called a solution ecosystem, and its goal would be to implement the desired changes. Following SAFe principle #2, Apply systems thinking, the solution ecosystem would include all the actors involved in or impacted by the flow of packaging plastics, from business, government, scientists, and NGOs to end-user communities, including all the necessary activities and information flows required. Decisions would be made collaboratively, iteratively, and based on science-based targets.
The objective of the solution ecosystem would be to deliver a series of interventions to improve the flow of plastics iteratively. The teams would validate each intervention hypothesis through a series of minimum viable products following a roadmap. An intervention example could be, “to get the top 20 manufacturers of packaging plastics to commit to plastic packaging that’s 100% reusable, recyclable, or compostable by 2025,” while the desired outcome would be “to reduce packaging plastics flowing into the ocean by 50%.”
The solution ecosystem comprises small, long-standing, cross-stakeholder, and cross-functional teams or teams of teams dedicated to addressing specific outcomes. They would also have access to part-time specialized resources and possess all the necessary skills to deliver value independently of other teams.
The solution ecosystem could be coordinated top-down, from organizations such as the World Economic Forum, or led by a single enterprise coordinating with all the stakeholders impacted by its products. This organization could reach out vertically to all actors along the supply chain, such as those in logistics, packaging, and wholesale, horizontally to competitors, or circularly to all stakeholders impacted.
Aligning Strategy to Execution
The solution ecosystem is likely to be composed of many people and organizations. To align strategy and execution, SAFe proposes creating a golden thread: from a single, shared vision, to strategic themes, to a common backlog of interventions that holds and prioritizes all the interventions that will realize those themes.
The overarching vision of the New Plastics Economy is that plastics never become waste. Instead, they re-enter the economy as valuable technical or biological nutrients, creating an effective after-use plastics economy, drastically reducing the leakage of plastics into natural systems, and decoupling plastics from fossil feedstocks.
Strategic themes are areas of investment and the way to achieve that vision. They are a way to group and classify interventions. The solution ecosystem’s scientific community would express them as objectives and key results (OKRs), providing a qualitative and quantitative measurement to evaluate progress and success. An example could be:
Objective: Drastically reduce leakage of plastics into natural systems.
Key result 1: Improve after-use infrastructure in high-leakage countries by x%
Key result 2: Increase the economic attractiveness of keeping materials in the system
Key result 3: Increase investments in efforts related to substances of concern by x %
The teams would strive to accomplish the strategic themes by implementing a series of interventions. The solution ecosystem’s backlog is the prioritized list of interventions to be done. For example, it might look like this:
Plastics toolkit for policymakers
Big data service to track the flow of dangerous chemicals
Food delivery containers as a service
Collaborative Decision-making Process
SAFe recommends using Participatory Budgeting (PB) as a tool for budget allocation across an enterprise’s business units. We could expand PB to multi-stakeholder decision-making, as many municipalities do, gathering all the stakeholders’ voices. All the stakeholders impacted would be heard, voice their concerns, choose their priorities, and learn about other stakeholders’ concerns. The PB process should be repeated periodically to create an agreed rolling-wave plan.
Creating a Balanced Portfolio
To maintain a well-balanced portfolio, SAFe proposes several budget guardrails:
Capacity allocation: This technique classifies interventions into different types and allocates a percentage of the available capacity to each kind, such as building the basic science, writing communications material for end-users, or drafting policy documents. Every three months, we can decide the percentage allocation to each type, keeping the desired balance across all categories.
Investment horizons: Classifying interventions by their impact timeframe allows leadership to maintain the right balance between the immediate, short, and long term. Quick wins are needed to win the hearts and minds of the naysayers, while the more difficult things usually take longer.
Epic approval: Decentralizing decision-making is fundamental to reducing time-to-market and improving flow. Nevertheless, substantial initiatives that impact multiple stakeholders need to go through an approval process based on a short business case.
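The capacity-allocation guardrail above boils down to simple arithmetic. Here is a minimal sketch; the intervention types, percentages, and the `allocate_capacity` helper are illustrative assumptions, not part of SAFe guidance:

```python
# Hypothetical sketch of the capacity-allocation guardrail: split total
# capacity across intervention types by an agreed percentage allocation.

def allocate_capacity(total_points, allocation):
    """Return capacity (in points) per intervention type.

    allocation maps each type to its agreed share; shares must sum to 1.0.
    """
    if abs(sum(allocation.values()) - 1.0) > 1e-9:
        raise ValueError("allocation percentages must sum to 100%")
    return {kind: round(total_points * share) for kind, share in allocation.items()}

# Example: 200 points of capacity for the next three-month cadence
allocation = {
    "basic science": 0.50,
    "end-user communications": 0.30,
    "policy documents": 0.20,
}
print(allocate_capacity(200, allocation))
# {'basic science': 100, 'end-user communications': 60, 'policy documents': 40}
```

Revisiting the percentages every three months, as described above, then just means calling the same function with a new `allocation` map.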
Project to Product
The traditional project approach would have required well-defined interventions with fixed scope, fixed budget, and a fixed timeframe, such as building a clearly defined database of biomaterials at a cost of £2m over one year. One major drawback of this approach is that the success criteria of the intervention usually focus more on staying within these artificial constraints than on achieving the desired outcome of increasing the percentage of biomaterials used in packaging plastics by x%. Another problem is that designs and plans must be agreed upon upfront to obtain funding and approval, which is the moment when we know the least about the problem and the solution. Hence, it becomes harder to pivot later if needed.
The book Project to Product proposes a product approach, where funding is associated with long-standing teams working on a set of interventions related to the desired outcome. They would iteratively validate hypotheses and measure progress irrespective of the validity of their initial plans and assumptions. Products must be launched and maintained during their life cycle and have multiple target users with evolving needs.
For instance, the budget would be related to a product called ‘biomaterials for packaging,’ including research, product launch, product support in life, and end-of-life activities, rather than related to a project to launch a new packaging material.
SAFe principle #1, Take an economic view, proposes that we work incrementally and iteratively. Working in small timeboxes and on small pieces of independently valuable work would allow us to obtain the best economic outcome. We will get quick feedback; the value will accumulate over time, and it will enable us to test our hypotheses and pivot quickly if needed.
SAFe principle #7, Apply cadence, synchronize with cross-domain planning, proposes that all teams involved in the solution ecosystem get together every three months to collaboratively plan the work for the next three months. This recurrent process helps evaluate progress toward the shared outcome, manage cross-team dependencies, and facilitate cross-team collaboration, creating a stable and predictable rhythm of key events.
Every three months, all teams demonstrate their accomplishments to evaluate progress objectively. They would get together to reflect on how they deliver value and look for opportunities to improve the process.
The Epic Owner is a new role that would work at the solution-ecosystem level to track and shepherd an intervention through its life cycle and across all the teams involved. In our example, the Epic Owner for the biomaterials database would be accountable for defining the scope, building the short business case, getting it approved, building the teams across all stakeholders, tracking progress, consulting to the delivery teams, and evaluating whether the desired outcome is being met. It is a role, not a title. Hence, it might be fulfilled by a group of people.
Transparency and visualization of all the work and dependencies by everyone are key. Kanban boards would allow us to see every intervention’s status and match demand with available capacity. A dependency board would show when each intervention will be delivered and its dependencies on other teams.
No amount of central planning will be enough at this scale. To enable decentralized decision-making, we need to create a framework that provides organizational clarity and technical competence. This would allow individual teams to make decisions independently with the confidence that those will be good decisions. An example could be that a team can decide to increase the cost of the solution by up to £1,000 to produce an additional reduction in the amount of plastics leaking into the ocean, as long as there is no impact on any of the other planetary boundaries.
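A decentralized decision rule like the one in this example can be captured as a simple guard that any team applies locally. The function name, threshold, and inputs below are hypothetical illustrations of the idea, not an established API:

```python
# Hypothetical sketch of a decentralized decision rule: a team may approve
# a change locally if the extra cost stays under the agreed threshold, the
# change reduces ocean leakage, and no other planetary boundary is impacted.

COST_THRESHOLD_GBP = 1_000  # agreed local decision limit (illustrative)

def team_can_decide(extra_cost_gbp, leakage_reduction_tonnes, boundaries_impacted):
    """Return True if the team may approve the change without escalation."""
    return (
        extra_cost_gbp <= COST_THRESHOLD_GBP
        and leakage_reduction_tonnes > 0
        and not boundaries_impacted  # no other planetary boundary affected
    )

print(team_can_decide(800, 2.5, []))              # True: within the local mandate
print(team_can_decide(1500, 2.5, []))             # False: cost too high, escalate
print(team_can_decide(800, 1.0, ["freshwater"]))  # False: impacts another boundary
```

The point is organizational clarity: because the rule is explicit, teams can act without waiting for central approval, and anything outside the rule is escalated.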
References and Sources of Inspiration
Several reports are calling for organizations like the proposed solution ecosystem that could lead a multi-stakeholder systemic change:
The Metabolic Institute proposed that The Netherlands implement a regional ecosystem approach to scale up circular economy innovation.
The Ellen MacArthur Foundation calls for a global, independent collaboration initiative that brings together all actors across the value chain: from consumer goods companies, plastic packaging producers, and plastics manufacturers to cities, businesses involved in collection, sorting, and reprocessing, policymakers, and NGOs.
J. Konietzko writes, “Ecosystem innovation aims at changing how actors relate to each other and how they interact to achieve the desired outcome… circular products and services often maximize their circularity in conjunction with other assets. A circular ecosystem perspective thus goes beyond the question, what is our value proposition? Instead, it asks, how does our offering complement other products and services that together can provide a superior and circular ecosystem value proposition?”
D. Meadows, in her book Thinking in Systems, says, “You can’t predict a system, but you can dance with it.” Hence, do not design a solution upfront at the enterprise level, expecting the whole ecosystem to react as you hoped. Instead, implement a management framework that allows you to work iteratively at the system level, which we call the solution ecosystem; listen to the feedback, and react accordingly.
In this blog post, I proposed a management framework, adapted from the Scaled Agile Framework, to manage a multi-stakeholder ecosystem to scale up solutions for the circular economy. At this stage, these are ideas extrapolated from my experience in business agility transformations and my readings into the circular economy. Please get in touch with me via LinkedIn to explore these ideas further, or if you have a concrete initiative you would like to apply them to.
About Diego Groiso
As a Principal Consultant at Radtac, a Scaled Agile Partner, Diego supports companies in their Business Agility journeys as an Enterprise Agile Coach, Trainer, and Release Train Engineer. Recently, he has transformed the whole infrastructure department of a global utility company, as well as launched and coached several Agile Release Trains within the Digital Transformation Programme in a global telecom company. He has a passion for the circular economy as one of the solutions to climate change. Connect with Diego on LinkedIn.