Rethink Your Understanding

Transforming Software Delivery

Adapting to Technological Shifts in Enterprise Software Development

August 19, 2023 by philc

5 min read

The focus of this post is on more significant shifts in technology stacks.

Last week, I shared an article offering an alternative viewpoint on a core framework used in our organization’s platform. The post sparked follow-up posts on similar topics and a conversation among senior engineers about the drawbacks and shortcomings of this once-dominant (and still popular) framework.

Although I strongly support the inclusion of this framework in our stack, I also value fostering awareness of the choices and perspectives available in our industry and staying up-to-date with them. Some of our exceptionally skilled team members, experts in this technology but with limited experience supporting a large organization (and I want to emphasize that I mean no disrespect), are using the ideas in the article I shared to post additional articles with similar arguments and to propose a move away from the widely accepted framework.

As frameworks expand and evolve, with more contributors joining in, there are instances where the framework veers away from its original purpose. Evolution in a framework isn’t necessarily harmful but can pave the way for a new technology that addresses the prevailing framework’s shortcomings or pain points.

Supporting future aspiring technology leaders

With a tech career of over 24 years under my belt, I have gained experience as a software engineer, software architect, and senior leader. My experience ranges from working with startups to global enterprise organizations.

I have engaged in numerous discussions on selecting programming languages and frameworks, where the delicate balance of embracing new technologies has always been at the forefront. I am often amazed by the solutions and implementations I learn from newer engineers and individual contributors. As a seasoned professional in this field, I wish to impart insights from these conversations, share hard-earned lessons, and offer wisdom to the next generation of tech leaders.

In the face of the relentless progression of technology, the temptation to adopt new and flashy tools grows stronger by the day. Yet, when it comes to enterprise software development, selecting technologies and frameworks should never be approached casually; it demands care and deliberation.

The Responsibility of Choice

Decisions regarding technology adoption in large organizations can have far-reaching implications. They shape the organization’s future trajectory, influence development and maintenance costs, impact the ability to attract and retain top talent, and ultimately determine its overall competitiveness. Therefore, such decisions must be approached strategically, with careful planning and a profound understanding of the potential consequences.

Keeping up with the latest technological trends is crucial, but comprehending the broader scope of transformation is equally vital. It entails thoughtful planning, strategic thinking, and evaluating the ramifications of decisions in the short and long run.

The Art and Science of Managing Change

Change is inevitable, but managing it requires careful consideration, especially in a large organization. The more significant the technology shift, the greater the responsibility of the decision-makers.

Transitioning from a legacy system to a new technology stack presents its fair share of challenges. It goes beyond simply choosing the appropriate framework or programming language. It entails comprehending the entire ecosystem – the architecture, inter-dependencies, business objectives, and the organization’s long-term vision.

The role of an architect in this context is crucial. The architect creates the blueprint for development, scaling the system, and ensuring that the chosen technologies align with the organization’s goals. The architect also needs to consider the impact of their choices on the various stakeholders involved.

Evaluating When to Move On

Recognizing the right time to retire a framework or transition away from a system that has become burdensome is a crucial aspect of technology leadership. As technology stacks age, they often encounter performance bottlenecks, increased maintenance overhead, and growing technical debt. However, before making a decision, it is essential to conduct a comprehensive cost-benefit analysis. Even if a framework has its share of challenges, it may offer advantages that outweigh its drawbacks.

When considering whether to migrate to a new framework, evaluating its integration into your organization, the wealth of institutional knowledge built around it, and the unique features critical to your business is essential. Balancing the immediate challenges with the potential risks and costs is crucial. This decision should be driven by a well-thought-out strategy considering the overall business goals, the impact on stakeholders, the expected return on investment, and the organization’s readiness to adapt to a new technology stack. Retiring a framework or making a technology shift should be a deliberate and informed choice, with a comprehensive understanding of the change’s current challenges and future implications.

Evaluating the Options

Following the latest trends and adopting emerging technologies can be tempting without more profound thought. However, it is crucial to approach these decisions with a critical mindset. Before diving in, it is essential to ask some tough questions. Is the new framework mature enough to meet the demands of an enterprise setting? Does it have a vibrant community and reliable support? What are the potential risks associated with this cutting-edge technology? Moreover, it is essential to consider the expected lifespan of the chosen framework and have a plan for future technology migrations. These factors will lead to more informed and strategic decisions, ensuring long-term success.

Strategizing for the Long-Term

Enterprise platforms are often built to serve high-transactional organizations with long-lived systems, where a technological shift can be a significant undertaking. As a technology leader, you must understand the implications of moving from the past to the future – the costs, the investment, the impact on stakeholders, and the risks involved.

When selecting technologies for an organization, it is crucial to remember that you are not simply choosing for a small business storefront. Your decisions should be grounded in a comprehensive evaluation of the available options and a deep comprehension of the organization’s objectives.

While it’s recommended and often expected that engineers stay updated with the latest technological advancements, it’s crucial to remember that your organization relies on you not only to embrace new trends but also to plan for and support your past decisions. Merely recognizing the latest shiny objects is one thing; possessing the skill set to strategize, plan, and effectively manage change is imperative for technology leaders operating within large organizations.

In an ever-evolving technology landscape, striking a balance between staying current with trends and maintaining the stability and sustainability of our enterprise systems is crucial. It’s a delicate dance that demands thoughtful deliberation, strategic planning, and a profound comprehension of how change impacts the organization.

Conclusion

Adopting new technologies or transitioning to alternative frameworks in a large or enterprise technology company goes beyond simply following the latest trend. It necessitates thoughtful consideration of the impact of change on the organization, stakeholders, and the company’s long-term vision. As technology leaders, we are responsible for approaching these decisions with a strategic mindset and a well-defined plan, considering the broader context of change.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: [email protected]

Filed Under: Software Engineering

Mitigating Metric Misuse: Preventing the Misuse of Metrics and Prioritizing Outcomes Over Outputs

June 21, 2023 by philc

6 min read

The business needs feedback on technology investments. Teams need insights into flow efficiency and potential bottlenecks.

Part 3 of a continuing conversation regarding today’s delivery system metrics: Flow Metrics, DORA, and the traditional concerns regarding the Gamification of numbers.

Links to the previous posts:

Part 1: Finally, Metrics That Help: Boosting Productivity Through Improved Team Experience, Flow, and Bottlenecks.

Part 2: Developer Experience: The Power of Sentiment Metrics in Building a TeamX Culture.

What problem are we trying to solve?

Identifying the specific problem you are trying to solve with metrics is essential. Are there other solutions apart from using these metrics? If we don’t use them, how can we determine where to invest and track progress?

The problem we are trying to solve is improving the efficiency of software delivery and employee engagement. The focus is on continuous improvement of flow. Metrics illuminate the bottlenecks and obstacles that reduce the team’s ability to deliver software. Our goal is to continuously improve the flow of work, which ultimately leads to better outcomes, and improvements in outcomes reflect efficiency improvements.

Business interest in metrics (investing in technology, investing in work)

  • Are we improving our business by investing in technology? Are we getting better?
  • Return on investment, return on outcomes
  • Delivering faster with high quality

Teams (delivering work, removing friction, feeling successful)

  • Improve efficiency by reducing waste, shortening lead and cycle times, optimizing workflow, and promoting employee engagement.
  • We do this by providing teams with data, insights, and optics into bottlenecks and areas of friction, generating conversations about why these bottlenecks exist and brainstorming experiments to resolve them.
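
To make the team-level view concrete, here is a minimal sketch of how lead and cycle times could be computed from work-item timestamps. The class and method names are hypothetical illustrations, not our actual tooling:

```java
import java.time.Duration;
import java.time.LocalDateTime;

// Hypothetical sketch: lead time measures request-to-delivery,
// cycle time measures work-started-to-delivery.
public class FlowMetricsSketch {

    static long leadTimeDays(LocalDateTime requested, LocalDateTime delivered) {
        return Duration.between(requested, delivered).toDays();
    }

    static long cycleTimeDays(LocalDateTime started, LocalDateTime delivered) {
        return Duration.between(started, delivered).toDays();
    }

    public static void main(String[] args) {
        LocalDateTime requested = LocalDateTime.of(2023, 6, 1, 9, 0);
        LocalDateTime started   = LocalDateTime.of(2023, 6, 5, 9, 0);
        LocalDateTime delivered = LocalDateTime.of(2023, 6, 12, 9, 0);

        // A large gap between lead and cycle time suggests work is queueing
        // before anyone starts it, which is a bottleneck worth discussing.
        System.out.println("Lead time (days): " + leadTimeDays(requested, delivered));
        System.out.println("Cycle time (days): " + cycleTimeDays(started, delivered));
    }
}
```

The point of the sketch is the conversation it enables: the gap between the two numbers is the queue, and the queue is where the team looks for friction.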

Is there an elephant still in the room? What about the Gamification of metrics?

The lingering concern with system metrics like Flow and DORA is that teams will focus on gaming the numbers instead of studying the data for patterns that highlight bottlenecks and friction, otherwise known as areas of improvement.

Stakeholders need system metrics, and using them effectively within the organization is essential. Some tools can be expensive. There is also a risk of gaming the system to achieve a desired metric, and these tools’ value decreases when teams focus solely on the numbers.

How can we avoid becoming hyper-focused on these metrics this time around? How can we encourage teams to use them? We should separate the business view from the team’s perspective. The team should focus on the insights and illuminated areas of improvement, not just the numbers.

Some leaders adopting these newer metrics and dashboards measuring Flow and DORA still warn that gamification wins and teams fall back to focusing solely on improving the number.

Yet, numerous teams have succeeded by fostering a positive culture and adopting the right mindset. These teams analyze the patterns and identify the areas that pose obstacles. By doing so, they enhance flow, mitigate friction, and boost engagement, activity, and overall satisfaction.

The key is leadership.

Bad or incompetent managers will diminish these efforts.

If you still fear that teams will game or misuse metrics, defeating the value of modern ways to measure and motivate efficiency improvement, consider improving your leadership instead of blaming the tools or teams.

The increasing pressure on engineering leaders to be “more data-driven” can have positive or negative effects depending on the competence of the managers leading the effort. Even with today’s metrics and a clear understanding of the “why,” bad managers can quickly erode the value of these modern team data insights.

Although metrics like Flow and DORA can offer valuable insights into team efficiency and process bottlenecks, it is crucial to remember their purpose. These metrics serve as tools for understanding and improving the system, not micromanaging, unfairly critiquing the team, or ranking performance across teams.

These are “team” metrics. Misusing these metrics to measure individual performance is an unfortunate managerial anti-pattern. As with comparing teams, managers focusing on individual performance can lead to a toxic culture and create an environment where team members might manipulate the metrics rather than focus on delivering value.

If your teams prioritize numbers instead of identifying improvement areas and working together to overcome challenges, consider examining the person guiding the team and reporting the team’s metrics.

Competent and influential managers:

Leadership needs to set a clear cultural imperative: it is human nature to focus on the numbers, and sometimes that is unavoidable, but doing so intentionally will not be accepted. It is important to reinforce a culture of improvement and help teams understand that metrics are not the ultimate goal. Instead, metrics are the result of efforts to enhance different processes: removing bottlenecks, improving flow, automating processes, and enhancing practices. With the focus on improving rather than on the numbers, each improvement will move the metrics over time.

  • Foster psychological safety for teams to make all work and impediments visible.
  • Don’t use metrics to compare or punish teams. Each team has a unique set of customers, complexity, and challenges.
  • Use metrics in retrospectives to drive discussion and ideas on improvements.
  • Celebrate experiments and improved trends.

The Benefits

Teams should be encouraged to view and use the metrics differently than how the business views them. Teams finally have data to advocate for investments in other work besides features.

There are ways in which teams can benefit once they have data to back up the evidence of their bottlenecks and show the business and stakeholders the value of investing in and addressing these bottlenecks. Teams can use this data to demonstrate the necessity for investing in technical debt and efficiency improvements rather than just investing in feature work. The benefits include:

  1. More data to act upon: Give your team more data and insights to talk about, and if required, act upon it before things start to fall off the rails.
  2. Exposing Bottlenecks: Flow Metrics and DORA Metrics can help teams identify bottlenecks in their development process. Bottlenecks include areas where work is consistently getting held up, causing delivery delays. By identifying these bottlenecks, teams can focus on improving these specific areas through automation or other solutions, leading to overall improvements in efficiency and delivery time.
  3. Promoting Proactive Improvement: Using these metrics encourages a proactive approach to improvement, as teams can use the data to identify potential issues before they become significant problems. Early detection can lead to a more efficient and effective development process.
  4. Demonstrating Value Beyond Features: Often, stakeholders focus on feature delivery as the primary measure of a development team’s value. However, these metrics can help prove that a team’s value extends beyond delivering features. They can show how improvements in technical debt reduction, process efficiency, and team collaboration can also provide significant value.
  5. Facilitating Conversations with Stakeholders: These metrics provide teams with the data they need to have meaningful conversations with stakeholders about where investment is required. They allow teams to move beyond subjective arguments to data-driven discussions about the state of the development process and what is needed to improve it.
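
As an illustration of what "data to act upon" might look like, two of the DORA measures could be derived from a simple deployment log. The record and method names below are a hypothetical sketch, not a reference to any specific tool:

```java
import java.util.List;

// Hypothetical sketch: derive two DORA measures from a deployment log.
public class DoraSketch {

    record Deployment(String id, boolean causedIncident) {}

    // Change failure rate: share of deployments that caused an incident.
    static double changeFailureRate(List<Deployment> deployments) {
        long failures = deployments.stream().filter(Deployment::causedIncident).count();
        return (double) failures / deployments.size();
    }

    // Deployment frequency: deployments per week over the observed window.
    static double deploymentsPerWeek(int deployCount, int weeks) {
        return (double) deployCount / weeks;
    }

    public static void main(String[] args) {
        List<Deployment> log = List.of(
                new Deployment("d1", false),
                new Deployment("d2", true),
                new Deployment("d3", false),
                new Deployment("d4", false));

        System.out.println("Change failure rate: " + changeFailureRate(log));
        System.out.println("Deployments/week: " + deploymentsPerWeek(log.size(), 2));
    }
}
```

Numbers like these become useful in a retrospective, where a rising failure rate or falling frequency prompts the "why" conversation rather than a ranking.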

By adopting these newer system metrics, with support from exemplary leadership and a great culture, teams can avoid focusing solely on the metric numbers to please the business and shift instead toward improved flow, higher team member engagement, and a more balanced and sustainable approach to software development.

Poking Holes

I invite your perspective to analyze this post further – whether by invalidating specific points or affirming others. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.

Related Posts

  • Finally, Metrics That Help: Boosting Productivity Through Improved Team Experience, Flow, and Bottlenecks. December 29, 2022.
  • Developer Experience: The Power of Sentiment Metrics in Building a TeamX Culture. June 18, 2023.

Filed Under: Agile, DevOps, Leadership, Metrics, Product Delivery, Software Engineering

Developer Experience: The Power of Sentiment Metrics in Building a TeamX Culture

June 18, 2023 by philc

6 min read

“If you are in a good culture, you will feel and know it, and it’s sometimes hard to put words on those things.” ~ Wayne Crosby

What is the objective of our focus on Developer Experience? We aim to address various aspects, such as enhancing efficiency, encouraging collaboration, boosting job satisfaction, improving output quality, and fostering innovation and creativity.

The Buzz Around Developer Experience

There have been so many publications on this topic lately. Google “developer experience,” and it will return a list of links to DevX definitions, examples of DevX teams, and frameworks.

DevX is a new spin on prioritizing investment in people and ways of working. Four years ago, I recall every presentation emphasizing a people-first culture. Lately, though, there has been a surge in the number of articles published about developer experience (a.k.a. DX, DevX, and DevEx).

Why is developer experience becoming more prevalent?

In the era of waterfall and project-based software delivery, some have argued that developers were treated like resources on a factory line. They were often referred to as “resources” by the business and measured by their code output and utilization. It’s great to be recognized as a human being and to feel engaged and valued. But do the attributes of DevX apply only to developers, or possibly to many others on a delivery team? In many cases today, value is delivered by cross-functional delivery teams.

I have spent much of my career as a software developer and as a manager of software development teams; I have been measured by my contributions and output, and I have measured others similarly. I have worked with previous cultures, tools, and practices, as well as today’s tools, architectures, and ways of working. As much as anyone, I can appreciate the message and the focus on developer experience.

Attributes of Developer Experience

The term “developer experience” refers to the experience of developers as they do their everyday work, including any difficulties they may encounter.

The attributes of developer experience (DevEx, DX) are as follows:

  1. Perception of the development infrastructure: How developers perceive the technical infrastructure (e.g., development tools, issue trackers, programming languages, cloud platforms, libraries) and ways of working (e.g., working agreements, processes, and methods)​.1
  2. Feelings about work (happiness and engagement): How developers feel about their work, including whether they feel respected, care about it, and feel like they belong in their team.1
  3. Value of work (purpose and success): How developers value their work, including whether they feel they’re making an impact and whether their values and goals align with the company​.1

In addition, a fourth attribute, onboarding and investment in upskilling, could be included: how developers value an organization or department that prioritizes the onboarding process for new members and invests in their ongoing skills development.

Here are a few of the initiatives that are driving the developer experience:1

  1. Reduce developer wait times and interruptions
  2. Invest in maintaining a healthy codebase
  3. Make deployments safe and fast
  4. Empower teams
  5. Optimize for high work engagement

Developers with high work engagement exhibit persistence, dedication, and a commitment to delivering quality software. They proactively support the organization and consistently produce excellent work when they have the tools, autonomy, mastery, purpose, and a sense of success.

Success Comes From The Team and Team Experience

As of 2023, many organizations have significantly invested in transforming their ways of working through culture, Agile, Lean, DevOps, and cloud technologies. They invested in DevOps and Platform teams that build the capabilities for teams to improve software delivery and the developer experience. It still takes a team to deliver software today. What is so different about developer experience versus quality assurance experience, agile leadership experience, or product owner experience? We should expand the message to Team Experience (TeamX).

I recognize and respect software developers’ specific type of work; it is knowledge work, so developer experience must be acknowledged. However, we need to expand the focus to the delivery team experience, which includes developer experience.

  • What if the Quality Assurance Engineer could spin up an ephemeral test environment to test changes and have innovative tools and ways to run performance, exploratory, and chaos testing?
  • What if product owners could press the “delivery” trigger in an evolved, highly confident continuous delivery pipeline to deliver features to production or review features in an ephemeral environment?
  • Why would we ignore the agile leaders’ need for tools to facilitate team building, retrospectives, sentiment analysis, cycle management, and more?

Most of the “developer experience” aspects relate to the other roles on a cross-functional team and the team’s overall experience. Therefore, I prefer to focus on team experience and promote that “teams and team members with high work engagement exhibit persistence, dedication, and a commitment to delivering quality software. They proactively support the organization and consistently produce excellent work when they have the tools, autonomy, mastery, purpose, and a sense of success.”

Treating all workers with respect is important, but for creative work to thrive, a supportive environment must also be provided. I will continue to advocate for team experience (TeamX, TX) over developer experience (DevX, DX), and that developer experience is part of team experience.

Unlocking the Potential of Metrics

As a follow-up to my first post on modern-day metrics, “Finally, Metrics That Help: Boosting Productivity Through Improved Team Experience, Flow, and Bottlenecks,” this post highlights the exciting combination of modern-day insights available today. These insights come from both your delivery systems and the team’s sentiment.

Measuring team experience requires both delivery efficiency (system metrics) and team feedback (sentiment metrics).

System metrics: I have become an evangelist and promoter of today’s system metrics and data insights based on value stream management, the Theory of Constraints, and a mix of flow metrics and DORA metrics as a holistic workflow and measurement to accelerate efficiencies and product and portfolio delivery.

Sentiment metrics: Since 2022, I have increased my focus on sentiment frameworks like the SPACE framework2 and, more recently, the DevEx framework created by Abi Noda, Margaret-Anne Storey (author of SPACE), Nicole Forsgren (creator of DORA), and Michaela Greiler (previously Microsoft Research).3

I have learned that it is not uncommon for organizations to start with system metrics and then realize they can benefit from targeted frequent sentiment metrics.

One unique thing about my experience at my current organization is that, in addition to a semi-annual organization-wide employee net promoter score (eNPS) survey, we have been collecting simple team sentiment for many years using a Google Sheet: at the end of daily standup, each team member records an answer to “How are you feeling today?” as Positive, Negative, or Neutral.
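
A tally like the one our Google Sheet produces could be sketched as follows. The class and method names are hypothetical; our actual process is simple manual entry:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch: tally daily standup sentiment entries.
public class SentimentSketch {

    static Map<String, Long> tally(List<String> entries) {
        return entries.stream()
                .collect(Collectors.groupingBy(e -> e, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> today = List.of("Positive", "Positive", "Neutral", "Negative", "Positive");
        Map<String, Long> counts = tally(today);

        System.out.println("Positive: " + counts.getOrDefault("Positive", 0L));
        System.out.println("Negative: " + counts.getOrDefault("Negative", 0L));
        System.out.println("Neutral: "  + counts.getOrDefault("Neutral", 0L));
    }
}
```

Even a trivial daily count like this, tracked over weeks, surfaces sentiment trends that a semi-annual survey would miss.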

Wanting to expand on our sentiment feedback, we are looking into creating short, consistent, and frequently delivered surveys in-house using existing tools that provide us with this capability or investing in services with significant experience in this area and the types of questions that bring the best results. As we still learn to master value streams and system flow metrics, we must expand and invest in our sentiment metrics.

Final thoughts

Creating and delivering digital products is currently an exciting field. Modern delivery practices, methodologies, and innovative measurement techniques bring positive changes. Two types of data analysis are necessary to evaluate team effectiveness and happiness: delivery system metrics (such as Flow and DORA) and sentiment metrics (measured through surveys).

To remain competitive and succeed in today’s business environment, software delivery organizations must update their delivery practices and adopt modern system metrics and sentiment measurements.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.


References

  1. Ari-Pekka Koponen (28 February 2023), The ultimate guide to developer experience, swarmia.com, short URL: bit.ly/468g0q2
  2. Nicole Forsgren, Margaret-Anne Storey, Chandra Maddila, Thomas Zimmermann, Brian Houck, and Jenna Butler (06 March 2021), The SPACE of Developer Productivity: There’s more to it than you think, queue.acm.org, https://queue.acm.org/detail.cfm?id=3454124
  3. Abi Noda, Margaret-Anne Storey, Nicole Forsgren, and Michaela Greiler (03 May 2023), DevEx: What Actually Drives Productivity: The developer-centric approach to measuring and improving productivity, queue.acm.org, https://queue.acm.org/detail.cfm?id=3595878

Filed Under: Agile, Delivering Value, DevOps, Metrics, Product Delivery, Software Engineering

Shift Left Security, Security Unit Tests, OWASP Top 10, and AI: Key Practices for Secure Development

June 10, 2023 by philc

6 min read

What are we trying to improve? The adoption of practices to find security vulnerabilities early in the development lifecycle.

What outcome do we hope to achieve? Additional security coverage, where applicable, earlier in the software development lifecycle.

Let’s Shift Left

Are you familiar with the term “shift left”? It is a popular concept in the tech industry for good reasons. I define shift left as enabling the earliest feedback. It’s about determining if your code modification is functioning as intended and detecting any potential damage to pre-existing code as soon as possible. 

Why should we postpone identifying an issue until the last minute? The cost of detecting an issue increases the later it is detected in the delivery flow (cost is a whole other conversation). We started by shifting left for quality: running tests at all levels and moving away from dependence on extensive, long-running tests in staging or production environments, replacing many of them with tests earlier in the flow. Libraries that support unit testing exist for most languages. Our journey started several years back with Martin Fowler’s article on the practical test pyramid1 and the adoption of test-driven development (TDD).
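
As a trivial illustration of the fast, low-level feedback the test pyramid favors, a unit check on a pure function runs in milliseconds, long before any staging environment is involved. The function below is a made-up example, not from our codebase:

```java
// Hypothetical sketch: the kind of small, fast check the test pyramid puts at its base.
public class UnitCheckSketch {

    // A tiny pure function under test: turn a title into a URL slug.
    static String slugify(String title) {
        return title.trim()
                .toLowerCase()
                .replaceAll("[^a-z0-9]+", "-")   // collapse non-alphanumerics to dashes
                .replaceAll("^-|-$", "");        // strip leading/trailing dashes
    }

    public static void main(String[] args) {
        // Unit-level feedback: instant, no environment needed.
        if (!slugify("Shift Left Security!").equals("shift-left-security")) {
            throw new AssertionError("slugify failed");
        }
        System.out.println("unit check passed");
    }
}
```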

Security Shift Left Mindset

Security has become indispensable to our work in the evolving technology and software development landscape. It’s no longer just about developing features but about ensuring they are secure and reliable for our users. Significant improvements have been made at the platform and systems levels. Here’s the kicker: what if we applied the same shift-left approach to security? Developers already have access to helpful tools such as profilers, static analysis, and dynamic analysis scanners. The objective is to identify security issues quickly, alongside quality issues and defects. Why not make security-based unit tests a core practice of your team?

OWASP Top 10: Key Practices for Secure Development in the coding stage

We want to ensure our code is secure and get feedback early. One way to do this is to follow the OWASP Top Ten2 security risks while writing and compiling our code. We can use unit tests to help prevent these risks from happening.

1. Injection (OWASP A1): Create input validation tests to mitigate injection flaws.

// Java
@Test
public void testSqlInjectionVulnerability() {
  String maliciousInput = "1'; DROP TABLE users; --";
  assertFalse(isSqlInjectionSafe(maliciousInput));
}

2. Broken Authentication (OWASP A2): Develop tests to verify session management and authentication.

// Java
@Test
public void testSessionExpiration() throws InterruptedException {
  User testUser = new User("Test User");
  Session testSession = new Session(testUser);

  Thread.sleep(MAX_SESSION_TIME + 1);
  assertFalse(testSession.isValid());
}

3. Sensitive Data Exposure (OWASP A3): Formulate tests to prevent inadvertent data leaks.

// Java
@Test 
public void testDataLeak() { 
  User testUser = new User("Test User", "password"); 
  Logger testLogger = new Logger(); 

  testLogger.log(testUser);
  assertFalse(testLogger.containsSensitiveData()); 
} 

4. XML External Entity (XXE) (OWASP A4): Test XML parsers for correct configuration.

// Java
@Test 
public void testXXE() { 
  String maliciousXML = "..."; // some malicious XML   
  assertThrows(XXEException.class, () -> parseXML(maliciousXML)); 
} 

5. Broken Access Control (OWASP A5): Assert appropriate access levels for different user roles.

// Java
@Test 
public void testAdminOnlyAccess() { 
  User testUser = new User("Test User", Role.USER); 
  Resource restrictedResource = new Resource("Restricted Resource"); 
  assertThrows(AccessDeniedException.class, () -> restrictedResource.access(testUser)); 
} 

6. Cross-Site Scripting (XSS) (OWASP A7): Implement tests to check how the application handles untrusted data.

// Java
@Test 
public void testXSSVulnerability() { 
  String maliciousInput = "<script>alert('XSS');</script>"; 
  assertFalse(isXssSafe(maliciousInput)); 
} 

Other examples for the user interface (JavaScript)

1. Cross-Site Scripting (XSS) Protection: To prevent XSS attacks, you should test that your rendering function properly escapes user input.

describe('XSS Protection', () => {
  it('should escape potential script tags in user input', () => {
    const userInput = '<script>alert("xss")</script>';
    const escapedInput = escapeUserInput(userInput);
    expect(escapedInput).toEqual('&lt;script&gt;alert("xss")&lt;/script&gt;');
  });
});

2. Injection and Input Validation: Confirm your software correctly validates the user input and prevents SQL injection.

describe('Input Validation', () => { 
  it('should invalidate input containing SQL Injection attempt', () => { 
    const userInput = "'; DROP TABLE users; --";
    expect(isInputValid(userInput)).toBe(false); 
  });
}); 

3. Authorization/Access Control: Ensure certain UI elements are accessible only to authenticated or authorized users.

describe('Authorization', () => { 
  it('should not show the admin button for non-admin users', () => { 
    const user = { isAdmin: false };
    render(<Dashboard user={user} />);
    expect(screen.queryByText('Admin Panel')).not.toBeInTheDocument();
  });
}); 

4. Token Handling: Verify that authentication tokens are stored and handled securely.

describe('Token Handling', () => { 
  it('should not store tokens in localStorage', () => {
    setAuthToken('exampleToken');
    expect(window.localStorage.getItem('authToken')).toBeNull();
  });
});

Check out this YouTube video: DevSecOps wins with Security Unit Tests.2

What about AI?

As we focus on modern security practices, it’s worth touching on the advantages artificial intelligence (AI) brings to these efforts.

You can find posts and articles from earlier this year (2023) about the security vulnerabilities that code-assistance tools like GitHub Copilot can create. However, you can also find posts and articles detailing how quickly the security features of these tools are improving.4

There is no perfect solution. Still, tools like Copilot can learn from past incidents, analyze patterns, and predict vulnerabilities. They can generate test cases based on software behavior and suggest edge cases, strengthening our security unit tests. Machine learning models trained on numerous secure and insecure code examples can predict whether a new piece of code contains security vulnerabilities based on the patterns they have learned. These tools can flag potential security issues as developers write code, providing immediate feedback and opportunities for learning.

AI can assist in static and dynamic security testing. One example is that AI can help with the time-consuming task of sorting through false positives in static code analysis results. Additionally, AI can identify patterns in code that humans may overlook and point out areas that require further examination. In dynamic analysis, AI can help mimic the actions of real users by interacting with the software as humans would, finding vulnerabilities that manual testing might not uncover. The continuous learning process of AI models also ensures that our testing procedures will evolve alongside new threat patterns.
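At its core, this kind of assisted scanning is pattern recognition at scale. As a deliberately tiny illustration of the idea, here is a naive, hand-rolled version; the patterns, function name, and rules below are invented for this sketch and do not represent any real tool’s API or the depth of an actual AI-assisted analyzer:

```javascript
// A toy pattern matcher, illustrating (in miniature) the kind of insecure-code
// detection that AI-assisted analysis tools automate at far greater depth.
// The patterns and names below are invented for this sketch.
const INSECURE_PATTERNS = [
  { name: 'string-built SQL', regex: /(query|execute)\s*\(\s*["'`].*["'`]\s*\+/ },
  { name: 'raw HTML injection', regex: /\.innerHTML\s*=(?=\s*[^\s"'`])/ },
  { name: 'hard-coded secret', regex: /(password|apiKey|secret)\s*=\s*["'][^"']+["']/i },
];

function flagInsecurePatterns(sourceCode) {
  return INSECURE_PATTERNS
    .filter(({ regex }) => regex.test(sourceCode))
    .map(({ name }) => name);
}

// Flags the concatenated query; a parameterized query would pass clean.
console.log(flagInsecurePatterns("db.query('SELECT * FROM users WHERE id=' + userId)"));
// → [ 'string-built SQL' ]
```

Real AI-assisted tools go far beyond fixed regular expressions, learning new patterns from training data, but the feedback loop is the same: flag the risky construct at the moment the developer writes it.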

Wrapping this up

While the OWASP Top Ten may not encapsulate all possible security issues, it provides a robust foundation for our security practices. Incorporating these tests into our daily workflow is a strategic move that will substantially enhance our products’ security. Such tests should be designed to verify that the user interface effectively implements various security measures. For teams yet to adopt this “shift left” practice, it is imperative to start integrating security testing earlier in the development process and to uphold high security standards.

To stay ahead of evolving threats and reinforce our software’s security, we should also consider the integration of AI into our security strategy. AI can enable us to identify and tackle potential vulnerabilities proactively. However, it is essential to remember that these powerful tools are intended to supplement, not substitute, our existing security practices and intuition.

Let’s focus on enhancing our security practices as we move forward. We should adopt emerging technologies like AI while keeping our main goal in mind – creating secure and reliable software for our users. By implementing strategies such as “shift left” security and utilizing the tools available to test the security of our code, we can stay ahead of evolving security threats and maintain the trust of our users.

Poking Holes

I invite your perspective to analyze this post further – whether by invalidating specific points or affirming others. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.


References

  1. Martin Fowler (26 February 2018), The Practical Test Pyramid, martinfowler.com, https://martinfowler.com/articles/practical-test-pyramid.html
  2. Aimee, Nikki, featuring Abhay Bhargav (26 September 2021), DevSecOps wins with Security Unit Tests, youtube.com, https://www.youtube.com/watch?v=i34Ihbuslgw
  3. OWASP Top Ten, owasp.org, https://owasp.org/www-project-top-ten/
  4. Shuyin Zhao (14 February 2023), GitHub Copilot now has a better AI model and new capabilities, github.blog, https://github.blog/2023-02-14-github-copilot-now-has-a-better-ai-model-and-new-capabilities/

Filed Under: DevOps, Engineering, Software Engineering

Beyond Features: A Software Engineer’s Code of Conduct for Delivering Impactful Product Outcomes

April 23, 2023 by philc

3 min read

“The software industry is a great example of an industry where the responsible thing to do is not always the easy thing to do.” – Bill Gates

and

“With great power comes great responsibility.” – Uncle Ben, Spider-Man

When you are hired to work for someone else, it is essential to prioritize their interests in your work. The organization, its customers, and your team trust that you have their interests at heart.


In the rapidly evolving world of software development and digital product companies investing in digital transformation, cross-functional product delivery teams often grapple with effectively managing four critical aspects of their products: features, technical debt, risks (security), and defects. 

Moving Beyond the Feature Factory Mindset Today

Despite the widespread adoption of agile, lean, and DevOps practices, an excessive focus on delivering new features can lead to technical debt and deprioritized defects, ultimately hampering the team’s efforts and overall product quality. This misalignment of priorities and culture is often the result of overdominant voices influencing the end product.

Showcasing Security, Quality, and Performance as Core Product Features

One solution to this challenge lies in cultivating a team mindset and culture that values risk (security), quality, and performance (technical debt) as integral aspects of the products delivered to customers. As one Product Manager insightfully mentioned in a podcast, he and his team view quality, performance, and security as core “features” of their products.

To effect lasting change in team culture, we must foster collaboration and alignment across all roles, working together towards a shared purpose and goals. This balanced approach to feature development, technical debt management, risk mitigation, and defect resolution can lead to a more purposeful alignment of goals and exceptional customer products.

Introducing the Software Engineer’s Code of Conduct and Responsibilities to Product Delivery

As a software engineer hired to craft and deliver software on behalf of my team, my organization, and our customers, I will:

  1. Align goals, clarify outcomes, and realize results: I will ensure my work aligns with broader organizational goals and that I clearly understand the desired outcomes before starting any task, whether the work item is a feature, technical debt, a defect, or a security risk. This clarity promotes alignment, purpose, and direction, ensuring every effort contributes to the organization’s strategic objectives. It also means evaluating results after completion: comparing actual metrics with expected ones to assess the impact of the work. This reflective process fosters continuous improvement, ensuring that work advances outcomes rather than just increasing output.
  2. Strive for excellence: I will produce high-quality implementations using the best of my abilities and skills at the time, and I will speak up when quality is compromised or overlooked.
  3. Foster collaboration: Actively collaborate with team members to share knowledge and achieve collective goals, recognizing that teams – not individuals – deliver software.
  4. Embrace testing and continuous improvement: Implement all required automated tests, follow test-driven development practices, and actively participate in pull request code reviews to ensure high code quality and incorporate feedback before deploying to production.
  5. Prioritize security: Ensure code is secure and adheres to best practices to minimize risks and vulnerabilities before production and address any existing vulnerabilities in the code I work on as part of my daily routine.
  6. Manage technical debt: Regularly refactor code to maintain readability, optimize performance, and minimize accumulated technical debt, speaking up for any debt that is being ignored.
  7. Use my voice: Feel safe speaking out when the other responsibilities are neglected, and seek alignment from the team to understand why.
  8. Protect intellectual property and responsibly use AI: I recognize the benefits of leveraging AI in software development. However, I will act responsibly to protect my organization’s intellectual property. I will not share proprietary information or code without proper authorization and will ensure that AI technologies are used ethically and in compliance with applicable regulations.

Adopting these principles for software engineers working in cross-functional teams can help prioritize a balanced approach during product planning. This will lead to consistent production of high-quality, secure, and performant software while avoiding a backlog of risks, defects, and technical debt that can result from constantly adding new features.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: [email protected]


Related Posts

  • Agile Software Delivery: Unlocking Your Team’s Full Potential. It’s not the Product Owner. December 29, 2022 by philc

Filed Under: Agile, DevOps, Software Engineering

Outcome Metrics and the Difficulty of Reporting on Value

February 18, 2023 by philc

4 min read

What does it mean to “deliver value”? Defining value deserves its own focus. This article picks up at the delivery backlog, assuming your product leadership has identified the customers’ or organization’s needs; prioritized, defined, and outlined the value for the business and its customers; created a business case for the investment (including impact mapping and cost analysis); and defined the expected outcomes from changes or improvements to the digital product.

What problem are we trying to solve?

Outcomes should not be kept from the teams; sharing them is how we close the loop.

This article dives into the crucial topic of measuring the outcomes following the release of enhancements or changes and informing the team(s) that delivered the work. Did the change or new feature deliver the expected value? Are we delivering the right things? Knowing the outcome or level of success motivates team members and bolsters their purpose. Teams can use the results to glean valuable insights even when they do not meet expectations.

Fast and agile delivery is not the end goal; value is the end goal

“Making the wrong thing faster only makes us wronger.”1

In software delivery, it is essential to remember that delivery is not the end goal; value is. It is easy to fall into the trap of delivering software quickly and efficiently. Still, it is all for nothing if it does not provide value to the customer or organization. Delivering unwanted features can be a sad waste of productivity and a misuse of talent.1 These are just a couple of reasons why it is crucial to understand what value means in software delivery and how to measure it.

Organizations need to understand the real-world impact of their digital product changes, so measuring its outcome value, determining the return on the investment, and learning from outcomes are critical. Unfortunately, accurately tracking and reporting outcomes and value returned can be complex due to several challenges.

The challenges of measuring the final outcome of digital changes

What are the meaningful outcome metrics? Are such metrics communicated down to the delivery team level? Do companies practicing OKRs report on the final outcome of those OKRs?

First, many organizations lack the tools to measure the value of outcomes from software delivery. Without tooling and data insights, it is challenging to track and report on the success of the delivered changes.

Second, measuring the actual ROI requires significant time and effort. It is essential to determine the impact of digital product changes on the business or customer, and this can be a complex process. The work may require additional resources, like data analysts or business intelligence tools.

Third, the impact of the software changes may take time to become apparent. It might take months or even years to see the actual effect of the changes delivered on the business or customer. This lag can make it challenging to track and report the real degree of success or value delivered in time to influence the teams.

Fourth, accounting for the success of an outcome and the value it returns may require additional resources and a shift in the organization’s mindset to prioritize measuring this work.

Finally, there could be pushback when inquiring about the value of the product or platform changes teams delivered. To ensure that the value outcome is consistently tracked and reported, organizations must determine who is best suited for monitoring and reporting the value outcomes of what the teams deliver.

Teamwork and transparency at the team level

For those using Scrum or Kanban or similar lifecycle practices and tools, consider adding elements to the delivery team’s Epics, Features, and possibly User Stories.

Why: Why are we working on this?

Value: Short description of the expected outcome for the organization or customers.

These can align with OKRs for those using them.
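As a sketch of what this could look like in practice, the Why and Value fields can travel with the work item itself. The shape and field names below are invented for this illustration, not a real Jira or Azure DevOps schema:

```javascript
// Illustrative work-item shape; the field names are invented for this sketch,
// not a real Jira or Azure DevOps schema.
const epic = {
  title: 'Self-service password reset',
  why: 'Password-reset tickets are our highest-volume support category.',
  value: 'Reduce reset-related support tickets by 30% within two quarters.',
  okr: 'O2-KR1', // optional link for teams practicing OKRs
};

// A lightweight readiness check a team could apply before accepting work:
function isReadyForDelivery(item) {
  return Boolean(item.why && item.value);
}

console.log(isReadyForDelivery(epic)); // → true
```

The check itself matters less than the habit: if a team cannot fill in the why and the value, that conversation happens before delivery rather than after.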

Benefits:

  • Shared understanding, alignment, and purpose-driven development and delivery.
  • Documenting the why and value enables team alignment and autonomy and increases team member engagement.
  • A more precise understanding of priority reasoning.
  • Learn from the outcomes (gain insights).

Challenges:

  • Tools to help measure the outcome.
  • Measuring the outcome requires significant time and effort.
  • The impact of the change(s) delivered may take time to become apparent, ranging from weeks to months or even longer.

Closing the loop with the delivery teams:

  • Schedule outcome retrospectives with teams.
  • Document the outcome(s) details to Jira, Azure DevOps, Rally, or whatever tool your teams use.

Final thoughts

In many organizations, technology leadership must measure and report on the performance of the software delivery teams.2 Do your delivery teams receive feedback on their work’s value to the organization or customer? Are they aware of the impact and success of their efforts after they deliver on a change? If not, it’s time to reevaluate your approach.

By providing your teams with regular feedback and, more importantly, the overall results or outcomes of their work, you can increase their motivation and sense of purpose, leading to a more engaged and productive workforce.

If you aren’t doing so already, start tracking, measuring, and reporting on your team’s outcomes to align your business objectives and change investments with their performance, avoid costly and wasteful overproduction, learn from the changes made to your delivered product, and achieve greater success. Are your teams “delivering value”?

Related articles:

  1. Value part 1: Maximizing Technology Team Effectiveness: Insights from a CEO Conversation
  2. Measuring delivery teams: Finally, Metrics That Help: Boosting Productivity Through Improved Team Experience, Flow, and Bottlenecks.

References:

  1. Smart, Jonathan [@jonsmart]. “From Faster to Sooner” Twitter, 26 June 2021

Poking Holes

I invite your perspective to analyze this post further – whether by invalidating specific points or affirming others. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.

Filed Under: Agile, Delivering Value, DevOps, Engineering, Leadership, Lean, Metrics, Product Delivery


Copyright © 2025 · Rethink Your Understanding
