Isolated Scrum: Does Scrum need to be Complemented with eXtreme Programming to Succeed?

Abstract

Extreme Programming [1] (XP) and Scrum [2] are Agile methods based on the same underlying principles, but they differ in purpose. As a project management methodology, Scrum omits agile technical practices, an area that XP covers in abundance. When the two are used in conjunction it is easy to see how XP fills these gaps, but this blog post questions whether XP is in fact a necessity for Scrum to be successful. It looks at why using Scrum in isolation might degrade software quality, and explores whether this does in fact happen. A review of the available literature suggests that XP is not a prerequisite for Scrum to be successful, but that good technical practices are.

Introduction

Extreme Programming (XP) and Scrum are frequently discussed in today's software development industry. Both methods are based on the Agile Manifesto, so share many of the same principles [3]. However, there is one key area where there are substantial differences: technical practices. Scrum does not prescribe any technical practices, while XP has technical practices at its very core [4].

When used together it is easy to see why Scrum and XP work very well, but what happens when Scrum is adopted on a software engineering project without complementing it with agile technical practices such as those prescribed by XP? Does software quality suffer? This is a question that recently came to mind, and as I soon found out, it has been a hot topic for debate. I couldn't hope to summarise all the arguments and keep this post concise at the same time, so in this post I am only going to highlight the pertinent questions that came to my mind and summarise what I found.

The Transition to Evolutionary Design

To set this post in context, consider the following Agile principles [3]:

  • Deliver working software frequently
  • Continuously deliver valuable software
  • Welcome changing requirements

To deliver software frequently requires short development iterations (termed 'Sprints' in Scrum), which generally range from one or two weeks to a month. Delivering valuable software requires the implementation of software features according to business value, and because requirements change, feature prioritisation needs to be reconsidered before each Sprint. In Scrum, the Product Owner [5] is responsible for determining the most valuable features and asserting their priority.

Implementing the most valuable features does however come at a cost: each sprint, developers need to produce value without advanced preparation [6]. As a result, developers cannot set aside several weeks for designing and implementing system frameworks that consider the 'big picture', and must instead focus on designing and implementing only the prioritised features. But good design is critical to the long-term maintainability of code, and generally speaking, developers are taught to deliver large, up-front designs that consider the 'big picture', not just the features being added.

However, in this new paradigm it isn't that design is ignored; rather, the design of the system is continually updated and improved as each new feature is added. The overall system design (the 'big picture') emerges as time proceeds, rather than being determined weeks, or even months, ahead of implementation. This style of design is termed Incremental, Continuous, or Evolutionary Design, and provides a solution to the Agile design problem by allowing the technical infrastructure to be designed and built in small pieces as you deliver functionality [6]. Shore presents an excellent discussion of incremental design in 'The Art of Agile Development' [6], which is recommended reading for anybody looking for a more in-depth introduction.

We thus have a design mechanism that is compatible with Scrum, but does it fall down in the absence of XP?

A Design Evolved, or a Design Dissolved?

Incremental design has the potential to allow Scrum developers to produce well designed software through short, productive sprints, but can incremental design be performed effectively without the support of agile technical practices, or do the changes to the design really become sets of tweaks and hacks which ultimately pollute the design and increase software rot [7]?

To see what problems might be encountered, consider the three steps of incremental design:

  1. Create the simplest design that would work: Creating a graceful, robust and generic design generally requires more effort than a simple design, so this step is relatively 'easy' if one puts aside the self-discipline required not to implement a 'better', more complex design.
  2. Incrementally add to that design as the needs of the software evolve: Adding to the design is also generally 'easy', again, putting aside any associated technical challenges.
  3. Continually improve the design: This can be a little more involved, and generally requires good engineering skills as opposed to merely programming skills. There are no hard and fast rules that tell us when this step is complete.
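As a toy illustration of these three steps (all names and numbers hypothetical, not taken from any of the cited sources), consider a discount calculator that starts as the simplest thing that could work, is incrementally extended when a new need arrives, and is then improved without changing its external behaviour:

```python
# Step 1: the simplest design that would work -- a single flat discount.
def price_v1(amount):
    return amount * 0.9  # every customer gets 10% off

# Step 2: incrementally add to the design as needs evolve -- loyalty
# members now get a bigger discount. The quick change duplicates the
# discount logic in two branches.
def price_v2(amount, is_member):
    if is_member:
        return amount * 0.8
    return amount * 0.9

# Step 3: continually improve -- pull the duplicated magic numbers into
# one table-driven design. External behaviour is unchanged.
DISCOUNTS = {True: 0.8, False: 0.9}

def price_v3(amount, is_member=False):
    return amount * DISCOUNTS[is_member]
```

The third step is the subtle one: nothing forces it to happen, and the code 'works' whether or not it is done, which is exactly why it requires engineering discipline rather than just programming skill.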

Looking further at the third step: improving a design is very much a process of reducing Technical Debt [8]. Technical debt is a metaphor for the extra effort required during future development because of quick and dirty design choices made now. When a design is incrementally added to, but not improved, the technical debt increases, while improving a design can be seen as a way of paying down the debt and reducing future development time. Technical debt is reduced by code refactoring [9]: restructuring code internally without changing its external behaviour. By its very nature, the effect of code refactoring will not be visible to the user or Product Owner.

XP encourages the inclusion of Slack Time in each iteration [10], where Slack Time can be used for important, but not urgent, technical work. This includes major refactoring tasks [10, 11], and helps to support continually improving the design. But because Scrum doesn't include technical practices, this is one area where Scrum, if used in isolation, could build up technical debt. This is especially true if the Product Owner does not accept the justification for time spent on code refactoring. Refactoring is, however, a vital step in incremental design, and is one area where Scrum will surely falter if it is not performed regularly.

Continuing the discussion under the assumption that refactoring has been accepted as a necessary companion to Scrum, it also needs to be recognised that the process gets riskier the less frequently it is performed. There are several reasons for this:

  • As additional features are added, code can easily become closely coupled [12] unless the developer is very disciplined about 'separating concerns' [13].
  • As the scope of a refactoring task increases, so too does the likelihood of introducing defects.
  • If there are unit tests associated with the refactored code, the probability of having to update the unit tests (e.g. restructure them) increases in line with the size of the refactoring task.
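A unit test that pins external behaviour is what catches the second kind of problem. A minimal sketch (hypothetical names, not from any cited source) of a refactoring that looks equivalent but isn't, and the test that exposes it:

```python
def word_count(text):
    """Original: count whitespace-separated words."""
    return len(text.split())

def word_count_refactored(text):
    # A careless 'tidy-up' that switches to splitting on a single space.
    # It looks equivalent, but behaves differently for repeated spaces.
    return len(text.split(" ")) 

def behaviour_is_preserved(fn):
    # A behaviour-pinning check written before the refactor.
    return fn("a  b") == 2  # note the double space between the words

print(behaviour_is_preserved(word_count))             # True
print(behaviour_is_preserved(word_count_refactored))  # False: defect caught
```

Without the pinned check, the subtle change in behaviour would only surface later, as a defect far removed from the refactoring that introduced it.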

For these reasons, it seems important for refactoring time to be included in each Sprint, allowing both for smaller refactorings as part of the feature implementation process and for larger, 'breakthrough' refactoring tasks [6].

Refactoring Without Unit Tests: A Fuss About Nothing?

There is no reason why a project cannot write unit tests without Test Driven Development (TDD), but for the sake of argument, assume that developers aren't writing unit tests but are performing incremental design, and are regularly refactoring. Ron Jeffries constantly warns against refactoring without unit tests [14], so is this situation likely to lead to Scrum failure?

This post [15] asserts that although it is preferable to have unit tests when refactoring, the risk associated with some tasks is negligible. This is particularly true of automated refactorings, such as extracting code into a method or renaming a variable, as provided by tools such as Resharper. Even these tools fail occasionally, though, especially when variable names are referenced via string literals (e.g. generated code bindings in C#).
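The string-literal pitfall is easy to reproduce outside C# too: a rename tool that only tracks symbol references will miss look-ups made through strings. A hypothetical Python sketch of the same failure mode:

```python
class Order:
    def total_price(self):  # imagine a tool renaming this to grand_total
        return 42

order = Order()

# Direct reference: a symbol-aware rename refactoring updates this call.
print(order.total_price())            # 42

# String-based reference: invisible to the rename tool, so after the
# method is renamed this line would raise AttributeError at runtime.
method_name = "total_price"
print(getattr(order, method_name)())  # 42
```

This is precisely the case where an automated, 'safe' refactoring quietly stops being safe, and where a regression test is the only thing standing between the rename and a runtime failure.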

There has also been a small amount of scientific research that has looked at the effect of having unit tests during refactoring tasks. In [16], the authors conducted experiments with two groups of developers performing a set of refactoring tasks. Each group comprised professional software developers and students with unit testing experience, but unit tests were only made available to one group.

The results indicated that having unit tests did not lead to higher-quality refactored code, although the measures used to gauge code quality were not reported, and the quality of refactored code was the only thing considered (newly written code was excluded).

In terms of risk, there appear to be very few people debating the merits of having unit tests available when refactoring. This may be because those who write unit tests already believe they are beneficial, and from personal experience I can certainly recall occasions when unit tests have caught defects introduced during larger refactoring tasks. In contrast, some people who don't unit test seem just as successful in their refactoring efforts and don't have particularly high defect rates, while others clearly do. One reason for this could simply be the different technical practices of individual developers, which leads to the penultimate section.

Good Technical Practices

This post started with a simple question: does software quality suffer if Scrum is adopted without Agile technical practices such as those prescribed by XP? Pursuing the answer to this question quickly led to further questions, and after trawling through book extracts, forum posts and blogs, a resounding message started to emerge: good technical practices are required to succeed with Scrum, but this is true of many projects, and is not Scrum specific [17]. This stance is echoed by Waters, who states that Scrum can achieve good results without XP if product quality isn't a problem area [18].

Scrum cannot be successful without incremental design though, and poor design will always increase technical debt (regardless of the project's management methodology). Unless developers and testers are confident that they can maintain quality without XP techniques, transitioning to Scrum without XP could still be a risky move, but it is not doomed to failure. Fowler offers a similar opinion, stating that Scrum exacerbates the problem of taking on too much technical debt, and that the Scrum community needs to redouble its efforts to ensure that people understand the importance of strong technical practices [17].

Scrum and XP are clearly highly compatible, and it is easy to see how XP can bring improvements to Scrum by improving quality. If Scrum were combined with XP practices such as Test Driven Development (TDD) [19] and Pair Programming [20], the risks associated with refactoring could be reduced. In pair programming, one of the developers is constantly considering design improvements as the pair develop, facilitating many design improvements, not just refactoring. TDD on the other hand explicitly repeats a refactoring step every few minutes, giving a developer continuous opportunities to stop and make design improvements. Neither of these techniques is mandatory for a developer to continually improve software design, but they can clearly be beneficial.
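TDD's built-in refactoring step can be made concrete with a minimal, hypothetical sketch (the classic FizzBuzz exercise, not an example from the cited sources): write a failing test first, make it pass in the simplest way, then restructure while the test keeps the behaviour pinned.

```python
# Red: a failing test states the desired behaviour before any code exists.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Green: the simplest code that passes the test.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Refactor: restructure (here, build the answer from parts instead of
# enumerating combinations) while the test guards external behaviour.
def fizzbuzz(n):
    word = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
    return word or str(n)

test_fizzbuzz()  # the same test passes against the refactored version
```

Because the cycle repeats every few minutes, each refactoring is small, which is exactly the property argued for earlier: the less frequently refactoring happens, the riskier it becomes.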

Another area of compatibility is regression testing: XP would almost certainly provide us with automated regression testing. Regression testing is key to finding defects early, and catching defects early significantly lowers the total development cost [21]. Incremental design will force us to constantly refactor code, and each code refactoring provides opportunities for new defects to sneak in, so automated testing seems logical for any reasonably large system. But at the end of the day, the use of automated regression testing needs to be a cost-based decision: the cost of writing automated tests versus the cost of performing them manually. The limited evidence so far suggests that unit tests alone don't improve the quality of refactored code, so in this respect at least, unit tests don't appear to be beneficial, although the evidence found was very limited in scope.

Conclusion & Future Work

Refactoring is a vital component of incremental design, and thus is a vital technical component of Scrum. In this respect, Scrum does seem to be missing an important concept, and Product Owners need to accept this if they hope to maintain any reasonable level of software quality. Refactoring needs to be performed regularly, and the less frequently it is performed, the higher the stakes become. Refactoring tasks could be explicitly added to Sprints as and when needed, or performed via the introduction of Slack Time, but either way, it is hard to see how Scrum can be successful without this technical practice.

The benefits of having unit tests during large refactoring tasks do not appear to have been established, and this is one area where further research could be very interesting. One problem with conducting such research is that the necessary experiments are hard to conduct scientifically, with too many variables changing (developer experience, style, knowledge).

In general, the resounding message has been that XP is not required for Scrum to succeed, but that good technical practices are. This shouldn't come as too much of a surprise, but it's nice to re-reach that conclusion in a logical, supported manner. It is good to know that if good technical practices are in place, Scrum isn't doomed to failure from the outset; this was a genuine concern when the initial question was posed. It seems obvious that the techniques prescribed by XP could benefit a Scrum project by improving quality, but like many things in software engineering, having 'tools' at your disposal is more important than trying to follow hard and fast rules, and this stands whether those tools are XP techniques, unit testing frameworks or design patterns. This post [22] has an excellent representation of the technical skills you might want to employ on a Scrum project, but the success or failure of a Scrum project is more likely to be based on the skills and attitudes of the team members than on the project methodology being followed.

References

  1. http://www.extremeprogramming.org
  2. http://www.scrum.org/
  3. Principles behind the Agile Manifesto, 2001
  4. The Differences Between Scrum and Extreme Programming, M. Cohn, MSDN TechLeaders, 2009
  5. Scrum Product Owner, Scrum Methodology:Scrum Basics, 2009
  6. The Art of Agile Development, J. Shore, and S. Warden. Chapter 9: Incremental Design and Architecture. O'Reilly Media, 2009
  7. Software Rot, Wikipedia, June 2013
  8. Technical Debt, M. Fowler, 2009
  9. Refactoring: Improving the Design of Existing Code, M. Fowler et al., Addison Wesley, 1999
  10. The Art of Agile Development, J. Shore and S. Warden. Chapter 8: Planning. O'Reilly Media, 2009
  11. XP Practice: Slack, A. Marchenko, 2007
  12. Coupling, Wikipedia, June 2013
  13. Separation of concerns, Wikipedia, June 2013
  14. Extreme Programming Adventures in C#, R. Jeffries. Microsoft Press, 2004
  15. Living Dangerously: Refactoring without a Safety Net, J. Sonmez, 2010
  16. Refactoring with unit testing: A match made in heaven?, F. Vonken and A. Zaidman, WCRE, page 29-38. IEEE Computer Society, 2012
  17. Flaccid Scrum, M. Fowler, 2009
  18. eXtreme Programming Versus Scrum, K. Waters, All About Scrum, 2008
  19. Test Driven Development, M. Fowler, 2005
  20. The Art of Agile Development, J. Shore and S. Warden. Chapter 5: Thinking. O'Reilly Media, 2009
  21. The Economic Impacts of Inadequate Infrastructure for Software Testing, G Tassey, National Institute of Standards and Technology, Technical Report NIST PR 02-3, 2002
  22. Engineering Practices Necessary for Scrum, A. Fuqua, 2011