Agile vs Fragile: The role of “independent” testing in the Agile framework

Part 15 in a series of 17. To start at the beginning, first read Agile vs Fragile: A Disciplined Approach or an Excuse for Chaos.

Teams often argue that Agile means there is no longer a need for “independent” teams of testers who verify that the solution was built correctly according to the requirement specifications.  They argue this is now the role of the “super developer” and the business representative, who are both embedded in the team.  After all, the business knew what it wanted in the first place.  But Agile, in and of itself, doesn’t change any of “Brian’s basic laws of software development”:

  1. Developers suck at testing
  2. The business is not technically proficient enough to do system testing
  3. Just like the Easter Bunny, Super Developers don’t exist
  4. Applications don’t test themselves
  5. Just having testers doesn’t guarantee success

Now, before you get upset because you feel slighted, let’s dive into each of these areas, and then we can talk about how they might apply to Agile or Fragile teams.

The first law, that developers suck at testing, is easily defended.  I give you this one argument: if developers were great at testing, there would be no defects.  They would fully test their code and integrations and discover all of the defects in advance.  Since this never happens, I can safely infer that they must not be good at it.  Oh, and by the way, not doing it is the same as sucking at it.  Relying on developers to test, especially in an Agile world, is not realistic.  They simply don’t have the time within short sprint cycles to do it.

The second law says that the business is not technically proficient enough to do system testing.  Why do I say system testing?  Because system testing differs significantly, in its purpose, from user acceptance testing.  The purpose of system testing is to verify that the solution, as built, meets the specifications.  In other words, it verifies that you built the solution right.  User acceptance testing, on the other hand, validates that the solution, as delivered, meets the business need.  In other words, it validates that you built the right solution.  We don’t train our business users to do detailed testing aimed at ensuring that the solution meets the specifications.  They aren’t versed in destructive or negative testing.  They don’t understand how to performance test.  They aren’t interested in verifying that all error handling works correctly.  Their look at the solution is superficial; they are simply trying to determine whether the solution solves the business need.  Relying on user acceptance testing alone does nothing to verify the correctness and stability of your solution.
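To make the distinction concrete, here is a minimal sketch of the kind of negative testing a system tester performs but a business user rarely would. The function and its validation rules are invented purely for illustration:

```python
# Hypothetical example: a system tester deliberately probes invalid input,
# boundary values, and error handling -- paths a business user, focused on
# "does this solve my problem?", rarely exercises.

def parse_quantity(value):
    """Parse an order quantity; raise ValueError on bad input (invented rules)."""
    qty = int(value)  # raises ValueError on non-numeric input
    if not 1 <= qty <= 999:
        raise ValueError("quantity out of range")
    return qty

def run_negative_tests():
    """Exercise the error paths, not just the happy path."""
    results = {}
    for bad in ["abc", "-1", "0", "1000", ""]:
        try:
            parse_quantity(bad)
            results[bad] = "ACCEPTED (defect!)"
        except ValueError:
            results[bad] = "rejected"
    return results
```

A business user would likely type a valid quantity and move on; the system tester is the one feeding it `"abc"`, empty strings, and boundary values to prove the error handling actually works.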

Law three states that, just like the Easter Bunny, Super Developers don’t exist.  Before you get mad at me and say that you have some of the best developers there are, understand what I am saying.  No company has a corner on good developers.  Every organization has a range of competency and skill sets, and none has found the only super developers out there.  For every great developer there are four or five average developers, and some poor ones.  Your team is only as successful as its weakest link; if you have average or poor developers, you will get average or poor results.  The idea of finding a single developer who is great at design, coding, and testing is not realistic.  The idea of finding an entire team of them is delusional.

The fourth law, applications don’t test themselves, holds that if you reduce or eliminate the testing cycles within your release or sprint, quality necessarily suffers.  On Agile projects, if you expect development to test, they won’t have the time; if you expect the business to test, they won’t have the depth.  Ignoring testing as a critical part of your project’s success will only lead to failed projects and significant issues in production.

The final law states that just having testers doesn’t guarantee success.  This law is an indictment of the many testing groups that don’t add value.  Yes, there are testing groups that do nothing to aid the delivery of quality software.  Why, you ask?  Because they see their job as pushing back on development at all costs rather than ensuring the successful delivery of quality software to the business.  These teams have lost their way, and they help create a “Quality Averse” environment in which development and testing don’t trust each other and don’t see themselves as sharing a common goal.  These testing teams aren’t staffed with the brightest and best, but with the dregs of the organization; moving someone to the testing team is the final step in eliminating a team member.  They often have no formal training in testing, or are staffed with Subject Matter Experts who have no clue how to test effectively.  Effective testing teams are professional testing teams, staffed with highly skilled and competent testers who understand how to test highly complex software solutions.  They are technically proficient in the technologies being deployed, and they have a sound understanding of the business domain.

“Independent” testing doesn’t mean siloed.  “Independence” refers to the role the testing team plays in the value chain: separate, not separated.  Testers have to understand that their job is to make an independent assessment of whether the developed solution meets the requirements.  That doesn’t mean they are excused from talking to developers or participating in project activities.  Quite the opposite: they need to be highly involved, identifying issues before they are ever coded into the solution.  They need to participate in design sessions and look, with an independent eye, for what the team is missing.

Testing can’t be treated as a “bolt on” activity that happens only at the end of all the sprints.  It should be a continuously integrated part of every sprint, and the testing team should make sure they are a planned part of each one.  The testing activity needs to be planned in the sprint planning session: testing costs story points, and the testing team needs to present that as part of the overall cost of the sprint.  Testing can’t be held until the end, when the final piece of the puzzle is put in place.  I have been asked whether it makes sense to run a testing sprint as the last sprint.  My answer is that if you feel you have to bolt a sprint on at the end to do testing, then you probably failed at testing all along the way.  This doesn’t mean you can skip integration testing at the end of development, but it shouldn’t be the only place where you capture defects.

Testing can’t be seen as an “us” versus “them” activity between developers and testers.  To be successful, the team must have a team dynamic.  In a “Quality Averse” environment, development believes the testing team is focused on making them look bad, and the testing team believes development is trying to push bad code past them.  What both sides fail to realize is that neither team is as important as the product that gets into the hands of the customer.  In a true Agile environment, all team members are laser focused on delivering value to the customer, and they realize they can only do that by delivering a quality product the customer can use.  To do this they have to see themselves as sharing a common goal.  Another way to put it: they view themselves as all being in the same boat, paddling in the same direction…at the same time.

In true Test Driven Development, “independent” testing becomes less about execution and more about confirmation.  TDD has the developer write the test before writing the code: they identify the test case or cases that will verify the solution was built correctly, and then they build the solution against those tests.  Here the role of testing becomes less about developing and executing test cases and more about verifying that the tests, as designed, will actually verify the requirements, and then confirming that the tests were successfully run.  This is more of a Quality Control function than a testing function.  In a Fragile environment, development runs a unit testing tool and passes the results to a QC resource who verifies only that the tests ran; no one verifies that the tests actually ensure the solution meets the requirements.
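A minimal sketch of that test-first flow, with an invented requirement and function names chosen purely for illustration:

```python
# Hypothetical TDD flow: the test is written first, from the requirement,
# and the code is then written to make it pass.

# Step 1 -- the developer writes the test from the requirement
# ("discount is 10% for orders of 100 units or more"):
def test_bulk_discount():
    assert bulk_discount(99) == 0.0     # just under the threshold
    assert bulk_discount(100) == 0.10   # exactly at the threshold
    assert bulk_discount(500) == 0.10   # well above it

# Step 2 -- the developer writes just enough code to make the test pass:
def bulk_discount(units):
    return 0.10 if units >= 100 else 0.0

# Step 3 -- the "independent" QC role here is confirmation, not execution:
# check that the test really maps to the requirement (does it probe the
# boundary at exactly 100?) and that it was actually run.
test_bulk_discount()
```

The point is where independence moves: the tester's value is no longer in running the test, which the developer already does, but in judging whether the test as written would catch a solution that misses the requirement.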

Yes, there is a role for “independent” testing in the Agile world.  There is a role for effective testing, not just a group of testers who add no value to the delivery.  Fragile tries to eliminate testers from the equation; Agile embraces them as an essential and trusted part of successful delivery.

In the next installment we will talk about the place for tools in Agile testing.  In the meantime…Keep on testing!
