The 5 Pillars of Unit Testing


These are what I consider to be the five most desirable quality attributes of a unit test:
1.       Unit tests should be accessible: everyone should be able to run them
Running a unit test should not involve more than getting the source code from source control, compiling the SUT/test code and clicking a button to run all the tests. No configuration should be involved. Your tests should not require setting up external components like a database or an SMTP server.
2.       Unit tests should be repeatable
What is the value of a test you can’t trust? Unit test results should only change when modifications are introduced in the SUT. When the SUT remains the same, the unit tests should always return the same result. Therefore unit tests should be state independent, and in any case they should never depend on a shared resource like a file or persistent storage.
3.       Unit tests should be easy to write and maintain
In a software shop we should only use a practice or tool when it is economically viable, meaning that the cost of writing and maintaining the tests should not exceed the cost of having a bug and fixing it. To minimize the cost of creating and maintaining tests we should use appropriate practices and tools. We should also hold test code to the same quality requirements as production code. Refactoring is as important for test code as for production code.
4.       Unit tests should be resistant to change
We should write our tests in such a way that the number of tests failing when introducing a change is minimal. When testing the SUT it is common to set its variables to the proper state so it can be tested. This type of setup code is very fragile. We should always try to isolate this type of code so it is easy to change.
5.       Unit tests should run quickly
Ideally unit tests should be executed after every build. Having to wait more than a minute to execute all the unit tests will decrease the overall productivity of the tests. Unit tests should never depend on resources that are costly to invoke, like web services or databases.
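As a small illustration of pillars 2 and 5 (the class and test values below are hypothetical, not taken from a real project): by passing the current date in as a parameter instead of reading the system clock inside the SUT, the test needs no shared resource, always returns the same result and runs entirely in memory.

```csharp
using System;

// Hypothetical SUT: it takes time as an argument instead of reading the
// clock itself, so tests against it stay repeatable and fast.
public class Invoice
{
    private readonly DateTime dueDate;

    public Invoice(DateTime dueDate)
    {
        this.dueDate = dueDate;
    }

    // Pure in-memory check: no file, database or system clock involved.
    public bool IsOverdue(DateTime today)
    {
        return today > dueDate;
    }
}
```

A test can then assert that `new Invoice(new DateTime(2010, 1, 1)).IsOverdue(new DateTime(2010, 2, 1))` is true on any machine, today or in a year. Had IsOverdue read DateTime.Now internally, the result would change over time and the test would no longer be repeatable.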
Is there something missing here?
View Geoffrey Vandiest's profile on LinkedIn

Extensibility for ASP.NET MVC


I’m working on an extensibility framework made to be used on top of ASP.NET MVC. My extensibility framework is based on the famous IoC container StructureMap.
The use case I’m trying to fulfill is simple: create an application that has some basic functionality that can be extended for every customer (= multi-tenancy). There should only be one hosted instance of the application, but this instance can be adapted for every customer without making any changes to the core website.
I was inspired by the article on multi-tenancy written by Ayende Rahien. Another source of inspiration was Eric Evans’s book on Domain-Driven Design. My extensibility framework is based on the repository pattern and the concept of aggregate roots. To be able to use the framework, the hosting application should be built around repositories and domain objects. The controllers, repositories and domain objects are bound at runtime by the ExtensionFactory.
A plug-in is simply an assembly that contains controllers, repositories or domain objects that respect a specific naming convention. The naming convention is simple: every class should be prefixed with the customer ID, e.g. AdventureworksHomeController.
To extend an application you copy a plug-in assembly into the extension folder of the application. When a user requests a page under the customer root folder, e.g. [customerID]/[controller]/[action], the framework checks whether there is a plug-in for that particular customer and instantiates the custom plug-in classes; otherwise it loads the default ones. The custom classes can be controllers, repositories or domain objects. This approach makes it possible to extend an application at all levels, from the database to the UI, through the domain model and repositories.
When you want to extend existing features you create a plug-in assembly that contains subclasses of the core application’s classes. When you have to create totally new functionality you add new controllers to the plug-in. These controllers will be loaded by the MVC framework when the corresponding URL is requested. If you want to extend the UI you can create a new view inside the extension folder and reference the view from a new or subclassed controller. To modify existing behavior you can create new repositories or domain objects, or subclass existing ones. The framework’s responsibility is to determine which controller, repository or domain object should be loaded for a specific customer.
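The customer-specific lookup can be sketched like this (a simplified, hypothetical version of the decision the ExtensionFactory makes; the real framework resolves actual types through StructureMap rather than comparing name strings):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the naming-convention lookup: given a customer ID
// and a default type name, prefer the customer-prefixed type when a plug-in
// provides it, and fall back to the default one otherwise.
public class ExtensionResolver
{
    // Registered type names, e.g. "AdventureworksHomeController".
    private readonly HashSet<string> registeredTypes;

    public ExtensionResolver(IEnumerable<string> typeNames)
    {
        registeredTypes = new HashSet<string>(typeNames, StringComparer.OrdinalIgnoreCase);
    }

    // Returns the customer-specific name when available, else the default.
    public string Resolve(string customerId, string defaultName)
    {
        string customName = customerId + defaultName;
        return registeredTypes.Contains(customName) ? customName : defaultName;
    }
}
```

For example, resolving "HomeController" for customer "Adventureworks" yields "AdventureworksHomeController" when that plug-in type is registered, while an unknown customer simply gets the default "HomeController".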
I advise having a look at StructureMap, and especially at its Registry DSL features.
This is the code I use at application startup to register all plug-in controllers, repositories and domain objects:
protected void ScanControllersAndRepositoriesFromPath(string path)
{
    this.Scan(o =>
    {
        // Scan the extension folder for plug-in assemblies
        o.AssembliesFromPath(path);
        o.AddAllTypesOf<SaasController>().NameBy(type => type.Name.Replace("Controller", ""));
        o.AddAllTypesOf<IRepository>().NameBy(type => type.Name.Replace("Repository", ""));
        o.AddAllTypesOf<IDomainFactory>().NameBy(type => type.Name.Replace("DomainFactory", ""));
    });
}
I also use an ExtensionFactory inheriting from System.Web.Mvc.DefaultControllerFactory. This factory is responsible for loading the extension objects (controllers, repositories or domain objects). You can plug in your own factory by registering it at startup in the Global.asax file:
protected void Application_Start()
{
    // Register the ExtensionFactory as the controller factory
    ControllerBuilder.Current.SetControllerFactory(new ExtensionFactory());
}
I'll try to find some time to explain my approach more clearly on my blog, and hopefully I'll be able to publish my extensibility framework on CodePlex soon.

Is mangling PRINCE2 and SCRUM Agile?

This week I attended a session of the Agile user group in Belgium. The topic was how to mangle SCRUM and PRINCE2.

The discussions were held following the fishbowl technique. The advantage of this technique is that it allows the entire group to participate in a conversation. Five chairs were put in the middle; this was the fishbowl, and the rest of the people attending the event were arranged in concentric circles around it. Any member of the audience could, at any time, occupy the empty chair and join the fishbowl. The concept was a real success, as everyone went at least once inside the fishbowl.
Interesting thoughts were exchanged. This is what I personally will keep from this evening:
Because PRINCE2 isn’t prescriptive about how the implementation will be performed, it fits perfectly with Agile methodologies. If your management is reluctant to try Agile methodologies like SCRUM, you can allay their fears by using PRINCE2 as a harness for SCRUM. The project manager and higher management can use PRINCE2 as a governance framework while the developers and testers use SCRUM practices and values.
The risk is to generate a lot of waste. As PRINCE2 is highly document driven, your team, and especially the project manager, will have to generate lots of documents that don’t really add value to an agile team. Another risk is that your team could break apart. Developers will naturally tend to adopt the Agile principles and values while the chickens (PM, customer and management) will be more comfortable with PRINCE2. The danger is that the two groups will use different vocabularies, leading to confusion. The two groups will be less cohesive and you risk ending up with the worst of both worlds.
My advice is that you should try to stick to the pure Agile values and practices, but if you’re really forced to adopt PRINCE2, fake it! PRINCE2 prescribes a minimum set of documents and practices; simply adopt this minimum set and do as little as possible.

First Steps with Unit Testing Walkthrough

In this article I introduce, through a walkthrough, the basics of how to write a good, well-structured unit test. This post is the first of a series about unit testing and TDD. I’ll not be doing TDD for now because I first want to start with the basics of creating well-structured unit tests. In later posts we will build on this to practice TDD. Therefore the tests of this first walkthrough will seem overly simple, but let’s crawl before we start running.
The details of how we should write tests vary with the programming framework and unit test framework. For this walkthrough I've chosen to implement the tests and examples with C# and VSTS. Nevertheless, the core principles of how to implement automated tests stay applicable in any technology. If you would like walkthroughs for NUnit or other programming frameworks, post a request on this blog. You can download the completed walkthrough project here.


public class Fleet : IDisposable
{
    private List<Plane> _planes { get; set; }

    public Fleet()
    {
        this._planes = new List<Plane>();
    }

    public Plane this[string index]
    {
        get { return this._planes.Find(p => p.Name == index); }
    }

    public void Add(string name)
    {
        this._planes.Add(new Plane(name));
    }

    public int Count
    {
        get { return this._planes.Count; }
    }

    public void Dispose()
    {
        this._planes = null;
    }
}

public class Plane
{
    public Plane(string name)
    {
        this.Name = name;
    }

    public int PlaneID { get; set; }
    public string Name { get; set; }
}


Examine the code of Fleet and Plane. The Fleet class represents an aero fleet (a simple model); it is essentially a collection of planes. The Fleet class exposes a property Count that returns the number of planes in the fleet.
Create a new test project: right-click the solution->Add->New->Project, select C#, Test, and name your project "FirstStepsTest".
Launch all tests: Ctrl+R, A or Test->Windows->Test View.
VS should launch the TestMethod1 and it should pass.

Replace the generated TestMethod1() with the following test:

[TestMethod]
public void NewFleet_WithNoPlanes_CountWillReturn0()
{
    Fleet fleet = new Fleet();

    int actual = fleet.Count;

    Assert.AreEqual(0, actual);
}

Run the test by right-clicking on the test name and selecting "Run Tests".
The test above is a public method that is given the attribute [TestMethod]; the test class itself is annotated with the attribute [TestClass]. This enables the test runner of VSTS to retrieve all test methods.

Every test will always consist of 3 parts:
• The setup: to be able to test our SUT we have to put it in a certain state so it can be tested.
• The test execution: when our SUT is prepared to be tested we can call the actual execution unit on our SUT.
• The verification: a test generally ends with the comparison of the expected results and the actual outcome of our execution.

The last part of our test uses an assertion method of our unit testing framework. Knowing what to assert in your tests is pretty much 90% of the battle. An assertion basically takes an expected value and compares it to an actual value. If the expected value does not match the actual value, the test fails.

There are many other types of assertions, but for simplicity, and for readability when debugging, I prefer to use only Assert.AreEqual(). It is the most generic form of assertion and it always displays a meaningful message like "Assert.AreEqual failed. Expected:… Actual:…". Add the following new tests to the test class:


[TestMethod]
public void Add_OnePlane_WillIncrementCountByOne()
{
    Fleet fleet = new Fleet();

    fleet.Add("X2938");

    Assert.AreEqual(1, fleet.Count);
}

[TestMethod]
public void RetrieveByIndex_APlane_WillReturnThePlane()
{
    Fleet fleet = new Fleet();
    fleet.Add("X2938");

    String actual = fleet["X2938"].Name;

    Assert.AreEqual("X2938", actual);
}

Examine the tests and run them; they should all pass.

We'll refactor our code a little, because the tests contain some repeated code and repeating ourselves is always bad!
First we will remove the magic string "X2938" and introduce a string field flightName containing this value. Replace "X2938" with this new field.
We will also promote the fleet object in each test to a private field. We will use the [TestInitialize] attribute to instantiate this field.
Also imagine that our Fleet class implements IDisposable because it somehow uses an expensive resource that isn’t automatically disposed by the garbage collector. We could call fleet.Dispose() after the assertion, but what if our test fails? The Dispose method would never be called and the object would remain in memory.
The result should look like this:

[TestClass]
public class FleetTest
{
    private Fleet fleet;
    const string flightName = "X2938";

    [TestInitialize()]
    public void SetUp()
    {
        fleet = new Fleet();
    }

    [TestCleanup()]
    public void TearDown()
    {
        fleet.Dispose();
    }

    [TestMethod]
    public void NewFleet_WithNoPlanes_CountWillReturn0()
    {
        int actual = fleet.Count;

        Assert.AreEqual(0, actual);
    }

    [TestMethod]
    public void Add_OnePlane_WillIncrementCountByOne()
    {
        fleet.Add(flightName);

        Assert.AreEqual(1, fleet.Count);
    }

    [TestMethod]
    public void RetrieveByIndex_APlane_WillReturnThePlane()
    {
        fleet.Add(flightName);

        String actual = fleet[flightName].Name;

        Assert.AreEqual(flightName, actual);
    }
}

We've defined a SetUp() and TearDown() pair of methods. In the major xUnit frameworks these are used to create fixtures. A fixture is some resource all tests in a test case have in common, which sets the context of the test (the situation in which we're testing), as well as other repetitive objects or resources.
VSTS uses the attribute [TestInitialize()] for the setup, which runs before each test, and [TestCleanup()] for the teardown, which runs after each test.
Because each test must be isolated (tests cannot share information), these methods tell the framework to create our fixture (a Fleet instance) before each test and destroy it after. So each test gets a shiny new instance to play with, free of anything a previous test may have done.

The test case above is a good small start. I hope you enjoyed this first walkthrough on unit testing; don’t hesitate to post some feedback!



ASP.NET Web setup project walkthrough

This post contains a walkthrough that will help you understand the basic tasks needed to create an MSI package for deploying your web application. The walkthrough describes the most common scenario of deploying a web project. We will create a standard Web Setup project. This project will generate as output an MSI package that creates and configures the application in IIS and copies the application’s files to a specific location on the server. This walkthrough also describes how to modify the web.config file to provide a connection string or change any of the configuration sections.
Because MSI packages are mostly deployed by system engineers from the command line, this walkthrough will also pay attention to how to enable quiet installs and how to pass parameters to the installer from the command line.

1.     Create a “Hello World” web service

To be able to create a setup we'll need a sample project that will be deployed by the setup project. So we create a new empty ASP.NET web service application.
- File->New->ASP.NET Web Service
Add a class library project to your solution.
- File->Add->Class Library
Now we should have 2 projects inside our solution.
- Add a static method SayHello() to Class1.
Reference this class in your web service project:
- Add Reference->Projects->SayHelloLibrary
- Change the code of the HelloWorld method to use the library's static method SayHello().
- Hit F5; your solution should compile.
- Test the application through the Invoke button of the test web page; you should see the following:

2.     Add a Web setup project

We will add the web setup deployment project to the solution.
- File->Add->New Project->Other Project Types->Web Setup Project
Now our solution should look like this:
Now the real work begins: we will configure our web setup project to hold our content and our compiled assemblies.

3.     Add content and assemblies to your web setup project

First we will add the content of our web service project. These are the .asmx pages and the content of the App_Code folder. They will be compiled by the host IIS server the first time the site is hit.
- Right-click on the setup project->Add->Project Output.
An "Add Project Output Group" dialog box should appear.
Because our web service project does not have any compiled assemblies as output, we can only select "Content Files".
You should be in the File System view. If you don’t see the above screen:
- Right-click on the web setup project and choose View->File System.
Now we will add our web service application’s compiled assemblies.
- Right-click again on the web setup project, choose Add->Project Output, select the SayHelloLibrary project and click on "Primary Output".
After we click the OK button, Visual Studio adds the "Primary Output" to the root of the Web Application Folder. This would result in a compile error the first time we try to hit the deployed application.
To correct this we must move the "Primary Output" to the bin folder of our web application.
- Simply drag & drop the "Primary Output" to the bin folder.
Before we test our first installation package we have to compile it, because Visual Studio will not compile it as part of the solution. When we compile the project, Visual Studio generates an MSI file containing all parts of our application.
It’s time to test our first installation package.
- Select the setup project->right-click->Install
The setup wizard pops-up, click next.
You can change the default value of the Virtual directory textbox. The default virtual directory is a property of your "Web Application Folder", available in the File System view. Important: if you want to be able to pass the virtual directory as a parameter through the console for a quiet installation, you should set this property to a blank value; otherwise the MSI package will overwrite the value passed on the command line with the value of the property.
When the installation has completed successfully the web service should be available on your local IIS server under the path http://localhost/HelloWorldSetup/service.asmx
The application should also be visible in "Add or Remove Programs" in the Windows Control Panel.

4. Adapt the connectionString property of the Web.Config file.
The Web Setup project allows us to change common IIS settings through the File System view (View->File System) by changing the properties of the "Web Application Folder".

We can also add prerequisites for our solution (the .NET Framework 2.0 is a standard prerequisite).
To adapt the connectionString property of our Web.config file we will have to add custom code to our project that will edit the config file. To be able to perform this type of action during a setup we have to create custom actions. Custom actions are code that is executed during the installation process. Therefore we have to create a new class library project.
- Add a class library project and name it HelloWorldCustomActions.
- Add the following assembly references: System.Configuration.dll,
System.Configuration.Install.dll, System.Web.dll.
- Add a new class MyCustomAction.cs to your project that will be used during the installation of your project. This class must inherit from the System.Configuration.Install.Installer class and be annotated with the RunInstaller(true) attribute.

using System;
using System.Collections.Generic;
using System.Text;
using System.Configuration.Install;
using System.ComponentModel;

namespace HelloWorldCustomActions
{
    [RunInstaller(true)]
    public class MyCustomAction : Installer
    {
        public override void Install(System.Collections.IDictionary stateSaver)
        {
            base.Install(stateSaver);
        }
    }
}
Now we need to create our custom action and associate it with our class.
First we need to add the output of the HelloWorldCustomActions assembly in our Setup project.
- Right-click the "HelloWorldSetup" project->Add->Project Output and select the "Primary Output" of HelloWorldCustomActions.
- From the File System view, move the "Primary output from HelloWorldCustomActions" to the bin folder.
Now we will make sure that our code is executed as a custom action during the installation.
- Right-click the project folder->View->Custom Actions.
The Custom Actions view should be displayed.
- Right-click on the Install folder->Add Custom Action->select "Primary output from HelloWorldCustomActions (Active)".
The Web Setup project wizard contains a form where the user must select the application site and the virtual directory. To be able to locate and change the web.config file of our application we need to read these parameters. We can also pass these arguments on the command line and provide a target directory.
To be able to access these parameters we need to declare them in the "CustomActionData" property of our custom action:
- Select your custom action->in the property window select "CustomActionData" and insert the following values:
/targetdir="[TARGETDIR]\" /targetvdir="[TARGETVDIR]" /targetsite="[TARGETSITE]" /targetconn="[EDITA1]"  /db="[DB]"
We will now add a screen to our wizard that prompts the user for a connection string.
- Right-click the project->View->User Interface.
- Right-click Start->Add Dialog.
- Select Textboxes (A) and move it between "Installation Address" and "Confirm Installation".
- Set the following properties:
Now we will code our custom action.
- First check that you have the following references in your CustomAction project.
- Add the following using statements to your CustomAction class:
using System;
using System.Configuration;
using System.Configuration.Install;
using System.ComponentModel;
using System.Diagnostics;
using System.IO;
using System.DirectoryServices;
using System.Web.Configuration;
using System.Windows.Forms;
Your Install method should look like this:

public override void Install(System.Collections.IDictionary stateSaver)
{
    base.Install(stateSaver);
    // Retrieve configuration settings
    string targetSite = Context.Parameters["targetsite"];
    string targetVDir = Context.Parameters["targetvdir"];
    string targetDirectory = Context.Parameters["targetdir"];
    string targetConnectionString = Context.Parameters["targetconn"];
    // If this is a quiet install, the connection string
    // is passed with a parameter named "db"
    if (targetConnectionString.Length < 1)
        targetConnectionString = Context.Parameters["db"];
    if (targetConnectionString.Length < 1)
        throw new InstallException("Please provide a connectionstring!");
    if (targetSite == null)
        throw new InstallException("IIS Site Name Not Specified!");
    if (targetSite.StartsWith("/LM/"))
        targetSite = targetSite.Substring(4);
    ConfigureWebConfig(targetSite, targetVDir, targetConnectionString);
}

Add the ConfigureWebConfig method:
void ConfigureWebConfig(string targetSite, string targetVDir, string targetConn)
{
    // Retrieve "Friendly Site Name" from IIS for TargetSite
    DirectoryEntry entry = new DirectoryEntry("IIS://LocalHost/" + targetSite);
    string friendlySiteName = entry.Properties["ServerComment"].Value.ToString();

    // Open the application's web.config
    Configuration config =
        WebConfigurationManager.OpenWebConfiguration("/" + targetVDir, friendlySiteName);

    addConnectionStringAttribute(targetConn, config);
    togleCompilationAttribute(config);

    // Persist web.config settings
    config.Save();
}
The method that adds the connection string to your web.config:

private static void addConnectionStringAttribute(string connectionStringValue, Configuration config)
{
    ConnectionStringSettings appDatabase = new ConnectionStringSettings();
    appDatabase.Name = "db";
    appDatabase.ConnectionString = connectionStringValue;
    appDatabase.ProviderName = "System.Data.SqlClient";
    config.ConnectionStrings.ConnectionStrings.Add(appDatabase);
}

This method toggles the compilation element's debug attribute (e.g. from debug="true" to debug="false"):

private static void togleCompilationAttribute(Configuration config)
{
    CompilationSection compilation =
        config.GetSection("system.web/compilation") as CompilationSection;

    if (compilation != null)
        compilation.Debug = !compilation.Debug;
}
Now you can test your installation package. 
- Right-click Setup project->Install.

The second screen should ask for a connectionstring.
5.     Quiet install
When distributing a setup package you will mostly install the application through a quiet install. Therefore you have to pass the parameters through the command line. I experienced that the parameters you pass through the console are sometimes overwritten by the parameters you’ve added in your custom screens. This is why I added a supplementary parameter "db" and don’t use the parameter "targetconn" for quiet installs. The installer first checks whether the "targetconn" parameter is empty, which will be the case if you install the application through a quiet install.
You will find this parameter in the CustomactionData property of the custom action:
/targetdir="[TARGETDIR]\" /targetvdir="[TARGETVDIR]" /targetsite="[TARGETSITE]" /targetconn="[EDITA1]"  /db="[DB]"
 The code that handles the parameter in the Install method:
if (targetConnectionString.Length < 1)
                targetConnectionString = Context.Parameters["db"];
You can launch the MSI setup package with the following command line (first navigate to the directory containing the built release version of the MSI):
>HelloWorldSetup.msi /qb targetvdir="HelloWorld" db="Test for connectionstring"
If you want to specify the target website you need to specify the website's IIS metabase path as targetsite. This path mostly begins with "/LM/W3SVC/" followed by the site ID.
A trick to find the site ID, or to debug the setup application, is to use the /l* option. Launch the installer with the /l* option and follow the wizard. The /l* option will log all the parameters passed to the installation program. After the installation completes you can retrieve the website's metabase path by searching for "targetsite" in the log file.
>HelloWorldSetup.msi /l* log.txt
I hope this tutorial helps you understand the basics of creating a web setup project. This type of setup will enable you to create testable and repeatable deployment scenarios. It will also help your IT engineers deploy web applications gracefully on production machines.


Webservice Interoperability with contract first approach

We just solved an interoperability problem for one of our customers who was trying to interoperate between Java and .NET. He was using JAX-WS to generate his Java proxy code and WCF (svcutil) for the .NET proxy. The customer had worked with a contract-first approach and tested his contract through JAX-WS. When he tried to generate the proxy classes for his .NET app using svcutil, the generated proxy class was missing all the collections. He also had a runtime error when trying to serialize some messages.

We rapidly found the cause of the missing collections. The customer had used maxOccurs="unbounded" in his WSDL, which is not supported by the DataContractSerializer used by svcutil. We replaced the unbounded with 9999999 and suddenly the collections appeared in his proxy class.
But the runtime error was still there.
After some research we found an interesting article on MSDN describing the subset of WSDL supported by svcutil. Based on the article we discovered that the customer had used a <choice> element in a complexType. This caused svcutil to use the old XmlSerializer in place of WCF's DataContractSerializer. The XmlSerializer produced some strange constructs in the proxy class, causing runtime errors. So we simply removed the <choice> element and everything worked fine after that.


Unit testing, what else?


In the preceding post I stressed why automated testing is so important for the distributed team, but I didn’t really specify what automated testing is. Automated testing is not just about unit tests, and not all tests created with a unit testing framework are really unit tests! There are many flavors of, and terms for, automated testing. There exist lots of other kinds of testing and this list is absolutely not exhaustive; nevertheless it lists the most common types of automated tests:
-          Unit testing
Unit tests target a single class or package. They are written by the developer to test one single unit of code and must be isolated from all other components.
-          Integration testing
Integration testing tests the integration of the different units of code. In an integration test the different program units are combined and tested as groups in multiple ways. Integration tests can be done with a unit testing framework or with black-box testing tools.
-          User acceptance tests
These can be black-box tests or integration tests, but they are used in the context of acceptance of the application. They can be performed by the provider before delivery to the customer, or by the customer before transferring ownership.
-          Black box testing
Usually performed by QA, but black-box tests can also sometimes be used by developers who have to change legacy systems that lack unit tests or pre-existing integration tests. These tests take an external perspective of the SUT to derive test cases. They can be functional or non-functional, though usually functional. The test designer selects valid and invalid input and determines the correct output. There is no knowledge of the test object's internal structure.
-          Performance/Load/stress testing
Although these terms are used to define different techniques, they share the same purpose. Their goal is to test the performance of a system, not to find bugs.
As described in my previous post, testing must occur at every stage of development and by each part of the production unit. Nevertheless the types of tests used at a specific stage differ. Unit tests will be written during the implementation phase by developers. To avoid long and painful integration phases it is also recommended that developers write integration tests. They should test as soon as possible whether their components integrate well with other already available components. Black-box and load tests should be done after the implementation stage by QA teams. This is at the end of an iteration when using an iterative process like XP, Scrum or RUP, or before deployment when using a linear process like waterfall.
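To make the first two categories concrete, here is a small hypothetical sketch (the classes are invented for illustration): a unit test targets PlaneCatalog alone, while an integration test combines FleetImporter and PlaneCatalog and checks that the two units work together.

```csharp
using System;
using System.Collections.Generic;

// A single unit: can be unit-tested in complete isolation.
public class PlaneCatalog
{
    private readonly List<string> names = new List<string>();

    public void Add(string name) { names.Add(name); }

    public int Count { get { return names.Count; } }
}

// A second unit that depends on the first: exercising both together
// through Import is an integration test rather than a unit test.
public class FleetImporter
{
    private readonly PlaneCatalog catalog;

    public FleetImporter(PlaneCatalog catalog)
    {
        this.catalog = catalog;
    }

    // Imports a comma-separated list of plane names into the catalog
    // and returns the resulting catalog size.
    public int Import(string csv)
    {
        foreach (string name in csv.Split(','))
            catalog.Add(name.Trim());
        return catalog.Count;
    }
}
```

A unit test would assert on PlaneCatalog.Count after a single Add; an integration test would assert that FleetImporter.Import("X1, X2") leaves two planes in the catalog it was given.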


Automated testing and the software factory

There is a theory in software engineering asserting that a relationship exists between the structure of a development organization and the architecture of the products that this organization produces. Recently a Harvard Business School study showed that distributed teams tend to produce more modular software.
In my opinion, the fact that distributed teams produce more modular software isn’t only because the communication dynamics force distributed teams to align on interfaces, but also because they realise that their software has to be tested thoroughly.
To be able to enhance the testability of their software they need to make it modular, and modular software is better software!
Nevertheless, testing not only has an impact on the product, it also has a huge impact on the organization of the team.
In this first post I explain why a distributed software factory should practice automated testing and how it affects the team.


Producing software is similar to producing consumer goods; both need an efficient assembly line. The assembly line contains several parts: there is a customer team, a product owner, one or more development teams and a QA team. Like in a real factory, these parts of the assembly line can be globally distributed. Every part of the assembly line is settled where its efficiency is most optimal, but all these pieces work together to produce working software in one product development stream. They all work with queues so that the teams can work asynchronously, each maintaining a working speed independent of the others.


The productivity of a factory is measured by the speed at which products flow from the production line and the effectiveness of the production line. Contrary to what one may think, the speed at which products flow out of the factory is not the average speed of each part of the production line; it depends mostly on the throughput of the slowest part of the line. So if you want to increase the overall production capacity of a factory you have to synchronize every part of the production line and make sure that each part works at a constant pace and that every part is synchronized with its neighbours. The worst thing that can happen in an assembly line is that a defect caused in one part of the line is passed to the next part. The defect will not only cause the part where it is detected to stop, but the defective piece also has to be sent back to the responsible part so it can be fixed. This will desynchronize the production line and diminish the overall productivity of the factory.




In our modern industries, when a defect happens in a manufacturing process, engineers will try to find the root cause of the problem and change the production process so that the defect can’t happen again. The way engineers do this is by incorporating automated testing devices in their production process. Software automated tests serve the same purpose as the automated testing devices in manufacturing. The automated tests are not made merely to detect malfunctions but to prevent defects from occurring. The real value of our automated tests is not that they can detect defects but that they prevent defects from happening!

Type-G Toyoda Automatic Loom


The Type-G Toyoda Automatic Loom, the world's first automatic loom with a non-stop shuttle-change motion, was invented by Sakichi Toyoda in 1924. This loom automatically stopped when it detected a problem such as thread breakage.




Another core concept of modern manufacturing is Just-in-Time. "Just-in-Time" means making only "what is needed, when it is needed, and in the amount needed." Supplying what is needed, when it is needed, and in the amount needed according to the production plan can eliminate waste, inconsistencies and unreasonable requirements, resulting in improved productivity. The worst enemy of Just-in-Time is stock. The overall Just-in-Time process is meant to eliminate every unnecessary stock. Stock means invested capital; it takes up a lot of space, resulting in more costs. Stock also hides inefficiency in the assembly line.


Unfinished features are what best represents stock in a software factory. By unfinished features we mean code that is not delivered to the customer. The main reasons why code is not delivered to the customer are that we don’t know whether it works or, worse, that we know for sure it doesn’t work because it’s bugged. Developers' automated testing helps us reduce the amount of unfinished features. It shortens the QA cycle and the release cycle and helps us answer the question: is this piece of code done?



Improve the throughput

It’s true that automated tests increase the initial cost of the development part. In my experience, writing automated tests tends to cost the same amount of time as writing production code. But this cost is more than regained because the other steps of the production process shorten. The number of defects detected by the QA team falls drastically and a lot of time is won because the QA team can work faster. Even the overall throughput of the development teams increases in the end, because time is no longer lost correcting a lot of defects detected by the QA team. The project manager becomes far better at estimating the project status. In the end the trust of the customer increases, because they get features built at a constant pace and because the overall delivered quality increases.