Wednesday, 31 August 2011

Acceptance-Test-Driven Development

Acceptance-Test-Driven Development is a simple process change that can have far-reaching implications for your development projects.

Usually, the testing phase of application development begins after the piece of software is "finished." However, a new approach known as acceptance-test-driven development (ATDD) is flipping that model upside down. In ATDD, tests are devised before the actual code development begins and automated testing occurs throughout the development process.

Proponents say that ATDD speeds up development by making it easier to find and fix bugs earlier. The approach can be used along with agile methodologies, and several vendors offer tools that promote ATDD.

Extracted from http://www.infoworld.com/d/application-development/better-approach-software-testing-do-it-you-code-170627
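To make the idea concrete, here is a minimal sketch of what an acceptance test written before any implementation might look like in MSTest. The OrderService type and its Checkout method are hypothetical names used only for illustration; the test fails until the feature is built, and making it pass becomes the definition of done.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CheckoutAcceptanceTests
{
    // Written before OrderService exists: the test captures the acceptance
    // criterion ("a completed checkout yields a confirmation number") and
    // fails until the feature is implemented.
    [TestMethod]
    [TestCategory("Acceptance")]
    public void CompletedCheckoutYieldsConfirmationNumber()
    {
        var service = new OrderService();   // hypothetical system under test
        string confirmation = service.Checkout(customerId: 42, itemCode: "SKU-100");
        Assert.IsFalse(string.IsNullOrEmpty(confirmation));
    }
}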

Tuesday, 23 August 2011

7 Top Scrum Tools

  1. Acunote -- This SaaS tool "doesn’t force you into a strict process. The tool shows you the progress of the project, and you can see all levels of work in order to make informed decisions," says company CEO Gleb Arshinov. It's free for teams of five or smaller.
  2. Agilebuddy -- Another SaaS option, Agilebuddy supports Extreme Programming. It also boasts simplicity and ease of use.
  3. CollabNet -- CollabNet's tools are a good fit for distributed teams. They aim to be as easy to use as a whiteboard.
  4. Rally -- This company offers both on-premise and cloud-based solutions that utilize Scrum terminology, but aren't too rigid. Its SaaS offering is free for teams of ten or smaller.
  5. ScrumDo -- This SaaS option helps manage a backlog, estimate user stories and create iterations. It's also available on an open-source basis.
  6. Telerik -- Telerik's TeamPulse software includes a prioritized backlog, analytics and a best practice analyzer, which allow users to define their own Scrum processes based on the requirements of each project.
  7. VersionOne -- This agile development tool incorporates Scrum methodologies. CTO Ian Culling says it "brings visibility outside the team room."
Extracted from - http://www.devx.com/DailyNews/Article/47128?trk=DXRSS_LATEST

Thursday, 4 August 2011

Application Lifecycle Management and Continuous Integration

Application Life Cycle Management

Wikipedia: Application Lifecycle Management (ALM) is a continuous process of managing the life of an application through governance, development and maintenance. ALM is the marriage of business management to software engineering made possible by tools that facilitate and integrate requirements management, architecture, coding, testing, tracking, and release management.

Governance

In ALM, the purpose of governance is to make sure the application always provides what the business needs. Governance encompasses all of the decision making and project management for the application, and it extends over the application's entire lifetime.

Development

Development, the process of actually creating the application, happens first between the idea and deployment. For most applications, the development process reappears several more times in the application's lifetime, both for upgrades and for wholly new versions.

Operations

Operations, the work required to run and manage the application, typically begins shortly before deployment and then runs continuously: every deployed application must be monitored and managed.

The Next Generation of Application Lifecycle Management with Visual Studio vNext

http://www.speakflow.com/View.aspx?PresentationID=c0ae95d3-050d-4076-b9d7-8fcf1a0490f0&mode=presentLocally

Current Implementation and Tools Related to ALM.

Here I will focus on continuous integration; I will not discuss any of the management tools.


1. Provisioning & Terminating EC2 Servers

a. We used the Amazon EC2 cloud API for provisioning and terminating servers (a minimal sketch of this appears after the references below).

b. You will be able to find the source code for this in -

c. References

1. Discussion Forums –

https://forums.aws.amazon.com/index.jspa?categoryID=1

2. Getting Started Guide

http://docs.amazonwebservices.com/AmazonEC2/gsg/2006-10-01/?ref=get-started

3. Developer Guide

http://docs.amazonwebservices.com/AmazonEC2/dg/2007-01-19/
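As an illustration of what that provisioning code might look like, here is a minimal sketch using the AWS SDK for .NET of that era. The credentials, AMI id and instance type are placeholders, and the exact request/response member names vary between SDK versions, so treat this as an outline rather than the code we shipped.

using System;
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

class Ec2BuildServerLifecycle
{
    static void Main()
    {
        // Placeholder credentials; in practice these would come from configuration.
        AmazonEC2 ec2 = AWSClientFactory.CreateAmazonEC2Client("ACCESS_KEY", "SECRET_KEY");

        // Provision one build server from a prepared machine image.
        var runRequest = new RunInstancesRequest()
            .WithImageId("ami-xxxxxxxx")        // placeholder AMI id
            .WithMinCount(1)
            .WithMaxCount(1)
            .WithInstanceType("m1.large");
        RunInstancesResponse runResponse = ec2.RunInstances(runRequest);
        string instanceId = runResponse.RunInstancesResult
            .Reservation.RunningInstance[0].InstanceId;
        Console.WriteLine("Provisioned instance " + instanceId);

        // ... run builds against the instance, then tear it down ...

        // Terminate the instance once the build work is finished.
        ec2.TerminateInstances(new TerminateInstancesRequest().WithInstanceId(instanceId));
        Console.WriteLine("Terminated instance " + instanceId);
    }
}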

2. Build Solutions

To build the solution we used a Team Foundation Server 2010 build definition.

The configuration is as follows:

1. Source control is on the SLTFS2010 server.

2. Build controllers and four agents are configured on the EC2 cloud server 10.100.1.51.

The configuration topologies are as follows.

Understanding a Team Foundation Build System

Build System Topology Examples


Team Foundation Build Service is designed in a way that you can start with a smaller, less complex build system. As your code base expands and your team grows larger, you can expand your system incrementally with relative ease by adding build machines to the system that you already have.

Single-machine system (shared with application tier)

The following configuration can support a very small team, especially a team that runs builds infrequently and only during off-hours. (For example, you run only a single nightly build.)

(Figure: a single-machine system on the application tier)

In most cases, a topology with a single build machine is insufficient because of the following reasons:

· The build agent places heavy demands on the processor, which could significantly decrease the performance of your application tier.

· The build controller can exert pressure on the system's memory, especially if the controller is managing many active build agents at the same time.

· Installing Team Foundation Build Service increases the attack surface of a build machine. For example, a malicious user could construct a build definition to run arbitrary code to take control of the server and steal data.

Single-machine system (stand-alone)

The following configuration is a good starting point for a small team.

(Figure: a single-machine system, stand-alone)

Because build agents perform the processor-intensive work on a separate machine, they do not affect the performance of the application tier when builds are run.

You could also run the build controller on the dedicated build machine. However, the configuration in the illustration has the advantage of making build system changes less disruptive, such as when you must repair or replace the build machine.

The build controller's presence on the same machine as the application tier is generally not a problem from a processor standpoint. However, you might move up to a more scalable topology for the following reasons:

· The build controller can exert pressure on the system's memory, especially if the controller is managing many active build agents at the same time.

· Installing Team Foundation Build Service increases the attack surface of a build machine. For example, a malicious user could construct a build definition to run arbitrary code to take control of the server and steal data.

Multiple-machine system

Medium and large teams will generally need multiple build machines to support their efforts. In the following example, two build machines are deployed.

(Figure: a multiple-machine system)

By using multiple build machines, you can dedicate each machine to a different purpose, as described in the following example:

· One build machine could be dedicated to build agents that process continuous integration builds. The team needs these kinds of builds (especially gated check-in builds) to run quickly so that their work is not held up waiting for a build. You would use build process parameter settings to ensure that builds run quickly. These settings could include not cleaning the workspace, running only top priority tests, and setting a low value for the Maximum Execution Time setting.

· Another build machine could be dedicated to scheduled and ad-hoc builds that require a lot of time to process. For example, you could set up the build definitions that target the build agents on this machine so that the definitions clean the workspace, perform all tests, and run code analysis.

Multiple-machine system with multiple controllers

The following topology example can support enterprise-level software efforts.

(Figure: a multiple-machine system with multiple controllers)

Each team project collection must have its own build controller, as shown in the illustration. Notice how this topology isolates the build machines. Team members who work on Team Project Collection A can use only the build agents that Build Controller A controls.

Main Build Work

a. The best learning video discussion can be found here - \\Nilhanx\alm\BridgingTheGapBetweenDevsAndTestersUsingVS2010AutomatingTheBuild_2MB_ch9.wmv (this video can also be accessed online on Channel 9)

b. References

TFS Build Part 1

http://www.ewaldhofman.nl/post/2010/04/20/Customize-Team-Build-2010-e28093-Part-1-Introduction.aspx

How to Create a Custom Workflow Activity for TFS Build 2010 RTM

http://blogs.msdn.com/b/jimlamb/archive/2009/11/18/how-to-create-a-custom-workflow-activity-for-tfs-build-2010.aspx

Team Foundation Build Activities

http://msdn.microsoft.com/en-us/library/gg265783.aspx#Activity_InvokeProcess

Continuous Integration for Database Development

http://www.codeproject.com/KB/showcase/Continuous-Integration.aspx

3. Deploy Solution

* PowerShell Scripting

* Source Code

* References

* Connecting to a network folder with username/password in Powershell

* http://stackoverflow.com/questions/303045/connecting-to-a-network-folder-with-username-password-in-powershell

* PS Sessions

* http://stackoverflow.com/questions/3705321/pssession-is-not-working-in-my-powershell-script

Hierarchy of PowerShell Script Calls

4. Integration Test & Reporting

* Adding Integration Tests into the TFS Build workflow

Overview

In my last post I described how to deploy web applications to a build integration server using Team Foundation Server 2010. The next logical step once the build is successfully deploying to the integration server is to trigger a set of integration tests to verify the deployment. In this post I will describe the changes to the Default Template build workflow to execute Integration Tests separately from the existing Unit Tests.

Unit Tests

It is important to consider at this stage why we would run integration tests, as opposed to the unit tests executed as part of the assembly build process.

Unit tests executed as part of the build are intended to verify the individual components are functioning correctly, and often would use mocked interfaces to ensure that only the specific functions being tested are executed. Unit tests are typically not reliant on deployed components and therefore can be run as soon as the assemblies have been built.

Integration tests on the other hand are intended to run against the fully deployed environment to ensure that the individual components successfully execute together. Integration tests therefore need to be executed after the application components have been deployed to an integration server. Failures in integration testing might indicate breaking changes such as database changes, missing data, or changed interfaces into other components of the system.
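For illustration, an integration test in this setup might be as simple as the following MSTest sketch. The server address is a placeholder, and the Integration category anticipates the filtering discussed later in this post.

using System.Net;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DeploymentSmokeTests
{
    // Runs against the deployed site rather than mocked components, so it
    // only makes sense after the packages have been pushed to the
    // integration server.
    [TestMethod]
    [TestCategory("Integration")]
    public void DeployedWebApplicationRespondsWithOk()
    {
        // Placeholder address for the build integration server.
        var request = (HttpWebRequest)WebRequest.Create("http://build-int-server/app/");
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
        }
    }
}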

Note that running the deployment and integration tests adds to the duration required to execute a build. Rather than performing this action every time something in the solution changes, it might be more pragmatic to have one build definition that builds and runs unit tests on a per-check-in basis, while another is configured to run the full integration tests on a nightly basis.

Modify the Build Workflow

(Figure: workflow sequence overview)

The integration tests have to run within the context of a build agent, so the activity needs to take place at the end of the Run On Agent activity, directly after the packages have been deployed to the build integration server within the Deploy Packages activity.

Changing variable scopes

Because we are going to borrow heavily from the existing “Run Tests” activity, but the execution will be outside the “Try Compile, Test, and Associate Changesets and Work Items” activity, we need to modify the scoping of the following variables. This is most easily done by editing the XAML directly in your favourite XML editor.

· outputDirectory – copy from the “Compile and Test for Configuration” activity up a level to the “Run On Agent” activity.

· treatTestFailureAsBuildFailure – copy from the try block of “Try Compile, Test, and Associate Changesets and Work Items” to the “Run On Agent” activity.

Add new Integration Tests workflow arguments

The parameters being added are as follows:

· Integration Tests Disabled (Boolean). I’m not a fan of negative argument names (e.g., Disabled rather than Enabled); however, I have decided to keep this consistent with the existing Tests Disabled argument.

· Integration Test Specs (TestSpecList).

The default value for the Integration Test Specs argument provides the defaults for filtering the unit tests down to only the integration tests. Ideally I would have liked to filter this to *test*.dll with a test category of Integration; however, based on some rudimentary experimentation it appears that the Test Assembly Spec constructor can only set the assembly name filter. In the end I’ve used the following TestSpecList definition as the default value:

New Microsoft.TeamFoundation.Build.Workflow.Activities.TestSpecList(
    New Microsoft.TeamFoundation.Build.Workflow.Activities.TestAssemblySpec(
        "**\*test*.dll"))

Note: Don’t forget to change the Metadata property to ensure the new arguments are displayed in a suitable category in the Build Definition editor.

Add the Run Integration Tests Activity

Follow these steps to add the new Run Integration Tests activity to the workflow:

1. Add a new ForEach activity after the Deploy Packages activity, but still within the Run On Agent activity. This activity will be used to iterate through the project configurations defined in the build definition:

<ForEach x:TypeArguments="mtbwa:PlatformConfiguration"
         DisplayName="Run Integration Tests for each Configuration"
         Values="[BuildSettings.PlatformConfigurations]">
  <ActivityAction x:TypeArguments="mtbwa:PlatformConfiguration">
    <ActivityAction.Argument>
      <DelegateInArgument x:TypeArguments="mtbwa:PlatformConfiguration"
                          Name="platformConfiguration" />
    </ActivityAction.Argument>
  </ActivityAction>
</ForEach>

2. Copy the existing activity titled “If Not Disable Tests” into the ForEach activity created above

3. Modify the copied workflow to use the added workflow arguments

o Use Integration Tests Disabled instead of Disable Tests

o Use Integration Test Specs instead of Test Specs

Configure the Build Definition

Configuring the filters for your integration tests is a matter of personal preference, though I’ve found the following approaches fairly simple:

· Define all integration tests in a separate project and utilise the Test Assembly Filespec filter

· Add a Test Category of Integration to each of the tests and use the Category Filter.

· Configure a custom .testsettings file to allow for accurately specifying the order in which tests should be executed


Extracted from - http://nickhoggard.wordpress.com/2011/03/13/adding-integration-tests-to-tfs-build-workflow/

* Generate SpecFlow Reports

With your SpecFlow installation comes SpecFlow.exe, a program that can be used to generate tests from the scenarios AND to create nicely formatted reports from the generated test results.
There’s been a lot written on how to generate these reports when you’re using NUnit (see this and this for example), but when it comes to managing this for MsTest it’s been almost silent. And facing this problem I can sure see why... It’s a bit trickier.

In this blog post I want to show you two things: how to generate MsTests from your .feature files and how to create a report from the generated results. Finally I’ll show you how to put the two together and link it up to a nice “External tool” button in Visual Studio. Here we go:

Generate MsTests from your .feature file

With this step you can generate the tests from your scenarios. SpecFlow.exe picks up your configuration and generates the tests in your test framework of choice.
From the look of it, it seems quite straightforward. Using the help command of SpecFlow.exe, “specflow help generateall” produces this help:

Generate tests from all feature files in a project
usage: specflow generateall projectFile [/force] [/verbose]
projectFile Visual Studio Project File containing features

OK. I put together a .bat file that does that. (Note that I’m on a 64-bit machine and had to use some funky DOS shortcut to get there. No simple way to be architecture agnostic, I’m afraid.)
Here is my file:

"%ProgramFiles(x86)%\TechTalk\SpecFlow\SpecFlow.exe" 
generateAll Specs\Specs.csproj  
/force /verbose
 
pause


I’ve added some line breaks for readability.

Well – that was easy.

Create a report from the generated results


To get this step to work we have to run the tests and get hold of the location of the test report file (.trx). When you do this from within Visual Studio, the test reporting is done in a TestResults folder and the file gets a name with a timestamp. That is sadly not very script-friendly, and we’re forced into writing a .bat file that also runs the tests.

I used the MsTest command line reference to put together this .bat file:

if Exist TestResult.trx del TestResult.trx 
"%ProgramFiles(x86)%\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe" 
/testcontainer:Specs\bin\Debug\Specs.dll 
/resultsfile:TestResult.trx 
pause



Again, watch out for me using 64-bit Windows, and use %ProgramFiles% if you’re not.

Some strangeness with MsTest made me delete TestResult.trx before each run. MsTest also creates some folders (username_testrun...) but that doesn’t bother me for now.

Now that we have a .trx file, we can run the command that creates the report for us. According to the “documentation” (“specflow help mstestexecutionreport”):

usage: specflow mstestexecutionreport projectFile 
[/testResult:value] 
[/xsltFile:value] 
[/out:value]

· projectFile is the Visual Studio Project File containing features and specifications.

· testResult refers to the .trx file generated by MsTest; it defaults to TestResult.trx.

· xsltFile is the XSLT stylesheet to use; it defaults to the built-in stylesheet if not provided.

· out is the generated output file; it defaults to TestResult.html.

I’m happy with the defaults of the last three since I chose my .trx file name ... wisely. :)
So my whole .bat file becomes this:

"%ProgramFiles(x86)%\TechTalk\SpecFlow\SpecFlow.exe" mstestexecutionreport Specs\Specs.csproj 
 
pause

And lo and behold: it actually produces the nice report we wanted:

(Screenshot: the generated example report)

Putting it all together

That’s neat – we now have three different .bat files that we need to click in consecutive order ;)
No really – the first one (generating tests from features) can most certainly be handled by Visual Studio in most cases, or in any case will probably not run in conjunction with the other two steps.
But running the tests and producing a report with a single file would be nice. Here it is:

if Exist TestResult.trx del TestResult.trx 
 
"%ProgramFiles(x86)%\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe" /testcontainer:Specs\bin\Debug\Specs.dll /resultsfile:TestResult.trx 
 
"%ProgramFiles(x86)%\TechTalk\SpecFlow\SpecFlow.exe" mstestexecutionreport Specs\Specs.csproj /testResult:TestResult.trx
 
pause

Drawing on this blog post, I’ve also created a parameterized version of it that I can hook up to an “External command” that does all that with a single click. That changes the .bat file into this:

if Exist TestResult.trx del TestResult.trx 
 
"%ProgramFiles(x86)%\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe" /testcontainer:%2 /resultsfile:TestResult.trx 
 
"%ProgramFiles(x86)%\TechTalk\SpecFlow\SpecFlow.exe" mstestexecutionreport %1 /testResult:TestResult.trx /out:TestResult.html
 
echo Created file TestResult.html

So you need to send the name of the project file and the name of the test container (the .dll). Save that file to a known location so that you can point your external command to it.
Finally you can create an external tool button in Visual Studio and set the parameters as follows:

(Screenshot: configuring the external tool in Visual Studio)


The arguments to the external command are:

· $(ProjectDir)$(ProjectFileName)

· $(TargetName)$(TargetExt)

Note that the project with specifications has to be selected before the External command can be run.

Extracted From - http://www.marcusoft.net/2010_12_12_archive.html

Source Code