Jay Harris is Cpt. LoadTest

A .NET developer's blog on improving the user experience of humans and coders
Filed under: Learn to Code | Testing | Tools

Aligned with another jam session at Ann Arbor's Come Jam With Us comes another installment of Learn to Code, this time providing an introduction to WatiN, or Web Application Testing in .NET. The jam session was held at the offices of SRT Solutions in Ann Arbor, Michigan, at 5:30p on Tuesday, April 6th. Though thunderstorms were in the forecast, the predicted high was 72°F (22°C), so we weren't bothered by the same 8" of fluffy white snow that caused cancellations and delays during my session on ASP.NET MVC 2. For those who couldn't make the WatiN jam session, might I recommend the exercise below.

About This Exercise

This coding exercise is designed to give you an introduction to browser-based testing using the WatiN framework, or Web Application Testing in .NET. The framework allows developers to create integration tests (using a unit testing framework like MbUnit, NUnit, or MSTest) to test and assert their application within a browser window. The framework interacts with the browser DOM much like an end-user, producing reliable results that mimic the real world. In this sample, we will write a few WatiN tests against the Google search engine.


To complete this exercise, you will need to satisfy a few prerequisites; please complete them before moving on. The session is designed to be completed in about an hour, but setup and prerequisites are not included in that time.

  • An active internet connection. (Our tests will be conducted against live third-party sites.)
  • Install Microsoft Visual Studio 2008 or Microsoft Visual Studio 2010.
  • Download and extract the latest version of the WatiN framework.

Exercise 0: Getting Started

Creating a Project

WatiN is generally used within the context of a unit testing framework. For this exercise, we will be using a Visual Studio Test Project and MSTest to wrap our WatiN code.

  1. Create a new "Test Project" in Visual Studio named "WatinSample". The language is up to you, but all of the examples in this post will use C#.
  2. Feel free to delete the Authoring Tests document, the Manual Test file, and UnitTest1.cs. We won't be using these.
  3. Add a reference to WatiN.Core.dll from the bin directory of your extracted WatiN download.
  4. Compile.

Exercise 1: My First Browser Tests

In our first test, we will use the project we just created to test Google's home page. After accessing http://www.google.com, we will check a few properties of the browser and a few loaded elements to ensure that the expected page was returned. The first thing we will need is a new Unit Test class to start our testing.

  1. Create a new class (Right click on the "WatinSample" project and select Add –> Class…), called WhenViewingTheGoogleHomePage.
  2. Mark the class as public.
  3. Add the MSTest [TestClass] attribute to the new class.
  4. Compile.
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace WatinSample
{
  [TestClass]
  public class WhenViewingTheGoogleHomePage
  {
  }
}

Make an instance of the browser

Now that we have a test class, we can start writing WatiN code. Each of our tests will first need a Browser object to test against. Using methods attributed with TestInitialize and TestCleanup, we can create a browser instance before the test starts and shut it down when the test is complete.

Creating an instance of a browser in WatiN is easy: simply create a new instance of the IE class, passing in a URL. We can assign this new instance to a field of type Browser, the base class of all browser classes in WatiN. Currently, WatiN supports Internet Explorer and Firefox.
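Because the concrete browser classes share the Browser base type, the browser choice can be isolated to one spot. A minimal sketch, assuming WatiN 2.x where both IE and FireFox derive from Browser (the factory method here is my own illustration, not part of WatiN):

```csharp
using WatiN.Core;

class BrowserFactoryExample
{
    // Tests written against the Browser base type don't care which
    // concrete browser is running; swap it in one place.
    static Browser CreateBrowser(bool useFirefox, string url)
    {
        if (useFirefox)
            return new FireFox(url);   // requires Firefox + the WatiN add-on
        return new IE(url);            // requires Internet Explorer
    }
}
```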

  1. Create a private field in the test class named browserInstance of type WatiN.Core.Browser. Add a using statement to WatiN.Core if you wish.
  2. Create a test initialization method named WithAnInstanceOfTheBrowser and give it the [TestInitialize] attribute. Within this method, create a new instance of the IE class, passing in the Google URL, http://www.google.com, and assigning the instance to the browserInstance field.
  3. Finally, create a test cleanup method named ShutdownBrowserWhenDone and give it the [TestCleanup] attribute. Within this method, execute the Close() method on our browser instance and assign the field to null to assist with object disposal.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using WatiN.Core;

namespace WatinSample
{
  [TestClass]
  public class WhenViewingTheGoogleHomePage
  {
    Browser browserInstance;

    [TestInitialize]
    public void WithAnInstanceOfTheBrowser()
    {
      browserInstance = new IE("http://www.google.com");
    }

    [TestCleanup]
    public void ShutdownBrowserWhenDone()
    {
      browserInstance.Close();
      browserInstance = null;
    }
  }
}

Our First Tests: Checking for existence of an element

There are three prominent items on the Google home page: the Google logo, the search criteria text box, and the search button. Using WatiN, we can check for them all. The WatiN Browser object contains an Elements collection, a flattened collection of every element in the entire DOM. Like any collection, you can use LINQ and lambda expressions to search for items within it. Alternatively, you may use the Element method, which accepts the same lambda expression that would be used within the Where extension method on the collection and returns the first or default element. For more specific searches, WatiN's Browser object includes similar collections and methods for searching explicitly for Images (<IMG>), Paras (<P>), Text Fields (<INPUT type="text" />), and so on.

On each returned Element (or derived Para, Image, or Text Field, etc., all of which inherit from Element), WatiN supplies properties for accessing the CSS Class, Id, InnerHtml, Name, Tag, Text, Value, or many other attributes. The method GetAttributeValue(string attributeName) is provided for accessing other attributes that are not explicitly defined on the object (uncommon attributes and custom attributes). Finally, elements also contain a Style property, which not only gives access to the inline style attribute, but also any CSS properties associated with the element from Internal Style (in the Page Head) or External Style (in an external style sheet).
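The lookup and attribute patterns described above can be sketched as follows. This is illustrative only: it assumes an already-loaded Browser instance, and the data-owner attribute is a hypothetical custom attribute invented for the example.

```csharp
using System.Linq;
using WatiN.Core;

class ElementLookupSketch
{
    static void Demo(Browser browser)
    {
        // Flattened DOM search: the Elements collection plus LINQ.
        Element firstImage = browser.Elements
            .FirstOrDefault(e => e.TagName == "IMG");

        // Equivalent lookup via the Element method and a lambda.
        Element logo = browser.Element(e => e.Id == "logo");

        // Explicitly defined properties...
        string id = logo.Id;
        string cssClass = logo.ClassName;

        // ...or any other attribute via GetAttributeValue.
        string owner = logo.GetAttributeValue("data-owner"); // hypothetical attribute

        // Style exposes inline, internal, and external CSS values.
        string color = logo.Style.Color;
        string visibility = logo.Style.GetAttributeValue("visibility");
    }
}
```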

On to checking for the three elements within the Google home page: the logo, the criteria input, and the search button. First, check for the existence of the Google logo graphic. The image can be found by searching the DOM for an image with an Id of "logo". WatiN works very closely with lambda expressions, so we can use one to help us find our graphic.

  1. Create a new public method named PageShouldContainGoogleLogo.
  2. Add the MSTest [TestMethod] attribute to the method.
  3. Search for and assert on the existence of an image with the Id of "logo".
  4. Optionally, we can also check that the image has the expected Alt attribute; in this case, the value should be "Google".
  5. Compile and run the test. The test should pass.
[TestMethod]
public void PageShouldContainGoogleLogo()
{
  Image googleLogo;
  googleLogo = browserInstance.Image(img => img.Id == "logo");
  Assert.IsTrue(googleLogo.Exists);
  Assert.AreEqual("Google", googleLogo.Alt);
}

Next, check for the existence of the search criteria input box. WatiN refers to these elements as Text Fields, using the TextField type. Additionally, this form field is identified by its Name rather than its Id. In Google, the name given to the criteria input is "q".

  1. Create a new public method named PageShouldContainSearchCriteriaInput and give it the [TestMethod] attribute.
  2. Search for and assert on the existence of a Text Field with the name "q".
  3. Compile and run the test. The test should pass.
[TestMethod]
public void PageShouldContainSearchCriteriaInput()
{
  TextField criteriaInput;
  criteriaInput = browserInstance.TextField(tf => tf.Name == "q");
  Assert.IsTrue(criteriaInput.Exists);
}

Finally, check for the existence of the search button using the Button method. In our lambda expression, it is not important to know whether the field is identified by a Name property or an Id attribute, as WatiN supplies an IdOrName property to help us find the element. The value that identifies the button is "btnG".

  1. Create a new public method named PageShouldContainSearchButton and give it the [TestMethod] attribute.
  2. Search for and assert on the existence of a Button with the Id or Name of "btnG".
  3. Optionally, we can also check the value of the button, which is the text displayed on the button on-screen. This text should be "Google Search".
  4. Compile and run the test. The test should pass.
[TestMethod]
public void PageShouldContainSearchButton()
{
  Button searchButton;
  searchButton = browserInstance.Button(btn => btn.IdOrName == "btnG");
  Assert.IsTrue(searchButton.Exists);
  Assert.AreEqual("Google Search", searchButton.Value);
}

Working with Style

WatiN can access properties on the DOM beyond just Text values and Alt attributes. WatiN also has full access to the style that CSS has applied to an element. Let's check out a few CSS properties, both those explicitly defined by WatiN and those implicitly accessible through the WatiN framework.

For our first style check, we'll take a look at the default font family used on the Google Home Page. Font Family is one of the explicitly available style properties on a WatiN element. Some others, like Color, Display, and Height are also explicitly defined.

  1. Create a new public test method named BodyShouldUseArialFontFamily.
  2. Assert that the font family assigned to the body matches "arial, sans-serif".
  3. Compile and run the test. The test should pass.
[TestMethod]
public void BodyShouldUseArialFontFamily()
{
  Assert.AreEqual("arial, sans-serif", browserInstance.Body.Style.FontFamily);
}

For our second style check, we will look for an implicit style definition. At the top of the Google Home Page is a series of links to other areas of Google, such as Images, Videos, Maps, and News. At the end of this list is a More link that, when clicked, displays a hidden DIV tag containing even more links, such as Books, Finance, and Google Translate. Since we do not have any code in our test initialization that interacts with the browser, and thus nothing that clicks the More link, that DIV should still be hidden. However, since Visibility isn't an explicitly defined style property within WatiN, we need to use the GetAttributeValue method to retrieve the current visibility setting.

  1. Create a new public test method named MoreItemsShouldNotBeVisibleOnPageLoad.
  2. Search for the More Items DIV. Its Id is "gbi".
  3. Using the property lookup method, GetAttributeValue(string attributeName), check that the Visibility is set to "hidden".
  4. Compile and run the test. The test should pass.
[TestMethod]
public void MoreItemsShouldNotBeVisibleOnPageLoad()
{
  var googleBarMoreItems = browserInstance.Div(gbi => gbi.Id == "gbi");
  Assert.AreEqual("hidden",
    googleBarMoreItems.Style.GetAttributeValue("visibility"));
}

Exercise 2: Interacting with the Browser

Browser Integration tests are more than just loading a page and checking a few element attributes. Our tests may also need to enter values into form fields, click links and buttons, or interact with browser navigation like the back button. WatiN fully supports all of these features in a very intuitive fashion.

A new test class, this time with Search Capability

Create a new test class, similar to what we did in Exercise 1, calling it WhenViewingGoogleSearchResultsForComeJamWithUs. Also add the TestInitialize and TestCleanup methods that open and close the browser. This time, however, after loading http://www.google.com, we will enter a value into the search criteria input and then click the Google Search button.

  1. Create a new class named WhenViewingGoogleSearchResultsForComeJamWithUs, similar to what was done in Exercise 1.
  2. Add in the TestInitialize and TestCleanup methods from Exercise 1. Name the Initialize method WithAnInstanceOfTheBrowserSearchingGoogle.
  3. After the code that initializes the IE class, find the search criteria Text Field and set its value to "Come Jam With Us".
  4. After setting the Text Field value, click the Google Search button by calling the Click() method on the Button class.
  5. Compile.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using WatiN.Core;

namespace WatinSample
{
  [TestClass]
  public class WhenViewingGoogleSearchResultsForComeJamWithUs
  {
    Browser browserInstance;

    [TestInitialize]
    public void WithAnInstanceOfTheBrowserSearchingGoogle()
    {
      browserInstance = new IE(@"http://www.google.com");
      TextField criteria =
        browserInstance.TextField(tf => tf.Name == "q");
      criteria.Value = "Come Jam With Us";
      Button search =
        browserInstance.Button(btn => btn.IdOrName == "btnG");
      search.Click();
    }

    [TestCleanup]
    public void ShutdownBrowserWhenDone()
    {
      browserInstance.Close();
      browserInstance = null;
    }
  }
}

With this code, our initialized test will load the Google Home Page and conduct a search for "Come Jam With Us".

Validating the Search Results Page

For our first verification, let's check the URL for the browser window. The search result URL should contain the search criteria in the URL's query string; we can validate this using the URL property on our instance of the Browser object.

  1. Create a new public test method named BrowserUrlShouldContainSearchCriteria.
  2. Validate that the current browser URL contains the search criteria information, "q=Come+Jam+With+Us".
  3. Compile and run the test. The test should pass.
[TestMethod]
public void BrowserUrlShouldContainSearchCriteria()
{
  Assert.IsTrue(browserInstance.Url.Contains("q=Come+Jam+With+Us"));
}

Finding Child Elements

With WatiN, we are not just limited to searching for items directly from the Browser object. We can also search for child elements directly from their parent element or any ancestor element. Our search results should contain a search result item linking to the Come Jam With Us web site. The Google Results page contains a DIV identified as "res" that serves as a container for all search result information. Rather than checking that our Come Jam With Us link exists somewhere on the page, we should search for it directly within the results DIV.

  1. Create a new public test method named ResultsShouldContainLinkToComeJamWithUs.
  2. From the browser instance, find a DIV identified as "res".
  3. Assert that a link to http://www.comejamwithus.org exists within the "res" DIV.
  4. Compile and run the test. The test should pass.
[TestMethod]
public void ResultsShouldContainLinkToComeJamWithUs()
{
  Link comeJamWithUs;
  Div searchResults = browserInstance.Div(div => div.IdOrName == "res");
  comeJamWithUs =
    searchResults.Link(link => link.Url == @"http://www.comejamwithus.org/");
  Assert.IsTrue(comeJamWithUs.Exists);
}

Inner Text versus InnerHtml

An element may contain many child elements. An anchor tag—<A href="#">—can contain text, and child elements may make portions of that text bold, italic, underlined, or even bright red. Through WatiN, we can access that inner content as straight text without the formatting, or as the InnerHtml including all of the child elements.

  1. Create two public test methods, one named ResultsLinkContainsComeJamWithUsText and the other named ResultsLinkContainsComeJamWithUsHtml.
  2. In both methods, search for the results DIV, as we did in the previous test.
  3. In both methods, search through the results DIV for a link with a URL matching http://www.comejamwithus.org
  4. In the Text method, assert that the text of the link matches "Come Jam with us (Software Development Study Group)". Note that the value contains no child HTML elements.
  5. In the HTML method, assert that the InnerHtml of the link matches "<EM>Come Jam with us</EM> (Software Development Study Group)". Note that for the same link, we now have the emphasis tags surrounding Come Jam With Us.
  6. Compile and run both tests. The tests should pass.
[TestMethod]
public void ResultsLinkContainsComeJamWithUsText()
{
  Link comeJamWithUs;
  Div searchResults = browserInstance.Div(div => div.IdOrName == "res");
  comeJamWithUs =
    searchResults.Link(link => link.Url == @"http://www.comejamwithus.org/");
  Assert.AreEqual(@"Come Jam with us (Software Development Study Group)",
    comeJamWithUs.Text);
}

[TestMethod]
public void ResultsLinkContainsComeJamWithUsHtml()
{
  Link comeJamWithUs;
  Div searchResults = browserInstance.Div(div => div.IdOrName == "res");
  comeJamWithUs =
    searchResults.Link(link => link.Url == @"http://www.comejamwithus.org/");
  Assert.AreEqual(
    @"<EM>Come Jam with us</EM> (Software Development Study Group)",
    comeJamWithUs.InnerHtml);
}

Back to the Start

As previously mentioned, we can also fully interact with the browser itself. Our test initialization started from the Google Home Page and performed a search. Using functionality built into WatiN, we can execute the browser's back navigation to return to the previous page.

For our next test, execute a back navigation and verify that the browser's URL matches http://www.google.com/.

  1. Create a public test method named PageShouldHaveComeFromGoogleDotCom.
  2. Execute back navigation in the browser by calling the Back() method on browserInstance.
  3. Validate that the browser URL matches http://www.google.com/.
  4. Compile and run the test. The test should pass.
[TestMethod]
public void PageShouldHaveComeFromGoogleDotCom()
{
  browserInstance.Back();
  string previousUrl;
  previousUrl = browserInstance.Url;
  Assert.AreEqual(@"http://www.google.com/", previousUrl);
}

Putting it all together

Some interactions on a page cause element properties to change. An example of this is the More link from Exercise 1; when the end-user clicks the More link, the More Items DIV appears because the link's click event changes the Visibility style property of the DIV to visible. For our final test, we will use what we have learned to test this functionality.

  1. Create a new public test method named MoreItemsShouldBeVisibleOnMoreLinkClick.
  2. Search for the header bar of Google links, a DIV with an Id of "gbar".
  3. Within "gbar", search for the More Items DIV by an Id or Name of "gbi".
  4. Assert that the visibility style property has a value of "hidden".
  5. Within "gbar", search for the More link by its class name, "gb3". Note that since a class attribute may contain multiple class definitions, this is accomplished by validating that the class attribute contains the class you are searching for.
  6. Execute a Click event on the link.
  7. Assert that the visibility style property of the More Items DIV has changed to "visible".
[TestMethod]
public void MoreItemsShouldBeVisibleOnMoreLinkClick()
{
  var googleBar = browserInstance.Div(gbar => gbar.Id == "gbar");
  var googleBarMoreItems = googleBar.Div(gbi => gbi.Id == "gbi");
  Assert.AreEqual("hidden",
    googleBarMoreItems.Style.GetAttributeValue("visibility"));
  var googleBarMoreLink =
    googleBar.Link(link => link.ClassName.Contains("gb3"));
  googleBarMoreLink.Click();
  Assert.AreEqual("visible",
    googleBarMoreItems.Style.GetAttributeValue("visibility"));
}

That's It

Now that we have spent some time on basic properties, interactions, and style sheets within the WatiN framework, hopefully you can apply this to your own application and get started with your own browser-based integration tests. If you would like more information, I encourage you to check out the WatiN site at http://watin.sourceforge.net. And as always, if you have any questions, drop me a line.

Wednesday, April 7, 2010 11:27:53 AM (Eastern Daylight Time, UTC-04:00)

Filed under: Programming | Testing | Tools

Recently, I was writing unit tests for a web application built on Castle ActiveRecord. My goal was to mock ActiveRecord's data store, rather than use a Microsoft SQL Server database for testing. SQL Server backing just would not fit my needs, where a mock data store would serve much better:

  • I did not want a SQL Server installation to be a requirement for me, the other developers, and my Continuous Integration server.
  • I wanted something fast. I didn't want to have to wait for SQL Server to build / tear down my schema.
  • I wanted something isolated, so the other developers, and my CI server, and I wouldn't have contention over the same database, but didn't want to have to deal with independent SQL Server instances for everyone.

Essentially what I wanted was a local, in-memory database that could be quickly initialized and destroyed specifically for my tests. The solution was SQLite for ADO.NET, using an in-memory SQLite instance. Brian Genisio has a fantastic write-up on mocking the data store for Castle ActiveRecord using SQLite for ADO.NET. The post made my day, since I was looking for a way to do this, and he had already done all of the work <grin/>. I encourage you to read his post first, as the rest of this post assumes you have already done so.
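The key property of an in-memory SQLite database is that it lives exactly as long as its connection, which is why the provider class later in this post deliberately holds one connection open. A small sketch, assuming the System.Data.SQLite ADO.NET provider:

```csharp
using System.Data.SQLite;

class InMemorySqliteSketch
{
    static void Demo()
    {
        // "Data Source=:memory:" creates a database that exists only in RAM.
        using (var connection =
            new SQLiteConnection("Data Source=:memory:;Version=3;New=True;"))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                // Schema and data live only on this connection.
                command.CommandText =
                    "CREATE TABLE Blog (Id INTEGER PRIMARY KEY)";
                command.ExecuteNonQuery();
            }
        } // connection disposed: the database and all of its data are gone
    }
}
```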

Brian's post was a great help to me; I made a few enhancements to what he started to make it fit my needs even more.

My updated version of Brian's ActiveRecordMockConnectionProvider class:

using System;
using System.Collections;
using System.Data;
using System.Reflection;
using Castle.ActiveRecord;
using Castle.ActiveRecord.Framework;
using Castle.ActiveRecord.Framework.Config;
using NHibernate.Connection;

namespace ActiveRecordTestHelper
{
  public class ActiveRecordMockConnectionProvider : DriverConnectionProvider
  {
    private static IDbConnection _connection;

    private static IConfigurationSource MockConfiguration
    {
      get
      {
        var properties = new Hashtable
          {
            {"hibernate.connection.driver_class",
              "NHibernate.Driver.SQLite20Driver"},
            {"hibernate.dialect", "NHibernate.Dialect.SQLiteDialect"},
            {"hibernate.connection.provider", ConnectionProviderLocator},
            {"hibernate.connection.connection_string",
              "Data Source=:memory:;Version=3;New=True;"}
          };

        var source = new InPlaceConfigurationSource();
        source.Add(typeof (ActiveRecordBase), properties);

        return source;
      }
    }

    private static string ConnectionProviderLocator
    {
      get
      {
        return String.Format("{0}, {1}", TypeOfEnclosingClass.FullName,
                             EnclosingAssemblyName.Split(',')[0]);
      }
    }

    private static Type TypeOfEnclosingClass
    {
      get { return MethodBase.GetCurrentMethod().DeclaringType; }
    }

    private static string EnclosingAssemblyName
    {
      get { return Assembly.GetAssembly(TypeOfEnclosingClass).FullName; }
    }

    public override IDbConnection GetConnection()
    {
      if (_connection == null)
        _connection = base.GetConnection();

      return _connection;
    }

    public override void CloseConnection(IDbConnection conn) {}

    /// <summary>
    /// Destroys the connection that is kept open in order to keep the
    /// in-memory database alive. Destroying the connection will destroy
    /// all of the data stored in the mock database. Call this method when
    /// the test is complete.
    /// </summary>
    public static void ExplicitlyDestroyConnection()
    {
      if (_connection != null)
      {
        _connection.Close();
        _connection = null;
      }
    }

    /// <summary>
    /// Initializes ActiveRecord and the database that ActiveRecord uses to
    /// store the data. Call this method before the test executes.
    /// </summary>
    /// <param name="useDynamicConfiguration">
    /// Use reflection to build configuration, rather than the configuration
    /// file.
    /// </param>
    /// <param name="types">
    /// A list of ActiveRecord types that will be created in the database
    /// </param>
    public static void InitializeActiveRecord(bool useDynamicConfiguration,
                                              params Type[] types)
    {
      IConfigurationSource configurationSource = useDynamicConfiguration
                                       ? MockConfiguration
                                       : ActiveRecordSectionHandler.Instance;
      ActiveRecordStarter.Initialize(configurationSource, types);
    }

    /// <summary>
    /// Initializes ActiveRecord and the database that ActiveRecord uses to
    /// store the data. Configuration is dynamically generated using
    /// reflection. Call this method before the test executes.
    /// </summary>
    /// <param name="types">
    /// A list of ActiveRecord types that will be created in the database
    /// </param>
    [Obsolete("Use InitializeActiveRecord(bool, params Type[])")]
    public static void InitializeActiveRecord(params Type[] types)
    {
      InitializeActiveRecord(true, types);
    }
  }
}

In my class I have overloaded the method InitializeActiveRecord to include the boolean parameter useDynamicConfiguration, governing whether the configuration is dynamically built using reflection or the configuration in your app.config is used instead. The original parameterless overload passes true (dynamic configuration) to preserve backward compatibility.

Why? Brian's original code, as-is, is meant to be dropped in as a new class within your test assembly, and uses reflection to dynamically determine the provider information, including the fully-qualified class name and assembly of the new DriverConnectionProvider. Reflection means little effort when I want to drop the class into a new test assembly: drop it in and go; no need to even modify the app.config. However, if I want to switch my provider back to SQL Server or some other platform, I have to modify the code and recompile.

My modifications remove the restriction of configuration in compiled code, allowing configuration to be placed in app.config while preserving the existing functionality for backward compatibility. By allowing app.config-based configuration, users can quickly switch back and forth between SQLite and SQL Server databases without having to modify and recompile the application. To use this customized ActiveRecordMockConnectionProvider class without dynamic configuration, add the following code to the configuration block of your test's app.config.

    <add key="hibernate.connection.driver_class"
      value="NHibernate.Driver.SQLite20Driver" />
    <add key="hibernate.dialect" value="NHibernate.Dialect.SQLiteDialect" />
    <add key="hibernate.connection.provider"
      value="ActiveRecordTestHelper.ActiveRecordMockConnectionProvider, ActiveRecordTestHelper" />
    <add key="hibernate.connection.connection_string"
      value="Data Source=:memory:;Version=3;New=True;" />

The catch is that you will need to know the fully-qualified class and assembly information for your provider (the hibernate.connection.provider value, above). This means you would have to modify it for every test assembly. To get around this, compile the code into a separate assembly (I called mine 'ActiveRecordTestHelper.dll') and reference this new assembly in your test assemblies. By using a separate assembly, you no longer need to modify the activerecord configuration block for every instance, and can reuse the same block everywhere the new assembly is referenced.

And to switch over from in-memory SQLite to SQL Server, just comment out the SQLite block and uncomment the SQL Server block (or whatever other provider you are using).
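Putting it together, a test fixture using this provider might look like the following sketch; the Blog type and the test method names are hypothetical examples, not part of the download:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using ActiveRecordTestHelper;

[TestClass]
public class WhenSavingABlogPost
{
    [TestInitialize]
    public void WithAnInMemoryDataStore()
    {
        // false: read the ActiveRecord configuration from app.config;
        // true would build the configuration dynamically via reflection.
        ActiveRecordMockConnectionProvider.InitializeActiveRecord(
            false, typeof (Blog)); // Blog is a hypothetical ActiveRecord type
    }

    [TestCleanup]
    public void DestroyTheDataStore()
    {
        // Closing the held connection wipes the in-memory database,
        // isolating each test from the next.
        ActiveRecordMockConnectionProvider.ExplicitlyDestroyConnection();
    }
}
```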

Download: ActiveRecordMockConnectionProvider.zip

  • Source code for the ActiveRecordMockConnectionProvider class
  • Sample Class that uses the new provider
  • Sample app.config containing the ActiveRecord block using the provider.
  • Compiled versions of ActiveRecordTestHelper.dll

As always, this code is provided with no warranties or guarantees. Use at your own risk. Your mileage may vary.
And thanks again to Brian Genisio.

Thursday, October 30, 2008 9:10:01 AM (Eastern Standard Time, UTC-05:00)

Filed under: Testing

If you are new to testing, are looking for some experience in testing, or just want to have fun breaking things, check out the WordPress Bug Hunt on 5 July 2006. The WordPress clan is holding a Bug Hunt against versions 2.0.4 and 2.1 in true “fixing them as fast as we can break them” style, as volunteer coders will be jumping on the bugs as fast as you can get them logged. These guys would truly appreciate any help you can supply, and you can have some fun unleashing all of those crazy testing / breaking / hacking tactics that sit in your closet.

WordPress is what runs this site, so your assistance ultimately helps me out, too.

Thursday, June 29, 2006 9:18:11 AM (Eastern Daylight Time, UTC-04:00)

Filed under: Testing

I never understood the point of manual test scripts. They annoy me. I view them as nothing more than candidates for automation. I have never come across a manual script that wouldn't be better used as an automation script, which of course violates the inherent nature of a manual test script. The only value in manual test scripts is giving them to clients, so that they can run through the new app you just created and feel comfortable with it (and learn about the app as they work through the scripts).

Jonathan Kohl presents the perfect argument about why manual test cases should be extinct. Everyone should read this. Developers should read it, clients should read it, testers should read this, and, most definitely, project managers should read this.

Most bugs will never be found by a manual script. Manual scripts only illustrate the "conventional" click-path for completing a task, and the developer should have already gone through this during their own testing; there is a high probability that this path already works. End-users are never going to follow this path, anyway; they will do something that you entirely don't expect. They will hit the 'Back' button when you didn't plan for it, or double-click the 'Submit' button when you didn't handle it, or bookmark the third step in a five-step wizard. Scenarios like these will never be tested in a manual script, but could be tested if so much of the industry wasn't convinced that scripts are the holy grail, and will be tested by any tester worth his salt.

Wednesday, November 16, 2005 11:56:33 AM (Eastern Standard Time, UTC-05:00)

Filed under: Performance | Testing

Outside of the QA world (and unfortunately, sometimes in the QA world), I’ve heard people toss around ‘Performance Testing’, ‘Load Testing’, ‘Scalability Testing’, and ‘Stress Testing’, yet always mean the same thing. My clients do this. My project managers do this. My fellow developers do this. It doesn’t bother me–I’m not some QA psycho that harasses anyone that doesn’t use exactly the correct term–but I do smirk on the inside whenever one of these offenses occurs.

Performance testing is not load testing is not scalability testing is not stress testing. They are not the same thing. They closely relate, but they are not the same thing.

  • Load testing is testing that involves applying a load to the system.
  • Performance testing evaluates how well the system performs.
  • Stress testing looks at how the system behaves under a heavy load.
  • Scalability testing investigates how well the system scales as the load and/or resources are increased.

Alexander Podelko, Load Testing in a Diverse Environment, Software Test & Performance, October 2005.

Performance Testing

Any type of testing–and I mean any type–that measures the performance (essentially, speed) of the system in question. Measuring the speed at which your database cluster switches from the primary to secondary database server when the primary is unplugged is a performance test and has nothing to do with the load on the system.

Load Testing

Any type of test that is dependent upon load or a specific load being placed on the system. Load testing is not always a performance test. When 25 transactions per second (tps) are placed on a web site, and the load balancer is monitored to ensure that traffic is being properly distributed to the farm, you are load testing without a care for performance.

Stress Testing

Here is where I disagree with Alexander: stress testing places some sort of unexpected stress on the system, but it does not have to be a heavy load. Stress testing could include testing a web server where one of its two processors has failed, a load-balanced farm with some of its servers dropped from the cluster, a wireless system with a weak signal or increased signal noise, or a laptop outside in below-freezing temperatures.

Scalability Testing

Scalability testing is not about a single level of load or resources; it examines how the system responds as load or resources are changed. Does a system produce timeout errors when you increase the load from 20tps to 40tps? At 40tps, does the system produce fewer timeout errors as the number of web servers in the farm is increased from 2 servers to 4? Or when the Dell PowerEdge 2300s are replaced with PE2500s?

Every type of testing in QA is loosely defined, including the countless types of functional testing, reliability testing, performance testing, and so on. Often, a single test can fit into a handful of testing categories. Testing how fast the login page loads after three days of 20tps traffic can be a load test, a performance test, and a reliability test. How it should be categorized depends on what you are trying to achieve. In this example, it is a performance test, since the goal is to measure ‘how fast’. If you change the question to ‘is it slower after three days’, then it is a reliability test. The point is that no matter where the test fits in your “Venn Diagram of QA,” the true identity of a test is based on what you are trying to get out of it. The rest is just a means to an end.

Sunday, October 16, 2005 1:41:01 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [1] - Trackback

I know. I haven’t posted in a while. But I’ve been crazy busy. Twelve hour days are my norm, right now. But enough complaining; let’s get to the good stuff.

By now you know my love for PsExec. I discovered it when trying to find a way to add assemblies to a remote GAC [post]. I’ve found more love for it. Now, I can remotely execute my performance tests!

Execute a LoadRunner test using NAnt via PsExec:

<exec program="psexec.exe" basedir="${P1}"
  commandline='\\${P2} /u ${P3} /p ${P4} /i /w "${P5}" cmd /c wlrun -Run
    -InvokeAnalysis -TestPath "${P6}" -ResultLocation "${P7}"
    -ResultCleanName "${P8}"' />

(I’ve created generic parameter names so that you can read it a little better.)
P1: Local directory for PsExec
P2: LoadRunner Controller Server name
P3: LoadRunner Controller Server user username. I use an Admin-level ID here, since this ID also needs rights to capture Windows PerfMon metrics on my app servers.
P4: LoadRunner Controller Server user password
P5: Working directory on P2 for 'wlrun.exe', such as C:\Program Files\Mercury\Mercury LoadRunner\bin
P6: Path on P2 to the LoadRunner scenario file
P7: Directory on P2 that contains all results from every test
P8: Result Set name for this test run
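To make the placeholders concrete, here is roughly what the assembled command looks like by the time PsExec runs it. All server, account, and path names below are made up for illustration; only the switches come from the NAnt task above.

```shell
psexec \\LRCONTROL01 /u MYDOMAIN\lr_admin /p s3cret /i /w "C:\Program Files\Mercury\Mercury LoadRunner\bin" cmd /c wlrun -Run -InvokeAnalysis -TestPath "C:\Scenarios\Checkout.lrs" -ResultLocation "C:\PerfResults" -ResultCleanName "Checkout_Nightly"
```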

'-InvokeAnalysis' will automatically execute LoadRunner Analysis at test completion. If you properly configure your Analysis default template, Analysis will automatically generate the result set you want, save the Analysis session information, and create an HTML report of the results. Now, put IIS on your Controller machine, create a virtual directory pointing to the main results directory in P7, and you will have access to the HTML report within minutes after your test completes.

Other ideas:

  • You can also hook it up to CruiseControl and have your CC.Net report include a link to the LR report.
  • Create a nightly build in CC.Net that will compile your code, deploy it to your performance testing environment, and execute the performance test. When you get to work in the morning, you have a link to your full performance test report waiting in your inbox.

The catch for all of this: you need a session logged in to the LoadRunner controller box at all times. The '/i' in the PsExec command means that it interacts with the desktop.


PsExec is my favorite tool right now. I can do so many cool things. I admit, as a domain administrator, I also get a little malicious sometimes. The other day I used PsExec to start up Solitaire on a co-worker’s box, then razzed him for playing games on the clock.

Friday, October 14, 2005 11:35:40 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Business | Testing

I remember a day in my past when my project manager approached me, relaying a client request. This client always received a copy of the test cases we used when testing the application, and their request involved modifying our practices regarding case creation. Through this request—and you know how client ‘requests’ go—the client was convinced that we would be more efficient and better testers.

Fortunately I was able to convince my project manager that it was not a good idea, or at least “not a good idea right now.” We relayed that we appreciated any suggestions to improve our process, but “would not be implementing this suggestion at this time.”

I am constantly looking for ways to improve my craft, and I have received many quality suggestions from clients, usually in a form similar to “Our testing department does [this]. You should take a look at it, and see if you can benefit from it.” Such suggestions carry the mood of “If you implement it, great. If you don’t, that’s great, too.” However, be wary of ‘missions from God’ to change your practices. The client’s plan may be driven by budget, promoting inferior methods that save a few dollars. It may be based on practices of their own that are less refined or mature than yours, again resulting in inferior methods. Finally, changing your practices mid-stream in a project–as many adopted “client requests” manifest–will disrupt flow, lowering quality overall.

Your client is in the business of making whozigadgets. You trust that they know what they are doing, and know far better than you how to do it. You are in the business of testing. Likewise, your client should trust that you are the subject matter expert in your field.

I’m not advocating that clients know nothing about what you do, or that everything they say about your craft should be blown off. All qualifying* suggestions should be thoroughly considered and evaluated; that’s good business. Perhaps there is a place in your organization for the process change, and it would make you more efficient at what you do. However, I am advocating that you should not take a gung-ho attitude of pleasing the client in any way possible and implement every process change they utter; that’s suicide. Your testing team will turn into a confused, ad-hoc organization. Your quality–and with it, your reputation–will crumble.

* Qualifying Suggestion: Any suggestion that is reasonable, intelligent, and well thought out. For example, “abandon all QA to save costs, and rely on the client’s internal testing to find all bugs” does not qualify.

Monday, September 12, 2005 1:25:09 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Business | Testing

As Lead QA, I have the fun responsibility of screening resumes and conducting phone interviews. I weed out the hackers from the script kiddies before we bring them in to face the firing squad. It never fails to amaze me how people embellish their resumes beyond reasonable limits. I am particularly fond of people who list skills they cannot define, and of people who don’t proofread their resume when applying for a detail-oriented position.

As I ran through my stack of paper, I came across one unfortunate soul who did both. I was quite amused, in a genuinely entertained sense. He proclaimed his proficiency in ‘Quick Teat Professional 8.0′, presumably an application through which you can automate cow milking, complete with data drivers and checkpoints. “OK. So he missed the ’s’ and didn’t catch it. So what?” Well, he also bolded the misspelling, perhaps to point out his attentiveness. This was only slightly before listing its usage in 2003 for a former employer whose name he also misspelled. (Note: QTP v8.0 was not available until the summer of 2004.)

However, and forgivably, my recruiter is not aware of such things and had already scheduled a phone interview between me and my entertaining candidate. I honored the call, giving the prospect a chance at redemption.

He failed.

Question number two asks the candidate to list the types of testing with which he or she has experience. His reply included integration testing (also stated in his resume, correctly spelled). My follow-up asked him to define integration testing, a common ploy to make sure I’m not just being fed buzzwords. It was a definition he could not supply, or even attempt.

A candidate should be able to define every ‘word’ he claims experience with. If you cannot define it, you obviously do not have enough experience in it for it to be applicable. If you cannot define ‘integration testing’, I will not hold it against you, provided you do not list experience in it. Similarly, if you do not list it and I ask you what you know about it, be straight: tell me up front that you cannot define it. You will rate higher in my book than someone who stumbles through an obviously concocted and blatantly incorrect response.

BTW, if you are looking for a position as a quality analyst, and can work in the Brighton, Michigan area, drop me a line and a resume. I would be happy to hear from you. Ability to define ‘integration testing’ a plus.

Tuesday, August 16, 2005 1:29:53 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Programming | Testing | Tools | Watir

Scott Hanselman is my new hero. He is filling the hole—the one thing preventing Watir from becoming a real competitor in the automated functional testing market: script recording. Watch out, Mercury; by creating WatirMaker, Scott is opening the flood gates, and Watir is going to come pouring through.

This changes everything.

I started out my career as a developer, but as I noted in an earlier blog, I get much more enjoyment from breaking things than I do from building things, so I jumped ship. With my development experience, I can delve into making some rather wicked scripts for QTP, LoadRunner, and lately, Watir. However, my testers don’t share my skill set. My biggest hurdle in ousting QTP and making Watir our standard is the lack of recording; I cannot expect every tester to start coding away in Ruby. It should come as no surprise that when I opened Scott’s blog this morning, I was so excited that I nearly wet myself.

It is a work in progress, but soon Scott hopes to have a fully functional recording tool for Watir. With WatirMaker, my testers can hit a button and start clicking away in IE; the tool will happily watch like a little kid on the sidelines, learning every move. My testers can all adopt Watir with open arms, and we can wave goodbye to that Mercury maintenance contract.

The only thing left to say is: “Scott…thanks!”

Wednesday, July 27, 2005 1:55:50 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [2] - Trackback

Filed under: Testing | Tools

If you are anything like me, you probably have the latest version of Internet Explorer and/or Firefox on your machine.
If you are anything like me, you have clients that don’t. They are often still supporting Internet Explorer 5, or some archaic version of Netscape.

Though it is a little dated, I found a rather helpful post on semicolon today. The post on multiple Internet Explorer versions in Windows discusses stand-alone versions of Internet Explorer available through an Internet Explorer browser archive from evolt.com. The post goes one step further, identifying a defect in IE where every version uses common registry settings, causing it to always identify itself as v6, even if you are using a different version. The post contains a workaround: drag this Version bookmarklet to your links toolbar, and when you click it, it will show your actual version.

I would like to take his post one step further still, to the full browser archive, which semicolon does not mention. Not only does evolt include Internet Explorer, but seemingly every browser ever available, such as Netscape Navigator, Opera, and Lynx.

Monday, July 25, 2005 1:59:37 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Programming | Testing | Tools | Watir

In case you haven’t heard of it yet, Watir is the greatest thing to hit automated functional testing since…well…ever. Watir (pronounced “water”), or Web Application Testing In Ruby, is an open source automated functional testing tool powered by Ruby. My company has been living off QuickTest Pro, and it is not much of a leap to Watir. Much like QTP, it automates an instance of Internet Explorer and navigates its way around your web site; however, unlike QTP, it doesn’t hijack your computer when you do it. With Watir, the IE window doesn’t have to be the foreground window, so you can get something else done while your test is executing. Watir also allows various checks much like QTP, but through programming it is capable of checking much more, such as object hierarchy or object style. (Yes, Watir can make sure that your validation messages are red!)

Your money manager will love Watir, too. Our switch from QTP will save us thousands of dollars per year from Mercury’s annual support costs. For a moment, I think our company president’s pupils turned to dollar signs like a cartoon.

If you are like me, and spend your QTP days in ‘Expert’ view (source code), you will pick Watir up quickly. I even find it better than QTP. Additionally, since a test is just a source code file, edited in Notepad if you like, it can be stored in your favorite source control application AND (this is a big ‘and’) your developers can execute the automated tests themselves without proprietary software like QTP. Watir’s easy integration with NUnit will also tie your automated functional tests in with applications like NAnt and CruiseControl.
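For a taste of what a Watir script looks like, here is the classic Google-search sample, sketched from memory against the 2005-era Watir API. It requires the watir gem and Internet Explorer on a Windows box, so treat the details as approximate rather than authoritative.

```ruby
require 'watir'   # classic Watir drives Internet Explorer through OLE

# Open IE, browse to Google, and search for the Pickaxe book.
ie = Watir::IE.start('http://www.google.com')
ie.text_field(:name, 'q').set('pickaxe')   # type into the search box
ie.button(:name, 'btnG').click             # click "Google Search"

# A text checkpoint, QTP-style, but in plain Ruby:
if ie.contains_text('Programming Ruby')
  puts 'Test passed.'
else
  puts 'Test failed.'
end

ie.close
```

Because it is just Ruby, the same script drops straight into an NUnit- or Test::Unit-style harness with no proprietary runner in sight.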

More Information

Read all about Watir.
Read Bret Pettichord’s (a Watir creator) blog entry about Watir.

Friday, July 15, 2005 2:19:59 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Business | Testing

Fictional scenario: Trek–Lance Armstrong’s bicycle sponsor–is behind schedule and over budget on creating a new cycle. They need to find a way to get their product out the door, find it now, and find it cheap. Now, imagine that they threw my grandmother on their bike, had her ride it around the block, and declared it fully tested and ready for mass production. Would you be satisfied? If they found 300 grandmothers and had them ride around the block twice, would that satisfy you? How about if they used 300 average Joes? Would that satisfy Lance Armstrong? Would he have full confidence in his ride for twenty-one days and over 3,500 km in the Tour? I doubt it. That bike wouldn’t even make it out of the warehouse, let alone to the starting line. That bike would not earn respect until it was rigorously tested in a scenario that at least simulates its intended use. So why do so many fail to put their web applications through the same trials?

Money? It will cost more money to fix it after launch than it will to test it during development, identify the issues early, and get them fixed before the product goes out the door.

Time? Well, time is money, so see above.

Experience? There are a lot of good, quality testers out there. If my mechanic doesn’t properly fix my car, I’ll take my car to a different mechanic.

I’m curious about the thoughts of everyone out there.

Friday, June 24, 2005 1:32:42 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Testing

We’ve all been there. You’re cruising down I-5, minding your own business, when the big SUV next to you decides that it is time to change lanes, and that it wants to occupy the space you are currently claiming for yourself. Or there’s the motor home doing 61 in the left lane passing the tractor-trailer in the right lane that’s doing 60. There’s the guy that passes everyone at the construction zone, repeatedly ignoring “Lane Ends. Merge Left,” and expects that someone will let him in when pylons encroach, and he will force the issue if they don’t.

All of this traffic congestion, incompetence, and utter disregard for fellow citizens pollutes your driving experience. Your work day is stressful enough. Why must you go through this on the way home?

Your web applications have to go through the same thing: A poorly-coded neighbor with the memory leak, that just keeps taking and taking from the available RAM, until there is nothing left for your app, just like that SUV; An application that didn’t get properly optimized, and hogs all of your available bandwidth, slowing down your application like that 40-foot RV; An evil report that thinks it is superior, and locks the entire database from outside access until it is finished generating that 400-page PDF.

When you are testing the performance of your application, make sure that the environment you are about to stuff it into is up to par. No matter how pristine your application looks in Staging, it is only going to be as good as the environment that you launch it to. If you ignore the big picture, and your application succumbs to the web-environment pollution, your application will be to blame. No matter how mediocre the environment is without your application, your superiors or clients will still say “the environment works just fine without your app.”

Build a testing environment that mimics production, and that includes any other applications or components that you will be sharing resources with. Create some generic scripts that will generate traffic against these neighbors and execute tests against your application. This will help identify any integration issues between you and your environment, and help eliminate any surprises when you launch.

The environment is supposed to work just fine with your app, too.

Monday, June 13, 2005 2:49:19 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Performance | Testing

In the QA forums I frequent, there are often questions about how to properly load test when you don’t have access to production or an identically built environment. Most companies won’t spring for the cash to build an environment that is identical to production; generally, testing environments are made up of hand-me-down servers that used to be in production. Of course, there is also the cost of test-suite licensing to produce a production-level load, and the near impossibility of mimicking real production traffic.

Though a production clone would be ideal, a watered down environment can be sufficient, and in some ways better. Bottlenecks are achieved faster, without having to push through 50 Mbps of data. Additionally, a “lesser” environment will be more sensitive to changes; your transaction may take 0.5 seconds on production-grade servers, and a defect that doubles it to 1.0 seconds is hardly noticeable, but on a lesser environment where that transaction takes 6.0 seconds, doubling it to twelve throws up red flags.

For a watered-down environment, try to lessen the horsepower of your system while matching the architecture. If your production environment is eight web servers that are all quad 3.2 GHz Xeons running Windows Server 2003 Web Edition, all load balanced through a hardware load balancer, you can bring it down to two web servers with less horsepower–perhaps dual 700 MHz P3s–but the servers should still run Windows Server 2003 Web Edition and be balanced with a hardware balancer. Do not drop below two web servers, because you will still want a load-balanced environment, and do not switch to Windows 2000 or to Microsoft’s NLB (Network Load Balancing). If your production web environment uses Windows 2000 and NLB, obviously use that technology in your testing environment; do not switch to Windows 2003 or a hardware load balancer.

Additionally, try to reduce equally throughout your environment. Don’t drop your web servers from Pentium 4s to Pentium 3s while dropping your database servers from Pentium 4s to an old 486 desktop. Equal reductions maintain your continuity, and in the end, your sanity. Unequal reductions introduce new problems that don’t exist in production, but will still happily waste your time and money. A major bottleneck might exist on your web servers, but the defect could be masked because you were database-bound by using that old 486.

The idea behind this is that many bugs can be introduced by a specific revision of your OS (Think of the problems from Windows XP SP2), from your style of network infrastructure, the version of your graphics driver, etc. You want as many common points as possible between your testing and production environments to eliminate any surprises when you launch your application. Ideally, your testing environment is an exact replica of your production environment, but unless you are making desktop applications, it is only a fantasy, so just try to get as close as you can. Use the same OS version, including the same service pack and the same installed hot fixes. Use the same driver versions, and configure the same settings on your web server software. You are trying to create a miniature version of your production environment, like a model car or a ship in a bottle. Pay attention to the details and you will be okay. To your application, the environments should be exactly the same; one is just a little snug.

Wednesday, May 25, 2005 2:32:15 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Performance | Testing

For love of all things QA, before you launch a new application, test production!

“What? That’s stupid! Why would I want to run a load test against production and risk an outage? That impacts my SLAs. I can’t impact my SLAs!”

Remember the number one rule of quality control: if you don’t find it, your customers will.

When you are about to launch a brand new application into your production environment, test that application against production. However, this only applies to new applications. A new application will introduce new, additional load on the environment, while an existing, revised application has already added its load to the system. Essentially, with an existing application, you already know how well the production environment can handle the additional demand generated by the application’s audience. A new application has not yet generated that load, and production has yet to prove itself.

There is no hard evidence that production can take the additional demand. Maybe your production load balancer can only handle another 5 MB/s, and your new application will demand another 7. Perhaps it is one of the switches, instead. Or for my recent life, maybe it is your ISP. You will not know until you test it, until you measure it, and “if you didn’t measure it, you didn’t do it.”

With a past project, my company created an intranet application for our client, and our application happened to be hosted off-site. The off-site environment was green and wasn’t hosting anything else, so our client had no issue with us testing this environment fully, since it was going to be production but wasn’t yet. The hosting company and their ISP rated the environment at 45 Mbps (that’s megabits–lower-case ‘b’), and based on the client’s traffic expectations, we needed about 30. It is a good thing we tested the site, because we found an issue with the load balancer at about 15 Mbps, a problem with server memory when it was processing enough transactions to produce 20 Mbps, a problem with the database switches when we were generating 22 Mbps, and–this one is the kicker–a bandwidth ceiling at 28. Though all of the routers, switches, balancers, and servers were performing well, we couldn’t get more than 28 Mbps to the web servers. It turns out that the ISP never expected anyone to use that 45 Mbps rating, and never tested to make sure they could handle it.

“If you didn’t measure it, you didn’t do it.”

Through two months of midnight-through-0600 testing, we upgraded the load balancer, added more memory, put in gigabit switches, had the ISP tweak their infrastructure, pushed through all of the data we needed, and successfully proved that the off-site environment and our new application could handle the load. But the environment still wasn’t fully tested. Our client used everyone’s favorite single sign-on, SiteMinder. However, they wouldn’t let us test the application against their productional SiteMinder policy servers. We could only use staging, and when the staging servers couldn’t handle the load, “that’s okay, because it’s staging.” But no matter how much we advocated, we couldn’t test production. We might impact the environment and the SLAs. So, we launched without testing it, and guess what happened? The policy servers failed, and they severely impacted their SLAs.

And to think, we could have tested that at 1:00 AM on a Saturday, and even if we fried the policy servers, they would have had all weekend to fix them. Most importantly, we would have identified the problem before the end users did. But what really cooked their goose was the difference between productional load and performance-testing load: performance tests can be stopped. It is a lot harder to fix a jet engine at 30,000 ft.

The moral of the story: when launching a new application, always test production. Always.

Monday, May 23, 2005 2:35:26 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Testing

When testing .NET web-application forms that use postback, it is always a good idea to leave the form and come back. Postback is when a page refreshes or submits to itself, generally identified by the pre- and post-submit URL being the same page. Oftentimes, the state of the form fields is saved in the .NET ViewState after a submit, rather than retrieved from the database. You might have checked the “Display me” checkbox and clicked submit. The “cached” version from the ViewState says that this control should be checked, so when the page reloads, it is. However, the value may not have been saved to the database, so when the value is loaded from the DB, the box is not checked; you would never know, since the ViewState version was used. When testing, to make sure you are getting the actual values and not the “cached” counterparts, leave the page and come back.
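The trap is easier to see in code. Below is a toy simulation in plain Ruby, with no ASP.NET involved; `FakePage` and `FakeDatabase` are hypothetical stand-ins. The postback re-renders from the posted value while the database save silently fails, so only leaving the page and reloading exposes the bug.

```ruby
# Toy simulation of the ViewState trap: the re-rendered page echoes the
# posted value, even though the save to the database silently failed.
class FakeDatabase
  def initialize
    @display_me = false
  end

  # The buggy save: it never actually persists the checkbox value.
  def save_display_me(value)
    # BUG: assignment forgotten; @display_me stays false
  end

  attr_reader :display_me
end

class FakePage
  def initialize(db)
    @db = db
    @viewstate = {}
  end

  # Postback: the page re-renders from the posted value (the "ViewState"),
  # not from the database.
  def submit(display_me:)
    @db.save_display_me(display_me)
    @viewstate[:display_me] = display_me
    @viewstate[:display_me]   # what the checkbox shows after postback
  end

  # A fresh GET renders from the database.
  def reload
    @db.display_me
  end
end

page = FakePage.new(FakeDatabase.new)
puts "after postback: #{page.submit(display_me: true)}"   # true
puts "after leave-and-return: #{page.reload}"             # false
```

The postback happily shows the box checked; only the leave-and-return reveals that nothing was saved.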

Wednesday, May 18, 2005 2:37:22 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Reviews | Testing | Tools

Screen Hunter 4.0 Free - www.wisdom-soft.com
Screen Capture Tool
Cost: Free

Quite possibly the most essential task for any tester is taking a snapshot of the current screen to give their developer a visual representation of the logged error. The classic Windows hotkey, [Alt] + [PrtScn], will take a screen capture of the entire active window. However, sometimes the text of a link is spelled wrong, a button uses the wrong icon, or an error message displays in the wrong style; in these scenarios an entire screen grab is overkill and often confusing. Yet there is little a tester can do about it short of opening up MS Paint or Macromedia Fireworks and cropping the image, wasting valuable time and inviting pointed comments from the Project Manager about diddling in Photoshop.

Screen Hunter 4.0 Free allows you to capture the important pixels quickly and effortlessly. Tap F6 (the default hotkey, which can be changed), and your cursor changes to a cross-hair. Click-drag a box around whatever you want to capture, and it’s done. An instantly cropped screen capture for your bug-tracking pleasure.

The developers will be happier, too.

Wednesday, May 11, 2005 2:46:01 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [1] - Trackback

Filed under: Programming | Testing

So your wonderful little creation is finished, and it does exactly what it was designed to do. But, have you prevented it from doing what it’s not supposed to do?

Enter the forgotten art of negative testing. This is the safeguard from user error, malicious attacks, and blatant developer oversight. Negative testing is taking your calculator application and trying to add “Hello” and “Goodnight”. Negative testing is trying to supply an invalid email address–.anything@something.q–into your mailing list form. Negative testing is trying to cause a buffer overflow on your lead-developer’s computer because you were able to sneak in a script injection.

The key word here is “try.”

If everyone has done their job, you will get nowhere. Unfortunately, rarely is this job done right. In 3 minutes I could considerably alter my best friend’s blog, and he doesn’t even know it. In 10 minutes I could corrupt the online database of a Fortune 500’s web site–both company and URL to remain anonymous. And, what scares me the most, in 20 minutes I could download the entire database of a certain benefits company, including the complete identity–SSN included–of a few thousand people.

For years, I have been paid to break things as much as to build them. When that calculator finally adds 2 and 2 correctly, don’t be satisfied. Try to add “Hello” and “Goodnight”. Will it give you a neatly handled error message informing you that it couldn’t complete the operation, or will it return a fatal exception and die a miserable death because it expected a Double and you gave it a String? Optimally, it shouldn’t even allow you to type characters into the input area, unless you are working in hex; even then, only A-F.

If the instructions tell you to do one thing, enter the opposite. If you see a value in the URL, change it. If a field asks for an integer between 0 and 5, try 0, 2, 5, -1, 9, 3.5, and “Q”, and see how it handles “unexpected inputs.” If a querystring is “?UserID=6”, change the 6 to a 7 to see if you get information on User 7, and try invalid items like 3.5 and “Q” to see if it fails on unexpected input. If a client-side cookie has a value of “User”, try changing it to “Admin” or “Administrator” and see if your access level is increased.

Find the weaknesses, find the holes, and find the bugs so that they can get fixed. You are the demolition man. You get paid to blow things up. Do it. Do it with purpose. Pretend you are a hacker trying to get into the system. Pretend you are a teenager-hacker-wannabe trying to screw with the system. Pretend you are a grandma that doesn’t know what to do with the system. Do all of the things that you aren’t supposed to do to the application and do them on purpose, because if by ignorance or intelligence, your users will find what was missed.
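To make the calculator example concrete, here is a minimal sketch in Ruby; the `safe_add` helper is hypothetical, not from any library. The positive test proves 2 + 2 still works, and the negative tests prove that garbage input yields a handled error instead of a fatal exception.

```ruby
# A hypothetical calculator "add" that validates its inputs instead of
# trusting them. Garbage in should yield a handled error, not a crash.
def safe_add(a, b)
  [a, b].each do |arg|
    unless arg.is_a?(Numeric)
      return { ok: false, error: "'#{arg}' is not a number" }
    end
  end
  { ok: true, value: a + b }
end

# Positive test: the happy path still works.
result = safe_add(2, 2)
raise 'expected 4' unless result[:ok] && result[:value] == 4

# Negative tests: feed it exactly what it is NOT supposed to accept.
['Hello', nil, [1, 2]].each do |bad|
  result = safe_add(bad, 'Goodnight')
  raise 'unhandled garbage input' if result[:ok]
end

puts 'negative tests passed: garbage input was handled, not fatal'
```

The same pattern scales up: every negative test is just the happy-path test with the input swapped for something the spec says should never arrive.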

Tuesday, May 10, 2005 2:55:41 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: LoadRunner | Performance | Testing

For my needs, the biggest hole in Mercury LoadRunner is its lack of page-size monitoring. LoadRunner can monitor nearly anything else imaginable, including transaction counts, transaction times, errors, and all Windows Performance Monitor metrics. However, monitoring page sizes, download times, and HTTP return codes is only available through programming.

The following function will monitor the page size of every response, logging an error if it exceeds your specified limit, as well as plotting all values on the user-defined graphs.

int si_page_size_limit(int PageLimit, char* PageName, char* PageURL, long TransactionID) {
    // Page Size Limit Monitor
    // Author: Jay Harris, http://www.cptloadtest.com, (c) 2004 Jason Harris
    // License: This work is licensed under a
    //    Creative Commons Attribution 3.0 United States License.
    //    http://creativecommons.org/licenses/by/3.0/us/
    // Created: 10-Aug-2004
    // Last Modified: 10-May-2005, Jay Harris
    // Description:
    //    Logs a message, pass or fail, including the applicable status, if logging is enabled.
    //    Plots the page size datapoint to the User Defined graph.
    // Inputs:
    //    int   PageLimit      Maximum page size allowed, in bytes
    //    char* PageName       Name of the page, such as the Title. For identification in logs.
    //    char* PageURL        URL of the page. For reference in logs.
    //    long  TransactionID  Transaction ID for the current request.
    //       Note: The transaction must be explicitly opened via lr_start_transaction_instance,
    //       which returns the TransactionID.

    int iPageSize = web_get_int_property(HTTP_INFO_DOWNLOAD_SIZE);
    char DataPointName[1024] = "Response Size [";
    strcat(DataPointName, PageName);
    strcat(DataPointName, "]");

    if (PageLimit < iPageSize) {
        lr_error_message(
            "Page Size Check FAILED - %s [%s] exceeds specified page size limit of %d (Total: %d)",
            PageName, PageURL, PageLimit, iPageSize);
    } else if (lr_get_trans_instance_status(TransactionID) == LR_PASS) {
        lr_output_message(
            "Page Size Check PASSED - %s [%s] meets specified page size limit of %d (Total: %d)",
            PageName, PageURL, PageLimit, iPageSize);
    }

    // Plot the response size on the User Defined graph.
    lr_user_data_point(DataPointName, iPageSize);

    return 0;
}
Tuesday, May 10, 2005 2:51:58 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [6] - Trackback