Jay Harris is Cpt. LoadTest

a .NET developer's blog on improving the user experience of humans and coders

Azure Websites are a fantastic way to host your own web site. At Arana Software, we use them often, particularly as test environments for our client projects. We can quickly spin up a free site that is constantly up-to-date with the latest code using continuous deployment from the project’s Git repository. Clients are able to see progress on our development efforts without us having to worry about synchronizing codebases or managing infrastructure. Since Windows Azure is a Microsoft offering, it is a natural fit for .NET projects, but JavaScript-based nodejs is also a first-class citizen in the Azure ecosystem.

Incorporating Grunt

Grunt is a JavaScript-based task runner that helps your development projects with those repetitive, menial, and mundane tasks that are necessary for production readiness: running unit tests, compiling code, processing images, bundling and minification, and more. For our production deployments, we commonly use Grunt to compile LESS into CSS, CoffeeScript into JavaScript, and Jade into HTML, taking the code we write and preparing it for browser consumption. We also use Grunt to optimize these various files for speed through bundling and minification. This work happens at deployment rather than during development; only the source code is committed to Git, never its optimized output.
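For reference, a minimal Gruntfile for this kind of pipeline might look something like the sketch below. The plugin names are the standard grunt-contrib packages, but the file paths and the task selection are hypothetical and will vary by project (on our Lineman projects, Lineman generates and manages this configuration for us).

module.exports = function (grunt) {
  grunt.initConfig({
    // Compile LESS into CSS (paths are illustrative only)
    less: {
      dist: { files: { 'dist/css/app.css': 'app/css/app.less' } }
    },
    // Compile CoffeeScript into JavaScript
    coffee: {
      dist: { files: { 'dist/js/app.js': 'app/js/app.coffee' } }
    },
    // Minify the compiled JavaScript
    uglify: {
      dist: { files: { 'dist/js/app.min.js': ['dist/js/app.js'] } }
    }
  });

  // Each plugin must also be listed in package.json so npm install can find it.
  grunt.loadNpmTasks('grunt-contrib-less');
  grunt.loadNpmTasks('grunt-contrib-coffee');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  grunt.registerTask('default', ['less', 'coffee', 'uglify']);
};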

Git Deploy and Kudu

Continuous deployment will automatically update your site with the latest source code whenever modifications are made to the source repository. This also works with Mercurial. There is plenty of existing documentation on setting up Git Deploy in Azure, so consider that a prerequisite for this article. However, Git Deploy alone will only take the files as they are in source and deploy them directly to the site. If you need to run additional tasks, such as compiling your .NET source or running Grunt, that is where Kudu comes in.

Kudu is the engine that drives Git deployments in Windows Azure. Untouched, it will simply synchronize files from Git to your /wwwroot, but it can be easily reconfigured to execute a deployment command, such as a Windows Command file, a Shell Script, or a nodejs script. This is enabled through a standardized file named ".deployment". For Grunt deployment, we are going to execute a Shell Script that will perform npm, Bower, and Grunt commands in an effort to make our code production-ready. For other options on .deployment, check out the Kudu project wiki.

Kudu is also available locally for testing, and to help build out your deployment scripts. The engine is available as a part of the cross-platform Windows Azure Command Line Tools, available through npm.

Installing the Azure CLI

npm install azure-cli --global

We can also use the Azure CLI to generate default Kudu scripts for our nodejs project. Though we will need to make a few modifications so that they work with Grunt, these scripts will give us a good start.

azure site deploymentscript --node

This command will generate both our .deployment file and the default deploy.sh.

Our .deployment file

[config]
command = bash ./deploy.sh

Customizing deploy.sh for Grunt Deployment

From .deployment, Kudu will automatically execute our deploy.sh script. Kudu’s default deploy.sh for a nodejs project will establish the environment for node and npm, as well as some supporting environment variables. It also includes a "# Deployment" section containing all of the deployment steps. By default, this section copies your repository contents to /wwwroot and then executes npm install --production against it, installing only the application's runtime dependencies. Under Grunt, however, we want to run tasks before anything reaches /wwwroot, such as compiling LESS into CSS and CoffeeScript into JavaScript. By replacing the entire Deployment section with the code below, we instruct Kudu to perform the following tasks:

  1. Get the latest changes from Git (or Hg). This is done automatically before running deploy.sh.
  2. Run npm install, installing all dependencies, including those necessary for development.
  3. Optionally run bower install, if bower.json exists. This will update our client-side JavaScript libraries.
  4. Optionally run grunt, if Gruntfile.js exists. Below, I have Grunt configured to run the clean, common, and dist tasks, which are LinemanJS's default tasks for constructing a production-ready build. You can update the script to run whichever tasks you need, or modify your Gruntfile to set these as the default tasks.
  5. Finally, sync the contents of the prepared /dist directory to /wwwroot. It is important to note that this is a KuduSync (similar to RSync), and not just a copy. We only need to update the files that changed, which includes removing any obsolete files.

Our deploy.sh file's Deployment Section

# Deployment
# ----------

echo Handling node.js grunt deployment.

# 1. Select node version
selectNodeVersion

# 2. Install npm packages
if [ -e "$DEPLOYMENT_SOURCE/package.json" ]; then
  eval $NPM_CMD install
  exitWithMessageOnError "npm failed"
fi

# 3. Install bower packages
if [ -e "$DEPLOYMENT_SOURCE/bower.json" ]; then
  eval $NPM_CMD install bower
  exitWithMessageOnError "installing bower failed"
  ./node_modules/.bin/bower install
  exitWithMessageOnError "bower failed"
fi

# 4. Run grunt
if [ -e "$DEPLOYMENT_SOURCE/Gruntfile.js" ]; then
  eval $NPM_CMD install grunt-cli
  exitWithMessageOnError "installing grunt failed"
  ./node_modules/.bin/grunt --no-color clean common dist
  exitWithMessageOnError "grunt failed"
fi

# 5. KuduSync to Target
"$KUDU_SYNC_CMD" -v 500 -f "$DEPLOYMENT_SOURCE/dist" -t "$DEPLOYMENT_TARGET" -n "$NEXT_MANIFEST_PATH" -p "$PREVIOUS_MANIFEST_PATH" -i ".git;.hg;.deployment;deploy.sh"
exitWithMessageOnError "Kudu Sync to Target failed"

These commands will execute bower and Grunt from local npm installations, rather than from the global space, as Windows Azure does not allow easy access to global installations. Because bower and Grunt are installed by the deployment script based on the existence of bower.json or Gruntfile.js, they are also not required to be referenced in your own package.json. Finally, be sure to leave the --no-color flag enabled for Grunt execution, as the Azure deployment logs will stumble when processing the ANSI color codes that are common in Grunt output.

Assuming that Git Deployment has already been configured, committing these files into Git will complete the process. Because the latest changes from Git are pulled before executing the deployment steps, these two new files (.deployment and deploy.sh) will be available when Kudu is ready for them.

Troubleshooting

Long Directory Paths and the 260-Character Path Limit

Though Azure does a fantastic job of hosting nodejs projects, at the end of the day Azure is still hosted on the Windows platform, and it brings Windows limitations with it. One of the issues that you will quickly run into under node is the 260-character path limitation. Under nodejs, the dependency tree for node modules can get rather deep, and because each dependency module loads its own dependency modules under its child folder structure, the folder structure gets deep, too. For example, Lineman requires Testem, which requires Winston, which requires Request; in the end, the directory tree can lead to ~/node_modules/lineman/node_modules/testem/node_modules/winston/node_modules/request/node_modules/form-data/node_modules/combined-stream/node_modules/delayed-stream, which, combined with the root path, can far exceed the 260-character limit.

The Workaround

To reduce this nesting, make some of these dependencies into first-level dependencies. With the nodejs dependency model, if a module has already been brought in at a higher level, it is not repeated in the chain. Thus, if Request is made a direct dependency and listed in your project's package.json, it will no longer be nested under Winston, splitting this single dependency branch in two:

  1. ~/node_modules/lineman/node_modules/testem/node_modules/winston
  2. ~/node_modules/request/node_modules/form-data/node_modules/combined-stream/node_modules/delayed-stream

This is not ideal, but it is a workaround for the Windows file structure limitations. The thing you must be careful of is dependency versioning: make sure your package.json references the appropriate version of your pseudo-dependency; in this case, the same version of Request that Winston references.
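As a sketch of what that looks like in package.json (the version numbers here are hypothetical; pin Request to whatever version Winston actually declares):

{
  "dependencies": {
    "lineman": "~0.16.0",
    "request": "~2.27.0"
  }
}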

To help find those deep dependencies, use npm list. It will show you the full graph on the command line, supplying a handy visual indicator.

__dirname vs process.cwd()

In the node ecosystem, process.cwd() is the current working directory of the node process. There is also a common variable named __dirname that is created by node; its value is the directory that contains your node script. If you executed node against a script in the current working directory, then these values should be the same. Except when they aren't, like in Windows Azure.

In Windows Azure, everything is executed on the system drive, C:. Node and npm live there, and it appears as though your deployment space does as well. However, this deployment space is really a mapped directory, coming in from a network share where your files are persisted. In Azure's node ecosystem, this means that your process.cwd() is the C-rooted path, while __dirname is the \\10.whatever-rooted UNC path to your persisted files. Some Grunt-based tools and plugins (including Lineman) will fail because they reference files relative to __dirname while Grunt's core is attempting to run tasks in the scope of process.cwd(); Grunt recognizes that it is trying to take action on \\10.whatever-rooted files in a C-rooted scope, and fails because the files are not in a child directory.
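If you want to see the mismatch for yourself, a quick diagnostic (not part of the deployment scripts above) is to temporarily log both values at the top of your Gruntfile.js:

// Temporary diagnostic; remove once confirmed.
// On Azure, these print different roots: a C:-rooted path for process.cwd()
// and a \\10.whatever-rooted UNC path for __dirname, as described above.
console.log('process.cwd(): ' + process.cwd());
console.log('__dirname:     ' + __dirname);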

The Workaround

If you are encountering this issue, reconfigure Grunt to work in the \\10.whatever-rooted scope. You can do this by setting its base path to __dirname, overriding the default process.cwd(). Within your Gruntfile.js, set the base path immediately within your module export:

module.exports = function (grunt) {
  grunt.file.setBase(__dirname);
  // Code omitted
};

Unable to find environment variable LINEMAN_MAIN

If, like me, you are using Lineman to build your applications, you will encounter this issue. Lineman manages Grunt and its configuration, so it prefers that all Grunt tasks are executed via the Lineman CLI rather than directly via the Grunt CLI. Lineman's Gruntfile.js includes a reference to an environment variable, LINEMAN_MAIN, which is set by the Lineman CLI so that Grunt will run under the context of the proper Lineman installation; that missing variable is what causes the failure when Grunt is executed directly.

The Fix (Because this isn't a hack)

Your development cycle has been configured to use Lineman, so your deployment cycle should use it, too! Update your deploy.sh Grunt execution to run Lineman instead of Grunt. Also, since Lineman is referenced in your package.json, we don't need to install it separately; it is already there from the earlier npm install.

Option 1: deploy.sh

# 4. Run grunt
if [ -e "$DEPLOYMENT_SOURCE/Gruntfile.js" ]; then
  ./node_modules/.bin/lineman --no-color grunt clean common dist
  exitWithMessageOnError "lineman failed"
fi

Recommendation: Since Lineman is wrapping Grunt for all of its tasks, consider simplifying lineman grunt clean common dist into lineman clean build. You will still need the --no-color flag, so that Grunt will not use ANSI color codes.

The Alternate Workaround

If you don't want to change your deploy.sh (perhaps because you want to maintain the generic file to handle all things Grunt), you can instead update your Gruntfile.js to supply a default value for the missing LINEMAN_MAIN environment variable. This environment variable is just a string that gets passed into node's require function so that the right Lineman module can be loaded. Since Lineman is already included in your package.json, it will already be available in the local /node_modules folder because of the earlier npm install (deploy.sh, Step #2), and we can pass 'lineman' into require() to have Grunt load the local Lineman installation. Lineman will then supply its configuration to Grunt, and the system will proceed as if you had executed Lineman directly.

Option 2: Gruntfile.js

module.exports = function(grunt) {
  grunt.file.setBase(__dirname);
  if (process.env['LINEMAN_MAIN'] === null || process.env['LINEMAN_MAIN'] === undefined) {
    process.env['LINEMAN_MAIN'] = 'lineman';
  }
  require(process.env['LINEMAN_MAIN']).config.grunt.run(grunt);
};

Credits

Thank you to @davidebbo, @guayan, @amitapl, and @dburton for helping troubleshoot Kudu and Grunt Deploy, making this all possible.

Changelog

2013-12-03: Updated LINEMAN_MAIN Troubleshooting to improve resolution. Rather than editing deploy.sh to set the environment variable, edit the file to execute Lineman. This is the proper (and more elegant) solution. [Credit: @searls]

Tuesday, 03 December 2013 00:34:25 (Eastern Standard Time, UTC-05:00)

Scheduled Integration is hard. I remember being involved in projects where developers would get a copy of the latest source code and a task, and race off like horses at the track while they spent a few weeks implementing their assigned feature. At the end of these many weeks was a scheduled integration, where all developers in the team would reconvene with their code modifications, and try to get their respective code to play well in a single sandbox. Project managers always seemed to expect that this would be a quick and painless process, but, of course, it never was. Reintegration of the code sometimes took just as long as the implementation itself. Even if developers are all working in one big room throughout the project, the code remains isolated, never shared or reintegrated. Fortunately, Continuous Integration specifically focuses on this problem by reducing (and often eliminating) the end-of-project reintegration cycle through constant, ongoing integration.

Continuously Integrating your Mind

For a moment, compare  the process of software development with the process of learning new technologies in your field. In the past few years of Microsoft's .NET platform, we've seen .NET 2.0, Windows Communication Foundation, Windows Presentation Foundation, Workflow Foundation, Extension Methods, Generics, Linq, Dynamic Data, ASP.NET MVC, and more. For a developer that has not kept up with the latest trends, to suddenly and quickly catch up with the bleeding edge would be a major undertaking, and psychologically intimidating. This Scheduled Integration approach to learning can be overwhelming; the learning process is isolated, and only occasionally reconnects to the latest technologies. However, a developer that has kept up with the trends, learning through a continuous integration process, has been constantly updating, constantly learning the Next Big Thing over these past few years; this developer is already long familiar with .NET 2.0, WF, WCF, WPF, Generics, and Linq, and is now working on only Dynamic Data and MVC. The Continuous Integration process ensures that those involved are current with the latest developments in their project, and that the overwhelming burden of integration at the end of a project is instead simplified by distributing it throughout the course of the timeline.

Core Continuous Integration

At its core, Continuous Integration, or CI, is a development model where developers regularly commit and update against their source control repository. By committing (giving everyone else access to your code changes) and updating (getting code changes from everyone else), the scheduled, tedious integration process at the end of the project is eliminated. Added on top of this single fundamental is ensuring the code works: automating the build through scripting frameworks like Make, MSBuild, or NAnt helps developers validate their status quickly by invoking a validation step that not only compiles, but also executes a suite of unit tests against the code base. Validation results, along with the latest valid code, are then made easily accessible to anyone on the team.

The ten tenets of Continuous Integration are:

  1. Maintain a Source Code Repository
    Use a code repository, such as Subversion, Bazaar, or even Visual Source Safe. This allows a central point-of-truth for the development team, against which code can be committed or updated. This even applies to a development team consisting of only one person, as a local hard drive alone cannot provide change logs, revision differences, rollback capability, and other benefits of source management.
  2. Commit Frequently
    As other developers commit code changes, your local version of the code will get further and further from the revision head, thus increasing the likelihood of a conflicting change that will need manual resolution. By committing often (and by association, updating even more often), everyone can stay very close to the revision head, reducing the likelihood of conflicts, and reducing time to integrate code and functionality.
  3. Self-Testing Code
    "It compiles" is not sufficient criteria for determining if a project is ready for delivery or deployment to the client. Some level of testing must be conducted against each compile to measure application quality. Using unit tests, functional tests, or some other form of automated acceptance tests, the code can evaluate itself against expected outcome, providing a much more accurate and granular metric for readiness.
  4. Automate the Build
    Using Make, NAnt, MSBuild, or similar frameworks, consolidate execution of your full build into a single command, or a single icon on your desktop. These scripts should execute a full compile, and run your full suite of testing and evaluation tools against the compile.
  5. Build Fast, Fail Fast
    Even if your build is automated, no one wants to wait 30 minutes for the build to complete. Building your code should not be just a lunch-break activity. Keep the build fast so that developers can build as often as possible. And if there is a problem, fail immediately.
  6. Build Every Mainline Commit on an Integration Machine
    We all have applications on our local desktops that will not be present in Production. Instant Messenger, iTunes, Visual Studio, and Office are all common for us, but rare in Production. However, these applications can conflict or distort build results, such as a reference to an Office assembly that is not included in your deployment package. By executing the automated build on an integration machine (iTunes free, and using a CI suite like Hudson or CruiseControl), you can increase confidence in your application and eliminate "it works on my box!"
  7. Automate the Deployment
    Manual code deployment is a mundane, highly repetitive, error-prone, and time-consuming process that is ripe for automation. The time commitment adds to the stress when QA requests yet another deployment to the testing environment, particularly when all of the developers are in 80-hour-week crunch mode. In turn, this stress reduces deployment quality; environments often have configuration differences, such as different database connection strings, credentials, or web service URLs, and often only one configuration change needs to be overlooked to cause the entire system to malfunction. Automate this task, so that it can be executed easily, dependably, and often.
  8. Test in a Clone of the Production Environment
    In addition to Office and iTunes not being a part of the production server build, there are aspects of the production environment that are not a part of the desktop environment. Server farms, federated databases, and load balancing are examples of things that do not exist on the developer desktop, but do exist in production, and can cause surprises if they are not considered during development and testing. Consider the haves and the have-nots in your test environment, and eliminate these surprises. And if the cost of another production environment is out of reach, consider Virtual Machines. VMs have significantly reduced the cost of creating a watered down test environment that still has things like server farms or database clusters; even if you cannot exactly replicate your production configuration, mitigate your risk by reducing the differences between your test and production environments.
  9. Everyone Can View the Latest Build Results
    The underlying driver behind Continuous Integration is transparency and visibility. Communication enables both transparency and visibility by allowing everyone on the team to know the full status of the build. Did it compile? Did the unit tests pass? Which unit tests failed? Did the deployment work? How many seconds did it take to compile the application? Who broke the build? Continuous Integration suites, such as Hudson or CruiseControl, provide reporting mechanisms on build status. Make this status available to anyone, including project managers, and even the sales guy.
  10. Everyone Can Get the Latest Executable
    On a software development project, communication is more than just green icons (successful builds) and red icons (failed builds). Communicate the ones and zeros, in the form of your compiled application, to your team. By allowing other developers, testers, and even the sales guy (perhaps for a demo) to get the latest bits for themselves, developers can focus on writing code.

Benefits of Continuous Integration

By continuously integrating all new code and feature sets, development teams can eliminate that long and tedious scheduled integration effort, which reduces overall effort, timeline, and budget. Through self-testing code and building every mainline commit, code is continuously tested against a full suite of tests, allowing quick analysis and identification of breaking changes. Through the easy accessibility of the latest bits, the team can test early and often, allowing quick identification of broken functionality, and early identification of features that don't quite align with what the client had in mind. And finally, the immediate and public feedback on the success or failure of the build provides incentives for developers to write code in smaller increments and perform more pre-commit testing, resulting in higher quality code.

I consider Continuous Integration to be an essential, required part of any development effort, every time. I started using CI in 2004, and I have since become dependent on it. I even have a Continuous Integration box at home, validating home projects for my development-team-of-one, and I am comforted by having a Continuous Integration server analyzing and validating every code change that I make. I break unit tests as often as anyone else does, and there are still plenty of times when I even break the compile. At least once, every developer among us has checked in the project file while forgetting to add myNewClass.cs to Source Control. It will break the compile every time. Fortunately, Continuous Integration is always watching my code commits; it will let me know that it could not find myNewClass.cs, every time. And my application's quality is remarkably better for it. Every time.

Wednesday, 01 April 2009 19:25:01 (Eastern Standard Time, UTC-05:00)

Filed under: Continuous Integration | Events | Speaking
Tomorrow night, Wednesday, 08 October, I will be speaking at the Ann Arbor Dot Net Developers meeting. We will be discussing Continuous Integration, focusing on CI as a process, not just a toolset. Come out to Ann Arbor, enjoy some pizza, and hear about what Continuous Integration can do for your development cycle.
Continuous Integration: It's more than just a toolset
Wednesday, 08 October, 2008 @ 6:00pm
SRT Solutions
206 South Fifth Ave, Suite 200
Ann Arbor, MI 48104

Session Abstract:

Does your team spend days integrating code at the end of a project? Continuous Integration can help. Using Continuous Integration will eliminate that end-of-project integration stress, and at the same time will make your development process easier. But Continuous Integration is more than just a tool like CruiseControl.Net; it is a full development process designed to bring you closer to your mainline, increase visibility of project status throughout your team, and to streamline deployments to QA or to your client. Find out what Continuous Integration is all about, and what it can do for you.
Tuesday, 07 October 2008 13:45:27 (Eastern Daylight Time, UTC-04:00)

Filed under: Continuous Integration | Events | Speaking
Tomorrow night, Thursday, 11 September, I will be speaking at the GLUGnet Flint meeting. We will be discussing Continuous Integration, focusing on CI as a process, not just a toolset. Come out to Flint, enjoy some pizza, and hear about what Continuous Integration can do for your development cycle.
Continuous Integration: It's more than just a toolset
Thursday, 11 September, 2008 @ 6:00pm
New Horizons
4488 West Bristol Road
Flint, MI 48507

Session Abstract:

Does your team spend days integrating code at the end of a project? Continuous Integration can help. Using Continuous Integration will eliminate that end-of-project integration stress, and at the same time will make your development process easier. But Continuous Integration is more than just a tool like CruiseControl.Net; it is a full development process designed to bring you closer to your mainline, increase visibility of project status throughout your team, and to streamline deployments to QA or to your client. Find out what Continuous Integration is all about, and what it can do for you.
Wednesday, 10 September 2008 15:07:26 (Eastern Daylight Time, UTC-04:00)

Filed under: Continuous Integration | Tools

CruiseControl.Net 1.3 was released this morning. To me, the most important addition is the new Build Queue functionality, which stops multiple projects from building at the same time. If ProjectB depends on ProjectA, and they both get code changes committed at the same time, they will fail from contention. Either ProjectA will fail because it can’t delete its old assemblies (because ProjectB has a lock on them) or ProjectB will fail because it can’t find the ProjectA assemblies (because ProjectA deleted them in its rebuild).

No more.

I’m so excited that I am already upgrading our servers!

Friday, 22 June 2007 22:19:30 (Eastern Daylight Time, UTC-04:00)

Filed under: Continuous Integration | Tools

For those that missed the announcement last week (like I did), the latest version of CruiseControl.Net has been released.

I plan on checking it out this week, then possibly upgrading our Build environment on Saturday. There are some modifications that I am really excited about:

  • Log4Net is used. (Default: Rolling file appender for logging server output.) If the traditional Log4Net configuration block is included in the application configuration file, I will probably change that to the ADONet appender, instead.
  • Users can volunteer to fix a broken build. How sweet is that!?!
  • <prebuild /> section allows custom tasks to run prior to the build. This one is a big bonus; previously, if something went wrong with the build, often the external log files (NUnit, FXCop) from the previous build would get included in the current build’s report. Now the prebuild can give them the boot.
  • Caching is used on WebDashboard. We have some huge log files and some not-so-powerful build servers. Sometimes it takes the machine a while to process the XSL. I am hoping that caching will help with that.
  • WebDashboard can stop and start projects. I am very excited about the ability to pause individual projects without having to modify the setup or stop the entire service.

This seems like a nice package (Release Notes). I am eager to pull it down and give it a go.

One gotcha that everyone should be aware of: Old versions of the dashboard and CCTray are incompatible with the new version of the service, so both will need to be replaced. Give your development team a heads-up, so they know to replace their tray installation as soon as the new server version is installed and online.

Monday, 09 October 2006 22:51:12 (Eastern Daylight Time, UTC-04:00)

NAnt hates .Net’s resource files, or .resx. Don’t get me wrong–it handles them just fine–but large quantities of resx will really bog it down.

Visual Studio loves resx. The IDE will automatically create a resource file for you when you open pages and controls in the ‘designer’ view. Back when we still used Visual SourceSafe as our SCM, Visual Studio happily checked the file in and forgot about it. Now, our 500+ page application has 500+ resource files. Most of these 500+ resource files contain zero resources, making them useless, pointless, and a detriment to the build.

This morning I went through the build log, noting every resx that contained zero resources, and deleted all of these useless files.

The compile time dropped by 5 minutes.

Moral of the story: Be wary of Visual Studio. With regard to resx, VS is a malware program that’s just filling your hard drive with junk. If you use resx, great, but if you don’t, delete them all. NAnt will love you for it.

Wednesday, 15 February 2006 11:31:31 (Eastern Standard Time, UTC-05:00)

Filed under: Continuous Integration | Tools

CruiseControl .Net 1.0 has been released. download | release notes

This is a must upgrade for anyone running v0.9 or earlier. There are many updates that I am excited about, most notably the overhaul to CCTray (the client-side build monitoring tool that sits in your system tray). Our developers have had to use Firefox’s CC.Net monitor extension to monitor multiple builds, simultaneously. No more.

We will be upgrading within the next week.

Tuesday, 15 November 2005 11:34:32 (Eastern Standard Time, UTC-05:00)

Filed under: Continuous Integration | Tools

MSIExec error code 1605 has been a thorn in my side for quite a while. When an MSI was command-line deployed by one user (manually deployed by me in the middle of the day), it couldn’t be uninstalled by another (automation during the nightly) due to the “Just Me” default. If I installed it through the UI, and installed it for use by “Everyone”, then the nightly would build just fine. I needed a way to run an “Everyone” install from the command line, but Google wasn’t helping me out. Unfortunately, Microsoft does not seem to have a lot of documentation on this functionality, either.

It further frustrated me this morning when my nightlies were failing again, but only on one server. Of course, I had manually deployed the package to this same server a few days ago. I tried Google again, and this time hit pay dirt. Executing it with ALLUSERS=2 in the command line makes it available for everyone. Apparently, it forces an “Everyone” install for the UI, too.

Finally I can pull the thorn out.

MSIExec /i mypackage.msi … ALLUSERS=2

Saturday, 05 November 2005 11:06:26 (Eastern Standard Time, UTC-05:00)

I know. I haven’t posted in a while. But I’ve been crazy busy. Twelve hour days are my norm, right now. But enough complaining; let’s get to the good stuff.

By now you know my love for PsExec. I discovered it when trying to find a way to add assemblies to a remote GAC [post]. I’ve found more love for it. Now, I can remotely execute my performance tests!

Execute a LoadRunner test using NAnt via PsExec:

<exec basedir="${P1}"
  program="psexec"
  failonerror="false"
  commandline='\\${P2} /u ${P3} /p ${P4} /i /w "${P5}" cmd /c wlrun -Run
    -InvokeAnalysis -TestPath "${P6}" -ResultLocation "${P7}"
    -ResultCleanName "${P8}"' />

(I’ve created generic parameter names so that you can read it a little better.)
P1: Local directory for PsExec
P2: LoadRunner Controller Server name
P3: LoadRunner Controller Server username. I use an Admin-level ID here, since this ID also needs rights to capture Windows PerfMon metrics on my app servers.
P4: LoadRunner Controller Server user password
P5: Working directory on P2 for 'wlrun.exe', such as C:\Program Files\Mercury\Mercury LoadRunner\bin
P6: Path on P2 to the LoadRunner scenario file
P7: Directory on P2 that contains all results from every test
P8: Result Set name for this test run

'-InvokeAnalysis' will automatically execute LoadRunner analysis at test completion. If you properly configure your Analysis default template, Analysis will automatically generate the result set you want, save the Analysis session information, and create an HTML report of the results. Now, put IIS on your Controller machine, create a virtual directory pointing to the main results directory in P7, and you will have access to the HTML report within minutes after your test completes.

Other ideas:

  • You can also hook it up to CruiseControl and have your CC.Net report include a link to the LR report.
  • Create a nightly build in CC.Net that will compile your code, deploy it to your performance testing environment, and execute the performance test. When you get to work in the morning, you have a link to your full performance test report waiting in your inbox.

The catch for all of this: you need a session logged in to the LoadRunner controller box at all times. The '/i' in the PsExec command means that it interacts with the desktop.

Sidenote

PsExec is my favorite tool right now. I can do so many cool things. I admit, as a domain administrator, I also get a little malicious, sometimes. The other day I used PsExec to start up solitaire on a co-worker's box, then razzed him for playing games on the clock.

Friday, 14 October 2005 11:35:40 (Eastern Daylight Time, UTC-04:00)

With our new nightly database restore, we now want to automatically run all of the change scripts associated with a project. We’ve found a way; I created a NAnt script that will parse the Visual Studio Database Project (or "DBP") and execute all of the change scripts in it. Here’s how we got there.


Problem 1: Visual Studio Command Files are worthless

Our first idea was to have everyone update a command file in the DBP, and have NAnt run it every night. Visual Studio command files are great and all, but we have discovered a problem with them: they do not keep the files in order. We have named all of our folders (01 DDL, 02 DML, etc.) and our change scripts (0001 Create MyTable.sql, 0002 Add InfoColumn to MyTable.sql) accordingly so that they should run in order. We have found that the command file feature of VS.Net 2003 does not keep them in order, but rather seems to sort them first by extension and then by some other ordering oddness. Obviously, if I try to add InfoColumn to MyTable before MyTable exists, I’m going to have a problem. So, the command file idea was axed.

Problem 2: Visual SourceSafe contents can’t be trusted

Our second idea was to VSSGET the DBP directory in VSS and execute every script in it. However, the VSS store cannot be trusted. If a developer creates a script in VS.Net called ‘0001 Crate MyTable.sql’ and checks it in to the project, then proceeds to correct the spelling error in VS.Net to ‘0001 Create MyTable.sql’, VS does not rename the old file in VSS. Instead, it removes the old file from the project, renames it locally, then adds the new name to the project and to VSS. It also never deletes the old file name from the VSS store. Now, both files (’0001 Crate MyTable.sql’ and ‘0001 Create MyTable.sql’) exist in VSS. Performing a VSSGET and executing all scripts will run both scripts, which could lead to more troubles.


So, we can’t use a command file, because it won’t maintain the order. We can’t trust VSS, since it can have obsolete files. We can only trust the project, but how do we get a list of files, ourselves?

Fortunately, DBP files are just text in a weird XML-wannabe format. The NAnt script will open the file and run through it looking for every ‘SCRIPT’ entry in the file. If it finds a ‘BEGIN something’ entry, it assumes that ’something’ is a folder name, and appends it to the working path until it finds ‘END’, at which time it returns to the parent directory.

It’s not perfect. It still runs into some problems, but here it is in v0.1 form.

<project name="RunDBPScripts" default="RunScripts">
<!--
Execute all scripts in a VS.Net DBP
Author: Jay Harris, http://www.cptloadtest.com, (c) 2005 Jason Harris
License: This work is licensed under a  
   Creative Commons Attribution 3.0 United States License.  
   http://creativecommons.org/licenses/by/3.0/us/ 

This script is offered as-is.
I am not responsible for any misfortunes that may arise from its use.
Use at your own risk.
-->
<!-- Project: The path of the DBP file -->
<property name="project" value="Scripts.dbp" overwrite="false" />
<!-- Server: The machine name of the Database Server -->
<property name="server" value="localhost" overwrite="false" />
<!-- Database: The database that the scripts will be run against -->
<property name="database" value="Northwind" overwrite="false" />
<target name="RunScripts">
        <property name="currentpath"
            value="${directory::get-parent-directory(project)}" />
        <foreach item="Line" property="ProjectLineItem" in="${project}">
            <if test="${string::contains(ProjectLineItem, 'Begin Folder = ')}">
                <regex pattern="Folder = &quot;(?'ProjectFolder'.*)&quot;$"
                    input="${string::trim(ProjectLineItem)}" />
                <property name="currentpath"
                    value="${path::combine(currentpath, ProjectFolder)}" />
            </if>
            <if test="${string::contains(ProjectLineItem, 'Script = ')}">
                <regex pattern="Script = &quot;(?'ScriptName'.*)&quot;$"
                    input="${string::trim(ProjectLineItem)}" />
                <echo message="Executing Change Script (${server}\${database}): ${path::combine(currentpath, ScriptName)}" />
                <exec workingdir="${currentpath}" program="osql"
                    basedir="C:\Program Files\Microsoft SQL Server\80\Tools\Binn"
                    commandline='-S ${server} -d ${database} -i "${ScriptName}" -n -E -b' />
            </if>
            <if test="${string::trim(ProjectLineItem) == 'End'}">
                <property name="currentpath"
                    value="${directory::get-parent-directory(currentpath)}" />
            </if>
        </foreach>
    </target>
</project>

I used an <EXEC> NAnt task rather than <SQL>. I found that a lot of the scripts would not execute in the SQL task because of their design. VS Command Files use OSQL, so that’s what I used. I guess those command files were worth something after all.

If you know of a better way, or have any suggestions or comments, please let me know.

Thursday, 25 August 2005 12:15:41 (Eastern Daylight Time, UTC-04:00)