Jay Harris is Cpt. LoadTest

a .NET developer's blog on improving the user experience of humans and coders

Azure Websites are a fantastic way to host your own web site. At Arana Software, we use them often, particularly as test environments for our client projects. We can quickly spin up a free site that is always up-to-date with the latest code, using continuous deployment from the project's Git repository. Clients can see progress on our development efforts without us having to worry about synchronizing codebases or managing infrastructure. Since Windows Azure is a Microsoft offering, it is a natural fit for .NET projects, but JavaScript-based nodejs is also a first-class citizen in the Azure ecosystem.

Incorporating Grunt

Grunt is a JavaScript-based task runner for your development projects that helps you with the repetitive, menial, and mundane tasks necessary for production readiness, such as running unit tests, compiling code, image processing, and bundling and minification. For our production deployments, we commonly use Grunt to compile LESS into CSS, CoffeeScript into JavaScript, and Jade into HTML, taking the code we write and preparing it for browser consumption. We also use Grunt to optimize these various files for speed through bundling and minification. This work happens at deployment rather than during development; only the source code is committed into Git, never its optimized output.
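To make that concrete, here is a minimal Gruntfile.js sketch for the compile step, assuming the grunt-contrib-less and grunt-contrib-coffee plugins; the file paths are illustrative, not from our actual projects.

module.exports = function (grunt) {
  grunt.initConfig({
    // Compile LESS source into browser-ready CSS
    less: {
      dist: { files: { 'dist/css/site.css': 'app/css/site.less' } }
    },
    // Compile CoffeeScript source into JavaScript
    coffee: {
      dist: { files: { 'dist/js/site.js': 'app/js/site.coffee' } }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-less');
  grunt.loadNpmTasks('grunt-contrib-coffee');

  // Running 'grunt' with no arguments executes both compilers
  grunt.registerTask('default', ['less', 'coffee']);
};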

Git Deploy and Kudu

Continuous deployment will automatically update your site with the latest source code whenever modifications are made to the source repository. (This also works with Mercurial.) There is plenty of existing documentation on setting up Git Deploy in Azure, so consider that a prerequisite for this article. However, Git Deploy alone will only take the files as they are in source and deploy them directly to the site. If you need to run additional tasks, such as compiling your .NET source or running Grunt, that is where Kudu comes in.

Kudu is the engine that drives Git deployments in Windows Azure. Untouched, it will simply synchronize files from Git to your /wwwroot, but it can be easily reconfigured to execute a deployment command, such as a Windows Command file, a Shell Script, or a nodejs script. This is enabled through a standardized file named ".deployment". For Grunt deployment, we are going to execute a Shell Script that will perform npm, Bower, and Grunt commands in an effort to make our code production-ready. For other options on .deployment, check out the Kudu project wiki.

Kudu is also available locally for testing and to help build out your deployment scripts. The engine is included as part of the cross-platform Windows Azure Command Line Tools, available through npm.

Installing the Azure CLI

npm install azure-cli --global

We can also use the Azure CLI to generate default Kudu scripts for our nodejs project. Though we will need to make a few modifications to make the scripts work with Grunt, it will give us a good start.

azure site deploymentscript --node

This command will generate both our .deployment file and the default deploy.sh.

Our .deployment file

[config]
command = bash ./deploy.sh

Customizing deploy.sh for Grunt Deployment

From .deployment, Kudu will automatically execute our deploy.sh script. Kudu's default deploy.sh for a nodejs project will establish the environment for node and npm, as well as some supporting environment variables. It will also include a "# Deployment" section containing all of the deployment steps. By default, this section copies your repository contents to /wwwroot and then executes npm install --production against wwwroot, installing the application's runtime dependencies. Under Grunt, however, we want to run tasks prior to /wwwroot deployment, such as compiling LESS into CSS and CoffeeScript into JavaScript. By replacing the entire Deployment section with the code below, we instruct Kudu to perform the following tasks:

  1. Get the latest changes from Git (or Hg). This is done automatically before running deploy.sh.
  2. Run npm install, installing all dependencies, including those necessary for development.
  3. Optionally run bower install, if bower.json exists. This will update our client-side JavaScript libraries.
  4. Optionally run grunt, if Gruntfile.js exists. Below, I have grunt configured to run the Clean, Common, and Dist tasks, which are LinemanJS's default tasks for constructing a production-ready build. You can update the script to run whichever tasks you need, or modify your Gruntfile to set these as the default tasks.
  5. Finally, sync the contents of the prepared /dist directory to /wwwroot. It is important to note that this is a KuduSync (similar to RSync), and not just a copy. We only need to update the files that changed, which includes removing any obsolete files.

Our deploy.sh file's Deployment Section

# Deployment
# ----------

echo Handling node.js grunt deployment.

# 1. Select node version
selectNodeVersion

# 2. Install npm packages
if [ -e "$DEPLOYMENT_SOURCE/package.json" ]; then
  eval $NPM_CMD install
  exitWithMessageOnError "npm failed"
fi

# 3. Install bower packages
if [ -e "$DEPLOYMENT_SOURCE/bower.json" ]; then
  eval $NPM_CMD install bower
  exitWithMessageOnError "installing bower failed"
  ./node_modules/.bin/bower install
  exitWithMessageOnError "bower failed"
fi

# 4. Run grunt
if [ -e "$DEPLOYMENT_SOURCE/Gruntfile.js" ]; then
  eval $NPM_CMD install grunt-cli
  exitWithMessageOnError "installing grunt failed"
  ./node_modules/.bin/grunt --no-color clean common dist
  exitWithMessageOnError "grunt failed"
fi

# 5. KuduSync to Target
"$KUDU_SYNC_CMD" -v 500 -f "$DEPLOYMENT_SOURCE/dist" -t "$DEPLOYMENT_TARGET" -n "$NEXT_MANIFEST_PATH" -p "$PREVIOUS_MANIFEST_PATH" -i ".git;.hg;.deployment;deploy.sh"
exitWithMessageOnError "Kudu Sync to Target failed"

These commands will execute bower and Grunt from local npm installations, rather than the global space, as Windows Azure does not allow easy access to global installations. Because bower and Grunt are installed manually based on the existence of bower.json or Gruntfile.js, they also do not need to be referenced in your own package.json. Finally, be sure to leave the --no-color flag enabled for Grunt execution, as the Azure deployment logs will stumble when processing the ANSI color codes that are common in Grunt output.

Assuming that Git Deployment has already been configured, committing these files into Git will complete the process. Because the latest changes from Git are pulled before executing the deployment steps, these two new files (.deployment and deploy.sh) will be available when Kudu is ready for them.

Troubleshooting

Long Directory Paths and the 260-Character Path Limit

Though Azure does a fantastic job of hosting nodejs projects, at the end of the day Azure is still hosted on the Windows platform, and it brings Windows limitations with it. One of the issues that you will quickly run into under node is the 260-character path limitation. Under nodejs, the dependency tree for node modules can get rather deep, and because each dependency module loads its own dependencies under its child folder structure, the directory structure gets deep, too. For example, Lineman requires Testem, which requires Winston, which requires Request; in the end, the directory tree can lead to ~/node_modules/lineman/node_modules/testem/node_modules/winston/node_modules/request/node_modules/form-data/node_modules/combined-stream/node_modules/delayed-stream, which, combined with the root path structure, can far exceed the 260-character limit.

The Workaround

To reduce this nesting, make some of these dependencies into first-level dependencies. With the nodejs dependency model, if a module has already been brought in at a higher level, it is not repeated in the chain. Thus, if Request is made a direct dependency and listed in your project's package.json, it will no longer be nested under Winston, splitting this single dependency branch in two:

  1. ~/node_modules/lineman/node_modules/testem/node_modules/winston
  2. ~/node_modules/request/node_modules/form-data/node_modules/combined-stream/node_modules/delayed-stream

This is not ideal, but it is a workaround for the Windows path limitation. The element that you must be careful of is dependency versioning: make sure your package.json references the appropriate version of your pseudo-dependency; in this case, the same version of Request as is referenced by Winston.
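For illustration, the flattened dependency would be declared in package.json like this hypothetical fragment; the version ranges are placeholders, to be matched against the version Winston actually expects:

{
  "dependencies": {
    "lineman": "~0.9.0",
    "request": "~2.27.0"
  }
}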

To help find those deep dependencies, use npm list. It will show you the full dependency graph on the command line, supplying a handy visual indicator.

__dirname vs process.cwd()

In the node ecosystem, process.cwd() is the current working directory of the node process. There is also a common variable named __dirname, created by node, whose value is the directory that contains your node script. If you executed node against a script in the current working directory, these values should be the same. Except when they aren't, like in Windows Azure.

In Windows Azure, everything is executed on the system drive, C:. Node and npm live here, and it appears as though your deployment space does as well. However, this deployment space is really a mapped directory, coming in from a network share where your files are persisted. In Azure's node ecosystem, this means that your process.cwd() is the C-rooted path, while __dirname is the \\10.whatever-rooted UNC path to your persisted files. Some Grunt-based tools and plugins (including Lineman) will fail because they reference __dirname files while Grunt's core is attempting to run tasks within the scope of process.cwd(); Grunt recognizes that it's trying to take action on \\10.whatever-rooted files in a C-rooted scope, and fails because the files are not in a child directory.
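A quick way to see the mismatch is a diagnostic script that logs both values from within the deployment; the outputs noted in the comments are illustrative:

// diagnose.js (hypothetical): run with node from the deployed site
console.log(process.cwd());  // a C-rooted path on the system drive
console.log(__dirname);      // a \\10.whatever-rooted UNC path to your files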

The Workaround

If you are encountering this issue, reconfigure Grunt to work in the \\10.whatever-rooted scope. You can do this by setting its base path to __dirname, overriding the default process.cwd(). Within your Gruntfile.js, set the base path immediately within your module export:

module.exports = function (grunt) {
  grunt.file.setBase(__dirname);
  // Code omitted
}

Unable to find environment variable LINEMAN_MAIN

If, like me, you are using Lineman to build your applications, you will encounter this issue. Lineman manages Grunt and its configuration, so it prefers that all Grunt tasks are executed via the Lineman CLI rather than directly via the Grunt CLI. Lineman's Gruntfile.js includes a reference to an environment variable, LINEMAN_MAIN, set by the Lineman CLI, so that Grunt will run under the context of the proper Lineman installation; this is what causes the failure when Grunt is executed directly.

The Fix (Because this isn't a hack)

Your development cycle has been configured to use lineman, so your deployment cycle should use it, too! Update your deploy.sh Grunt execution to run Lineman instead of Grunt. Also, since Lineman is referenced in your package.json, we don't need to install it; it is already there.

Option 1: deploy.sh

# 4. Run grunt
if [ -e "$DEPLOYMENT_SOURCE/Gruntfile.js" ]; then
  ./node_modules/.bin/lineman --no-color grunt clean common dist
  exitWithMessageOnError "lineman failed"
fi

Recommendation: Since Lineman is wrapping Grunt for all of its tasks, consider simplifying lineman grunt clean common dist into lineman clean build. You will still need the --no-color flag, so that Grunt will not use ANSI color codes.
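With that simplification, the execution line in deploy.sh would become something like:

./node_modules/.bin/lineman --no-color clean build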

The Alternate Workaround

If you don't want to change your deploy.sh (perhaps because you want to maintain the generic file to handle all things Grunt), then as an alternative you can update your Gruntfile.js to supply a default value for the missing LINEMAN_MAIN environment variable. This environment variable is just a string value passed into node's require function so that the right Lineman module can be loaded. Since Lineman is already included in your package.json, it is already available in the local /node_modules folder thanks to the earlier npm install (deploy.sh, Step #2), and we can pass 'lineman' into require() to have Grunt load the local Lineman installation. Lineman will then supply its configuration to Grunt, and the system will proceed as if you had executed Lineman directly.

Option 2: Gruntfile.js

module.exports = function(grunt) {
  grunt.file.setBase(__dirname);
  if (process.env['LINEMAN_MAIN'] === null || process.env['LINEMAN_MAIN'] === undefined) {
    process.env['LINEMAN_MAIN'] = 'lineman';
  }
  require(process.env['LINEMAN_MAIN']).config.grunt.run(grunt);
};

Credits

Thank you to @davidebbo, @guayan, @amitapl, and @dburton for helping troubleshoot Kudu and Grunt Deploy, making this all possible.

Changelog

2013-12-03: Updated LINEMAN_MAIN Troubleshooting to improve resolution. Rather than editing deploy.sh to set the environment variable, edit the file to execute Lineman. This is the proper (and more elegant) solution. [Credit: @searls]

Tuesday, 03 December 2013 00:34:25 (Eastern Standard Time, UTC-05:00)

Filed under: Events | Programming

The new season of Come Jam With Us in Ann Arbor is upon us. Come Jam With Us is a weekly software developers' study group in Ann Arbor for gaining exposure to and learning about many different software development topics. The group was originally started in late 2008 by developers looking for a way to help each other prepare for and pass one of the Microsoft .NET exams, and it now has hour-long weekly Jam sessions covering Java, Ruby, .NET, F#, Silverlight, Design Patterns, and more.

The Winter/Spring 2010 Schedule begins this Tuesday, February 2nd, and continues every week until early May. Come Jam With Us in Ann Arbor, at the offices of SRT Solutions, 206 South 5th Ave, Suite 200. More information, including the prerequisites for each session (such as what software you need to have pre-installed), is available at the group's web site, http://www.comejamwithus.org.

Come Jam With Us in Ann Arbor

Every Tuesday, 5:30p-6:30p
February 2nd through May 5th, 2010

SRT Solutions
206 S. 5th Ave, Suite 200
Ann Arbor, MI 48104 | Map

Winter/Spring 2010 Jam Schedule

2-02 : Django with Darrell Hawley
2-09 : ASP.NET MVC2 with Jay Harris
2-16 : RESTful Web Services with Mike Smithson
2-23 : Erlang with Carl Wright
3-02 : MVVM with Brian Genisio
3-09 : F# (Part 1 of 3) with Chris Marinos
3-16 : F# (Part 2 of 3) with Chris Marinos
3-23 : F# (Part 3 of 3) with Chris Marinos
3-30 : WPF with Anne Marsan
4-06 : Getting to know jQuery with Dennis Burton
4-13 : Testing with WatiN with Jay Harris
4-20 : Adobe Air with Bill Heitzeg
4-27 : ActiveMQ with Becky Glesner
5-04 : NoSQL MongoDB with Dennis Burton

Sunday, 31 January 2010 20:28:41 (Eastern Standard Time, UTC-05:00)

Filed under: Programming | Quick Tips

Have you ever made comments when committing into source control that you wish you could take back? Perhaps in a rage, you entered "Jimmy's code was a pile of fermenting humus that didn't work. So I fixed it!" Now you realize that Jimmy will see it, your boss will see it, and you want to change the comments to something that has a bit more tact. Or maybe your reason is far less malicious: you identified a major bug that you just committed, and you would like to update the comment log to say "Don't use this revision. It has a major bug."

In Subversion, the comments can be updated long after the original commit; log messages are just a property on the repository revision. Note that repositories reject changes to revision properties by default, so the repository administrator may first need to enable the pre-revprop-change hook.

svn propset svn:log --revprop --revision <REVISION> <MESSAGE> <URL>
  • <REVISION> : The revision number of the target log message.
  • <MESSAGE> : The value of the new log message, wrapped in quotes if necessary.
  • <URL> : The base URL of your repository. Since this applies to a revision property, rather than a file property, only the base URL of the repository is needed, rather than a URL directly to a file.

Now your malicious revision comment can be overwritten by:

svn propset svn:log --revprop --revision 123 "Fixed issue #17" http://svnserver/myrepos/

But next time, do try to be nice to Jimmy.

Friday, 27 February 2009 16:54:45 (Eastern Standard Time, UTC-05:00)

Filed under: Blogging | Flash | Programming

My earlier post on creating custom brushes in Google Syntax Highlighter (Extending Language Support in Google Syntax Highlighter) contains a rudimentary brush for ActionScript. The original was designed as Stone Soup: something to get an AS brush established, but not meant to be exhaustive. I have revisited the brush and added some meat. The brush should now supply more thorough coverage of the language. A download is provided below.

ActionScript Brush

dp.sh.Brushes.ActionScript = function() {
  var keywords = 'and arguments asfunction break call case catch clear ' +
    'continue default do else escape eval false finally for getProperty ' +
    'if ifFrameLoaded in instanceof loop NaN new newline not null or ' +
    'prototype return set super switch targetPath tellTarget this throw ' +
    'trace true try typeof undefined unescape var visible void while with';
  var builtin = '_currentframe _droptarget _framesloaded _global _height ' +
    '_level _name _root _rotation _target _totalframes _url _visible ' +
    '_width _x _xmouse _xscale _y _ymouse _yscale Array Boolean Button ' +
    'bytesLoaded bytesTotal Camera Color Date enabled Error focusEnabled ' +
    'Key LoadVars Math Mouse MovieClip nextFrame Number Object Selection ' +
    'Sound Stage String StyleSheet System TextFormat';
  var funcs = 'addProperty attachMovie attachVideo browse cancel ' +
    'clearInterval clone concat createEmptyMovieClip createTextField ' +
    'dispose draw duplicateMovieClip dynamic equals extends function ' +
    'getInstanceAtDepth gotoAndPlay gotoAndStop identity implements ' +
    'import interface isEmpty isFinite isNAN join length loadClip ' +
    'loadMovie loadMovieNum loadVariables loadVariablesNum merge moveTo ' +
    'on onClipEvent onDragOut onDragOver onEnterFrame onKeyDown onKeyUp ' +
    'onKillFocus onMouseDown onMouseMove onMouseUp onPress onRelease ' +
    'onReleaseOutside onRollOut onRollOver onUnload play pop prevFrame ' +
    'private public push registerClass removeMovieClip reverse rotate ' +
    'scale setEmpty setInterval setProperty shift slice sort sortOn ' +
    'splice startDrag static stopAllSounds stopDrag subtract swapDepths ' +
    'toString translate union unloadClip unloadMovie ' +
    'unloadMovieNum unshift class unwatch valueOf watch';
  var includes = '#include #initClip #endInitClip';

  this.regexList = [
    {regex: dp.sh.RegexLib.SingleLineCComments, css: 'comment' },
    {regex: dp.sh.RegexLib.MultiLineCComments, css: 'comment' },
    {regex: dp.sh.RegexLib.DoubleQuotedString, css: 'string' },
    {regex: dp.sh.RegexLib.SingleQuotedString, css: 'string' },
    {regex: new RegExp(this.GetKeywords(keywords), 'gm'), css: 'keyword' },
    {regex: new RegExp(this.GetKeywords(funcs), 'gm'), css: 'func' },
    {regex: new RegExp(this.GetKeywords(builtin), 'gm'), css: 'builtin' },
    {regex: new RegExp(this.GetKeywords(includes), 'gm'), css: 'preprocessor'}
  ];
  this.CssClass = 'dp-as';
  this.Style = '.dp-as .func { color: #000099; }' +
               '.dp-as .builtin { color: #990000; }';
}

dp.sh.Brushes.ActionScript.prototype = new dp.sh.Highlighter();
dp.sh.Brushes.ActionScript.Aliases = ['actionscript', 'as'];

Usage

Upload the brush JavaScript file to your Google Syntax Highlighter Scripts directory, and load the file into your HTML with a <SCRIPT> tag alongside your other brushes.

<script language="javascript"
  src="dp.SyntaxHighlighter/Scripts/shBrushAs.js"></script>

Display syntax-highlighted ActionScript using a traditional Google Syntax Highlighter <PRE> tag, using as or actionscript as the language alias.

<pre name="code" class="as">
  // Some ActionScript Code
</pre>

Brush In Action

/*
Sample ActionScript for Demo
ActionScript Brush for Google Syntax Highlighter
*/
if (dteDate.getMonth() == intCurrMonth && intCurrMonth == intOldMonth
    && intOldYear == intCurrYear) {
  if (dteDate.getDay() == 0 and dteDate.getDate()>1) {
    intYPosition = intYPosition+20;
  }
  duplicateMovieClip ("DayContainer", "DayContainer"+intDate, intDate);
  setProperty ("DayContainer"+intDate, _y, intYPosition);
  setProperty ("DayContainer"+intDate, _x, intXPosition[dteDate.getDay()]);

} else if (intCurrMonth == 6) {
  if (intDate == 4) {
    clrFColor = new Color("DayContainer"+intDate+".foreground");
    clrFColor.setRGB(0xFF0000);
    clrBColor = new Color("DayContainer"+intDate+".background");
    clrBColor.setRGB(0xFF0000);
  }
} else if (intCurrMonth == 9) {
  if (intDate == 31) {
    clrFColor = new Color("DayContainer"+intDate+".foreground");
    clrFColor.setRGB(0xFF9922);
    clrBColor = new Color("DayContainer"+intDate+".background");
    clrBColor.setRGB(0xFF9922);
  }
} else if (intCurrMonth == 10) {
  if (intDate >= 22 && intDate <= 28 && dteDate.getDay() == 4) {
    clrFColor = new Color("DayContainer"+intDate+".foreground");
    clrFColor.setRGB(0xFFCC00);
    clrBColor = new Color("DayContainer"+intDate+".background");
    clrBColor.setRGB(0xFFCC00);
  }
  set ("DayContainer"+intDate+":MyDate", new Date(dteDate.getFullYear(),
    dteDate.getMonth(), dteDate.getDate()));
  setProperty ("DayContainer"+intDate, _visible, true);
  intDate++;
  dteDate.setDate(intDate);
}

Download

Download: shBrushAs.zip
Includes:

  • Compressed shBrushAs.js for production. 
  • Uncompressed shBrushAs.js for debugging.

As always, this code is provided with no warranties or guarantees. Use at your own risk. Your mileage may vary.

 
Friday, 12 December 2008 08:25:38 (Eastern Standard Time, UTC-05:00)

Filed under: Blogging | Flash | JavaScript | Programming | Tools

As I discussed in an earlier post (Blog your code using Google Syntax Highlighter), Google Syntax Highlighter is a simple tool that allows bloggers to easily display code in a format that is familiar to end users. The tool renders the code in a very consumable fashion that includes colored syntax highlighting, line highlighting, and line numbers. Out of the box it supports most of the common languages of today, and a few from yesterday, but some common languages are unsupported. Perl, ColdFusion, and Flash's ActionScript are all unloved by Google Syntax Highlighter, as are many others that you may want to post to your blog. For these languages, the solution is a custom brush.

Syntax Highlighting Brushes

For Google Syntax Highlighter, brushes are JavaScript files that govern the syntax highlighting process, with names following the format shBrushLanguage.js, such as shBrushXml.js. Brushes contain information about the keywords, functions, and operators of a language, as well as the syntax for comments, strings, and other characteristics. Keyword-level syntax is applied to any specific word in the language, including keywords, functions, and any word operators, such as and, or, and not. Regular expressions apply character-level syntax to code, identifying items such as character operators, the remainder of an inline comment, or the entire contents of a comment block. Finally, aliases are defined for the brush; these are the language aliases used within the class attribute of the Google Syntax Highlighter <PRE> tag. With this information, the brush applies the syntax highlighting styles according to the CSS defined for each component of the language.

Breaking Down Brushes

Decomposing the SQL Brush

In JavaScript, everything is an object that can be assigned to a variable, whether it's a number, string, function, or class. Brushes are each a delegate function. The variable name of the brush must match dp.sh.Brushes.SomeLanguage.

dp.sh.Brushes.Sql = function() {

Next, define the list of keywords for applying syntax highlighting. Each list is not an array, but rather a single-space delimited string of keywords that will be highlighted. Also, multiple keyword lists can exist, such as one list for function names, another for keywords, and perhaps another for types, and unique styling can be applied to each grouping (we'll get to styling a little later).

  var funcs = 'abs avg case cast coalesce convert count current_timestamp ' +
    'current_user day isnull left lower month nullif replace right ' +
    'session_user space substring sum system_user upper user year';

  var keywords = 'absolute action add after alter as asc at authorization ' +
    'begin bigint binary bit by cascade char character check checkpoint ' +
    'close collate column commit committed connect connection constraint ' +
    'contains continue create cube current current_date current_time ' +
    'cursor database date deallocate dec decimal declare default delete ' +
    'desc distinct double drop dynamic else end end-exec escape except ' +
    'exec execute false fetch first float for force foreign forward free ' +
    'from full function global goto grant group grouping having hour ' +
    'ignore index inner insensitive insert instead int integer intersect ' +
    'into is isolation key last level load local max min minute modify ' +
    'move name national nchar next no numeric of off on only open option ' +
    'order out output partial password precision prepare primary prior ' +
    'privileges procedure public read real references relative repeatable ' +
    'restrict return returns revoke rollback rollup rows rule schema ' +
    'scroll second section select sequence serializable set size smallint ' +
    'static statistics table temp temporary then time timestamp to top ' +
    'transaction translation trigger true truncate uncommitted union ' +
    'unique update values varchar varying view when where with work';

  var operators = 'all and any between cross in join like not null or ' +
    'outer some';

Following the keyword definitions is the Regular Expression pattern and Style definition object list. The list, this.regexList, is an array of pattern/style objects: {regex: regexPattern, css: classString}. The regexPattern is a JavaScript RegExp object, and defines the pattern to match in the source code; this pattern can be created using one of three options within Google Syntax Highlighter.

Predefined Patterns
Within Google Syntax Highlighter, dp.sh.RegexLib contains five predefined regular expression patterns. MultiLineCComments is used for any language that uses C-style multi-line comment blocks: /* my comment block */. SingleLineCComments is used for any language that uses C-style single line or inline comments: // my comment. SingleLinePerlComments applies for Perl-style single line comments: # my comment. DoubleQuotedString identifies any string wrapped in double-quotes and SingleQuotedString identifies strings wrapped in single-quotes. These options are used in place of creating a new instance of the RegExp object.
Keyword Patterns
Google Syntax Highlighter has a GetKeywords(string) function which will build a pattern string based on one of the brush's defined keyword strings. However, this is only the pattern string, and not the RegExp object. Pass this value into the RegExp constructor: new RegExp(this.GetKeywords(keywords), 'gmi')
Custom Pattern Definition
Create a new RegExp object using a custom pattern. For example, use new RegExp('--(.*)$', 'gm') to match all Sql comments, such as --my comment.

For these pattern/style objects, the regular expression pattern is followed by the name of the CSS class to apply to any matches. The style sheet packaged with Google Syntax Highlighter, SyntaxHighlighter.css, already defines the many CSS classes used by GSH; place the additional styles for your custom brushes within this file, in a new file, in your HTML, or define them within the brush using JavaScript.

  this.regexList = [
    {regex: new RegExp('--(.*)$', 'gm'), css: 'comment'},
    {regex: dp.sh.RegexLib.DoubleQuotedString, css: 'string'},
    {regex: dp.sh.RegexLib.SingleQuotedString, css: 'string'},
    {regex: new RegExp(this.GetKeywords(funcs), 'gmi'), css: 'func'},
    {regex: new RegExp(this.GetKeywords(operators), 'gmi'), css: 'op'},
    {regex: new RegExp(this.GetKeywords(keywords), 'gmi'), css: 'keyword'}
  ];

The delegate definition ends with any style specifications. Apply a CSS class to the entire code block using this.CssClass. Also, as mentioned above, the brush can define custom CSS using this.Style as an alternative to placing the CSS in HTML or a CSS file. When finished, close the delegate.

  this.CssClass = 'dp-sql';
  this.Style = '.dp-sql .func { color: #ff1493; }' +
    '.dp-sql .op { color: #808080; }';
}

The final component of a brush, set outside of your delegate, contains the prototype declaration and any aliases to apply to the brush. Aliases consist of a string array (a real array this time, not a space-delimited string) of language aliases to use, such as ['c#','c-sharp','csharp']. Alias values must be unique across all defined brushes that you have included in your site.

dp.sh.Brushes.Sql.prototype = new dp.sh.Highlighter();
dp.sh.Brushes.Sql.Aliases = ['sql'];

Making a Custom Brush (for ActionScript)

I like rich media applications, such as those developed in Flash or Silverlight. I was surprised when I found that Google Syntax Highlighter does not ship with an ActionScript brush, and more surprised when I found out that no one had written one yet. So, using the methods from above, I created one. This isn't meant to be an exhaustive brush, but more like Stone Soup. It's a start. Please feel free to add to it.

dp.sh.Brushes.ActionScript = function() {

  var keywords = 'and break case catch class continue default do dynamic else ' +
    'extends false finally for if implements import in interface NaN new not ' +
    'null or private public return static super switch this throw true try ' +
    'undefined var void while with';

  this.regexList = [{regex: dp.sh.RegexLib.SingleLineCComments, css: 'comment'},
    {regex: dp.sh.RegexLib.MultiLineCComments, css: 'comment'},
    {regex: dp.sh.RegexLib.DoubleQuotedString, css: 'string'},
    {regex: dp.sh.RegexLib.SingleQuotedString, css: 'string'},
    {regex: new RegExp(this.GetKeywords(keywords), 'gm'), css: 'keyword'}];

    this.CssClass = 'dp-as';
}

dp.sh.Brushes.ActionScript.prototype = new dp.sh.Highlighter();
dp.sh.Brushes.ActionScript.Aliases = ['actionscript', 'as'];
Wednesday, 10 December 2008 16:47:27 (Eastern Standard Time, UTC-05:00)

Filed under: ASP.Net | Blogging | Programming | SEO

Did you know that yourdomain.com and www.yourdomain.com are actually different sites? Are they both serving the same content? If so, it may be negatively impacting your search engine rankings.

Subdomains and the Synonymous 'WWW'

Sub-domains are the prefix to a domain (http://subdomain.yourdomain.com), and are treated by browsers, computers, domain name systems (DNS), search engines, and the general internet as separate, individual web sites. Google's primary web presence, http://www.google.com, is very different from Google Mail, http://mail.google.com, or Google Documents, http://docs.google.com, all because of subdomains. However, what many do not realize is that www is, itself, a subdomain.

A domain, on its own, requires no www prefix; a subdomain-less http://yourdomain.com should be sufficient for serving up a web site. And since www is a subdomain, dropping the prefix could potentially return a different response. There are some sites that will fail to return without the prefix, and some sites that fail with it, but the most common practice is that the www subdomain is synonymous for no subdomain at all.

The Synonymous WWW and SEO

The issue with having two synonymous URLs (http://yourdomain.com and http://www.yourdomain.com) is that search engines may interpret them as separate sites, even if they are serving the same content. The two addresses are technically independent and are potentially serving unique content; to a cautious search engine, even if pages appear to contain the same content, there may be something different under the covers. This means your audience's search results return two entries for the same content. Some users will happen to click on yourdomain.com while others navigate to www.yourdomain.com, splitting your traffic, your page hits, and your search ranking between two sites, unnecessarily.

HTTP redirects will cure the issue. If you access http://google.com, your browser is instantly redirected to http://www.google.com. This is done through an HTTP 301 permanent redirect. Search spiders recognize HTTP response codes, and understand the 301 as a "use this other URL instead" command. Many search engines, such as Google, will then update all page entries for the original URL (http://yourdomain.com) and replace it with the 301's destination URL (http://www.yourdomain.com). If there is already an entry for the destination URL, the two entries will be merged together. The search entries for yourdomain.com and www.yourdomain.com will now share traffic, share page hits, and share search ranking. Instead of having two entries on the second and third pages of search results, combining these entries may be just enough to place you on the first page of results.

In addition to combining search entries for subdomains, you can also combine root-level domains through HTTP 301. On this site, in addition to adding the www prefix if no subdomain is specified, captainloadtest.com will HTTP 301 redirect to www.cptloadtest.com.

Combining the Synonyms

We need a way to implement an HTTP 301 redirect at the domain level for all requests to a site; however, often we are using applications that may not grant us access to the source, or we don't have access to IIS through our host to set up redirects ourselves. URL Rewrite, Part 2 covers a great drop-in redirect module by Fritz Onion that uses a stand-alone assembly with a few additions in web.config to HTTP 301 redirect paths in your domain (it also supports HTTP 302 redirects). This module is perfect for converting a WordPress blog post URL, such as cptloadtest.com/?p=56, to a DasBlog blog post URL like cptloadtest.com/2006/05/31/VSNetMacroCollapseAll.aspx. However, to redirect domains and subdomains, the module must go a step further and redirect based on matches against the entire URL, such as directing http:// to https:// or captainloadtest.com to cptloadtest.com, which it does not support. It's time for some modifications.

private void OnBeginRequest(object src, EventArgs e) {
  HttpApplication app = src as HttpApplication;
  string reqUrl = app.Request.Url.AbsoluteUri;
  redirections redirs
    = (redirections) ConfigurationManager.GetSection("redirections");

  foreach (Add a in redirs.Adds) {
    Regex regex = new Regex(a.targetUrl, RegexOptions.IgnoreCase);
    if (regex.IsMatch(reqUrl)) {
      string targetUrl = regex.Replace(reqUrl, a.destinationUrl, 1);

      if (a.permanent) {
        app.Response.StatusCode = 301; // make a permanent redirect
        app.Response.AddHeader("Location", targetUrl);
        app.Response.End();
      }
      else
        app.Response.Redirect(targetUrl);

      break;
    }    
  }
}

By converting app.Request.RawUrl to app.Request.Url.AbsoluteUri, the regular expression will now match against the entire URL, rather than just the requested path. There is one downside to this change: the value is the actual path processed, not necessarily what was in the originally requested URL. To this effect, the value of AbsoluteUri for requesting http://www.cptloadtest.com?p=56 is actually http://www.cptloadtest.com/default.aspx?p=56; by requesting the root directory, the default page is being processed, not the directory itself, so default.aspx is added to the URL. Keep this in mind when setting up your redirection rules. Also, the original code converted the URL to lower case; with my modifications, I chose to maintain the case of the URL, since sometimes case matters, and instead ignore case in the regular expression match using RegexOptions.IgnoreCase. Finally, I made some other minor enhancements, like using the ConfigurationManager, since ConfigurationSettings is now obsolete, and reusing the matching Regex instance for replacements.

Download: RedirectModule.zip

Includes:

  • Source code for the drop-in Redirect Module
  • Sample web.config that uses the module
  • Compiled version of redirectmodule.dll

The code is based on the original Redirect Module by Fritz Onion and the Xml Serializer Section Handler by Craig Andera. As always, this code is provided with no warranties or guarantees. Use at your own risk. Your mileage may vary. Thanks to Fritz Onion for the original work, and for allowing me to extend his code further.

The usage is the same as Fritz Onion's original module. Drop the assembly into your site's bin, and place a few lines into the web.config. The example below contains the rules as they would apply to this site, 301 redirecting http://www.captainloadtest.com to http://www.cptloadtest.com, and adding the www subdomain to any domain requests that have no subdomain.

<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="redirections"
      type="Pluralsight.Website.XmlSerializerSectionHandler, redirectmodule" />
  </configSections>
  <!-- Redirect Rules -->
  <redirections type="Pluralsight.Website.redirections, redirectmodule">
    <!-- Domain Redirects //-->
    <add targetUrl="captainloadtest\.com/Default\.aspx"
      destinationUrl="cptloadtest.com/" permanent="true" />
    <add targetUrl="captainloadtest\.com"
      destinationUrl="cptloadtest.com" permanent="true" />

    <!-- Add 'WWW' to the domain request //-->
    <add targetUrl="://cptloadtest\.com/Default\.aspx"
      destinationUrl="://www.$1.com/" permanent="true" />
    <add targetUrl="://cptloadtest\.com"
      destinationUrl="://www.$1.com" permanent="true" />

    <!-- ...More Redirects -->
  </redirections>
  <system.web>
    <httpModules>
      <add name="RedirectModule"
        type="Pluralsight.Website.RedirectModule, redirectmodule" />
    </httpModules>
  </system.web>
</configuration>

The component is easy to use, and can redirect your site traffic to any URL you choose. Neither code changes to the application nor configuration changes to IIS are needed. By using this module to combine synonymous versions of your URLs, such as alternate domains or subdomains, you will improve your page ranking through combining duplicate search result entries. One more step towards your own search engine optimization goals.

URL Rewrite

Thursday, 04 December 2008 16:43:10 (Eastern Standard Time, UTC-05:00)

Filed under: Blogging | JavaScript | Programming | Reviews | Tools

Google Syntax Highlighter is a simple tool that allows bloggers to easily display code in a format that is familiar to end users. The tool renders the code in a very consumable fashion that includes colored syntax highlighting, line highlighting, and line numbers.

/*
This is an example of how Google
Syntax Highlighter can highlight and display syntax
to you, the end user
*/
public void HelloWorld()
{
  // I have some comments
  Console.WriteLine("Hello, World!");
}

It is purely a client-side tool, as all of the processing is done strictly within the browser through JavaScript; there is no server-side processing. Since it is all JavaScript, you don't need special Copy/Paste plugins and macros installed in your favorite IDE or your blog authoring tool. (I am leery of random plugins and installing them into the software that I use to feed my family.) To include code in your blog post, copy your code from Visual Studio, Notepad, Flash, Firebug, or any tool that displays text, and paste it into your post. As of v1.5.1, Google Syntax Highlighter supports C, C++, C#, CSS, Delphi, HTML, Java, JavaScript, PHP, Pascal, Python, Ruby, SQL, VB, VB.NET, XML, and XSLT, and all of this is just what comes out of the box.

Setting Up Syntax Highlighter

To get Syntax Highlighter running on your blog, download the latest version of the RAR archive and extract the code. The archive contains a parent folder, dp.SyntaxHighlighter, with three child folders:

dp.SyntaxHighlighter
  \Scripts         //Production-ready (Compressed) scripts
  \Styles          //CSS
  \Uncompressed    //Human-readable (Uncompressed/Debug) scripts

Once the archive is extracted, upload dp.SyntaxHighlighter to your blog. Feel free to rename the folder if you like, though I did not. It is not necessary to upload the Uncompressed folder and its files; they are best used for debugging or for viewing the code, as the files in the Scripts folder have been compressed to reduce bandwidth by having most of their whitespace removed.

After you have uploaded the files, you will need to add script and style references to your site's HTML. This code is not for your posts, but rather for your blog template. In DasBlog, I place this code in the <HEAD> block of my homeTemplate.blogtemplate file. Remember to change the file paths to match the path to where you uploaded the code.

<link type="text/css" rel="stylesheet"
  href="dp.SyntaxHighlighter/Styles/SyntaxHighlighter.css"></link>
<script language="javascript" src="dp.SyntaxHighlighter/Scripts/shCore.js"></script>
<script language="javascript" src="dp.SyntaxHighlighter/Scripts/shBrushCSharp.js"></script>
<script language="javascript" src="dp.SyntaxHighlighter/Scripts/shBrushXml.js"></script>
<script language="javascript">
window.onload = function () {
  dp.SyntaxHighlighter.ClipboardSwf = 'dp.SyntaxHighlighter/Scripts/clipboard.swf';
  dp.SyntaxHighlighter.HighlightAll('code');
}
</script>

To make the tool most efficient, including minimizing the code downloaded by the client browser, highlighting is only enabled for the languages that you specify. The highlighting rules for each language are available through a file referred to as a brush. The code sample above enables only C# and XML/HTML by including the core file, shCore.js, the C# brush, shBrushCSharp.js, and the XML/HTML brush, shBrushXml.js. A unique brush file is available for each of the supported languages, and only the core file is required. These brushes are located in your Scripts directory (the human-readable versions are in the Uncompressed folder). Include only the brushes that you like; if you forget a language brush, the code will still display on your page, but as unformatted text.

<!-- Unformatted HTML Code / No Brush -->
<p id="greeting">Hi, mom & dad!</p>
<!-- Formatted HTML Code -->
<p id="greeting">Hi, mom & dad!</p>

Making Syntax Highlighter Go

Now that the application is deployed to the site, how does it get applied to a post? Paste the code into the HTML view of your post, inside of a <PRE> tag. Create a name attribute on your tag with a value of code, and a class attribute set to the language and options you are using.

<pre name="code" class="c-sharp">
  public void HelloWorld()
  {
    Console.WriteLine("Hello, World!");
  }
</pre>

One catch: the code must first be made HTML-safe. All angle-brackets, <tag>, must be converted to their HTML equivalent, &lt;tag&gt;, as well as ampersands, & to &amp;. I also find it helpful if your code indentation uses two spaces, rather than tabs.

<!-- Pre-converted code -->
<p>Hi, mom & dad!</p>
<!-- Converted code -->
<pre name="code" class="html">
  &lt;p&gt;Hi, mom &amp; dad!&lt;/p&gt;
</pre>

The class attribute is made up of both language and option aliases. These aliases consist of one language followed by your desired options, all in a colon-delimited list.

class="language[:option[:option[:option]]]"

The value of language is any of Syntax Highlighter's defined language aliases, such as c#, csharp, or c-sharp for C#, or rb, ruby, rails, or ror for Ruby. See: full list of available languages.

Options allow for such things as turning off the plain text / copy / about controls (nocontrols), turning off the line number gutter (nogutter), or specifying the number of the first line (firstline[n]). A JavaScript code block with no controls header, and starting the line numbering at 34 would have a class attribute value of class="js:nocontrols:linenumber[34]". See: full list of available options.
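For example, a hypothetical post snippet combining those options would look like this:

<pre name="code" class="js:nocontrols:linenumber[34]">
  // JavaScript rendered without the controls header,
  // with line numbering starting at 34
</pre>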

Extending Syntax Highlighter

Because Google Syntax Highlighter is entirely in JavaScript, you have access to all of the code. Edit it however you like to suit your needs. Additionally, brushes are very easy to create, and include little more than a list of a highlighted language's keywords in a string and an array of language aliases. Creating a brush for ActionScript or QBasic would not take much time. Language brushes exist in the wild for Perl, DOS Batch, and ColdFusion.
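To illustrate the shape of a brush (all names here are hypothetical), a minimal brush is little more than a keyword string, a regex list, and an alias array:

// A minimal custom brush skeleton, following the pattern of the stock brushes
dp.sh.Brushes.MyLanguage = function() {
  var keywords = 'if else for while return';  // space-delimited keyword list
  this.regexList = [
    {regex: dp.sh.RegexLib.SingleLineCComments, css: 'comment'},
    {regex: new RegExp(this.GetKeywords(keywords), 'gm'), css: 'keyword'}
  ];
  this.CssClass = 'dp-my';
}
dp.sh.Brushes.MyLanguage.prototype = new dp.sh.Highlighter();
dp.sh.Brushes.MyLanguage.Aliases = ['mylang'];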

In a future post I plan on discussing Brush Creation in depth through creating a brush for ActionScript.

Comparing Syntax Highlighter to Others

I am a fan of this tool, though that should be obvious considering it is what I use on this blog. I like how readable the code is, how extendable it is, and how easy it is to use. I don't like its compatibility--or lack thereof--with RSS; since all of the work is done in JavaScript, and RSS doesn't do JavaScript, there is no syntax highlighting, numbering, or options within a feed, though the code formatting is still maintained. Other tools, like the CopySourceAsHtml plugin for Visual Studio or Insert Code Snippet for Windows Live Writer, convert your code into formatted HTML, where all of the syntax highlighting is applied through HTML attributes and embedded CSS. Their methods are much easier than Syntax Highlighter's, since there are no stylesheets or JavaScript files to include in your HTML, and you don't have to worry about making your code HTML-safe. Also, their method works in RSS feeds. However, there isn't the same level of control. Through Syntax Highlighter's extendibility, I can theme my code views, such as if I wanted them to look like my personal Visual Studio theme. Through Syntax Highlighter, I can also make changes at a later time, and those changes will be immediately reflected in all past posts, whereas making modifications to the HTML/embedded-CSS pattern is much more difficult.

Final Thoughts

I like CopySourceAsHtml in Visual Studio. I used it for years on this blog. But I code in more languages than VB.Net or C#, and the plugin isn't available within the Flash or LoadRunner IDE. I was also frustrated with pasting my code in, only to find that it was too wide for my blog theme's margins, and having to go back to Visual Studio, change my line breaks, and repeat the process. I'm sticking with Google Syntax Highlighter. It works for all of my languages (as soon as I finish writing my ActionScript brush), and when my lines are too long, I simply change my HTML. And in my HTML, my code still looks like code, rather than a mess of embedded style. I have to sacrifice RSS formatting, but as a presentation developer who is very particular about his HTML, I am glad for the customization and control.

Monday, 24 November 2008 10:41:50 (Eastern Standard Time, UTC-05:00)

Filed under: ASP.Net | Programming

A few months ago I switched my blogging engine from WordPress to DasBlog, and was left with a pile of broken post links. The WordPress way of formatting URLs, such as cptloadtest.com/?p=56, no longer applied; DasBlog has its own way of building the URL: cptloadtest.com/2006/05/31/VSNetMacroCollapseAll.aspx. Other people with links to my posts on their own blogs, and search engines with search results pointing to the old format, no longer directed people to the proper page. Fortunately, the old WordPress URLs pointed to the domain root, so visitors saw the main page instead of a 404, but what I really needed was a way to get users to the content they came for. And I wanted a way that didn't involve customizing DasBlog code.

Part 1 discusses some client-side options for redirecting traffic. As a developer, I would only need to add some quick JavaScript or HTML to my pages to reroute traffic, but I wanted a better way. I wanted Google's links to be updated, rather than just having the link route correctly. I wanted a solution that worked without JavaScript. I wanted a solution that didn't download the page twice, as any client-side processor would do. I wanted the HTTP 301.

HTTP 301, Moved Permanently, is a server-side redirection of traffic. No client-side processing is involved. And Google recognizes that this is a permanent redirect, as its name implies, so the Google result link is updated to the new URL. But since this is a server-side process, and I don't have full access to IIS settings with my hosting provider, I needed some code.

// Return a permanent (HTTP 301) redirect from an ASP.NET page
Response.StatusCode = 301;
Response.AddHeader("Location", "http://www.cptloadtest.com");
Response.End();

You may have noticed that, unlike the title of this blog post, I did not use a URL rewrite. HTTP 301 (and 302) redirects are HTTP response codes returned to the client browser, and the client then requests the new URL. A URL rewrite leaves the original URL in place and renders a different page instead, without visibility to the client browser or end-user. If I were to URL rewrite my blog from page1.aspx to page2.aspx, the URL in your address bar would still list page1.aspx, even though the content was from page2.aspx. The end-user (and the search engine) never knows about page2.aspx. However, with an HTTP redirect, the content and the address bar would both be page2.aspx.

I ran through Google to see if anyone had solved this problem in the past. I came across a Scott Hanselman post on HTTP 301, which mentions a fantastic drop-in redirect module by Fritz Onion. I was elated when I found this module, and it was very easy to implement. Drop the assembly into your application's bin, add a few lines into the web.config, and enjoy redirection bliss. The module also supports both HTTP 302 and HTTP 301 redirects through the "permanent" attribute in the redirection rules.

<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="redirections"
      type="Pluralsight.Website.XmlSerializerSectionHandler, redirectmodule" />
  </configSections>
  <!-- Redirect Rules -->
  <redirections type="Pluralsight.Website.redirections, redirectmodule">
    <!-- Wordpress Post Redirects //-->
    <add targetUrl="/default\.aspx\?p=86"
      destinationUrl="/2007/09/22/ManagingMultipleEnvironmentConfigurationsThroughNAnt.aspx"
      permanent="true" />
    <add targetUrl="/default\.aspx\?p=74"
      destinationUrl="/2007/02/09/Flash8DepthManagerBaffledByThoseWithoutDepth.aspx"
      permanent="true" />
    <add targetUrl="/default\.aspx\?p=72"
      destinationUrl="/2006/10/20/IE7Released.aspx"
      permanent="true" />
    <add targetUrl="/default\.aspx\?p=70"
      destinationUrl="/2006/10/17/ClearingFlashIDEWSDLCache.aspx"
      permanent="true" />
    <add targetUrl="/default\.aspx\?p=69"
      destinationUrl="/2006/10/10/CruiseControlNetV11ReleasedOct1.aspx"
      permanent="true" />

    <!-- ...More Redirects -->
  </redirections>
  <system.web>
    <httpModules>
      <add name="RedirectModule"
        type="Pluralsight.Website.RedirectModule, redirectmodule" />
    </httpModules>
  </system.web>
</configuration>

The entire implementation took only a few minutes. It seemed like it took longer to upload the changes to the blog than it did to make the local edits. The solution is also very clean, as it required no modification to the existing application beyond integrating the configuration blocks into the web.config. Blog traffic will now seamlessly redirect from links that use the old WordPress URL to posts that use the new DasBlog URL. Search engines also update their links, and the page ranking for each post isn't lost into the ether just because the URL changed. All is well with the world, again.

Next up: Part 3. URL redirection also plays a role in Search Engine Optimization. In Part 3, I will go over some ways that you can use HTTP redirects to improve your search engine listings, as well as discuss some improvements I made to Fritz Onion's redirect module in the name of SEO.

URL Rewrite

Tuesday, 11 November 2008 12:36:34 (Eastern Standard Time, UTC-05:00)

Filed under: ASP.Net | Programming

A few months ago I switched my blogging engine from WordPress to DasBlog. Though I do feel that WordPress is a more mature engine, with a more robust feature set and a better plug-in community, I wanted a site that was built on .NET. Being a .NET developer, I wanted something I could tinker with, modify, and adjust to my liking.

Redirecting WordPress URL Format to DasBlog

One of the pain points of the switch was the difference in how the two blogging engines generate their URLs. As an example, my post on the Collapse All macro for the Visual Studio Solution Explorer, originally written in WordPress, went from a URL of cptloadtest.com/?p=56 to cptloadtest.com/2006/05/31/VSNetMacroCollapseAll.aspx. A few sites, such as Chinhdo.com, link to the post's old URL, which, without a redirect in place, would return HTTP 404: File Not Found under the new engine.

Enter URL redirects. I needed a way to translate one URL into another. More specifically, since WordPress retrieves all of its posts by query string (using the 'p' key), I needed to translate a specific query string key/value pair into a URL. But, I didn't want to have to recompile any code.

There are a few client-side redirect options available:

Meta Refresh

<meta http-equiv="refresh" content="0;url=http://www.cptloadtest.com/">

The old-school HTML way of redirecting a page. Nothing but HTML is needed; only access to the base HTML page is required. Place the above HTML within your <HEAD> tag, reformatting the content value as "X;url=myURL", where X is the number of seconds to wait before initiating the redirect, and myURL is the destination URL. Any browser will obey Meta-Refresh, regardless of whether JavaScript is enabled, but the simple implementation brings with it simple features. Meta-Refresh will redirect any request to the page, regardless of the query string, and there is no simple way to enable it only for a specific key/value query string pair. Because all WordPress requests are to the domain root--a single page--and referenced by query string, this would not work for my needs. If the old requests were instead to unique former pages, such as /post56.html, I could create a stub copy of post56.html and have each page Meta-Refresh to the post's new location. But even if this did apply, it's too much work to create 56 stub pages for 56 former posts.

JavaScript Redirect

<script type="text/javascript">
<!--
window.location = "http://www.cptloadtest.com/"
//-->
</script>

This one requires JavaScript to work. Since the query string is on the domain root, the request serves up Default.aspx and applies the query string to it. DasBlog doesn't observe the "p" query string key, so it will not perform any action against it, but the query string is still available to client-side scripts. I can add JavaScript code to match patterns in the URL (window.location) and, upon a match, rewrite the old URL to the new one and redirect. It's relatively simple to create new match patterns for each of my 56 posts, and I can place all of my new client-side code atop Default.aspx.

var oldPath = "/?p=56";
var newPath = "/2006/05/31/VSNetMacroCollapseAll.aspx";
if (document.URL.indexOf(oldPath) > 0)
{
  window.location.replace(document.URL.replace(oldPath, newPath));
}

However, since the patterns are in client-side code, they are visible to end-users who view source. Users will also see page flicker, as Default.aspx is served up using the old URL, only to be redirected and refreshed under the new URL. A side effect of this double page load is that my bandwidth is also doubled for that single request.

What it all means

All of the options above result in functionality similar to an HTTP 302, otherwise known as a Temporary Redirect. This class of redirect is meant for simple forwarding or consolidation that is either of a temporary nature or part of the intended functional implementation. An example of where this would be used is when one page finishes processing and another should be returned, such as redirecting the client to a home page or landing page after authentication completes on a login page. With a 302, the original page is still a valid page, and users should still go to it in the future, but under certain scenarios there is an alternate page that should be used.

The alternative to a 302 is an HTTP 301, otherwise known as Moved Permanently, or a Permanent Redirect. The 301 is for files that have permanently changed locations, and it carries an implied "please update your links" notice. The HTTP 301 is ultimately what I was looking for. cptloadtest.com/?p=56 is a defunct URL that will never again see the light of day; users, web sites, and (most importantly) search engines should update their references to the new DasBlog format of the post's URL. Client-side code has no ability to produce an HTTP 301, so it was beginning to look like I might have to either modify the DasBlog code to get my 301s or live without them. But I found a way; this site now has all of the 301 goodness I craved, while keeping the DasBlog code pure.

It's all about HTTP Modules.

In Part 2, I will go over how to use HTTP Modules to implement both HTTP 301 and HTTP 302 redirects, all by simply dropping a file into your application's bin folder and adding a new section to your web.config. No compile required.
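As a teaser, here is a minimal sketch of the approach, assuming a hypothetical module with a single hard-coded mapping; Part 2 covers the real, configurable implementation:

using System;
using System.Web;

public class PermanentRedirectModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += new EventHandler(OnBeginRequest);
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication application = (HttpApplication) sender;

        // WordPress retrieved posts by the 'p' query string key.
        if (application.Request.QueryString["p"] == "56")
        {
            // Issue an HTTP 301 pointing to the post's new DasBlog URL.
            application.Response.StatusCode = 301;
            application.Response.Status = "301 Moved Permanently";
            application.Response.AddHeader("Location",
                "/2006/05/31/VSNetMacroCollapseAll.aspx");
            application.Response.End();
        }
    }

    public void Dispose() {}
}

The module is then registered in the httpModules section of web.config--the drop-in-a-file, edit-the-config experience described above.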

Thursday, 06 November 2008 10:01:33 (Eastern Standard Time, UTC-05:00)  #    Comments [1] - Trackback

Filed under: Programming | Testing | Tools

Recently, I was writing unit tests for a web application built on Castle ActiveRecord. My goal was to mock ActiveRecord's data store, rather than use a Microsoft SQL Server database for testing. SQL Server backing just would not fit my needs, where a mock data store would serve much better:

  • I did not want a SQL Server installation to be a requirement for me, the other developers, and my Continuous Integration server.
  • I wanted something fast. I didn't want to have to wait for SQL Server to build / tear down my schema.
  • I wanted something isolated, so that the other developers, my CI server, and I wouldn't have contention over the same database, but I didn't want to deal with independent SQL Server instances for everyone.

Essentially, what I wanted was a local, in-memory database that could be quickly initialized and destroyed specifically for my tests. The resolution was SQLite for ADO.Net, using an in-memory SQLite instance. Brian Genisio has a fantastic write-up on mocking the data store for Castle ActiveRecord using SQLite for ADO.Net. The post made my day, since I was looking for a way to do this, and he had already done all of the work <grin/>. I encourage you to read his post first, as the rest of this post assumes you have already done so.

Brian's post was a great help to me; I made a few enhancements to what he started to make it fit my needs even more.

My updated version of Brian's ActiveRecordMockConnectionProvider class:

using System;
using System.Collections;
using System.Data;
using System.Reflection;
using Castle.ActiveRecord;
using Castle.ActiveRecord.Framework;
using Castle.ActiveRecord.Framework.Config;
using NHibernate.Connection;

namespace ActiveRecordTestHelper
{
  public class ActiveRecordMockConnectionProvider : DriverConnectionProvider
  {
    private static IDbConnection _connection;

    private static IConfigurationSource MockConfiguration
    {
      get
      {
        var properties = new Hashtable
            {
              {"hibernate.connection.driver_class",
                "NHibernate.Driver.SQLite20Driver"},
              {"hibernate.dialect", "NHibernate.Dialect.SQLiteDialect"},
              {"hibernate.connection.provider", ConnectionProviderLocator},
              {"hibernate.connection.connection_string",
                "Data Source=:memory:;Version=3;New=True;"}
            };

        var source = new InPlaceConfigurationSource();
        source.Add(typeof (ActiveRecordBase), properties);

        return source;
      }
    }

    private static string ConnectionProviderLocator
    {
      get { return String.Format("{0}, {1}", TypeOfEnclosingClass.FullName,
                                    EnclosingAssemblyName.Split(',')[0]); }
    }

    private static Type TypeOfEnclosingClass
    {
      get { return MethodBase.GetCurrentMethod().DeclaringType; }
    }

    private static string EnclosingAssemblyName
    {
      get { return Assembly.GetAssembly(TypeOfEnclosingClass).FullName; }
    }

    public override IDbConnection GetConnection()
    {
      if (_connection == null)
        _connection = base.GetConnection();

      return _connection;
    }

    public override void CloseConnection(IDbConnection conn) {}

    /// <summary>
    /// Destroys the connection that is kept open in order to keep the
    /// in-memory database alive. Destroying the connection will destroy
    /// all of the data stored in the mock database. Call this method when
    /// the test is complete.
    /// </summary>
    public static void ExplicitlyDestroyConnection()
    {
      if (_connection != null)
      {
        _connection.Close();
        _connection = null;
      }
    }

    /// <summary>
    /// Initializes ActiveRecord and the Database that ActiveRecord uses to
    /// store the data. Call this method before the test executes.
    /// </summary>
    /// <param name="useDynamicConfiguration">
    /// Use reflection to build configuration, rather than the Configuration
    /// file.
    /// </param>
    /// <param name="types">
    /// A list of ActiveRecord types that will be created in the database
    /// </param>
    public static void InitializeActiveRecord(bool useDynamicConfiguration,
                                              params Type[] types)
    {
      ActiveRecordStarter.ResetInitializationFlag();
      IConfigurationSource configurationSource = useDynamicConfiguration
                                       ? MockConfiguration
                                       : ActiveRecordSectionHandler.Instance;
      ActiveRecordStarter.Initialize(configurationSource, types);
      ActiveRecordStarter.CreateSchema();
    }

    /// <summary>
    /// Initializes ActiveRecord and the Database that ActiveRecord uses to
    /// store the data based. Configuration is dynamically generated using
    /// reflection. Call this method before the test executes.
    /// </summary>
    /// <param name="types">
    /// A list of ActiveRecord types that will be created in the database
    /// </param>
    [Obsolete("Use InitializeActiveRecord(bool, params Type[])")]
    public static void InitializeActiveRecord(params Type[] types)
    {
      InitializeActiveRecord(true, types);
    }
  }
}

In my class, I have overloaded the method InitializeActiveRecord to include the boolean parameter useDynamicConfiguration, governing whether the configuration is dynamically built using Reflection or taken from your app.config instead. The original signature, without the parameter, is preserved for backward compatibility and defaults to true (dynamic configuration), though it is now marked Obsolete.

Why? Brian's original code, as is, is meant to be dropped in as a new class within your test assembly, and it uses reflection to dynamically determine the provider information, including the fully-qualified class name and assembly of the new DriverConnectionProvider. Reflection means little effort on my part when I want to drop the class into a new test assembly. Drop it in and go; no need to even modify the app.config. However, if I want to switch my provider back to SQL Server or some other platform, I have to modify the code and recompile.

My modifications remove the restriction of compiled-code-only configuration and allow the configuration to be placed in app.config, while preserving the existing functionality for backward compatibility. By allowing app.config-based configuration, users can quickly switch back and forth between SQLite and SQL Server databases without having to modify and recompile the application. To use this customized ActiveRecordMockConnectionProvider class without dynamic configuration, add the following code to the configuration block of your test's app.config.

<activerecord>
  <config>
    <add key="hibernate.connection.driver_class"
      value="NHibernate.Driver.SQLite20Driver" />
    <add key="hibernate.dialect" value="NHibernate.Dialect.SQLiteDialect" />
    <add key="hibernate.connection.provider"
      value="ActiveRecordTestHelper.ActiveRecordMockConnectionProvider, ActiveRecordTestHelper" />
    <add key="hibernate.connection.connection_string"
      value="Data Source=:memory:;Version=3;New=True;" />
  </config>
</activerecord>

The catch is that you will need to know the fully-qualified class and assembly information for your provider (the hibernate.connection.provider value, above), which means you would have to modify it for every test assembly. To get around this, compile the code into a separate assembly (I called mine 'ActiveRecordTestHelper.dll'), and reference this new assembly from your test assembly. By using a separate assembly, you no longer need to modify the activerecord configuration block for every instance, and you can reuse the same block everywhere the new assembly is referenced.

And to switch over from in-memory SQLite to SQL Server, just comment out the SQLite block and uncomment the SQL Server block (or whatever other provider you are using).
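For illustration, here is a minimal sketch of a test fixture that exercises the provider; the Post class is a hypothetical ActiveRecord type, and the fixture assumes the app.config block above is in place:

using System;
using Castle.ActiveRecord;
using NUnit.Framework;
using ActiveRecordTestHelper;

// A hypothetical ActiveRecord type, for the sake of the example.
[ActiveRecord]
public class Post : ActiveRecordBase<Post>
{
  private int _id;
  private string _title;

  [PrimaryKey]
  public int Id
  {
    get { return _id; }
    set { _id = value; }
  }

  [Property]
  public string Title
  {
    get { return _title; }
    set { _title = value; }
  }
}

[TestFixture]
public class PostTests
{
  [SetUp]
  public void SetUp()
  {
    // false: read the activerecord block from app.config (above).
    ActiveRecordMockConnectionProvider.InitializeActiveRecord(
        false, typeof (Post));
  }

  [TearDown]
  public void TearDown()
  {
    // Closes the shared connection, destroying the in-memory database
    // so that each test starts with a clean slate.
    ActiveRecordMockConnectionProvider.ExplicitlyDestroyConnection();
  }

  [Test]
  public void CanSaveAndRetrieveAPost()
  {
    Post post = new Post();
    post.Title = "Hello, in-memory SQLite";
    post.Save();

    Assert.AreEqual(1, Post.FindAll().Length);
  }
}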

Download: ActiveRecordMockConnectionProvider.zip
Includes:

  • Source code for the ActiveRecordMockConnectionProvider class
  • Sample Class that uses the new provider
  • Sample app.config containing the ActiveRecord block using the provider.
  • Compiled versions of ActiveRecordTestHelper.dll

As always, this code is provided with no warranties or guarantees. Use at your own risk. Your mileage may vary.
And thanks again to Brian Genisio.

Thursday, 30 October 2008 09:10:01 (Eastern Standard Time, UTC-05:00)  #    Comments [1] - Trackback

Filed under: Business | Programming

At the devLink Technical Conference, one of the Open Spaces focused on the Computer Science curriculum at universities, and on what the developer community would CRUD on the CompSci tradition. Though I did not have the opportunity to participate in the discussion—I was facilitating an Open Space on Continuous Integration, next door—I do have one proposal: "Writing." For Computer Scientists—a traditionally introverted and communication-challenged group—programming in English (substitute your native language) should be paramount. Communicating with humans is part of our job description, and we must be able to do so effectively, using their language, whether it be for status updates, business justifications, SOWs, proposals, or just another email. Developers need to communicate effectively; write well rather than write good. We must be right like a writer.

We programmers should write like we code. The written word should be concise and to the point, just like code. Coders do not frivolously use fancy namespaces and complicated classes to make their code look smart; that has the opposite effect, resulting in bloated, inefficient, unmaintainable systems. Big words used frivolously for the sake of sounding smart perform the opposite function, and likewise result in bloated, inefficient, unmaintainable text. We avoid power robbers in our code, such as using string builders rather than underperforming string concatenations, and we should avoid similar performance kills in our writing. Additionally, using the wrong keyword or misusing grammatical marks in our code results in compiler errors or runtime errors. The written word—a language void of any compiler benefits—throws runtime exceptions on execution when it is improperly authored, much like JavaScript or XSL.

“Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all sentences short or avoid all detail and treat subjects only in outline, but that every word tell.” — William Strunk

But like a programming language, English is simply a matter of keywords and laws. We must learn the rules of the system; we must learn its syntax; we must learn how to test and validate our code before shipping it off to a client or to production. Approaching the English language like we approach a programming language would also provide an effective learning mechanism for us developer-types. This would make an effective course at university: "Writing : For Programmers." Learn English like we learn any other language—approach it using our virtues—as what works for us may not be the same path that works for an English major.

Throughout my career, I have noticed a few areas that are typically mis-coded. I have included some items below that every developer should be aware of, to help you learn English's keywords and its laws, and to provide opportunities to improve the end-user experience.

Knowing the Language at an Elementary Level

  1. Homonyms : words that sound the same but can mean different things. This is often a challenge for people new to English, but even veterans get confused. Everyone should learn the differences, and make proper use a habit. And proofread, as spell check will not catch misuse of homonyms. This applies to:
    • Their / There / They're
    • To / Too / Two
    • Your / You're
    • Its / It's
    • Hear / Here
    • Threw / Through
    • Write / Right
  2. Irregardless is not a word.
  3. Who vs. Whom : Who, the subject, is doing the acting, and whom, the object, is being acted upon. The shortcut is to use who and whom as you would he and him. (Remember that him and whom both end in "m".) <Who did what to whom for how many jellybeans? He did that to him for five jellybeans.>
  4. Me vs. I : Follow the same rules as for who vs. whom, above. The subject is doing the acting, and the object is being acted upon. Subject pronouns are I, he, she, we, they, and who. Object pronouns are me, him, her, us, them, and whom. <Sally gave me five dollars. I used the money to buy lunch.>
  5. Than vs. Then : A comparison (than) vs. a measure of time (then). In code, than is used when describing a comparison operator; the '>' operator is a greater-than operator. In code, time sequences are an if...then statement. <Apples are better than oranges. I will eat my apple first, then I will eat my orange.>

Knowing the Language at a High School Level

  1. Power Robbers : Never use due to the fact that. It weakens your sentence. Use because.
  2. Contractions : In formal writing (e.g. Proposals, SOWs), avoid contractions. Contractions are for casual writing. Think along the lines of a Debug Build versus a Release Build.
  3. Simply & Obviously : Avoid using simply and obviously; what is simple or obvious to you may be neither to your audience. And if it truly was simple or obvious, then you didn't need to write it at all.
  4. Use one space after a sentence, not two. A PC is not a typewriter. On a typewriter, with its fixed-width fonts, two spaces are preferred, but on a PC, TrueType fonts will properly space "<dot><space>" for you. Don't believe me? Open up any book and look at the spaces between sentences. Microsoft Word has an option to flag this for you under its grammar preferences.
  5. Effect vs. Affect : Effect is the result, affect is the action. Noun and verb. <Poor engine performance is the effect of ignored maintenance. Ignoring maintenance affects engine performance.>

Knowing the Language at a Collegiate Level

  1. Lists : Just like when defining an array, use a comma after every item in a list except the last. This includes before "and." <I like red, white, and blue.>
  2. i.e. vs. e.g. : The first is in other words and the second is for example, though this does not do much for clarity. Essentially, i.e. is a complete, clarifying list, while e.g. is an incomplete list of examples. <I watch the three stooges (i.e., Larry, Curly, and Moe). I like all four-player card games (e.g., Euchre, Spades, and Hearts).>
  3. Semicolons : Semicolons are used to separate two independent yet related clauses that could be broken into separate sentences. <The water is very hot; I hope I don't burn myself.> Also, use semicolons to separate items in a list where the items contain commas. <When the cards were dealt, Jack had a straight; Sally had two nines, two fives, and a queen; and George had a full house.>

Be mindful of your native language. It is the one you use the most, even more than C#, or Ruby, or Java. If you don't already own a copy, pick up The Elements of Style by Strunk & White; if you do own a copy, either keep it at your desk so you can use it, or give it to someone who will. Effectively communicating with humans, using their rules, will help you have better testing, better design, better requirements, and a better job. Become an effective English developer, and it will help you be a more effective developer, overall.

Thursday, 11 September 2008 15:30:33 (Eastern Daylight Time, UTC-04:00)  #    Comments [1] - Trackback

Filed under: ASP.Net | Programming

Many .Net developers with experience in presentation-layer development have prior exposure to this one, but for those who haven’t, I must share: Request.ApplicationPath is evil! Request.ApplicationPath should never be used. Ever. I hate it!

Its misgivings are most commonly exposed when concatenating strings to the end of it to create a URL of some sort.

sURL = Request.ApplicationPath & "/someDirectory/somePage.aspx"

This is evil. It will not work in all situations, due to the property’s inconsistent handling of trailing slashes.

If the application root is in a sub-directory, say “http://server/myRoot/index.aspx”, then the appPath is “/myRoot”. Note: there is no slash at the end, and the above sURL gets a value of “/myRoot/someDirectory/somePage.aspx”.

If the application root is not in a sub-directory, say “http://server/index.aspx”, then the appPath is “/”. Note: there is a slash at the end, and the above sURL gets a value of “//someDirectory/somePage.aspx”. In this case, rather than requesting “http://server/someDirectory/somePage.aspx”, wonderful Internet Explorer requests “http://somedirectory/somePage.aspx”. (I don’t fault IE for this. I commend it. It is actually one of the few cases where IE doesn’t kludge together a band-aid for developer mistakes.)

So, don’t use Request.ApplicationPath. Ever. With no exception. You could make a wrapper method, perhaps MyUtilities.ApplicationPath, that checks whether the return value contains a trailing ‘/’, and if it does, gives it the axe. This would turn your domain-root appPath into an empty string. Use your new method rather than Request.ApplicationPath, and all will be well with the world. But, don’t do it! Don’t make a new method. That just continues the evilness.

Use Page.ResolveUrl("~/someDirectory/somePage.aspx"), or its counterpart Control.ResolveUrl if you want the URL to be relative to a control’s location. This is the only approach you should use. ApplicationPath is evil!
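A minimal sketch of the difference, from a page’s code-behind (SomePage and the paths are hypothetical):

using System;
using System.Web.UI;

public partial class SomePage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Evil: at the domain root, ApplicationPath is "/", producing
        // "//someDirectory/somePage.aspx".
        string evil = Request.ApplicationPath + "/someDirectory/somePage.aspx";

        // Good: ResolveUrl normalizes the application root for you,
        // whether the app lives at the domain root or in a sub-directory.
        string good = ResolveUrl("~/someDirectory/somePage.aspx");
    }
}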

Wednesday, 26 July 2006 08:19:23 (Eastern Daylight Time, UTC-04:00)  #    Comments [4] - Trackback

Filed under: Programming | Tools | Visual Studio

Most of our Visual Studio solutions contain hundreds of files (classes) organized neatly into dozens of folders (namespaces), but despite all of this organization, the vertical content size of the Solution Explorer can get quite large. Finding a particular file when the majority of the tree is expanded is tedious and time-consuming, considering it should be a simple effort of less than five seconds. Fortunately, all of this is solved by the click of a button (assigned to a handy macro).

The most useful macro for Visual Studio that I have ever encountered (and in the running for most useful VS tool, period) is the CollapseAll macro authored by one current and one former colleague, Dennis Burton and Mike Shields. In a quick XP effort, Dennis and Mike created a handy macro that recursively collapses the entire Solution Explorer tree down to just the solution and its projects.

With the tree collapsed, it is easy to find that desired file.

The macro is functional in all versions of Visual Studio for the Microsoft.Net framework, including Visual Studio 2003, Visual Studio 2005, and Visual Studio 2008.

CollapseAll Macro for Microsoft Visual Studio
Dennis Burton & Mike Shields | Published with Permission

Imports System
Imports EnvDTE
Imports System.Diagnostics

Public Module CollapseAll
    Sub CollapseAll()
        ' Get the Solution Explorer tree
        Dim UIHSolutionExplorer As UIHierarchy
        UIHSolutionExplorer = DTE.Windows.Item(Constants.vsext_wk_SProjectWindow).Object()
        ' Check if there is any open solution
        If (UIHSolutionExplorer.UIHierarchyItems.Count = 0) Then
            ' MsgBox("Nothing to collapse. You must have an open solution.")
            Return
        End If
        ' Get the top node (the name of the solution)
        Dim UIHSolutionRootNode As UIHierarchyItem
        UIHSolutionRootNode = UIHSolutionExplorer.UIHierarchyItems.Item(1)
        UIHSolutionExplorer = Nothing
        UIHSolutionRootNode.DTE.SuppressUI = True
        ' Collapse each project node
        Dim UIHItem As UIHierarchyItem
        For Each UIHItem In UIHSolutionRootNode.UIHierarchyItems
            'UIHItem.UIHierarchyItems.Expanded = False
            If UIHItem.UIHierarchyItems.Expanded Then
                Collapse(UIHItem)
            End If
        Next
        ' Select the solution node; otherwise, when you click on the
        ' solution window scrollbar, it will synchronize the open
        ' document with the tree and pop out the corresponding node,
        ' which is probably not what you want.
        UIHSolutionRootNode.Select(vsUISelectionType.vsUISelectionTypeSelect)
        UIHSolutionRootNode.DTE.SuppressUI = False
        UIHSolutionRootNode = Nothing
    End Sub

    Private Sub Collapse(ByVal item As UIHierarchyItem)
        For Each eitem As UIHierarchyItem In item.UIHierarchyItems
            If eitem.UIHierarchyItems.Expanded AndAlso eitem.UIHierarchyItems.Count > 0 Then
                Collapse(eitem)
            End If
        Next
        item.UIHierarchyItems.Expanded = False
    End Sub
End Module

Based on code from Edwin Evans

Here, the macro is so popular that it is a part of our default developer’s build for every new machine, and is conveniently assigned to a toolbar button. The default button icon list contains an Up Arrow (in the Change Button Image menu when customizing the toolbar) that seems quite appropriate. That little button has saved us all from a lot of pain, five seconds at a time.

Wednesday, 31 May 2006 10:19:55 (Eastern Daylight Time, UTC-04:00)  #    Comments [2] - Trackback

Filed under: JavaScript | Programming

Recently, here at the ranch, we have been experiencing a few issues with Internet Explorer’s JavaScript functionality. It’s not that IE is doing anything wrong, but rather that it performs unexpectedly. When opening a new window in JavaScript [window.open()], the height and width dimensions are relative to the size of the canvas / stage / content area (depending on which discipline you come from), commonly referred to as innerHeight and innerWidth. However, when you execute a resize in JavaScript [window.resizeTo()], the height and width dimensions are relative to the size of the entire window, including any chrome, commonly referred to as outerHeight and outerWidth.

Thus, if you create a window (500px x 300px), then resize it to (505px x 305px), the window will actually get smaller, since the chrome adds more than 5 pixels to height and width.

The problem: SCOs (courseware) can be of any size, depending on the specifications, requirements, and whims of the original clients and developers. To keep courses in their most visually-appealing state, we need to size the content window to precise innerHeight and innerWidth pixel dimensions. Due to our site’s design, the link that launches the Course Shell window has no idea what the size of the SCO is. The Course Shell (LMS) knows how big the SCO is, but it is loaded into the Course Shell window, which obviously happens after the window is created and initially sized. We need a way to size the window to content-dimensions after the window is created.

The catch: Internet Explorer has no native support for innerHeight and innerWidth, let alone a way to resize to those dimensions. Furthermore, the difference between the inner- and outer-dimensions due to window chrome can change from OS to OS and from theme to theme. For example, in Windows XP’s default “bubbly” themes, the title and status bars are much taller than in Windows “Classic” themes. Internet Explorer has no native way to identify the difference in inner- and outer-dimensions, making it difficult to resize to an inner size using an outer size command.

I hacked together some (perhaps mediocre, but still effective) javascript to resize a window to its inner dimensions rather than the outer dimensions, regardless of the amount of chrome that the OS adds.

HTML

<div id="resizeReference"
    style="position: absolute; top: 0px; left: 0px; width: 100%; height: 100%; z-index: -1;">
    &nbsp;
</div>

Javascript

function resizeWindow(iWidth, iHeight){
    var resizeRef = document.getElementById('resizeReference');
    // Fallback values, used if the reference DIV is missing: assume the
    // chrome adds roughly 10px of width and 29px of height.
    var iOuterWidth = iWidth + 10;
    var iOuterHeight = iHeight + 29;

    if (resizeRef) {
        // Measure the content area, resize the window to those measured
        // dimensions, then measure again; the difference between the two
        // measurements is the exact size of the window chrome.
        var iPreWidth = resizeRef.offsetWidth;
        var iPreHeight = resizeRef.offsetHeight;
        window.resizeTo(iPreWidth, iPreHeight);
        var iPostWidth = resizeRef.offsetWidth;
        var iPostHeight = resizeRef.offsetHeight;
        iOuterWidth = iWidth + (iPreWidth - iPostWidth);
        iOuterHeight = iHeight + (iPreHeight - iPostHeight);
    }
    window.resizeTo(iOuterWidth, iOuterHeight);
}

The HTML creates a hidden DIV whose height and width match the window’s content area; this provides innerHeight and innerWidth. The JavaScript takes the dimensions of that DIV and resizes the window to those measured dimensions. Since this transfers the inner-dimensions to the outer-dimensions, it reveals the exact size of the window chrome. We can then resize to the input parameter height and width, plus the height and width of the window chrome, which gives us a resize to inner-dimensions.

Execute resizeWindow no sooner than after the closing Body tag, so that the DIV has been rendered.

It is a bit of a hack, but it works for now.

Tuesday, 02 May 2006 10:23:23 (Eastern Daylight Time, UTC-04:00)  #    Comments [3] - Trackback

NAnt hates .Net’s resource files, or .resx. Don’t get me wrong–it handles them just fine–but large quantities of resx will really bog it down.

Visual Studio loves resx. The IDE will automatically create a resource file for you when you open pages and controls in the ‘designer’ view. Back when we still used Visual SourceSafe as our SCM, Visual Studio happily checked the file in and forgot about it. Now, our 500+ page application has 500+ resource files. Most of these 500+ resource files contain zero resources, making them useless, pointless, and a detriment to the build.

This morning, I went through the build log, noted every resx that contained zero resources, and deleted all of these useless files.

The compile time dropped by 5 minutes.
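If you would rather not comb the build log by hand, a minimal sketch like the following can hunt down the empty ones; it assumes a reference to System.Windows.Forms.dll for ResXResourceReader:

using System;
using System.Collections;
using System.IO;
using System.Resources;

class EmptyResxFinder
{
    static void Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : ".";

        foreach (string path in
            Directory.GetFiles(root, "*.resx", SearchOption.AllDirectories))
        {
            int count = 0;
            using (ResXResourceReader reader = new ResXResourceReader(path))
            {
                // Count the resource entries; an empty file has none.
                foreach (DictionaryEntry entry in reader)
                {
                    count++;
                }
            }

            if (count == 0)
            {
                Console.WriteLine(path); // candidate for deletion
            }
        }
    }
}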

Moral of the story: Be wary of Visual Studio. With regard to resx, VS is a malware program that’s just filling your hard drive with junk. If you use resx, great; but if you don’t, delete them all. NAnt will love you for it.

Wednesday, 15 February 2006 11:31:31 (Eastern Standard Time, UTC-05:00)  #    Comments [0] - Trackback

Filed under: ASP.Net | Programming | Tools | Visual Studio

“It compiles! Ship it!”

Microsoft has sent Visual Studio 2005 to the printers. That brings .Net 2.0 to the table in all of its glory. The official release date is still November 7, and though it is available now to all of us MSDN subscribers (though the site is too flooded to ping, let alone download), there is still some question as to whether the media will be ready in time to go in all of the pretty little VS05 boxes at your local Microsoft store.

Friday, 28 October 2005 13:40:03 (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: ASP.Net | Programming | Task Automation

The default settings of NUnit, TestRunner, and Test Driven Development all want different copies of the app.config at different locations. If ProjectName creates ProjectName.dll, then NUnit wants ProjectName.config, TR wants ProjectName.dll.config, and TDD wants TargetDir\ProjectName.dll.config. This is a lot of work to put in the post-build event of every unit test project, and can be even more work when another testing tool comes along that wants yet a new config filename. The best way to manage all of these file copies is through a common post-build event call.

Many probably opt for a NAnt script, but we found that passing in the required paths can sometimes cause NAnt to get confused, and it won’t properly parse the parameter listing. So, we went with a command file, instead.

CopyConfigs.cmd

rem for nunit
copy "%~1App.config" "%~1%~2.config"

rem for testrunner
copy "%~1App.config" "%~1%~2.dll.config"

rem for testdrivendevelopment
copy "%~1App.config" "%~3.config"

VS.Net Post Build Event

call "C:\MyPath\CopyConfigs.cmd" "$(ProjectDir)" "$(ProjectName)" "$(TargetPath)"

VS.Net already includes a series of NAnt-like properties for project names, project directories, target [assembly] filenames, etc.; these come in handy for creating a universal script. Placing the path references in quotes allows for spaces and other characters (except more quotes) in the path. Executing the command file through a call allows us a little more versatility with the argument references (%~1 removes the surrounding quotes from the argument value, allowing us to append a few together without jacking up the subsequent path).

Monday, 08 August 2005 13:48:20 (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Programming | Testing | Tools | Watir

Scott Hanselman is my new hero. He is filling the hole—the one thing preventing Watir from becoming a real competitor in the automated functional test market: script recording. Watch out, Mercury; by creating WatirMaker, Scott is opening the flood gates, and Watir is going to come pouring through.

This changes everything.

I started out my career as a developer, but as I noted in an earlier blog, I get much more enjoyment from breaking things than from building things, so I jumped ship. With my development experience, I can delve into making some rather wicked scripts for QTP, LoadRunner, and lately, Watir. However, my testers don’t share my skill set. My biggest hurdle in ousting QTP and making Watir our standard is the lack of recording; I cannot expect every tester to start coding away in Ruby. It should come as no surprise that when I opened Scott’s blog this morning, I was so excited that I nearly wet myself.

It is a work in progress, but soon Scott hopes to have a fully functional recording tool for Watir. With WatirMaker, my testers can hit a button and start clicking away in IE; the tool will happily watch like a little kid on the sidelines, learning every move. My testers can all adopt Watir with open arms, and we can wave goodbye to that Mercury maintenance contract.

The only thing left to say is: “Scott…thanks!”

Wednesday, 27 July 2005 13:55:50 (Eastern Daylight Time, UTC-04:00)  #    Comments [2] - Trackback

Filed under: ASP.Net | Programming

Scott Hanselman has a good post today about the HttpOnly cookie attribute. It secures the cookie from access via the DOM. “The value of this property is questionable since any sniffer or Fiddler could easily remove it. That said, it could slow down the average script kiddie for 15 seconds.”

Read Scott’s full blog entry.

Here’s the meat-and-potatoes of what Scott came up with; it’s for your global.asax:

protected void Application_EndRequest(Object sender, EventArgs e)
{
    foreach(string cookie in Response.Cookies)
    {
        const string HTTPONLY = ";HttpOnly";
        string path = Response.Cookies[cookie].Path;
        if (path.EndsWith(HTTPONLY) == false)
        {
            //force HttpOnly to be added to the cookie
            Response.Cookies[cookie].Path += HTTPONLY;
        }
    }
}

Friday, 22 July 2005 14:01:58 (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Programming | Usability

Years ago, way back when the web first started to become universally popular–and I’m talking about popular with all demographics, not just geeks like me–there was the 30KB rule, and it was a cardinal sin to break it. The entire request, including your HTML, your images, and everything else your page contained, had to come in under 30 kilobytes. Most homes were surfing the web on a 28.8kbps modem, which pulled 30KB in about 10 seconds. Beyond that 10 seconds, your users were too bored, frustrated, or confused to wait another moment, and were off to pursue the next site in your WebRing. 30KB. That was the limit. It was universally accepted. And, save a few ransom notes on GeoCities, everyone followed it.

Whatever happened to the 30KB rule?

Today, I constantly see pages that are 100KB or more, and those are even compressed responses. Everyone is so caught up in broadband, and in developing on their 100mbps LAN, that they forget about the little guy. What about grandma? My poor grandma still surfs the internet with a good ol’ 57.6kbps modem. And even if she could pull off that speed (US restrictions limit it to 53kbps, max), it would still take 20 seconds to yank that monster through the pipe. My poor grandma shouldn’t have to wait that long.

Oftentimes, if I wade through the muck that makes up the offending request, I see 60KB style sheets, references to JS files that contain functions not even used on the page, big honkin’ JPEGs, useless JavaScript comment blocks that easily pile on a few K, and my nemesis: ViewState. (I hate ViewState. ViewState hates me. It’s a good relationship.)

My company is making an application whose primary audience is a bunch o’ satellite offices stuck in the 20th century. They plug in to dual ISDN, which maxes out at a whopping 128kbps. That’s 16KB/sec for you young folk. A 100KB page will take 7 seconds to pull across that wire. Toss that fact into the 10-second rule, and you only have 3 seconds left to process the request. I have visions of little ones and zeros flying towards the light screaming “We’re not gonna make it!!”

Get a haircut. Trim those bushes. Bring that response size down a few K. Here’s a few ways to tame the beast:

Reduce ViewState

“Remember that ViewState is evil.”

It adds a big, encrypted string to a hidden form variable in your HTML, and this beast gets bigger with every web control that you have. Explicitly turn off viewstate on every control that does not use it, or better yet, turn off viewstate for the entire page. Of course, realize that the not-dot-net crowd is laughing at you while you do it.
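A minimal sketch from a page’s code-behind (myGrid is a hypothetical control); the same can be done declaratively with EnableViewState="false" in the @ Page directive or on individual controls:

protected void Page_Init(object sender, EventArgs e)
{
    // Turn off viewstate for one control that doesn't need it...
    myGrid.EnableViewState = false;

    // ...or, better yet, for the entire page at once.
    this.EnableViewState = false;
}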

Eliminate Comments in Release Code

“Keep your comments to yourself.”

It is great to comment your code. It is fabulous. Every developer should bow to you if you comment your code, because not enough do it themselves. However, unless a comment is in compiled code, or in code-behind that isn’t sent to the client, it has to go. You can keep your comments in the version stored in SVN, VSS, or whatever your favorite source control tool is, but your release code should contain no client-side comments. Your client doesn’t read them, their browser doesn’t care about them, and all they do is slow everything down, so give them the axe. Your network administrator will love you for it, too.

Optimize Images

“Phenomenal, Artistic Imagery, Itty-Bitty Living Space.”

Compress your images. Get them as small as your image editor can get them (small: file size, not small: pixel size) without degrading the image beyond acceptable levels. And if you use a GIF, lower the number of colors (which will lower the number of bytes). When your images get smaller, people get happy.

Eliminate Whitespace

“One Program: One Line of Code.”

This is a cheap trick to squeeze out those last few bytes. If you take a news article that’s 9 pages long, open it up in Notepad, and turn off word-wrap, it becomes one big long line that stretches out forever and is impossible to read. But your browser couldn’t care less. Take out all of the horizontal white space that you use to make your code readable, and then take out all the line breaks to make your HTML one big line, and your browser can’t tell the difference. However, you just chopped off a few more bits.

Enable Compression

“GZip it, and GZip it good.”

If all else fails, and you’ve whittled your pages down as far as you can and they are still big, compress them. Then again, even if they are small, compress them. It will add a few CPU cycles on both ends to compress and decompress the response, but the time lost is greatly outweighed by the time saved in data transfer. Typical compression is around 40%, which takes that 100KB file down to 60KB and saves my grandma nearly 8 seconds. She’ll give you a kiss and squeeze your cheeks for that one.
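Here is a minimal sketch for your global.asax, in the spirit of the rule; it assumes .NET 2.0’s GZipStream (import System.IO.Compression) and skips compression if the browser doesn’t advertise gzip support:

protected void Application_BeginRequest(Object sender, EventArgs e)
{
    HttpApplication app = (HttpApplication) sender;
    string acceptEncoding = app.Request.Headers["Accept-Encoding"];

    // Only compress when the client says it can handle gzip.
    if (acceptEncoding != null && acceptEncoding.IndexOf("gzip") >= 0)
    {
        app.Response.Filter = new GZipStream(app.Response.Filter,
                                             CompressionMode.Compress);
        app.Response.AppendHeader("Content-Encoding", "gzip");
    }
}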

Monday, 18 July 2005 14:10:35 (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Programming | Testing | Tools | Watir

In case you haven’t heard of it yet, Watir is the greatest thing to hit automated functional testing since…well…ever. Watir (pronounced “water”), or Web Application Testing In Ruby, is an open source automated functional testing tool powered by Ruby. My company has been living off QuickTest Pro, and it is not much of a leap to Watir. Much like QTP, it automates an instance of Internet Explorer and navigates its way around your web site. However, unlike QTP, it doesn’t hijack your computer when you do it; with Watir, the IE window doesn’t have to be the foreground window, so you can get something else done while your test is executing. Watir also allows various checks, much like QTP, but through programming it is capable of checking much more, such as object hierarchy or object style. (Yes, Watir can make sure that your validation messages are red!)

Your money manager will love Watir, too. Our switch from QTP will save us thousands of dollars per year from Mercury’s annual support costs. For a moment, I think our company president’s pupils turned to dollar signs like a cartoon.

If you are like me and spend your QTP days in ‘Expert’ view (source code), you will pick Watir up quickly. I even find it better than QTP. Additionally, since a test is just a source code file, edited in Notepad if you like, it can be stored in your favorite source control application AND (this is a big ‘and’) your developers can execute the automated tests themselves without proprietary software like QTP. Watir’s easy integration with NUnit will also tie your automated functional tests in with applications like NAnt and CruiseControl.

More Information

Read all about Watir.
Read Bret Pettichord’s (a Watir creator) blog entry about Watir.

Friday, 15 July 2005 14:19:59 (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Programming

I like to think that my development team is full of competent and capable people, and not one of them was aware of this: Internet Explorer has a limitation on the number of cookies per domain (MSDN Reference).

From: “Number and size limits of a cookie in Internet Explorer”
http://support.microsoft.com/default.aspx?scid=kb;en-us;306070

Microsoft Internet Explorer complies with the following RFC 2109 recommended minimum limitations:

  • at least 300 cookies
  • at least 4096 bytes per cookie (as measured by the size of the characters that comprise the cookie non-terminal in the syntax description of the Set-Cookie header)
  • at least 20 cookies per unique host or domain name

We recently started having random authentication problems with our eLearning platform. It turns out that our application, plus everyone’s favorite Single Sign-On, plus SCORM, plus courseware created by third-party vendors, created enough cookies to blow the top off the cookie jar. IE can only handle 20 cookies per domain. Create a 21st cookie, and the oldest cookie is given the axe, which is generally an authentication cookie, a session ID, or some other very important cookie (as the ‘elders’ usually are).

So, be aware of your cookie jar. Monitor the number of existing client-side cookies in use when testing that new application. Harass other developers if they start using too many. Keep your hands out of the cookie jar!
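One way to keep the count down is to consolidate related values into a single multi-valued cookie rather than doling out one cookie per value. A minimal sketch, using ASP.NET’s HttpCookie (the names here are hypothetical):

// Write several related values as one cookie instead of three.
HttpCookie appCookie = new HttpCookie("MyApp");
appCookie.Values["theme"] = "blue";
appCookie.Values["lastCourse"] = "SCO-101";
appCookie.Values["fontSize"] = "large";
Response.Cookies.Add(appCookie);

// Read a sub-value back out on a later request.
HttpCookie stored = Request.Cookies["MyApp"];
if (stored != null)
{
    string theme = stored.Values["theme"];
}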

Oh, and don’t forget to encrypt them (but that’s a different post topic).

Friday, 01 July 2005 14:21:40 (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Business | Mush | Programming

It’s not all about Internet Explorer anymore. Yet I am surprised at the number of web houses still coding specifically to IE. Much to my dismay, even my own company does it. Though we have a little bit of an excuse—our client only supports IE within their organization, and the app is internal—it still bothers me that we are abandoning everyone else.

New figures released a week ago place IE’s market share at 89%. That means more than 1 in 10 users are not using IE. (Read the Article) By coding specifically to Microsoft, you are abandoning 11% of your potential users. That is astonishing and disturbing.

Pay particular attention to Firefox. Its user-base is growing exponentially, and doubling every 9 months. I’m a fan of the application. It is much easier to use than IE, and much more solid. I’ve converted all of my friends and almost all of my family. I even have my in-laws using Firefox. (Get Firefox)

As the IE behemoth continues to fall, you and your organization should be paying more and more attention to standards and multiple-browser testing. Check that your HTML is compliant, and test your sites in at least IE and Firefox, if not others. Don’t force your users to use a particular browser; chances are that if they can, they will just go somewhere else for their information.

Friday, 20 May 2005 13:34:48 (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Programming | Testing

So your wonderful little creation is finished, and it does exactly what it was designed to do. But, have you prevented it from doing what it’s not supposed to do?

Enter the forgotten art of negative testing. This is the safeguard against user error, malicious attacks, and blatant developer oversight. Negative testing is taking your calculator application and trying to add “Hello” and “Goodnight”. Negative testing is trying to supply an invalid email address–.anything@something.q–to your mailing list form. Negative testing is trying to cause a buffer overflow on your lead developer’s computer because you were able to sneak in a script injection.

The key word here is “try.”

If everyone has done their job, you will get nowhere. Unfortunately, rarely is this job done right. In 3 minutes I could considerably alter my best friend’s blog, and he doesn’t even know it. In 10 minutes I could corrupt the online database of a Fortune 500’s web site–both company and URL to remain anonymous. And, what scares me the most, in 20 minutes I could download the entire database of a certain benefits company, including the complete identity–SSN included–of a few thousand people.

For years, I have been paid to break things as much as to build them. When that calculator finally adds 2 and 2 correctly, don’t be satisfied. Try to add “Hello” and “Goodnight”. Will it give you a neatly handled error message informing you that it couldn’t complete the procedure, or will it throw a fatal exception and die a miserable death because it expected a Double and you gave it a String? Optimally, it shouldn’t even allow you to type characters into the input area unless you are working in hex; even then, only A-F.
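To make the idea concrete, here is a minimal sketch of the calculator scenario as an NUnit test; the Calculator class is hypothetical:

using System;
using NUnit.Framework;

// Hypothetical system under test: parses its string inputs as doubles.
public class Calculator
{
    public double Add(string left, string right)
    {
        return Double.Parse(left) + Double.Parse(right);
    }
}

[TestFixture]
public class CalculatorNegativeTests
{
    // Positive test: the happy path.
    [Test]
    public void AddsTwoAndTwo()
    {
        Assert.AreEqual(4, new Calculator().Add("2", "2"));
    }

    // Negative test: "Hello" plus "Goodnight" should fail predictably
    // with a handled exception, not die a miserable death.
    [Test]
    [ExpectedException(typeof (FormatException))]
    public void RejectsNonNumericInput()
    {
        new Calculator().Add("Hello", "Goodnight");
    }
}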

If instructions tell you to do one thing, enter the opposite. If you see a value in the URL, change it. If a field asks for an integer between 0 and 5, try 0, 2, 5, -1, 9, 3.5, and “Q”, and see how it handles the unexpected input. If a querystring is “?UserID=6”, change the 6 to a 7 to see if you get information on User 7, and try invalid values like 3.5 and “Q” to see if it fails gracefully. If a client-side cookie has a value of “User”, try changing it to “Admin” or “Administrator” and see if your access level is increased.

Find the weaknesses, find the holes, and find the bugs so that they can get fixed. You are the demolition man. You get paid to blow things up. Do it. Do it with purpose. Pretend you are a hacker trying to get into the system. Pretend you are a teenage hacker-wannabe trying to screw with the system. Pretend you are a grandma who doesn’t know what to do with the system. Do all of the things that you aren’t supposed to do to the application, and do them on purpose, because whether by ignorance or intelligence, your users will find what was missed.

Tuesday, 10 May 2005 14:55:41 (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Programming | Usability

All too often, we forget about usability. We get so caught up in fixing functionality and enhancing performance that we forget about the most important part: how easy it is to use this thing we have just created. Sure, that new super-gigantic Humvee will go in any direction you want, can climb a 21-inch vertical, and can pull small houses with ease, but who cares? It drools at the sight of a gas station, it is impossible to parallel park, and most importantly, how do you expect grandma to climb in and out of that thing?

What would grandma do?

Imagine that poor old lady trying to raise her little leg up onto the running board, and then pull herself up into the cab with those little arms. She’s a GRANDMA! She isn’t 20 anymore. Or 50, for that matter. We forget about grandma in our software testing, too. How would she use that application you just made? How would she react to that detailed error message your creation just spit out?

My poor father; he just got his first computer, and I’ve been trying to teach him how to do instant messaging. He knows enough about Windows XP to be familiar with the big ‘X’ in the upper-right corner. Click it and everything goes away. But Trillian is different. In the upper-left of the contact list is an upside-down triangle that minimizes the window to the system tray. Right below that is a small ‘x’. Unfortunately, that ‘x’ removes your contacts from within your Trillian window. You have to play around in your ‘View’ menu to get the list to come back again. However, my father doesn’t know upside-down triangles, and he certainly doesn’t know about the ‘View’ menu yet. He just knows the ‘x’. So that’s what he does. He clicks the little ‘x’. And every time he does, his contacts go *poof*, and he has to call me to help him get them back. My grandmother would do the same thing. I don’t think Trillian did any usability testing on that feature.

What would grandma do? You know that she’s going to want to click the ‘x’, no matter what, because the ‘x’ is what she knows, just like my father. So why not make the upside-down triangle an ‘x’? It can still minimize to the system tray. The ‘x’ isn’t a cast-in-stone rule that the application must quit altogether. If you don’t believe me, try the ‘x’ on your MSN Messenger window. It minimizes to the system tray. Why did Microsoft break their own tradition? I bet what they really did was a little usability testing, and discovered that new users always want to click the ‘x’. To new users, the ‘x’ is a big “CLICK HERE” sign to make that window go away. They don’t care if it closes; new users just want it to go away. And if she were still around, Violet–my grandmother–would always be a new user when it came to computers. Just make the window go away. Like clearing the dishes after dinner: it didn’t matter if you threw the plate out, just get it off the table.

So, we have this problem. Now, what do we do about it? Ask yourself: What would grandma do? “Fatal error: Userdata insert failed. Connection to database unavailable. \\jedimaster\yoda\greenlightsaber\sqlserver2000 not found.” If she saw that, what would grandma do? Stare blankly at the computer? “Unable to save your contact information. Please try again later.” Grandma can understand that. So, think of your grandma when you test that new application. Think of your grandma when you write your error messages. Think of grandma when you draw pretty graphics or design a button icon. Your program will be much more friendly, and much easier to use. Even grandma could use it. If you need help remembering, put a picture of grandma on your desk at work, right next to your monitor. And if you don’t have a grandma, substitute that sweet old lady down the street who bought all of your raffle tickets when you were 12 and baked you cookies because you were such a good little kid.

Remember grandma.
What would grandma do? She’d tell you that she’s proud of you, because that’s what grandmas do.

Saturday, 07 May 2005 08:53:33 (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback

Filed under: Programming

Imagine a world where no one had a name. Instead of John Doe, you would know “the 185lb, long-black-haired, 5-foot-11 guy that lives on the corner of 43rd and 5th.” What would happen if that guy got a crew cut? To his close friends, he would now be “the 185lb, short-black-haired, 5-foot-11 guy that lives on the corner of 43rd and 5th.” But not-so-close acquaintances, ex-girlfriends, the florist, the cabbie, and the IRS would all know him by his former name. Few would know this new guy. There would be confusion when one group tried to talk to the other about him. What would happen if he gained a few pounds, too, and moved over to 54th? No one would know who he was anymore.

That’s why we have names. They are a constant in a dynamic life. They are the dependable value that gives the world security when all else changes. If John dyed his hair blue, moved to Phoenix, and got a sunburn, people would still know who he is. He’s John Doe.

ID attributes do for web objects what names do for humanity. In a world where static web sites are the ones your 12-year-old makes on Geocities, automated QA tools need a little help. Do you have a link to your favorite news site? Today, your automated tool can find that link to <a href="http://www.cnn.com">CNN.com</a>, but tomorrow’s <a href="http://www.msnbc.com">MSNBC</a> link is lost; the tool is still looking for CNN. So, give the link an ID. That’s its John Doe. The tool can find your link whether it is <a id="favNewsLink" href="http://www.cnn.com">, <a id="favNewsLink" href="http://www.msnbc.com">, or any other link that floats your boat. All it has to do is look for the name. favNewsLink. John Doe.

Be nice to your automated tools: give your web objects a name. Rename your ‘Comments’ link to ‘Feedback’? John Doe. Change the image of your dog to an image of your cat? John Doe. Multilingual site with translated text? John Doe. IDs will help you automate your QA process, and you will spend less time retooling your automation scripts and more time downing that Corona. Just don’t forget the lime.

Happy Cinco de Mayo.

Thursday, 05 May 2005 15:01:29 (Eastern Daylight Time, UTC-04:00)  #    Comments [0] - Trackback