Implementing 'Keep Me Signed In' in Windows Identity Foundation

04 February 2014

A common feature of website authentication is the ‘Remember me’ or ‘Keep me signed in’ option. It is not built into Windows Identity Foundation. The easiest workaround is to make all Relying Party cookies session cookies, meaning they expire when you close the browser; when you navigate back to the Relying Party you’ll be sent to the STS, automatically logged in and sent back. This can be a pain for a number of reasons, so it’s ideal if we can set up the Relying Party cookies to match the STS. I’ll show how it can be implemented using claims as the means of communication between the STS and the Relying Party.

The STS setup

To communicate whether or not the user wants to be remembered, we’re going to use claims. Specifically, we’ll use two existing claims from the Microsoft.IdentityModel.Claims namespace: IsPersistent and Expiration. To do so, first add the claims to the FederationMetadata XML so you see something like this:

<auth:ClaimType xmlns:auth="http://docs.oasis-open.org/wsfed/authorization/200706" Uri="http://schemas.microsoft.com/ws/2008/06/identity/claims/ispersistent" Optional="true">
   <auth:Description>If subject wants to be remembered for login.</auth:Description>
</auth:ClaimType>
<auth:ClaimType xmlns:auth="http://docs.oasis-open.org/wsfed/authorization/200706" Uri="http://schemas.microsoft.com/ws/2008/06/identity/claims/expiration" Optional="true">
   <auth:Description>How long before the persistent session cookie should expire</auth:Description>
</auth:ClaimType>

As the descriptions state, we’ll use the IsPersistent claim to communicate whether the user wants to be kept logged in, and the Expiration claim to communicate the session expiration when IsPersistent is true.

The last step on the STS is to set the claims on the user’s principal. Update the IClaimsPrincipal creation code to specify the two new claims.

public static IClaimsPrincipal CreatePrincipal( UserModel user, bool rememberMe )
{
  if ( user == null )
  {
    throw new ArgumentNullException( "user" );
  }

  var claims = new List<Claim>
  {
    // ... your other claims go here
    new Claim( ClaimTypes.IsPersistent, rememberMe.ToString() ),
    new Claim( ClaimTypes.Expiration, TimeSpan.FromDays( DEFAULT_COOKIE_EXPIRATION_IN_DAYS ).ToString() )
  };

  var identity = new ClaimsIdentity( claims );

  return ClaimsPrincipal.CreateFromIdentity( identity );
}

The two steps above ensure that the STS communicates the necessary information to the Relying Party for it to set up its session to mirror the STS session.

Relying Party setup

On the Relying Party side we have to override the default WIF behavior for the session expiration and set it manually based on the claims we set in the STS. We do that by handling the SessionSecurityTokenCreated event. Place the following code in the Global.asax of the Relying Party.

// This method does not appear to be used, but it is.
// WIF detects it is defined here and calls it.
// Note: Do not rename this method. The name must exactly match or it will not work.
[System.Diagnostics.CodeAnalysis.SuppressMessage( "Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode" )]
void WSFederationAuthenticationModule_SessionSecurityTokenCreated( object sender, SessionSecurityTokenCreatedEventArgs e )
{
  bool isPersistent = false;
  string expirationAsString = null;
  try
  {
    isPersistent = ClaimsHelper.GetClaimValueByTypeFromPrincipal<bool>( e.SessionToken.ClaimsPrincipal, ClaimTypes.IsPersistent );
    expirationAsString = ClaimsHelper.GetClaimValueByTypeFromPrincipal<string>( e.SessionToken.ClaimsPrincipal, ClaimTypes.Expiration );
  }
  catch ( ClaimParsingException )
  {
    Trace.TraceWarning( "Failure to parse claim values for ClaimTypes.IsPersistent and ClaimTypes.Expiration. Using session cookie as a fallback." );
  }
  catch ( ClaimNullException )
  {
    Trace.TraceWarning( "Expected claim values for ClaimTypes.IsPersistent and ClaimTypes.Expiration but got null. Using session cookie as a fallback." );
  }

  TimeSpan expiration;
  if ( isPersistent && TimeSpan.TryParse( expirationAsString, CultureInfo.InvariantCulture, out expiration ) )
  {
    DateTime now = DateTime.UtcNow;
    e.SessionToken = new SessionSecurityToken( e.SessionToken.ClaimsPrincipal, e.SessionToken.Context, now, now.Add( expiration ) )
    {
      IsPersistent = true
    };
  }
  else
  {
    e.SessionToken = new SessionSecurityToken( e.SessionToken.ClaimsPrincipal, e.SessionToken.Context )
    {
      IsPersistent = false
    };
  }
  e.WriteSessionCookie = true;
}

The important part is at the end. We create a new SessionSecurityToken based on the values of the claims and overwrite the default WIF security token with it. This gives us either a session cookie or a cookie with an expiration that matches the STS value, which is exactly the ‘Keep me signed in’ behavior we wanted.
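ClaimsHelper above is a small helper of my own, not part of WIF. As a rough sketch of what such a helper might look like (the implementation and the two exception types are assumptions based on how it’s used above, not the exact code):

// Requires System.Linq, System.Globalization and Microsoft.IdentityModel.Claims
public class ClaimNullException : Exception { }

public class ClaimParsingException : Exception
{
  public ClaimParsingException( Exception inner ) : base( "Failed to parse claim value.", inner ) { }
}

public static class ClaimsHelper
{
  // Finds the first claim of the given type on the principal and converts its value to T
  public static T GetClaimValueByTypeFromPrincipal<T>( IClaimsPrincipal principal, string claimType )
  {
    var identity = (IClaimsIdentity)principal.Identity;
    Claim claim = identity.Claims.FirstOrDefault( c => c.ClaimType == claimType );
    if ( claim == null || string.IsNullOrEmpty( claim.Value ) )
    {
      throw new ClaimNullException();
    }

    try
    {
      return (T)Convert.ChangeType( claim.Value, typeof( T ), CultureInfo.InvariantCulture );
    }
    catch ( FormatException ex )
    {
      throw new ClaimParsingException( ex );
    }
  }
}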


Update Web.Config in an Azure Cloud Service package

09 August 2013

Windows Azure deployments are done using two convenient files: a .cspkg and a .cscfg. The .cscfg is an XML config file and the .cspkg is essentially a zip file that contains your application code. This means you can build once and deploy to different environments by providing a different version of the .cscfg, making continuous deployment simple: keep the .cspkg file around and deploy it anywhere.

Problems arise when you need to modify something inside the .cspkg, such as the Web.Config for your web application. A common scenario where this is necessary is configuring Windows Identity Foundation to update a trusted issuer thumbprint or federation realm. Options to fix the problem are to create a new package for each environment or to create the package on demand as part of the deploy process. Microsoft has provided a way to create packages manually, but it’s complicated to set up and involves duplicating a lot of work that’s already done for us in the MSBuild tasks for the cloud project.

The fix

An alternate approach I’ve had success with is to modify the Web.Config on role start in your web project, based on values stored in the .cscfg configuration file. To do this, copy the Web.Config in your project and rename the copy to Web.Config_pretransform or something similar. Also stop tracking the Web.Config in your source control, since it will be generated as needed (but make sure the project still has a reference to it). Next, add code to your WebRole.cs to do the file modification like so:

public override bool OnStart()
{
  UpdateConfigs();
  return base.OnStart();
}

Fill in the UpdateConfigs method with code that locates the site on disk using Microsoft.Web.Administration.ServerManager and copies the Web.Config_pretransform into place.

private void UpdateConfigs()
{
  using ( var server = new ServerManager() )
  {
    Site site = server.Sites[RoleEnvironment.CurrentRoleInstance.Id + "_Web"];
    string physicalPath = site.Applications["/"].VirtualDirectories["/"].PhysicalPath;
    string inputWebConfigPath = Path.Combine( physicalPath, "web.config_pretransform" );
    string outputWebConfigPath = Path.Combine( physicalPath, "web.config" );

    File.Copy( inputWebConfigPath, outputWebConfigPath, overwrite: true );

    SetWIFWebConfigSettings( outputWebConfigPath );
  }
}

The code above grabs the physical path of the website from IIS and passes it off to the SetWIFWebConfigSettings method. This method can then parse and update the Web.Config using your favorite XML parser. Finally, the code below shows how to update the realm attribute with a value from the .cscfg and force requireHttps on:

private static void SetWIFWebConfigSettings( string webConfigPath )
{
  var doc = XDocument.Load( webConfigPath );
  var wifConfig = doc.Descendants( "microsoft.identityModel" ).Single();

  var wsFederation = wifConfig.Descendants( "federatedAuthentication" ).Single()
    .Descendants( "wsFederation" ).Single();
  wsFederation.SetAttributeValue( "realm", CloudConfigurationManager.GetSetting( "WifWsFederationRealm" ) );
  wsFederation.SetAttributeValue( "requireHttps", "true" );

  doc.Save( webConfigPath );
}

  • It’s important to note that Azure will not actually put our instance on the load balancer until after the OnStart method has finished, which is what allows us to make these modifications.
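For reference, the realm value lives in the service configuration. A minimal sketch of the relevant .cscfg fragment (the role name WebRole1 and the realm URL are placeholders, and the setting must also be declared in the .csdef):

<ServiceConfiguration serviceName="MyCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <!-- Read at role start via CloudConfigurationManager.GetSetting( "WifWsFederationRealm" ) -->
      <Setting name="WifWsFederationRealm" value="https://myapp.example.com/" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>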

One last thing we need to do is make this work locally as well. An easy fix is to copy the Web.Config_pretransform to the Web.Config location prior to building the project, if the Web.Config doesn’t already exist.

   <PreBuildEvent>if not exist "$(ProjectDir)\Web.config" (copy /Y "$(ProjectDir)\Web.config_pretransform" "$(ProjectDir)\Web.config") else (echo web.config already exists in $(ProjectDir), skipping)</PreBuildEvent>

That’s all we need to do to modify the Web.Config on role start in an Azure Cloud Service. It lets us keep all environment-specific settings in the .cscfg, which means we can deploy one package to any environment.


Emulate The Visual Studio Command Prompt In PowerShell

06 June 2013

The Visual Studio Command Prompt provided when you install Visual Studio adds lots of useful commands to the PATH of the current prompt. You don’t have to remember where msbuild.exe or mstest.exe are located; you just call the commands. However, Visual Studio only ships with a regular command prompt. There is no corresponding PowerShell prompt.

All hope is not lost for PowerShell fans though. The Visual Studio Command Prompt works by calling the vcvarsall.bat file that ships with Visual Studio to update the PATH for the current session. By modifying the default PowerShell profile, we can take advantage of that same script to update the PATH for the PowerShell session when PowerShell starts.

First we need to open the PowerShell profile file for editing. This file is located at ~\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1. If you already have PowerShell open, you can call start $PROFILE from the prompt; this command will open the profile for you in your default editor.

Next paste these lines at the end of the file.

# Move to the directory where vcvarsall.bat is stored
pushd 'C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC'

# Call the .bat file to set the variables in a temporary cmd session, use 'set' to
# read out all session variables, and pipe them into a foreach to iterate over each one
cmd /c "vcvarsall.bat&set" | foreach {
  # If the line is a session variable
  if( $_ -match "=" ) {
    $pair = $_.split("=");

    # Set the environment variable for the current PowerShell session
    Set-Item -Force -Path "ENV:\$($pair[0])" -Value "$($pair[1])"
  }
}

# Move back to wherever the prompt was previously
popd

Save the file and open a new PowerShell prompt. You should have a fully functioning Visual Studio PowerShell prompt.

Finally, if you don’t want to load the Visual Studio variables for every prompt, move the lines above to a file in your path and dot-source that file whenever you want the variables: . thefilethathasthescriptabove.ps1

  • Edit 6/7/2013 - Corrected typo in Set-Item line.


Why I Use Test Driven Development

15 May 2013

Test Driven Development (TDD) is one of the more polarizing techniques in software engineering. People either love it or hate it, often without ever actually trying it. I feel it’s a useful tool in my programming toolbox, helpful for a multitude of reasons, but it is not the silver bullet of software development. Some of the reasons I practice Test Driven Development are:

TDD guides decisions and helps to develop sufficiently architected code

TDD encourages “good enough” solutions. When you’re focused on doing just enough to make the tests pass, you aren’t going to create complex, over-architected solutions. Instead you’ll end up with a clean design that you know works for at least one client: the tests.

A common complaint about TDD is actually the opposite of my point: that it results in poorly designed code. It’s easy to fall into the trap of writing a failing test, making it pass, then continuing on, ignoring the Refactor step. Left alone, this technical debt can fester and create a mass of poorly designed code. However, if the developer is diligent in refactoring, I believe the design will come out as good as or better than one designed completely up front. You develop an architecture that makes sense to the client (the tests) and fits the context.

The biggest problem for me when doing Test Driven Development is the Refactor step. I am often tempted to skip it or to make weak attempts at refactoring. To truly reap the benefits of TDD, you must be diligent in this step.

TDD encourages decoupled code

The process of Test Driven Development encourages the developer to defer work to interfaces and then mock the outputs of those interface functions to test a given method in isolation. Doing so results in decoupled code that is flexible enough to accommodate future change. However, care must be taken to craft those interfaces in a logical way that will make sense to future developers. Oftentimes it is hard to name these pieces, and personally this is where I end up spending a fair bit of time. Others will spend more time reading my code than I spent writing it, so I want to be careful and name things well. Additionally, with refactoring tools like ReSharper, the cost of changing a name several times as I get a feel for what the object is doing is minimal.
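To make that concrete, here is an illustrative sketch (all names invented for this example, using NUnit): the class under test receives its collaborator through an interface, and the test supplies a hand-rolled fake so the logic can be exercised in isolation.

// Requires NUnit (NUnit.Framework); all types here are hypothetical examples
public interface IClock
{
  DateTime UtcNow { get; }
}

// The class under test defers "what time is it" to the interface...
public class SessionPolicy
{
  private readonly IClock _clock;

  public SessionPolicy( IClock clock ) { _clock = clock; }

  public bool IsExpired( DateTime issuedUtc, TimeSpan lifetime )
  {
    return _clock.UtcNow > issuedUtc + lifetime;
  }
}

// ...so a fake can pin the clock's output, no mocking framework required
public class FixedClock : IClock
{
  public DateTime UtcNow { get; set; }
}

[TestFixture]
public class SessionPolicyTests
{
  [Test]
  public void IsExpired_ReturnsTrue_WhenLifetimeHasPassed()
  {
    var clock = new FixedClock { UtcNow = new DateTime( 2013, 5, 15, 12, 0, 0, DateTimeKind.Utc ) };
    var policy = new SessionPolicy( clock );

    Assert.IsTrue( policy.IsExpired( clock.UtcNow.AddHours( -2 ), TimeSpan.FromHours( 1 ) ) );
  }
}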

TDD allows testing in isolation

Writing tests lets you test classes in isolation, which makes for a quicker testing loop. Rather than firing up the entire application in a debugger and stepping through the code, you can just write a test that exercises the code to prove it works. This is especially useful for logic-heavy classes where common errors like bounds checking or parsing problems can be a pain to test manually. With TDD, less time spent in the debugger often translates to quicker development.

TDD gives some confidence that new changes have not broken the system

Finally, test-first development can provide protection against many (not all) regression bugs. Because most of your code has test coverage, many regression problems will be found and resolved early in the development cycle, usually before they are even committed. The quicker those bugs are found, the easier and cheaper they are to fix. However, it is not a perfect system and will not prevent all regression bugs, nor should it be relied on to do so. TDD should be coupled with full system tests that verify things actually work from a user’s perspective in order to have sufficient confidence the code does what it should.

TDD forces you to understand what you’re doing before doing it

Writing the tests first requires you to know how the code works and how your new feature fits into the existing code. This can feel like a burden at first, but in practice it ends up being a good thing. Without tests, the temptation is to hack in the new feature and ship it. With tests you can’t really do that; TDD requires you to know what test you want to write next.

A common practice to help figure out where you’re going is “spiking”. Ignoring tests, you implement as much of the feature as you need to get a feel for what needs to be written and where the code should go. Then you throw that work away and rewrite it using TDD. This second pass goes much quicker than the first because you have a good idea of what tests to write and where you are headed. Both give you a better understanding of the code, leading to more robust code with fewer regression bugs.


Lastly, I want to reiterate that TDD is not the perfect solution to every programming problem, and I’ll happily abandon it when the situation calls for it. If I’m working on a prototype or playing with a new language, I likely won’t use TDD; I value speed and flexibility in those situations more than the benefits TDD provides. Another factor in the decision to use TDD is language. Some languages make TDD easier than others: C# and Ruby, for example, have established testing support and frameworks, while languages like C++ or Objective-C are more difficult to test because of the constraints of the language. Regardless, TDD is a skill developers should try for themselves. The benefits vastly outweigh the costs in many cases, and it’s a great skill to have simply because it makes you look at programming a little differently.


Storing TimeSpan Properties with EntityFramework Code First

30 April 2013

Entity Framework 5 Code First does a good job of selecting a corresponding SQL column type for most C# primitives. However, the type it chooses for TimeSpan properties can cause problems: it picks the Time type, which can only store values up to 24 hours. If your TimeSpan needs to store more than 24 hours, you need to choose a different option.

The strategy I’ve found most useful is to store the data as ticks in a BIGINT column. You can achieve this by using the code below:

[NotMapped] // Exclude the TimeSpan itself from the EF mapping (or use the fluent API's Ignore)
public TimeSpan TimeToCompleteForm { get; set; }

// Mapped to a BIGINT column holding the duration in ticks
public long TimeToCompleteFormTicks
{
  get { return TimeToCompleteForm.Ticks; }
  set { TimeToCompleteForm = TimeSpan.FromTicks( value ); }
}

In SQL you can query this value as raw ticks or convert it to a readable string in the format 'dd.hh:mm:ss:ms'. A tick is 100 nanoseconds, so dividing by 10,000 converts ticks to milliseconds, which is what the following query does:

SELECT CONVERT(VARCHAR, DATEPART(DAY,DATEADD(ms, TimeToCompleteFormTicks/10000, 0))) + '.' + CONVERT(VARCHAR, DATEADD(ms, TimeToCompleteFormTicks/10000, 0), 114)

Finally, if you prefer to represent the value in milliseconds instead of ticks, the code above requires two tweaks: use the TotalMilliseconds property of the TimeSpan instead of Ticks, and use the FromMilliseconds method instead of FromTicks to convert the incoming milliseconds back to a TimeSpan.
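A quick sketch of that variant (the property name here is invented; note the cast, since TotalMilliseconds returns a double):

[NotMapped]
public TimeSpan TimeToCompleteForm { get; set; }

// Mapped to a BIGINT column holding whole milliseconds
public long TimeToCompleteFormMilliseconds
{
  get { return (long)TimeToCompleteForm.TotalMilliseconds; }
  set { TimeToCompleteForm = TimeSpan.FromMilliseconds( value ); }
}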

