Full Framework WSFederation to OWIN Conversion

14 May 2018

If you have been using WSFederation in a .NET web application for more than a year or two, chances are it is configured using the Microsoft.IdentityModel.Web or System.IdentityModel.Services libraries. Two HTTP modules, WSFederationAuthenticationModule and SessionAuthenticationModule, are added to the application to handle the WSFederation protocol, and configuration is done either by inheriting from those classes or at application start via the web.config. In newer versions of ASP.NET, however, the preferred approach is OWIN "middleware", which works in both full framework applications and .NET Core. The purpose of OWIN is to abstract the underlying web server from the web application; HttpModules are tightly coupled to System.Web and therefore to the IIS web server. Using OWIN does require some configuration and setup changes, which I will detail in this post.

Basic OWIN setup

First, if you don’t already have OWIN configured for your application, install the Microsoft.Owin and Microsoft.Owin.Host.SystemWeb NuGet packages. Then add a startup class like the one below to your application:

using System;
using System.Threading.Tasks;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(OwinApp.Startup))]
namespace OwinApp
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
        }
    }
}

Convert SessionAuthenticationModule into OWIN configuration

Once OWIN is installed, we can begin configuring WSFederation. Previously, a SessionAuthenticationModule subclass would have been customized to set up properties for the cookie that will store session information:

public class CustomSessionAuthenticationModule : SessionAuthenticationModule
{
  protected override void InitializePropertiesFromConfiguration()
  {
      CookieHandler.RequireSsl = true;
      CookieHandler.Name = "FederatedAuthCookie";
  }
}

and configured as an HTTP module in the web.config:

<modules>
  <add name="SessionAuthenticationModule" type="MyApp.CustomSessionAuthenticationModule, MyApp" preCondition="managedHandler" />
</modules>

In the OWIN pipeline, we’ll configure the cookie using CookieAuthentication classes and helper methods.

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseCookieAuthentication( new CookieAuthenticationOptions
        {
          // converted from the CookieHandler.Name = "FederatedAuthCookie"; line in SessionAuthenticationModule
          CookieName = "FederatedAuthCookie",
          // converted from the CookieHandler.RequireSsl = true; line in SessionAuthenticationModule
          CookieSecure = CookieSecureOption.Always
        } );
    }
}

Convert WSFederationAuthenticationModule into OWIN configuration

Next, we’ll convert our custom WSFederationAuthenticationModule to use the WsFederationAuthenticationMiddleware from the OWIN pipeline.

public class CustomWsFederationAuthenticationModule : WSFederationAuthenticationModule
{
  protected override void InitializeModule( HttpApplication context )
  {
      base.InitializeModule( context );

      RedirectingToIdentityProvider += OnRedirectingToIdentityProvider;
  }

  protected override void InitializePropertiesFromConfiguration()
  {
      Issuer = InstanceWideSettings.BaseStsUrl;
  }

  private void OnRedirectingToIdentityProvider( object sender, RedirectingToIdentityProviderEventArgs args )
  {
      // setting the realm in the OnRedirecting event allows it to be dynamic for multi-tenant applications
      args.SignInRequestMessage.Realm = Settings.BaseUrl;
  }
}

The code above is removed and replaced with the UseWsFederationAuthentication helper below:

public void Configuration(IAppBuilder app)
{
   ...
   app.UseWsFederationAuthentication( new WsFederationAuthenticationOptions
   {
     // Pulls in STS Url and other metadata (like signing certificates)
     MetadataAddress = Settings.StsMetadataUrl,
     Notifications = new WsFederationAuthenticationNotifications
     {
         // replaces the OnRedirectingToIdentityProvider event
         RedirectToIdentityProvider = notification =>
         {
           notification.ProtocolMessage.Wtrealm = Settings.PresentationUrlRoot;
           return Task.FromResult( 0 );
         }
     },
     // Name this authentication type (for WIF)
     AuthenticationType = WsFederationAuthenticationDefaults.AuthenticationType,
     // Tells the pipeline to use the cookie authentication we configured above to store the WIF session
     SignInAsAuthenticationType = CookieAuthenticationDefaults.AuthenticationType
   } );
}

Move Global.asax.cs WSFederation configuration into OWIN configuration

Now that we’ve converted the two WSFederation HttpModules, we can finish configuring the OWIN pipeline by converting the WSFederation configuration that lived in the web.config or was set up on application start. In my case, I preferred to set up WSFederation in code using the FederationConfigurationCreated event, like the code below:

FederatedAuthentication.FederationConfigurationCreated += ( sender, args ) =>
{
  args.FederationConfiguration.IdentityConfiguration.AudienceRestriction.AudienceMode = System.IdentityModel.Selectors.AudienceUriMode.Always;

  // this method loads the list of relying parties for a multi-tenant application.
  List<string> relyingParties = GetRelyingParties();
  relyingParties.ForEach( rp => args.FederationConfiguration.IdentityConfiguration.AudienceRestriction.AllowedAudienceUris.Add( new Uri( rp ) ) );

  // This code loads the metadata url, parses it and updates the configuration with details from it, like the signing certificates
  args.FederationConfiguration.IdentityConfiguration.IssuerNameRegistry = new CustomMetadataParser( Settings.StsMetadataUrl );
};

The items configured above can be added to the UseWsFederationAuthentication configuration:

public void Configuration(IAppBuilder app)
{
   ...
   app.UseWsFederationAuthentication( new WsFederationAuthenticationOptions
   {
     ...
     TokenValidationParameters = new TokenValidationParameters()
     {
         // this replaces the IdentityConfiguration.AudienceRestriction setup
         ValidAudiences = GetRelyingParties(),
         ValidateAudience = true
     },
     // Pulls in STS Url and other metadata (like signing certificates) so we don't have to do custom metadata parsing
     MetadataAddress = Settings.StsMetadataUrl,
     ...
   } );
}

Additionally, if you wanted access to WSFederation events in the Global.asax.cs file, you could declare specially named methods on your HttpApplication class and they would be invoked while the WSFederation protocol was executing. Two examples that I’ve used are shown below:

void WSFederationAuthenticationModule_SessionSecurityTokenCreated( object sender, SessionSecurityTokenCreatedEventArgs e )
{
   // extend the expiration of the session cookie to make it last 1 year
   DateTime now = DateTime.UtcNow;
   TimeSpan expiration = TimeSpan.FromDays( 365 );
   e.SessionToken = new SessionSecurityToken( e.SessionToken.ClaimsPrincipal, e.SessionToken.Context, now, now.Add( expiration ) ) { IsPersistent = true };

   e.WriteSessionCookie = true;
}

void WSFederationAuthenticationModule_RedirectingToIdentityProvider( object sender, RedirectingToIdentityProviderEventArgs e )
{
   // add client id parameter to outgoing wsfederation request
   e.SignInRequestMessage.Parameters.Add( "client_id", Settings.ClientId );
}

Again, these items can be replicated in the UseWsFederationAuthentication configuration:

public void Configuration(IAppBuilder app)
{
   ...
   app.UseWsFederationAuthentication( new WsFederationAuthenticationOptions
   {
     ...
     Notifications = new WsFederationAuthenticationNotifications
     {
         // replaces the WSFederationAuthenticationModule_RedirectingToIdentityProvider method
         RedirectToIdentityProvider = notification =>
         {
           notification.ProtocolMessage.Parameters.Add( "client_id", Settings.ClientId );
           return Task.FromResult( 0 );
         },
         // replaces the WSFederationAuthenticationModule_SessionSecurityTokenCreated method
         SecurityTokenValidated = notification =>
         {
           var newAuthenticationProperties = new AuthenticationProperties( notification.AuthenticationTicket.Properties.Dictionary );

           DateTime now = DateTime.UtcNow;
           TimeSpan expiration = TimeSpan.FromDays( 365 );

           newAuthenticationProperties.IssuedUtc = now;
           newAuthenticationProperties.ExpiresUtc = now.Add( expiration );
           newAuthenticationProperties.IsPersistent = true;

           notification.AuthenticationTicket = new AuthenticationTicket( notification.AuthenticationTicket.Identity, newAuthenticationProperties );
           return Task.FromResult( 0 );
         }
     },
     ...
   } );
}

Wrap up

At this point, all of the old WSFederation code is replaced and WSFederation actions are handled by the OWIN pipeline. One thing to note - existing sessions cannot be re-used, so current user sessions will be invalidated by this change. Once a user logs in again at the STS they’ll be issued a new cookie that works with the OWIN cookie authentication configured above.
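
If you need to trigger sign-in or sign-out explicitly, that now goes through the OWIN pipeline as well. Below is a minimal sketch of an MVC controller doing both, assuming the authentication types configured earlier; the redirect targets are only placeholders:

using System.Web;
using System.Web.Mvc;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.WsFederation;

public class AccountController : Controller
{
   public ActionResult SignIn()
   {
      // Challenge the WsFederation middleware; it redirects to the STS and, on return,
      // signs the user in via the cookie middleware configured above
      HttpContext.GetOwinContext().Authentication.Challenge(
         new AuthenticationProperties { RedirectUri = "/" },
         WsFederationAuthenticationDefaults.AuthenticationType );
      return new HttpUnauthorizedResult();
   }

   public ActionResult SignOut()
   {
      // Sign out of both the WsFederation session and the local cookie
      HttpContext.GetOwinContext().Authentication.SignOut(
         WsFederationAuthenticationDefaults.AuthenticationType,
         CookieAuthenticationDefaults.AuthenticationType );
      return RedirectToAction( "Index", "Home" );
   }
}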

Setting up Code Analysis in Visual Studio 2017 projects

04 May 2018

In older versions of Visual Studio, FxCop was the standard for static analysis in .NET. In Visual Studio 2017, along with the release of the Roslyn compiler, the landscape is different. Static analysis is no longer something you install on the machine and configure in the project; it’s delivered via NuGet packages. In this post, I’ll lay out how I suggest setting up static analysis for C# projects. The setup should work equally well for Full Framework and .NET Core.

Which analyzers to use

The first question to answer is which code analyzers to use for your project. Previously the only real option was FxCop, but now you can search for “Roslyn analyzers” and find a plethora of options. I typically stick to the stock Microsoft options, but there are plenty of third-party options too, like StyleCop.

In this example, I’ll use Microsoft.CodeAnalysis.FxCopAnalyzers, which is a meta-package of four other analyzer packages:

  • Microsoft.CodeQuality.Analyzers
    • The bulk of the “classic” FxCop rules are here. For instance, implementing IDisposable properly and passing URIs instead of strings are both checked by this package (a short example follows this list).
  • Microsoft.NetCore.Analyzers
    • .NET Core specific warnings/errors appear here, but many are more generic, like requiring a CultureInfo to be passed to methods that can accept it.
  • Microsoft.NetFramework.Analyzers
    • Full framework .NET warnings are checked here, like handling ISerializable correctly.
  • Text.Analyzers
    • This package provides some basic spell checking (disabled by default).
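
To make that concrete, here is a short, contrived sketch showing two of the issues the code quality analyzers report: a URI passed around as a string, and a class that owns a disposable field without implementing IDisposable.

using System.Net.Http;
using System.Threading.Tasks;

// Both issues below are reported by the Microsoft.CodeQuality.Analyzers package
public class FeedReader
{
   // CA1001: the class owns a disposable field (HttpClient) but does not implement IDisposable
   private readonly HttpClient _client = new HttpClient();

   // CA1054: URI-like parameters should be typed as System.Uri rather than string
   public Task<string> DownloadAsync( string feedUrl )
   {
      return _client.GetStringAsync( feedUrl );
   }
}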

Install and configure analyzers

To get started, install the Microsoft.CodeAnalysis.FxCopAnalyzers NuGet package into all projects in your solution. This by itself will enable code analysis and generate warnings using the default ruleset. I want to take it one step further and use shared rulesets for all projects in the solution.

Add shared rulesets

To utilize shared rulesets, edit the csproj files to reference a shared analyzer ruleset. This file will be used to configure which rules are enabled/disabled in your solution. I prefer to have two ruleset files, one for production source code and one for tests so that I can be flexible on the rules I use to analyze test code.

  1. Edit the projects by adding the following lines to the csproj. They can go anywhere but I typically add them underneath the TargetFramework/RootNamespace property group. I use a tools folder at the root of my git repository but you can put the ruleset file anywhere.

     <PropertyGroup>
       <CodeAnalysisRuleSet>..\..\tools\Source.ruleset</CodeAnalysisRuleSet>
     </PropertyGroup>
    
  2. Add a ruleset file in the location specified above. The exact contents of the ruleset file will vary but if you use FxCop analyzers a good place to start is with this default file - https://gist.github.com/dontjee/4a151dea7bc1169f9dd051da70bec35e. It enables some of the most important rules as warnings.

  3. Repeat the process above for test projects, or any other projects you want to use different rulesets, using a different ruleset file.

     <PropertyGroup>
       <CodeAnalysisRuleSet>..\..\tools\Tests.ruleset</CodeAnalysisRuleSet>
     </PropertyGroup>
    

Build and fix/ignore warnings

Now that we’ve configured the ruleset files, the next step is to do a rebuild of the solution and fix or ignore any warnings that pop up. One rule that I often disable is CA2007 Do not directly await a Task without calling ConfigureAwait. That rule makes sense when building libraries to be consumed in other projects, but when building applications it isn’t necessary. The sketch below shows the kind of code it flags; to disable the rule, follow the steps that come after it.
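
For reference, here is a minimal, hypothetical example of what CA2007 complains about. The rule wants the ConfigureAwait(false) variant so that library code does not capture the caller's synchronization context:

using System.Net.Http;
using System.Threading.Tasks;

public class StatusChecker
{
   private readonly HttpClient _client = new HttpClient();

   // CA2007 flags this await because it does not call ConfigureAwait
   public async Task<string> GetStatusAsync( string url )
   {
      return await _client.GetStringAsync( url );
   }

   // The version the rule wants; useful in libraries, usually unnecessary in application code
   public async Task<string> GetStatusLibraryStyleAsync( string url )
   {
      return await _client.GetStringAsync( url ).ConfigureAwait( false );
   }
}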

  1. Find the rule in the analyzer list under Dependencies->Analyzers->[The-Analyzer-Name]. In the case of CA2007, the analyzer name is Microsoft.CodeQuality.Analyzers.
  2. Under the code analyzer, find the rule you want to disable, right-click on it and set the Rule Set Severity to None.

    This adds the following block to the corresponding ruleset file to disable the rule:

     <Rules AnalyzerId="Microsoft.CodeQuality.Analyzers" RuleNamespace="Microsoft.CodeQuality.Analyzers">
       <Rule Id="CA2007" Action="None" />
     </Rules>
    

Repeat the process above for all rules you wish to disable or fix the warnings that show up. Once that’s done, the code analysis setup is complete for your solution.

Extra Credit - set up builds to fail on analyzer warnings

Now that you have a clean build with no warnings I suggest configuring the continuous build (I hope you have one!) to report warnings as errors so that the build will fail if any new code analysis violations show up. To do this, add the following MSBuild property to the compile step of your build - /p:TreatWarningsAsErrors="true".

Avoid dual writes with sql server and change tracking - Part 2

20 December 2017

In my last post I suggested using the database stream writer pattern to avoid writing to multiple data stores outside of a transaction from your application. This post details the implementation. For my example, the application is a C# application writing to SQL Server and tracking changes using the Change Tracking feature. The data model for this example is similar to YouTube’s: it contains users, channels and media. A user has one or more channels, which in turn have one or more pieces of media associated with them. The full schema is contained in this gist.

At a high level, there are 3 pieces of the architecture to consider.

  1. The primary application which will write to the SQL database. It will not deal with writes to the downstream data stores and will largely be ignored by this post.
  2. The SQL Server database which will be configured to track changes to all necessary tables.
  3. The application that monitors the change tracking stream from the database and pushes updates to the downstream data stores (cache, search, etc.).

Database Implementation

First, change tracking must be enabled on the SQL Server database and tables. The SQL below enables change tracking on the database, configured to retain changes for seven days with automatic cleanup of expired changes, and then enables it on all three tables.

ALTER DATABASE BlogPostExample
SET CHANGE_TRACKING = ON  
  (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON)  

ALTER TABLE dbo.UserAccount
ENABLE CHANGE_TRACKING  
WITH (TRACK_COLUMNS_UPDATED = ON)

ALTER TABLE dbo.Channel
ENABLE CHANGE_TRACKING  
WITH (TRACK_COLUMNS_UPDATED = ON)

ALTER TABLE dbo.Media
ENABLE CHANGE_TRACKING  
WITH (TRACK_COLUMNS_UPDATED = ON)

One final table is required to track the current position in the change tracking stream our monitor has consumed. The following SQL will create the table and initialize it with the minimum change tracking version currently in the database:

CREATE TABLE [dbo].[CacheChangeTrackingHistory] (
   [CacheChangeTrackingHistoryId]               INT   IDENTITY (1, 1) NOT NULL,
   [TableName]                             NVARCHAR (512)   NOT NULL,
   [LastSynchronizationVersion]            BIGINT   NOT NULL,
);
ALTER TABLE [dbo].[CacheChangeTrackingHistory]
   ADD CONSTRAINT [PK_CacheChangeTrackingHistory] PRIMARY KEY CLUSTERED ([CacheChangeTrackingHistoryId] ASC);

-- Add default values for last sync version
INSERT INTO dbo.CacheChangeTrackingHistory( TableName, LastSynchronizationVersion )
VALUES ('dbo.UserAccount', CHANGE_TRACKING_MIN_VALID_VERSION(Object_ID('dbo.UserAccount')))
INSERT INTO dbo.CacheChangeTrackingHistory( TableName, LastSynchronizationVersion )
VALUES ('dbo.Channel', CHANGE_TRACKING_MIN_VALID_VERSION(Object_ID('dbo.Channel')))
INSERT INTO dbo.CacheChangeTrackingHistory( TableName, LastSynchronizationVersion )
VALUES ('dbo.Media', CHANGE_TRACKING_MIN_VALID_VERSION(Object_ID('dbo.Media')))

Change Monitor Implementation

With the database properly configured we can start on the application that will consume the change tracking stream from SQL Server. The changes can be accessed by the CHANGETABLE( CHANGES <TABLE_NAME> ) function. I will focus on UserAccount changes but the code will apply equally to the Channel and Media tables. When our monitor application starts, a loop is started to process change tracking updates and push them to downstream data stores. In this case the only downstream data store is the Cache represented by the ICache interface. If we had multiple downstream systems to update, the application would start one monitoring loop with a distinct change tracking history table for each system.
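
The post does not show ICache or UserAccountChangeModel, so here is a hedged sketch of the shapes the monitor loop below assumes, inferred from how they are used by the loop and the change tracking query:

using System;

public interface ICache
{
   // objectType is a cache key prefix (e.g. "user_account"), operationType is Insert/Update/Delete,
   // id is the primary key of the changed row and value is the model to store
   void UpdateObject( string objectType, string operationType, int id, object value );
}

public class UserAccountChangeModel
{
   // Columns selected by the change tracking query below
   public int UserAccountId { get; set; }
   public string Email { get; set; }
   public string DisplayName { get; set; }
   public DateTime CreateDate { get; set; }
   // 'Insert', 'Update' or 'Delete', derived from SYS_CHANGE_OPERATION
   public string OperationType { get; set; }
}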

public static class ChangeTracker
{
    // Connection string for the application database (initialization not shown)
    private static string _connectionString;

    internal static async Task StartChangeTrackingMonitorLoopAsync( CancellationToken token, ICache userAccountCache )
    {
       while ( true )
       {
          if ( token.IsCancellationRequested )
          {
             break;
          }

          using ( ChangeTrackingBatch<UserAccountChangeModel> userAccountChangesBatch = GetLatestUserChanges() )
          {
             UserAccountChangeModel[] userAccountChanges = ( await userAccountChangesBatch.GetItemsAsync() ).ToArray();
              foreach( var userAccount in userAccountChanges )
              {
                userAccountCache.UpdateObject( "user_account", userAccount.OperationType, userAccount.UserAccountId, userAccount );
              }
              userAccountChangesBatch.Commit();
          }

          await Task.Delay( 1000 );
       }
    }

    private static ChangeTrackingBatch<UserAccountChangeModel> GetLatestUserChanges()
    {
         string cmd = @"
DECLARE @last_synchronization_version BIGINT = (SELECT LastSynchronizationVersion FROM dbo.CacheChangeTrackingHistory WHERE TableName = 'dbo.UserAccount')

DECLARE @current_synchronization_version BIGINT = CHANGE_TRACKING_CURRENT_VERSION();
SELECT ct.UserAccountId, ua.Email, ua.DisplayName, ua.CreateDate
		, CASE WHEN ct.SYS_CHANGE_OPERATION = 'I' THEN 'Insert' WHEN ct.SYS_CHANGE_OPERATION = 'U' THEN 'Update' ELSE 'Delete' END AS OperationType
FROM dbo.UserAccount AS ua
	RIGHT OUTER JOIN CHANGETABLE(CHANGES dbo.UserAccount, @last_synchronization_version) AS ct ON ua.UserAccountId = ct.UserAccountId

UPDATE dbo.CacheChangeTrackingHistory
SET LastSynchronizationVersion = @current_synchronization_version
WHERE TableName = 'dbo.UserAccount'
";
         return new ChangeTrackingBatch<UserAccountChangeModel>( _connectionString, cmd );
    }
}
internal class ChangeTrackingBatch<T> : IDisposable
{
  private readonly string _command;
  private SqlTransaction _transaction;
  private IEnumerable<T> _items;
  private SqlConnection _connection;
  private readonly object _param;

  public ChangeTrackingBatch( string connectionString, string command, object param = null )
  {
     _connection = new SqlConnection( connectionString );
     _command = command;
     _param = param;
  }

  public async Task<IEnumerable<T>> GetItemsAsync( )
  {
     if ( _items != null )
     {
        return _items;
     }

     _connection.Open();
     _transaction = _connection.BeginTransaction( System.Data.IsolationLevel.Snapshot );
     _items = await _connection.QueryAsync<T>( _command, _param, _transaction );
     return _items;
  }

  public void Commit()
  {
     _transaction?.Commit();
     _connection?.Close();
  }

  public void Dispose()
  {
     Dispose( true );
     GC.SuppressFinalize( this );
  }

  protected virtual void Dispose( bool disposing )
  {
     if ( disposing )
     {
        _transaction?.Dispose();

        _connection?.Dispose();
     }
  }
}

One additional thing to note is the use of snapshot isolation for the database transaction. This makes sure we’re working with a consistent view of the database and prevents collisions with the change tracking cleanup process. Snapshot isolation must be enabled on the database (ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON) before transactions can use it.

Wrap Up

At this point the solution is complete. Any updates to the UserAccount table will be tracked and pushed into the cache by the change tracker class. If the update to the cache fails, the transaction will be rolled back. The monitor application will then retry applying changes in order until the change is pushed into the cache successfully or the change is cleaned up by change tracking retention settings.

This solution is tied to the scalability of SQL Server, so for a write heavy application a different architecture may be necessary. For example, SQL Server could be replaced by a log stream like Kafka. However, for moderate scale applications this architecture will be more than adequate to handle load. We’ve solved the resiliency and race condition issues from the dual write scenario by ensuring that any successful database write will be pushed to downstream systems, in order. We’ve also improved the overall architecture of the primary application by removing the writes to secondary data stores. Plus we haven’t introduced any new data stores to learn and manage. For reference, the full application source code is available on GitHub.

Avoid dual writes with sql server and change tracking - Part 1

13 December 2017

Problem: Consistent updates to multiple data stores

A common problem in web applications is the need to persist updates to multiple data stores (sql, cache, search, etc). Rarely does an application deal only with one data store. How do we get one update from the application into all data stores? The most common approach is dual writes where the application simply writes to each data store in parallel or serial order. This is compelling because it’s easy to implement and works well with low traffic, low error scenarios.

[Diagram: dual write example]
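
To make the dual write approach concrete, here is a minimal sketch; the repository and cache interfaces are hypothetical stand-ins rather than real libraries:

using System.Threading.Tasks;

// Hypothetical data access abstractions, only here to make the dual write concrete
public interface IChannelRepository { Task UpdateNameAsync( int channelId, string newName ); }
public interface IChannelCache { Task SetNameAsync( int channelId, string newName ); }

public class ChannelService
{
   private readonly IChannelRepository _repository;
   private readonly IChannelCache _cache;

   public ChannelService( IChannelRepository repository, IChannelCache cache )
   {
      _repository = repository;
      _cache = cache;
   }

   public async Task UpdateChannelNameAsync( int channelId, string newName )
   {
      // Write 1: the primary SQL database
      await _repository.UpdateNameAsync( channelId, newName );

      // Write 2: the cache, with no transaction spanning both writes. If this call fails,
      // or two requests interleave, the two stores silently disagree.
      await _cache.SetNameAsync( channelId, newName );
   }
}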

However, there are many tricky errors that can arise. The most common is a failure of one of the writes: one data store has the new data and the other has stale data. Another problem is race conditions among the different data store updates, like in the diagram below. In that example, the value ends up as ‘2’ in the SQL database and ‘3’ in the Redis cache. No errors are thrown in this case, making it even harder to track down.

[Diagram: dual write race condition example]

How do we deal with these problems? One approach is to build a process to monitor the databases, look for drift between the two, then correct the one that’s out of line. However, this is difficult because you have to infer which database is correct. It’s also slow to analyze the whole database so the difference between the two databases may be “in the wild” for some time.

A better solution is the unified log pattern. At a high level, the unified log pattern is implemented by persisting all writes to one stream data store like Kafka or Azure Event Hubs. The log is consumed by one or more applications that persist the updates to dependent data stores as shown below.

[Diagram: unified log example]

This avoids many of the problems from the dual write scenario but it is difficult to introduce into existing systems that currently write to more traditional data stores like SQL. Additionally, if your team is used to working with traditional databases, a log data store can be a difficult mindset shift.

What else can we do? A better solution is to only write to the SQL database. Then consume the changelog of the database and update any dependent data stores. I call this approach database stream writer. This idea comes from a series of blog posts Martin Kleppmann did on Events and Stream Processing. With this system you can continue writing to the existing SQL database but your primary application no longer has to deal with updating dependent data stores.

[Diagram: database leader example]

How do we implement the database stream writer pattern?

This pattern is implemented by turning Change Data Capture on for your database. Change Data Capture exposes the changes made to a database to a third-party application based on its commit log. Many database vendors support this feature; for example, PostgreSQL exposes a method called logical decoding that can be used to parse updates from the write-ahead log. SQL Server calls this feature change data capture, where updates to tables are stored in system tables inside the same database and exposed via table functions. SQL Server also has a lighter-weight feature called change tracking that does not track individual column updates but simply records when a table row is modified. For many applications, just knowing that a row changed is enough information to make the necessary updates to dependent data stores (like invalidating a cache).

With a change data capture system in place, the last step is to write an application to consume the change data capture stream and write the data to downstream systems. In my next post, I will detail how to implement a system like this using SQL Server’s Change Tracking and a C# application for consuming the change stream.

Solutions For Async/Await In MVC Action Filters

15 August 2017

Async/await has been available in .NET for years, but until the release of ASP.NET Core there was no way to create an MVC ActionFilter that uses async/await properly. Since async was not supported by the framework, there was no truly safe way to call async code from an ActionFilter. This has changed in ASP.NET Core, but if you are using ASP.NET MVC 5 or below you’re stuck.

Recently, I found a workaround: use an async HttpModule to load whatever data the ActionFilter will need. You could also do all the work of the ActionFilter in the HttpModule, but I prefer to keep the filter because it ties more closely into the rest of the MVC pipeline. My example demonstrates moving async code out of an AuthorizationFilter, but the pattern will work with any ActionFilter.

ActionFilter to fix

This is an example authorization filter that does async work as part of authorizing the request. Because attributes do not have async methods to override, we’re stuck calling .Wait() and .Result to synchronously execute the task. This code is ripe for deadlocks.

public class WebAuthorizationFilter : AuthorizeAttribute
{
  public override void OnAuthorization( AuthorizationContext filterContext )
  {
     if ( AllowAnonymous( filterContext ) )
     {
        return;
     }

     Task<bool> isAuthorizedTask = DoAsyncAuthorizationWork( filterContext.HttpContext );
     isAuthorizedTask.Wait();

     bool isAuthorized = isAuthorizedTask.Result;
     if ( !isAuthorized)
     {
        filterContext.Result = new UnauthorizedResult();
     }
  }
}

Two classes are necessary to add the module:

New HttpModule to handle async code

Any async code goes in this class. Call necessary methods then add state to HttpContext.Items.

public class WebAuthorizationAsyncModule : IHttpModule
{
    public void Init( HttpApplication context )
    {
       var authWrapper = new EventHandlerTaskAsyncHelper( AuthorizeRequestAsync );

       // Execute module early in pipeline during request authorization
       // To execute the module after the MVC route has been bound, use `context.AddOnPostAcquireRequestStateAsync` instead
       context.AddOnAuthorizeRequestAsync( authWrapper.BeginEventHandler, authWrapper.EndEventHandler );
    }

    private static async Task AuthorizeRequestAsync( object sender, EventArgs e )
    {
       HttpApplication httpApplication = (HttpApplication) sender;
       HttpContext context = httpApplication.Context;

       bool isAuthorized = await DoAsyncAuthorizationWork( context );

       // Store the result in HttpContext.Items for later access
       context.Items.Add( "IsAuthorized", isAuthorized );
    }
}

Module Registration Startup Class

This class registers the HttpModule created above with asp.net. You can also register in the web.config but I prefer to keep this kind of configuration in code.

public class PreApplicationStartCode
{
  public static void Start()
  {
    DynamicModuleUtility.RegisterModule( typeof( WebAuthorizationAsyncModule ) );
  }
}

The second code change required is to add the PreApplicationStartCode class to the startup classes registered with ASP.NET. To do this, apply the assembly-level PreApplicationStartMethod attribute in Global.asax.cs, above your HttpApplication class.

[assembly: PreApplicationStartMethod( typeof( Some.Code.PreApplicationStartCode ), "Start" )]
namespace Some.Code
 {
    public class WebApplication : HttpApplication
    {
      ...

Action Filter changes

This is the same authorization filter from above, changed to read the authorization result from HttpContext.Items instead of doing the work directly.

public class WebAuthorizationFilter : AuthorizeAttribute
{
  public override void OnAuthorization( AuthorizationContext filterContext )
  {
     if ( AllowAnonymous( filterContext ) )
     {
        return;
     }

     bool isAuthorized = (bool) filterContext.HttpContext.Items["IsAuthorized"];
     if ( !isAuthorized)
     {
        filterContext.Result = new UnauthorizedResult();
     }
  }
}

Next Steps

This example was deliberately simple, and as such it executes for every request. To only execute the module for specific URLs, you can inspect the HttpContext.Request.Url property. Alternatively, if you delay execution of the module until after the ‘Acquire State’ pipeline step (by using context.AddOnPostAcquireRequestStateAsync in Module.Init), you can access the MVC route values via HttpContext.Request.RequestContext.RouteData.Values and execute the module code only for specific Controllers/Actions, as sketched below.
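
Here is a hedged sketch of that filtering inside the module's handler, assuming registration with context.AddOnPostAcquireRequestStateAsync so the MVC route has been bound; the /secure path and Account controller are only illustrative names:

private static async Task AuthorizeRequestAsync( object sender, EventArgs e )
{
   HttpApplication httpApplication = (HttpApplication) sender;
   HttpContext context = httpApplication.Context;

   // Cheap URL check - skip requests outside a (hypothetical) /secure area
   if ( !context.Request.Url.AbsolutePath.StartsWith( "/secure", StringComparison.OrdinalIgnoreCase ) )
   {
      return;
   }

   // Route-based check - only run the async work for the (hypothetical) Account controller
   var routeValues = context.Request.RequestContext.RouteData.Values;
   if ( !string.Equals( (string) routeValues["controller"], "Account", StringComparison.OrdinalIgnoreCase ) )
   {
      return;
   }

   bool isAuthorized = await DoAsyncAuthorizationWork( context );
   context.Items.Add( "IsAuthorized", isAuthorized );
}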

