WCF MaxReceivedMessageSize – handle with care

I see a number of questions raised when the following exception is thrown by a WCF client or server:

The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.

And a lot of the responses I see are similar to the following:

Easy to solve you just need to set the maxReceivedMessageSize to 2147483647
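
In config terms that usually translates to a tweak along these lines (shown here for basicHttpBinding; the same attribute exists on the other bindings, and the binding name is just illustrative):

<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <!-- quota bumped right up to int.MaxValue; name is illustrative -->
      <binding name="reallyBigMessages" maxReceivedMessageSize="2147483647" />
    </basicHttpBinding>
  </bindings>
</system.serviceModel>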

So you make the change and the error goes away, all is good…

I’m not so sure. You see, the reason WCF has the maxReceivedMessageSize in the first place is to act as a safeguard by restricting the size of the message being transported. If we think about the change we have just made, we have gone from allowing up to a 64KB message to 2GB. Now chances are you're not going to hit that limit, but even if we only get up to 5MB (say because we are selecting every transaction a client has ever made and they have been a customer for several years), the following is put under extra stress:

  • The server has to serialize 5MB of data, which increases memory and CPU usage
  • Every network hop now has to cope with 5MB going over the wire
  • The calling application has to deserialize 5MB of data, which also increases memory and CPU usage
  • The data then needs to be displayed to the client; if this is a web application that means returning the HTML for the 5MB of data, which will probably end up being even more once the presentational HTML structure is added, significantly increasing outbound bandwidth usage

Now looking at the above you may be thinking 5MB is not much and should be handled easily, and you're probably correct. However, these are the steps for a single request; once you factor in multiple clients with similar amounts of transactions all browsing the site, you're going to have major problems, especially given the time a request may take to come back to the client's browser – the client will usually refresh the page a number of times before giving up.

Looking at the above, there are a number of potential problems that could arise:

  • Depending on the memory the server has, it may reach its maximum usage if it has to serialize multiple large objects
  • The calling application may suffer the same fate when deserializing multiple large objects
  • The network may not have been provisioned for the amount of traffic now going over it, leading to sluggish replies
  • The calling application could run out of threads, all waiting for replies to come back from the server (hopefully there will at least be a timeout set so that a thread does not wait indefinitely)
  • The client's browser may not be able to deal with the vast amount of HTML being returned, or more likely the client will give up before the browser finishes rendering it

So how do we solve this? Well, let's take a step back. Once we start seeing errors about the size of messages we should really be thinking about what it is we are trying to achieve. If we look at the example above:

As a client I want to be able to view my previous transactions

If we think about the client who could have 5 years' worth of transactions, potentially thousands of them, we want to present the transactions that are important to them now. Chances are they are interested in their recent transactions, so you could have:

  • Last 6 months transactions
  • Last 12 months transactions
  • Transactions made in 2012
  • Transactions made greater than £1000

By partitioning the transactions we have made it easier for the client to locate the transactions they’re interested in and also solved the issue of sending huge data objects over the network.
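
To make that concrete, here's a rough sketch of the shape a partitioned contract could take (the names are purely illustrative, not from any real service):

using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Illustrative only: instead of one "give me everything" call,
// the client asks for a bounded slice of transactions.
[DataContract]
public class Transaction
{
    [DataMember] public DateTime Date { get; set; }
    [DataMember] public decimal Amount { get; set; }
    [DataMember] public string Description { get; set; }
}

[ServiceContract]
public interface ITransactionService
{
    // covers the "last 6 months", "last 12 months", "made in 2012" style partitions
    [OperationContract]
    IList<Transaction> GetTransactionsBetween(Guid customerId, DateTime fromDate, DateTime toDate);

    // covers the "transactions greater than £1000" style partition
    [OperationContract]
    IList<Transaction> GetTransactionsOver(Guid customerId, decimal minimumAmount);
}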

Now some people may be thinking that this is a cop-out: if I were a client I might want to see all my transactions, so how would you get this to work?…

  1. The client would be able to request all transactions
  2. The client is presented with a message informing them that the results are being collated and that they will be sent an email once ready
  3. In the meantime a backend service deals with the request by querying the relevant database directly (preferably one separate from the OLTP database)
  4. It then builds the results into a file (CSV, XML, JSON etc.)
  5. The file is provided to the client over a more appropriate channel (FTP, zipped email attachment)
  6. The client is notified via email that their transactions can now be viewed, along with the details needed to access the channel above

What I have done above is acknowledge that providing all of a user's transactions could result in a massive result set and therefore needs to be treated differently from a simple query performed via the website.

The key point I want to make here is that when you get an error raised due to message size it's usually a dead giveaway that you need to re-think what it is you're trying to achieve, especially if you have already increased the message size once before!

log4net FallbackAppender – under the hood

Some Background

I recently put a log4net contrib project up on Google Code. The only item currently up there (though hopefully there should be more stuff from the community in time) is a custom appender I wrote to address the need for an appender that is clever enough to fall back to other appenders if one fails. This originally came from a forum post of someone wondering how to do it, and after some investigation I found that NLog has this built-in.

Investigating how to add it

So my first step was to grab the log4net source and see how I could integrate this appender in. One thing I really didn't want to do was change the log4net code itself; instead the appender should plug in just like you would do normally.

The first step was to see how to create an appender that holds other appenders, and I stumbled across IAppenderAttachable and then the ForwardingAppender which implements it. This interface is what allows you to use XML along these lines:

<appender name="ForwardingAppender" type="log4net.Appender.ForwardingAppender">
	<appender-ref ref="ConsoleAppender" />
	<appender-ref ref="RollingFileAppender" />
</appender>

And the configuration makes sure that the appender is populated with the referenced appenders. Looking at the behaviour of the existing ForwardingAppender it gave me a lot of what I was after so I inherited from this.
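
So the starting point for the new appender is simply a subclass; a sketch (the namespace here is illustrative, not necessarily the contrib project's):

using log4net.Appender;

namespace Log4NetContrib.Appenders
{
    // Inherits the appender-ref/IAppenderAttachable plumbing from
    // ForwardingAppender; the fallback behaviour is layered on top by
    // overriding the append logic (covered below).
    public class FallbackAppender : ForwardingAppender
    {
    }
}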

Hitting the main problem

Now that I had the FallbackAppender, which got most of its behaviour from the ForwardingAppender, the next step was to set up the following logic:

  1. Get first appender in the collection
  2. Try and append to it
  3. If succeeded exit
  4. Otherwise try and append to the next appender in the collection
  5. Rinse repeat until one succeeds or all are exhausted

On the face of it this seems quite simple: loop over the appenders, put a try…catch around the call to Append, then have some logic in the catch to move on to the next appender and repeat, something like this:

while (appenderQueue.Count > 0)
{
    try
    {
        var appender = appenderQueue.Dequeue();
        appender.Append(loggingEvent);
        break;
    }
    catch (Exception thrownException)
    {
        // log the failure to the internal log and let the loop
        // move on to the next appender in the queue
    }
}

When I tried this and tested it against log4net it didn't work: only the first appender was ever appended to, even when I set up its config incorrectly to simulate a failure.

After some digging the issue became obvious: appenders don't let exceptions bubble up through the stack, and instead log to an IErrorHandler that they get via a property on the AppenderSkeleton class they inherit from. With this being the only way to track errors from appenders, I had to impose a limitation.

Adding the AppenderSkeleton limitation to go forward

With the ErrorHandler property coming from AppenderSkeleton being the only way to discover errors, I came to the conclusion that I needed to hook into this property; luckily it can be set. This also imposed the limitation that any appender being referenced has to inherit from AppenderSkeleton somewhere in its inheritance tree in order to expose the ErrorHandler property.

Putting in the hooks

The next step was to create my own IErrorHandler that simply records whether an error occurred, and at appender activation I replace each referenced appender's ErrorHandler with it. Now the code can see if an error has been raised for a particular appender and move on to the next one.
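
A minimal sketch of that kind of error handler (the actual contrib implementation may differ, the idea is just to remember that something went wrong):

using System;
using log4net.Core;

// Records whether the appender it has been attached to reported an error.
internal class ErrorRecordingErrorHandler : IErrorHandler
{
    public bool ErrorOccurred { get; private set; }

    public void Error(string message)
    {
        ErrorOccurred = true;
    }

    public void Error(string message, Exception e)
    {
        ErrorOccurred = true;
    }

    public void Error(string message, Exception e, ErrorCode errorCode)
    {
        ErrorOccurred = true;
    }
}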

Summary

I guess this post demonstrates that even if a third-party component doesn't provide you with an API that makes extending its behaviour easy (in this example it would have helped if log4net appenders let exceptions bubble up, or if the Append method returned a bool indicating whether it succeeded), there are still tricks we can use to achieve it.

Getting to grips with NHibernate: Stored Procedures Redux

Update: Due to popular demand I have included using stored procedures for insert/update/delete

I hope this post saves someone the amount of time it took me to work out how to run a stored procedure using NHibernate.

Selecting from a Stored Procedure

Create Stored Procedure

The first step is to create your stored procedure. If we take a basic example:

CREATE PROCEDURE SearchStaff
	(
	@LastName VARCHAR(255) = NULL,
	@FirstName VARCHAR(255) = NULL
	)
AS
	SET NOCOUNT ON

	SELECT s.*
	FROM Staff s
	WHERE (s.LastName LIKE @LastName OR @LastName IS NULL)
	AND (s.FirstName LIKE @FirstName OR @FirstName IS NULL)
	ORDER BY FirstName, LastName

	SET NOCOUNT OFF

Note the SET NOCOUNT ON/SET NOCOUNT OFF

Create a named query

In order for NHibernate to be able to run the stored procedure we need to create a named query in our hbm file, along these lines:

<sql-query name="StaffSearching">
	<return class="Staff" />
	exec SearchStaff :LastName, :FirstName
</sql-query>

In this example, because the stored procedure is actually returning staff rows, I set the return to the Staff class. If I was only returning something like an integer value I could use a scalar return instead:

<sql-query name="StaffCount">
	<return-scalar column="StaffCount" type="Int32" />
	exec StaffCount :LastName, :FirstName
</sql-query>

The name does not need to be the same name as the stored procedure.

Create the NHibernate code

// session set here

IQuery searchQuery = session.GetNamedQuery("StaffSearching");

if (!string.IsNullOrEmpty(LastName))
	searchQuery.SetString("LastName", LastName);
else
    searchQuery.SetString("LastName", null);

if (!string.IsNullOrEmpty(FirstName))
    searchQuery.SetString("FirstName", FirstName);
else
    searchQuery.SetString("FirstName", null);

IList foundStaff = searchQuery.List();

Notice that the stored procedure can deal with filtering or no filtering, so if the fields have not been set we can simply set them to a null value and NHibernate will pass the parameters as NULL, which is what we want. It's worth noting that the above is quite a simple example, and for it I would not actually use a stored procedure but instead use NHibernate's own querying objects. The case where I did use a stored procedure was for a paging routine on SQL Server 2000.

Using Stored Procedure for Insert, Update & Delete

Create Stored Procedures

CREATE PROCEDURE InsertStaff
	(
	@LastName VARCHAR(255),
	@FirstName VARCHAR(255),
	@EmailAddress VARCHAR(512)
	)
AS
	INSERT INTO Staff
	(
		LastName,
		FirstName,
		EmailAddress
	)
	VALUES
	(
		@LastName,
		@FirstName,
		@EmailAddress
	)

Note: SET NOCOUNT ON is not used in these stored procedures; NHibernate uses the rows-affected count coming back from the statement to check it succeeded, so suppressing it can cause problems.

I'm not going to include the SQL for the update and delete stored procedures as I don't think it adds any value – it's the easy part.

Update NHibernate XML Mapping

We need to tell NHibernate the SQL we want to execute for our insert/update/delete; we do this inside the class element of the mapping, something like the following (the id and property mappings are whatever you already have – the important additions are the sql-insert/sql-update/sql-delete elements):

<class name="Staff" table="Staff">
	<id name="Id">
		<generator class="native" />
	</id>
	<property name="LastName" />
	<property name="FirstName" />
	<property name="EmailAddress" />

	<sql-insert>EXEC InsertStaff ?,?,?</sql-insert>
	<sql-update>EXEC UpdateStaff ?,?,?,?</sql-update>
	<sql-delete>EXEC DeleteStaff ?</sql-delete>
</class>

Caveats

  • The last parameter will be the id for the update.
  • The ordering of the parameters is determined by NHibernate, so the best way to find out the ordering is to view the generated SQL (see the snippet below) – a bit pants, but hey ho.
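
If you don't already have it switched on, getting NHibernate to log the SQL it generates is just a configuration property:

<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- existing connection/dialect settings stay as they are -->
    <property name="show_sql">true</property>
  </session-factory>
</hibernate-configuration>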

Your code will remain the same, so no changes needed there.

Multiple instances of the same Windows service

There are times where you want to be able to install multiple instances of the same Windows service onto the same server (i.e. to support different environments when you have limited servers at your disposal). This poses a problem: as you are probably aware, Windows will not allow more than one service to be installed with the same service name, and when you create a Windows service project and an installer you assign the service name at design time.

What we need is some way to assign the service name at runtime. Well, after some extensive googling around I found a way of achieving this: the ProjectInstaller has two methods you can override, Install and Uninstall, which enable us to make changes to the service installer at runtime 🙂 Yeah that's great, however how do we assign the service name?! Fear not! At our disposal is the Context object, which happens to have a Parameters dictionary, so we can capture custom command-line arguments passed to installutil.exe. Enough yapping, time for some code:

public override void Install(System.Collections.IDictionary stateSaver)
{
	RetrieveServiceName();
	base.Install(stateSaver);
}

public override void Uninstall(System.Collections.IDictionary savedState)
{
	RetrieveServiceName();
	base.Uninstall(savedState);
}

private void RetrieveServiceName()
{
	var serviceName = Context.Parameters["servicename"];
	if (!string.IsNullOrEmpty(serviceName))
	{
		this.SomeService.ServiceName = serviceName;
		this.SomeService.DisplayName = serviceName;
	}
}

In this instance I have made the servicename argument optional, in which case it will use the default assigned at design time; you could enforce it in your version.
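
For completeness, the overrides above live in the installer class itself; a rough sketch (SomeService is assumed to be the ServiceInstaller created for your service, and the values are illustrative):

using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    private readonly ServiceProcessInstaller serviceProcessInstaller;
    private readonly ServiceInstaller SomeService;

    public ProjectInstaller()
    {
        serviceProcessInstaller = new ServiceProcessInstaller
        {
            Account = ServiceAccount.LocalSystem
        };

        // design-time defaults, overridden by /servicename at install time
        SomeService = new ServiceInstaller
        {
            ServiceName = "My Service",
            DisplayName = "My Service",
            StartType = ServiceStartMode.Automatic
        };

        Installers.Add(serviceProcessInstaller);
        Installers.Add(SomeService);
    }

    // Install, Uninstall and RetrieveServiceName as shown above
}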

We can now use it like this:

installutil /servicename="My Service [SysTest]" d:\pathToMyService\Service.exe

and for uninstall:

installutil /u /servicename="My Service [SysTest]" d:\pathToMyService\Service.exe

Note that for the uninstall you need to supply the servicename so Windows knows which service to uninstall.

One thing that surprises me is how hidden this approach is; most search results bring up people wanting the same functionality and being told it ain't possible, or articles that suggest messing with config files. This, to me, seems a much simpler and nicer way to achieve multiple instances.

Enforcing Conventions

In my last post I demonstrated adding AOP to cut down on cross-cutting code, and at the end mentioned that it would be nice to enforce a convention throughout the system – the example being that each public method in the task layer is decorated with a certain attribute.

I was unsure about how to do this until recently seeing a post by Ayende (http://www.ayende.com/Blog/archive/2008/05/05/Actively-enforce-your-conventions.aspx) in which he references an article by Glenn Block, where the two of them came up with a unit test called PrismShouldNotReferenceUnity. In the test they use reflection to check that there are indeed no references from the Prism assembly to the Unity assembly.

This is a great idea! You now have a repeatable test that can be run to make sure the conventions for your system are met, so armed with this technique I created the following test:

[Test]
public void task_class_methods_should_be_marked_with_wrap_exception_attributes()
{
    try
    {
        Assembly asm = Assembly.LoadFrom( "MCromwell.StaffIntranet.Task.dll" );
        var wrapExceptionType = asm.GetType( "MCromwell.StaffIntranet.Task.Infrastructure.WrapExceptionWithAttribute" );
        Assert.IsNotNull(wrapExceptionType);

        foreach (Type current in asm.GetTypes())
        {
            if (current.FullName.StartsWith( "MCromwell.StaffIntranet.Task.Tasks." ) && (!current.IsInterface) && (!current.IsAbstract))
            {
                foreach (var method in current.GetMethods())
                {
                    if ((method.IsPublic) && (method.DeclaringType.Name != "Object"))
                    {
                        if (method.GetCustomAttributes(wrapExceptionType, false).Length <= 0)
                            Assert.Fail("no wrap exception attribute found on type '{0}', method '{1}'", current.FullName,method.Name);
                    }
                }
             }
        }
    }
    catch (ReflectionTypeLoadException rex)
    {
        foreach (var current in rex.LoaderExceptions)
            Console.WriteLine(current.ToString());
        throw;
    }
    catch (Exception ex)
    {
        Console.Error.Write(ex.Message);
        throw;
    }
}

In here you can see that by leveraging reflection I can browse all the public methods of my task layer classes and make sure they do indeed have a WrapExceptionWithAttribute. The other cool thing is that by doing it this way I can freely add new classes and they will need to comply with the conventions set out or the test will fail, cool eh!

One thing to point out is that if you start increasing the number of conventions and want a better way to control and report on them, you probably want to look into something like FxCop or NDepend's CQL.

Adding AOP to Staff Intranet

Because I'm a stickler for good code I put exception handling into my task layer: I wrap any exception that may be raised from the data access layer in an appropriate exception for the task layer and also log it. Because of this I end up with lots of similar-looking unit test code to make sure I'm enforcing this rule:

[Test]
public void Should_log_exception_if_exception_is_raised_when_deleting_session()
{
    Guid id = Guid.Empty;
    IAdministrationRepository mockRepository = CreateMock<IAdministrationRepository>();

    IServiceResolver mockResolver = CreateMock<IServiceResolver>();
    ILog mockLog = CreateMock<ILog>();
    Exception mockException = new Exception( "mock ex" );

    using (Record)
    {
        SetupResult.For(mockResolver.Resolve<ILog>())
                   .Return(mockLog);
        SetupResult.For(mockRepository.FindLoginSessionBy(id))
                   .IgnoreArguments()
                   .Return(new LoginSession(id));
        mockRepository.Delete(null);
        LastCall.IgnoreArguments()
                .Throw(mockException);
        mockLog.Error(mockException);
    }

    using (PlayBack)
    {
        try
        {
            IoC.InitializeWith(mockResolver);
            IAuthenticationTask sut = createSUT(mockRepository);
            sut.InvalidateSessionFor(id);
        }
        catch { }
    }
}

And to fulfil this I then end up with cross-cutting code to meet the test behaviour:

try
{
    //... work here
}
catch (Exception thrownException)
{
    Log(thrownException);
    throw new ProblemSavingStaffMemberException(thrownException);
}

To try and cut down on this cross-cutting code and keep the tasks leaner, I did some investigation into AOP strategies.

My first reaction was… I'm using Castle Windsor, so can I utilize its built-in AOP capabilities? After changing some code and adding an interceptor I quickly found out this wouldn't be possible, as it doesn't support out or ref parameters, and with some of my task layer methods using out parameters to pass back a notification object this option was gone!

Next up I had a look at PostSharp, the difference between the two being that PostSharp weaves code in after compilation whereas Windsor dynamically creates proxies at runtime. After looking at some examples I had an idea of how to implement it, so after installing PostSharp I created my aspect that will inject itself into other objects:

[Serializable]
[AttributeUsage(AttributeTargets.All)]
public class TaskExceptionHandlerAttribute : OnMethodBoundaryAspect
{
    public override void OnException(MethodExecutionEventArgs eventArgs)
    {
        Exception thrownException = eventArgs.Exception;
        LogException(thrownException);
        WrapExceptionWithAttribute wrapExceptionAttribute = RetrieveWrappingExceptionAttribute(eventArgs.Method);
        if (wrapExceptionAttribute != null)
        {
            Type exceptionToWrapWith = wrapExceptionAttribute.WrapExceptionType;
            Exception exceptionToThrow = (Exception)Activator.CreateInstance(exceptionToWrapWith, thrownException);
            if (exceptionToThrow != null)
                throw exceptionToThrow;
        }
        throw new TaskLayerException(thrownException);
    }

    private static WrapExceptionWithAttribute RetrieveWrappingExceptionAttribute(MethodBase method)
    {
        WrapExceptionWithAttribute wrapExceptionAttribute =
        method.GetFirstCustomAttribute(typeof(WrapExceptionWithAttribute),false);
        return wrapExceptionAttribute;
    }

    private static void LogException(Exception thrownException)
    {
        ILog log = IoC.Resolve<ILog>();
        log.Error(thrownException);
    }
}

In order to not have to place this object as an attribute on all the different task classes I can use the assembly attribute and give it a target using a name plus a wildcard:

[assembly: TaskExceptionHandler(AttributeTargetTypes="MCromwell.StaffIntranet.Task.Tasks.*")]

And that's it – my methods are now injected with the PostSharp code after compilation to handle exceptions, at which point it calls my custom code. Very cool!

One thing you may have noticed is the use of another attribute decorated on the task methods: WrapExceptionWithAttribute. This attribute takes a Type in its constructor, the Type being the exception that should wrap the thrown exception. Ideally we would like this attribute to be placed on all public task class methods so that we raise applicable exceptions depending on the task being performed.
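
For reference, a sketch of what that attribute could look like (its implementation isn't shown in this post, so treat this as illustrative):

using System;

[AttributeUsage(AttributeTargets.Method)]
public class WrapExceptionWithAttribute : Attribute
{
    public WrapExceptionWithAttribute(Type wrapExceptionType)
    {
        WrapExceptionType = wrapExceptionType;
    }

    // the exception type the aspect should wrap the original exception with
    public Type WrapExceptionType { get; private set; }
}

// usage on a task method:
// [WrapExceptionWith(typeof(ProblemSavingStaffMemberException))]
// public void SaveStaffMember(StaffMember staffMember) { ... }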

Although how do we enforce this convention?… In my next post I will be demonstrating a technique that allows us to enforce these kinds of conventions for our systems.

Head First Series

I'm just over half way through Head First Software Development, and I have found it a fantastic book. To be honest I wasn't surprised by this conclusion: I have read both HFDP and HFOOAD and was blown away by the qualities this series has.

By moving away from the dreary, dull literature we find in most books about software development and instead keeping the tone light and amusing, using various visual elements on the page and including activities for the reader, these books are incredibly effective at getting the message through and making it stick (not easy when the topics covered are tricky and complicated).

I would highly recommend anyone to give this series a shot, you won’t be disappointed!