I see a number of questions come up when the following exception is thrown from a WCF client/server:
The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.
And a lot of the responses I see are similar to the following:
Easy to solve: you just need to set the maxReceivedMessageSize to 2147483647
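The change itself is simple enough; as a sketch, the suggested fix usually ends up looking something like this in the service or client config (the binding name here is illustrative, and readerQuotas often get raised at the same time for large payloads):

```xml
<bindings>
  <basicHttpBinding>
    <!-- "largeMessages" is an illustrative name, not a required one -->
    <binding name="largeMessages"
             maxReceivedMessageSize="2147483647">
      <!-- big payloads usually hit the reader quotas next -->
      <readerQuotas maxStringContentLength="2147483647"
                    maxArrayLength="2147483647" />
    </binding>
  </basicHttpBinding>
</bindings>
```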
So you make the change, the error goes away, and all is good…
I’m not so sure. You see, the reason WCF has the maxReceivedMessageSize in the first place is to provide a safeguard by restricting the size of the messages being transported. If we think about the change we have just made, we have gone from allowing a message of up to 64KB to one of up to 2GB. Now, chances are you’re not going to hit that limit, but suppose we get up to 5MB because we are selecting every transaction a client has ever made and they have been a customer for several years. The following is now put under extra stress:
- The server has to serialize 5MB of data, which increases memory and CPU usage
- Every network point now has to cope with 5MB going over the wire
- The calling application has to deserialize 5MB of data, which again increases memory and CPU usage
- The data then needs to be displayed to the client; if this is a web application, that means returning the HTML for the 5MB of data, which will probably end up being even larger once the presentation markup is added, significantly increasing outbound bandwidth usage
Now looking at the above you may be thinking that 5MB is not much and should be handled easily, and you’re probably correct. However, these are the steps for a single request. Once you factor in multiple clients with similar amounts of transactions all browsing the site, you’re going to have major problems, especially given the time a request may take to come back to the client’s browser; the client will usually refresh the page a number of times before giving up.
Looking at the above, there are a number of potential problems that could arise:
- Depending on the memory the server has, it may reach its maximum usage if it has to serialize multiple large objects
- The calling application may also suffer the same fate when deserializing multiple large objects
- The network may not have been prepared for the amount of traffic now going over it leading to sluggish replies
- The calling application could run out of threads all waiting for replies coming back from the server (hopefully there will at least be a timeout set so that the thread does not wait indefinitely)
- The client’s browser may not be able to deal with the vast amount of HTML being returned, or more likely the client will give up before the browser completes rendering it
So how do we solve this? Well, let’s take a step back. Once we start seeing errors about the size of messages, we should really be thinking about what it is we are trying to achieve. If we look at the example above:
As a client I want to be able to view my previous transactions
If we think about the client with five years’ worth of transactions, potentially thousands of them, we want to present the transactions that are important to them now. Chances are they are interested in their recent transactions, so you could offer:
- Last 6 months transactions
- Last 12 months transactions
- Transactions made in 2012
- Transactions made greater than £1000
By partitioning the transactions we have made it easier for the client to locate the transactions they’re interested in and also solved the issue of sending huge data objects over the network.
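WCF specifics aside, the server-side filtering behind those options could be sketched as follows (Python used for brevity; the Transaction record and the 30-day month approximation are assumptions, not part of any real contract):

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative transaction record; a real data contract will differ.
@dataclass
class Transaction:
    made_on: date
    amount: float

def last_n_months(transactions, n, today):
    """Transactions made in the last n months (approximated as 30-day months)."""
    cutoff = today - timedelta(days=30 * n)
    return [t for t in transactions if t.made_on >= cutoff]

def made_in_year(transactions, year):
    """Transactions made in a given calendar year, e.g. 2012."""
    return [t for t in transactions if t.made_on.year == year]

def over_amount(transactions, threshold):
    """Transactions greater than a given amount, e.g. 1000."""
    return [t for t in transactions if t.amount > threshold]
```

Each option returns a bounded slice of the history rather than the whole thing, so the serialized message stays a manageable size regardless of how long someone has been a customer.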
Now some people may be thinking that this is a cop-out: if I were a client I might want to see all my transactions, so how would you get this to work…
- The client would be able to request all transactions
- The client is presented with a message informing them that the results are being collated and that they will be sent an email with the results
- In the meantime, a backend service deals with the request by querying the relevant database directly (preferably separate from the OLTP database)
- It then builds the results into a file (CSV, XML, JSON etc…)
- The file is provided to the client over a more appropriate channel (FTP, zipped email attachment)
- The client is notified via email that their transactions can now be viewed, with the details needed to access the channel above
What I have done above is acknowledge that providing all of a user’s transactions could result in a massive result set, and that it therefore needs to be treated differently from a simple query performed via the website.
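As a rough sketch of that backend job (Python for brevity; fetch_page, notify, and the page size are all assumptions standing in for your real data access and delivery code):

```python
import csv
import io

PAGE_SIZE = 1000  # assumed batch size; tune for your database

def export_all_transactions(fetch_page, notify):
    """Stream every transaction into a CSV file in fixed-size pages.

    fetch_page(offset, limit) -> list of (date, amount) rows, returning []
    when exhausted; notify(report) delivers the finished file (e.g. emails
    the client a link to an FTP drop). Both are assumed callbacks.
    """
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["date", "amount"])
    offset = 0
    while True:
        page = fetch_page(offset, PAGE_SIZE)
        if not page:
            break
        writer.writerows(page)
        offset += len(page)
    report = buffer.getvalue()
    notify(report)
    return report
```

Paging keeps the read-side memory bounded no matter how large the result set grows, and the client never waits on a web request; they simply receive the notification once the file is ready.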
The key point I want to make here is that when an error is raised due to message size, it’s usually a dead giveaway that you need to re-think what it is you’re trying to achieve, especially if you have already increased the message size once before!