Still creating Windows services in .NET by hand?

Seriously, stop and use Topshelf. Why?

  • Turns a console application into a Windows service, which makes debugging much easier
  • Uses a fluent interface to set up your services, allowing easy in-code configuration (see the sketch after this list)
  • An instance name can be provided at install time, allowing multiple copies of the same service on the same machine (which means you won’t need to read this)
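
For a flavour of that fluent interface, here is a rough sketch of the kind of setup code involved; the MyService class and the names used here are placeholders, and the project's Getting Started guide has the definitive version:

using Topshelf;

public class MyService
{
    public void Start() { /* start timers, watchers, etc. */ }
    public void Stop()  { /* clean up */ }
}

public class Program
{
    public static void Main()
    {
        HostFactory.Run(host =>
        {
            // Tell Topshelf how to build, start and stop the service.
            host.Service<MyService>(s =>
            {
                s.ConstructUsing(name => new MyService());
                s.WhenStarted(service => service.Start());
                s.WhenStopped(service => service.Stop());
            });

            host.RunAsLocalSystem();
            host.SetServiceName("MyService");
            host.SetDisplayName("My Service");
            host.SetDescription("Example service hosted with Topshelf.");
        });
    }
}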

The project site has a good Getting Started guide. Once you're ready to install the service, simply run this in a cmd prompt in the same directory as your exe:

myservice.exe install /instance:test

Using dynamic for Stored Procedures

It can be quite cumbersome to call a stored procedure with ADO.NET when you also need to supply parameters; you usually end up with something like this:

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    var cmd = connection.CreateCommand();
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandText = "MyStoredProcedure";
    var firstParameter = new SqlParameter("FirstParameter", value);
    var secondParameter = new SqlParameter("SecondParameter", DBNull.Value);
    cmd.Parameters.Add(firstParameter);
    cmd.Parameters.Add(secondParameter);
    var reader = cmd.ExecuteReader();
}

There is quite a lot of ceremony in there for a simple SP call. Using a dynamic object, we can reduce it down to this:

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    dynamic myStoredProcedure = new DynamicStoredProcedure("MyStoredProcedure");
    myStoredProcedure.FirstParameter = value;
    myStoredProcedure.SecondParameter = null;
    var reader = myStoredProcedure.ExecuteReader(connection);
}

As you can see, it still requires the caller to look after the connection so that it can be managed properly and closed. However, with the dynamic keyword we can now assign to properties that don't exist on the class: they are resolved dynamically and turned into SqlParameters, with the property name becoming the parameter name and the assigned value becoming the parameter value. Nulls are automatically converted to DBNull.Value for the parameter.

The DynamicStoredProcedure object declaration looks like this:

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Dynamic;
using System.Linq;

public class DynamicStoredProcedure : DynamicObject
{
	protected string m_Name = string.Empty;
	protected IDictionary<string, SqlParameter> m_Parameters = new Dictionary<string, SqlParameter>();
	
	public DynamicStoredProcedure(string storedProcedureName)
	{
		m_Name = storedProcedureName;
	}

	public void AddParameter(SqlParameter parameter)
	{
		if (m_Parameters.ContainsKey(parameter.ParameterName))
		{
			m_Parameters[parameter.ParameterName] = parameter;
		}
		else
		{
			m_Parameters.Add(parameter.ParameterName, parameter);
		}
	}

	public override bool TrySetMember(SetMemberBinder binder, object value)
	{
		var convertedValue = (value == null) ? DBNull.Value : value;

		if (m_Parameters.ContainsKey(binder.Name))
		{
			m_Parameters[binder.Name].Value = convertedValue;
			return true;
		}

		var param = new SqlParameter(binder.Name, convertedValue);
		m_Parameters.Add(binder.Name, param);
		return true;
	}
	
	public SqlDataReader ExecuteReader(SqlConnection connection)
	{
		var cmd = CreateCommand(connection);
		var reader = cmd.ExecuteReader();
		return reader;
	}

	public T ExecuteScalar<T>(SqlConnection connection)
	{
		var cmd = CreateCommand(connection);
		var result = cmd.ExecuteScalar();
		var type = typeof(T);

		// Guard against empty results: ExecuteScalar returns null/DBNull when there is no value.
		if (result == null || result == DBNull.Value)
			return default(T);

		if (type.IsAssignableFrom(result.GetType()))
			return (T)result;

		throw new InvalidOperationException(string.Format("Cannot convert result [{0}] to type [{1}]", result, type));
	}

	public void ExecuteNonQuery(SqlConnection connection)
	{
		var cmd = CreateCommand(connection);
		cmd.ExecuteNonQuery();
	}

	private SqlCommand CreateCommand(SqlConnection connection)
	{
		var cmd = connection.CreateCommand();
		cmd.CommandType = CommandType.StoredProcedure;
		cmd.CommandText = m_Name;
		cmd.Parameters.AddRange(m_Parameters.Values.ToArray());
		return cmd;
	}
}

I have added the standard Execute methods that you find on SqlCommand. There are also times where you need to add a parameter explicitly, for instance for output parameters, or when you want more control over how the parameter is configured; for these cases I have added an explicit AddParameter method.
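
As a rough illustration of that explicit route, here is how an output parameter might be supplied to the class above; the procedure and parameter names are made up, and connectionString is assumed as in the earlier snippets:

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    dynamic myStoredProcedure = new DynamicStoredProcedure("MyStoredProcedure");
    myStoredProcedure.FirstParameter = 42;

    // Output parameters need extra configuration, so add them via AddParameter.
    var resultCode = new SqlParameter("ResultCode", SqlDbType.Int)
    {
        Direction = ParameterDirection.Output
    };
    myStoredProcedure.AddParameter(resultCode);

    myStoredProcedure.ExecuteNonQuery(connection);

    // The output value is available on the parameter once the call has executed.
    Console.WriteLine("ResultCode = {0}", resultCode.Value);
}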

Microsoft Sync Framework

I recently had a requirement to move backups from a server's local filesystem over to a network location. My first thought was a Windows service with a FileSystemWatcher watching the local directory, copying each folder/file over to the network as it changed. However, once you start digging into the details you find it isn't as straightforward: for one thing, if a backup file is deleted on the server to free up space (keeping only the last month's worth, for example) this should be reflected at the network end, and then there are also file renames, locked files etc…

Anyhow, I googled around and found the MS Sync Framework, which is specifically designed to deal with these scenarios and comes out of the box with providers for file synchronization. Great stuff! The framework is pretty big, as it has extension points to let you plug in custom providers (for synchronizing over other transports), custom data transformation, metadata storage etc…; however, for what I wanted I barely needed to scratch the surface.

In its most basic form you end up with the following code:

FileSyncProvider sourceProvider = null;
FileSyncProvider destinationProvider = null;

try
{
    sourceProvider = new FileSyncProvider(
        sourceReplicaId, sourceReplicaRootPath, filter, options);
    destinationProvider = new FileSyncProvider(
        destinationReplicaId, destinationReplicaRootPath, filter, options);

    destinationProvider.AppliedChange +=
        new EventHandler<AppliedChangeEventArgs>(OnAppliedChange);
    destinationProvider.SkippedChange +=
        new EventHandler<SkippedChangeEventArgs>(OnSkippedChange);

    SyncOrchestrator orchestrator = new SyncOrchestrator();
    orchestrator.LocalProvider = sourceProvider;
    orchestrator.RemoteProvider = destinationProvider;
    orchestrator.Direction = SyncDirection.Upload; // Sync source to destination

    Console.WriteLine("Synchronizing changes to replica: " +
        destinationProvider.RootDirectoryPath);
    orchestrator.Synchronize();
}
finally
{
    // Release resources
    if (sourceProvider != null) sourceProvider.Dispose();
    if (destinationProvider != null) destinationProvider.Dispose();
}

There is one gotcha that caught me out: the MS Sync Framework will not do any scheduling of when the synchronization is performed; that is completely up to you. In my case I was back to using a FileSystemWatcher to watch for file changes in the source folder, and once a change was detected I would tell the SyncOrchestrator to synchronize. This seemed to work fine and was not that much extra work. The only other step was to wrap this functionality up to be used from a Windows service and that was it! Overall I would recommend anyone looking to perform synchronization take a look and see whether the framework gives you something out of the box, or at least a head start.
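
As a rough sketch of that scheduling side (SynchronizeOnce() is an assumed helper that wraps the try/finally block above; this needs System.IO), the watcher ends up looking something like this:

var watcher = new FileSystemWatcher(sourceReplicaRootPath)
{
    IncludeSubdirectories = true
};

// Any create/change/delete/rename in the source folder triggers a sync pass.
FileSystemEventHandler onChange = (sender, e) => SynchronizeOnce();
watcher.Created += onChange;
watcher.Changed += onChange;
watcher.Deleted += onChange;
watcher.Renamed += (sender, e) => SynchronizeOnce();

watcher.EnableRaisingEvents = true;

In practice FileSystemWatcher can raise several events for a single file operation, so you may want to debounce these calls rather than kicking off a synchronization for every event.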

C# 3.0 Language Features Presentation

Update: the source code is now available from http://code.google.com/p/mcromwell/source/browse/#svn/trunk/3_0_Features; get hold of a good SVN client, such as TortoiseSVN, to check it out.

Yesterday evening I put together a presentation for the Isle of Man devG. I think it went OK overall, considering this was my first presentation.

I have yet to make the source code available, but once I get round to uploading it I will update this post. If anyone who attended has any questions, feel free to drop me an email or leave a comment.

UML & Design Anti-Patterns

Intro

First off, it's worth saying that I use UML a lot and am glad that we have a somewhat standardised way of communicating our models. This post is not a dig at UML, but rather covers some anti-patterns I have come across in how people use it.

Where’s the Behaviour

When we are thinking about objects we should be thinking about the behaviour an object performs: what messages does it accept, which other objects does it use to perform its behaviour, etc… And yet time and time again all I see is class diagrams. This is no good; class diagrams are static and don't provide enough information to show an object's behaviour. Instead you should start with dynamic diagrams, i.e. collaboration diagrams or sequence diagrams, which give us a much better way to model the behaviour of our objects:

[Diagrams: a class diagram of a user repository compared with a sequence diagram of the same repository]

If we compare the two diagrams, the class diagram makes us perceive that the object sits on its own and merely exposes some messages it accepts. In the sequence diagram, however, we have already begun to think about which other objects will use the object we're designing and which objects it collaborates with; this gives us a much better idea of how we can start to implement the design.

Michelangelo Effect

UML should be used to work through quick design examples, not to create a work of art! You should not get hung up on every little semantic detail (should this line be dotted or filled, do I need to underline this, etc…); as long as the general rules of the model are followed, that's enough. Don't spend time trying to stick to every part of the UML specification, as you won't get any benefit from doing so. This is one of the reasons I prefer a whiteboard or simple pencil sketches over a UML tool such as Visio: the tool's insistence on conforming to every single rule slows you down.

Modeling the World 

You should model within a reasonable context, not try to model the system as a whole. We use UML to work through designs of the system, and in these cases we are concentrating on a particular context, so we should model only the items that make sense for that context. Once we start modeling the whole system we get bogged down in details that hinder rather than help us work out the design. This is somewhat related to the anti-pattern above, in that we start treating UML models as more than just sketches to help with designing.

Using Repositories with nHibernate

Intro

The point of this article is to demonstrate two ways that I have been using the Repository pattern in conjunction with nHibernate to provide my applications with rich domain objects. I also wanted to get feedback on the options shown, and on whether there are other ways to implement Repository with nHibernate.

Repository manages session

The first option, and the one I used for the Staff Intranet and for older applications, is whereby the repository manages the session object and decides where transactions should be used. Here is an example of one of the methods we might typically see:

public int Save(User user)
{
    ISession session = NHibernateSessionManager.GetSession();
    ITransaction transaction = session.BeginTransaction();
    try
    {
        session.Save(user);
        session.Flush();
        transaction.Commit();
        return user.Id;
    }
    catch
    {
        transaction.Rollback();
        throw;
    }
    finally
    {
        NHibernateSessionManager.CloseSession();
    }
}

This has a number of issues:

  • If you need to use a number of repositories in a single transaction this option will not work
  • Integration testing becomes more difficult as the repository handles the session and transactions

Caller manages session

The other way we can implement repositories is to let the calling code manage the session and decide when transactions take place. Here is an updated version of the above to accommodate the caller managing the session & transactions:

public int Save(User user)
{
    ISession session = NHibernateSessionManager.GetSession();
    session.Save(user);
    session.Flush();
    return user.Id;
}

Our code has been reduced; however, the caller now contains the code we removed:

public int SaveUser(User user)
{
    ITransaction transaction = NHibernateSessionManager.GetSession().BeginTransaction();
    try
    {
        int userId = userRepository.Save(user);
        transaction.Commit();
        return userId;
    }
    catch (Exception thrownException)
    {
        transaction.Rollback();
        //... log exception, wrap up exception, rethrow, etc...
        throw;
    }
    finally
    {
        NHibernateSessionManager.CloseSession();
    }
}

We can now quite happily make calls to other repositories before or after saving the user, knowing that they will all use the same session & transaction.
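
For example, the caller could save a user and write an audit record through two different repositories under the one transaction; auditRepository and AuditEntry here are made-up names for illustration, not part of the code above:

int userId;
ITransaction transaction = NHibernateSessionManager.GetSession().BeginTransaction();
try
{
    userId = userRepository.Save(user);

    // A second repository taking part in the same transaction.
    auditRepository.Save(new AuditEntry("User saved", userId));

    transaction.Commit();
}
catch
{
    transaction.Rollback();
    throw;
}
finally
{
    NHibernateSessionManager.CloseSession();
}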

I'm pretty convinced that this option of having the caller responsible for managing the session & transactions is the better way of implementing repositories; however, if you know of any other ways then please point me in the right direction 🙂

Helping Integration testing

One thing that I didn't like having to do was performing integration tests against the database, the reason being the extra care needed to make sure the data was reverted back to the state it was in to begin with. By using the caller-in-charge-of-session-&-transactions approach, you can simply roll back any data changes made.
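
As a rough sketch of what that looks like in a test (NUnit-style, with a made-up UserRepository and User for illustration):

[Test]
public void Save_PersistsUser_ThenRollsBack()
{
    ISession session = NHibernateSessionManager.GetSession();
    ITransaction transaction = session.BeginTransaction();
    try
    {
        var repository = new UserRepository();
        int userId = repository.Save(new User { Name = "Integration Test User" });

        Assert.That(userId, Is.GreaterThan(0));
    }
    finally
    {
        // Never commit, so the database is left exactly as we found it.
        transaction.Rollback();
        NHibernateSessionManager.CloseSession();
    }
}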

Looking beyond blame

While sat eating my lunch at the Software Architect conference, I overheard a brief conversation involving someone who seemed to be in charge of a project, possibly a software architect or possibly a manager. The bit that caught my attention went something like this:

 …I only just found out that one of the developers has bypassed the middle layer…

The next bit of the conversation went along the lines

…When I get back I’m going to bring all the developers in and instruct them that they must go through this layer…

Now I have some big issues with this. Firstly, would it not be more constructive to first ask the developer why they didn't go through the layer he was talking about?

It may be that the architecture of that layer does not lend itself to the functionality the developer was working on; chances are that either the implementation or the processes around it are restricting or slowing down the ability to deliver functionality, hence the need to find alternatives. (Ideally you want low viscosity; when viscosity is high, the right changes to the software are more difficult than just doing kludges, and that indicates you have big problems!) Out of this may come a refactoring that has a positive impact on all the developers who work on the system.

My second issue is this: say the bypassed middle layer turns out to be the fault of the developer, who decides (s)he will instead go directly to the database (who needs all this business logic anyway, the data is what we're interested in ;-)). If that is the case, then I would suggest bringing the developer up to speed on why these things are important; explaining and showing examples, such as being able to write a unit test without having to hook up the database because it has been nicely decoupled, may give them the aha! moment where it all falls into place.

I guess both of these issues point to the fact that, before any digging into the problem was done, a blame exercise had been organised that isn't really going to help anyone, or the system, going forward.