Evil Twin Defender

Evil Twin Attack

An Evil Twin Attack is when a wireless device spoofs an existing wireless access point in the hope that an unsuspecting victim device will connect to it rather than the real access point. The chance of this happening is increased when:

  • The wireless access point is open with no authentication present (more typical in public areas)
  • The attacker performs de-auths against the real wireless access point and victim
  • Its signal strength is stronger than that of the real wireless access point

The usual reason for performing this attack is so the attacker can then run MitM (man-in-the-middle) attacks against the victim, as they now control the network.

Defending against

An Evil Twin Attack can be difficult to defend against, as the defence relies on the victim knowing not to choose the spoofed wireless device, which requires some training on the user's side. What we can do proactively is monitor the wireless airspace to see if any spoofing devices show up and then take action against them by tracking down the source and disabling the device. This is where ETD (Evil Twin Detector) fits in.

Evil Twin Defender is an open source Python tool I wrote. It runs on Linux, and I have tested it with this wireless adapter using both 2.4 and 5 GHz channels. The reason for choosing this configuration is that Linux is already geared up for monitoring Wi-Fi out of the box, so long as you give it an adapter with a chipset that can be put into monitor mode (check here for a good list).

Setting it up

ETD supports 2 modes of running:

  1. Standalone – As you’d expect, lets you run the tool from the command line directly; this is handy for the first few runs to check everything is OK and for debugging any issues.
  2. Service – Runs the tool as a systemd service; this makes for a more resilient pattern, and you can then have monitoring software keep an eye on it as part of a wider security strategy.

For both modes you are going to want to perform the following:

git clone https://github.com/stavinski/etd.git 
cd etd
pip install -r requirements.txt

You will then want to set up the configuration in etd.yaml; it should be fairly intuitive, and there is an explanation of each setting in the README. As a test you will want to set up a pattern that matches one or more of your own wireless access points. Once this is done you're good to go:

sudo python etd.py

Note that it must be run as root; this is needed to configure the wireless adapter for monitoring.

For running as a service you will need to run:

sudo ./setup.sh install

This will copy the relevant files and link the service file into systemd. Please note that when you want to change the config for the service you need to change the file in /etc/etd/etd.yaml and restart the service via sudo systemctl restart etd.service.

In Action

In this demo I’m going to have the attacker using Fluxion and we’ll see how ETD fares against it! For those not familiar, setting up a Wi-Fi MitM attack typically relies on setting up a few items and getting them in place: scanning for access points (airmon-ng, kismet), setting up a new access point daemon (hostapd), DNS spoofing (ettercap, dnsspoof) etc… Fluxion handles all this for you, with some additional bits thrown in as well!

Attacker

  1. Start Fluxion; it will carry out some checks to make sure everything it expects is in place.
  2. It will use airmon-ng to carry out a scan for targets; once you're done, CTRL-C.
  3. You're presented with a list of targets to choose which one to spoof.
  4. Once this is chosen there are a few more options to pick from before it starts all the tasks running.
  5. I can see the Wi-Fi access point appear on my phone, and the de-auth kicks me off and keeps me kicked off the genuine access point.
  6. As soon as I connect to it I can see all the DNS spoofing in action and am presented with a fake captive portal page.

Defender

    1. I set up the config for my wireless adapter and run ETD, change the pattern to match my genuine wireless access point, and set up an ignore for the real MAC address.
    2. After a few seconds I get a hit:
      [+]     9	ca:0e:14:6f:2e:44   	The Shire           	  -37	OPN

      This gives me the Channel, BSSID, ESSID and RSSI of the device.

    3. I also get a syslog entry:
      Jun  4 20:10:21 2018-06-04 20:10:21,356 Evil Twin Detector:     9#011ca:0e:14:6f:28:44   #011The Shire           #011  -33#011OPN
    4. and an email alert is also sent: [screenshot: Screenshot from 2018-06-04 20-11-13]

As you can see it did not take long for the attacker to be found, and the next steps would be to locate the spoofing device and shut it down.

Still creating Windows services in .NET by hand?

Seriously, stop and use Topshelf. Why?…

  • Turns a console application into a Windows service, which allows much easier debugging
  • Uses a fluent interface to set up your services, allowing easy in-code setup (a quick sketch follows this list)
  • An instance name can be provided at install, allowing multiple copies of the same service on the same machine (which means you won’t need to read this)
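
To give a feel for the fluent setup, here is a minimal sketch; HeartbeatService and the service names are hypothetical examples rather than anything from a real project:

using System;
using System.Timers;
using Topshelf;

// Hypothetical example service - any plain class with Start/Stop methods will do
public class HeartbeatService
{
    private readonly Timer _timer = new Timer(5000) { AutoReset = true };

    public HeartbeatService()
    {
        _timer.Elapsed += (sender, args) => Console.WriteLine("Still alive at {0}", DateTime.Now);
    }

    public void Start() { _timer.Start(); }
    public void Stop() { _timer.Stop(); }
}

public class Program
{
    public static void Main()
    {
        // Topshelf wraps the console app and takes care of the service plumbing
        HostFactory.Run(host =>
        {
            host.Service<HeartbeatService>(svc =>
            {
                svc.ConstructUsing(name => new HeartbeatService());
                svc.WhenStarted(hb => hb.Start());
                svc.WhenStopped(hb => hb.Stop());
            });

            host.RunAsLocalSystem();
            host.SetServiceName("Heartbeat");
            host.SetDisplayName("Heartbeat Service");
            host.SetDescription("Writes a heartbeat message every 5 seconds.");
        });
    }
}

Because the exe is still just a console application, you can run it directly from the command line and debug it like any other console app.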

The project site has a good Getting Started guide; once you're ready to install the service, simply run this in a cmd prompt in the same directory as your exe:

myservice.exe install /instance:test

Using dynamic for Stored Procedures

It can be quite cumbersome to call a stored procedure with ADO.NET when you also need to supply parameters; you usually end up with this:

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    var cmd = connection.CreateCommand();
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandText = "MyStoredProcedure";
    var firstParameter = new SqlParameter("FirstParameter", value);
    var secondParameter = new SqlParameter("SecondParameter", DBNull.Value);
    cmd.Parameters.Add(firstParameter);
    cmd.Parameters.Add(secondParameter);
    var reader = cmd.ExecuteReader();
}

There is quite a lot of ceremony in there for a simple stored procedure call; using a dynamic object we can reduce this down to:

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    dynamic myStoredProcedure = new DynamicStoredProcedure("MyStoredProcedure");
    myStoredProcedure.FirstParameter = value;
    myStoredProcedure.SecondParameter = null;
    var reader = myStoredProcedure.ExecuteReader(connection);
}

As you can see it still requires the caller to look after the connection so that it can be managed properly and closed. However, by using the dynamic keyword we can now make calls to properties that don't exist; they are instead resolved dynamically and treated as a SqlParameter, with the name of the property used as the parameter name and the value used as the parameter value. Nulls are also automatically converted to DBNull.Value.

The DynamicStoredProcedure object declaration looks like this:

public class DynamicStoredProcedure : DynamicObject
{
	protected string m_Name = string.Empty;
	protected IDictionary<string, SqlParameter> m_Parameters = new Dictionary<string, SqlParameter>();
	
	public DynamicStoredProcedure(string storedProcedureName)
	{
		m_Name = storedProcedureName;
	}

	public void AddParameter(SqlParameter parameter)
	{
		if (m_Parameters.ContainsKey(parameter.ParameterName))
		{
			m_Parameters[parameter.ParameterName] = parameter;
		}
		else
		{
			m_Parameters.Add(parameter.ParameterName, parameter);
		}
	}

	public override bool TrySetMember(SetMemberBinder binder, object value)
	{
		var convertedValue = (value == null) ? DBNull.Value : value;

		if (m_Parameters.ContainsKey(binder.Name))
		{
			m_Parameters[binder.Name].Value = convertedValue;
			return true;
		}

		var param = new SqlParameter(binder.Name, convertedValue);
		m_Parameters.Add(binder.Name, param);
		return true;
	}
	
	public SqlDataReader ExecuteReader(SqlConnection connection)
	{
		var cmd = CreateCommand(connection);
		var reader = cmd.ExecuteReader();
		return reader;
	}

	public T ExecuteScalar<T>(SqlConnection connection)
	{
		var cmd = CreateCommand(connection);
		var result = cmd.ExecuteScalar();
		var type = typeof(T);

		if (type.IsAssignableFrom(result.GetType()))
			return (T)result;

		throw new InvalidOperationException(string.Format("Cannot convert result [{0}] to type [{1}]", result, type));
	}

	public void ExecuteNonQuery(SqlConnection connection)
	{
		var cmd = CreateCommand(connection);
		cmd.ExecuteNonQuery();
	}

	private SqlCommand CreateCommand(SqlConnection connection)
	{
		var cmd = connection.CreateCommand();
		cmd.CommandType = CommandType.StoredProcedure;
		cmd.CommandText = m_Name;
		cmd.Parameters.AddRange(m_Parameters.Values.ToArray());
		return cmd;
	}
}

I have added the standard Execute methods that you find on SqlCommand. There are also times where you need to add a parameter explicitly, for instance for output parameters, or when you want more control over how the parameter is configured; for these cases I have added an explicit AddParameter method.
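
For example, an output parameter could be added explicitly like this (a rough sketch; the CreateUser procedure and its parameters are hypothetical, and System.Data / System.Data.SqlClient are assumed to be imported):

// Hypothetical stored procedure that returns the new user's id via an output parameter
dynamic createUser = new DynamicStoredProcedure("CreateUser");
createUser.Username = "jbloggs";

var newUserIdParam = new SqlParameter("NewUserId", SqlDbType.Int)
{
    Direction = ParameterDirection.Output
};
createUser.AddParameter(newUserIdParam);

createUser.ExecuteNonQuery(connection);
var newUserId = (int)newUserIdParam.Value;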

Microsoft Sync Framework

I recently had a requirement to move backups from a server’s local filesystem over to a network location. My first thoughts were of a Windows service with a FileSystemWatcher watching the local directory and, upon a folder/file being changed, copying the file over to the network. However, once you start digging into the details you find it isn’t as straightforward: for one thing, if a backup file is deleted on the server to free up space (keeping only the last month’s worth, for example) this should be reflected on the network end, plus there are also file renames, locked files etc…

Anyhow, I googled around and found the MS Sync Framework, which is specially created to deal with these scenarios and out of the box comes with providers for dealing with file synchronization, great stuff! The framework is pretty big, as it has extension points to let you do things with custom providers (for other transport synchronization), custom data transformation, metadata stored with items etc… However, for what I wanted I barely needed to scratch the surface.

In its most basic form you end up with the following code:

FileSyncProvider sourceProvider = null;
FileSyncProvider destinationProvider = null;

try
{
    sourceProvider = new FileSyncProvider(
        sourceReplicaId, sourceReplicaRootPath, filter, options);
    destinationProvider = new FileSyncProvider(
        destinationReplicaId, destinationReplicaRootPath, filter, options);

    destinationProvider.AppliedChange +=
        new EventHandler<AppliedChangeEventArgs>(OnAppliedChange);
    destinationProvider.SkippedChange +=
        new EventHandler<SkippedChangeEventArgs>(OnSkippedChange);

    SyncOrchestrator orchestrator = new SyncOrchestrator();
    orchestrator.LocalProvider = sourceProvider;
    orchestrator.RemoteProvider = destinationProvider;
    orchestrator.Direction = SyncDirection.Upload; // Sync source to destination

    Console.WriteLine("Synchronizing changes to replica: " +
        destinationProvider.RootDirectoryPath);
    orchestrator.Synchronize();
}
finally
{
    // Release resources
    if (sourceProvider != null) sourceProvider.Dispose();
    if (destinationProvider != null) destinationProvider.Dispose();
}

There is one gotcha that caught me out: the MS Sync Framework will not do any of the scheduling of when the synchronization is performed, that is completely up to you. So in my case I was back to using a FileSystemWatcher to watch for file changes in the source folder, and once a change was detected I would tell the SyncOrchestrator to synchronize; this seemed to work fine and was not much extra work. The only other step was to wrap this functionality up to be used from a Windows service and that was it! Overall I would recommend anyone looking to perform synchronization to take a look and see whether the framework gives you something out of the box or can at least give you a head start.
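
As a rough sketch of that wiring, assuming a SynchronizeFolders method that wraps the provider/orchestrator code shown above (the method name is hypothetical, and System.IO is required for FileSystemWatcher):

// Kick off a synchronization whenever something changes in the source folder
var watcher = new FileSystemWatcher(sourceReplicaRootPath)
{
    IncludeSubdirectories = true,
    EnableRaisingEvents = true
};

FileSystemEventHandler onChange = (sender, e) => SynchronizeFolders();
watcher.Created += onChange;
watcher.Changed += onChange;
watcher.Deleted += onChange;
watcher.Renamed += (sender, e) => SynchronizeFolders();

In practice you would probably want to debounce these events, as a single file copy can raise several Changed notifications in quick succession.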

C# 3.0 Language Features Presentation

Update: Source code is now available from here http://code.google.com/p/mcromwell/source/browse/#svn/trunk/3_0_Features; get hold of a good SVN client to check it out, such as TortoiseSVN.

Yesterday evening I put together a presentation for the Isle of Man devG. I think it went OK overall, considering this was my first presentation.

I have yet to make the source code available, but once I get round to uploading it I will update this post. If anyone attending has any questions, feel free to drop me an email or leave a comment.

UML & Design Anti-Patterns

Intro

First off it’s worth saying that I use UML a lot and am glad that we have a somewhat standardised way of communicating our models; this post is not a dig at UML but rather covers some anti-patterns I have come across in relation to people using UML.

Where’s the Behaviour

When we are thinking about objects we should be thinking about the behaviour an object performs: what messages does it accept, which other objects does it use to perform its behaviour, etc… And yet time and time again all I see is class diagrams. This is no good: class diagrams are static and don’t provide enough information to show an object’s behaviour. Instead you should start with dynamic diagrams, i.e. collaboration diagrams or sequence diagrams; these provide us with a much better way to model the behaviour of our objects:

[Diagram: a UserRepository shown as a class diagram vs. as part of a sequence diagram]

If we compare the two diagrams we can see that the class diagram makes us perceive that the object sits on its own and exposes some messages it accepts. In the sequence diagram, however, we have already begun to think about which other objects will use the object we're designing and which collaborating objects it uses; this gives us a much better idea of how we can start to implement the design.

Michelangelo Effect

UML should be used to go through quick design examples, not to create a work of art! You should not be getting hung up on every little semantic detail (should this line be dotted or filled, do I need to underline this, etc…); as long as the general rules of the model are followed, that’s enough. Don’t spend time trying to stick to every part of the UML specification, you won’t get any benefit from doing so. This is one of the reasons I prefer to use a whiteboard or simple pencil sketches rather than a UML tool such as Visio, as its insistence on conforming to every single rule slows you down.

Modeling the World 

You should model within a reasonable context, not try to model the system as a whole. We use UML to go through some designs of the system, and in these cases we are concentrating on a particular context; we should be modeling only the items that make sense for that context. Once we start modeling the whole system we get bogged down in details that hinder rather than help us work out the design. This is somewhat related to the above anti-pattern, in that we start treating UML models as more than just sketches to help with designing.

Using Repositories with nHibernate

Intro

The point of this article is to demonstrate two ways that I have been using the Repository pattern in conjunction with NHibernate to provide my applications with rich domain objects. I also wanted to get feedback on the options shown and on whether there are any other ways to implement Repository with NHibernate.

Repository manages session

The first option, and the one I used for the Staff Intranet and for older applications, is whereby the repository manages the session object and decides where transactions should be used; here is an example of one of the methods that we might typically see:

public int Save(User user)
{
    ISession session = NHibernateSessionManager.GetSession();
    ITransaction transaction = session.BeginTransaction();
    try
    {
        session.Save(user);
        session.Flush();
        transaction.Commit();
        return user.Id;
    }
    catch
    {
        transaction.Rollback();
        throw;
    }
    finally
    {
        NHibernateSessionManager.CloseSession();
    }
}

This has a number of issues:

  • If you need to use a number of repositories in a single transaction this option will not work
  • Integration testing becomes more difficult as the repository handles the session and transactions

Caller manages session

The other way we can implement repositories is to let the calling code manage the session and decide when transactions take place; here is an updated version of the above to accommodate the caller managing the session & transactions:

public int Save(User user)
{
    ISession session = NHibernateSessionManager.GetSession();
    session.Save(user);
    session.Flush();
    return user.Id;
}

Our code has been reduced; however, the caller now contains the code we removed:

public int SaveUser(User user)
{
    ITransaction transaction = NHibernateSessionManager.GetSession().BeginTransaction();
    try
    {
        int userId = userRepository.Save(user);
        transaction.Commit();
        return userId;
    }
    catch (Exception thrownException)
    {
        transaction.Rollback();
        //... log exception, wrap up exception, etc...
        throw;
    }
    finally
    {
        NHibernateSessionManager.CloseSession();
    }
}

We could now quite happily make calls to other repositories before or after saving the user, knowing that they would all be using the same session & transaction.
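
For example, a rough sketch of two repositories sharing one transaction might look like this (OrderRepository and orderRepository are hypothetical, purely to illustrate the point):

public void SaveUserWithOrder(User user, Order order)
{
    ITransaction transaction = NHibernateSessionManager.GetSession().BeginTransaction();
    try
    {
        // Both repositories pick up the same session from NHibernateSessionManager
        userRepository.Save(user);
        orderRepository.Save(order);
        transaction.Commit();
    }
    catch
    {
        transaction.Rollback();
        throw;
    }
    finally
    {
        NHibernateSessionManager.CloseSession();
    }
}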

I’m pretty convinced that this option of having the caller responsible for managing the session & transactions is a better way of implementing repositories; however, if you know of any other ways then please point me in the right direction 🙂

Helping Integration testing

One thing that I didn’t like having to do was performing integration tests against the database, the reason being the extra care needed to make sure that data was reverted back to the state it was in to begin with. By using the caller-in-charge-of-session-&-transactions approach, you can simply roll back any data changes a test makes.
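
A minimal sketch of what such a test could look like, assuming NUnit and the NHibernateSessionManager used above (the User properties and test names are hypothetical):

[TestFixture]
public class UserRepositoryTests
{
    private ITransaction transaction;
    private UserRepository userRepository;

    [SetUp]
    public void SetUp()
    {
        userRepository = new UserRepository();
        // Open the transaction up front so every change the test makes can be undone
        transaction = NHibernateSessionManager.GetSession().BeginTransaction();
    }

    [Test]
    public void Save_AssignsAnId()
    {
        var user = new User { Name = "Test User" };

        int id = userRepository.Save(user);

        Assert.That(id, Is.GreaterThan(0));
    }

    [TearDown]
    public void TearDown()
    {
        // Roll back so the database is left exactly as we found it
        transaction.Rollback();
        NHibernateSessionManager.CloseSession();
    }
}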