Splitting the Solution – Part I

Starting on the path to break the monolith

(Cross posted from DrDoctor blog – https://www.drdoctor.co.uk/blog/splitting-the-solution-part-1 )

Recently it has become clear that microservices are not a silver bullet. It is almost as common these days to see a blog post that warns developers to steer clear of microservices as it was five years ago to see one that praised them as the holy grail.

I do agree with the sentiment that development teams should start off with a well-designed monolith rather than trying to define their microservice boundaries straight away. When starting out it is impossible to predict how your idea of the business and the business domain will evolve as things mature.

This well-designed monolith has been the chosen path (explicitly or not) at DrDoctor. It has worked very well for the past 7 years and in many respects still does. Our services, websites and infrastructure are resilient, built on a distributed message bus architecture. Our build and deploy pipeline is well established and easy enough to understand, and we are still delivering features quickly (releasing to production 5+ times a week).

However, 7 years later we have evolved from a single development team of 2 developers working from a shared backlog. We now consist of four separate development teams (“pods”), each working on a different product area with its own backlog. We currently have 11 developers working across four different time zones, and the pain points of a large monolithic code base are starting to show as we scale.

Pain Points

1. Size of the solution

  • At a size of 210 .NET projects (and counting) in a single solution, even the beefiest new laptops sometimes struggle with the processing power required.
  • Switching feature branches, rebuilding, running tests and debugging locally become increasingly painful as the solution grows. Even small chunks of time spent waiting for these can slowly hinder a developer’s productivity throughout the working week.

2. Build times

  • At DrDoctor we follow best practice by having an automated CI pipeline using TeamCity. Whenever a commit is pushed to our Bitbucket repo, a new build is triggered in TeamCity for both master and feature branches.
  • The build process builds the entire solution and runs ALL tests (unit tests, integration tests, spec tests, JS tests etc.). This is brilliant for ensuring we avoid breaking existing functionality when making changes.

The problem is that whenever we make a small change in one area of our codebase we have to wait 30-40 minutes before TeamCity finishes the build. If you have broken a test, this feedback loop becomes very tiresome: even if you fix the change quickly, you have another 30-40 minutes to wait until the build passes.

3. Deployments

  • Our monolith contains over 30 different services, websites and console applications/agents that get deployed.
  • Although we can manually exclude/include these during releases, it is sometimes easier and safer to just deploy them all – for example, if you have changed a shared library which may be used in multiple places.

4. Encourages “bad habits”

  • When developing a new feature or service it often feels much easier to just add it to the existing solution, as there is a lot of useful code we can utilise. We do not currently have internal APIs that our services can use to fetch data, so we end up using the existing repository classes defined in the monolith.
  • This is a catch-22: because it is harder for us to develop new services outside of the monolith, the monolith instead grows larger.


It is worth noting that we have put a lot of work in over the last 12 months to make sure our bounded contexts are correct, which has involved a fair chunk of refactoring and shifting classes and projects around. This has put us on a good footing for the next step.

We are now acknowledging that the pain points outweigh the simplicity that comes with a monolith codebase. A dedicated team has now been allocated for 2020 to start breaking the monolith apart. Stay tuned for the next episode, where I will walk through the decision-making process for how to start tackling this beast!

Shifting to .NET Core – Problem #1 – NuGet Packages

Over the last year or so at DrDoctor, we have slowly been trying to shift away from the old-school .NET Framework and head towards the new world of .NET Core. Initially we thought that the path was going to be relatively smooth – which, as always, was not the case.

In this series of blog posts I will try and cover the problems our team has faced when attempting to slowly move towards the .NET Core world from a relatively large, distributed-but-monolithic .NET Framework solution.

The Problem

Transitive dependencies do not get pulled through to a consuming project, resulting in runtime “file not found” errors. For example, suppose I have two projects:

  1. A .NET Standard 2.0 class library that has imported the NuGet package “SharpRaven”.
  2. A .NET Framework 4.6.1 console app that consumes the above library, and instantiates a class from said library.

When I run my console app I receive a runtime error complaining that it cannot find the NuGet package SharpRaven that is referenced by the class library (i.e. a transitive dependency of my console app).
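
For anyone hitting the same thing, the failure typically surfaces as a FileNotFoundException along these lines (the assembly version and public key token below are placeholders, not the real values):

    Unhandled Exception: System.IO.FileNotFoundException: Could not load file or
    assembly 'SharpRaven, Version=x.x.x.x, Culture=neutral, PublicKeyToken=...'
    or one of its dependencies. The system cannot find the file specified.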

The Cause

The way in which projects reference NuGet packages has changed for .NET Standard and .NET Core (also known as “SDK style” projects). Up until recently we would have a separate file within our project called “packages.config”, which automatically gets generated when you add a package to your project (an example is shown below).

However the “new and improved” method does away with this separate file and instead includes NuGet references within the .csproj file itself as <PackageReference> tags.

Old way (packages.config):
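
A representative packages.config looks like this (the package IDs and versions here are illustrative, not our real dependency list):

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <!-- one entry per NuGet package, maintained by the tooling -->
      <package id="SharpRaven" version="2.4.0" targetFramework="net461" />
      <package id="Newtonsoft.Json" version="12.0.3" targetFramework="net461" />
    </packages>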

New way (within .csproj):
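
The same references expressed the new way, inside the .csproj itself (again, versions are illustrative):

    <ItemGroup>
      <!-- NuGet references live alongside the rest of the project definition -->
      <PackageReference Include="SharpRaven" Version="2.4.0" />
      <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
    </ItemGroup>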

Any projects that use the “old style” and also reference “new style” projects will not receive the transitive dependencies (which usually get pulled as .dll files into the consuming project’s /bin folder).

The solution

Luckily, in some cases there is a very simple solution; convert the old style projects to use the “new way” of referencing NuGet packages. Even better – Visual Studio has inbuilt functionality to do so.

First, we right-click the packages.config file within our old-style project, and find the “Migrate…” option.

Then follow the simple wizard through its steps.


All going well, your problem should now be solved. Because both projects are now using the same method of referencing NuGet packages, the transitive dependencies will be “pulled through” to your consuming project’s /bin folder.

As a sidenote – it is also possible to set the default package management format to PackageReference for any new projects within VS. Find the option under Tools => Options => NuGet Package Manager => General.
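
If you would rather that default live with the codebase than in the IDE, my understanding is that the same setting can be expressed in a NuGet.Config file – a sketch:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <packageManagement>
        <!-- format: 0 = packages.config, 1 = PackageReference -->
        <add key="format" value="1" />
        <!-- True suppresses the format prompt on first package install -->
        <add key="disabled" value="False" />
      </packageManagement>
    </configuration>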

The Caveats to the Solution

As always, things are never this simple. It turns out that there are specific limitations to the new <PackageReference> format, and some of these limitations are big enough to prevent you from converting older-style projects to the new way of doing things.

If you are consuming NuGet packages that depend on PowerShell “install.ps1” scripts, or packages that deliver “content” assets, you are out of luck – both are incompatible with the PackageReference format.
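
To illustrate the “content” limitation: under PackageReference only the newer “contentFiles” assets flow to consumers, while a package’s legacy “content” folder is simply not honoured. Asset flow can be tuned per reference with metadata such as IncludeAssets/ExcludeAssets – a rough sketch (the package name here is hypothetical):

    <ItemGroup>
      <PackageReference Include="Some.Content.Package" Version="1.0.0">
        <!-- opt out of contentFiles assets from this (hypothetical) package -->
        <ExcludeAssets>contentFiles</ExcludeAssets>
      </PackageReference>
    </ItemGroup>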

In my next post I plan to investigate these issues further and explain possible workarounds in these scenarios.

Final thoughts

For teams that are looking to shift their own enterprise solutions to .NET Core, it usually seems like a sensible first step to convert their class library projects one by one to .NET Standard.
For our team this seemed like a harmless and relatively simple procedure – however, we soon realised that the path from .NET Framework to .NET Core is not without its bumps and unexpected roadblocks. Hopefully whoever reads this can benefit from our lessons – saving a lot of time and frustration!

I was surprised that I hadn’t seen many warning signs from individual developers or the .NET team themselves about the difficulties of interoperability between the two flavours – but I guess it is still early days…

Useful links

https://docs.microsoft.com/en-us/nuget/reference/migrate-packages-config-to-package-reference

https://github.com/dotnet/standard/issues/481

https://blog.nuget.org/20180409/migrate-packages-config-to-package-reference.html