It's very common for .NET developers to use a local SQL database when running applications on their development machine. Quite often that's SQL Server Express with its convenient and lightweight "LocalDB" feature. This saves you from having to connect out to an external SQL Server shared by other developers, or pay for a cloud-hosted database.

However, if you have Docker Desktop installed on your machine (which I highly recommend), you also have the option of running your database in a container.

Why run in a container?

There are a number of benefits to the approach of running your databases in containers rather than directly on your developer machine.

First, there are no prerequisites other than Docker. You don't need SQL Server Express installed, and in fact, with Docker multi-stage builds and Visual Studio Code Remote Containers, you could even develop and run your whole application locally with nothing more than Docker installed.

Second, if you are running other containers, they can't easily connect out to a SQL Express LocalDB running on your developer machine. Running your database as another container solves this problem, as containers hosted on the same computer can easily communicate with each other.

Third, running databases as containers makes it very easy to switch to other versions of the database by mounting different "volumes" (which we'll discuss later), or to create throwaway databases for testing, where you want to discard your changes once you've finished.

Fourth, even if you don't plan to use containerized databases in production (I normally use Azure SQL Database), they are extremely convenient if you have a CI/CD process that runs "integration tests" where your application is tested against a real database: the server running those tests only needs to have Docker installed.

Fifth, the containerized approach to application dependencies is extremely beneficial for microservices development, and once you've started using it for one dependency such as a database, you'll quickly want to use it for others such as Redis, RabbitMQ, and Elasticsearch.

Let's see how to get set up.

Step 1 - Running SQL Server in a Container

You'll need Docker Desktop installed and running in Linux containers mode to follow these instructions.

We'll use the docker run command, with -d to run in the background, --name to give our container a meaningful name, -p to expose port 1433, and -e to set two environment variables: one to accept the EULA and one to set the SA password. The image we'll run is mcr.microsoft.com/mssql/server:2019-latest.

docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=My!P@ssw0rd1" -p 1433:1433 --name customerdb mcr.microsoft.com/mssql/server:2019-latest

With that done, we'll have SQL Server running locally as a container. You can stop it with docker stop customerdb, or remove it entirely with docker rm -f customerdb.
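
To check that the container came up successfully, docker ps will show it in the list of running containers, and you can follow its startup output with:

docker logs customerdb

SQL Server takes a few seconds to initialize, so wait until the logs report that it is ready for client connections before you try to connect.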

Step 2 - Connecting to the database

With SQL Server Express LocalDB, your connection strings look something like this:

Server=(localdb)\\MSSQLLocalDB;Database=CustomerDB;Trusted_Connection=True;MultipleActiveResultSets=true

Our SQL Server container is listening on localhost, so our connection string will look like this (I've not included Database=CustomerDB; because we haven't created that database yet):

Server=localhost;User Id=sa;Password=My!P@ssw0rd1;
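
If you want a quick smoke test of that connection string before going any further, here's a minimal sketch using plain System.Data.SqlClient (no Dapper needed yet) that just asks the server for its version:

using System;
using System.Data.SqlClient;

using (var conn = new SqlConnection("Server=localhost;User Id=sa;Password=My!P@ssw0rd1;"))
using (var cmd = new SqlCommand("SELECT @@VERSION", conn))
{
    conn.Open();
    // prints something like "Microsoft SQL Server 2019 ..." if the container is reachable
    Console.WriteLine(cmd.ExecuteScalar());
}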

Here's some example code using Dapper that can connect to our server, create a database, create a table, add a record and query it:

// n.b. you will need the Dapper and System.Data.SqlClient NuGet packages to run this
using System;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

var connectionString = "Server=localhost;User Id=sa;Password=My!P@ssw0rd1;";
var createDBSQL = @"
IF NOT EXISTS(SELECT * FROM sys.databases WHERE name = 'CustomerDB')
    BEGIN
        CREATE DATABASE CustomerDB
    END";
var createTableSQL = @"
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = N'Customers')
    BEGIN
        CREATE TABLE [dbo].[Customers] (
            [CustomerID] INT            IDENTITY (1, 1) NOT NULL,
            [FirstName]  NVARCHAR (MAX) NULL,
            [LastName]   NVARCHAR (MAX) NULL,
            [Email]      NVARCHAR (MAX) NULL,
            CONSTRAINT [PK_dbo.Customers] PRIMARY KEY CLUSTERED ([CustomerID] ASC)
        )
    END";

var insertSQL = "INSERT INTO Customers (FirstName, LastName, Email) VALUES ('Mark', 'Heath', 'mark.heath@example.com');";

using (var db = new SqlConnection(connectionString))
{
    // open explicitly so the USE CustomerDB below stays in effect for the later commands
    db.Open();
    db.Execute(createDBSQL);
    Console.WriteLine("Created DB");
    db.Execute("USE CustomerDB");
    db.Execute(createTableSQL);
    Console.WriteLine("Created Table");
    var affectedRows = db.Execute(insertSQL);
    Console.WriteLine("Added user");
    var customers = db.Query<Customer>("SELECT * FROM Customers").ToList();
    foreach(var c in customers)
    {
        Console.WriteLine($"{c.CustomerID} {c.FirstName} {c.LastName}");
    }
}

public class Customer
{
    public int CustomerID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}
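
Once the database exists, subsequent connections can go straight to it by adding Database=CustomerDB to the connection string, which avoids the USE CustomerDB step above. Here's a minimal sketch (it assumes the CustomerDB database and Customers table created by the code above):

// assumes the CustomerDB database and Customers table from the example above already exist
var customerDbConnectionString = "Server=localhost;Database=CustomerDB;User Id=sa;Password=My!P@ssw0rd1;";
using (var db = new SqlConnection(customerDbConnectionString))
{
    var count = db.ExecuteScalar<int>("SELECT COUNT(*) FROM Customers");
    Console.WriteLine($"There are {count} customers");
}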

Step 3 - Using volumes

Now, if we stop our database with docker stop customerdb and then restart it with docker start customerdb, the table and data in the database will be retained. But if we remove the container with docker rm -f customerdb, any data is lost. What if we want to keep our data, but not the container?

We can do that with volumes. Let's add one more parameter to our docker run command: -v customervol:/var/opt/mssql. This "mounts" a "volume" called customervol (created automatically if it doesn't exist) at the location /var/opt/mssql inside the container. That's where SQL Server writes its data, so the data will now live in the volume rather than in the container's own filesystem.

docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=My!P@ssw0rd1" -p 1433:1433 --name customerdb -v customervol:/var/opt/mssql mcr.microsoft.com/mssql/server:2019-latest

Now we can connect and run the same code as before, but this time if we remove the container with docker rm -f customerdb, our volume will still exist. We can see the volumes we have with docker volume ls and delete the volume with docker volume rm customervol if we want to get rid of it.

The great thing about volumes is that we can now start a brand new container, and mount the same volume. All of the contents of our database will be there. Not only that, but we could have several volumes representing different versions of the database, or databases pre-seeded with certain starting data, and easily switch between them.
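
For example, here's how we could run a second, independent server against a different volume (the customerdb2 and customervol2 names and the 1434 host port are just illustrative):

docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=My!P@ssw0rd1" -p 1434:1433 --name customerdb2 -v customervol2:/var/opt/mssql mcr.microsoft.com/mssql/server:2019-latest

You'd connect to this one with Server=localhost,1434 in your connection string, and removing the container leaves customervol2 behind, ready to be mounted into a fresh container later.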

Comments

Comment by aregaz

Good idea!
I would also suggest adding --restart unless-stopped to the docker run command. In this way when you restart your machine the container will start automatically.

Comment by Mark Heath

great suggestion, thanks

Comment by Mark Arnold

Thanks Mark! Have you used this in a substantial way with your dev team? If so I'd be interested to know your team's overall devOps approach and how this fits in. Sounds like a lot of opportunities here to optimize the collab around schema and data updates. Thanks!

Comment by Mark Heath

hi Mark, the reason I'm blogging about it is that I've been considering introducing this on some older projects where most developers are still using LocalDB. It's one of the things I'm hoping to make progress on in the new year as part of a drive to migrate some legacy codebases away from .NET Framework 4.7.2 and onto .NET Core to open the door to easier containerization.
