.NET Core 6 and Docker: A Tale of Two Servers

Perhaps this is really basic stuff to anyone who's already been using Docker, but it took me a few hours to get this working. It started as a workaround to get a containerised .NET Core application communicating with a SQL Server database.

The first step was to set up Docker support in the Visual Studio project:

  • Right-click on the project root, and select 'Add' -> 'Docker Support...'.
  • Right-click on the project root again, and select 'Add' -> 'Container Orchestrator Support...'.

At this point, there should be a docker-compose section in the Visual Studio solution, with .dockerignore and docker-compose.yml files. There will also be a hidden override file there, plus a Dockerfile within the project. Combined, these should enable other developers to replicate the process of creating the images.
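For reference, the generated Dockerfile is a multi-stage build along these lines (the Test-MVC project name is taken from the compose file; the exact contents Visual Studio emits will vary with the project):

```dockerfile
# Sketch of the multi-stage Dockerfile Visual Studio generates for a .NET 6 project.
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["Test-MVC/Test-MVC.csproj", "Test-MVC/"]
RUN dotnet restore "Test-MVC/Test-MVC.csproj"
COPY . .
WORKDIR "/src/Test-MVC"
RUN dotnet build "Test-MVC.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "Test-MVC.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Test-MVC.dll"]
```

The split into restore/build/publish stages means the final image contains only the ASP.NET runtime and the published output, not the SDK.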

The docker-compose.yml file should initially contain the following:

version: '3.4'

services:
  dotnet-core-6-mvc:
    image: ${DOCKER_REGISTRY-}dotnetcore6mvc
    build:
      context: .
      dockerfile: Test-MVC/Dockerfile

Also, the Visual Studio debug options will be replaced with the configuration for Docker Compose. To run the project using IIS Express again, just click the drop-down with 'docker-compose' and select the original project name.

The .NET Core application now runs as a Docker container, with its image listed locally (visible with 'docker images'). Aside from not being able to connect to a local database server, it runs well enough.

The SQL Server Container

The second container was created by first pulling the MSSQL image from Microsoft:

docker pull mcr.microsoft.com/mssql/server:2019-latest

Running 'docker images', something like the following should appear in the local image list:

mcr.microsoft.com/mssql/server  2019-latest  d78e982c2f2b  3 months ago  1.48GB

Now we want to initialise the SQL Server image, making sure to specify a reasonably complex password (or the service will fail to start):

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=StandardPass56" -p 1433:1433 --name RBC -d mcr.microsoft.com/mssql/server:2019-latest

If the service container is running (we can check using Docker Desktop or the CLI), it should be possible to connect to it using SQL Server Management Studio on port 1433. At this point I also ran a few more SQL scripts to recreate the tables that were present in the original database (see the Azure Data Studio notebook).
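The scripts can also be run without SSMS, using the sqlcmd tool bundled in the container image (the 'RBC' container name matches the run command above; the tools path is where the 2019 image ships sqlcmd):

```shell
# Open an interactive sqlcmd session inside the running container.
docker exec -it RBC /opt/mssql-tools/bin/sqlcmd \
    -S localhost -U SA -P "StandardPass56"

# Or run a one-off query to confirm the server is up:
docker exec RBC /opt/mssql-tools/bin/sqlcmd \
    -S localhost -U SA -P "StandardPass56" \
    -Q "SELECT @@VERSION"
```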

Getting a Containerised Application Communicating with a Containerised Database

I had to resort to Stack Overflow to learn why the application still wouldn't establish a connection to the database, even when trying an explicit loopback address and the local machine hostname, and after checking the server was configured to accept TCP connections. Using 'host.docker.internal' in the connection string resolved this: it's a special DNS name through which a container can reach services running on the host machine.

"ConnectionStrings": {
  "DefaultConnection": "Server=host.docker.internal;Initial Catalog=dockertest;Integrated Security=false;User ID=SA;Password=StandardPass56"
}

If, for some reason, the project references an older database connection in Connected Services and secrets.json (or Azure Key Vault), the application will ignore the connection string in appsettings.json and revert to that. It's a useful feature that enables the project to be distributed without exposing the database authentication data.
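If a stale secret is the cause, it can be inspected and cleared with the standard Secret Manager CLI (run from the project directory; the 'ConnectionStrings:DefaultConnection' key assumes the naming used in appsettings.json above):

```shell
# List any secrets currently overriding appsettings.json
dotnet user-secrets list

# Remove a stale connection string override
dotnet user-secrets remove "ConnectionStrings:DefaultConnection"
```

With that out of the way, the SQL Server service can be added to docker-compose.yml: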

version: '3.4'

services:
  dotnet-core-6-mvc:
    image: ${DOCKER_REGISTRY-}dotnetcore6mvc
    build:
      context: .
      dockerfile: Test-MVC/Dockerfile

  DockerSqlServerNew:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - SA_PASSWORD=StandardPass56
      - ACCEPT_EULA=Y
    ports:
      - "1433:1433"

After running 'docker-compose build' and 'docker-compose up', Docker Desktop (or 'docker ps') should list the composed application, with both the web application and the DB service running as containers.
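As an aside, services defined in the same compose file share a default network, so the application container should also be able to reach the database through the compose service name rather than host.docker.internal. A sketch of that variant (untested here, but it avoids any dependence on the host port mapping, since container-to-container traffic goes straight to port 1433):

```json
"ConnectionStrings": {
  "DefaultConnection": "Server=DockerSqlServerNew;Initial Catalog=dockertest;Integrated Security=false;User ID=SA;Password=StandardPass56"
}
```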