Weeknote 0 (zero)

For a few years, I have regularly practised writing notes to summarise weeks for my teams or to reflect on my work. In addition, to stay “current” in my first aid roles, I often write reflective notes as part of my Continuous Professional Development (CPD). I am very open about my source code and convinced of the value of working in the open. Yet, I have only recently noticed the #weeknotes community and practice.

There is so much content produced! But who has the time to read it all? And without readership, how useful is blogging? Well… let’s try. 

But first, a small disclaimer. I have tried blogging before. A few times. But I always found it challenging to stay within only one area of my diverse interests. I always thought sticking to one subject would make it easier to read and more “useful”. But the fact is, all those elements influence one another. So, if I am to try “weeknoting” it will have to be about all of it: work, civil service, tech, data, people and process. It will include first aid, volunteering, languages, music and continuous education. Will there be anybody who can get through it, let alone find any value in it? I probably wouldn’t!

TL;DR:

  • Burnout is a thing and we talked about it this week.
  • Porembela, working in a hybrid/remote way, played in Galicia for the first time.
  • I have to make a move on establishing the cross-government linked data community.
  • GDS, and CDDO are interesting, but CDPS is even more so!
  • We dropped out from the Data Challenge in the semi-finals – but plan to continue the work regardless. Somehow.
  • While waiting for 999 calls, I experienced first-hand why there is a need for the CDPS in Wales. They have plenty to do.

The full version:

My last week (and a bit) started the Wednesday before last. I got up early, and by 7am I was already sitting on a train to London, working through my inbox. It was going to be a long day and a long week, involving travel to an extent I hadn’t experienced in six or seven years. Back then, I wouldn’t even have thought twice about it, but now, going away for eight days? Is it necessary? Can I even do it?

Hour after hour, meeting after meeting, task after task, I went through the day. By 17:00, I started getting messages from local pubs where colleagues were gathering – pre-meetings for our big Digital, Data and Technology (DDaT) Fest the next day. But I wasn’t quite ready for it yet. So I kept working, and by 19:00, after 12 hours of work, I realised I still hadn’t prepared for the “burnout” presentation I was to co-present the next day!

If this isn’t self-irony, I don’t know what is! I spent my thirteenth working hour of the day preparing to talk to people about the importance of looking after themselves and not working too much!

DDaT Fest the next day was good. I had joined the Department for International Trade 16 months earlier, and this was the first time I got to see how big our wider team is. Most people there I hadn’t seen in person before. Many good discussions were started, and many ideas were exchanged. But the “burnout” session’s success was a surprise. We had several times more people there than we expected. It was one of the break-out sessions, but it looked as if everyone was there. It shouldn’t have been a surprise. After all, a lot of recent research suggests that roughly 1 in 2 of us show symptoms of it from time to time, and the World Health Organisation now recognises it as an “occupational phenomenon” in the ICD-11, which came into effect earlier this year. But it was a surprise to see the interest, to hear many people sharing their experiences in group discussions, and even more approaching us later to talk about it. This deserves a write-up of its own!

After DDaT Fest, I took a day off, but not to go home and recover! Instead, I went to Galicia (north-west Spain) to continue experiments in hybrid ways of working. I play in a folk duo with really bad timing. We published our first mini album on the 1st of March 2020, during the only festival we got to play that year. After a restart in 2022, Gerardo – the other half of the duo – moved from Wales to Galicia. But we enjoy playing together, and we are both experienced in remote working (IT style), so we are trying to find ways for Porembela to continue despite the 2,000km between us. It was an experiment, and it went well. We successfully tested the logistics and played two good concerts on Friday and Saturday. We have a short list of improvements to try in the next iteration, but there will definitely be a next iteration, and we will make it work. The best part was that we got to play one of my xotas in Galicia. A traditional Galician xota (a folk dance) composed by a Pole in Wales was well received in its natural habitat.

By Sunday night, I was back in London, ready for meetings starting early on Monday morning, all refreshed by the landscape change over the weekend, both geographical and mental. 

During my Monday visit to the Government Digital Service (GDS) office, I met with Charles – the Head of Data Architecture in the Central Digital and Data Office (CDDO). I wanted to discuss ways to progress my linked data and services ideas and how to start a cross-government community of practice (or interest). My talks and presentations over the summer and during DataConnect22 on the subject were very popular, and I have promised a few people that I would start a forum to progress it. But how? I know I could just do it, create a new Slack channel or even a whole workspace. Still, there are already so many, and it seems the discussions are not taking us in any specific direction. So rather than starting from scratch, I’m trying to find a way to use what already exists, get some central support, and perhaps help re-invigorate the work done there in the past. The details? I won’t share them here just yet, but I will be moving quickly now. I have spent a year convincing and influencing, preparing the ground. But in proper Civil Service style, it seems my window of opportunity is closing. I have to act, or I will have to start the convincing and influencing all over again!

The same day, while in the GDS office, I discovered the Centre for Digital Public Services (CDPS). On the surface, this Welsh Government arm’s-length body appears to be a Welsh copy of GDS. It got me interested straight away. I am a big fan of the GDS ideas (even if not always of their implementations) and the digital public service revolution they started. I have spent my time in the Civil Service trying to figure out how it works and why it doesn’t. Why are digital services so different here in my adopted home country – Wales? Why do local authorities appear to ignore it all, and why… too many questions to list here.

The point is: the idea of the Welsh GDS sparked my interest. Since then, I have spoken to a few people over the week – thank you for your time – and I was wrong. The Centre for Digital Public Services is very different from GDS and CDDO. The mission appears to be more pointed, the ambition more focused, and the drive to deliver fresher. The challenge in front of them is enormous, and a lot is at stake, but they are on a journey I will want to watch closely. 

Eventually, I got to Wednesday – my eighth day away from home, the day of the Data Challenge semi-finals. Since September, I have been working in a cross-functional, cross-departmental team to prepare our submission. We are trying to reduce the time it takes to transfer security clearances by looking at the data differently – as a personal, not an organisational asset. In addition, we think this would open many other possibilities to do with things like the Disclosure and Barring Service (DBS), qualifications, mobility of NHS staff between nations and many others. We have a great idea. We thought the presentation went well. And then we dropped out. We are still waiting for feedback more detailed than “Your presentation was great, the idea sound, but it was a very tough competition and we took a very long time to make a difficult decision”.

But this is not the end. It could be a blog post of its own. Over the last three months, we have formed a solid team of people who really want to make it happen, and currently, the team spirit is “we’ll show them, judges!” – we will get it done anyway. For me, a relative newcomer, it has been a fantastic opportunity to understand better how the Civil Service works and get to know people in a few of the bigger departments.

This wasn’t enough for the week. On Thursday, my attention had to switch quickly to a recruitment campaign I’m helping with as a panel member. We are recruiting for a Structured Content Designer. Who is that? Well, in the public sector it is a surprisingly little-known role, so it took some doing to prepare for the interviews, which we held on Friday. They were very successful. We saw some good candidates and unanimously agreed on who was the best, so all that is left is to hope the person will accept the offer.

I spent last night volunteering as a first aider at our local Help Point in Swansea to relax after all of the above. It was a strange night: cold and quiet but with a lot of blood and three victims of violent assaults. I will leave the reflection on the graphic specifics in my private CPD folder. Still, there was something that night that brought Monday’s discovery of CDPS back. In between the calls, I watched our doctor and our operations manager struggling with, and cursing, the multiple local digital services they were trying to use. Wales has a lot of catching up to do with the wider UK’s digital service experience. It’s not good. Especially when it impacts things like 999 service provision. But it’s great that there is an organisation committed to changing it for us all.

There was more I could reflect on here in the open from just this last week, but it seems the week isn’t done yet. I have just had a call for any volunteers available to pick up some stacking 999 calls. So instead, I’ll quickly publish what I have, put on my green uniform and go out into the night. The new work week doesn’t start for another 12 hours! That’s plenty of time.

T-SQL Tuesday #114 – The SQL Puzzle Party

There were times when I tried to look for puzzles to solve, especially the T-SQL puzzles (what happened to the T-SQL Challenge site?). Now I don’t. Life is challenging as it is, especially if you work with SQL Server and really try to understand what’s going on.

So rather than coming up with some contrived problem for you to solve as part of this edition of T-SQL Tuesday (thank you, Matthew McGiffen), I will share something that surprised me only last week. And yes, I have solved it already and will be blogging more about it soon, so no, there is no big prize for solving my production issue here 😉

Here is the scenario

There is a table that stores millions of records. It has a primary key, a date when a record was processed, a bit column indicating whether it was processed or not, and some text fields that are used for something – but in our example, it’s just data that takes up space on pages.

There is also an application which uses NHibernate to generate a T-SQL query that retrieves one record (just one at a time) from that table where IsProcessed = 0. There are 10-50 records like that at peak times, in a table which holds tens of millions of records, so making it very, very fast should be easy with a tiny little covering filtered index. Well… it turns out SQL Server prefers to scan the clustered index instead.

Have a look

The challenge setup

use tempdb
go

drop table if exists dbo.LongProcessingTable
if not exists(select 1 from sys.tables where name = 'LongProcessingTable')
create table LongProcessingTable (
     Id int not null identity primary key
    ,ProcessedOn datetime2 null
    ,IsProcessed bit null
    ,SomeData nvarchar(1024) not null
)

-- just some text to fill up the space on pages
declare @sometext nvarchar(1024) = (
    select string_agg(convert(char(1),name), '')
    from sys.all_objects
)

-- create just 100k records with some random date values
-- at this time all records are marked as processed
insert into dbo.LongProcessingTable(ProcessedOn, IsProcessed, SomeData)
select top(100000)
     dateadd(second, -abs(checksum(a.object_id, b.object_id)%10000), getdate())
    ,1
    ,@sometext
from sys.all_objects a
cross join sys.all_objects b

-- now mark 10 rows as not processed
update d set IsProcessed = 0, ProcessedOn = null
from (
    select top (10) *
    from dbo.LongProcessingTable d
    order by ProcessedOn desc
) d

Now the query:

declare @IsProcessed bit = 0

select top(1) Id, SomeData
from dbo.LongProcessingTable
where IsProcessed = @IsProcessed

The above query comes from the application and cannot be changed. It is what it is. And to help you start, here is the index I thought would work, but doesn’t.

create index IX_LongProcessingTable_NotProcessedYet
on dbo.LongProcessingTable(IsProcessed) include (SomeData)
where IsProcessed = 0

The index gets ignored and the server goes for the table scan instead.
Of course, somebody had discovered it earlier. I wasn’t all that surprised that Erik Darling had blogged about it – in 2015, 2017 and 2018, it turns out. He even says ‘IT IS KNOWN’… well, it wasn’t known to me. But even now, with that knowledge, I still cannot change the query, so what can I do? How do I make this query more efficient without changing it, and without creating a covering index on the whole table, which can contain hundreds of GB of data, just to get one row?

If you are still reading… well, enjoy the challenge. I will follow up with a few comments and a couple of my attempts at solving the problem later this month (hopefully).

 

gMSA and Docker – Lessons Learnt

In the last two posts (here and here) I have documented how I use gMSAs to connect services running in Docker containers on Windows to SQL Server using domain authentication. In the end it was very simple, but there are things I wish I had known when I started. It would have saved me a lot of time. Here is an attempt to document the lessons learnt.

Versions are important!

While in the end I was able to make it work on Windows Server 2016, 1803, 2019 and 1809, I wasted some time trying, unsuccessfully, to make it work with Docker 17.06. Docker 18.09.1 and 18.09.2 worked every time.

There are some reports of intermittent problems with specific OS updates breaking things, like the one here, but I wasn’t able to reproduce it. I wonder if the update changes something else that is causing the problems – in other words, is it a problem with the update itself or with the update process?

The Set-AdServiceAccount

From the beginning I set out to try gMSA authentication on multiple VMs, following blog posts which all included some use of the Set-AdServiceAccount PowerShell command (from RSAT-AD-PowerShell). I could not make it work on more than one VM at a time. I thought I was going mad! The problem (and the clue) is in the name: Set. It is not add, not modify. So when I was doing something like this

Set-AdServiceAccount -Identity MyService `
  -PrincipalsAllowedToRetrieveManagedPassword DH2019A$

it was setting the principals allowed to retrieve the managed passwords for MyService to the DH2019A VM. As expected. But, not as expected, it was also removing the privilege from all the other VMs I had granted that permission to before. With no warnings.

It is probably the most worrying part about using gMSAs for service authentication in production, as I plan to do. All it will take is one sysadmin running a command like that to break all the services, potentially on all docker hosts. To mitigate this, we have decided to grant the permissions through a domain group to which we will add the docker hosts. That way, there should be no need to run this command when scaling out.
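
A rough sketch of that approach, using names from my lab (the DockerHosts group and the DH2019A host), might look like this:

# grant password retrieval to a group rather than to individual computer accounts
Set-AdServiceAccount -Identity MyService `
  -PrincipalsAllowedToRetrieveManagedPassword DockerHosts

# scaling out then becomes a group membership change, not another Set- call
# (a newly added host may need a reboot to pick up its group membership)
Add-ADGroupMember -Identity DockerHosts -Members (Get-ADComputer DH2019A)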

Misleading Get- and Test-AdServiceAccount

Understanding the above problem with Set-AdServiceAccount was made much harder by my misunderstanding of how Get-AdServiceAccount and Test-AdServiceAccount work.

If you are a domain admin, Get-AdServiceAccount will always return details of the gMSA if it exists, so it is of no use for checking whether a specific gMSA can be used on a given host.

If the gMSA was previously installed, Test-AdServiceAccount will return true regardless of whether the host account has permission to retrieve the password or not – and that permission is necessary for gMSA authentication to work.

So, with that in mind, neither command is fit for checking whether a specific host has the permissions it needs to use a gMSA. I was not able to find anything better than attempting to install it again with Install-AdServiceAccount. It will either install it again or display an error message indicating that the computer has no permission to retrieve the password.
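
As a sketch, that workaround (with an example account name) looks something like this:

# attempt to (re)install the gMSA; an access denied error means this host
# cannot retrieve the managed password, so gMSA authentication will not work
try {
    Install-AdServiceAccount -Identity MyService -ErrorAction Stop
    Write-Output 'This host can use MyService'
} catch {
    Write-Warning "This host cannot use MyService: $_"
}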

Remove-AdServiceAccount

This does not remove a previously installed gMSA from the local host. It removes the gMSA from the domain!
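
If all you want is to undo Install-AdServiceAccount on a single host, the cmdlet you are after is, as far as I can tell, Uninstall-AdServiceAccount:

# removes the gMSA from this computer only
Uninstall-AdServiceAccount -Identity MyService

# removes the gMSA object from Active Directory itself – use with care!
Remove-AdServiceAccount -Identity MyService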

The SSPI context error.

If you try to use domain authentication from a service running on a docker host which has no permission to retrieve the gMSA password, you will get a fairly generic error telling you that the SSPI context couldn’t be generated.

There are scores of blog posts and MSDN documents explaining how to troubleshoot SSPI context errors. Not a single one I found mentions any problems with gMSAs. I learnt a lot about SSPI and how it really works, just to eventually realise that everything was fine and I had to look for the problem somewhere else.

There is no localhost

When you run a standalone container, you can access it from the same host on localhost. By default a nat network is used, and it allows communication on the host’s IP. When moving to swarm mode (using docker service create rather than just docker create), the ingress network is used by default and localhost is not available. You have to use the public IP address of the docker swarm. There are a lot of blog posts on how to define your own overlay or bridge networks. I suppose they all work on Linux, but on Windows, when using an overlay network you cannot use host IPs (so no localhost), and you cannot create bridge networks at all.

PS C:\> docker network create -d bridge bnet
Error response from daemon: could not find plugin bridge in v1 plugin registry: plugin not found

I was able to make it work when publishing the port directly on the host using --publish published=8001,target=80,mode=host instead of the shorter -p 8001:80 (the ports, obviously, may be different), but I don’t think that’s a configuration I’d be using, so… I simply gave up. I use the public IP and everything works.
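
For reference, a sketch of the host-mode publish I tested (the image, port and credential spec are the examples from the other posts):

docker service create `
  --publish published=8001,target=80,mode=host `
  --credential-spec file://MyService.json `
  michalporeba/sqlgmsatest:1809nano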

Security considerations

To install a gMSA on a host which has permission to read the gMSA’s password, you don’t need any extra permissions. It appears that anybody with access to PowerShell on the host can do Install-AdServiceAccount -Identity MyService. There are no restrictions on which credential spec file can be used for which service either. This means that if somebody has access to the docker host, they can create a new service using any gMSA to which the host itself has permissions.

gMSA name length limit

The Group Managed Service Account’s name is limited to 15 characters. Not a big deal, but it messed up a carefully agreed naming strategy and in the end I have vowel-less service names.

New-CredentialSpec silently overwrites existing files.
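
If that has caught you out too, a simple guard before creating a spec file avoids clobbering an existing one. A sketch, assuming the CredentialSpec.psm1 module from the other posts is already imported:

if (-not (Get-CredentialSpec | Where-Object { $_.Name -like 'MyService*' })) {
    New-CredentialSpec -Name MyService -AccountName MyService `
      -Domain (Get-AdDomain -Current LocalComputer)
}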

Active Directory, Windows Containers in Swarm Mode and SQL Server

In my previous post I explained how I was able to connect from Windows containers running on Docker to a SQL Server cluster on the network using domain authentication (with gMSAs) rather than SA logins and passwords.

gMSAs in docker swarm mode

After I got the containers using Group Managed Service Accounts working on a single Docker host, I went on to try the same in swarm mode. My plan was to simply replace the docker run -d part of the command creating the container with docker service create, but it turns out that it is not that simple, especially if you don’t have a lot of experience with swarm mode. It is also worth noting that I had a lower success rate than when I was experimenting with standalone containers. I was able to make it work on all the same Windows versions (2016, 1803, 2019, 1809), but only when using Docker 18.09 and not on 17.06 (which was on the image I used for the 1803 tests).

Demo setup

Similarly to the previous post, I have tested this on a range of operating systems and Docker versions, but what I want to show here is how it worked on Windows 2019 and Docker 18.09.

[screenshot: gMSA_Docker_Service_1]

To make it a bit more exciting (and because of how Docker Swarm works), this time I will be testing the service from a web browser. To start with, it doesn’t work. That is because there is no service listening on port 8101.

[screenshot: gMSA_Docker_Service_1_web]

Creating a service

To create a service, the docker service create command is used. Compared to docker create, some parameters are different; for example, there is no --security-opt (used in the previous post) and --credential-spec is used instead. But first things first. Let’s just create a service using the michalporeba/sqlgmsatest:1809nano image with minimal configuration and see what happens.

docker service create -p 8101:80 michalporeba/sqlgmsatest:1809nano

[screenshot: gMSA_Docker_Service_2]

The -p 8101:80 makes the service available on port 8101 using the default ingress network. No errors, the service is running, it has converged, so let’s try to connect to it!

[screenshot: gMSA_Docker_Service_2_WebRequest]

And here is the first surprise: localhost doesn’t work. That’s a swarm thing, and although it is possible to publish ports in host mode, it is not how I would be running in production, so I will just open the ports and connect to the service externally using a web browser.

[screenshot: gMSA_Docker_Service_2_web]

OK, so the api/info call was successful. The service from the michalporeba/sqlgmsatest:1809nano image is running and responding. So the next task is to use it to query the TestDB database on my test instance DB.sqlgmsa.local.

[screenshot: gMSA_Docker_Service_2_web2]

Adding the gMSA

Not authorized! But who? The NT AUTHORITY\ANONYMOUS LOGON. That is because, despite the docker host being a member of the sqlgmsa.local domain, the container running the service is not. To fix it, exactly as in the case of a standalone container, a Group Managed Service Account has to be created and installed, and a credential spec file created.

# Create gMSA
New-AdServiceAccount -Name MyService -DNSHostName sqlgmsa.local `
  -PrincipalsAllowedToRetrieveManagedPassword "Domain Controllers", "Domain Admins", "CN=DockerHosts,CN=Computers,DC=sqlgmsa,DC=local" `
  -KerberosEncryptionType AES128, AES256

# Install it
Install-AdServiceAccount -Identity MyService

# Import the module to manage Credential Specs
Import-Module .\PsModules\CredentialSpec.psm1

# And create a spec file for MyService
New-CredentialSpec -Name MyService -AccountName MyService `
-Domain (Get-AdDomain -Current LocalComputer)

[screenshot: gMSA_Docker_Service_3]

Consistency is everything, isn’t it? -Name, -Identity and -AccountName on the commands above all refer to the same gMSA here; -Identity and -AccountName have to match the account, while the -Name parameter on the New-CredentialSpec command is used to control the name of the json file containing the credential spec. The filename can be anything and doesn’t need to match the account name, but I find it easier if it does. The existing credential spec files can be found in C:\ProgramData\docker\CredentialSpecs\ or listed with the Get-CredentialSpec command from the CredentialSpec.psm1 module.

The next step is to use the newly created credential spec file when creating the service. The --security-opt parameter is not supported when creating a service, and --credential-spec has to be used instead.

docker service create -p 8102:80 `
  --credential-spec file://MyService.json `
  michalporeba/sqlgmsatest:1809nano

[screenshot: gMSA_Docker_Service_4]

The new service now runs on port 8102 and should use the new MyService identity. Let’s see.

[screenshot: gMSA_Docker_Service_4_web]

Almost there! Login failed for user SQLGMSA\MyService$. That’s good – it means the correct identity has been picked up, so the last thing to do is to create the login on the SQL Server.

[screenshot: gMSA_Docker_Service_5_sql]
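
That step, as a minimal T-SQL sketch, boils down to something like this (any database-level permissions depend on what the service actually needs):

-- the $ suffix matters: the container authenticates as the gMSA account
CREATE LOGIN [SQLGMSA\MyService$] FROM WINDOWS;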

And now, as if by magic…

[screenshot: gMSA_Docker_Service_5_web]

The test web service, written in C# using .NET Core, is hosted in a Docker container running on a Windows host, and queries a SQL Server database using domain authentication.

Active Directory, Windows Containers and SQL Server

The problem

Everything appears to be in containers nowadays, even SQL Server. But there are still mixed environments, with people and companies wanting to try containers without going all in. So I was wondering: how practical would it be to have .NET Core services on Docker, running in Windows containers, connecting to an external, old-fashioned SQL Server instance? Also, as it is all in a Windows domain, I’d like to use domain authentication so I don’t have to worry about managing passwords.

Simple, isn’t it? Well, it turns out not to be that simple, as not everything is on the domain. The SQL Server is, the docker hosts are, but the containers are not.

Additionally, there are differences depending on whether you want to run a standalone container or in docker swarm mode. This blog post focuses on standalone containers; swarm mode is covered in the follow-up post.

The quick answer

The good news is that it is not an unreasonable requirement and it has been done before. The solution is to use Group Managed Service Accounts (gMSA) and Credential Spec Files. A number of people have already documented their efforts. Some were more successful than others.

My story

My problem was that I wasn’t able to make it work just by following any single write-up. In fact, for a few days, I was not able to get it going at all. But eventually it happened, and here is a step-by-step description of how I made it work with Windows Server 2016, 1803, 1809 and 2019 as the host OS, and 2016, 1803 and 1809 in full and nano options as the container base image. Generally, it is very simple once you know what to do and, more importantly, what not to do (more about that later).

Test setup

To test it I have set up a virtual lab environment on Azure with 6 VMs:

  • DC – Windows Server 2016 Datacenter acting as a domain controller
  • DB – Windows Server 2016 Datacenter with SQL Server 2017 Developer edition installed
  • DH2016A – Windows Server 2016 Datacenter with Containers (Docker version 18.09.2)
  • DH1803A – Windows Server 1803 with Containers (Docker version 17.06.2-ee-18)
  • DH1809A – Windows Server 1809 with Containers (Docker version 18.09.0)
  • DH2019A – Windows Server 2019 with Containers (Docker version 18.09.1)

I have created a sqlgmsa.local domain and joined all the VMs to it. SQL Server was running under the SQLGMSA\SqlServer Managed Service Account without any special permissions.

In the domain I have two service accounts: SQLGMSA\ServiceA and SQLGMSA\ServiceB. Both have logins on the SQL Server instance. I will be setting some of my containers to connect to the SQL Server as ServiceA and some as ServiceB.

Initially, I tested the connectivity from containers built with full base images (the standard mcr.microsoft.com/windows/servercore) by using the dbatools module to run queries from them against the DB.sqlgmsa.local server. To be able to test nano-based images, I created two test images containing a simple .NET Core WebAPI service written in C# with two public methods. Calling api/info, you can check whether the service is running and what system it is running on. Calling api/query/ attempts to open a connection to the specified database (or master if the db parameter is not provided) and returns information about the database, the original login and the current user, using a simple query:

select 
     @@version SqlServer
    ,db_name()  [Database]
    ,current_user CurrentUser
    ,original_login() OriginalLogin
for json path

The test images are available on Docker Hub, and the source code is here on GitHub.

PowerShell and AD

In the examples I am using PowerShell to manage my Active Directory. If the commands I use don’t work for you, you may be missing the AD modules. To install them, add the RSAT-AD-PowerShell Windows feature by executing this PowerShell command

Add-WindowsFeature RSAT-AD-PowerShell

Group Managed Service Accounts (gMSA)

Managed Service Accounts were introduced some time ago to reduce the overhead associated with managing passwords for service accounts. Group Managed Service Accounts solve the same problem but, unlike MSAs, gMSAs can be used across multiple computers.

To start using gMSAs on a domain, a KDS Root Key has to be created first. It is the key with which the passwords shared between the computers on the domain are protected. If your domain already has other gMSAs, you will not need to do it again.

To create a KDS Root Key, I used this PowerShell command on the domain controller (backdating the effective time by 10 hours so the key can be used immediately in a lab, rather than waiting for it to replicate)

Add-KdsRootKey -EffectiveTime (Get-Date).AddHours(-10)

and then to verify that it has been created

Get-KdsRootKey

Now, to create the test service accounts, I used the following commands. I was doing it on the DC, but with the right permissions it should be possible to do it from any computer on the domain.

New-AdServiceAccount -Name ServiceA -DNSHostName sqlgmsa.local `
   -PrincipalsAllowedToRetrieveManagedPassword "Domain Controllers", "Domain Admins", "CN=DockerHosts,CN=Computers,DC=sqlgmsa,DC=local" `
   -KerberosEncryptionType AES128, AES256

New-AdServiceAccount -Name ServiceB -DNSHostName sqlgmsa.local `
   -PrincipalsAllowedToRetrieveManagedPassword "Domain Controllers", "Domain Admins", "CN=DockerHosts,CN=Computers,DC=sqlgmsa,DC=local" `
   -KerberosEncryptionType AES128, AES256

Where ServiceA and ServiceB are the names of the accounts and "CN=DockerHosts,CN=Computers,DC=sqlgmsa,DC=local" is the distinguished name of the group I have created for the docker hosts.

If you don’t know the exact distinguished name, running this command can help:

Get-AdGroup -filter { name -like "yourgroupname" }
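
If you haven’t created such a group yet, creating it and adding the docker hosts can look roughly like this (a sketch using my lab names; the group can live anywhere in the directory):

# create the group the gMSAs will trust, then add the docker host computer accounts to it
New-ADGroup -Name DockerHosts -GroupScope Global -Path "CN=Computers,DC=sqlgmsa,DC=local"
Add-ADGroupMember -Identity DockerHosts `
  -Members (Get-ADComputer DH2016A), (Get-ADComputer DH1803A), (Get-ADComputer DH1809A), (Get-ADComputer DH2019A)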

Now, on every docker host, all the specific service accounts (two in my test case) have to be installed so that the host OS can access them.

Install-AdServiceAccount -Identity ServiceA
Install-AdServiceAccount -Identity ServiceB

If there is an error message like this, it means the permissions were not set correctly:
Install-AdServiceAccount : Cannot install service account. Error Message: ‘{Access Denied}

Credential Spec file

Docker Credential Spec Files were created specifically to solve the problem of passing gMSAs to containers. They are plain json files with information about the service account. It is possible to create the files manually, but there is a module for it. It is documented here, but below is a short set of instructions on how to get and import the module.

This part needs to be done on every docker host.

# 1. Set TLS1.2 support from PowerShell so the module can be downloaded from github. 
PS C:\Tmp> [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

# 2. Download the psm1 file using Invoke-WebRequest
PS C:\Tmp> Invoke-WebRequest "https://raw.githubusercontent.com/MicrosoftDocs/Virtualization-Documentation/live/windows-server-container-tools/ServiceAccounts/CredentialSpec.psm1" -OutFile "CredentialSpec.psm1"

# 3. Import the module
PS C:\Tmp> Import-Module .\CredentialSpec.psm1

With the CredentialSpec module imported, a credential spec file has to be created for each gMSA.

PS C:\> New-CredentialSpec -Name ServiceA `
  -AccountName ServiceA `
  -Domain $(Get-AdDomain -Current LocalComputer)

PS C:\> New-CredentialSpec -Name ServiceB `
  -AccountName ServiceB `
  -Domain $(Get-AdDomain -Current LocalComputer)

The list of existing files can be obtained with

PS C:\> Get-CredentialSpec

Name     Path
----     ----
ServiceA C:\ProgramData\docker\CredentialSpecs\ServiceA.json
ServiceB C:\ProgramData\docker\CredentialSpecs\ServiceB.json

And finally, run the containers passing the credential spec files with the --security-opt parameter. (This is an example from DH2019A using the 1809 nano base image).

docker run -d -it -p 8001:80 `
   --security-opt "credentialspec=file://ServiceA.json" `
   --name ServiceA `
   michalporeba/sqlgmsatest:1809nano

docker run -d -it -p 8002:80 `
   --security-opt "credentialspec=file://ServiceB.json" `
   --name ServiceB `
   michalporeba/sqlgmsatest:1809nano

The proof is in the pudding

After checking that both containers are running with docker ps, I can start testing. As the test is not focused on anything but domain authentication, I didn’t open any ports to the lab; all I did was either connect to the container and use dbatools to execute a query on the DB server, or connect from the docker host to the service listening on the published port. Here are the example calls using Invoke-WebRequest on DH2019A.

[screenshot: sqlgmsa.proof]

PS C:\> $env:ComputerName
DH2019A
PS C:\> docker start ServiceA
ServiceA
PS C:\> docker start ServiceB
ServiceB
PS C:\> docker ps
CONTAINER ID        IMAGE                               COMMAND                  CREATED             STATUS              PORTS                           NAMES
c6382bb7d816        michalporeba/sqlgmsatest:1809nano   "dotnet TestService.…"   2 days ago          Up 4 seconds        443/tcp, 0.0.0.0:8002->80/tcp   ServiceB
02ece189cb74        michalporeba/sqlgmsatest:1809nano   "dotnet TestService.…"   2 days ago          Up 8 seconds        443/tcp, 0.0.0.0:8001->80/tcp   ServiceA
PS C:\> # Service A
PS C:\> (Invoke-WebRequest -UseBasicParsing http://localhost:8001/api/info).Content
["OS:  Microsoft Windows 10.0.17763 ","Framework: .NET Core 4.6.27317.07"]
PS C:\> (Invoke-WebRequest -UseBasicParsing http://localhost:8001/api/query/DB.sqlgmsa.local).Content
[{"Database":"master","CurrentUser":"guest","OriginalLogin":"SQLGMSA\\ServiceA$"}]
PS C:\> # Service B
PS C:\> (Invoke-WebRequest -UseBasicParsing http://localhost:8002/api/info).Content
["OS:  Microsoft Windows 10.0.17763 ","Framework: .NET Core 4.6.27317.07"]
PS C:\> (Invoke-WebRequest -UseBasicParsing http://localhost:8002/api/query/DB.sqlgmsa.local).Content
[{"Database":"master","CurrentUser":"guest","OriginalLogin":"SQLGMSA\\ServiceB$"}]
PS C:\>

Conclusions

The above setup really boils down to 5 steps. If you want to use Windows authentication from Windows containers on Docker to a SQL Server instance (or cluster), you have to:

  1. Create gMSAs for your services
  2. Create logins for the service accounts on the SQL Server
  3. Install gMSAs on the docker hosts
  4. Create credential spec files
  5. Create containers with the --security-opt parameter pointing to the credential spec file.

That’s it. In my case I was able to make it work (for standalone containers, not in swarm mode) on different OS versions (2016, 1803, 1809, 2019), using full and nano base images, and using Docker 17.06 and 18.09. However, there can be surprises and hair pulling. In the week I spent trying to figure it out, I had a number of moments when I thought I had got it, just to realise that what had worked a moment ago didn’t any more.

The biggest lessons were:

  • use AD Groups for managing access to gMSAs rather than individual computer accounts,
  • be very careful with Set-AdServiceAccount which I have seen in some of the posts out there,
  • the SSPI context error is not what it seems, and can be very annoying.

More details about the lessons learnt can be found in the follow up post.

Trying to do the same but in a service run on Docker in swarm mode is similar, but not exactly the same. I have described it here.
