Let’s get a bit crazy today and try something new. Ever had one of those times where you needed to run some code in Linux but just couldn’t get things to play nicely? I just had one of those moments.
When I write one of these posts, I do it inside of a dev container or Codespaces. To keep the image size small and lightweight, I’m running Hugo on Alpine. On the Mac, it’s running on `linux/arm64`, while in Codespaces it’s using `linux/amd64`. If you’ve ever worked with Alpine, you probably have quickly learned that the apps running in that environment use `musl-libc` instead of `glibc`. That means that not all applications are compatible with the environment. In my case, I wanted to use an application that was written using Dart. Consequently, it needed `glibc`. The app also requires x64 support.
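You can often spot the mismatch by inspecting the binary. Here’s a quick check, assuming the `file` utility is available (the binary name is just a placeholder):

```shell
# Show the ELF interpreter the binary expects. A glibc build references
# /lib64/ld-linux-x86-64.so.2, which a stock Alpine image doesn't provide,
# so the program fails to start under musl.
file ./my-dart-app
```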
The easiest way to handle this is to simply run the application in Docker. That allows the application to have access to the libraries it needs. In addition, my Mac can emulate x86, so it can run those containers. Because I’m running on a Mac, I’ll get a warning about the mismatched platform. To avoid that, I just need to specify `--platform linux/amd64`. That informs Docker that we’ve consciously decided to use a different platform and avoids the warning.
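For example, a quick sanity check under emulation might look like this:

```shell
# Run an amd64 image on an arm64 Mac; the flag tells Docker the platform
# mismatch is intentional, so no warning is emitted.
docker run --rm --platform linux/amd64 alpine uname -m
# Prints "x86_64", confirming the container runs under emulation.
```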
Running an application in Docker on its own is fairly easy. Doing that from inside a dev container requires just a bit more work. There are two patterns that we can use:
- Docker-in-Docker (DinD): The Docker service runs inside the container, with images preserved in the container. This is similar to nested virtualization with VMs. It can require a bit more setup and privileges.
- Docker-from-Docker (DfD): The Docker client runs inside the container, communicating with the external Docker service. The images remain on the host system. Containerized applications run as sidecars, but they appear to be part of the current container.
To keep things simple, I chose to use the Docker-from-Docker pattern. This allowed me to use my host’s Docker environment, which includes x86 emulation support. It was also significantly less effort to set up! 😄
Making this work requires two basic steps:
- In the `Dockerfile`, include `sudo apk add docker-cli` to ensure that the Docker CLI is installed in the Alpine container.
- In the `devcontainer.json`, mount the Docker socket so that it is available to the CLI using `"mounts": [ "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" ]`. This makes the Unix socket that the Docker daemon listens on available inside the container.
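Put together, the relevant pieces might look like this. This is a minimal sketch; the base image and the surrounding configuration are placeholders for your own setup (and since the image build runs as `root`, the `sudo` prefix isn’t needed inside the `Dockerfile` itself):

```dockerfile
FROM alpine:3.19
# Install only the Docker client; the daemon stays on the host.
RUN apk add --no-cache docker-cli
```

```json
{
  "build": { "dockerfile": "Dockerfile" },
  "mounts": [
    "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind"
  ]
}
```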
Notice that this does not require any special arguments for Docker. It works without needing `--privileged` or other sandbox-altering commands. We’re still accessing a privileged service, but from within a lower-privilege environment.
If the dev container runs with reduced privileges (using the `vscode` user, for example), there is a catch. This becomes apparent when you call the CLI. For example, running `docker ps` results in the following error:
```
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock:
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock:
connect: permission denied
```
To run a Docker command, we need to use `sudo` to increase privileges (`sudo docker ps`). This provides appropriate permissions for accessing the socket. This is necessary because the mounted `docker.sock` is owned by `root`. This is easily confirmed by using `ls` to see the details:
```shell
$ ls -l /var/run/docker.sock
srwxr-xr-x 1 root root 0 Jan 2 01:00 /var/run/docker.sock
```
If you don’t mind using `sudo` to elevate your permissions, this is not a problem. If you want to avoid having to use `sudo` to run the Docker commands, you’ll need to tweak `devcontainer.json`. You’ll add a post-create command that gives the lower-privileged user the appropriate permissions using `chown`:
1"postCreateCommand": {
2 "configure-docker": "sudo chown $(whoami) /var/run/docker.sock"
3},
This changes the owner of the socket to the current user (in my case, `vscode`). I’m using `whoami` to get the name of the current user. I could have specified the user directly (`sudo chown vscode /var/run/docker.sock`), but I like to make scripts as generic as possible; that makes them easier to reuse. As an alternative, you could add the current user to the `docker` group, giving that user the additional permissions needed to access Docker resources without `sudo`. For this to work, the group for `/var/run/docker.sock` should be set to `docker`.
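If you prefer that group-based approach, a post-create command along these lines could work. This is a sketch using Alpine’s BusyBox `addgroup`; note that new group memberships only apply to shells started afterwards:

```json
"postCreateCommand": {
  "configure-docker": "sudo addgroup docker; sudo chgrp docker /var/run/docker.sock && sudo addgroup $(whoami) docker"
},
```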
With those minor changes, you can now happily run Dockerized applications from within your dev container.
Happy DevOp’ing!