Ken Muse

Implementing Private VS Code Extensions for Dev Containers


Did you know that dev containers can have their own private Visual Studio Code extensions?

Most people who use VS Code are familiar with installing extensions to add new features to the editor environment. Traditionally, extensions are installed in one of three ways. They can be installed globally (or more technically, to the default profile), which makes them available any time you open VS Code. The second approach is installing them as part of a profile, which makes the extensions available whenever you select that profile while working within VS Code. Finally, extensions can be configured as part of the development container. This causes them to be downloaded, cached in a volume, and installed in that dev container at runtime. Each of these approaches relies on the public extensions gallery.

While there is a long-standing feature request for private galleries, it’s not currently a native feature of VS Code. As a result, we can’t natively point a container to a private gallery to retrieve an internal extension. Even if we could, there are times when we need an extension that is designed to support a specific application. This is where container-private extensions can shine.

Private Visual Studio Code Extensions in Containers

As a practical example, I’ve built an extension that makes it easier to build and publish my blog posts in Hugo. The extension is specific to my blog, so it doesn’t make sense to install it globally or make it part of a profile. It is only useful in that single context. It also wouldn’t have any value as a public extension, since it’s tied to my process and a specific organization of the files in Hugo. The functionality saves me hours of time each month, but it would be of limited value to others.

In short, it makes sense for this extension to be a part of my dev container.

As a second example, I might want to make a set of extensions available across multiple dev containers used by a development team. The team needs the same set of private, internally developed plugins and some common configurations applied to their dev containers. This is a more complex scenario, but it’s another example of when the public gallery is not the right solution.

Thankfully, there are multiple ways to solve this problem. It starts with understanding how extensions are installed.

The Basics of Installing an Extension

VS Code can install extensions from the command line (code --install-extension <extension>.vsix). It relies on a VSIX file, which is the packaged, distributable extension. It’s worth knowing that there is an additional --force parameter that can be used to upgrade an installed extension. It’s also important to mention that upgrading or uninstalling extensions after VS Code has launched may require you to reload the window or the extension host.
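For example, a locally built extension (the path and file name below are hypothetical) could be installed and later upgraded from a terminal inside the container:

# Install a locally built extension from its VSIX file
code --install-extension /home/vscode/extensions/tool.vsix

# Re-run with --force to upgrade an already-installed extension
code --install-extension /home/vscode/extensions/tool.vsix --force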

The customizations.vscode.extensions property of the dev container lets you define which extensions should be automatically installed as part of the dev container. Normally, this is just an array of the specific extension IDs that should be retrieved from the public gallery. The extensions are downloaded and cached in a volume, making them available to other containers and making it easier to restart the container without having to download the extensions again. Once downloaded, the extensions are programmatically installed when VS Code connects to the container.

It’s not made completely clear in the documentation, but both Codespaces and VS Code support installing local extensions. Instead of using the ID, the container just needs to specify the path to the VSIX file. Instead of downloading and caching the plugin, VS Code installs it from the provided path.

The VSIX file is typically going to be stored in one of two places. If the extension is built as part of the container, it might exist in a folder relative to the workspace. Thankfully, we can use ${containerWorkspaceFolder} when invoking a lifecycle script, allowing us to dynamically access the path. You can also rely on an absolute mounting path. This can be particularly helpful if you’re specifying the workspaceMount and workspaceFolder properties in your dev container. This allows you to have project-specific extensions within a monorepo.
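For instance, a hypothetical monorepo project might mount only one project as the workspace and reference an extension by its absolute path inside the container (the names and paths below are illustrative):

{
    "name": "My App",
    // Mount only this project from the monorepo and use it as the workspace folder
    "workspaceMount": "source=${localWorkspaceFolder}/myApp,target=/workspace/myApp,type=bind",
    "workspaceFolder": "/workspace/myApp",
    "customizations": {
        "vscode": {
            "extensions": [
                // Absolute path to the project-specific extension inside the container
                "/workspace/myApp/extensions/customtool/publish/tool.vsix"
            ]
        }
    }
}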

The second approach is to store the extension in the container user’s home folder. Every dev container runs using a specific user account, typically root or vscode. If the extension is stored in that user’s home folder, it is centrally available (and therefore discoverable). By being outside of the workspace folders, it also prevents accidentally committing a binary to source control. To be clear, you are not limited to the home folder. You just want to establish a convention that works for your team.

Once the extension is available, it just needs to be referenced from the devcontainer.json. You may see a warning in VS Code, since the schema file does not include the pattern for referencing files, but the feature works and is supported:

{
    "name": "My Container",
    // Other properties and content ...
    "customizations": {
        "vscode": {
            "extensions": [
                // Public extension in the gallery
                "GitHub.copilot-chat",

                // Extension built inside of the dev container or in the workspace
                "${containerWorkspaceFolder}/.devcontainer/extensions/customtool/publish/tool.vsix",

                // Extension referenced from the container's mount path
                "/workspace/myApp/extensions/customtool/publish/tool.vsix",

                // Extension referenced from a folder in the container user's home folder
                "/home/vscode/extensions/tool.vsix"
            ]
        }
    }
}

Referencing it this way ensures that the extension is loaded at the proper time in the lifecycle. Of course, you can always manually install (or reinstall) the extension from the command line.

How to Make the Extension Part of the Container

In order to be installed, the extension needs to be available on the file system before the extensions are configured. That typically means it should exist before the first start/attach attempt. As a result, most approaches will rely on an onCreateCommand or updateContentCommand lifecycle script. This can be implemented as a feature or directly in the devcontainer.json. For example:

{
    "name": "My Container",
    // Other properties and content ...
    "customizations": {
        "vscode": {
            "extensions": [
                "${containerWorkspaceFolder}/.devcontainer/extensions/customtool/publish/tool.vsix"
            ]
        }
    },
    "onCreateCommand": "${containerWorkspaceFolder}/.devcontainer/buildAndAcquireExtensions.sh"
}

The process of actually preparing or acquiring an extension can be handled in a few different ways.

Approach 1: Downloading the Extension

Download the extension from a known endpoint URL and copy it to a known path within the dev container. Tools such as curl, wget, or gh can download the file to the user’s home folder, and then the extension can be installed from there. For example:

curl -sSLo /home/vscode/tool.vsix \
  -H "Accept: application/octet-stream" \
  -H "Authorization: Bearer $TOKEN" \
  https://github.com/myorg/myrepo/releases/download/v1.0/tool.vsix
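If the GitHub CLI is available in the container, a rough equivalent (the release tag, repository, and paths are illustrative) might look like this:

# Download a release asset with the GitHub CLI; authentication comes from gh's own login or token
gh release download v1.0 --repo myorg/myrepo \
  --pattern "tool.vsix" \
  --output /home/vscode/tool.vsix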

Approach 2: Build as a Layer

The extension’s source code exists as part of the main repository and is built as part of the process of creating the container image. A multi-stage Dockerfile can be used to build the extension, ensuring only the binary itself is copied to the final image.

For example:

FROM node:lts-slim AS extensions
WORKDIR /src
COPY ./extensions/customtool /src
RUN /src/build.sh

FROM mcr.microsoft.com/vscode/devcontainers/base:bullseye
COPY --from=extensions /src/publish/tool.vsix /home/vscode/tool.vsix
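The build.sh script itself isn’t shown here; as a rough sketch (assuming a typical Node-based extension packaged with @vscode/vsce and a checked-in package-lock.json), it might look something like this:

#!/bin/sh
set -e

# Work from the directory that contains this script (the extension source)
cd "$(dirname "$0")"

# Restore dependencies and package the extension into publish/tool.vsix
npm ci
mkdir -p publish
npx @vscode/vsce package --out publish/tool.vsix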

Approach 3: Build Outside the Container and Reference

Containers are versatile! You can always use a container to build the extension. Volume mounts can be used to allow the binary to be copied to a specific target path on the file system. The devcontainer.json can then use a mounts property to mount the extensions folder into the dev container.

For example, the dev container can invoke Docker on the client when the container is being built, then mount the results:

{
    "name": "My Container",
    "initializeCommand": "docker run -it --rm -v ./extensions/customtool:/work -v ./extensions:/work/publish node:lts-slim /work/build.sh",
    "mounts": [
        "source=${localWorkspaceFolder}/extensions,target=/home/vscode/extensions,type=bind"
    ]
}

Approach 4: Make it a Feature

A feature provides a way to add additional functionality to a dev container without altering the base image definition. Features can use lifecycle scripts (other than initializeCommand) to dynamically add content to the dev container. This can allow you to combine some of the approaches above while keeping the code isolated from the image used to create the container. Because this adds to the existing container, it might require the base container to have all of the tools necessary for building and compiling the extension (or the feature would need to include those tools). Because a feature can add to the configured customizations.vscode.extensions, this approach might be useful as a way of aggregating multiple extensions and centrally distributing them.
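As a rough sketch (the feature id, name, and paths are hypothetical, and this assumes VSIX paths are honored in a feature’s contributed customizations the same way they are in devcontainer.json), the feature’s devcontainer-feature.json could register the extensions:

{
    "id": "team-extensions",
    "version": "1.0.0",
    "name": "Team VS Code Extensions",
    "customizations": {
        "vscode": {
            "extensions": [
                "/usr/local/share/team-extensions/tool.vsix"
            ]
        }
    }
}

The feature’s install.sh would then use one of the download or build approaches described above to place tool.vsix at that path.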

Approach 5: Reference it as a Volume

This is a variation of building the extension outside of the container. The difference is that it relies on a Docker volume to hold the resulting binary. This can allow the extension to be used by multiple dev containers. To take full advantage of this, the build script in each dev container would need to have logic to only build the extension if it does not already exist. The devcontainer.json would look like this:

{
    "name": "My Container",
    "initializeCommand": "docker run -it --rm -v ./extensions/customtool:/work --mount type=volume,src=extensions,target=/work/publish node:lts-slim /work/build.sh",
    "mounts": [
        "source=extensions,target=/home/vscode/extensions,type=volume"
    ]
}
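Since the build script should only build the extension if it does not already exist in the volume, a minimal guard at the top of a hypothetical build.sh might look like this:

#!/bin/sh
set -e

# Skip the build when a packaged extension is already present in the shared volume
if [ -f /work/publish/tool.vsix ]; then
    echo "Extension already packaged; skipping build"
    exit 0
fi

# ... otherwise run the normal packaging steps (see the earlier build.sh sketch)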

Approach 6: Build it in the Container

There’s nothing stopping you from directly building the extension as part of creating the dev container. This approach can even support the need to rebuild and re-apply the extension after a code change. To make this work, you would build the code during the updateContentCommand, postCreateCommand, or onCreateCommand lifecycle script. This approach requires the dev container to have all of the supporting tools installed in the container.

Since I provided a simple example of this earlier, here’s a more creative example. It globally configures Yarn when the container is created, and it rebuilds the extension during any prebuild. It uses source code in the workspace and builds the code inside the workspace itself. This ensures that the compiled extension is preserved even if the container is recreated.

{
    "name": "My Container",
    "customizations": {
        "vscode": {
            "extensions": [
                "${containerWorkspaceFolder}/.devcontainer/extensions/customtool/publish/tool.vsix"
            ]
        }
    },
    "updateContentCommand": {
        "configure-extension": "${containerWorkspaceFolder}/.devcontainer/extensions/customtool/build.sh"
    },
    "onCreateCommand": {
        "setup-yarn": "corepack enable && COREPACK_ENABLE_DOWNLOAD_PROMPT=0 corepack install --global yarn@stable"
    }
}

The Power of Containers!

As you can see, there are a few ways to take advantage of dev containers that need to use private or solution-specific extensions. This provides a powerful, configurable way to extend your infrastructure-as-code development environments. Developers can implement solutions that improve the IDE experience without having to publicly deploy or distribute the extension. It’s just one of many options that we have for improving the modern development experience.