Last week, I mentioned that there are ways to modernize your deployments to Azure App Service. This week, let’s dive into that just a bit deeper. Most people used to working with Windows App Service Plans (or IIS) and .NET rely on the built-in support for MSDeploy (Web Deploy) for deployment. Properly used, MSDeploy provides a powerful engine that enables deployment-time customization, advanced IIS capabilities, incremental deployment support, and built-in retry logic. Unfortunately, as technologies have matured (and become cross-platform), the approaches for deploying code have had to mature as well.
MSDeploy brings with it a few challenges. For example:
- During deployments, IIS may hold locks on files, which can slow down (or break) the process. MSDeploy has some workarounds that can help with this.
- MSDeploy transfers files incrementally and is optimized for partial deployments. As a result, updates are not actually an atomic operation, and it is possible to end up with the system in an inconsistent state.
- It can be particularly slow if you’re working with JavaScript packages or large numbers of small files. These require significant disk and network cycles. When a system is trying to scale up or redeploy, the seconds can quickly add up.
- Creating an MSDeploy package outside of MSBuild requires understanding how to create the ZIP files and the required metadata (the MSBuild route is sketched just after this list).
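For context, this is why most teams stay inside MSBuild for packaging: a single set of properties produces the ZIP and its metadata. Here is a minimal sketch; the project name and output path are placeholders, and your build may need additional properties:

```
# Build a Web Deploy (MSDeploy) package from a project file.
# MyApp.csproj and the output path are placeholders; this also emits the
# SetParameters.xml and manifest metadata the MSDeploy engine expects.
msbuild MyApp.csproj /p:Configuration=Release /p:DeployOnBuild=true \
  /p:WebPublishMethod=Package /p:PackageLocation="artifacts/MyApp.zip"
```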
To simplify the approach, Microsoft introduced ZIP Publish. This enabled a simple ZIP file to be created and unpacked, replacing the entire web site. The files are decompressed and stored in d:\home\site\wwwroot on Windows or /home/site/wwwroot on Linux. The ZIP doesn’t require any special metadata or configuration, unlike MSDeploy packages. This created a universal approach for deploying sites in any language, but it didn’t solve all of the problems. It still suffers from the same issues with having to unpack and write small files. Most importantly, it lacks a way to handle file locks, which can result in a non-atomic update.
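As a concrete example, a ZIP Publish can be as simple as posting the archive to the app’s Kudu endpoint. A minimal sketch, assuming placeholder app name and deployment credentials:

```
# Package the published output, then push it to the Kudu zipdeploy API,
# which unpacks the contents into wwwroot.
# {appName}, {user}, and {password} are placeholders for your app and its deployment credentials.
zip -r site.zip .
curl -X POST -u '{user}:{password}' --data-binary @site.zip \
  https://{appName}.scm.azurewebsites.net/api/zipdeploy
```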
A new approach was required to eliminate these issues. Microsoft introduced the deployment package, also called Run From Package. Similar to ZIP Publish, this uses a vanilla ZIP file. That’s where the similarities end. The files are not copied out of the package to the wwwroot folder. Instead, the ZIP file is mounted as the wwwroot folder. Replacing the code on the server is truly an atomic operation: the previous ZIP file is unmounted and the new ZIP file is mounted. All of the files are updated at the same time, and the process eliminates file locking. This leads to substantial improvements to the cold-start time.
Note: This approach is limited to just ZIP files. There’s no support for TAR, GZIP, or any other compression format. The deployed file must also be 1 GB or smaller.
There are two ways to adopt this approach. Both involve adding an App Setting to enable the functionality. The first is to configure WEBSITE_RUN_FROM_PACKAGE=1. In this mode, you can push a ZIP file directly to the service using the Azure CLI:
```
az webapp deployment source config-zip --resource-group {groupName} --name {appName} --src {fileName}.zip
```
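Enabling the flag itself is also a one-liner; a sketch, assuming the same placeholder resource group and app names:

```
# Tell App Service to mount the most recently pushed package as wwwroot.
az webapp config appsettings set --resource-group {groupName} --name {appName} \
  --settings WEBSITE_RUN_FROM_PACKAGE=1
```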
Behind the scenes, this will create a SitePackages folder, persist the ZIP, and modify a file called packagename.txt to reference the ZIP file. The file is stored on the App Service, making it immediately available. As you would expect, having the file stored locally provides the best cold-start experience.
The other approach is to configure WEBSITE_RUN_FROM_PACKAGE to point to an external URL for the package file. In this mode, App Service will download the package, save it locally, and then mount it. Because of the network transfer, it takes slightly longer to get started compared to the other approach. This approach brings a lot of flexibility: the ZIP file can be used for testing, validation, or local execution, and rollback and update logic are handled by simply changing the URL stored in the App Settings. Normally the file is stored in Blob Storage, using either a Shared Access Signature (SAS) or role-based access control (RBAC) with the managed identity of the App Service to access the ZIP file.
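Putting the pieces together, the flow generally looks like this. A sketch assuming placeholder storage account, container, and blob names, with authentication for the storage commands omitted for brevity:

```
# Upload the package to Blob Storage.
az storage blob upload --account-name mystorageacct --container-name packages \
  --name app-v42.zip --file ./app-v42.zip

# Generate a read-only SAS token for the blob
# (skip this step if you're using RBAC with the app's managed identity).
SAS=$(az storage blob generate-sas --account-name mystorageacct --container-name packages \
  --name app-v42.zip --permissions r --expiry 2026-01-01T00:00Z --output tsv)

# Point the app at the package URL; App Service downloads and mounts it.
az webapp config appsettings set --resource-group {groupName} --name {appName} \
  --settings WEBSITE_RUN_FROM_PACKAGE="https://mystorageacct.blob.core.windows.net/packages/app-v42.zip?$SAS"
```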
It’s worth knowing that the external package URL approach is required if you’re creating Azure Functions on Linux using a Consumption Plan; it’s the only supported deployment method there. A reminder from last week: if you deploy this way and update the ZIP file that is being referenced, don’t forget to sync your triggers. If you’re pushing out new, immutable ZIP packages (recommended!), then updating the App Settings with the new URL will automatically handle that sync process.
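If you do need to kick off that trigger sync manually (for example, after overwriting an existing package in place), it can be done from the CLI as well; a sketch, assuming placeholder names:

```
# Ask the Functions host to re-read its triggers after the package behind the URL changes.
az resource invoke-action --resource-group {groupName} --name {appName} \
  --resource-type Microsoft.Web/sites --action syncfunctiontriggers
```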
Hopefully this helps to clarify the benefits of this approach and why you should be using it. If you’re still using MSDeploy, consider upgrading your approach to take advantage of the improved functionality. If you’re using legacy approaches like FTP, you’ll see an even bigger benefit from the new approach.