Over the years, I’ve met countless teams that were worried about writing their software for a specific platform to prevent “vendor lock-in.” It’s a valid concern, but it’s one that I believe is often taken too far. In this post, I’ll explain why I think this is the case and how I like to think about software design.
Typically, trying to be “vendor agnostic” leads developers down one of two paths. The first is to design the software for the lowest common denominator: the software doesn’t take advantage of the features it could so that it can be deployed on any cloud or local platform. In practice, that usually means running on a virtual machine and using few (if any) cloud-native features.
The second path is to over-engineer the software so that it can run on any platform. This often leads to a lot of complexity and unnecessary components; in some cases, it results in separate (but nearly identical) implementations for each platform. Either way, you can end up creating software that is less effective and harder to work with.
The server-based approach
I worked on a sample Probot application, the Security Alert Watcher, that demonstrated how to integrate a custom approval process with Advanced Security. In building it, I had the opportunity to demonstrate a principle that I have long believed in: Good code doesn’t care where it lives. What do I mean by this? The application started its life as a simple TypeScript application that listened to GitHub webhooks and then called APIs. The application could be compiled to a single JavaScript file that could run under Node.js. In other words, it was perfect for hosting on an Azure App Service or a VM running Node.js.
The main code was encapsulated in a single app class instance. The server hosting code really came down to a few lines of code:
import { run } from "probot";
import { app } from "@security-watcher-core";

export async function main() {
  const server = await run(app);
}

main();
On the web server, the application was started with a simple command line: node index.js. It relied on the Probot code in the app instance to process the requests and create a response. In some ways, you could say that the node command line was the adapter for the platform. It provided the entry point for running the application code and starting its web server on any supported operating system.
Modernizing with Docker
Some users of the application needed to be able to deploy the application and run it in a container. Instead of a full VM with Node.js, everything was encapsulated into a lightweight image. I added a Dockerfile – which acted as the adapter for this platform – and created an image that invoked the same Node.js command line:
FROM node:22-alpine3.20
# ...
# Code for copying the files into the image
# and setting up the environment
# ...
ENV NODE_ENV=production

# Expose the web server port
ENV PORT=80
EXPOSE ${PORT}

# Expose the details for starting the application
ENTRYPOINT ["node", "index.js"]
The resulting image could be run on any platform that can host containers, such as Docker, Podman, Azure Container Apps, Kubernetes, or Amazon Elastic Container Service. The code itself didn’t change, but the platform host did require the code to be provided in a specific package format (OCI) that includes the details for exposing the application on Port 80. This, in turn, allowed the host to proxy requests and forward them to the running instance.
The path to serverless
With that completed, I started to receive requests to support AWS Lambda and Azure Functions. Users wanted to be able to run a serverless application so that they just pay for the resources they consume. They didn’t need a VM or container solution that was continuously running.
While most developers are used to the idea that serverless solutions map to a single route or API, the reality is that they are more flexible than that. In fact, it’s even possible to treat path components in a URL as another parameter! In my case, it was easier than that. Since the application was designed to be a webhook, it just needed to expose a single endpoint for receiving the JSON data. The application code already provided this.
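As an aside, here is what that path-as-parameter idea can look like on AWS Lambda behind an HTTP API. This is a minimal sketch, assuming a greedy route such as ANY /{proxy+}; it is illustrative only, since the Security Alert Watcher doesn’t need it:

import { APIGatewayProxyEventV2, APIGatewayProxyStructuredResultV2 } from 'aws-lambda';

// Assumes the function sits behind a greedy route such as "ANY /{proxy+}".
// The portion of the path matched by {proxy+} arrives as a path parameter,
// so a single function can dispatch on it the way a router would.
export async function handler(
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyStructuredResultV2> {
  const path = event.pathParameters?.proxy ?? '';
  return {
    statusCode: 200,
    body: JSON.stringify({ routedTo: path })
  };
}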
The code was implemented using Express, a common web server framework for Node.js, so it already receives requests and returns responses. To make it testable, the framework allows the application code to be executed directly without hosting a web server; you simply provide request and response objects that contain the data that would have been sent to the server.
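For instance, a test can exercise an Express app without managing a server of its own. Here is a small sketch using the supertest package; the route and payload are placeholders, not the actual webhook contract:

import express from 'express';
import request from 'supertest';

// A stand-in Express app; the real application mounts Probot's webhook
// handling on an app much like this one.
const app = express();
app.use(express.json());
app.post('/api/webhook', (_req, res) => {
  res.status(200).json({ ok: true });
});

// supertest takes the app, feeds the request into it, and hands back the
// response, so the test never has to start or manage its own server.
async function main() {
  const response = await request(app)
    .post('/api/webhook')
    .send({ action: 'created' });

  console.log(response.status, response.body);
}

main();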
AWS Lambda and Azure Functions both expect to find functions with specific signatures that they can invoke when a request is received. They each pass in the request details and expect a formatted response. The request and response objects are different for each platform. To enable the application to be hosted on either platform, I needed to create an adapter – code that could be called from either platform and be used to create the inputs that the application expected.
In its simplified form (with some error checking removed), the results looked like this:
import { load, Probot } from "probot";
import { app } from "@security-watcher-core";

export interface WebhookEventRequest {
  body: string | undefined;
  headers: Record<string, string> | Record<string, string | undefined>;
}

export interface WebhookEventResponse {
  status: number;
  body: string;
}

export class WebhookProcessor {
  private app: Probot;

  constructor(app: Probot) {
    this.app = app;
  }

  public async process(
    event: WebhookEventRequest
  ): Promise<WebhookEventResponse> {
    // Build the complete processing pipeline for the requests
    const instance = load(this.app);

    // The Probot framework exposes a method that uses specific headers
    // to verify the request and dispatch it to the correct handler.
    // Normally, Express wraps this back into an HTTP response.
    try {
      await instance.webhooks.verifyAndReceive({
        id: event.headers["x-github-delivery"],
        name: event.headers["x-github-event"],
        signature: event.headers["x-hub-signature-256"],
        payload: event.body
      });
    } catch (error) {
      // Something went wrong, so provide an error response
      return {
        status: 500,
        body: JSON.stringify({
          message: error instanceof Error ? error.message : String(error)
        })
      };
    }

    // Message received and processed successfully
    return {
      status: 200,
      body: JSON.stringify({ ok: true })
    };
  }
}
Notice that this adapter still delegates the work to the code in the app instance. This is similar to the server code, but instead of calling run to create the Express server, the code uses load to set up the middleware and allow the methods to be directly called. All of the business logic, testing, and other code for the application itself remains unchanged. The adapter just provides the glue that lets the application code be called from a serverless system.
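A side benefit is that the adapter itself can be exercised with nothing but Node.js. The values below are placeholders (a real delivery carries the id, event name, and HMAC signature that GitHub generates, so this call would take the error path), but the sketch shows the invocation shape:

import { app, WebhookProcessor } from '@security-watcher-core';

// Drive the adapter directly, the same way a platform entry point would.
// The headers are placeholders; without a valid signature, verifyAndReceive
// rejects and the adapter returns its 500 response.
async function main() {
  const processor = new WebhookProcessor(app);
  const response = await processor.process({
    body: '{}',
    headers: {
      'x-github-delivery': '<delivery-id>',
      'x-github-event': 'ping',
      'x-hub-signature-256': 'sha256=<signature>'
    }
  });
  console.log(response.status, response.body);
}

main();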
With that in place, the code just needed to be configured for the host environment. In some ways, it’s similar to how the Dockerfile adapts the standalone server for a containerized environment by providing a way to start the application and exposing a port. With serverless, you need to provide a specific entry point method that the host expects to call. You can then invoke your adapter to call your application code. Each serverless platform expects specific signatures for the entry point method. The AWS Lambda code looks like this:
import { APIGatewayProxyEventV2, APIGatewayProxyStructuredResultV2, Context } from 'aws-lambda';
import { app as probotApp, WebhookProcessor, WebhookEventRequest } from '@security-watcher-core';

export async function process(
  event: APIGatewayProxyEventV2,
  _context: Context
): Promise<APIGatewayProxyStructuredResultV2> {
  const processor = new WebhookProcessor(probotApp);
  const response = await processor.process(event as WebhookEventRequest);
  return {
    statusCode: response.status,
    body: response.body
  };
}
The Azure Functions runtime has a similar requirement, but a slightly different method signature. You write a function that receives the request and returns the response, plus a call that registers that function with the host environment.
import { app, HttpRequest, HttpResponseInit, InvocationContext } from '@azure/functions';
import { app as probotApp, WebhookProcessor, WebhookEventRequest } from '@security-watcher-core';

// The actual function that will be called by the runtime
export async function securityWatcher(
  request: HttpRequest,
  _context: InvocationContext
): Promise<HttpResponseInit> {
  const processor = new WebhookProcessor(probotApp);
  const event: WebhookEventRequest = {
    headers: Object.fromEntries(request.headers),
    body: await request.text()
  };
  const resp = await processor.process(event);
  return { body: resp.body, status: resp.status };
}

// Register the function and its HTTP method/authentication requirements.
app.http('securityWatcher', {
  methods: ['POST'],
  authLevel: 'anonymous',
  handler: securityWatcher
});
Notice that the handlers for the Azure Function and AWS Lambda are nearly identical. That’s why I can use the same adapter for the underlying codebase. There are some differences in the request and response objects those frameworks use, but they are easily mapped to the interfaces in the adapter code. Once again, the core application code didn’t require any changes.
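If the cast in the Lambda handler feels too loose, the mapping can be written out explicitly instead, mirroring what the Azure handler does. A small sketch (the helper name is mine, not part of the sample):

import { APIGatewayProxyEventV2 } from 'aws-lambda';
import { WebhookEventRequest } from '@security-watcher-core';

// Project the platform-specific event onto the adapter's platform-neutral
// request shape; the Lambda handler can then pass the result to process().
export function toWebhookEventRequest(
  event: APIGatewayProxyEventV2
): WebhookEventRequest {
  return {
    headers: event.headers,
    body: event.body
  };
}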
Alternatively, I could have also used a container image for both cases. The images would still need to have specific entry points, so the pattern is very similar. In that case, I would rely on the base image definitions for each of those runtimes, plus the adapter code.
The real lesson
In each case, the code that I wrote was agnostic to how it was hosted or invoked. It didn’t care if it was running on a VM, in a container, or in a serverless environment. Because the code was designed to be modular and testable, I could easily adapt it to run in any environment with minimal effort. While each hosting environment may be vendor-specific, the application code itself is not.
Well-written code can often be reused directly on nearly every platform. Of course, direct reuse isn’t the only option. For example, the adapter code could invoke another containerized service over HTTP and then return the response. This pattern is often used when a cloud-hosted product also needs to run as a local server. It can be effective for providing platform-specific features and services, such as reading and writing files from Blob Storage, S3, or a local disk. The main component is responsible for handling files optimally in a specific hosting environment, while the adapter code provides the interface the application code uses for reading and writing files. For files, this pattern is so common that there are products (such as MinIO and Flexify) designed to make many kinds of storage look like S3-compatible storage; they provide the adapter.
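As a sketch of that idea (the interface and class names here are hypothetical, not taken from the Security Alert Watcher), the application can code against a small file-store interface while each hosting environment supplies its own implementation:

import { promises as fs } from 'node:fs';
import path from 'node:path';

// A hypothetical platform-neutral interface the application codes against.
export interface FileStore {
  read(name: string): Promise<Buffer>;
  write(name: string, data: Buffer): Promise<void>;
}

// Local-disk adapter, suitable for a VM or a developer machine.
export class LocalFileStore implements FileStore {
  constructor(private readonly root: string) {}

  read(name: string): Promise<Buffer> {
    return fs.readFile(path.join(this.root, name));
  }

  write(name: string, data: Buffer): Promise<void> {
    return fs.writeFile(path.join(this.root, name), data);
  }
}

// An S3- or Blob Storage-backed adapter would implement the same interface
// using the cloud SDK, so the application never knows which one it received.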
Next time you’re considering writing multiple versions of your code to support different platforms, consider whether you can use an adapter pattern instead. It may be easier than you think. In fact, it may be the best way to write good code that doesn’t care where it lives.