WebAssembly on Kubernetes: from containers to Wasm (part 01)

Mar 15, 2024 • 5 minutes to read

Community blog by Seven Cheng

WebAssembly (Wasm) was originally created for the browser, but it has become increasingly popular on the server side as well. In my view, WebAssembly is gaining traction in the Cloud Native ecosystem because of its advantages over containers: smaller size, faster startup, enhanced security, and greater portability. In this article, I will provide a brief introduction to WebAssembly and explain its advantages. Then I will discuss how Wasm modules can be executed using container tooling, including low-level container runtimes, high-level container runtimes, and Kubernetes, which the next article will cover in more detail.

What is WebAssembly?

WebAssembly is a universal bytecode technology: programs written in languages such as Go, Rust, and C/C++ can be compiled into a compact bytecode that can be executed directly in web browsers and on servers.

WebAssembly was designed from the ground up to address JavaScript's performance problems. With WebAssembly, developers can compile code to a low-level binary format that modern web browsers execute at near-native speed. In March 2019, Mozilla announced the WebAssembly System Interface (WASI), an API specification that defines a standard interface between Wasm modules and their host environments. WASI allows Wasm modules to access system resources such as the filesystem and network in a controlled, secure way. This greatly expanded WebAssembly's potential, enabling it to run not only in browsers but also on servers.
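
As a concrete illustration, here is a minimal sketch of compiling a program to a WASI-targeted Wasm module and running it outside the browser. It assumes Go 1.21+ (which added the wasip1 port) and the Wasmtime CLI are installed; the file names are arbitrary.

```go
// main.go - a minimal program that can be compiled to a WASI Wasm module.
//
// Build (Go 1.21+ ships a wasip1/wasm port):
//   GOOS=wasip1 GOARCH=wasm go build -o hello.wasm main.go
// Run with a standalone Wasm runtime, for example:
//   wasmtime hello.wasm
package main

import "fmt"

func main() {
	fmt.Println("Hello from WebAssembly via WASI")
}
```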

The advantages of WebAssembly

WebAssembly stands out with several remarkable benefits over traditional containers:

  • Fast: Wasm modules typically start within milliseconds, significantly faster than traditional containers, which is crucial for workloads requiring rapid startup, such as serverless functions.
  • Lightweight: Compared to container images, Wasm modules generally occupy less space and demand fewer CPU and memory resources.
  • Secure: Wasm modules run in a strict sandbox environment, isolated from the underlying host operating system, reducing potential security vulnerabilities.
  • Portable: Wasm modules can run seamlessly across various platforms and CPU architectures, eliminating the need to maintain multiple container images tailored for different OS and CPU combinations.

You can refer to this table for a detailed comparison between WebAssembly and containers: WebAssembly vs Linux Container.

Run Wasm modules in Linux containers

An easy way to execute Wasm modules within container ecosystems is to incorporate the Wasm bytecode into a Linux container image. Specifically, the Linux OS inside the container can be pared down to only the components needed to support the Wasm runtime. Because the Wasm modules are housed in standard containers, they integrate seamlessly with existing container ecosystems, and the slimmed-down Linux OS presents a much smaller attack surface than a regular one. Nonetheless, this approach still requires launching a Linux container, and even after trimming, the Linux OS can still account for roughly 80% of the container image's size.
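
As a rough sketch, an image of this kind might look like the following Dockerfile. The hello.wasm module, the Alpine base, and the choice of Wasmtime as the bundled runtime are illustrative assumptions, not prescriptions from the article.

```dockerfile
# Illustrative sketch: a slim Linux image that carries only a Wasm runtime
# and the Wasm module itself (file names and runtime choice are assumptions).
FROM alpine:3.19

# The module built earlier, e.g. with: GOOS=wasip1 GOARCH=wasm go build -o hello.wasm
COPY hello.wasm /app/hello.wasm

# A Wasm runtime binary built for the image's libc and CPU architecture.
COPY wasmtime /usr/local/bin/wasmtime

ENTRYPOINT ["/usr/local/bin/wasmtime", "/app/hello.wasm"]
```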

Run Wasm modules in container runtimes that have Wasm support

The advantage of embedding Wasm modules into a Linux container is seamless integration with existing environments while still benefiting from Wasm's performance improvements. However, compared to running Wasm modules directly in a Wasm-aware container runtime, this method is less efficient and less secure. Generally, container runtimes fall into two levels: low-level runtimes and high-level runtimes.

  • Low-level container runtime: an OCI-compliant implementation that receives a root filesystem (rootfs) and a configuration file (config.json) and executes an isolated process. Low-level runtimes directly create and run containers; examples include runc, crun, youki, gVisor, and Kata Containers.
  • High-level container runtime: responsible for pulling and managing container images, unpacking them, and handing them off to a low-level runtime to run the container. High-level runtimes simplify container management by abstracting away the complexities of low-level runtimes, which lets users drive various low-level runtimes through the same high-level interface. Two popular high-level container runtimes are containerd and CRI-O.

We can enable Wasm support in both low-level and high-level container runtimes. To run Wasm modules directly via a low-level runtime, several options such as crun and youki ship with built-in Wasm support. To run Wasm modules through a high-level runtime, both CRI-O and containerd are good options, and there are two possible approaches:

  • One is for the high-level runtime to keep delegating to a Wasm-enabled low-level runtime, which then executes the Wasm module.
  • The other relies on runwasi, a containerd subproject that makes it possible to build a containerd-wasm-shim which talks directly to a Wasm runtime such as WasmEdge or Wasmtime. This lets containerd run Wasm modules without going through a low-level container runtime at all, shortening the invocation path and improving efficiency (see the containerd configuration sketch after this list).
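
For instance, with a runwasi-based shim installed on the node, containerd can be pointed at it through its configuration. The snippet below is a minimal sketch for a WasmEdge shim; the runtime name and handler registration details vary between containerd and runwasi releases, so treat it as illustrative rather than definitive.

```toml
# /etc/containerd/config.toml (excerpt)
# Registers a Wasm runtime handler named "wasmedge", backed by a runwasi shim
# (e.g. a containerd-shim-wasmedge binary available on the node's PATH).
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge]
  runtime_type = "io.containerd.wasmedge.v1"
```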

Run Wasm modules on Kubernetes

WebAssembly is driving the third wave of cloud computing. As the de facto standard for container orchestration, Kubernetes continues to evolve to take advantage of what WebAssembly offers. To run Wasm workloads on Kubernetes, two key components are needed:

  • Worker nodes bootstrapped with a Wasm runtime. This setup can be achieved by integrating high-level container runtimes such as containerd and CRI-O with lower-level runtimes like crun and youki that support Wasm.
  • RuntimeClass objects that map workloads to nodes with a WebAssembly runtime. RuntimeClass addresses the problem of running multiple container runtimes in one Kubernetes cluster, where some nodes offer a Wasm runtime while others only offer regular container runtimes; you can use it to schedule Wasm workloads specifically onto Wasm-capable nodes, as sketched below.
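
Putting this together, a RuntimeClass and a workload that targets it might look roughly like the following. The handler name (wasmedge, matching the containerd handler registered earlier), the node label, and the image reference are assumptions that must match how your nodes are actually configured.

```yaml
# Maps the "wasmedge" handler configured in the node's container runtime
# to nodes that actually carry a Wasm runtime (label is an assumption).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmedge
handler: wasmedge
scheduling:
  nodeSelector:
    wasm: "true"
---
# A Pod that asks Kubernetes to run its container via the Wasm handler.
apiVersion: v1
kind: Pod
metadata:
  name: wasm-demo
spec:
  runtimeClassName: wasmedge
  containers:
    - name: demo
      image: example.registry.io/hello-wasm:latest   # hypothetical Wasm OCI image
```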

To enable Wasm support on Kubernetes nodes, we can use the Kwasm Operator to automate the process instead of manually installing a container runtime that bundles a Wasm runtime. Kwasm is a Kubernetes operator that automatically adds WebAssembly support to your Kubernetes nodes; under the hood it uses the kwasm-node-installer project to modify the underlying nodes.
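
At the time of writing, the project's quick start installs the operator with Helm and then opts nodes in via an annotation. The commands below follow that documented flow, but the repository URL, chart name, and annotation key may change between Kwasm releases, so check the project documentation before relying on them.

```bash
# Install the Kwasm operator (per the project's quick start; subject to change).
helm repo add kwasm http://kwasm.sh/kwasm-operator/
helm install -n kwasm --create-namespace kwasm-operator kwasm/kwasm-operator

# Opt all nodes in so kwasm-node-installer provisions a Wasm-capable runtime on them.
kubectl annotate node --all kwasm.sh/kwasm-node=true
```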

Conclusion

WebAssembly provides a fast, efficient, and secure way to execute code, while Kubernetes serves as a powerful container orchestration platform. "Cloud Native WebAssembly" means using Wasm on servers and in the cloud, with orchestration tools like Kubernetes handling the deployment and management of Wasm applications. By combining these technologies, we can build Cloud Native applications that are flexible, high-performance, scalable, and secure. This convergence opens up exciting possibilities, from advanced serverless architectures to edge computing solutions, while preserving compatibility and portability across environments.

Tags: KubeCon, k8s, CNCF, WebAssembly