AI as A Service on WebAssembly

• 2 minutes to read

AI inference is an ideal use case for Function as a Service (FaaS). The inference operation is transactional, often happens on the edge, needs to scale up and down quickly, and integrates with other network services for input and output.

However, AI inference is also ill-suited for today's FaaS infrastructure.

  • Inference is computationally intensive and time sensitive. FaaS today suffers from long cold start times and poor runtime performance due to heavy runtime software stacks and inefficient programming languages. Native binary executables are poorly supported in most FaaS offerings due to safety and portability concerns.
  • Inference needs to access specialized hardware for performance, requiring more sophisticated security models than today’s containers or system microVMs. The FaaS environment does not expose host hardware to its hosted functions.

WebAssembly is much faster than typical FaaS languages such as JavaScript and Python. However, the standard WebAssembly sandbox provides very limited access to the native OS and hardware, such as multi-core CPUs, GPUs, and specialized AI inference chips. It is still not ideal for AI workloads.

With the Second State VM (SSVM), we extend the WebAssembly sandbox security model to support native “command” modules. This hybrid model enables Second State FaaS to safely run WebAssembly functions that perform AI inference at full native speed. If you are interested in learning more about this approach, check out this tutorial.

For Second State FaaS users, the FaaS pre-installs native commands that are reviewed and approved by Second State. You can simply make API calls to those commands in your code. Let’s look into some concrete examples next! If you are interested in contributing native SSVM commands for AI model execution, ask us about the “AI model zoo” coding challenge and win a prize!
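To illustrate the calling pattern, here is a minimal sketch in Rust. The function names and the returned scores are hypothetical: in a real SSVM deployment, the inference call would be backed by a pre-installed, reviewed native command, while here it is a local stub so the sketch compiles and runs on its own.

```rust
// Hypothetical stand-in for a pre-installed SSVM native command.
// A real command would run the model on native hardware (multi-core CPU,
// GPU, or AI chip) and return class scores; this stub returns fixed values.
fn run_native_inference(_model: &str, _input: &[u8]) -> Vec<(String, f32)> {
    vec![("cat".to_string(), 0.92), ("dog".to_string(), 0.08)]
}

// The FaaS entry point stays ordinary Rust compiled to WebAssembly.
// Only the heavy inference crosses the native "command" boundary.
fn classify(image_bytes: &[u8]) -> String {
    let scores = run_native_inference("mobilenet_v2", image_bytes);
    scores
        .into_iter()
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(label, score)| format!("{} ({:.0}%)", label, score * 100.0))
        .unwrap_or_else(|| "unknown".to_string())
}

fn main() {
    // Dummy input bytes standing in for an uploaded image.
    println!("{}", classify(&[0u8; 4]));
}
```

The point of the split is that the WebAssembly side keeps the safety and portability guarantees, while the approved native command supplies the performance.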

