AI inference is a computationally intensive task that could benefit greatly from the speed of Rust and WebAssembly. However, the standard WebAssembly sandbox provides very limited access to the native OS and hardware, such as multi-core CPUs, GPUs, and specialized AI inference chips. That makes it poorly suited for AI workloads.
The popular WebAssembly System Interface (WASI) provides a design pattern for sandboxed WebAssembly programs to securely access native host functions. The Second State VM (SSVM) extends the WASI model to give WebAssembly programs access to native TensorFlow libraries. It offers the security, portability, and ease of use of WebAssembly combined with native execution speed for TensorFlow.
The WASI-like extension for TensorFlow is available in all SSVM-based applications, including Node.js services and the Second State FaaS (Function as a Service). AI inference is especially well suited for FaaS since it is transactional, often happens at the edge, needs to scale up and down quickly, and integrates with other network services for input and output. The Second State FaaS makes it easy to move your TensorFlow models into production as a web service.
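As a rough illustration of the pattern, a Rust function compiled to WebAssembly might invoke the SSVM TensorFlow extension along these lines. This is a sketch under assumptions: the crate name (`ssvm_tensorflow_interface`), the `Session` API, and the tensor names shown are drawn from Second State's examples and may differ from the current API, and the code only runs inside the SSVM host runtime, not as a standalone binary. The tutorials below have the exact, up-to-date code.

```rust
// Hypothetical sketch: running a TensorFlow Lite model through the
// SSVM host-function extension. The crate name, `Session` API, and
// tensor names are assumptions; see the tutorials for real examples.
use ssvm_tensorflow_interface::{ModelType, Session};

fn classify(model_data: &[u8], flat_img: &[u8]) -> Vec<f32> {
    // Build a session around the model bytes. The inference itself
    // executes in the native TensorFlow library on the host, not in Wasm.
    let mut session = Session::new(model_data, ModelType::TensorFlowLite);
    session
        .add_input("input", flat_img, &[1, 192, 192, 3]) // NHWC image tensor
        .run();
    // Read back the probability vector computed by the native library.
    session.get_output("MobilenetV1/Predictions/Reshape_1")
}
```

The Wasm module itself stays portable and sandboxed; only the whitelisted TensorFlow host functions cross the sandbox boundary, which is what preserves security while delivering native speed.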
Interested? Check out the tutorials below.