-
WebAssembly on Kubernetes: from containers to Wasm (part 01)
Community blog by Seven Cheng. WebAssembly (Wasm) was originally created for the browser, but it has become increasingly popular on the server side as well. In my view, WebAssembly is gaining traction in the Cloud Native ecosystem because of its advantages over containers: smaller size, faster startup, enhanced security, and greater portability. In this article, I will provide a brief introduction to WebAssembly and explain its advantages. Then I will discuss how Wasm modules can be executed with container tooling, including low-level container runtimes, high-level container runtimes, and, in the next article, Kubernetes.…
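As a taste of the Kubernetes integration the series builds toward: a cluster whose container runtime has a Wasm-capable handler can route pods to it with a RuntimeClass. A minimal sketch, assuming containerd is configured with a handler named `wasmedge` (the handler name and image reference below are illustrative):

```yaml
# RuntimeClass mapping pods to an assumed "wasmedge" containerd handler
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmedge
handler: wasmedge
---
# A pod opting into the Wasm runtime via runtimeClassName
apiVersion: v1
kind: Pod
metadata:
  name: wasm-demo
spec:
  runtimeClassName: wasmedge
  containers:
    - name: demo
      # hypothetical OCI image that packages a Wasm module
      image: example.registry/wasm-demo:latest
```

The pod spec itself stays ordinary; only the `runtimeClassName` field selects the Wasm path, which is what makes this approach compatible with existing container tooling.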
-
Talk to WasmEdge at WasmIO 2024 in Barcelona and KubeCon EU 2024 in Paris
WasmEdge is set to make a splash at two of the most anticipated tech events of the year: WasmIO 2024 in Barcelona and KubeCon EU 2024 in Paris. With a series of engaging talks, workshops, and presentations lined up, these appearances highlight the growing importance of efficient, portable AI/LLM inference and cloud-native technologies. From deep dives into cloud-native WebAssembly to strategies for log processing and building business models around open-source projects, WasmEdge's sessions offer practical insights for developers, entrepreneurs, and tech enthusiasts looking to leverage the full potential of Wasm and AI technologies on the edge cloud and beyond.…
-
Getting Started with Llava-v1.6-Vicuna-7B
Llava-v1.6-Vicuna-7B is the open-source community's answer to OpenAI's multimodal GPT-4-V. It is also known as a Visual Language Model for its ability to handle both images and language in a conversation. The model is based on lmsys/vicuna-7b-v1.5. In this article, we will cover how to create an OpenAI-compatible API service for Llava-v1.6-Vicuna-7B. We will use LlamaEdge (the Rust + Wasm stack) to develop and deploy applications for this model. There are no complex Python packages or C++ toolchains to install!…
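Because the service is OpenAI-compatible, a client talks to it with the standard chat-completions request shape, where a multimodal message carries a list of text and image parts. A minimal sketch of such a request body (the endpoint URL, port, and model name are assumptions, not values from the article):

```python
import json

# Hypothetical local endpoint -- LlamaEdge's API server listens on
# whatever host/port you choose when starting it.
API_URL = "http://localhost:8080/v1/chat/completions"

# OpenAI-style chat request: one user message mixing text and an image.
payload = {
    "model": "llava-v1.6-vicuna-7b",  # illustrative model name
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    # placeholder image URL
                    "image_url": {"url": "https://example.com/cat.png"},
                },
            ],
        }
    ],
}

# Serialize to the JSON body you would POST to API_URL.
body = json.dumps(payload)
```

Any OpenAI-compatible client library should be able to send this same payload, which is the point of exposing the model behind this API shape.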
-
LlamaEdge released v0.4.0, adding RAG and Llava support
LlamaEdge v0.4.0 is out! Key enhancements:
- Support for the Llava series of VLMs (Visual Language Models), including Llava 1.5 and Llava 1.6
- Support for RAG services (i.e., the OpenAI Assistants API) in the LlamaEdge API server
- Simplified run-llm.sh script interactions to improve the onboarding experience for new users

Support for the Llava series of multimodal models: Llava is an open-source Visual Language Model (VLM). It supports multi-modal conversations, where the user can insert an image into a conversation and have the model answer questions based on the image.…
-
Getting Started with Qwen1.5-72B-Chat
Qwen1.5-72B-Chat, developed by Alibaba Cloud, has the following improvements over the previously released Qwen model, according to its Hugging Face page: significant performance improvement in human preference for chat models; multilingual support for both base and chat models; stable support of 32K context length for models of all sizes. It surpasses GPT-4 in 4 out of 10 benchmarks, based on an image on Qwen's GitHub page. In this article, taking Qwen1.5-72B-Chat as an example, we will cover…
-
Getting Started with Gemma-2b-it
Google open-sourced its Gemma model family yesterday, finally joining the open-source movement in large language models. Gemma-2b-it, like the Gemma-7b-it model we have discussed, is designed for a range of text generation tasks such as question answering, summarization, and reasoning. These lightweight, state-of-the-art models are built on the same technology as the Gemini models, offering text-to-text, decoder-only capabilities. They are available in English, with open weights, pre-trained variants, and instruction-tuned versions, making them suitable for deployment in resource-constrained environments.…
-
Getting Started with Gemma-7b-it
*Right now the Gemma-7b model is experiencing some issues. Please come back and try again later. Google announced the Gemma models Gemma-2b-it and Gemma-7b-it yesterday. Google's Gemma model family is designed for a range of text generation tasks such as question answering, summarization, and reasoning. These lightweight, state-of-the-art models are built on the same technology as the Gemini models, offering text-to-text, decoder-only capabilities. They are available in English, with open weights, pre-trained variants, and instruction-tuned versions, making them suitable for deployment in resource-constrained environments.…
-
Getting Started with Qwen1.5-0.5B-Chat
Qwen1.5-0.5B-Chat, developed by Alibaba Cloud, is a beta version of Qwen2, a transformer-based language model pretrained on a large amount of data. It offers improved performance in chat models, multilingual support, and stable support for 32K context length for models of all sizes. The model is designed for text generation and can be used for tasks such as post-training and continued pretraining. In this article, taking Qwen1.5-0.5B-Chat as an example, we will cover…
-
Getting Started with Neural-Chat-7B-v3-3
Neural-Chat-7B-v3-3, created by Intel, is a 7B-parameter LLM fine-tuned on the Intel Gaudi 2 processor from the Intel/neural-chat-7b-v3-1 model using the meta-math/MetaMathQA dataset. The model was aligned using the Direct Preference Optimization (DPO) method. It is intended for general language-related tasks, but may need further fine-tuning for specific applications. It adopts the Apache 2.0 license. In this article, we will cover:
- How to run Neural-Chat-7B-v3-3 on your own device
- How to create an OpenAI-compatible API service for Neural-Chat-7B-v3-3

We will use LlamaEdge (the Rust + Wasm stack) to develop and deploy applications for this model.…
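The DPO alignment mentioned above trains the policy directly on preference pairs rather than through a separate reward model. A minimal sketch of the per-pair DPO loss (argument names and the sample values are illustrative, not from Intel's training setup):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each argument is the summed log-probability of the chosen or
    rejected response under the trainable policy or the frozen
    reference model; beta scales the implicit reward.
    """
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(x)) written in the numerically stable form log(1 + exp(-x))
    return math.log1p(math.exp(-logits))

# When the policy prefers the chosen response more strongly than the
# reference does, the loss drops below log(2) (the value at indifference).
loss = dpo_loss(-10.0, -14.0, -11.0, -13.0, beta=0.5)
```

Minimizing this loss pushes the policy's margin between chosen and rejected responses above the reference model's margin, which is the entire alignment signal in DPO.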
-
Getting Started with Phi-2
Phi-2 by Microsoft is a 2.7-billion-parameter Transformer pushing the boundaries of language models! Unlike its predecessors, it excels at reasoning and understanding thanks to unique training data (augmented with a new data source consisting of various NLP synthetic texts and filtered websites) and avoids fine-tuning via human feedback. Open-source and powerful, Phi-2 empowers researchers to tackle crucial safety challenges in AI. In this article, we will cover…