-
Run FLUX.1 [schnell] on your MacBook
FLUX.1 is an open-source image generation model developed by Black Forest Labs, a company founded by the original creators of Stable Diffusion. They recently released FLUX.1 [schnell], a lightweight, high-speed variant designed for local use, ideal for personal projects, and licensed under Apache 2.0. With WasmEdge's 0.14.1 release, which adds Stable Diffusion plugin support, you can use LlamaEdge (the Rust + Wasm stack) to run the FLUX.1 [schnell] and Stable Diffusion models and generate images directly on your machine, without installing complex Python packages or C++ toolchains!…
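Once the local image server from the full post is up, generating a picture is a single HTTP call. The sketch below is illustrative only: the localhost:8080 address, the OpenAI-style /v1/images/generations route, and the model name flux1-schnell are assumptions, not details taken from this excerpt, so check the full tutorial for the exact flags and paths.

```python
# Illustrative sketch: ask a locally running LlamaEdge image server for a picture.
# ASSUMPTIONS (not from this excerpt): the server listens on localhost:8080 and
# exposes an OpenAI-style /v1/images/generations endpoint; the model name is hypothetical.
import json
import urllib.request

payload = {
    "model": "flux1-schnell",                        # hypothetical model name
    "prompt": "a watercolor fox in a snowy forest",  # text prompt for the image
    "n": 1,                                          # number of images to generate
}

req = urllib.request.Request(
    "http://localhost:8080/v1/images/generations",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    # The response body is JSON describing the generated image(s).
    print(json.loads(resp.read().decode("utf-8")))
```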
-
Getting started with Qwen2.5-14B
The Qwen2.5 series includes open models ranging from 0.5B to 72B parameters, optimized for diverse tasks like coding, logical reasoning, and natural language understanding. The smaller models (0.5B, 1.5B, 3B, 7B, 14B) suit edge devices, while the larger ones (32B, 72B) target enterprise use. The series brings significant improvements in instruction following, logical reasoning, and support for over 29 languages. The models offer long-context support (up to 128K input tokens and over 8K generated tokens) and can produce structured outputs like JSON.…
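Because the series supports structured output, a common first test is to ask a locally served Qwen2.5 model for JSON through an OpenAI-compatible chat endpoint. The sketch below is a rough illustration: the localhost:8080 address, the /v1/chat/completions route, and the model name are assumptions rather than values from this excerpt.

```python
# Illustrative sketch: request JSON output from a locally served Qwen2.5 model.
# ASSUMPTIONS (not from this excerpt): an OpenAI-compatible server on localhost:8080
# and the model name "Qwen2.5-14B-Instruct"; adjust both to match your setup.
import json
import urllib.request

payload = {
    "model": "Qwen2.5-14B-Instruct",
    "messages": [
        {"role": "system", "content": "Answer with a single JSON object and nothing else."},
        {"role": "user", "content": "Give the capital and population of France as JSON."},
    ],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read().decode("utf-8"))
    # The assistant's JSON string is in the first choice's message content.
    print(reply["choices"][0]["message"]["content"])
```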
-
Tutorial: Run Yi-Coder as a private coding assistant
Yi-Coder is an open-source, high-performance code language model designed for efficient coding. It supports 52 programming languages and excels at tasks that require long-context understanding, such as project-level code comprehension and generation. The model comes in two sizes, 1.5B and 9B parameters, and is available in both base and chat versions. In this tutorial, you'll learn how to run the Yi-Coder model locally with an OpenAI-compatible API and use it to power Cursor, one of the hottest AI code editors.…
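Since the tutorial exposes Yi-Coder behind an OpenAI-compatible API, any OpenAI-style client can talk to it. Here is a rough sketch of such a client call; the localhost:8080 address and the model name are assumptions, not values taken from the tutorial.

```python
# Illustrative sketch: send a coding question to a locally served Yi-Coder model
# through an OpenAI-compatible chat endpoint.
# ASSUMPTIONS (not from this excerpt): server at localhost:8080, model name "Yi-Coder-9B-Chat".
import json
import urllib.request

payload = {
    "model": "Yi-Coder-9B-Chat",
    "messages": [
        {"role": "user", "content": "Write a Rust function that reverses a string."},
    ],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read().decode("utf-8"))
    # Print the model's answer from the first choice.
    print(reply["choices"][0]["message"]["content"])
```

Cursor can typically be pointed at the same base URL through its custom OpenAI API settings, which is what lets the editor use the local Yi-Coder server instead of a hosted model.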
-
10 More Free Linux Foundation Certification Exam or Course Vouchers
Follow-Up Offer: 10 More Free Vouchers Up for Grabs! We have reached out to the winners of the last 10 vouchers: new contributors from the past 6 months. Check your inbox and confirm by replying to our email! If you think you're eligible but have not received an email, please reach out to furao@secondstate.io with a link to your merged Pull Request! After the success of our previous giveaway, Second State is thrilled to announce a new round of opportunities for open-source contributors.…
-
Getting Started with Phi-3.5-mini-instruct
Phi-3.5-mini is a cutting-edge, lightweight member of the Phi-3 family, designed to handle contexts of up to 128K tokens with high efficiency. Built from a mix of synthetic data and carefully filtered web content, the model excels at high-quality, reasoning-intensive tasks. Its development involved supervised fine-tuning and optimization strategies such as proximal policy optimization and direct preference optimization, which strengthen instruction adherence and safety behavior.…
-
Win Gifts from Second State/WasmEdge at KubeCon+CloudNativeCon+OSSummit+AI_dev 2024
The GenAI, cloud-native, and open-source community is eagerly anticipating the upcoming KubeCon + CloudNativeCon + Open Source Summit + AI_dev China 2024, set to take place in Hong Kong on August 21-23. This event promises to be a remarkable gathering of open-source luminaries, including the legendary Linus Torvalds and other star speakers. Developers will have the rare opportunity for face-to-face interactions with these influential figures, as well as with Jim Zemlin, the CEO of the Linux Foundation, and other industry leaders.…
-
Advance Your Skills with WasmEdge LFX Mentorship 2024 Fall: LLMs, Trading Bots and More
The 2024 Term 3 of the LFX Mentorship program is here, and it's packed with exciting opportunities! Running from September to November, this program invites passionate developers to contribute to open source while boosting their CVs. WasmEdge is offering 4 projects that give aspiring developers an exciting opportunity to work on cutting-edge problems within the WasmEdge ecosystem, with a focus on enhancing WebAssembly (Wasm) capabilities, improving software reliability, and integrating modern AI techniques to build innovative applications.…
-
Getting Started with Llama 3.1
The newly released Llama 3.1 LLMs are Meta's “most capable models to date”. The largest, a 405B model, is the first open-source LLM to match or exceed the performance of SOTA closed-source models such as GPT-4o and Claude 3.5 Sonnet. While the 405B model is probably too big for personal computers, Meta has used it to further train and finetune the smaller Llama 3 models. The results are spectacular! Compared with Llama 3 8B, the Llama 3.…
-
Mathstral: A New LLM that is Good at Math Reasoning
Today, Mistral AI released Mathstral, a finetuned 7B model specifically designed for math reasoning and scientific discovery. The model has a 32K context window, and the model weights are available under the Apache 2.0 license. As we have seen, leading-edge LLMs such as GPT-4o can solve very complex math problems. But do they have common sense? A meme that has been going around the Internet suggests that LLMs can only pretend to solve “math Olympiad level” problems, since they lack understanding of even elementary-school math.…
-
Getting Started with internlm2_5-7b-chat
The internlm2_5-7b-chat model, a new open-source release from SenseTime, pairs a 7-billion-parameter base model with a chat model designed for practical applications. It shows exceptional reasoning capability, achieving state-of-the-art results on math reasoning tasks and outperforming competitors like Llama3 and Gemma2-9B. With a remarkable 1M-token context window, InternLM2.5 excels at processing extensive data, leading long-context challenges such as LongBench. The model is also capable of tool use, integrating information from over 100 web sources, with enhanced instruction adherence, tool selection, and reflection.…