No fear. No panic.
Only focus.
Trust your preparation.
Great things happen under pressure. 💪
— ArkDevLabs
Open4Bits Update
Gemma 3 270M (GGUF) has crossed 1,000 downloads.
Available in F32, F16, and quantized variants from Q4_K to Q8_0 for efficient deployment.
Model: huggingface.co/Open4bits/ge...
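For anyone grabbing one of the quantized files, here is a minimal sketch with llama-cpp-python. The repo id and filename below are assumptions (the link above is truncated), so check the model card for the exact names.

```python
# Minimal sketch, not an official recipe: repo id and filename are assumed.
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

llm = Llama.from_pretrained(
    repo_id="Open4bits/gemma-3-270m-gguf",  # assumed repo id; the post's link is truncated
    filename="*q8_0.gguf",                  # assumed filename pattern; pick the quant you need
    n_ctx=2048,                             # context window for this session
)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

Smaller quants like Q4_K trade a little quality for a smaller download and lower RAM use; Q8_0 stays closest to the F16 weights.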
🚀 BIGGEST DROP YET.
Introducing Open4bits/DeepSeek-R1-mlx-2Bit: a 685B-parameter DeepSeek R1 compressed to 2-bit, optimized for large-scale local inference in MLX format.
Our largest release ever.
Efficiency meets scale.
huggingface.co/Open4bits/De...
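If you want to try it on Apple silicon, here is a rough sketch with mlx-lm. The repo id is taken from the post name above; verify it on the model page before downloading.

```python
# Rough sketch with mlx-lm; the repo id is taken from the post name above.
# Note: even at 2-bit, a 685B-parameter model needs on the order of 170+ GB of
# unified memory, so this targets very large Apple silicon machines.
from mlx_lm import load, generate  # pip install mlx-lm

model, tokenizer = load("Open4bits/DeepSeek-R1-mlx-2Bit")
reply = generate(model, tokenizer, prompt="Why quantize a 685B model to 2-bit?", max_tokens=64)
print(reply)
```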
What Are Sitemaps?
New blog is live! Learn what sitemaps are and how they help search engines crawl your website better.
🔗 arkdevlabs.com/global/blog/...
#Sitemaps #SEO #WebDevelopment #ArkDevLabs
Introducing Open4bits/gpt-oss-20B-MLX-2Bit, a 2-bit quantized GPT-OSS 20B model optimized for efficient local inference with reduced memory and compute requirements.
huggingface.co/Open4bits/gp...
Introducing Open4bits/gpt-oss-120B-MLX-2Bit, a highly compressed 2-bit quantized GPT-OSS 120B model optimized for efficient local inference with reduced memory and compute requirements.
huggingface.co/Open4bits/gp...
Introducing **Open4bits/Schematron-3B-gguf**, a quantized GGUF release of Schematron-3B, built on meta-llama/Llama-3.2-3B-Instruct and fine-tuned by Inference-Net. Optimized for efficient local, CPU-friendly inference.
huggingface.co/Open4bits/Sc...
Introducing Open4bits/Ministral-3-3B-Base-2512-gguf, a quantized 3B-parameter Ministral 3 Base model optimized for efficient local inference and broad CPU compatibility, released in GGUF format.
huggingface.co/Open4bits/Mi...
Introducing Open4bits/llama3.2-1b-gguf, a quantized 1B-parameter LLaMA 3.2 model optimized for efficient local inference, released in GGUF format for broad CPU compatibility.
huggingface.co/Open4bits/ll...
Introducing Open4bits/EXAONE-4.0-1.2B-gguf, a quantized 1.2B EXAONE 4.0 model for efficient local inference with multilingual support, released in GGUF format.
huggingface.co/Open4bits/EX...
v1.2.0 is live.
This update focuses on refinement over noise—clearer content, better structure, improved accessibility, and stronger overall stability.
Building steadily, improving intentionally.
arkdevlabs.com
Released Open4bits/whisper-base-f16
FP16 Whisper Base (~74M params) for efficient, production-ready multilingual ASR.
huggingface.co/Open4bits/wh...
Dropped: Open4bits/whisper-tiny-f16
FP16 Whisper Tiny (~37.85M params) for fast, efficient multilingual speech-to-text.
huggingface.co/Open4bits/wh...
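A minimal sketch for running either FP16 Whisper release through the transformers ASR pipeline. Repo ids are taken from the post names above; confirm them on the model cards, and prefer a GPU or Apple MPS device since FP16 is slow on plain CPU.

```python
# Minimal sketch via the transformers ASR pipeline; repo ids taken from the posts above.
import torch
from transformers import pipeline  # pip install transformers torch

asr = pipeline(
    "automatic-speech-recognition",
    model="Open4bits/whisper-base-f16",   # or "Open4bits/whisper-tiny-f16" for the fastest option
    torch_dtype=torch.float16,
    device=0,                             # GPU index; use "mps" on Apple silicon
)

print(asr("sample.wav")["text"])          # sample.wav is a placeholder audio file
```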
Introduction to Artificial Intelligence
New blog is live! Learn what AI is and why it matters.
arkdevlabs.com/global/blog/...
#AI #ArtificialIntelligence #ArkDevLabs
New release!
Gemma-3-270M (GGUF) is now available for local AI workflows.
Grab it here 👇
huggingface.co/Open4bits/ge...
#LocalAI #GGUF #HuggingFace
New release!
Gemma-3-270M-IT (GGUF) is now live: the instruction-tuned variant, great for chat-style tasks and local inference.
👉 huggingface.co/Open4bits/ge...
#LocalAI #GGUF #HuggingFace
We’ve just released a new blog on GGUF 🚀
Learn what it is, why it matters, and how it’s shaping local AI models.
👉 arkdevlabs.com/global/blog/...
#AI #GGUF #MachineLearning #ArkDevLabs
Open4bits/Granite-4.0-H-Micro-FP4 is now available on Hugging Face.
The FP4 variant delivers extreme compression for highly memory-constrained inference environments.
Download:
huggingface.co/Open4bits/gr...
Open4bits/Granite-4.0-H-Micro-NF4 is now available on Hugging Face.
The NF4 variant uses 4-bit NormalFloat quantization to maximize memory efficiency with minimal quality loss.
Download:
huggingface.co/Open4bits/gr...
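For context, here is a minimal sketch of how a 4-bit NF4 load looks with transformers + bitsandbytes, assuming this variant loads like a standard 4-bit checkpoint; if the repo already ships its own quantization config, from_pretrained picks it up and the explicit config below can be dropped.

```python
# Minimal sketch, assuming a standard bitsandbytes 4-bit load; check the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # 4-bit NormalFloat; "fp4" selects the FP4 scheme
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for the actual matmuls
)

tok = AutoTokenizer.from_pretrained("Open4bits/Granite-4.0-H-Micro-NF4")
model = AutoModelForCausalLM.from_pretrained(
    "Open4bits/Granite-4.0-H-Micro-NF4",
    quantization_config=bnb,
    device_map="auto",
)

inputs = tok("What does NF4 quantization trade off?", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```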
Open4bits/Granite-4.0-H-Micro-INT8 is now available on Hugging Face.
The INT8 variant offers a balanced trade-off between performance, memory efficiency, and inference speed.
Download:
huggingface.co/Open4bits/gr...
Open4bits/Granite-4.0-H-Micro-FP16 is now available on Hugging Face.
The FP16 variant provides a high-fidelity baseline suitable for accurate inference and further experimentation.
Download:
huggingface.co/Open4bits/gr...
Open4bits/Granite-4.0-H-Micro-Quantized models are now available on Hugging Face.
Variants in multiple precisions (FP16, FP8, INT8, NF4, FP4) are provided to support efficient inference across diverse hardware and deployment environments.
Download:
huggingface.co/Open4bits/gr...