#Unsloth
Post image

We're excited to announce we've just made working with #Unsloth Studio on cloud GPUs way easier with our new dedicated template.

This means training and running models is as simple as working with your local device and as powerful as the hardware you want to use.

#LLM #MLOps

0 0 1 0
Preview
Mastering LLM Fine-Tuning with Unsloth Studio: 2x Faster Training and 70% Less VRAM The most pervasive bottleneck in modern LLM engineering isn’t data acquisition—it’s the brutal physics of GPU memory and compute cycles. If you are a Senior AI Engineer attempting to fine-tune a 20B o...

Mastering LLM Fine-Tuning with Unsloth Studio: 2x Faster Training and 70% Less VRAM
www.tiptinker.com/mastering-ll...

#unsloth #unslothstudio #LLM #AI

0 0 0 0
Preview
Unsloth Studio: The Open-Source LLM Studio To Try If you tried to glue together your own “local LLM stack” this year, you probably ended up with a cursed combo of llama.cpp, some Colab notebook for LoRAs, a random web UI, and three folders called new_new_final. Unsloth Studio is the first serious attempt to make that whole mess one coherent, local, point‑and‑click app, and that’s more important than “a nicer LM Studio clone”.

Enough gluing: Unsloth Studio turns your cursed local LLM stack into one open-source, point-and-click app for local fine-tuning, dataset creation and export. #Unsloth Studio #llama.cpp #LoRA

2 0 0 1
+-----------------------------------------+----------------------+----------------------+
|   1  Quadro RTX 4000                Off | 00000000:B3:00.0 Off |                  N/A |
| 80%   86C    P0             121W / 125W |   4931MiB /  8192MiB |     99%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+


Running #unsloth SFT on my poor old Quadro RTX 4000 is keeping it busy, though it looks like I could perhaps have started from a larger model. Still got some headroom on the memory there.
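As a back-of-envelope check on that headroom (my own rule of thumb, not the poster's numbers): weights-only VRAM is parameter count times bits per parameter, and everything else (KV cache, LoRA adapters, optimizer state) comes on top.

```python
def estimate_weight_vram_gib(n_params_billion: float, bits_per_param: float) -> float:
    """Rough VRAM needed just to hold the weights, in GiB.

    Ignores KV cache, LoRA adapters, optimizer state and activations,
    which add a workload-dependent margin on top.
    """
    bytes_total = n_params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 2**30

# A 7B model in 4-bit needs roughly 3.3 GiB for the weights alone,
# which is why it fits an 8 GiB Quadro RTX 4000 with room to spare.
print(round(estimate_weight_vram_gib(7, 4), 1))
```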

0 0 0 0
Post image

Fine-tuning Qwen-8B for a proprietary syntax (CADINP) on a single RTX 3090: a structural engineer's experience. Is it possible on a single ...

#LLM #fine-tuning #local #neuralnetworks #RTX #3090 #Unsloth #Qwen #DeepSeek #GGUF #SOFiSTiK


0 0 0 0
Post image Post image Post image

🧠 #Unsloth has developed a more efficient approach to training #GPT-OSS via reinforcement learning and GRPO.
👉 Details: www.linkedin.com/posts/alessi...

#AI #GenAI #GenerativeAI #IntelligenzaArtificiale #LLM

0 0 1 0
Post image

Playing around with #AI model fine tuning using #MLX on my MacBook. Kind of got somewhere, but it uses a lot of system resources, which isn't ideal on my daily driver. Going to try #unsloth on a Windows machine that can just be left running for as long as needed and see what happens

0 0 0 0
Preview
Unlocking AI Potential: Fine-Tuning for Specialized Tasks Discover how fine-tuning can enhance AI model accuracy for specific tasks, and explore the tools making this process more accessible.

Modern workflows are being reshaped by generative AI — from automation to decision-making and creative production. ⚙️🤖
A practical look at how AI fits into real-world workflows 👇
techlife.blog/posts/modern...

#AI #MachineLearning #OpenSource #NVIDIA #Unsloth

1 0 0 0
Ollama Model File Create gguf model and push it to ollama or huggingface YouTube video by Mike Møller Nielsen

Create an Ollama compliant model and make it accessible to the world! Sharing is caring. :-) youtu.be/grCeXX-N_Gg #ollama #python #llm #machinelearning #ai #unsloth
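The gist of the Ollama side of that workflow, sketched as a minimal Modelfile builder. The GGUF path and prompt here are illustrative, not taken from the video; `ollama create mymodel -f Modelfile` would then read the generated file.

```python
def make_modelfile(gguf_path: str, system_prompt: str, temperature: float = 0.7) -> str:
    """Build a minimal Ollama Modelfile for a locally exported GGUF."""
    return "\n".join([
        f"FROM {gguf_path}",                     # local GGUF produced by the fine-tune
        f'SYSTEM """{system_prompt}"""',         # default system prompt baked into the model
        f"PARAMETER temperature {temperature}",  # sampling temperature
    ])

modelfile = make_modelfile("./unsloth.Q4_K_M.gguf", "You are a helpful assistant.")
print(modelfile)
```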

2 0 0 0
Finetune LLM Model With Unsloth. Mention Mike When The Word Denmark Is Mentioned. Use Alpaca Template YouTube video by Mike Møller Nielsen

youtu.be/Yl10VgSm_MI
Finetune a LLM with your own data praising yourself. Then see the result when running the model afterwards! #unsloth #ai #python #ollama #llm
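The "Alpaca template" the title presumably refers to is the standard instruction/input/response prompt format; a minimal formatter looks like this (the example strings are mine, not from the video):

```python
ALPACA_PROMPT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{response}"
)

def format_alpaca(instruction: str, input_text: str = "", response: str = "") -> str:
    """Render one training example in the Alpaca prompt format."""
    return ALPACA_PROMPT.format(instruction=instruction, input=input_text, response=response)

example = format_alpaca("Who is famous in Denmark?", response="Mike, of course.")
print(example)
```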

0 0 0 0

I'm glad that the local model, running at one token every 15 seconds, after spending around 4 hours on ~2500 output tokens, estimates towards the end of our session that the whole thing took, and I quote the ollama output, «approximately **1 minute**» #huggingface #unsloth

0 0 1 0
Preview
Master the Art: Fine-tune LLM Unsloth Ollama in 7 Simple Steps for Peak Local Performance In the rapidly evolving world of artificial intelligence, customizing Large Language Models (LLMs) to perform specific tasks with exceptional accuracy is no longer just for large corporations. With powerful tools like Unsloth and Ollama, you can now **Fine-tune LLM Unsloth Ollama** directly on your local machine, bringing sophisticated AI capabilities into your personal projects or small-scale applications. This comprehensive tutorial will guide you through the entire process, from setting up your environment to deploying your custom-trained model locally using Ollama.

Unlock local AI power! Fine-tune LLM Unsloth Ollama in 7 steps! Boost performance & customize your models. #LLM #FineTuning #Unsloth #Ollama #AI

1 0 0 0
Post image

New feature for my BubbleUI project: you can now use
@unsloth.ai models with a custom API endpoint and a free Colab account!

Check it out: github.com/KenoLeon/Bub...

#AI #LLM #Unsloth #OpenSource #BubbleUI #UserExperience #UserInterface #MachineLearning #ChatUI #developer

2 1 0 0
Post image

Finetuning Qwen 3 on an RTX 4090: a complete guide to training an LLM with Unsloth 💡 What this article is about: In this article I break down how ...

#unsloth #ml #ai


0 1 0 0

5/8
For query generation, we fine-tune 4-bit quantized LLaMA-3 models (1B, 3B, 8B) using LoRA—
enabling efficient training on a single RTX A5000 using the Unsloth AI library.
For dense retrieval, we use e5-small-v2 as the text encoder.
#LoRA #LLaMA3 #Unsloth
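For scale, LoRA's trainable parameter count per frozen linear layer is r·(d_in + d_out); a sketch with illustrative dimensions (not the paper's actual config):

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters LoRA adds to one frozen linear layer.

    The weight W (d_out x d_in) stays frozen (here in 4-bit); only the
    low-rank factors A (r x d_in) and B (d_out x r) are trained.
    """
    return r * (d_in + d_out)

# A 4096x4096 attention projection at rank 16 trains ~131k parameters
# instead of the layer's ~16.8M, which is what makes a single
# RTX A5000 workable.
print(lora_params(4096, 4096, 16))
```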

0 0 1 0
Post image

• 💻 Light enough to run locally on a single #RTX4090 or #Mac with 32GB #RAM while powerful enough for enterprise use on privacy-sensitive repositories

• 🚀 Released under #Apache2 license and available free on #HuggingFace, #Ollama, #Kaggle, #Unsloth, and #LMStudio

0 1 1 0
Post image

🧠 #Unsloth has shared a free Colab notebook that lets you train #Llama 3.2 3B on specific documents, using a very interesting technique.
👉 Details and the Colab: www.linkedin.com/posts/alessi...

#AI #GenAI #GenerativeAI #IntelligenzaArtificiale #LLM

0 0 1 0
Original post on medium.com

Fine-Tuning LLaMA Models on Colab Made Simple - Train with Your Own Dataset Using Unsloth, PEFT...

medium.com/@bhaskaro/fine-tuning-ll...

#machine-learning #ai […]

1 0 0 0
Preview
Unsloth challenge 5 — Memory Efficient Backprop Hi folks, this is Sambhav Dixit, your neighbourhood open-source contributor and ML guy, and this time I'm back with the solution to get…


medium.com/@indosambhav/unsloth-cha...

#python #llm-finetuning #unsloth #machine-learning #pytorch


0 0 0 0
Fine-tune DeepSeek-R1 to build an SQL-to-natural-language model! Even a beginner can build their own reasoning model in ten minutes! An easy start with unsloth + Colab + DeepSeek-R1-Distill-Llama-8B YouTube video by AI超元域

🚀 Fine-tuning DeepSeek-R1-Distill-Llama-8B with unsloth + Colab works well. What I built is a fine-tuned model that can explain SQL; with it, an SQL beginner can have all kinds of complex SQL statements explained at zero cost and get up to speed quickly #DeepSeekR1 #finetune #unsloth

youtu.be/MpTxJLcViuU
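A minimal sketch of the data-prep step such a fine-tune implies: turning (SQL, explanation) pairs into chat messages. The field names follow the common OpenAI-style convention, not anything shown in the video.

```python
def to_chat_example(sql: str, explanation: str) -> list:
    """Turn one (SQL, explanation) pair into the chat-message format
    most SFT trainers accept."""
    return [
        {"role": "user", "content": f"Explain this SQL statement:\n{sql}"},
        {"role": "assistant", "content": explanation},
    ]

example = to_chat_example(
    "SELECT name FROM users WHERE age > 21;",
    "Selects the name of every user older than 21.",
)
print(example[0]["content"])
```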

0 0 0 0
Preview
Train your own R1 reasoning model locally You can now reproduce your own DeepSeek-R1 reasoning model with Unsloth 100% locally. Using GRPO. Open-source, free and beginner friendly.

Train your own R1 reasoning model with Unsloth.
#ai #reasoning #unsloth #opensource #locally
https://unsloth.ai/blog/r1-reasoning
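The core of GRPO is easy to state: sample a group of completions per prompt and normalize each completion's reward against its own group, so no separate critic/value network is needed. A minimal sketch of that advantage computation (the toy rewards are mine):

```python
from statistics import mean, pstdev

def grpo_advantages(rewards: list) -> list:
    """Group-relative advantages as used in GRPO: each sampled
    completion's reward is standardized within its own group."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # avoid division by zero for uniform groups
    return [(r - mu) / sigma for r in rewards]

# Four sampled answers to one prompt, scored by a rule-based reward:
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))
```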

0 0 0 0
Preview
Train your own R1 reasoning model locally You can now reproduce your own DeepSeek-R1 reasoning model with Unsloth 100% locally. Using GRPO. Open-source, free and beginner friendly.

Opinion: exciting times ahead. I wonder whether the opposite can be done, introducing negative reinforcement to restrict model output to security/compliance policies.

#unsloth #reinforcement-learning #ai #artificial-intelligence #agentic-system

3 0 0 0
Post image

Running the DeepSeek LLM locally. Everyone has already heard about the new ...

habr.com/ru/articles/878836/

#llm #deepseek #unsloth #deploy #distillation #inference #chat #bot


0 0 0 0
🐋 Run DeepSeek R1 Dynamic 1.58-bit with Llama.cpp | Open WebUI A huge shoutout to UnslothAI for their incredible efforts! Thanks to their hard work, we can now run the full DeepSeek-R1 671B parameter model in its dynamic 1.58-bit quantized form (compressed to jus...

Open WebUI (github.com/open-webui/o...) now has a helpful step-by-step plan online to get you going:

#AI #DeepSeek #Unsloth #OpenWebUI
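The arithmetic behind that headline compression, as a weights-only approximation (the real checkpoint also carries metadata, and the quantization is mixed-precision, so treat this as a ballpark):

```python
def quantized_size_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate on-disk size of a quantized model, weights only."""
    return n_params * bits_per_param / 8 / 1e9

# 671B parameters at an average of 1.58 bits is roughly 133 GB of weights,
# versus ~1.3 TB at FP16; the "dynamic" part keeps the most
# quantization-sensitive layers at higher precision.
print(round(quantized_size_gb(671e9, 1.58)))
```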

0 0 0 0

A dynamically quantized model that cuts DeepSeek-R1's size by up to 80% is now available #Gigazine (Jan 29)

#DeepSeek #quantization #AImodels #opensource #unsloth

0 1 0 0
Preview
Finetune Phi-4 with Unsloth Fine-tune Microsoft's new Phi-4 model with Unsloth! We've also found & fixed 4 bugs in the model.

#Unsloth fixes bugs in Phi-4, dramatically improving the performance of the small #LLM. Unsloth continues to impress.

10 0 0 0