We're excited to announce we've just made working with #Unsloth Studio on cloud GPUs way easier with our new dedicated template.
This means training and running models is as simple as working with your local device and as powerful as the hardware you want to use.
#LLM #MLOps
Mastering LLM Fine-Tuning with Unsloth Studio: 2x Faster Training and 70% Less VRAM
www.tiptinker.com/mastering-ll...
#unsloth #unslothstudio #LLM #AI
Enough gluing: Unsloth Studio turns your cursed local LLM stack into one open-source, point-and-click app for local fine-tuning, dataset creation and export. #Unsloth Studio #llama.cpp #LoRA
+-----------------------------------------+----------------------+----------------------+
|   1  Quadro RTX 4000              Off   | 00000000:B3:00.0 Off |                  N/A |
| 80%   86C    P0          121W /  125W   |   4931MiB /  8192MiB |     99%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
Running #unsloth SFT on my poor old Quadro RTX 4000 is keeping it busy, but it does look like I could have perhaps used a larger model to start from. Still got some headroom on the memory there.
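That headroom observation can be sanity-checked with a back-of-envelope calculation: a 4-bit quantized model needs roughly 0.5 bytes per parameter for weights, plus room for activations, the KV cache, and the CUDA context. A minimal sketch, where the 0.5 bytes/parameter figure and the flat 1500 MiB overhead allowance are rough assumptions, not Unsloth specifics:

```python
def fits_in_vram(params_billion: float, vram_mib: int = 8192,
                 overhead_mib: int = 1500) -> bool:
    """Rough check: does a 4-bit quantized model of this size fit?

    Assumes ~0.5 bytes/parameter for 4-bit weights, plus a flat
    allowance for activations, KV cache, LoRA adapter state, and
    CUDA context. Purely a back-of-envelope heuristic.
    """
    weights_mib = params_billion * 1e9 * 0.5 / (1024 ** 2)
    return weights_mib + overhead_mib <= vram_mib

# On the 8 GiB Quadro RTX 4000 above, a 3B model in 4-bit
# (~1430 MiB of weights) leaves plenty of room; a 15B does not.
print(fits_in_vram(3))
print(fits_in_vram(15))
```

By this estimate an 8B model in 4-bit (~3815 MiB of weights) would still fit on the card, which matches the "could have used a larger model" hunch.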
Fine-tuning Qwen-8B for a proprietary syntax (CADINP) on a single RTX 3090: a structural engineer's experience. Is it possible on a single ...
#LLM #fine-tuning #local #neuralnetworks #RTX #3090 #Unsloth #Qwen #DeepSeek #GGUF #SOFiSTiK
🧠 #Unsloth has developed a more efficient approach to training #GPT-OSS via reinforcement learning and GRPO.
👉 Details: www.linkedin.com/posts/alessi...
#AI #GenAI #GenerativeAI #IntelligenzaArtificiale #LLM
Playing around with #AI model fine-tuning using #MLX on my MacBook. Kind of got somewhere, but it uses a lot of system resources, which isn't ideal on my daily driver. Going to try #unsloth on a Windows machine that can be left running for as long as needed and see what happens
Modern workflows are being reshaped by generative AI — from automation to decision-making and creative production. ⚙️🤖
A practical look at how AI fits into real-world workflows 👇
techlife.blog/posts/modern...
#AI #MachineLearning #OpenSource #NVIDIA #Unsloth
Create an Ollama compliant model and make it accessible to the world! Sharing is caring. :-) youtu.be/grCeXX-N_Gg #ollama #python #llm #machinelearning #ai #unsloth
youtu.be/Yl10VgSm_MI
Finetune a LLM with your own data praising yourself. Then see the result when running the model afterwards! #unsloth #ai #python #ollama #llm
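A "praising yourself" dataset like the one in the video can be assembled with nothing but the standard library, as Alpaca-style JSONL. A minimal sketch: the instruction/input/output field names follow the common Alpaca convention seen in many Unsloth example notebooks, and the name and phrasing are placeholders to swap for your own:

```python
import json

# Hypothetical examples: replace "Alex" and the text with your own data.
records = [
    {"instruction": "Who is the best programmer you know?",
     "input": "",
     "output": "Without question, Alex: elegant code, flawless reviews."},
    {"instruction": "Describe Alex in one sentence.",
     "input": "",
     "output": "Alex ships clean, well-tested software at an absurd pace."},
]

# One JSON object per line: the JSONL layout most finetuning
# dataset loaders accept directly.
with open("praise.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

A couple of dozen varied examples of this shape is usually enough to see the effect in a quick LoRA run.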
I'm glad that, after spending around 4 hours on ~2500 output tokens at a rate of one token every 15 seconds, the local model's estimate towards the end of our session was that the whole thing took, and I quote the ollama output, «approximately **1 minute**» #huggingface #unsloth
Unlock local AI power! Fine-tune an LLM with Unsloth and Ollama in 7 steps! Boost performance & customize your models. #LLM #FineTuning #Unsloth #Ollama #AI
New feature for my BubbleUI project : you can now use
@unsloth.ai models with a custom API endpoint and a free colab account !
Check it out : github.com/KenoLeon/Bub...
#AI #LLM #Unsloth #OpenSource #BubbleUI #UserExperience #UserInterface #MachineLearning #ChatUI #developer
Finetuning Qwen 3 on an RTX 4090: a complete guide to training an LLM with Unsloth 💡 What this article covers: In this article I break down how, using ...
#unsloth #ml #ai
5/8
For query generation, we fine-tune 4-bit quantized LLaMA-3 models (1B, 3B, 8B) using LoRA—
enabling efficient training on a single RTX A5000 using the Unsloth AI library.
For dense retrieval, we use e5-small-v2 as the text encoder.
#LoRA #LLaMA3 #Unsloth
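Why LoRA fits on a single RTX A5000 comes down to arithmetic: only the low-rank adapter factors train, so trainable parameters scale with the rank r, not with model size. A sketch of the count for one weight matrix; this is generic LoRA math, not tied to any particular Unsloth API, and the 4096-dim, rank-16 numbers are illustrative:

```python
def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    """LoRA keeps the d_in x d_out weight frozen and learns the
    update as two low-rank factors A (d_in x r) and B (r x d_out);
    only A and B are trained."""
    return d_in * r + r * d_out

# One 4096x4096 attention projection at rank 16:
full = 4096 * 4096                            # 16,777,216 frozen weights
lora = lora_trainable_params(4096, 4096, 16)  # 131,072 trainable weights
print(f"trainable fraction: {lora / full:.4%}")
```

Repeated across every adapted projection in a 1B-8B model, the trainable fraction stays well under 1%, which is what makes single-GPU training practical.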
• 💻 Light enough to run locally on a single #RTX4090 or #Mac with 32GB #RAM while powerful enough for enterprise use on privacy-sensitive repositories
• 🚀 Released under #Apache2 license and available free on #HuggingFace, #Ollama, #Kaggle, #Unsloth, and #LMStudio
🧠 #Unsloth has shared a free Colab notebook that lets you train #Llama 3.2 3B on specific documents, using a very interesting technique.
👉 Details and the Colab: www.linkedin.com/posts/alessi...
#AI #GenAI #GenerativeAI #IntelligenzaArtificiale #LLM
Fine-Tuning LLaMA Models on Colab Made Simple - Train with Your Own Dataset Using Unsloth, PEFT...
medium.com/@bhaskaro/fine-tuning-ll...
#machine-learning #ai […]
Unsloth challenge 5 — Memory Efficient Backprop Hi folks, this is Sambhav Dixit, your n...
medium.com/@indosambhav/unsloth-cha...
#python #llm-finetuning #unsloth #machine-learning #pytorch
🚀 Fine-tuned DeepSeek-R1-Distill-Llama-8B with unsloth + Colab, with good results. I fine-tuned a model that can explain SQL: with it, SQL beginners can get complex SQL statements explained at zero cost and ramp up quickly. #DeepSeekR1 #finetune #unsloth
youtu.be/MpTxJLcViuU
Train your own R1 reasoning model with Unsloth.
#ai #reasoning #unsloth #opensource #locally
https://unsloth.ai/blog/r1-reasoning
Opinion: exciting times ahead. I wonder whether the opposite can be done: introduce negative reinforcement to restrict model output to security/compliance policies.
#unsloth #reinforcement-learning #ai #artificial-intelligence #agentic-system
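The restriction idea in that post maps naturally onto GRPO's reward function: besides rewarding well-formed reasoning, you can assign negative reward to outputs that violate a policy. A toy sketch of such a reward; the `<think>` tag convention mirrors R1-style outputs, and the banned-phrase list and weights are invented for illustration, not from Unsloth's blog:

```python
import re

BANNED = ("rm -rf", "DROP TABLE")  # hypothetical compliance blacklist

def reward(completion: str) -> float:
    """Score one completion: +1 for a well-formed <think>...</think>
    reasoning block, -2 for any policy violation (the negative
    reinforcement part)."""
    score = 0.0
    if re.search(r"<think>.*?</think>", completion, re.DOTALL):
        score += 1.0
    if any(phrase in completion for phrase in BANNED):
        score -= 2.0
    return score

print(reward("<think>safe plan</think> Here is the answer."))  # 1.0
print(reward("<think>ok</think> just run rm -rf /"))           # -1.0
```

A function of this shape can be plugged in wherever a GRPO trainer accepts per-completion rewards; the weights decide how strongly the policy term dominates the formatting term.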
Running the DeepSeek LLM locally. Everyone has already heard about the new ...
habr.com/ru/articles/878836/
#llm #deepseek #unsloth #deploy #distillation #inference #chat #bot
Open WebUI (github.com/open-webui/o...) now has a helpful step-by-step plan online to get you going:
#AI #DeepSeek #Unsloth #OpenWebUI