🦀 Snappy Update
✨ Track your progress! Dive into Tests History and easily compare your past results with the latest data.
See how far you've come! 🚀
Posts by Raul Carini
Running LLMs locally with Ollama? Awesome! But... how fast are they really on your computer? 🐌➡️🚀
Snappy - LLMs Speed Test is here to help! Benchmark & compare models in seconds.
Blog post: shrly.cc/snppyl
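Snappy's actual implementation isn't shown here, but the core of a benchmark like this boils down to timing token generation. A minimal sketch against Ollama's `/api/generate` endpoint, whose non-streaming responses include the real `eval_count` (generated tokens) and `eval_duration` (nanoseconds) fields, might look like this (`benchmark` and its defaults are illustrative, not Snappy's code):

```python
# Minimal sketch: measure tokens/s for a local Ollama model.
# Assumes Ollama is running at its default port (11434); the
# eval_count / eval_duration fields come from /api/generate.

import json
import urllib.request


def tokens_per_second(resp: dict) -> float:
    """Generation speed from an Ollama /api/generate response.

    eval_count = number of generated tokens,
    eval_duration = generation time in nanoseconds.
    """
    return resp["eval_count"] / resp["eval_duration"] * 1e9


def benchmark(model: str, prompt: str,
              host: str = "http://localhost:11434") -> float:
    """Run one non-streaming generation and return tokens/s."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(
            {"model": model, "prompt": prompt, "stream": False}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        return tokens_per_second(json.load(r))


# Example with a canned response (no server needed):
sample = {"eval_count": 120, "eval_duration": 4_000_000_000}  # 120 tokens in 4 s
print(tokens_per_second(sample))  # 30.0 tokens/s
```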
🦀 Snappy - LLMs Speed Test
Benchmark your local LLMs in Seconds ⚡
Find the perfect AI model to run with Ollama.
@deepseeks.bsky.social R1 - 1.5b Speed Test
Added Stream Mode...
LLMs Speed Test Update 🎉:
Added a small indicator at the top of the page where you can see which model is running and how much GPU is being used.
LLMs Speed Test Update 🎉:
You can now set your host URL and change the difficulty of the tests.
Just added some nice animations with motion and updated the project to @tailwindcss.com v4.
Ever wondered how fast your local LLM really is? Now you can test it—right in your browser—with Ollama!
Perfect for devs, AI enthusiasts, or anyone optimizing their local AI stack.
#AI #LLM #Ollama #DeveloperTool #MachineLearning #AISpeedTest
ngl for the first time a project is kinda working.
43 visitors in the last 7 days... I'm pretty happy with it
Small reminder: You can find all colors, including the new P3 format (OKLCH) for @tailwindcss.com, in colors.raulcarini.dev.
After the public release I will work with motion to add some nice animation.
Next I want to work on supporting custom URLs, LM Studio, and multi-model speed tests (compare between different models).
Need to fix some things, then it will be public and open-source.
Not gonna lie, the performance is crazy, and it's a free model with an MIT license.
Running some tests with Llama 3.2 from Meta
Just tested the new DeepSeek R1 model
new project
DeepSeek R1 - 1.5b model
DeepSeek R1 - 14b
If you have a better configuration, you can probably run the 14b model at around 20 tokens/s.
Here's the 14b model with my configuration.
DeepSeek R1 - 1.5b
DeepSeek R1 - 7b
DeepSeek's new R1 model running on a 6 GB VRAM + 16 GB RAM configuration.
Highly recommended for @zed.dev or any editor where you can use Ollama models.
creating a new theme for vscode
failed again.
Website updated.
Added projects to the Command Menu and implemented styling improvements.
Minor update to colors.raulcarini.dev:
I've added a visual indicator that appears when the color format changes.
Major portfolio site update! 🎉
Rebuilt the CMD+K menu from scratch with improved styles, switched to client-side fetch for dynamic repo/contribution updates, refined animations throughout, added a mobile drawer.
Check it out:
raulcarini.dev
Big update to my Tailwind CSS Color Palette site! 🎨
Now supporting the vibrant Oklch color format, plus major performance boosts & a much smoother copy workflow with helpful tooltips.
#tailwindcss #css #oklch #webdev #frontend
Check it out! 👇
colors.raulcarini.dev
My 2024 coding journey, visualized!
I built a "Developer Wrapped" app showcasing key stats like 217 days of contributions, 39k website visitors & more. 🤯 Next.js + Tailwind made it possible.
#DeveloperWrapped #CodingJourney
wrapped.raulcarini.dev/2024