Apple Silicon LLM Inference Optimization: The Complete Guide to Maximum Performance TL;DR: MLX is 20-87% faster than llama.cpp for generation on Apple Silicon (under 14B params). Use Ollama 0.19+ w...
#applesilicon #llm #localai #mlx
So my current #vibecoding setup is running the [gemma3:31b](https://ollama.com/library/gemma4) model locally in Ollama. Since Ollama now has support for Apple's accelerated #mlx, it is sufficiently fast on my M4 MBP (128GB RAM, yay).
The model takes up 45GB RAM according to Activity Monitor […]
mlx-vlm is trending on GitHub. Being able to run VLM (vision-language model) inference and fine-tuning cheaply on top of MLX is good news for the Apple silicon crowd.
When building a local LLM environment, it's nice to have more options you can try on the Mac you already have before agonizing over GPU choices.
Anyone using it? 🤔
https://github.com/Blaizzy/mlx-vlm
#LocalLLM #AppleSilicon #MLX #OSS #AI
Using apple silicon, I was able to speed up clustering molecules using Butina and KMeans. Of course, BitBIRCH continued to be the fastest (which is done entirely on CPUs). These are all great for plugging into current workflows. #Cheminformatics #compchem #RDKit #mlx
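For readers unfamiliar with Butina clustering, here is a minimal pure-Python sketch of the idea (RDKit ships the real implementation in `rdkit.ML.Cluster.Butina`; the toy distance function and points below are made up for illustration):

```python
# Butina-style sphere-exclusion clustering on a pairwise distance
# function: repeatedly pick the unassigned item with the most
# unassigned neighbors as a centroid and claim its neighborhood.

def butina_cluster(dist, n, cutoff):
    """dist(i, j) -> distance; returns clusters as tuples, centroid first."""
    # Precompute each item's neighbors within the cutoff.
    neighbors = {i: [j for j in range(n)
                     if j != i and dist(i, j) <= cutoff]
                 for i in range(n)}
    unassigned = set(range(n))
    clusters = []
    while unassigned:
        # Centroid = unassigned item with the most unassigned neighbors.
        centroid = max(unassigned,
                       key=lambda i: sum(j in unassigned for j in neighbors[i]))
        members = [centroid] + [j for j in neighbors[centroid] if j in unassigned]
        clusters.append(tuple(members))
        unassigned -= set(members)
    return clusters

# Toy 1-D "molecules": distance is plain absolute difference.
points = [0.0, 0.1, 0.2, 5.0, 5.1, 9.0]
clusters = butina_cluster(lambda i, j: abs(points[i] - points[j]),
                          len(points), cutoff=0.5)
```

In real cheminformatics use, `dist` would be a Tanimoto distance over molecular fingerprints rather than a 1-D gap.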
Ollama's MLX support preview is impressive. Measured on my Mac, generation speed improved about 2.1x compared to GGUF. It's noticeably faster in practice, and local LLMs feel a step more usable. Supported models are still limited, but I have high hopes for the rollout. How does it look in your environments?
#LocalLLM #Ollama #MLX #Mac #AI #エンジニア
https://zenn.dev/sawacarac/articles/49885802b85f0c
Experience faster AI performance on your Mac! Ollama now integrates Apple's MLX framework, optimizing local AI model execution on Apple Silicon. #Ollama #MLX #AppleSilicon #AI Link: thedailytechfeed.com/ollama-boost...
A faster Mac local LLM setup thanks to Ollama's MLX support
https://bit.ly/4uYXqgx
#Ollama #MLX #MacOS #LLM #ArtificialIntelligence #LocalAI #TechNews
Running local models on Macs gets faster with Ollama's MLX support https://arstechni.ca #Applesilicon #alibaba #ollama #Apple #apple #Qwen #mlx #AI
⚡ Ollama accelerates local AI on your Mac with Apple's MLX framework
https://thenewstack.io/ollama-taps-apples-mlx/
#Ollama #MLX #IA #Apple
That's neat.
"Today, we’re previewing the fastest way to run Ollama on Apple silicon, powered by MLX, Apple’s machine learning framework."
https://ollama.com/blog/mlx
#Ollama #MLX #LocalAI
Big news for Mac users! Ollama just got a huge speed upgrade on Apple Silicon thanks to MLX, making local LLMs fly. Get ready for faster, smoother AI right on your desktop.
thepixelspulse.com/posts/ollama-mlx-apple-s...
#ollama #mlx #applesilicon
This is an experimental setup and I haven’t optimized speed yet, but it’s stable enough that I’ve started testing it in an autoresearch-style loop. #LocalAI #MLX #MoE
Autoresearching Apple's "LLM in a Flash" to run Qwen 397B locally Here's a fascinating piece of researc...
#ai #generative-ai #local-llms #llms #qwen #mlx
If you use it with a local backend (@ollamabot.bsky.social, #llama.cpp , #mlx, #mistral-rs), every step runs on your device; nothing leaves your machine unless you configure a cloud provider (it supports EU-based ones, e.g. #Nebius @scaleway.com, or #Mistral).
Wow! My MLX vs llama.cpp benchmark hit #9 on r/LocalLLaMA today. Did not expect that.
Takeaway: benchmark your actual scenarios; don't rely on just the tok/s counter in your UI. I ran into a caching bug specific to Qwen 3.5 (35B-A3B) on MLX. Effective tokens/s is what we actually experience.
#MLX #LlamaCpp #Qwen
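The "effective tokens/s" point can be sketched with a toy calculation (the prefill and decode rates below are made-up numbers, not figures from the benchmark): the tok/s counter in most UIs reports decode speed only, while the wall time you actually wait also includes prompt processing.

```python
# Effective tokens/s: generated tokens divided by total wall time,
# where wall time = prompt prefill time + token generation time.

def effective_tps(prompt_tokens, gen_tokens, prefill_tps, gen_tps):
    wall = prompt_tokens / prefill_tps + gen_tokens / gen_tps
    return gen_tokens / wall

# A long-context request: the UI would display 60 tok/s,
# but the user-perceived rate is far lower once prefill counts.
ui_rate = 60.0
eff = effective_tps(prompt_tokens=8000, gen_tokens=400,
                    prefill_tps=500.0, gen_tps=ui_rate)
```

The gap between `ui_rate` and `eff` grows with prompt length, which is exactly why short synthetic prompts flatter a backend's headline number.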
LLM-Ops-Kit
An operational toolkit for the messy but important part of self-hosted AI, with a focus on privacy and security.
Local TTS and voice cloning that can replace premium hosted speech.
Streamlined Jinja ChatML template with tool calling.
#ai #llm #mlx #opensource #devtools #HuggingFace
PersonaPlex 7B Runs Full-Duplex Speech on a Mac
awesomeagents.ai/news/personaplex-7b-appl...
#AppleSilicon #Mlx #Nvidia
The Creator of MLX Just Left Apple - And He's Not the First
awesomeagents.ai/news/awni-hannun-mlx-cre...
#Apple #Mlx #AwniHannun
LIVE on #Twitch and doing a #giveaway of #100k #mlx
www.twitch.tv/bg_gamer86
#crypto #xrp #btc #xrparmy #bitcoin #CryptoTrading #CryptoCommunity
I built vox for vibe coders: a Rust CLI that makes your Mac talk.
Voice cloning with a 6-second audio clip. One command.
⚠️ Experimental. Mac only
Looking for testers!
github.com/rtk-ai/vox
#VibeCoding #Rust #TTS #ClaudeCode #AppleSilicon #MLX #OpenSource #DevTools #MacOS #VoiceCloning #AI
Pi Core Team Moves Over $500 Million in Early February as Token Falls More Than 94%.
#Crypto #MLX #AI #ETH
When Your $1,400 iPhone Can't Do Math: A Hardware Defect in Apple's A18 Neural Engine
#ai #machinelearning #iphone #hardwaredefect #neuralengine #mlx #debugging
Join me on my #giveaway stream right now and see if you are the next winner for some #mlx!
bgoines86.tangled.com/join
#btc #bitcoin #XRP #XRPHolders #XRPCommunity
bgoines86.tangled.com/join
I'm earning #MLX with every post on #tangled. Join with my link above and start getting your share of #crypto just for #socialnetworking!
#btc #bitcoin #XRP #XRPHolders #XRPCommunity
#mlx #applesilicon #ai #localAI
What do you think about Exo?
Ever tried?
github.com/exo-explore/...
Aviation weather for Malatya Erhaç airport (Turkey) is “LTAT 260620Z VRB03KT 2400 BR OVC007 M01/M01 Q1024 NOSIG” : See what it means on https://www.bigorre.org/aero/meteo/ltat/en #malatyaerhacairport #airport #malatya #turkey #ltat #mlx #metar #aviation #aviationweather #avgeek vl
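The quoted METAR decodes mechanically. A rough stdlib-only sketch (the field positions assume this exact report shape, not a general METAR parser):

```python
# Decode the quoted report: station, day+time (UTC), wind,
# visibility, weather, cloud, temperature/dew point, QNH pressure.
metar = "LTAT 260620Z VRB03KT 2400 BR OVC007 M01/M01 Q1024 NOSIG"
parts = metar.split()

station = parts[0]                      # "LTAT" (Malatya Erhac)
day, hhmm = parts[1][:2], parts[1][2:6] # 26th of the month, 06:20 UTC
wind_dir = parts[2][:3]                 # "VRB" = variable direction
wind_kt = int(parts[2][3:5])            # 3 knots
visibility_m = int(parts[3])            # 2400 metres ("BR" = mist)
temp_s, dew_s = parts[6].split("/")     # "M" prefix means minus

def to_c(t):
    return -int(t[1:]) if t.startswith("M") else int(t)

qnh_hpa = int(parts[7][1:])             # "Q1024" -> 1024 hPa
```

So: variable wind at 3 kt, 2 400 m visibility in mist, overcast at 700 ft, temperature and dew point both -1 °C, QNH 1024 hPa, no significant change expected.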
Apple updates MLX to support RDMA over Thunderbolt
#ShoperGamer #Apple #Library #MLX #Feed
I've said it before but #AI can be such a useful tool. #Claude built me a bunch of scripts to prepare data for LLM fine tuning and readme files to explain how everything works. I can worry about gathering the data I need and not spending hours formatting it.
#LLM #llama #mlx #finetuning #nerd