#Mlx

Apple Silicon LLM Inference Optimization: The Complete Guide to Maximum Performance TL;DR: MLX is 20-87% faster than llama.cpp for generation on Apple Silicon (under 14B params). Use Ollama 0.19+ w...

#applesilicon #llm #localai #mlx

Original post on mastodon.world

So my current #vibecoding setup is running the [gemma3:31b](https://ollama.com/library/gemma4) model locally in ollama. Since ollama now has support for Apple's accelerated #mlx, it is sufficiently fast on my M4 MBP (128GB RAM, yay).
The model takes up 45GB RAM according to Activity Monitor […]
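A quick back-of-envelope check on figures like that 45 GB: weight memory is roughly parameters × bits-per-weight / 8, and the resident size Activity Monitor shows adds KV cache and runtime overhead on top. A minimal sketch (the 30B parameter count and bit widths below are illustrative, not the poster's exact model):

```python
def model_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB: params * bits / 8, in decimal GB.

    Ignores KV cache, activations, and runtime overhead, which is why the
    number Activity Monitor reports is noticeably larger.
    """
    return n_params * bits_per_weight / 8 / 1e9

# A ~30B-parameter model at a few common precisions (illustrative):
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: ~{model_weight_gb(30e9, bits):.0f} GB")
```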

GitHub - Blaizzy/mlx-vlm: MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX.

mlx-vlm is trending on GitHub. Being able to run inference and fine-tuning of VLMs (vision language models) on MLX with a light footprint is great news for the Apple silicon crowd.

For building a local LLM environment, it's nice to have more options you can try on the Mac at hand before agonizing over GPU choices.

Is anyone using it? 🤔

https://github.com/Blaizzy/mlx-vlm

#LocalLLM #AppleSilicon #MLX #OSS #AI


Using Apple silicon, I was able to speed up clustering molecules with Butina and KMeans. Of course, BitBIRCH remained the fastest (it runs entirely on CPUs). These are all great for plugging into current workflows. #Cheminformatics #compchem #RDKit #mlx
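For readers unfamiliar with Butina clustering (RDKit ships a production implementation in `rdkit.ML.Cluster.Butina`), the core idea fits in a few lines: the point with the most neighbours above a Tanimoto-similarity threshold becomes a cluster centroid, and members are removed as they are assigned. A stdlib-only sketch on toy fingerprints, not the poster's benchmark code:

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto similarity between two fingerprints given as sets of on-bits."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter) if (a or b) else 1.0

def butina_cluster(fps, threshold=0.6):
    """Butina clustering: repeatedly take the unassigned point with the most
    unassigned neighbours as a centroid, and assign those neighbours to it."""
    n = len(fps)
    neighbours = [
        {j for j in range(n) if j != i and tanimoto(fps[i], fps[j]) >= threshold}
        for i in range(n)
    ]
    unassigned = set(range(n))
    clusters = []
    while unassigned:
        centroid = max(unassigned, key=lambda i: len(neighbours[i] & unassigned))
        members = {centroid} | (neighbours[centroid] & unassigned)
        clusters.append(sorted(members))
        unassigned -= members
    return clusters

# Toy fingerprints: the first three are mutually similar, the last is an outlier.
fps = [{1, 2, 3}, {1, 2, 3, 4}, {1, 2}, {7, 8, 9}]
print(butina_cluster(fps, threshold=0.5))
```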

Trying out MLX with Ollama

Ollama's MLX support preview is impressive. Measured on my own Mac, generation speed improved about 2.1× compared to GGUF. It is noticeably faster in practice too, and local LLMs feel a step more usable. Supported models are still limited, but I have high hopes for the rollout. How is it in your environment?

#LocalLLM #Ollama #MLX #Mac #AI #エンジニア

https://zenn.dev/sawacarac/articles/49885802b85f0c
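A speedup like the ~2.1× reported here can be measured from Ollama's own API metrics instead of eyeballing the UI: the final chunk of a `/api/generate` stream carries `eval_count` (generated tokens) and `eval_duration` (nanoseconds). A minimal sketch; the field names come from the Ollama API docs, but the numbers in the sample chunk are made up:

```python
import json

def generation_tok_per_s(final_chunk: str) -> float:
    """Tokens/s from the final JSON chunk of an Ollama /api/generate stream.

    eval_count is the number of generated tokens; eval_duration is the
    generation time in nanoseconds.
    """
    d = json.loads(final_chunk)
    return d["eval_count"] / (d["eval_duration"] / 1e9)

# Illustrative final chunk (made-up numbers):
chunk = '{"done": true, "eval_count": 420, "eval_duration": 6000000000}'
print(f"{generation_tok_per_s(chunk):.1f} tok/s")
```

Running the same prompt against a GGUF and an MLX build and dividing the two results gives a UI-independent speedup figure.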


Experience faster AI performance on your Mac! Ollama now integrates Apple's MLX framework, optimizing local AI model execution on Apple Silicon. #Ollama #MLX #AppleSilicon #AI Link: thedailytechfeed.com/ollama-boost...

A faster Mac local LLM environment with Ollama's MLX support - IT Mania 도전인생: Lately, more and more developers are running large language models (LLMs) directly on their own machines instead of relying on external cloud APIs, for the practical benefits of privacy and cost savings. Mac users have long felt limited by hardware constraints when running local models, but recently

A faster Mac local LLM environment with Ollama's MLX support

https://bit.ly/4uYXqgx

#Ollama #MLX #MacOS #LLM #ArtificialIntelligence #LocalAI #TechNews


Running local models on Macs gets faster with Ollama's MLX support https://arstechni.ca #Applesilicon #alibaba #ollama #Apple #apple #Qwen #mlx #AI


⚡ Ollama speeds up local AI on your Mac with Apple's MLX framework

https://thenewstack.io/ollama-taps-apples-mlx/

#Ollama #MLX #IA #Apple

Ollama is now powered by MLX on Apple Silicon in preview

That's neat.

"Today, we’re previewing the fastest way to run Ollama on Apple silicon, powered by MLX, Apple’s machine learning framework."

https://ollama.com/blog/mlx

#Ollama #MLX #LocalAI


Big news for Mac users! Ollama just got a huge speed upgrade on Apple Silicon thanks to MLX, making local LLMs fly. Get ready for faster, smoother AI right on your desktop.

thepixelspulse.com/posts/ollama-mlx-apple-s...

#ollama #mlx #applesilicon


This is an experimental setup and I haven’t optimized speed yet, but it’s stable enough that I’ve started testing it in an autoresearch-style loop. #LocalAI #MLX #MoE

Awakari App

Autoresearching Apple's "LLM in a Flash" to run Qwen 397B locally: Here's a fascinating piece of researc...

#ai #generative-ai #local-llms #llms #qwen #mlx

GitHub - CrispStrobe/CrispSorter: AI-powered document organiser. Extracts text and/or sorts documents: drop in a bunch of PDFs, DOCX files, or ebooks, and it extracts document text and identifies Title, Author, and Year, with a local ...

If you use it with a local backend (@ollamabot.bsky.social, #llama.cpp , #mlx, #mistral-rs), every step runs on your device; nothing leaves your machine unless you configure a cloud provider (it supports EU-based ones, e.g. #Nebius @scaleway.com, or #Mistral).


Wow! My MLX vs llama.cpp benchmark hit #9 on r/LocalLLaMA today. Did not expect that.
Takeaway: benchmark your actual scenarios; don't rely on just the tok/s counter in your UI. I ran into a caching bug specific to Qwen 3.5 (35B-A3B) on MLX. Effective tokens/s is what users actually experience.

#MLX #LlamaCpp #Qwen
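The "effective tokens/s" point is worth making concrete: a UI counter that measures only the generation phase hides prompt processing (and cache misses like the Qwen bug mentioned above). A small sketch with hypothetical timings:

```python
def counter_tok_per_s(tokens: int, gen_s: float) -> float:
    """What a typical UI tok/s counter shows: generation phase only."""
    return tokens / gen_s

def effective_tok_per_s(tokens: int, ttft_s: float, gen_s: float) -> float:
    """Tokens/s over the whole request, including time-to-first-token."""
    return tokens / (ttft_s + gen_s)

# Hypothetical run: 512 tokens, 8 s of prompt processing, 10 s of generation.
print(counter_tok_per_s(512, 10.0))        # the flattering UI number
print(effective_tok_per_s(512, 8.0, 10.0)) # what you actually wait through
```

When a caching bug forces the prompt to be reprocessed every turn, the first number stays constant while the second collapses, which is exactly why benchmarking real scenarios matters.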

Introducing LLM Ops Kit: LLM-Ops-Kit is a new operational toolkit for running, debugging, and maintaining self-hosted AI stacks across hosts.

LLM-Ops-Kit

It is an operational toolkit for the messy but important part of self-hosted AI, bringing privacy and security.

Local TTS and voice cloning that can replace premium hosted speech.

Streamlined Jinja ChatML template with tool calling.

#ai #llm #mlx #opensource #devtools #HuggingFace

PersonaPlex 7B Runs Full-Duplex Speech on a Mac: A developer ported NVIDIA's PersonaPlex 7B speech-to-speech model to native Swift using MLX, running full-duplex conversation on Apple Silicon with no cloud, no Python, and faster-than-real-time inference.

PersonaPlex 7B Runs Full-Duplex Speech on a Mac

awesomeagents.ai/news/personaplex-7b-appl...

#AppleSilicon #Mlx #Nvidia

The Creator of MLX Just Left Apple - And He's Not the First: Awni Hannun, the Stanford-trained researcher who co-created Apple's MLX machine learning framework, announced his departure from Apple. His exit is the latest in a devastating exodus of AI talent that has hollowed out Apple's ML research bench over the past year.

The Creator of MLX Just Left Apple - And He's Not the First

awesomeagents.ai/news/awni-hannun-mlx-cre...

#Apple #Mlx #AwniHannun


LIVE on #Twitch and doing a #giveaway of #100k #mlx
www.twitch.tv/bg_gamer86

#crypto #xrp #btc #xrparmy #bitcoin #CryptoTrading #CryptoCommunity

GitHub - rtk-ai/vox: Claude can talk to you after doing a task.

I built vox, a Rust CLI for vibe coders that makes your Mac talk.

Voice cloning with a 6-second audio clip. One command.

⚠️ Experimental. Mac only

Looking for testers !

github.com/rtk-ai/vox

#VibeCoding #Rust #TTS #ClaudeCode #AppleSilicon #MLX #OpenSource #DevTools #MacOS #VoiceCloning #AI


Pi Core Team Moves Over $500 Million in Early February as Token Falls More Than 94%.

#Crypto #MLX #AI #ETH

When Your $1,400 iPhone Can't Do Math: A Hardware Defect in Apple's A18 Neural Engine. The Problem: AI Gone Wrong. In a world where artificial intelligence is increasingly integrated into our daily computing experiences, one developer's frustrating encounter with a defective iPhone 16 Pro Max reveals the hidden complexities of running m...

When Your $1,400 iPhone Can't Do Math: A Hardware Defect in Apple's A18 Neural Engine

#ai #machinelearning #iphone #hardwaredefect #neuralengine #mlx #debugging


Join me on my #giveaway stream right now and see if you are the next winner for some #mlx!

bgoines86.tangled.com/join

#btc #bitcoin #XRP #XRPHolders #XRPCommunity


bgoines86.tangled.com/join

I'm earning #MLX with every post on #tangled, join with my link above and starting getting your share of #crypto just for #socialnetworking!

#btc #bitcoin #XRP #XRPHolders #XRPCommunity

GitHub - exo-explore/exo: Run frontier AI locally.

#mlx #applesilicon #ai #localAI
What do you think about Exo?
Ever tried?

github.com/exo-explore/...

Malatya Erhaç airport (Turkey) aviation weather and information (LTAT / MLX): aviation weather with TAF and METAR, maps, hotels, and aeronautical information for Malatya Erhaç airport (Turkey)

Aviation weather for Malatya Erhaç airport (Turkey) is “LTAT 260620Z VRB03KT 2400 BR OVC007 M01/M01 Q1024 NOSIG”: see what it means on https://www.bigorre.org/aero/meteo/ltat/en #malatyaerhacairport #airport #malatya #turkey #ltat #mlx #metar #aviation #aviationweather #avgeek
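For the curious, the quoted METAR decodes mechanically. A simplified stdlib-only decoder covering just the groups present in this report (not a full METAR parser):

```python
import re

def decode_metar(metar: str) -> dict:
    """Decode a few common METAR groups (simplified; not a full parser)."""
    out = {}
    for tok in metar.split():
        if m := re.fullmatch(r"(\d{2})(\d{2})(\d{2})Z", tok):
            out["time"] = f"day {m[1]}, {m[2]}:{m[3]} UTC"
        elif m := re.fullmatch(r"(VRB|\d{3})(\d{2})KT", tok):
            direction = "variable" if m[1] == "VRB" else f"{m[1]}°"
            out["wind"] = f"{direction} at {int(m[2])} kt"
        elif re.fullmatch(r"\d{4}", tok):
            out["visibility_m"] = int(tok)
        elif m := re.fullmatch(r"(FEW|SCT|BKN|OVC)(\d{3})", tok):
            out["cloud"] = f"{m[1]} at {int(m[2]) * 100} ft"
        elif m := re.fullmatch(r"(M?\d{2})/(M?\d{2})", tok):
            out["temp_c"] = int(m[1].replace("M", "-"))      # M prefix = minus
            out["dewpoint_c"] = int(m[2].replace("M", "-"))
        elif m := re.fullmatch(r"Q(\d{4})", tok):
            out["qnh_hpa"] = int(m[1])
    return out

print(decode_metar("LTAT 260620Z VRB03KT 2400 BR OVC007 M01/M01 Q1024 NOSIG"))
```

In words: a variable wind at 3 kt, 2 400 m visibility in mist (BR), an overcast layer at 700 ft, temperature and dew point both −1 °C, and a QNH of 1024 hPa.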

[Shoper Gamer] Apple updates MLX to support RDMA over Thunderbolt

Apple updates MLX to support RDMA over Thunderbolt

#ShoperGamer #Apple #Library #MLX #Feed


I've said it before, but #AI can be such a useful tool. #Claude built me a bunch of scripts to prepare data for LLM fine-tuning, plus README files explaining how everything works. I can focus on gathering the data I need instead of spending hours formatting it.

#LLM #llama #mlx #finetuning #nerd
