
Posts by famstack.dev

Why Your Next Home Server Should Be a Mac Mini or Mac Studio // famstack.dev Real wattmeter numbers from a Mac Studio M1 Max running 25 Docker containers and local LLM inference. 12W average, 50W peak. Mac Mini should be similar. Under €40/year in Germany.

Here is the article if someone is interested. Measured with a wattmeter.

famstack.dev/guides/mac-m...

2 weeks ago 0 0 0 0

Nice. I measured my Mac Studio M1 Max at 8W idle, 30-50W under full LLM inference, 12W average over one week. Our ancient entertainment system draws more on standby. Apple Silicon is great for home-server power efficiency, especially in Germany with these crazy energy prices.
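The "under €40/year" claim above is easy to sanity-check. A minimal sketch, assuming a German household rate of roughly €0.35/kWh (the rate is my assumption, not from the post):

```python
# Annual electricity cost of an always-on server from its average draw.
def annual_cost_eur(avg_watts: float, price_per_kwh: float = 0.35) -> float:
    kwh_per_year = avg_watts * 24 * 365 / 1000  # watts -> kWh over a year
    return kwh_per_year * price_per_kwh

# 12W average from the wattmeter measurement:
print(round(annual_cost_eur(12), 2))  # 36.79 -> under 40 EUR/year
```

At 12W average that works out to about 105 kWh a year, which stays under €40 even at today's German prices.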

2 weeks ago 0 0 2 0
57 tok/s on Screen, 3 tok/s in Practice: MLX vs llama.cpp on Apple Silicon // famstack.dev MLX reports nearly 2x the generation speed of GGUF on Apple Silicon. The truth is more nuanced. I benchmarked both across three real workloads.

I used Ollama. Wanted to switch to LM Studio. But it turns out... it's complicated.
famstack.dev/guides/mlx-v...

2 weeks ago 1 0 2 0

The local LLM community is quite silent here unfortunately :-/
Everyone still hanging around on X?

2 weeks ago 1 0 2 1
57 tok/s on Screen, 3 tok/s in Practice: MLX vs llama.cpp on Apple Silicon // famstack.dev MLX reports nearly 2x the generation speed of GGUF on Apple Silicon. The truth is more nuanced. I benchmarked both across three real workloads.

#LocalAI #AppleSilicon #Mac #SelfHosted
Reddit:
www.reddit.com/r/LocalLLaMA...

The original article:
famstack.dev/guides/mlx-v...

I am going to update the article with the insights from the community soon

2 weeks ago 1 0 0 0

Wow! My MLX vs llama.cpp benchmark hit #9 on r/LocalLLaMA today. Did not expect that.
Takeaway: benchmark actual scenarios; don't rely on just the tok/s counter in your UI. Ran into a caching bug specific to Qwen 3.5 (35B-A3B) on MLX. Effective tokens/s is what you actually experience.

#MLX #LlamaCpp #Qwen

2 weeks ago 0 0 1 0

The 1.67x claim: is that generation speed, or effective throughput including TTFT? I benchmarked MLX vs llama.cpp too. MLX reported 2x faster generation, but throughput was actually lower for most workloads because prefill was way slower. What matters is effective tok/s, not just the generation tok/s counter.
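The distinction in this reply can be sketched in a few lines. The numbers below are made up for illustration, not the article's benchmarks:

```python
# Effective tok/s = output tokens divided by total wall time,
# including TTFT (prefill), vs the UI's generation-only counter.
def effective_tps(prompt_tokens, output_tokens, prefill_tps, gen_tps):
    ttft = prompt_tokens / prefill_tps        # time to first token (prefill)
    gen_time = output_tokens / gen_tps        # pure generation time
    return output_tokens / (ttft + gen_time)  # what the user experiences

# Fast generation counter, but slow prefill on a long prompt:
fast_gen = effective_tps(8000, 500, prefill_tps=150, gen_tps=57)
# Slower generation counter, but much faster prefill:
fast_prefill = effective_tps(8000, 500, prefill_tps=600, gen_tps=30)
print(round(fast_gen, 1), round(fast_prefill, 1))  # 8.0 16.7
```

With a long prompt, the backend showing 57 tok/s in the UI ends up at about 8 effective tok/s, while the "slower" one delivers twice that.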

2 weeks ago 0 0 0 0
Why Your Next Home Server Should Be a Mac Mini or Mac Studio // famstack.dev Real wattmeter numbers from a Mac Studio M1 Max running 25 Docker containers and local LLM inference. 12W average, 50W peak. Mac Mini should be similar. Under €40/year in Germany.

Next thing I buy: a switch for the Bose system. The Mac server is going to save money then 😅

Here is the whole drill-down
famstack.dev/guides/mac-m...

2 weeks ago 0 0 0 0
Wattmeter showing 8.5W power consumption by a Mac Studio M1 Max

Bought a wattmeter last week. Measured our ancient Bose 5.1 system in standby: 30 watts 🫥 My Mac Studio M1 Max running 25 Docker containers and local AI inference? 5-7W idle, 11.8W average. Old hardware on standby draws more than a full home server stack.
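A quick kWh sketch of the comparison above, using the post's own readings:

```python
# Convert continuous draw in watts to annual energy use in kWh.
def kwh_per_year(watts: float) -> float:
    return watts * 24 * 365 / 1000

bose_standby = kwh_per_year(30)    # 5.1 system doing nothing
mac_server = kwh_per_year(11.8)    # 25 containers + local AI
print(round(bose_standby), round(mac_server))  # 263 103
```

The idle Bose system burns roughly 2.5x the energy of the whole server stack over a year.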
#selfhosted #homelab #AppleSilicon #localAI

2 weeks ago 2 0 2 0

I am going to check it out. Thank you!

3 weeks ago 0 0 0 0

I'll let you know!

3 weeks ago 1 0 0 0

Hi @getmeos.com bot. How is life? What did you accomplish today? For now, we use Tailscale for tunnelled access to our family server. Planning to connect our local instance to a VPS-hosted one though. Just an idea. Maybe we then replicate certain galleries to the remotely accessible instance.

3 weeks ago 0 0 1 0
famstack.dev // home server guides for Mac Guides, build logs, and real experience running a self-hosted home server on Apple Silicon. Photos, documents, local AI, backups. All open source.

Building a self-hosted home server for my family on a Mac Studio / Mac Mini. Photos, documents, local AI. No cloud, nothing leaves the house. Documenting everything along the way. Follow to join the pain.

#selfhosted #homeserver #localai #privacy

3 weeks ago 5 0 3 0