#LocalScore

Haha, my Raspberry Pi 5 is actually faster at CPU-only inference than my old laptop: LocalScore 23, 9.5 tokens/second generation.
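The speed gap across the three runs in this thread is easy to sanity-check. A minimal sketch using only the figures quoted in these posts (not re-measured):

```python
# Generation speeds quoted in the thread (tokens/second), 1B model
results = {
    "Raspberry Pi 5 (CPU)": 9.5,    # LocalScore 23
    "i7-8550U laptop (CPU)": 7.7,   # LocalScore 16
    "laptop MX150 GPU": 13.5,       # LocalScore 101
}

# Compare everything against the laptop's CPU-only run
baseline = results["i7-8550U laptop (CPU)"]
for name, tps in results.items():
    print(f"{name}: {tps} tok/s, {tps / baseline:.2f}x the laptop CPU")
```

So the Pi 5 generates roughly 1.23x faster than the laptop's CPU, and even the modest MX150 GPU is only about 1.75x the laptop CPU.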

LocalScore – Test #235 Results: LocalScore benchmark results for test #235, for the accelerator Intel Core i7-8550U CPU @ 1.80GHz (Skylake)

Indeed, the CPU-only performance is even worse. The LocalScore on the tiny 1B model is only 16, with a text generation speed of 7.7 tokens/second.

https://www.localscore.ai/result/235

Let's see if I can run this on a Raspberry Pi for comparison...

#LocalScore #llm #benchmark #LocalLlama

Original post on sigmoid.social

My hobby: running LocalScore.ai to benchmark how fast (ehm) my 2018 laptop runs a tiny 1B LLM. The laptop has an NVIDIA MX150 mobile GPU with 2 GB VRAM. I guess it was intended for Photoshop filters or CAD stuff.

I got a LocalScore of 101 on the tiny model using the GPU (13.5 tokens/second for […]
