
Posts by John Records

sestina guardrails perhaps?

3 days ago 1 0 1 0

“To Serve Man”

4 weeks ago 0 0 0 0

There you are!

6 months ago 0 0 0 0

Reading Bee Speaker now. Irae may be a predecessor of Worsel...

6 months ago 1 0 0 0

Now we need the librarian.

6 months ago 5 0 1 0

Very nice! And the MLX community has released a version that will run on your 64 gig Mac.

7 months ago 0 0 0 0

Simon, IIRC you have a 64 GB Mac. As you may know, it's possible to run OSS 120-b on that using the Unsloth q2 model. Thanks for your wonderful posts!

8 months ago 1 0 0 0

Ethan, I appreciate your posts enough that I will rejoin LinkedIn to continue seeing them regularly. And of course I read and enjoy your newsletter.

10 months ago 0 0 0 0

Peter Kenny narrating Iain M. Banks’ writing is luscious, and not to be missed.

11 months ago 1 0 0 0

Please keep posting here.

11 months ago 1 0 0 0

What a cool idea! I'm trying this idea with a Project Gutenberg book and have asked for a text-based adventure.

1 year ago 0 0 0 0

Thanks for Moonbound!

1 year ago 1 0 0 0

No, haven't tried that.

1 year ago 0 0 0 0
lmstudio-community/Qwen2.5-7B-Instruct-1M-GGUF · Hugging Face We’re on a journey to advance and democratize artificial intelligence through open source and open science.

Or huggingface.co/lmstudio-com...

1 year ago 0 0 0 0

Maybe a smaller model with large context window (haven't tried this myself). One example: Llama 3.1 8b has 128K context window.

1 year ago 0 0 1 0

Similar experience, gave up on it.

1 year ago 1 0 0 0
ChatGPT - Mark Cuban Hunger Solutions Shared via ChatGPT

I deeply appreciate your contributions to a better life and better health for many people. Regarding world hunger, ChatGPT has some advice for you. chatgpt.com/share/67b9e9...

1 year ago 0 0 0 0

Thanks for the nudge on MLX models, Simon. They seem to be more memory efficient on Macs than GGUF versions.

1 year ago 1 0 0 0

I like how one can tick a box in LM Studio search and find only MLX models. I haven't tried the option Simon mentions.

1 year ago 1 0 0 0

A chain-of-thought model, it seems, from the name.

1 year ago 1 0 0 0
Screening performance and characteristics of breast cancer detected in the Mammography Screening with Artificial Intelligence trial (MASAI): a randomised, controlled, parallel-group, non-inferiority, ... The findings suggest that AI contributes to the early detection of clinically relevant breast cancer and reduces screen-reading workload without increasing false positives.

New: The largest medical A.I. randomized controlled trial yet performed, enrolling >100,000 women undergoing mammography screening
The use of AI led to 29% higher detection of cancer, no increase of false positives, and reduced workload compared with radiologists w/o AI thelancet.com/journals/lan...

1 year ago 1351 347 38 89
The End of Search, The Beginning of Research The first narrow agents are here

I wrote about Deep Research, which is very, very good at doing nuanced and complex research.

It is also the first narrow agent that can do sophisticated and likely quite economically valuable work, which tells us something important about the future. open.substack.com/pub/oneusefu...

1 year ago 97 16 1 2

Indeed. The Viture Pro does as you suggest. So all one needs is a lightweight display in glasses.

1 year ago 1 0 0 0

"Let there be pudding!" I'm in for the pudding potluck.

1 year ago 0 0 0 0

Thanks, looking forward to the GGUFs.

1 year ago 0 0 1 0

that's pretty awful

1 year ago 1 0 1 0

Any thoughts on the 70b model, quantized?

1 year ago 1 0 1 0

This weekend I wrote a post on which AI to use right now (at least for general, individual users). Model strength may matter less to most users than the capabilities of the apps and the other features that each model includes. It is a little complicated.

www.oneusefulthing.org/p/which-ai-t...

1 year ago 86 10 0 5

A fine book, worthy of a reread now and then.

1 year ago 1 0 1 0

I'm almost unwilling to inflict my questions on it

1 year ago 0 0 0 0