Great interview by @howard.fm on AI. Nice to see him continuing to mask.
Jeremy Howard interview at PytorchCon with Anna Tong
www.youtube.com/watch?v=LrFb...
Posts by Jeremy Howard
This is one of the absolute best conversations on AI and software I've heard in a while. (I listen to a lot of stuff about AI.) @howard.fm is an expert & practitioner who cuts through the noise & false dichotomies to talk about the joys & pitfalls of human-machine collab
youtu.be/dHBEQ-Ryo24?...
1/n
"Getting better at the particular prompting skills or whatever details of the current generation AI CLI frameworks isn't growing.
You know, that's as helpful as learning about the details of some AWS API when you don't actually understand how the internet works."
According to OpenAI, their contract with the US DoW locks in current law, "even if those laws or policies change in the future".
Our legal analysis, with Virgil Law CEO Luke Versweyveld, shows that this is almost certainly incorrect.
www.answer.ai/posts/2026-0...
Lemme know what you find :) And thanks for trying FastHTML.
Thanks Christian! :D
This is great news! Too few ML practitioners know what a gamechanger nbdev can be. Years ago at AMD I used nbdev (version 1?) to implement an early prototype of FSR4 realtime super resolution. It was self-explanatory (e.g. with diagrams showing examples of jittering and temporal super resolution)…
Lots of folks have been asking for this for a while, but we wanted to wait until TOML support in the python stdlib was widely available. 3.11 added that support, and is pretty widely installed now.
(We install `tomli` to backfill support where needed.)
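The stdlib-plus-backfill arrangement described above is a common pattern; a minimal sketch (my illustration, not nbdev's actual code — the `[project]` values are made up) looks like:

```python
import sys

# Python 3.11+ ships a TOML parser in the stdlib; on older versions,
# fall back to the third-party `tomli` backport, which has the same API.
if sys.version_info >= (3, 11):
    import tomllib
else:
    import tomli as tomllib  # pip install tomli

# Parse pyproject-style config from a string (use tomllib.load for files).
config = tomllib.loads("""
[project]
name = "my-package"
version = "0.1.0"
""")
print(config["project"]["name"])  # → my-package
```

Note that `tomllib` is read-only by design: it parses TOML but cannot write it, which is fine for config loading.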
nbdev v3 is released!
The big change is that nbdev now uses pyproject.toml, not settings.ini, for all config. (nbdev v1 predates pyproject's support for project metadata, so we used our own settings file.)
v3's `nbdev_migrate_config` automates migration
nbdev.fast.ai/getting_star...
Actually Claude Skills et al. are just a re-implementation of this idea - i.e. give LLMs access to links with descriptions of LLM-friendly info, and let them follow those links if they want to.
There is a third-party app for this, you can find it here: bsky-follow-finder.theo.io
My impression with bluesky is that a lot of things exist but there is just no way of finding them.
@nkgarg.bsky.social
Close reading is a technique for careful analysis of a piece of writing, practiced by many ancient cultures, major religions, & academic scholars. The latest fastai course experimented with using AI to go deeper when reading. 1/
www.fast.ai/posts/2026-0...
E.g.:
llms.txt has nothing to do with SEO. It's basically a pre-written CLAUDE.md/AGENTS.md file to help people using stuff like Cursor and Claude Code use your site/product. It's not used for training LLMs.
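For reference, a minimal llms.txt per the proposed convention is an H1 title, a blockquote summary, then H2 sections of links with short descriptions (the project name and URLs below are made up):

```markdown
# MyProject

> A short library for doing X; this file points coding assistants at the docs.

## Docs

- [Quickstart](https://example.com/quickstart.md): Install and first steps
- [API reference](https://example.com/api.md): Full function listing

## Optional

- [Changelog](https://example.com/changelog.md): Release history
```

The Markdown link descriptions are what let a tool like Claude Code decide which pages are worth fetching.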
Thanks!
GenAI has the potential to worsen health inequities. In a new review we show that, with intent and by confronting the challenges, it could be just the opposite, globally
led by @nliulab.bsky.social
nature.com/articles/s44...
It's Opus 4.5 btw.
I gave it 15. It found nothing of interest. I'm not gonna spend more time on this for now since it seems like a dead-end.
Claude didn't find much tbh
*wdym
wdyt? (I haven't tried installing it, so I'm curious to know if I really should be avoiding it!)
Thanks Phillip :)
Oh wow, deepseek is starting to make serious progress on LLMs that offload memory to external storage: github.com/deepseek-ai/...
Cool! :)
Great - nice clear answer here:
bsky.app/profile/spac...
(Sorry for the n00b questions, I hope they're not too annoying!)
Does clicking the "show less like this" button do anything on that feed BTW? Are there some docs or anything about how to get the best out of it?
ZLUDA, now in its third iteration, has added CUDA 13.1 compatibility on non-NVIDIA GPUs (well… AMD GPUs).
- 1st iteration: Intel created ZLUDA as a drop-in replacement for CUDA on non-NVIDIA GPUs.
- 2nd iteration: AMD took over development after Intel dropped support.
Anthropic announces Claude for Healthcare and expanded Life Sciences tools.
By connecting directly to data sources like CMS and Medidata, Opus 4.5 aims to reduce administrative burdens. There are some promising ideas, but we need to see if they can deliver.
#MedSky #MLSky
One of my favorite findings: Positional embeddings are just training wheels. They help convergence but hurt long-context generalization.
We found that if you simply delete them after pretraining and recalibrate for <1% of the original budget, you unlock massive context windows. Smarter, not harder.
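A toy illustration of why deleting them helps (my sketch, not the authors' code; it assumes learned absolute position embeddings, though the actual models may use a different scheme): a learned positional table hard-caps the usable context length, and zeroing it out removes the cap, after which a short recalibration run restores quality.

```python
import numpy as np

def embed(tokens, tok_emb, pos_emb=None):
    # Look up token embeddings; adding a learned positional table
    # (if present) limits sequences to the table's trained length.
    x = tok_emb[tokens]
    if pos_emb is not None:
        x = x + pos_emb[:len(tokens)]  # shape mismatch past the table length
    return x

rng = np.random.default_rng(0)
tok_emb = rng.normal(size=(100, 8))  # vocab 100, embedding dim 8
pos_emb = rng.normal(size=(16, 8))   # pretrained max context = 16 positions

short = list(range(10))
long = list(range(40))               # 40 tokens: beyond the trained window

embed(short, tok_emb, pos_emb)          # fine within the window
# embed(long, tok_emb, pos_emb)         # would raise: table only covers 16
x = embed(long, tok_emb, pos_emb=None)  # positions deleted: no length cap
assert x.shape == (40, 8)
```

In a real model the attention layers still see token order implicitly (e.g. via the causal mask), which is what the brief recalibration exploits.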