I looked at this briefly (and less elegantly) during my dissertation work integrating LLMs into smart homes. It was interesting to see how it related keys in JSON structures to user commands like "set the lights up for a party"
Posts by evan king
Amazing work. I would love to see this sort of visualization applied in other contexts, like tool use or code generation. Might be interesting to see which tokens are most salient.
Sounds like cognitive behavioral therapy
🎯
Fun tool for generating hallucinations from a tiny speech-to-text model. Drag your cursor around to mangle the speech audio streaming into the model, which spits out weird text. Then assemble poems.
Didn't boost this when I built it last year, but it was a fun project.
mishearings.evanking.io
For sure, can’t be easy getting dog piled on like that just for being a bit vulnerable about something.
Agreed, I posted before the “parasites” convo started. There is a lot of animus rightfully directed toward tech right now re: its role in social decay. Getting defensive doesn’t change anyone’s opinion for the better.
Thanks for sharing it. Lot of people going in for a dunk but I thought you broached the subject in a reflective way. The future is going to be weird and full of new problems whether people want to think about it right now or not
Share it if you find it, sounds cool!
That t-SNE of the different Bluesky communities making the rounds is pretty cool, but so is this live 3D visualization of the firehose. Guess I missed it the first time around? Kinda relaxing to lean back and let it wash over you @theo.io
firehose3d.theo.io
Finishing up evals for the new Moonshine v2 speech-to-text models we're rolling out, and it's... exciting. We're right there with NVIDIA pushing the accuracy frontier while doing it with much smaller models, a team of < 10 engineers, and microscopic GPU spend.
Had a great time chatting career trajectories, on-device AI, and model specialization versus generalization with @petewarden.bsky.social on the @eetimes.bsky.social Silicon Grapevine podcast
youtu.be/XbSQtaZ_QM0?...
Excited to start sharing more about the Mirage, a generative groovebox sampler I've been developing these past few months. It uses tiny, on-device generative audio models and a voice interface (powered by Moonshine models) to create new sounds on the fly.
This is a positive thing as far as the historical record is concerned. It will preserve the names and likenesses of people that were brave enough to stand up for their community.
Work in tech often rewards thinking only in abstractions, which makes it easy to miss downstream impacts once your projects are shipped out the door. Building stronger ethical literacy feels increasingly important.
Another strong case against the idea that everything AI should be cloud-connected. The privacy liability vanishes when you do it on-device, at the edge.
It’s easy to forget that global political powers have been engaged in continuous information warfare for 10+ years. Many mistake the radical and disgusting takes they see online for what “real” people think. Even among real people online there is a selection bias – normal people don’t post all day.
I rarely wade into politics, but: Pretti was a model American and, importantly, a model of healthy masculinity. He protected and healed others and was in touch with nature and animals. He was executed by a federal government driven by toxic masculinity: cowardly, shameless, and self-obsessed.
I agree. Updating the weights of big foundation models is neither practical nor particularly useful for individual agents. RL passes modify some other weights though :) I’ve def encountered that salience issue (what data is relevant or memorable in a given context) and that is absolutely the hard part.
Are they internalizing that data, or just adding it to their context windows? Maybe you could self-train on it, but then small errors in judgement might compound over time. Regardless of the nuance, I think we’re in agreement that this is an exciting time to be working on this stuff.
Maybe not an “end”point, but there’s a limit to existing ground truth data. Model capacity breakthroughs got us here, and we don’t know the limits of downstream applications yet (see: your Strix project). But we will soon run out of cheap data in existing modalities. Synthetic is another story.
Are you suggesting that language (and by extension thought) becoming more homogeneous is not a problem? Ideas and discourse overfitting to the output of a few LLMs seems like a meaningful problem
Now that everyone is walking around saying "it's not X, it's Y", it's obvious that AI speak has been internalized by the culture
Going from Rev 1 (left, mid December) to Rev 2 (right, mid January) of my hardware project took some long nights and a lot of soldering, but it’s really coming together.
Super interesting and worthwhile project! I’ve wondered how a personally tuned filter like this on the receiving end could help us regain some agency over our feeds. There is also an interesting body of research about WHEN to deliver certain information, e.g., ieeexplore.ieee.org/stampPDF/get...
If the team's objective is to bake a cake, everyone can now easily go bake their own cake. But it's pretty hard to go back and unbake everyone's cake so you can take the necessary ingredients from each. As always, the direction of the engineering is more important than the engineering itself.
Put differently: knowing what problem to work on is more valuable than just working on problems. And this knowledge comes from living and being in the human world.
Many AI boosters are forgetting that the most important part of work is the subjective, human aspect: empathizing with customer needs, communicating effectively, and understanding tastes and trends. A human can prompt an LLM into effectiveness in these areas, but it will rarely get there on its own.
Feeling a sense of relief when the AI coding output is garbage; it’s a good heuristic for the novelty of the work. But using the AI to do today’s novel work will create training data for the next model iteration 🤔