
Posts by graphdev

Pause Giant AI Experiments: An Open Letter - Wikipedia

The article is paywalled, but it seems like a rehashing of this:

en.wikipedia.org/wiki/Pause_G...

4 days ago 2 0 0 0

Is that good or bad? I feel like friendlyjordies gets mixed opinions from the Australians I’ve asked previously.

5 days ago 0 0 1 0
Primus - The Seven (Official Audio) YouTube video by PrimusVEVO

There’s an album if you like Primus

youtu.be/jw95wlggfYM

1 week ago 1 0 0 0
Video

Pen plotting. #penplotter

2 weeks ago 2 1 0 1

Samosas 😋

2 weeks ago 1 0 0 0

I way overestimated. For a number of them I assumed the models would generate a Python script to use. It’s a good reflection of their abilities, considering there’s potential for a Clever Hans (or Clever Claude) situation

2 weeks ago 3 0 0 0

Electric bikes for sure. I think the shift depends on commuter infrastructure handling e-bikes. Would be great if the suburbs finally accepted bikes as valid transportation. Biking into the suburbs is always a harrowing experience

2 weeks ago 1 0 1 0

Might be possible with LoRA-generating networks? I read a paper about Amazon using them for personalization. Seems like at various intervals you could update the LoRA adapter. A conversation is just another kind of document. Not exactly continuous, but "streaming" is just small batches after all
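The "streaming is just small batches" framing is really just buffering logic. A minimal sketch, where `update_adapter` is a hypothetical stand-in for whatever hypernetwork or fine-tuning call would actually regenerate the LoRA:

```python
class StreamingAdapterUpdater:
    """Accumulate conversation turns and refresh an adapter every N turns."""

    def __init__(self, batch_size=8, update_adapter=None):
        self.batch_size = batch_size
        self.buffer = []
        # Hypothetical hook: called with a batch of turns to rebuild the LoRA.
        self.update_adapter = update_adapter or (lambda batch: None)
        self.updates = 0

    def add_turn(self, turn):
        """Treat each conversation turn as one more document in the stream."""
        self.buffer.append(turn)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Emit the pending mini-batch as one adapter update."""
        if self.buffer:
            self.update_adapter(self.buffer)
            self.updates += 1
            self.buffer = []
```

With `batch_size=8`, feeding 20 turns triggers two updates and leaves four turns buffered until the next flush, which is the "not exactly continuous, but close enough" behavior.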

3 weeks ago 2 0 0 0

I think my take is the Claude plans are way underpriced compared to the API, so it leads people to think the token costs are being subsidized. There have been so many rug pulls by VCs subsidizing costs that it makes people skeptical. E.g., Uber and Lyft were cheap until they had to be profitable

3 weeks ago 3 0 1 0

It’s good. I’ve binged multiple seasons

3 weeks ago 2 0 0 0

It’s so weird they won’t let it be a full OS. It’s basically for kids now anyways, likely by far the biggest demographic that uses them imo. But they give you a decent processor and keyboard, and then an OS you can’t do anything with locally. It would probably eat into the Air market too much though

3 weeks ago 1 0 1 0

I look forward to his future as an engineer

3 weeks ago 1 0 0 0

this is the category error all the AI maximalists make

if I had to break down what I think the impact of AI would be by field, I'd have to first talk about which *subtasks* it's helping people with

the amount of variability by field and task is MASSIVE, ranges from 90% takeover to 0%

3 weeks ago 134 23 7 2

Hypernetworks come across as unrealistic for a hobbyist to train, at least at the scale of anything useful. Maybe we’ll see open releases alongside embeddings and LLMs?

3 weeks ago 1 0 0 0
Preview
SHINE: A Scalable In-Context Hypernetwork for Mapping Context to LoRA in a Single Pass We propose SHINE (Scalable Hyper In-context NEtwork), a scalable hypernetwork that can map diverse meaningful contexts into high-quality LoRA adapters for large language models (LLM). By reusing the f...

Been on the hypernetwork train since reading the Sakana doc-to-LoRA paper.

arxiv.org/abs/2602.06358
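The core mapping the abstract describes (context in, LoRA adapter out, in a single pass) can be sketched as a toy hypernetwork. All sizes and the single linear layer below are illustrative assumptions, not SHINE’s actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, rank, d_ctx = 64, 4, 32       # base layer width, LoRA rank, context dim
n_out = 2 * d_model * rank             # parameters for both A and B factors

# The "hypernetwork" here is just one linear map from context to LoRA params.
W_hyper = rng.normal(0, 0.02, (d_ctx, n_out))

def context_to_lora(ctx):
    """Map a context embedding to LoRA factors (A, B) in one forward pass."""
    flat = ctx @ W_hyper
    A = flat[: d_model * rank].reshape(d_model, rank)
    B = flat[d_model * rank :].reshape(rank, d_model)
    return A, B

ctx = rng.normal(size=d_ctx)           # stand-in for an encoded document
A, B = context_to_lora(ctx)
delta_W = A @ B                        # the low-rank update applied to a weight
```

The point of the low-rank factorization is visible in the shapes: the hypernetwork only has to emit `2 * d_model * rank` numbers per layer instead of `d_model * d_model`, which is what makes generating adapters per-context plausible at all.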

3 weeks ago 4 1 1 0

The moccamasters make a lot of sense now

3 weeks ago 4 0 0 0

Thinking tech scene to clarify

1 month ago 0 0 0 0

It’s a culture. Thoughts on NY’s if you’ve been?

1 month ago 0 0 2 0

The marketing lines are drawn

1 month ago 2 0 1 0

Mostly art if not the other things

1 month ago 1 0 0 0

Everyone hates teams

1 month ago 2 0 0 0

The idea of bullshit jobs has existed for a long time. Redundancy is about losing the appearance of productive value imo.

1 month ago 1 0 1 0

I thought cats getting busted with salmon at the airport was top tier 👩‍🍳

1 month ago 2 0 0 0
Post image

#photography #nature (c)stdrozdowski72

3 months ago 621 41 7 0
Post image

Language is not Transparent. Mel Bochner, 1969

1 month ago 46 16 1 1

Instantly want to disregard it. I’ve been very skeptical of the know-it-all; as a personality it comes across as very fragile rn. I don’t mind reviewing suggestions together, and I still think the person prompting deserves some kudos. I think it’s mostly disingenuous to pass it off as yours

1 month ago 0 0 0 0

These are the kind of market dynamics I struggle with. I look at what I built and go "there’s no way company X wouldn’t spend a week of engineering hours on this" and then :shocked:

1 month ago 7 0 1 0
Preview
ReasonCACHE: Teaching LLMs To Reason Without Weight Updates Can Large language models (LLMs) learn to reason without any weight update and only through in-context learning (ICL)? ICL is strikingly sample-efficient, often learning from only a handful of demonst...

I’m skeptical of large context being enough, though it has gotten so much better; I notice less degradation when the context window starts to get full. Claude still has a fairly small window, and I think for good reason.

This is an interesting idea. It clearly has some drawbacks.

arxiv.org/abs/2602.02366

1 month ago 3 1 1 0

Looks similar to tumblewords. Very pretty

1 month ago 1 0 0 0