
Posts by Amber 🐱

the existence of Magic the Gathering implies the existence of Magic the Scattering

1 year ago 146 13 8 0
Langchain Is Pointless | Hacker News

people are realizing this now https://news.ycombinator.com/item?id=36645575

2 years ago 0 0 0 0

au contraire, the money went to twitter shareholders, who can now spend it

the real damage is to our culture

2 years ago 0 0 0 0

finding product-market fit for my body

2 years ago 0 0 0 0

finding product-market fit for my brain

2 years ago 0 0 1 0

but the more I'm engrossed in things happening right in front of my eyes (life!), the more I notice I forget to reach for my phone during downtime. I'm glad it's not so important to me anymore, but I honor how important it once was. so long, and thanks for all the fish

2 years ago 0 0 0 0

I was addicted to reddit. my reddit account is almost 16 years old. I wrote a college essay about how reddit was my family and got rejected. I add "reddit" to my google search queries. I started three subreddits with over 30k subscribers (thanks to the hard work of other mods, I'm lazy af).

2 years ago 0 0 1 0

the reddit shutdown is no big deal, I'll just go outside and touch grass *gets allergic reaction*

2 years ago 1 0 1 0

no matter your initial conditions you'll find the strange attractors

2 years ago 1 0 0 0

hot take: crypto bubbles are good actually, because they make people skeptical of bad use cases of cryptocurrency/NFTs

2 years ago 1 0 1 0

are you maybe referring to something like Microsoft's racist Tay bot, which may have made Google focus on AI safety? I'm suggesting that some AI that mildly hurts might be good actually, if it prepares everyone against bigger dangers, especially if it hits a broad population (e.g. the tech-illiterate)

2 years ago 0 0 1 0
Orca: Progressive Learning from Complex Explanation Traces of GPT-4 Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs generated by large foundation models (LFMs). A number of issues impact...

https://arxiv.org/abs/2306.02707

2 years ago 0 0 0 0

as people are born, this awareness fades and inoculation would need to happen every generation to retain immunity. such a "vaccine" has to be intense enough to make learning happen at a large scale, but ideally should induce human learning while causing as little suffering as possible

2 years ago 0 0 1 0

society learns like an immune system, not a brain—we now have some defensive awareness of fascism that society didn't before its rise leading up to WWII. which means it might be important to inoculate society with an AI-unsafety vaccine to learn the value of handling AI safely

2 years ago 0 0 1 0
Gorilla

https://gorilla.cs.berkeley.edu/

2 years ago 0 0 1 0

every AI model naming scheme converges to animals

2 years ago 0 0 1 0

another con of squeezing is not letting your emotions be felt, understood, and resolved skillfully; squeezing often comes as a shortcut: suboptimal reactions to emotional interpretations, learned in childhood

2 years ago 0 0 0 0

the con of squeezing so much is that it masks the self-knowledge of what you're naturally good at: that which takes you less energy to contribute to the world and doesn't leave you exhausted

2 years ago 0 0 1 0

work is the loudest squeeze which makes all the other squeezes too quiet to observe

2 years ago 0 0 1 0

after starting my vacation it's so obvious in retrospect how much I'm squeezing myself into shapes I expect myself to be, rather than letting my shape change on its own

2 years ago 1 0 1 0

i need a hell-feed of all feeds that don’t contain themselves so i can stay updated

2 years ago 1 0 0 0

interesting how nobody ever accuses trans women of having small dicks, it’s as if toxic masculinity can’t ever allow emasculation to be affirming

2 years ago 1 0 0 0

is this how asian apps got their aesthetic?

2 years ago 1 0 1 0

disc jockey

2 years ago 0 0 0 0

4. Problems in P: LLMs can't solve all problems that take polynomial time, which are considered "easy" in computer science. The transformer architecture is in the complexity class TC0, which is contained in NC1, which is in L, in NL, in P. There is no known logarithmic-memory algorithm for P-complete problems.

2 years ago 0 0 0 0
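the containment chain that post leans on, written out in standard complexity-class notation (the separation of TC0 from P is unproven, which is why the argument hedges on "no known algorithm"):

```latex
\mathsf{TC}^0 \subseteq \mathsf{NC}^1 \subseteq \mathsf{L} \subseteq \mathsf{NL} \subseteq \mathsf{P}
```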

3. Secret commitment: LLMs can't secretly commit to a decision and then reveal that decision later without changing it. They can't hold themselves accountable to plans they have formulated, or participate in zero-knowledge proofs, a class of cryptographic protocols.

2 years ago 0 0 1 0
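for context on the "secret commitment" point: the standard fix for humans (and programs) is a hash-based commitment scheme. a minimal sketch in Python — the function names here are illustrative, not from the original post:

```python
import hashlib
import secrets

def commit(decision: str) -> tuple[str, bytes]:
    # Bind to a decision now by publishing only a hash of it.
    # The random nonce hides the decision until reveal time.
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + decision.encode()).hexdigest()
    return digest, nonce

def reveal_ok(digest: str, nonce: bytes, decision: str) -> bool:
    # Anyone holding the earlier digest can verify the revealed
    # decision matches what was committed to.
    return hashlib.sha256(nonce + decision.encode()).hexdigest() == digest

digest, nonce = commit("option A")
assert reveal_ok(digest, nonce, "option A")      # honest reveal verifies
assert not reveal_ok(digest, nonce, "option B")  # a swapped decision is caught
```

an LLM can't do this internally because it has no hidden state that persists across turns: anything it "commits to" is either in the visible context or forgotten.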

2. Efficient generalization: LLMs can't perform tasks without having been shown how to do them before. They can't identify patterns without many training examples. They fail at the Abstraction and Reasoning Corpus (ARC), a set of puzzles that humans solve easily.

2 years ago 0 0 1 0