I don't think the post implies that literally all academics are contemptuous, just that it's common enough, which it totally is
Posts by Agus 🔎🔸
This site does unfortunately disabuse you of the notion that careless thinking is confined to a particular ideology
anyway, here is 2024 Nobel Prize in Physics winner Geoffrey Hinton discussing what we know about large AI models on 60 Minutes.
clearly all part of an evil marketing move to… uhm… drain more water?
lmao
It’s ridiculous to pull credentials in this context when many, if not most, of the people who created modern AI disagree with you. Most people working at frontier labs would too.
things we know about LLMs and large DL models in general:
- how they are trained (gradient descent)
- the structure into which they are placed (architecture)
- the base arithmetic (matmul, norm, batch norm, and so on)
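The ingredients in that list can be shown end to end in a toy example. This is a minimal sketch of my own (not anything from the thread), assuming nothing beyond NumPy: a single matmul-based layer trained by gradient descent on a mean-squared-error loss.

```python
# Minimal sketch (illustrative only): one linear layer trained with
# gradient descent -- the "known" pieces from the list above:
# matmul for the forward pass, a loss, and a gradient update rule.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))            # a batch of inputs
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w                          # targets from a known linear map

w = np.zeros(4)                         # the parameters we train
lr = 0.1
for _ in range(200):
    pred = X @ w                        # forward pass: matmul
    grad = 2 * X.T @ (pred - y) / len(X)  # gradient of mean squared error
    w -= lr * grad                      # gradient descent step

print(np.round(w, 2))                   # recovers something close to true_w
```

Knowing these mechanics is exactly the point of the list: we can state the update rule precisely even though that doesn't, by itself, tell us what a trained model has learned internally.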
Seeing you both here in the trenches makes me want to go back to bluesky to join in the noble fight
Post from Terence Tao: “I was able to use an extended conversation with an AI (link) to help answer a MathOverflow question (link). I had already conducted a theoretical analysis suggesting that the answer to this question was negative, but needed some numerical parameters verifying certain inequalities in order to conclusively build a counterexample. Initially I sought to ask the AI to supply Python code to search for a counterexample that I could run and adjust myself, but found that the run time was infeasible and the initial choice of parameters would have made the search doomed to failure anyway. I then switched strategies and instead engaged in a step by step conversation with the AI where it would perform heuristic calculations to locate feasible choices of parameters. Eventually, the AI was able to produce parameters which I could then verify separately (admittedly using Python code supplied by the same AI, but this was a simple 29-line program that I could visually inspect to do what was asked, and also provided numerical values in line with previous heuristic predictions).”
Another victim of AI psychosis. Really sad 😔
one thing that has remained true throughout time is that any assertion or evidence that runs counter to human uniqueness is invariably met with strong (often incoherent/misdirected) anger. Jane Goodall wrote about this wrt. chimpanzees and tool-making.
Lmao
Wow, this seems great. Will try it out
a lot of silence from the stochastic parrots crowd
this is like my life philosophy at this point
so true
(Arguably, the EU AI Act was mostly negative, though I’d say the specific sections of it that were inspired by AI policy work from EA circles was broadly good and important)
And finally on AI safety, the one with the biggest policy efforts, I’d say EA has been fairly successful. It’s hard to trace back some of the wins, but aspects of the EU AI Act, the US EO on AI, and many other ongoing legislative efforts were downstream of EA-funded policy work
On biosecurity, there have been efforts to lobby for pandemic prevention funding in the US, to prevent gain-of-function research from being approved, and to improve policy around antimicrobial resistance. There have been some major early wins, but most of it is still in progress.
On the animal suffering side, there were some non-trivial efforts (alongside other stakeholders) to push for better animal welfare laws, particularly in the US, but I think the results were mostly unsuccessful
There are also a few projects working on public health policy in developing countries, for example lobbying for higher taxes on tobacco and alcohol where that would make a huge dent in the total disease burden
In GHD you can find things like the Lead Exposure Elimination Project, which lobbies for effective lead reduction policies in developing countries (and has been wildly successful, recently evolving into a massive collaboration with USAID and UNICEF)
There isn’t much of an EA lobbying complex, but there are a handful of orgs that do lobbying for specific cause areas.
done!
Yeah, it was the first time I was even hearing about the quoted account
Could you expand on “impedance mismatch between people who make abstract points and those who think every abstract point is really a veiled statement about the specific thing”?
I’ve had people message me days after being like “I’m glad to have talked to you, I’m not used to people taking my arguments seriously, and I think you were right about some things”
I’ve found it particularly insightful to just talk to, say, antivaxxers, whom people treat as “far gone”.
They’re not easy to persuade, but they’re often acting in good faith, and if you’re kind and patient, you can get really far.
I’m proud to say that I think I’m fairly good at changing other people’s beliefs on Twitter, and frequently these are people that everyone else assumes are impossible to persuade
Yeah, I agree with your point regarding people that won’t change their beliefs. But I think it’s a bit of a self-reinforcing issue: the more people assume bad faith, the less likely they are to convince others, which reinforces their belief that those others can’t be convinced
Internet trolls abound, but they’re mostly not pulling sophisticated strategies to bait their opponents into wasting their time.
They’re just straightforwardly impulsive and rude most of the time.