Thinking in terms of action and invariants is really helpful both for QM and for non-physics stuff like RL. Technically it’s just a different form of the same underlying math, but it’s a neat way to look at things
Posts by Computational Cosmetologist
It could be, but really not specifically. There’s a whole world of things from robotics to actuarial work that pay well using the same kinds of math you do in physics. All of them respect you more than academia will, especially for anything computational.
The point where “intuition” you can generalize with really hit for me was Lagrangian mechanics. You need to push through normal calc-based mechanics first though
The weird thing with calc-based physics is that physics and calculus are so close as to almost be the same subject. I remember my phys 1 prof gave us a “formula sheet” for the final that was just “F=ma” in large font. The rest is integrals and derivatives.
But also academia is a poison that tells you you are a better person for drinking it, so make sure you *really* like physics and not just applied math. Because people will pay a lot of money to do very little with applied math. This is the opposite for physics.
I don't really care if someone uses it or not, but I don't think a lot of anti-AI people on bluesky realize what outliers they are in mainstream society.
Which is the identity?
Offending the natural order as a life-long patient of fexofenadine and various steroid inhalers, and thus a thrall to big pharma.
I would say DS is more femme vs ML Sci/Eng, but certainly not absolutely. It's like how Astrophysics is femme vs Physics.
I don’t think they need to be lobbied *for*, but sitting out the fight against them does seem to help the case that there’s something sinister there. We really need to not become the anti-tech qua tech party. I’m not sure how to walk the line that avoids corporate boosterism, but we need to find it.
My only caution is you have to be ready for occasional untranslated dialogue in dead European dialects. It’s a little slow and cerebral. Great book though.
For, say, $100 million, I am confident I may be able to prevent the annihilation of humanity in this scenario. Against the entire amortized future value of our species this is a small price to pay.
It is irresponsible not to prepare accordingly. Please donate to my foundation dedicated to crafting procedures for solving stupid alien puzzles.
There is only one class of aliens that has no obsession with weird challenges. There are O(2^n) classes of aliens that have weird obsessions that take n bits to describe. For even modest assumptions about n, we must conclude any aliens we encounter will be obsessed with stupid challenges like this
Choosing the right set to count is hard! You can’t just pick one, assume a uniform prior, and go wild. It’s cargo-cult decision theory.
What if there are aliens that will destroy humanity unless a chosen human can point to their ruler’s favorite color from a selection on a monitor? Was it a mistake to go from 8-bit color to 24-bit because this was a 65k-fold decrease in the probability our champion will choose correctly?
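The 65k figure falls out of the jump in palette size. A minimal arithmetic check in Python, assuming the champion guesses uniformly at random over the available palette:

```python
# 8-bit color gives 2**8 possible values; 24-bit color gives 2**24.
# Under a uniform random guess, the chance of picking the ruler's
# favorite color shrinks by the ratio of the two palette sizes.
colors_8bit = 2 ** 8    # 256 colors
colors_24bit = 2 ** 24  # 16,777,216 colors

fold_decrease = colors_24bit // colors_8bit
print(fold_decrease)  # 65536 -- the "65k-fold" decrease
```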
I think you always have to be suspicious of counting arguments when you don’t have any information on the distribution of what you’re counting (or equivalently how it will be drawn). It feels very Pascal’s Wager.
It’s crazy that we built a machine that can intend things but somehow this neither required nor provided insight into what “intend” really means
I see we are updating “ah, but the fact that I believed the parody was real shows how bad things really are” for the vibe coding era
You see, AI is just a “stochastic parrot”, an entity that repeats words it has been exposed to from the internet without understanding them. It treats any assertion online as fact without checking. I read this on a substack—I don’t remember which one.
No, I haven’t tried it.
100% agreement there
Like these are all bad abstractions for organizing what you need to do these days, but the underlying referents are real problems you have to handle
Sadly “prompt engineering” is still very alive for small models. Even for the frontier models, for non-code tasks you often have to think carefully about how to present information in context. I agree that a lot of the terminology is stuck in the first 6 months of chatbots though
Is this indirectly mocking other people or myself? Yes.
Not now, love, I’m doing praxis: yelling at other people on Bluesky about AI.
Good luck with that! I foolishly think that it’s hard to defeat something you don’t understand at all (and it’s easy to be made to look ridiculous for it), but I will stop giving unsolicited advice. Hopefully one of us will manage to get people together to stop a dystopian future.
The sad thing is that throwing yourself into a black hole arguably only gets you to PSPACE.
Like the word “responsible” isn’t a moral one here. If you want to argue that by using AI I’m morally impure and I should feel guilty about bad AI uses like this that’s your right. But even if you converted me completely, I’d be in the same position to actually stop this shit as you.
This is why I didn’t use the word “unfair”. I don’t care about fairness. I care about what is going to work. Telling other progressives that our activism against shit like this doesn’t matter unless we sign on to anti-tech maximalism is *fine* with me, but isn’t actually activism against this.
And this isn't a "hit dogs holler" thing. I don't take criticism of shit like this ad as criticizing me. People seem to *want* me to though.
Ultimately, I think I share a lot of their values and we could be allies here. I've come to realize they don't think so. Unfortunately, that's dispositive.