
Posts by Computational Cosmetologist

It can be hard to go eng -> sci in an external move (there's a not-unfounded bias that more SWE-leaning candidates lack the stats baseline for science roles) but I've seen plenty do it within an org.

10 minutes ago 1 0 0 0

I would phrase it more as "at the time was a plausible way to understand their limitations" that absolutely was not borne out, but yeah you could be a reasonable, informed skeptic about the parrot thing then.

There were even some empirical results! Like GPT-3 performing at-chance on logic.

13 minutes ago 1 0 0 0

Yes! Anything is possible through the power of having free-tier ChatGPT write your resume.

But like also, legitimately yes. Smaller shops don't have enough people to differentiate sci / eng roles.

27 minutes ago 1 0 1 0

Having a PhD, surprisingly, adds nothing above these in expectations for what you actually know how to do.

36 minutes ago 0 0 0 0

From the resumes I see, you absolutely don't. "AI Researcher" requires, at a minimum, that you once figured out how to use ChatGPT.

"AI Engineer" requires you to have made a script that calls an LLM.

"AI Scientist" means you've downloaded PyTorch before, and may have even run it in a Jupyter notebook.

37 minutes ago 1 0 2 0

Is the problem that AI was used for it? Does the "parrot" "extruding" their position correctly somehow make that position weaker? Are you stealing their precious non-textual meaning? Is it lost forever to the data center like coolant water?

Or is this just a "bitch eating crackers" thing?

45 minutes ago 1 0 0 0

The existence of 1 + 2 in the same person is always a treat. "How dare you make a list that accurately describes my views, and that I would also consider it an honor to be on!"

45 minutes ago 1 0 1 0

How is this even supposed to work? Do 2% of signups just get an error if they try to sign in? “Sorry, actually we falsely advertised to you?”

I’m torn between this being an obvious lie to walk back an unpopular change, or one of the all-time worst marketing decisions ever. Or both?

6 hours ago 1 0 0 0

Back when I was a “transformer is unsuitable for general intelligence” guy the issues with negative prompting were my go-to example

23 hours ago 8 0 0 0

“Maybe in 10 years we will discover infrasound anti-hormesis causes infrasound from data centers to be uniquely harmful” is only not a stupid retort if you have *any* evidence for it. Otherwise it is a bullshit hypothetical. Especially if you’re arguing for precaution because of it.

1 day ago 1 0 0 0

I get very frustrated with arguments that use an appeal to a negative not being proven to say we should “remain vigilant” or “aren’t sure yet” about the thing they want to argue for. We aren’t sure about almost anything! You *can* be too careful. This is a worthless standard to aim for.

1 day ago 1 0 1 0

Qwen’s alignment (in English at least) is often pretty permissive. This would be feasible. You just need the right prompt.

1 day ago 3 0 0 0

Thinking in terms of action and invariants is both really helpful for QM and for non-physics stuff like RL. Technically it’s just a different form of the same underlying math, but it’s a neat way to look at things

2 days ago 3 1 0 0

It could be, but really not specifically. There’s a whole world of things from robotics to actuarial work that pay well using the same kinds of math you do in physics. All of them respect you more than academia will, especially for anything computational.

2 days ago 4 0 1 0

The point where “intuition” you can generalize with really hit for me was Lagrangian mechanics. You need to push through normal calc-based mechanics first, though.
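A minimal sketch of why that intuition generalizes (standard textbook material, not anything specific to the post): for a single particle in a potential V(x), the Euler–Lagrange equation hands Newton's second law back to you.

```latex
% L = T - V for one particle; Euler–Lagrange recovers F = ma.
L(x, \dot{x}) = \tfrac{1}{2} m \dot{x}^2 - V(x)

\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x}
  = m\ddot{x} + V'(x) = 0
  \quad\Longrightarrow\quad m\ddot{x} = -V'(x) = F
```

The same recipe then carries over unchanged to coordinates where writing F = ma directly would be painful, which is where the generalizable-intuition payoff shows up.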

2 days ago 2 0 1 0

The weird thing with calc-based physics is that the two are so close as to almost be the same subject. I remember my phys 1 prof gave us a “formula sheet” for the final that was just “F=ma” in large font. The rest is integrals and derivatives.
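A quick sketch of how F = ma plus calculus generates the rest: for a constant force, two integrations produce the usual kinematics formulas.

```latex
% Integrate F = ma twice (constant F, initial velocity v_0, position x_0).
a = \frac{F}{m}, \qquad
v(t) = v_0 + \int_0^t a\,dt' = v_0 + \frac{F}{m}\,t, \qquad
x(t) = x_0 + \int_0^t v(t')\,dt' = x_0 + v_0\,t + \frac{F}{2m}\,t^2
```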

2 days ago 5 1 2 0

But also academia is a poison that tells you you are a better person for drinking it, so make sure you *really* like physics and not just applied math. Because people will pay you a lot of money to do very little with applied math. The opposite is true for physics.

2 days ago 4 0 2 0

I don't really care if someone uses it or not, but I don't think a lot of anti-AI people on Bluesky realize what outliers they are in mainstream society.

3 days ago 66 11 11 2

Which is the identity?

3 days ago 0 0 0 0

Offending the natural order as a life-long patient of fexofenadine and various steroid inhalers, and thus a thrall to big pharma.

4 days ago 5 0 0 0

I would say DS is more femme vs ML Sci/Eng, but certainly not absolutely. It's like how Astrophysics is femme vs Physics.

4 days ago 2 0 1 0

I don’t think they need to be lobbied *for*, but sitting out the fight against them does seem to help the case that there’s something sinister there. We really need to not become the anti-tech qua tech party. I’m not sure how to walk the line that avoids corporate boosterism, but we need to find it.

4 days ago 0 0 0 0

My only caution is you have to be ready for occasional untranslated dialogue in dead European dialects. It’s a little slow and cerebral. Great book though.

4 days ago 1 0 1 0

For, say, $100 million I am confident I may be able to prevent the annihilation of humanity in this scenario. Against the entire amortized future value of our species this is a small price to pay.

4 days ago 2 0 0 0

It is irresponsible not to prepare accordingly. Please donate to my foundation dedicated to crafting procedures for solving stupid alien puzzles.

4 days ago 3 0 1 0

There is only one class of aliens that has no obsession with weird challenges. There are O(2^n) classes of aliens that have weird obsessions that take n bits to describe. For even modest assumptions about n, we must conclude any aliens we encounter will be obsessed with stupid challenges like this
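A toy version of the counting, assuming (as the post does) that each weird obsession is something you could encode in an n-bit description:

```python
# Each n-bit string encodes one hypothetical alien obsession,
# so the number of obsession classes doubles with every added bit.
def num_obsession_classes(n_bits: int) -> int:
    return 2 ** n_bits

# Even modest description lengths swamp the single
# "no weird challenges" class by an astronomical factor.
for n in (10, 40, 100):
    print(n, num_obsession_classes(n))
```

This is exactly the structure the post is lampooning: the conclusion is driven entirely by the exponential count, not by any evidence about which class of aliens actually exists.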

4 days ago 3 0 1 0

Choosing the right set to count is hard! You can’t just pick one, assume a uniform prior, and go wild. It’s cargo-cult decision theory.

4 days ago 4 0 0 0

What if there are aliens that will destroy humanity unless a chosen human can point to their ruler’s favorite color from a selection on a monitor? Was it a mistake to go from 8-bit color to 24-bit, because this was a 65k-fold decrease in the probability our champion will choose correctly?
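The arithmetic behind the 65k-fold figure, as a back-of-envelope sketch (assuming the champion points uniformly at random):

```python
# Palette sizes for the two color depths.
colors_8bit = 2 ** 8    # 256 palette entries
colors_24bit = 2 ** 24  # 16,777,216 RGB triples

# Probability of hitting the one correct color by uniform guessing.
p_8bit = 1 / colors_8bit
p_24bit = 1 / colors_24bit

# Going from 8-bit to 24-bit shrinks the hit probability by 2^16.
fold_decrease = colors_24bit // colors_8bit
print(fold_decrease)  # 65536, i.e. the "65k-fold" decrease
```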

4 days ago 3 0 1 1

I think you always have to be suspicious of counting arguments when you don’t have any information on the distribution of what you’re counting (or equivalently how it will be drawn). It feels very Pascal’s Wager.

4 days ago 2 0 1 0

It’s crazy that we built a machine that can intend things but somehow this neither required nor provided insight into what “intend” really means

4 days ago 2 0 0 0