
Posts by Lee Elkin

Post image

You’re bound to run into @wiglet1981.bsky.social no matter where you are!

3 months ago 8 1 0 0

But the Pittsburgh yinz is for everybody.

4 months ago 2 0 0 0
Post image

First time back in 15 years.

4 months ago 0 0 0 0

True belief only. Theaetetus got played.

5 months ago 5 1 0 0

I was recently asked what I thought about the rationalists. I immediately thought we were talking about Descartes and Leibniz.

5 months ago 0 0 0 0
Post image

Spotted at H&M. Take it however you want.

5 months ago 0 0 0 0

Do you still feel bad about yourself after getting a journal rejection?

6 months ago 1 0 1 0

My teachers in grad school left out that bit.

7 months ago 0 0 0 0
Post image

Sure, let me Dutch book myself.

7 months ago 8 1 0 0
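A Dutch book is the standard cash-out of this joke: incoherent credences let a bookie sell you a package of bets you each accept yet jointly guarantee a loss. A minimal sketch, with hypothetical numbers not taken from the post:

```python
# Illustrative Dutch book: an agent with incoherent credences
# P(A) = 0.6 and P(not-A) = 0.6 (summing to 1.2, violating additivity)
# buys $1-stake bets on both propositions at those prices.

def bet_payoff(stake, price, wins):
    """Net payoff of buying a bet at `price` that pays `stake` if the event occurs."""
    return (stake if wins else 0.0) - price

cred_A, cred_not_A = 0.6, 0.6  # incoherent pair of credences

for a_occurs in (True, False):
    total = (bet_payoff(1.0, cred_A, a_occurs)
             + bet_payoff(1.0, cred_not_A, not a_occurs))
    # In either state of the world the agent is down 0.2: a sure loss.
    assert total < 0
```

Either bet looks fair by the agent's own lights; it is only the package, priced off credences summing past 1, that guarantees the loss.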

Proceduralists concerned with fair representation should view belief aggregation/opinion pooling as a credal cake-cutting problem.

7 months ago 0 0 0 0
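For readers outside formal epistemology, the simplest opinion-pooling rule the post alludes to is linear pooling: a weighted average of the group's credences, where the cake-cutting framing treats the weights as each agent's fair share of influence. A minimal sketch with hypothetical credences and weights:

```python
# Linear opinion pooling: the group credence is a convex combination
# of individual credences (weights are the "fair shares" of influence).

def linear_pool(credences, weights):
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must form a convex combination
    return sum(w * c for w, c in zip(weights, credences))

group = [0.9, 0.5, 0.2]     # three agents' credences in one proposition (made up)
weights = [0.5, 0.3, 0.2]   # illustrative allocation of representational weight
pooled = linear_pool(group, weights)  # 0.45 + 0.15 + 0.04 = 0.64
```

On the proceduralist reading, the fairness question is not which pooled value comes out, but whether the weight vector itself divides influence enviably or not.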

Random thought: alignment should entail coalitional envy-freeness. But envy-freeness is likely violated in many cases since states of alignment are often Pareto dominated by the status quo.

7 months ago 0 0 0 0

Maybe my most boomerish moment: back in my day, we were taught argument from analogy is bad mmmk.

8 months ago 0 0 0 0

Agency is all the rage at the tech companies, so that’s a strong angle. My preliminary thoughts are on how misalignment could be advanced by neglecting wellbeing in case it turns out to be realized by systems (whether genuine or simulated).

8 months ago 2 0 0 0

We should chat about it once I get some ideas going.

8 months ago 1 0 1 0

For sure! The algorithmic fairness stuff was some low-hanging fruit since statistical fairness criteria and ensemble learning relate to my formal work. I'm starting to get into AI welfare, more along conceptual lines rather than formal, so that might be something if you have any interest there.

8 months ago 1 0 1 0
Post image

Hot, humid summer day in Hong Kong. But what a view.

8 months ago 10 2 1 0

Thanks!

9 months ago 0 0 0 0

I’ll be joining the University of Hong Kong 🇭🇰 this month to work broadly on AI Welfare. Big topic, but bigger life event.

9 months ago 9 0 2 0

In the case of fear, maybe there is an implicit conditional/positive correlation in the background, where the antecedent/conditioning variable is the reason, i.e., q -> p, or pr(p | q) > pr(p).

10 months ago 2 0 0 0
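The positive-relevance condition pr(p | q) > pr(p) can be checked directly from a joint distribution; a minimal sketch with hypothetical numbers chosen so that q raises the probability of p:

```python
# Joint distribution over (p, q), made-up numbers with p and q
# positively correlated.
joint = {(True, True): 0.30, (True, False): 0.10,
         (False, True): 0.10, (False, False): 0.50}

pr_p = sum(v for (p, q), v in joint.items() if p)  # marginal pr(p) = 0.40
pr_q = sum(v for (p, q), v in joint.items() if q)  # marginal pr(q) = 0.40
pr_p_given_q = joint[(True, True)] / pr_q          # pr(p | q) = 0.75

assert pr_p_given_q > pr_p  # q is positively relevant to p
```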

Nein

10 months ago 1 0 0 0

Lots of AI debates would disappear if we brought eliminative materialism back in style.

10 months ago 2 1 0 0

I asked my five-year-old nephew if he is a human. He says, “Of course, I am a HUMEAN!”

10 months ago 0 0 0 0

Controversial take: AI MADE METAPHYSICS GREAT AGAIN

11 months ago 2 0 1 0

And a lack of engagement.

11 months ago 2 0 0 0

Has the following claim been explicitly made by P(doom) > 0 folk:

P(doom | mistreatment) >= P(doom)

'mistreatment' refers to some form of mistreatment of developed AI systems. #AISafety #AIAlignment

11 months ago 2 0 0 0

Yep

11 months ago 0 0 0 0
Post image Post image

Relaxed or uptight?

11 months ago 1 0 1 0

Editors could also make positive suggestions where appropriate for desk rejects. PPA recently desk rejected a paper of mine, but said “we think this looks like a very good paper” but a better fit for another journal like Synthese or Episteme. That was helpful and encouraging.

1 year ago 1 0 0 0

I always do that, especially if it’s clear that the reviewers skimmed the paper and complain about things that have been addressed.

1 year ago 1 0 1 0

Any time you go to Greggs, you’re automatically maximizing.

1 year ago 2 0 1 0