
Posts by Jared Harris

Thanks for this! I was following most of you, now I'm following all of you.

3 months ago 2 0 0 0

Sorry! Only read the earlier thread. Thanks for your helpful writeup

3 months ago 1 0 0 0

I can't find literature on adversarial development methods. I do find a lot of stuff about adversarial this and that, but it mostly has to do with tricking learning systems, which is obviously not what you mean. Do you have pointers?

3 months ago 2 0 2 0

Going through all the comments and many of the accounts, I learned a lot about the mental ecology of BlueSky

6 months ago 1 0 0 0

In the end I muted 130 accounts that just made content-free negative/insulting comments. I looked at each of their accounts and only muted ones that I wouldn't miss. Some were toxic but most were just quoting others.

6 months ago 0 0 0 0

I went through every comment. I found a couple of people I followed, about 30 people who made interesting but not technical comments, and 11 people who posted content-free insults but who were otherwise interesting.

6 months ago 0 0 0 0

Then you aren't in the debate. He said "there are only two positions in the debate about AI". You aren't advocating for either, so the statement doesn't apply to you.

The 800+ people replying to him are ranting a lot louder! Plus they clog up the sidewalk.

6 months ago 0 0 5 1

It did produce an incredibly target-rich thread for muting

6 months ago 1 0 1 0

I tend to agree. I prefer muting; I only block when the poster seems likely to be very aggressive.

11 months ago 2 0 0 0

I have been aggressively muting (and occasionally blocking) AI haters (those who have nothing interesting to say, which is most of them). This has helped a lot to clean up my feed. I will investigate how to turn my mutes/blocks into a list others can use.

11 months ago 3 0 1 1

Good points! What are the most important recent paradigm shifts (and some of the papers)?

11 months ago 1 0 0 0

This sort of "language use" is what the "AI is dumb" crowd would point out as evidence of "AI parroting"

1 year ago 3 0 0 0

This is great! BlueSky needs more research discovery tools

1 year ago 3 0 0 0

Are you serious? As long as we have Wikipedia, do we need doctors? Will you be calling up your literature professor friends to talk over poems whenever you have questions?

We have *both* AI *and* people. AI can help us by complementing people and making us all more effective.

1 year ago 0 0 0 0

We're also empowered to have this conversation.

Sounds like on the whole you are not a fan of giving people more power.

1 year ago 0 0 1 0

Do you trust Goldman Sachs?

1 year ago 0 0 1 0

Great to know that from your perspective we have all the understanding of diseases and inflation that we want or need. Unfortunately that isn't how things look from where I sit.

Should we cancel all the literature classes that study poetry?

1 year ago 0 0 1 0

Do you agree that if / when AI empowers individuals, that is democratizing?

1 year ago 0 0 2 0

The newest DeepSeek model matches the best previous models but is 45X cheaper to train.

I am running a version of this model on my home computer without any special hardware or high power consumption.

Any tech is less efficient at the beginning.

1 year ago 0 0 1 0

Individuals can also use the tech to help them understand poems, or math puzzles, or graphs of diseases or inflation.

As the tech becomes widely, cheaply available it *can* empower people. Should we trust them to use it to do good things?

1 year ago 0 0 1 0

How can we tell if AI will do more to empower individuals trying to do good things?

Open source models let enormous numbers of people use AI to accomplish their goals. I believe that most people are good.

1 year ago 0 0 0 0

AI models are rapidly getting cheaper to run, and open source ones are rapidly catching up to proprietary ones in ability. This democratizes access.

People find that AI empowers them, as individuals, to accomplish things they couldn't do otherwise.

These are current facts.

1 year ago 0 0 2 0

Recent "reasoning" models have more ability to be self-critical and catch and fix their mistakes. The capitalist imperative will push the tech toward correctness and more creative solutions because that will be worth more money.

So maybe these problems are growing pains?

1 year ago 1 0 1 0

How will the oligarchs control the DeepSeek models? Or the Llama family? Or the Qwen family?

Worrying about the oligarchs is important. But we must not think of them as having magical powers.

1 year ago 0 0 0 0

Absolutely yes! Surprises are a big part of the package. Right now OpenAI et al. are very surprised at how good open source models have gotten.

Any given technology reduces the *cost* of some activities. Then *people* decide what they want to do with it.

1 year ago 1 0 0 0

The compute will never be free but it is getting much cheaper. I'm running one of the newest models on my home machine now (it isn't a special machine). People will soon be able to run open source models on their phones, customize them, etc.

1 year ago 0 0 0 0

Maybe you are referring to the algorithms used by e.g. Facebook to show users content? Those do apparently try to "maximize engagement", promoting addiction. However they are not large language models and don't have at all the same design or capability. Plus they are not open source.

1 year ago 1 0 1 0

So partly this is a theological argument?

I would very much like to see your analysis of how the design of these AI models is targeted to get users addicted. I have seen a lot of discussions (pro and con) of the designs but have never seen how the design is set up to achieve this.

1 year ago 0 0 2 0

Are bookstore folks fascist when they perform the same office?

1 year ago 0 0 1 0

Thanks! I'll look at that. Didn't mean to imply you are a library, just that you have researched the topic. I have no idea how to trace your statements back to your sources.

1 year ago 0 0 1 0