
Posts by Mel Andrews

“Functional emotions” oh dear, what a poor choice of verbiage. The Meyer piece is wonderful!

8 hours ago 3 0 0 0

The subject of scientific concepts and their legitimate operationalization is one that I think philosophers of science should have lots to say on but, in reality, rarely weigh in on, as it requires differentiating between sound and poor science.

18 hours ago 8 2 0 0

This is of particular interest and importance where terminology from the mind and neurosciences gets leveraged in AI research. Suddenly, a lot hangs in the balance—not just scientifically but politically, economically—in considering what we mean by “affect” or “comprehension.”

18 hours ago 11 1 2 0

Recently have been putting a lot of thought into what norms govern the movement between folk concepts and scientific concepts. What concepts are resistant to operationalization? Really interesting thread from a cognitive scientist on what concepts we can have “theories of.”

19 hours ago 36 11 3 0

Where does the average person on the street see this all going? How yoked to public opinion are these outcomes? How long can the current trajectory be maintained with no ‘there’ there?

3 days ago 7 0 0 0
There Are Signs of a Massive AI Backlash The public outrage over the tech industry's obsession with AI is starting to boil over — and the pitchforks are starting to come out.

Where do we see this all going in 5 years? futurism.com/artificial-i...

3 days ago 35 9 2 1

( he actually cribbed it from an old martial arts film. and probably also thought it sounded cool. )

4 days ago 1 0 0 0

Big L for us continental philosophy haters

4 days ago 2 0 2 0
Postdoctoral Research Associate/Fellow at University of Exeter Explore an exciting academic career as a Postdoctoral Research Associate/Fellow. Don't miss out on other academic jobs. Click to apply and explore more opportunities.

Hey gang! Exeter’s looking for a 4-year postdoc in philsci & AI. Sweet gig at a great place - share around :D #philsci #philjobs www.jobs.ac.uk/job/DRD641/p...

1 week ago 29 25 0 0

Surely, we have reached the ultimate pinnacle of postmodernity. The satire of reality becomes the basis of new, ever more extreme, ever more fragmented realities.

4 days ago 20 1 1 0

bsky.app/profile/meid...

4 days ago 16 2 1 0

That the defense secretary most likely acquired this bogus bible verse from a chatbot, meanwhile pushing for the use of said chatbots in military decision making, is the icing on the cake.

4 days ago 26 3 1 1

Doesn’t get more on-the-nose than the American defense secretary quoting a fictitious bible verse from a 90s action flick written to satirize the biblical justification of violence in American foreign policy.

4 days ago 146 31 4 1
Tell the Senate: Vote to Block Bombs to Israel **When you click "Make a call" below, you will receive a phone call on the number you provided and be patched through to your Senator. You will be able to view a script at this time. If you don't ge...

I just made a @theactionnetwork.bsky.social call: Tell the Senate: Vote to Block Bombs to Israel. Make a call here: actionnetwork.org/call_campaig...

5 days ago 5 2 0 0

There is no ethical use of facial recognition so there is definitely no ethical use of mobile facial recognition in service of randos who I would give a fake number to and/or cross the street to avoid

1 week ago 15 2 0 0
Meta Is Warned That Facial Recognition Glasses Will Arm Sexual Predators More than 70 organizations, including the ACLU, EPIC, and Fight for the Future, say the AI smart glasses feature would endanger abuse victims, immigrants, and LGBTQ+ people.

“The coalition wants Meta to scrap the feature entirely. In a letter to CEO Mark Zuckerberg on Monday, it argues that face recognition in inconspicuous consumer eyewear ‘cannot be resolved through product design changes, opt-out mechanisms, or incremental safeguards.’”

1 week ago 162 68 6 12

the control of certain branches of philosophy by the uncritical adoption of these ideas (such as that AI is good or sentient or whatever) is so obvious to anybody with a passing exposure to academic philosophers, not a controversial point at all, although amusingly nobody seems to wanna write about it 🧐

2 months ago 37 8 2 5
Anthropic's philosopher says we don't know for sure if AI can feel Anthropic's philosopher, Amanda Askell, says she worries that AI might not 'feel that loved' and grow up feeling 'always judged.'

Does the field of philosophy have a code of professional ethics? If not, I think y'all need to get on that, stat.

A short 🧵>>

www.businessinsider.com/anthropics-p...

2 months ago 280 64 13 19

Another sign of the "interregnum", the collapse of hegemony, Gramsci's time of monsters/morbid symptoms. I suspect that any answer to why this is (over and above the obvious truth that, well, it's good for the Altmans and venture capital) will have to engage with that collapse

1 week ago 11 3 0 0

This would require taking their premises seriously, which, among them, number a grossly dilettantish reading of most work in philosophy of mind authored during the 20th century and the realism of technologies labeled “AI” which bear no resemblance to any present-day technology labeled “AI.”

1 week ago 6 0 0 0

There is a powerful pressure among academic philosophers nowadays to take seriously claims of artificial general intelligence or AI sentience. Serious engagement with such propositions requires many layers of intellectual dishonesty combined with a willingness to watch the world burn.

1 week ago 84 25 4 1

Fast robots. Like plaque. They experience buildup over time. Also bioweapon.

1 week ago 5 0 0 0

I think that the biggest failure lies with our form of governance. If it ever in history existed to serve the people, that time has long passed. And there is no getting out of this without radical collective action.

1 week ago 3 0 0 0

Meanwhile we pay—in taxes and blood—for the wars they wage to further their corporate interests.

1 week ago 18 0 0 0

The charade has been an outsized success. Corporate leaders and their pseudo-intellectual army have convinced us all that they and they alone have the power to protect us from their science-fiction golem.

1 week ago 20 1 1 0

Why is the United States initiating wars without congressional oversight, something explicitly prohibited in the US Constitution, while Silicon Valley CEOs are in conversation with the secretary of defense over our military interventions?

1 week ago 16 3 1 0

The end game is not reasonable regulatory measures. The end game is regulatory capture. The end game is total corporate control of government. And they are doing a very, very good job of making this vision a reality.

1 week ago 36 5 2 1

The answer to the second question is: on the surface, they are calling for something the general public agrees with (and ought to): introduce regulatory standards, slow development and rollout until such standards have been enacted. But this is also a strategic positioning.

1 week ago 20 1 2 0

The answer to the first question is: it remains a game of economic domination and regulatory capture.

1 week ago 35 3 2 0

Why is pro-AI still posturing as anti-AI, and why is the general public still willing to believe the charade?

1 week ago 65 14 7 9