“Functional emotions” oh dear, what a poor choice of verbiage. The Meyer piece is wonderful!
Posts by Mel Andrews
The subject of scientific concepts and their legitimate operationalization is one that I think philosophers of science should have lots to say on but, in reality, rarely weigh in on, as it requires differentiating between sound and poor science.
This is of particular interest and importance where terminology from the mind and neurosciences gets leveraged in AI research. Suddenly, a lot hangs in the balance—not just scientifically but politically, economically—in considering what we mean by “affect” or “comprehension.”
Recently I have been putting a lot of thought into what norms govern the movement between folk concepts and scientific concepts. Which concepts are resistant to operationalization? Really interesting thread from a cognitive scientist on what concepts we can have “theories of.”
Where does the average person on the street see this all going? How yoked to public opinion are these outcomes? How long can the current trajectory be maintained with no ‘there’ there?
( he actually cribbed it from an old martial arts film. and probably also thought it sounded cool. )
Big L for us continental philosophy haters
Hey gang! Exeter’s looking for a 4-year postdoc in philsci & AI. Sweet gig at a great place - share around :D #philsci #philjobs www.jobs.ac.uk/job/DRD641/p...
Surely, we have reached the ultimate pinnacle of postmodernity. The satire of reality becomes the basis of new, ever more extreme, ever more fragmented realities.
bsky.app/profile/meid...
That the defense secretary most likely acquired this bogus bible verse from a chatbot, meanwhile pushing for the use of said chatbots in military decision making, is the icing on the cake.
Doesn’t get more on-the-nose than the American defense secretary quoting a fictitious bible verse from a 90s action flick written to satirize the biblical justification of violence in American foreign policy.
I just made a @theactionnetwork.bsky.social call: Tell the Senate: Vote to Block Bombs to Israel. Make a call here: actionnetwork.org/call_campaig...
There is no ethical use of facial recognition so there is definitely no ethical use of mobile facial recognition in service of randos who I would give a fake number to and/or cross the street to avoid
“The coalition wants Meta to scrap the feature entirely. In a letter to CEO Mark Zuckerberg on Monday, it argues that face recognition in inconspicuous consumer eyewear ‘cannot be resolved through product design changes, opt-out mechanisms, or incremental safeguards.’”
the control of certain branches of philosophy by the uncritical adoption of these ideas, such as AI being good or sentient or whatever, is so obvious to anybody with a passing exposure to academic philosophers, not a controversial point at all, although amusingly nobody seems to wanna write about it 🧐
Does the field of philosophy have a code of professional ethics? If not, I think y'all need to get on that, stat.
A short 🧵>>
www.businessinsider.com/anthropics-p...
Another sign of the "interregnum", the collapse of hegemony, Gramsci's time of monsters/morbid symptoms. I suspect that any answer to why this is (over and above the obvious truth that, well, it's good for the Altmans and venture capital) will have to engage with that collapse
This would require taking their premises seriously, which, among them, number a grossly dilettantish reading of most work in philosophy of mind authored during the 20th century and the realism of technologies labeled “AI” which bear no resemblance to any present-day technology labeled “AI.”
There is a powerful pressure among academic philosophers nowadays to take seriously claims of artificial general intelligence or AI sentience. Serious engagement with such propositions requires many layers of intellectual dishonesty combined with a willingness to watch the world burn.
Fast robots. Like plaque. They experience buildup over time. Also bioweapon.
I think that the biggest failure lies with our form of governance. If it ever in history existed to serve the people, that time has long passed. And there is no getting out of this without radical collective action.
Meanwhile we pay—in taxes and blood—for the wars they wage to further their corporate interests.
The charade has been an outsized success. Corporate leaders and their pseudo-intellectual army have convinced us all that they and they alone have the power to protect us from their science-fiction golem.
Why is the United States initiating wars without congressional oversight, something explicitly prohibited in the US Constitution, while Silicon Valley CEOs are in conversation with the secretary of defense over our military interventions?
The end game is not reasonable regulatory measures. The end game is regulatory capture. The end game is total corporate control of government. And they are doing a very, very good job of making this vision a reality.
The answer to the second question is: on the surface, they are calling for something the general public agrees with (and ought to): introduce regulatory standards, slow development and rollout until such standards have been enacted. But this is also a strategic positioning.
The answer to the first question is: it remains a game of economic domination and regulatory capture.
Why is pro-AI still posturing as anti-AI, and why is the general public still willing to believe the charade?