Sociology's value is rarely apparent to those outside the field, and it falls on every individual sociologist to try to justify it. Nobody asks that of chemists. I can think of few fields that have had a worse time, historically, proving their authority.
Posts by Taylor Beauvais
"sociology is thriving while Sociology is dying"
...And Sociology™ killed it, thanks to tremendous gatekeeping and inaccessible writing in Sociology® journals, methodologically and topically conservative Sociology® associations, and institutional Sociology®'s tepid interest in application.
Breaking News: Meta and YouTube harmed a young user’s mental health with addictive design features, a jury found in a landmark trial.
This would help academia so much, too. When the zeitgeist centers some billionaire's rambling, it amplifies their framing of the technology. Suddenly we have tons of academics asking nonsense questions like "is this code actually alive??"
Imagine if tech reporters reported more on how tech systems function (or don't) rather than on the business machinations of tech execs and companies. This isn't always the journalist's decision, of course, but I think about it a lot: how much more useful tech reporting could be.
For this week's Tech Policy Press podcast, Justin Hendrix spoke to Boston University's Woodrow Hartzog and Jessica Silbey about their forthcoming law review paper, "How AI Destroys Democratic Institutions." They say the "affordances of AI systems extinguish" key features of democratic institutions.
Science involving social constructs is extremely fallible and often perpetuates racism/sexism/etc. For example, all medical and statistical processes involving identity are contextually constrained.
E.g.: thalidomide, IQ tests, pulse ox, the eGFR kidney function equation, the VBAC calculator...
We should treat large social and AI systems like critical infrastructure and adopt "building codes" for them, write David A. Broniatowski and Joseph Simons. Building codes are not suggestions; they are the baseline that ensures a structure is fit for its intended use, they write.
Disturbing anecdotal reports of "AI psychosis" and negative psychological effects have been emerging in the news. But what actually happens during these lengthy delusional "spirals"? In our preprint, we analyze chat logs from 19 users who experienced severe psychological harm🧵👇
Out here trying to find a job post-graduation. Writing cover letters, crafting resumes, revising teaching statements, and carefully detailing research statements... and high-ranking officials with serious jobs are yabbering about teleporting to Waffle House.
Lol Jesus Christ. Social science probs shouldn't take direction from a marketing professor who (checks notes) says Muslims bring mass violence, gender shouldn't be a protected class, and Mein Kampf wasn't so bad for Jewish people.
"Suicidal empathy" was never a legitimate frame for social research
Hard not to read something like this and wonder what the hell we're doing here.
This is a fairly long saga with plenty of named and implied actors doing work that is essentially shitposting with AI for no real purpose.
"The money will help deal AI slop/ mediocre/ fraudulent submissions"
Seems a bit of a cop-out. The veneer of publication without peer review has always been a problem with preprints. It bypasses the processes that confer authority. More $ doesn't change that.
www.science.org/content/arti...
My new free interactive tool, The Identity Map, helps you visualize how your social identities shape what you see online, and how open you might be to new info. Takes ~5 min. Grounded in social identity complexity research. Try it and see your Complexity Score!
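The post doesn't say how the tool actually computes its score, so purely as an illustration: the social identity complexity literature it draws on (e.g., Roccas & Brewer) often operationalizes complexity as the inverse of perceived overlap among one's identity groups. A minimal sketch under that assumption; the function name and ratings below are made up, and this is not The Identity Map's actual implementation:

```python
def complexity_score(identity_overlaps):
    """Illustrative overlap-based social identity complexity measure:
    lower perceived overlap between one's groups = higher complexity.

    identity_overlaps: dict mapping a pair of identity labels to a
    perceived overlap rating in [0, 1] (1 = the groups feel identical).
    Returns a score in [0, 1], where 1 is maximally complex.
    """
    if not identity_overlaps:
        return 0.0
    mean_overlap = sum(identity_overlaps.values()) / len(identity_overlaps)
    return 1.0 - mean_overlap

# Hypothetical ratings for three identities: how much does each pair
# of groups overlap in the rater's mind?
ratings = {
    ("sociologist", "knitter"): 0.1,
    ("sociologist", "cyclist"): 0.2,
    ("knitter", "cyclist"): 0.3,
}
print(f"Complexity Score: {complexity_score(ratings):.2f}")  # 0.80
```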
I procrastinated on my thesis today (due in a couple months) by walking my dog and daydreaming about opening a yarn store/bakery at the empty storefront on the corner near my apartment. It would be called Skeins and Scones
Every time I get a paper rejection I think croissants would never be rejected.
I once had to grade a graduate student paper that argued, using ethical frameworks, that using race as a factor in AI for dating app matches was valid because "maybe some races are less desirable".
I taught "AI Ethics" to graduate students, some who had industry experience already. Often the "ethics" we tried to instill were weaponized as argumentative tools to justify what they already wanted to do, now "morally justified". It's effective altruist mental gymnastics all the way down.
Yes, but this is only possible through a long disinvestment in media literacy. News consumption should be treated with more care than memes. Practicing law should be more respectable than reciting a magic combo of prior decisions. Digital architecture requires engineering, not just autocomplete.
I dunno... I'm going to need some receipts on the "great things" about it...
There exist several other criteria, metrics, and alternative approaches. Fairness measures often achieve fairness by making every group worse off or by bringing better-performing groups down to the level of the worst off. In rejection of this approach, and with the aim of improving outcomes for historically marginalized groups, the concept of leveling up () has been proposed, whereby systems are designed with minimum acceptable harm thresholds or minimum rate constraints (in which a minimum proportion of positive outcomes must be granted to specific demographic groups). The field has produced countless tools and frameworks geared towards addressing algorithmic bias as well as some of the broader challenges discussed here. These include REVISE, aimed at surfacing bias in visual datasets (); Aequitas, designed to evaluate machine learning models, particularly binary classifiers (); and IBM's AI Fairness 360, a large suite with prebuilt datasets and visualization tools (). As detailed in the next section, the effectiveness, merit, and utility of these tools are contested. Notably, datasheets and model cards remain among the most consequential tools and frameworks, having initiated, popularized, and subsequently normalized the integration of such documentation practices as industry standard. Datasheets for datasets () aim to improve transparency through documentation of training data, whereas model cards document key information about a model in a consistent and structured manner geared towards increasing transparency and accountability ().
I also look at traditional approaches to fairness in machine learning (group fairness & individual fairness), as well as less common approaches (such as the concept of levelling up) and some widely used tools
5/
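For readers wondering what a "minimum rate constraint" looks like in practice, here is a minimal sketch under my own assumptions (the function name and toy score distributions are mine, not from the paper): it picks a per-group score threshold so that each group receives at least a floor proportion of positive outcomes, rather than leveling better-off groups down.

```python
import numpy as np

def min_rate_thresholds(scores_by_group, min_positive_rate=0.3):
    """Pick a score threshold per group so that roughly at least
    `min_positive_rate` of each group gets the positive outcome.

    scores_by_group: dict mapping group label -> 1-D array of scores.
    Returns: dict mapping group label -> threshold.
    """
    thresholds = {}
    for group, scores in scores_by_group.items():
        # The (1 - rate) quantile leaves about `rate` of the group's
        # scores at or above the threshold.
        thresholds[group] = np.quantile(scores, 1 - min_positive_rate)
    return thresholds

# Toy example: group B scores systematically lower, so a single global
# threshold would grant it far fewer positive outcomes.
rng = np.random.default_rng(0)
scores = {
    "A": rng.beta(5, 2, size=1000),  # higher-scoring group
    "B": rng.beta(2, 5, size=1000),  # lower-scoring group
}
for group, t in min_rate_thresholds(scores).items():
    rate = (scores[group] >= t).mean()
    print(f"group {group}: threshold={t:.3f}, positive rate={rate:.2%}")
```

Real leveling-up proposals also weigh harm thresholds and error costs; this only shows the rate-floor mechanic.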
Synthetic collateralized debt obligation vibes
I once waited 4 months for a desk rejection 🤡
Another time I waited 2 months for them to tell me a paper was unsubmitted because I'd gotten the citation style wrong
Sometimes I wonder if perishing is the more favorable option compared to publishing 😂
Having a real hard time not being filled with nihilism about working in/for higher education when this is increasingly the attitude espoused by "successful" professors.
What a time to be entering the job market
www.science.org/content/arti...
Comment, 2 February 2026: "Does AI already have human-level intelligence? The evidence is clear." The vision of human-level machine intelligence laid out by Alan Turing in the 1950s is now a reality. Eyes unclouded by dread or hype will help us to prepare for what comes next.
Nature published another pile of trash
i am trying to catch up on some of my reading, but this one is getting under my skin, so here's a thread highlighting why this piece is either ill-informed or intentionally ignorant of a wealth of knowledge from embodied cog sci and related fields
1/
I am old enough to remember the vitriol directed at random humanities scholars' work complaining about the totalizing tendencies of capitalism to corrupt, colonize, or co-opt everything. I have also lived long enough to read long-term newspaper coverage of the literal attempt to privatize thinking.