
Posts by Taylor Beauvais

sociology's value is rarely noticed by those outside the field, and it falls on every individual to justify it. Nobody asks that of Chemists. I can think of few fields that have had a worse time, historically, proving their authority.

2 weeks ago 0 0 0 0

"sociology is thriving while Sociology is dying"

...And Sociology™ killed it, thanks to tremendous gatekeeping and inaccessible writing in Sociology® journals, methodologically and topically conservative Sociology® associations, and institutional Sociology®'s tepid interest in application.

2 weeks ago 0 0 1 0
Meta and YouTube Found Negligent in Landmark Social Media Addiction Case A jury found the companies negligent in their app designs, concluding that addictive design features harmed a young user and contributed to her mental health distress.

Breaking News: Meta and YouTube harmed a young user’s mental health with addictive design features, a jury found in a landmark trial.

4 weeks ago 150 35 10 12

This would help academia so much too. When the zeitgeist centers some billionaire's rambling it amplifies their framing of the technology. Suddenly we have tons of academics asking nonsense questions like "is this code actually alive??"

4 weeks ago 1 0 0 0

Imagine if tech reporters reported more on how tech systems function, or don't, rather than on the business machinations of tech execs and companies. This isn't always the journalist's decision, of course, but I think about this a lot: how much more useful tech reporting could be.

4 weeks ago 4 1 1 1
Considering How AI Destroys Democratic Institutions Boston University School of Law professors Woodrow Hartzog and Jessica Silbey say today's AI systems are a "death sentence" for civic institutions.

For this week's Tech Policy Press podcast, Justin Hendrix spoke to Boston University's Woodrow Hartzog and Jessica Silbey about their forthcoming law review paper, "How AI Destroys Democratic Institutions." They say the "affordances of AI systems extinguish" key features of democratic institutions.

1 month ago 16 9 0 0
Why Equitable Access to Vaginal Birth Requires Abolition of Race-Based Medicine Higher rates of cesarean delivery among Black and Hispanic women in the United States have long demonstrated racial inequity in obstetrical care.

journalofethics.ama-assn.org/article/why-...

1 month ago 0 0 0 1
eGFR Test Change: Removal of Race from the Calculation Learn more about the changes that have been made to the eGFR (estimated glomerular filtration rate) calculation.

www.kidneyfund.org/all-about-ki...

1 month ago 0 0 1 0
A review of the effect of skin pigmentation on pulse oximeter accuracy Objective. Pulse oximetry is a non-invasive optical technique used to measure arterial oxygen saturation (SpO2) in a variety of clinical settings and scenarios. Despite being one of the most significant technological advances in health monitoring over ...

pmc.ncbi.nlm.nih.gov/articles/PMC...

1 month ago 0 0 1 0
On the Dark History and Ongoing Ableist Legacy of the IQ Test How research helps us understand the past to create a better future.

www.bunkhistory.org/resources/on...

1 month ago 0 0 1 0
The Fight for Women’s Health: When Medical Research Failed Women in the Wake of the Thalidomide Scandal, Dr. Nanette Wenger Fought Back.

www.lostwomenofscience.org/post/the-fig...

1 month ago 0 0 1 0

Science involving social constructs is extremely fallible and often perpetuates racism, sexism, and other biases. For example, all medical and statistical processes involving identity are contextually constrained.

E.g.: thalidomide, IQ tests, pulse oximeters, the eGFR kidney function equation, the VBAC calculator...

1 month ago 0 0 1 0
A Building Code for Digital Infrastructures We should treat large social and AI systems like critical infrastructure and adopt "building codes" for them, write David A. Broniatowski and Joseph Simons.

We should treat large social and AI systems like critical infrastructure and adopt "building codes" for them, write David A. Broniatowski and Joseph Simons. Building codes are not suggestions; they are the baseline that ensures a structure is fit for its intended use, they write.

1 month ago 12 5 0 0

Disturbing anecdotal reports of "AI psychosis" and negative psychological effects have been emerging in the news. But what actually happens during these lengthy delusional "spirals"? In our preprint, we analyze chat logs from 19 users who experienced severe psychological harm🧵👇

1 month ago 223 131 3 13

Out here trying to find a job post-graduation. Writing cover letters, crafting resumes, revising teaching statements, and carefully detailing research statements... and high-ranking officials with serious jobs are yabbering about teleporting to Waffle House.

1 month ago 0 0 0 0

Lol Jesus Christ. Social science probs shouldn't take direction from a marketing professor who (checks notes) says Muslims bring mass violence, gender shouldn't be a protected class, and Mein Kampf wasn't so bad for Jewish people.

"Suicidal empathy" was never a legitimate frame for social research

1 month ago 0 0 0 0

Hard not to read something like this and wonder what the hell we're doing here.

This is a fairly long saga with plenty of named and implied actors doing work that is essentially shit posting with AI for no real purpose.

1 month ago 1 0 0 0
ArXiv, the pioneering preprint server, declares independence from Cornell As an independent nonprofit, it hopes to raise funds to cope with exploding submissions and “AI slop”

"The money will help deal with AI slop/mediocre/fraudulent submissions."

Seems a bit of a cop-out. The veneer of publication without peer review has always been a problem with preprints. It bypasses authority-granting processes. More $ doesn't change that.

www.science.org/content/arti...

1 month ago 0 0 0 0

My new free interactive tool, The Identity Map, helps you visualize how your social identities shape what you see online, and how open you might be to new info. Takes ~5 min. Grounded in social identity complexity research. Try it and see your Complexity Score!

1 month ago 20 3 2 1

I procrastinated my thesis today (due in a couple months) by walking my dog and daydreaming about opening a yarn store/bakery at the empty storefront on the corner near my apartment. It would be called Skeins and Scones.

Every time I get a paper rejection I think croissants would never be rejected.

1 month ago 1 0 0 0

I once had to grade a graduate student paper that argued, using ethical frameworks, that using race as a factor in AI dating app matches was valid because "maybe some races are less desirable".

1 month ago 1 1 1 0

I taught "AI Ethics" to graduate students, some of whom already had industry experience. Often the "ethics" we tried to instill were weaponized as argumentative tools to justify what they already wanted to do, now "morally justified." It's effective-altruist mental gymnastics all the way down.

1 month ago 26 10 0 0

Yes, but only through a long disinvestment in media literacy is this possible. News consumption should be treated with more care than memes. Practicing law should be more respectable than reciting a magic combo of prior decisions. Digital architecture requires engineering not just autocomplete.

1 month ago 0 1 0 0

I dunno... I'm going to need some receipts on the "great things" about it...

1 month ago 1 1 0 0
There exist several other criteria, metrics, and alternative approaches. Fairness measures often achieve fairness by making every group worse off, or by bringing better-performing groups down to the level of the worst off. In rejection of this approach, and with the aim of improving outcomes for historically marginalized groups, the concept of leveling up () has been proposed, whereby systems are designed with minimum acceptable harm thresholds or minimum rate constraints (in which a minimum proportion of positive outcomes must be granted to specific demographic groups).

The field has produced countless tools and frameworks geared towards addressing algorithmic bias as well as some of the broader challenges discussed here. These include tools like REVISE, aimed at surfacing bias in visual datasets (); Aequitas, designed to evaluate machine learning models, particularly binary classifiers (); and IBM’s AI Fairness 360, a large suite with prebuilt datasets and visualization tools (). As detailed in the next section, the effectiveness, merit, and utility of these tools are contested.

Notably, datasheets and model cards remain among the most consequential tools and frameworks for initiating, popularizing, and subsequently normalizing the integration of such practices as industry standard. Datasheets for datasets () aim to improve transparency through documentation of training data, whereas model cards document key information about a model in a consistent, structured manner geared towards increasing transparency and accountability ().
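The "minimum rate constraint" idea behind leveling up can be illustrated with a short sketch. This is not code from any of the cited tools; the group names, scores, and thresholds are invented for illustration. The idea: rather than dragging a better-off group down, lower a worse-off group's decision threshold until that group receives at least a floor proportion of positive outcomes.

```python
def selection_rate(scores, threshold):
    """Fraction of a group's scores that clear the decision threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def enforce_min_rate(scores_by_group, threshold=0.5, min_rate=0.3):
    """Return per-group thresholds satisfying a minimum rate constraint.

    Each group's threshold starts at the global value and is lowered in
    steps of 0.05 until that group's positive-outcome rate >= min_rate.
    Groups already above the floor keep the global threshold.
    """
    thresholds = {}
    for group, scores in scores_by_group.items():
        t = threshold
        while selection_rate(scores, t) < min_rate and t > 0:
            t = round(t - 0.05, 2)
        thresholds[group] = t
    return thresholds

# Hypothetical scores for two demographic groups.
scores = {
    "group_a": [0.9, 0.7, 0.6, 0.55, 0.4],   # already above the floor
    "group_b": [0.45, 0.4, 0.35, 0.2, 0.1],  # needs a lower threshold
}
per_group = enforce_min_rate(scores)
```

Note the design choice this sketch encodes: group_a's outcomes are untouched, and only group_b's threshold moves, which is what distinguishes leveling up from parity metrics that can be satisfied by making every group worse off.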


I also look at traditional approaches to fairness in machine learning (group fairness & individual fairness), as well as less common approaches (such as the concept of leveling up) and some widely used tools.

5/

1 month ago 40 5 1 0

Synthetic collateralized debt obligation vibes

1 month ago 0 0 0 0

I once waited 4 months for a desk rejection 🤡

Another time I waited 2 months for them to tell me a paper was unsubmitted because I got the citation style off

Sometimes I wonder if perishing is the more favorable option compared to publishing 😂

1 month ago 0 1 1 0
Why I may ‘hire’ AI instead of a graduate student “It can competently perform a lot of the work I need immediately,” this professor writes

Having a real hard time not being filled with nihilism about working in/for higher education when this is increasingly the attitude espoused by "successful" professors.

What a time to be entering the job market

www.science.org/content/arti...

1 month ago 0 0 0 0


COMMENT · 02 February 2026

Does AI already have human-level intelligence? The evidence is clear
The vision of human-level machine intelligence laid out by Alan Turing in the 1950s is now a reality. Eyes unclouded by dread or hype will help us to prepare for what comes next.

Nature published another pile of trash

i am trying to catch up on some of my reading but this one is getting under my skin so here’s a thread highlighting why this piece is either ill-informed or intentionally ignorant of a wealth of knowledge from embodied cog sci and related fields

1/

1 month ago 809 299 4 52

I am old enough to remember the vitriol directed at random humanities scholars' work complaining about the totalizing tendencies of capitalism to corrupt, colonize, or co-opt everything. I also lived long enough to read long term coverage of the literal attempt to privatize thinking in the newspaper

1 month ago 2115 524 26 5