"We propose a fundamental change in what counts: researchers should be assessed not by journal publications or journal-based metrics, but by the outputs they have publicly shared themselves..."
HHMI's pitch for a healthier research assessment paradigm 🚀
zenodo.org/records/1960...
Posts by Michele Avissar-Whiting
I think I'm not old enough to be watching Euphoria.
"Maybe truly revolutionary theories must follow that trajectory. If a scientific idea is young and it’s not cringe, it probably has no promise. But if it’s old and it’s still cringe, it probably has no merit."
www.experimental-history.com/p/nothing-ev...
I didn't know this was a thing, but it just so happens I procured some mantis eggs yesterday and am now waiting for them to hatch!
My favorite part is that “NIH FFS” is one comma away from capturing nearly every scientist’s sentiments about this.
Nature Publishing Group finding more dastardly ways to lock us in, well beyond what I complained about in my letter declining reviewing for them
I winced at that bit as well. But this article is clearly aimed at publishers, so it doesn’t surprise me.
This article was shared with me after I read the other one - I agree that it’s better 😁, but the two make a lot of the same points.
This is hands down the best article I've read on this topic: the imperative to transcend the superficiality of the pdf for research outputs.
"This is the bridge from search to understanding: from “find me a paper about X” to “help me reason whether X applies in context Y.”"
GLP-1 agonists (e.g., Ozempic) are showing promise for reducing alcohol drinking and perhaps opioid use. This is, of course, a potential therapeutic success arising from the random walk of science, not from an "efficient" directed research program.
Preprints of pandemic potential - new historical piece from me on the history of bioRxiv/medRxiv, their role in the pandemic, and the way forward. 1/n journals.asm.org/doi/10.1128/...
Did slashing multiple vaccines from the childhood vaccine schedule bring the US in line with other countries? In a word, no.
The US now recommends all kids be protected against fewer diseases than South Korea, Israel, Saudi Arabia, Taiwan & many more. www.statnews.com/2026/01/09/c...
"[Our current ecosystem] treats shared infrastructure like a free beer rather than a free puppy."
An important read on the dangers of open purism, though I'm not sure I agree with the ultimate conclusion. Maybe creative solutions will emerge.
rosalynmetz.substack.com/p/openness-h...
Bumper sticker that reads "How am I driving? How does an engine even work? How can a loving god cause such agony"
I rarely appreciate a bumper sticker...
haha, yes - the latter category are the ones that should probably be retracted. Everything in between is debatable.
(of course LLMs do not need such binaries/discrete outputs, but that's where you start getting into the problems with language interpretability, hallucination, sycophancy etc...all of which may be solved by Spring)
What we ultimately need is something more like a discourse graph with nodes for claims that are either successfully built on or are "dead ends". Done properly, this would both allow for nuance (multiple claims/article) but still provide binaries to satisfy machine readability.
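A discourse graph like the one described above could be sketched roughly as follows. This is a minimal illustration, not an existing system; all class and field names here are hypothetical, chosen to show how per-claim nodes can carry both nuance (multiple claims per article) and the discrete "built on" / "dead end" statuses that machines can consume.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    # Discrete, machine-readable outcomes for a claim (hypothetical labels)
    BUILT_ON = "built_on"   # later work successfully builds on this claim
    DEAD_END = "dead_end"   # no successful follow-up
    OPEN = "open"           # not yet resolved either way

@dataclass
class Claim:
    claim_id: str
    text: str
    article_doi: str   # one article may contribute several claims
    status: Status = Status.OPEN
    supported_by: list = field(default_factory=list)  # ids of claims that build on this one

class DiscourseGraph:
    def __init__(self):
        self.claims = {}

    def add_claim(self, claim):
        self.claims[claim.claim_id] = claim

    def link(self, base_id, building_id):
        """Record that `building_id` successfully builds on `base_id`."""
        base = self.claims[base_id]
        base.supported_by.append(building_id)
        base.status = Status.BUILT_ON

    def mark_dead_end(self, claim_id):
        self.claims[claim_id].status = Status.DEAD_END
```

In this sketch, an article is not retracted or upheld wholesale; each of its claims is tracked independently, so a paper can contain one dead-end claim alongside others that the literature built on.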
As others in the thread have said, almost everything is "wrong" on some level in the fullness of time. That suggests to me that retraction is not the best implement for correcting the record, not least because it doesn't allow for nuance.
Well, the retraction guy (Ivan Oransky) says at minimum, it should be an order of magnitude higher. That is just based on the minimum proportion likely to be fake or fraudulent. I buy that. The question of what else *should* be retracted is a harder one with a slippery slope problem.
petition to make the entire winter as liminal as the space between Christmas and New Year's.
Damn, Wendy’s
Zoe Weissman - survivor of 2018 Marjory Stoneman Douglas High School shooting in Parkland, FL
Mia Tretta - survivor of 2019 Saugus High School shooting in Santa Clarita, CA
Both are now students at Brown University in Providence, RI.
My reflexive reaction to this is also "ick". But for the sake of argument, if the LLM's alignment with human decisions could get to 95% - including alignment on stated rationale - would people still insist that this should not be used as a screening tool, even as a first pass?
"...if they are not transparent about what criteria they are feeding into the AI, there will be a backlash from researchers." Like there is full transparency on the criteria being used by the fickle, moody humans currently calling the shots?
I remember when, in 2021, a Fox correspondent compared Fauci to Mengele…same revolting energy.
"Unless research evaluation systems are reformed, even the highest-quality new non-profit journals will face difficulties competing with top-ranking journals in terms of citation metrics and academic prestige."
"Reforms should be implemented in a coordinated and collective manner; otherwise, [those] who depart from journal-based metrics may risk a decline in international rankings, thereby reducing their competitiveness [...]."
Ay, there's the rub
utppublishing.com/doi/10.3138/...
Two posts from Bluesky. The first shows a figure from a paper published in Nature Scientific Reports that is full of incoherent, AI-fabricated gibberish. The second shows a comment on a recently published eLife paper, discussing the paper and the peer reviews that were published alongside it.
Nature Sci Rep publishes incoherent AI slop. eLife publishes a paper the reviewers didn't agree with, making all the comments and responses public with thoughtful commentary. One of these journals got delisted by Web of Science over quality concerns about its peer review. Guess which one?
Image screening is going to fail. We need audit trails for data provenance.