Posts by Lindsay Ayearst, PhD
Thanks for sharing!
Thank you for supporting and highlighting our work!
There’s a narrow runway for those of us creating shams to test digital therapeutics: too shammy and it unblinds, not shammy enough and it becomes a therapeutic.
There might be another way…
blogs.bmj.com/medical-ethi...
Happy to share our new commentary in Journal of Medical Ethics: Practical Bioethics on placebo (“sham”) controls in digital therapeutics (DTx).
jmepb.bmj.com/content/2/1/...
CBC featured the SuperAging Research Initiative led by #WesternU professor Angela Roberts.
"What superagers are teaching us is, social interaction is not just important, it's really important in later age."
www.cbc.ca/news/health/...
If our theories have to do with people's daily experience then we really need to know how well we're measuring daily experience.
So we need more of what Kevin's team is doing here! Just because your item has the right words in it doesn't mean you know what it measures, or how well.
Excited to share a new call for papers for a special issue in Psychometrika focused on Data Intensive Methods in Psychometrics that I'll be guest editing with @kyliegorney.bsky.social, @jmbh.bsky.social, @leonievogelsmeier.bsky.social, and Ben Domingue: www.psychometricsociety.org/post/call-sp...
This is a great paper on scale norming that also presents a really nice discussion of how people come up with their responses to surveys, and how that can change over time. One read of it has me wanting to put it into my assessment syllabus already.
Shared on LinkedIn to increase reach - hope you don’t mind.
Really happy to see work out from our first #MITNB workshop in 2024! The paper below suggests that people differ widely in how they fill in the PHQ-9. Troubling when it's such a standard instrument to screen for depression...
Wanna know more about the consortium? Check it out www.mitnb.org
Can’t wait to read this! 📌
Lest we forget
Photo credit: www.davidtopping.ca
Guess I am one of the 🤡. I just posted about a lot of AI-slop in reviews lately, where the recommendations for revision are completely unrelated to the manuscript. Not sure why these reviews are being passed on to authors.
Also - any issues (copyright) with uploading MS to LLMs for review?
As a community, we can and must do better. Peer-reviewed publications are still the gold standard, and we all play a role in keeping them that way.
Anyone else seeing a lot of this?
Also a shoutout to the Editors who should be watching for this slop and not passing it along to authors.
End rant
Reviewers: I'd much rather you decline than submit slop. We owe it to the field and to our peers to uphold the value of peer review. Low-quality, AI-generated reviews do a disservice to everyone: authors, journals, and the credibility of science itself.
Have also seen it in the decision letters returned where reviewer comments (not just my own) are included. The slop is a waste of time in a process that is already too often too long.
The comments are often completely irrelevant to the manuscript. I recently had to write a response-to-reviewer comments, stating “not applicable” as my reply. It's frustrating, unhelpful, and frankly disrespectful to the scientific process.
Anyone else seeing an alarming amount of AI-slop in reviews? Particularly in digital health journals?
It seems clear manuscripts are being uploaded into large language models (yes, copyright issues aside) and the resulting AI-generated feedback is submitted as if it were a human review.
Will say that when I was an AE, if it was over 2 months I would review myself as long as there was at least one other reviewer. Was way too much work - but at least the authors got a decision without waiting another 2 months.
Hate to complain because, as a past AE, I know how hard it is to find reviewers, but recently it seems time to first decision is about 3-4 months.
Others finding this?
We all complain about the system - but this is truly getting to be ridiculous.
Feeling feisty today.
Having acted as AE for 2 different journals, I know how hard it is to find reviewers, so I hate declining invitations - but have had to decline a few times this month due to the influx!
📱EMA is increasingly used in intervention studies to acquire a more fine-grained and ecologically valid assessment of change. But EMA is relatively burdensome. What's the added value? We tried to address this question in our new paper now out @jmirpub.bsky.social www.jmir.org/2025/1/e69297 1/n
Same - mine just went up yesterday. But compared to peer review process - this was fantastic. Very grateful. Curious - does anyone know how many went back up (approved) vs permanently removed because they were garbage?
Taking EMA to a new level! #MeasurementIsTheNewBlack
Interesting topic - a few highlights?
APS's journal AMPPS has accepted its first manuscript through a collaboration with a nonprofit that offers decentralized, community-driven peer review. @dsbarra.bsky.social
For me, the future of healthcare is community-first, digital, and value-based (incentivized). Does anyone know of examples of value-based systems for mental healthcare that are working?