New preprint exploring how conceptual representations can be decoded in people with aphasia using fMRI. Grateful for an exciting collaboration with @jerrytang.bsky.social, @alexanderhuth.bsky.social, @smwilson.bsky.social!
Posts by Rob Cavanaugh
For most of my tables, flextable is just a means to go between a dataframe and {officer}. Any summarizing is done before converting to flextable. Also, I confess that I often do the last 5% of formatting manually because I'm too impatient/lazy to perfect the code or look up how to center something.
gist.github.com/rbcavanaugh/...
flextable + officer = pretty painless
If you combine this with @easystats.github.io functions in parameters/insight, model results don't need much if any cleanup either.
Right! I'm just noting that the correlation between fixed effects in the lme4 output (which I admittedly routinely ignore) is also an estimate of the attenuated correlation (the weighted means of the two levels of the measure).
Am I correct in thinking that the lmer correlation of fixed effects essentially recovers the sample mean correlation, and that we can backwards-estimate the reliability of the measure from the ratio of the random-effects variance to the residual variance? Both converge with the simulation parameters.
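A minimal sketch of the back-of-the-envelope reliability calculation described above, assuming the usual intraclass-correlation form (between-group variance over total variance). The function name and the variance values are illustrative assumptions, not taken from the thread:

```python
def reliability_from_variances(var_random: float, var_residual: float) -> float:
    """Estimate measure reliability as an intraclass correlation:
    the share of total variance attributable to the grouping factor."""
    return var_random / (var_random + var_residual)

# Illustrative values: between-person variance 0.8, residual variance 0.2
# (assumed numbers, not from the thread).
print(reliability_from_variances(0.8, 0.2))  # 0.8
```

In lme4 terms, the two variances would come from the random-intercept and residual rows of the model's variance components table.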
And this is the continuous normal case. Imagine if someone were to say... take sum scores of likert items or a percent accuracy of binary responses.
Accessible materials on this topic would be so useful for students and for reviewers not familiar with Bayes (addressing common misconceptions of what priors are and are not). I see this frequently in peer review as both an author and an editor.
The 2026 SMaRT Workshops schedule is officially LIVE, with workshops covering SEM, MLM, dyadic methods, time series, machine learning, clinical trials design, and more.
STATS NERD SUMMER is HERE.
Come learn something.
smart-workshops.com/workshops
Please share and RP!
More info below.
1/n
New Blog Post!
I wrote my very first blog post.
This post is for quantitative researchers who work with negatively worded Likert-scale items and haven't heard that these items can cause problems. I outline the key issues, describe alternatives, and recommend literature.
yannicmeier.de/2026/03/03/w...
Psychology adjacent here, but Google Scholar searches index article bodies; I've had some success searching something like "favorite journal name(s)" AND "lme4" AND "osf.io" AND "randomized"
[image: course schedule as a table; available at the link in the post]
I'm teaching Statistical Rethinking again starting Jan 2026. This time with live lectures, divided into Beginner and Experienced sections. Will be a lot more work for me, but I hope much better for students.
I will record lectures & all will be found at this link: github.com/rmcelreath/s...
Numerically. The same percentage-point difference at baseline becomes very exaggerated in POMP terms as baseline scores improve.
Right. 16.7 vs 20 in POMP terms even with the exact same percentage-point gain. More exaggerated at the tails too. I'm skeptical that requiring those with worse baseline scores to improve more in percentage-point terms to achieve the same POMP scores is a desirable measurement property in most circumstances.
Doesn't POMP potentially conflate differences in baseline ratings with group differences? Both groups could show similar improvements on the ordinal scale but quite different POMP scores if they start with different satisfaction ratings.
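The arithmetic behind the numbers in this exchange can be sketched as follows, assuming a POMP-style change score computed as the raw gain divided by the maximum possible improvement on a 0-100 scale. The helper `pomp_gain` and the specific baselines are illustrative assumptions, not from the thread:

```python
def pomp_gain(baseline: float, followup: float, maximum: float = 100.0) -> float:
    """Percent of maximum possible improvement: the raw gain scaled by
    the room left between the baseline score and the scale maximum."""
    return 100.0 * (followup - baseline) / (maximum - baseline)

# The same 10-percentage-point gain from two different baselines:
print(round(pomp_gain(40, 50), 1))  # 16.7
print(round(pomp_gain(50, 60), 1))  # 20.0
```

Under this formulation, a group starting lower must gain more raw points to earn the same POMP change, which is the measurement property being questioned above.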
Or at least that using a linear model on an ordinal outcome risks mis-specifying the difference between men and women if the variances of their sleep satisfaction also differ.
"The gang goes to city hall," in which the gang competes to fix a clerical error with the city. Mac and Dennis try to resolve the issue amicably at city hall. Dee tries secretly dating an officer of the liquor control board. Charlie and Frank hatch a plan to get Frank elected mayor.
For those unfamiliar: adding this fantastic recorded lecture on the topic from John Kruschke. media.dlib.indiana.edu/media_object...
Folks who teach stats to graduate students in applied fields - do you discuss ordinal methods in depth? The Liddell and Kruschke paper? (Analyzing ordinal data with metric models: What could possibly go wrong?)
What do you recommend to students who often use ordinal outcomes? #statssky
oh that is slick!
I know it's often not identifiable and challenging to fit, but I get very nervous about the exclusion of the time|id random slope in these models, based on the 2013 Barr paper.
[image: code with BLUPs]
[image: code output]
Oh you know I assumed you were plotting the RE estimates like this. If it's just the observed data, probably min/max if there are few estimates per group and Q1/Q3 if many. You could probably even do tiny box plots if you didn't have too many groups.
I think to some extent the knee-jerk reaction against the strong claim in the paper is due to the muddiness that (unfortunately) exists between prediction and causal claims. "Who is most at risk," as you state, vs. why.
If they were bars it would be a caterpillar plot, right? What about "blupergram"? Has a nice ring to it.
A "methods primer" article in the journal "BMJ Medicine", titled "Factors associated with: problems of using exploratory multivariable regression to identify causal risk factors"
We wrote an article explaining why you shouldn't put several variables into a regression model and report which are statistically significant - even as exploratory research. bmjmedicine.bmj.com/content/4/1/.... How did we do?
Pretty sure this is one of those sexy offers two very smart podcasters told me to run away from. So I'm going to say maybe.
Love it! Will you be sharing data? (You know… for those of us teaching stats to CSD PhD students struggling to find cool and salient datasets.)
Monty Python understood p-hacking