
Posts by Benjamin Kunc

Cool poster!

If the effect is robust, could you calculate the financial costs of increasing the statistical power in ESM studies by 1%? (And then perhaps create a little interactive ✨ Shiny app ✨ so researchers could calculate it themselves?)

6 months ago
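
The marginal-cost idea above can be sketched numerically. A minimal back-of-the-envelope example, assuming the target effect is a person-level correlation tested via the Fisher z approximation; the effect size, alpha, and per-participant cost are all hypothetical placeholders, not figures from the poster:

```python
# Hypothetical sketch: marginal cost of one extra percentage point of power,
# assuming the effect of interest is a person-level correlation and power
# follows the Fisher z approximation. All numbers below are made up.
from math import atanh, sqrt
from statistics import NormalDist

norm = NormalDist()

def power(n, r=0.2, alpha=0.05):
    """Approximate power to detect a correlation r with n participants
    (Fisher z approximation, two-sided test)."""
    z_crit = norm.inv_cdf(1 - alpha / 2)
    return 1 - norm.cdf(z_crit - atanh(r) * sqrt(n - 3))

def n_for_power(target, **kwargs):
    """Smallest n reaching the target power."""
    n = 4
    while power(n, **kwargs) < target:
        n += 1
    return n

COST_PER_PARTICIPANT = 50  # hypothetical all-in cost per participant (EUR)

n80 = n_for_power(0.80)
n81 = n_for_power(0.81)
extra_cost = (n81 - n80) * COST_PER_PARTICIPANT
print(f"80% power: n={n80}; 81% power: n={n81}; marginal cost: {extra_cost} EUR")
```

A real ESM power analysis would also have to account for the number of beeps per person and the multilevel error structure; this only illustrates the kind of cost curve a Shiny-style app could expose interactively.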

Is there any documented procedure for implementing this in a multilevel CFA?

6 months ago

If anyone has any ideas how the items could be improved, please let us know.

I'm not planning on improving them, but someone else might 🙃

6 months ago

Sorry...except for the momentary quality of online solitude. That scale doesn't work at all :)

6 months ago

Our interpretation is that the scales aren't completely useless, but they need revision if anyone wants to use them.

See the preprint yourself: osf.io/preprints/ps...

6 months ago

When we assessed the scales' measurement invariance across several groups, we found that none of the scales functions equivalently across the tested subpopulations.

6 months ago
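
A crude illustration of why non-invariance matters (not the preprint's method, which would use multigroup models with constrained loadings): estimate a one-factor loading pattern separately in two groups and compare, with the first principal component standing in for the factor. All data and group differences here are simulated.

```python
# Simulated illustration of measurement non-invariance: one item loads
# differently in group B, so the groups' loading patterns diverge.
# The first principal component is a crude stand-in for a CFA factor.
import numpy as np

rng = np.random.default_rng(1)
n, n_items = 500, 4

def simulate(loadings):
    """Generate item scores from a single latent factor plus noise."""
    factor = rng.normal(size=(n, 1))
    return factor * loadings + rng.normal(0, 0.5, size=(n, n_items))

group_a = simulate(np.array([0.8, 0.8, 0.8, 0.8]))
group_b = simulate(np.array([0.8, 0.8, 0.2, 0.8]))  # item 3 behaves differently

def pc_loadings(x):
    """First principal component of the centered data, sign-fixed."""
    x = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    v = vt[0]
    return v * np.sign(v.sum())

print("group A loadings:", np.round(pc_loadings(group_a), 2))
print("group B loadings:", np.round(pc_loadings(group_b), 2))
```

Comparing the same scale's loading pattern across groups like this is only a heuristic; formal invariance testing constrains parameters stepwise (configural, metric, scalar) and compares model fit.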

We used multilevel confirmatory factor analyses first on the whole sample (N = 1,913 adolescents), which yielded positive-looking results. However...

6 months ago
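
As a sketch of the multilevel logic behind such an analysis (on simulated data, not the N = 1,913 sample): the first step is decomposing each item's variance into between-person and within-person parts, i.e. the intraclass correlation.

```python
# Minimal sketch of the variance decomposition underlying a multilevel CFA
# on ESM data: between-person vs. within-person variance for one item.
# The data are simulated; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_beeps = 200, 30

# Simulate one ESM item: stable person-level means plus momentary noise.
person_mean = rng.normal(3.0, 0.8, size=n_persons)            # between-person
scores = person_mean[:, None] + rng.normal(0, 1.0, size=(n_persons, n_beeps))

between_var = scores.mean(axis=1).var(ddof=1)  # variance of person means
# Mean squared deviation from each person's own mean (slightly biased,
# fine for a sketch).
within_var = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).mean()

icc = between_var / (between_var + within_var)
print(f"between={between_var:.2f}, within={within_var:.2f}, ICC≈{icc:.2f}")
```

A multilevel CFA then fits separate factor structures to the between-person and within-person covariance matrices, rather than pooling all observations.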

We assessed the structural validity of four ESM scales measuring: the quality of current social company, the quality of current online company, the quality of in-person solitude, and (!) the quality of online solitude.

6 months ago

Are you an ESM item measuring the quality of social experiences looking for validation? 👀

Because if you are, I've got some bad news for you. Our new preprint is out: osf.io/preprints/ps...

@gudruneisele.bsky.social @ginettelafit.bsky.social @lisapeeters.bsky.social @oliviajkirtley.bsky.social

6 months ago
Farewell, dear psych
I wish you all the best. But I really need to go.

For anyone interested in reading it, here is the link:
benjaminkunc.substack.com/p/farewell-d...

I'd be glad to hear any thoughts you might have. Enjoy!

7 months ago

Later, I found myself coming back to some of the thoughts I'd had about the current state of psychology and its metascience. Since I wanted the post to serve as a conclusion to my psychological journey, I felt I needed to write it all down.

7 months ago

When I attempted to write down the reasoning behind it, I realized it was too long for a regular LinkedIn post (or even a Bluesky thread!). As a result, I ended up with a full Substack post consisting of two main parts. The first part is about (you guessed it) why I dropped the PhD.

7 months ago

The decision to drop my PhD might be surprising to some of you who weren't lucky enough to run away before I started rambling about methodology and psychological metascience.

7 months ago

First of all, I want to thank Olivia J Kirtley, Gudrun Eisele, and Ginette Lafit for the invaluable supervision and advice throughout my PhD, and the whole Centre for Contextual Psychiatry for the opportunity to work with such amazing colleagues.

7 months ago

Farewell, dear psych

This year, I have made the difficult decision to end my PhD project, move from Belgium back to Czechia, and take an indeterminate break from psychology.

7 months ago

"Altogether, these findings point to the strength of most contemporary psychological research and suggest academic incentives have begun to promote such research. However, there remain key questions about the extent to which robustness is truly valued compared with other research aspects."

10 months ago

It's nice to find this post a few moments after discussing paper mills, academic incentives, and peer review.

10 months ago

Wow. The correlation of replication success with IF seems to be positive, while it's negative for citations. That's the opposite of what I expected.

1 year ago
Reproducibility made easy: Open resources for all disciplines
An interactive workshop by KU Leuven ReproducibiliTea, designed to guide researchers in exploring and discussing cross-disciplinary open science resources to enhance research reproducibility.

Join our workshop "Reproducibility Made Easy: Open Resources for All Disciplines"! 🧪🔬

📅 Date: 6th May 📍 KU Leuven Open Science Day

Dive into cross-disciplinary #openscience resources and boost your research #reproducibility!

Sign up: tinyurl.com/tfs5n7kb
Learn more: tinyurl.com/28pcr324

1 year ago

IMHO, many researchers (implicitly) assume that positive results imply a successful measurement process, leading to the intuition that a thorough validation is unnecessary in such cases.

This would be reasonable if we could trust our findings, which doesn't seem to be the case. 4/4

1 year ago

There's also a slightly edgy take on finding positive results and the criterion validity of the scales used. 3/4

1 year ago

One of the points is that if one is about to commit factor analysis, it's best to first check the validity evidence based on content and response processes. Otherwise, one could end up with compelling, yet meaningless, statistical results. 2/4

1 year ago

👀Blogpost on measurement👀

I would have rather concluded that we don't really need factor analysis and can just rely on vibes (or previous literature). But here we are: "Factor analysis: Overrated, Misused, But Still Useful." 1/4

1 year ago

The first preprint from my PhD is out: osf.io/preprints/ps...! 🥳

We explored the temporal dynamics of four careless responding indicators (response time, within-beep standard deviation, an inconsistency index, occasion-person correlation) in ESM data across different samples.

Thread below🧵

1 year ago
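
Two of the four indicators named above are straightforward to compute; a minimal sketch on simulated data (the participant, variable names, and flagging rule are illustrative, not the preprint's pipeline):

```python
# Sketch of two careless-responding indicators on simulated ESM data:
# the within-beep standard deviation (zero or very low values can flag
# straightlining) and the occasion-person correlation (each beep's
# responses correlated with that person's average response profile).
import numpy as np

rng = np.random.default_rng(42)
n_beeps, n_items = 40, 8

# One simulated participant: mostly varied beeps, two straightlined ones.
responses = rng.integers(1, 8, size=(n_beeps, n_items)).astype(float)
responses[5] = 4.0   # straightlined beep: identical answer to every item
responses[17] = 1.0  # another straightlined beep

within_beep_sd = responses.std(axis=1, ddof=1)

person_profile = responses.mean(axis=0)  # average response per item
occ_person_r = np.array([
    np.corrcoef(beep, person_profile)[0, 1] if beep.std() > 0 else np.nan
    for beep in responses
])

flagged = np.where(within_beep_sd == 0)[0]
print("zero-variance beeps:", flagged)
```

The remaining two indicators from the post (response time and an inconsistency index) need additional data streams (timestamps, semantically paired items), so they are omitted here.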

People have already blamed science reform for what is happening.

For 15 years I have said: If we do not get our shit together (less publication bias, higher quality, more coordination) someone else is going to implement change top down, and we are not going to like how they do it.

And here we are.

1 year ago

We're thrilled to introduce the Reproducibility Leuven Journal Club to Bluesky!

Our mission is to foster discussions on #reproducibility in research.

Stay tuned for more info about our upcoming journal clubs.

Let's build a vibrant #OpenScience community together!

1 year ago

Super cool project! Workshops on applied LaTeX, multilevel SEM, using web APIs, and more - a perfect toolkit for quantitative social science researchers

1 year ago

mic drop

1 year ago

Commonly understood theories can be properly discussed and allow for rigorous measurement, both of which I think are necessary for building a cumulative psych science.

If you're interested in this, I definitely recommend both papers!

link.springer.com/article/10.1...

psycnet.apa.org/doiLanding?d... 5/5

1 year ago

However, economics shows that formalization can make our assumptions about complex systems explicit, and constrain our reasoning.

This can be particularly useful for establishing a common understanding and reducing the omnipresent psychological vagueness. 4/5

1 year ago