
Posts by Karoline Huth

APS Editorial Fellowship Program: Call for Applications. Deadline: February 6, 2026. APS is pleased to announce that the application process for the next cohort of the Editorial Fellowship Program (EFP) is now open. This program aims to increase opportunities for ...

Call for Applications! APS's Editorial Fellowship Program (EFP) is now OPEN #AcademicSky

📝 Evaluate submitted manuscripts
🔍 Select and invite reviewers
🤝 Receive mentorship across five APS journals
💵 US $1,000 stipend

Learn more & apply by February 6: www.psychologicalscience.org/publications...

3 months ago
Announcement that the applications are open for the third Amsterdam Complexity School on Climate Change, with speakers including Clare Farrell, Ben Franta, Julia Steinberger, Vítor Vasconcelos, and Rachel Donald, with more to come!

We are beyond excited to announce that the applications are now open for the third Amsterdam Complexity School on Climate Change!

Come visit a beautiful city, hear from world-renowned experts, and work with passionate individuals on challenges related to climate change.

More info: acscc.nl

5 months ago

I don't mind the statement so much as the parallels to the 2017 paper; our paper makes a different point that contradicts your paper. Hence my reference to "non-replicability".

Our paper was meant as a scientific evaluation of the evidence in highly parameterized models.

6 months ago

One thought on networks as "methodological dead ends": for me, networks are just one of many methods useful for some of the research questions psychologists have. I agree they've been overused for the wrong questions and often over-interpreted, but it's not the network's fault that researchers used it wrongly.

6 months ago

I assume networks are in good company with other highly parameterized models in psychology, such as SEMs, when it comes to the amount of uncertainty present in the findings.

In psychology we mostly have (had) too little data for the large models we estimate (e.g., SEMs, networks).

6 months ago

Appreciate you picking up our work. I share many critiques of networks, but I don’t think our paper supports your argument. We show most edges are inconclusive.

Non-replication requires conclusive evidence for an edge: it is present in network A but not in B. Inconclusive edges can't establish (non-)replication (for me)

6 months ago

Happy to share that our large-scale network analysis is now out in @nathumbehav.nature.com

We show that networks are often supported by too little evidence from the data for results to be reported with confidence. This does not mean the results are flawed, but it does suggest caution in interpretation.

6 months ago

Such a great app and tool! What is your reasoning for still showing an edge between two nodes even when someone indicates they don't think there is a connection? Can I indicate that a link is definitely not there?

1 year ago

that would require the papers to have a testable research question 🙊

also, happy to give you access to our documents to assess your guess

1 year ago

I can see that partial correlations are more prone to differences because you condition on a set of variables (and if that set differs between two samples, the partial correlations can also differ). But a zero-order correlation and a partial correlation have the same number of parameters, so I would expect the same robustness.
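A minimal NumPy sketch (toy data, not from any study discussed here) of the first point: the partial correlation of X and Y given Z can be near zero even when their zero-order correlation is substantial, so the result depends on what you condition on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a common cause Z drives both X and Y.
n = 5000
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = z + rng.normal(size=n)
data = np.column_stack([x, y, z])

# Zero-order (marginal) correlation between X and Y.
zero_order = np.corrcoef(x, y)[0, 1]

# Partial correlation of X and Y given Z, from the precision
# (inverse covariance) matrix: r_xy|z = -P_xy / sqrt(P_xx * P_yy).
precision = np.linalg.inv(np.cov(data, rowvar=False))
partial = -precision[0, 1] / np.sqrt(precision[0, 0] * precision[1, 1])

print(f"zero-order r(X, Y)  = {zero_order:.2f}")  # substantial (about 0.5 here)
print(f"partial r(X, Y | Z) = {partial:.2f}")     # near zero once Z is conditioned on
```

Dropping Z from the conditioning set would leave the estimate close to the zero-order correlation, which is the sense in which partial correlations depend on which variables are included.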

1 year ago

Interesting thought. For me, robustness of findings is a necessary condition for determining (non-)replication.

1) robustness (in this paper): sufficient support from the data that my findings hold.
2) non-replication: there is sufficient evidence in both samples A and B, and the edge is present in A but absent in B.
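A hypothetical sketch of how these two conditions combine, using inclusion Bayes factors per sample; the function and the threshold of 10 are illustrative choices of mine, not from the paper:

```python
def replication_status(bf_inclusion_a, bf_inclusion_b, threshold=10.0):
    """Classify an edge across two samples from inclusion Bayes factors.

    BF > threshold     -> conclusive evidence the edge is present
    BF < 1/threshold   -> conclusive evidence the edge is absent
    otherwise          -> inconclusive: robustness fails, no claim possible

    Hypothetical helper; the cutoff of 10 is just an illustrative choice.
    """
    def verdict(bf):
        if bf > threshold:
            return "present"
        if bf < 1.0 / threshold:
            return "absent"
        return "inconclusive"

    a, b = verdict(bf_inclusion_a), verdict(bf_inclusion_b)
    if "inconclusive" in (a, b):
        return "no claim possible"  # condition 1 (robustness) not met
    return "replication" if a == b else "non-replication"

print(replication_status(25.0, 30.0))  # replication
print(replication_status(25.0, 0.05))  # non-replication
print(replication_status(25.0, 2.0))   # no claim possible
```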

1 year ago

And yes, I am also super curious about the uncertainty underlying reported individual-level networks 🤓

1 year ago

"[...], if an edge is present in one sample, but not in another, and we have inconclusive evidence in at least one of the samples, this does not mean that there is a contradiction [...]" (p9) We simply have insufficient information in at least one sample. With more data, both edges may turn out to be present.

1 year ago

Thanks for the kind thread Miri! To clarify the last point: we fully agree with you that there are/were concerns about the robustness of the network literature. The difference (as I see it) is that we attribute it to insufficient information (data) rather than to an inherent property of the networks.

1 year ago

Thankful for... 🙏
...all the researchers providing access and input to their data
...the dedicated assistants and colleagues that helped with data collection and cleaning
...everyone providing helpful input and calming words during the extensive project 🙏🧡 /end

1 year ago

Are you an applied researcher interested in understanding your phenomenon from a network perspective? Use our website to gain insight into previous studies for potential meta-networks, or into the nodes/questionnaires commonly included.

1 year ago

All results are available in an accompanying open-access website uvasobe.shinyapps.io/ReBayesed/

Methodologist interested in methodology development? Use our resource of aggregated statistics for realistic simulation conditions (i.e., network density and expected edge weights).

1 year ago

What to do with...
...past network studies: Interpret their findings with caution and ideally aggregate them into a meta-network
...future network studies: Conduct a Bayesian analysis of your network, so you are at least aware of how (un)certain your results are. See how: doi.org/10.1177/2515...

1 year ago

Our results do not imply a criticism of network models in general, but rather point out the inherent uncertainty underlying highly parameterized models estimated on commonly insufficient sample sizes.

1 year ago

This does not mean that most network results are flawed, but rather that most network findings are reported with more confidence than is warranted by the data. Many network results are overstated, and some may be incorrect (i.e., not hold up with further data).

1 year ago

80% of all edges in the analyzed networks lack sufficient data support to confirm their presence or absence. One-third show inconclusive evidence (BF < 3), half show weak evidence (BF 3–10), and fewer than 20% show compelling evidence (BF > 10).
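The binning behind these percentages can be sketched as follows (toy Bayes factors of my own; the cutoffs are the ones quoted in the post, taken as evidence strength for presence or absence):

```python
import numpy as np

def categorize_edges(bayes_factors):
    """Proportion of edges per evidence category: BF < 3 inconclusive,
    3 <= BF <= 10 weak, BF > 10 compelling. Illustrative sketch only."""
    bfs = np.asarray(bayes_factors, dtype=float)
    return {
        "inconclusive": float(np.mean(bfs < 3)),
        "weak": float(np.mean((bfs >= 3) & (bfs <= 10))),
        "compelling": float(np.mean(bfs > 10)),
    }

# Six made-up edges; two fall into each category here.
print(categorize_edges([1.2, 2.5, 4.0, 7.5, 15.0, 120.0]))
```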

1 year ago

Are psychometric networks sufficiently supported by data such that one can be confident when interpreting their results? We analysed 294 psychometric networks from 126 papers with a Bayesian approach to address this question @jmbh.bsky.social Sara Ruth van Holst @maartenmarsman.bsky.social 🧵

1 year ago

Eager to tackle challenges related to climate change?

Then apply for the upcoming Amsterdam Complexity School on Climate Change (ACSCC), hosted by the Institute for Advanced Study, which brings together early-career researchers from all disciplines and non-academic stakeholders.

Website: acscc.nl

1 year ago