🚨 Hiring in Munich 🇩🇪: 2 open-topic PhD positions in human & machine learning (TVöD E13 80%).
Start ~June 2026 (flexible). Deadline: March 2, 2026.
Apply/info: hcai-munich.com/PhD_Job_Ad.pdf
Reposts appreciated 🙏
Posts by Kristin Witte
Interested in a PhD looking at trajectories of repetitive negative thought using machine learning and computational modelling?
Take a look at our project in the @drivehealth.bsky.social portfolio and get in touch with any questions!
showcase.drive-health.org.uk/project/quan...
⚠️ Warning: Joining our team as a postdoc has a known side effect of receiving multiple job offers shortly after. We currently have 3 such cases.
If you are willing to take that risk, applications close in 1 week. The upside: you'll be in a supportive and stimulating environment in Munich.
Join us in beautiful Munich! If you have any questions about the city, the group or your application, feel free to reach out! ✨
Publication alert! Our latest paper with @kristinwitte.bsky.social and @ericschulz.bsky.social is out in Scientific Reports: rdcu.be/eydDQ! We explore whether model-based exploration strategies can be used to capture individual differences. Curious how cognitive models meet personality science?
Beyond thrilled that this work has now been published in Scientific Reports 🎉
rdcu.be/eydDQ
#CPConf2025 is a wrap - thanks to everyone who made this event so special! @unituebingen.bsky.social @tueneurocampus.bsky.social
Re-posting is appreciated: We have a fully funded PhD position in CMC lab @cmc-lab.bsky.social (at @tudresden_de). You can use forms.gle/qiAv5NZ871kv... to send your application and find more information. Deadline is April 30. Find more about CMC lab: cmclab.org and email me if you have questions.
Join us in Munich as a PhD student! I've been in this lab for about 4.5 years and still love seeing these people every day. Feel free to reach out if you have any questions.
🚀 Just 5 days since publication, our paper is already the #1 trending article on @natureportfolio.nature.com npj Digital Medicine! 🥳
🤖 Check out how we induced anxiety in ChatGPT using traumatic narratives - then calmed it down with mindfulness & meditation: www.nature.com/articles/s41... 📝
How does ChatGPT respond to emotional conversations? 🤖
Our latest research dives into AI, mental health, and human-AI interaction—sparking important discussions! 🙏
Now featured in Fortune Magazine @fortunemagazine.bsky.social 📰🔗👇
fortune.com/2025/03/09/o...
I'm hoping to hire a postdoc this year to join our growing lab at Yale! Looking for someone interested in EMA/digital phenotyping, formal theories, complex systems, & computational psychiatry.
If this sounds like you, please reach out! Official ad coming soon.
Thanks a lot! That is very interesting indeed. We also found that across all tasks the model-free switch probability (the probability of choosing a different option than on the previous trial) is more reliable and has greater convergent validity. Very happy to discuss this further.
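For readers unfamiliar with the measure: a minimal sketch (my own illustration, not the paper's code) of how a model-free switch probability can be computed from a sequence of choices.

```python
# Sketch: model-free switch probability = fraction of trials on which
# the chosen option differs from the choice on the previous trial.
def switch_probability(choices):
    """choices: sequence of chosen option indices, one per trial."""
    if len(choices) < 2:
        return 0.0  # no transitions to evaluate
    switches = sum(prev != curr for prev, curr in zip(choices, choices[1:]))
    return switches / (len(choices) - 1)

# Example: 2 switches across 5 transitions -> 0.4
print(switch_probability([0, 1, 1, 2, 2, 2]))
```

Because it needs no model fitting, this statistic is cheap to compute per participant and per task, which plausibly contributes to its reliability.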
This work was preregistered with the OSF, and all code and data are publicly available.
Preregistration: osf.io/cavj3
Data and code: osf.io/ra7su/
In sum, we show that simplified modelling and creating latent factors improves robustness. Still, extracted strategies are more linked to working memory than real-world exploration, raising questions about the validity of few-armed bandit tasks for studying exploration.
Our changes improved the convergence of exploration strategies across tasks. However, the latent factors were still not related to any self-reported measures of exploration or to any psychiatric constructs. They did, however, correlate strongly with working memory capacity.
We subsequently made two main changes to the analyses: 1) We simplified and unified the computational modelling for the Horizon task and the 2-armed bandit. 2) We constructed theoretically informed latent constructs for the exploration strategies across all tasks.
When using the standard modelling approaches, the test-retest reliability of all model parameters and most task measures was rather poor. We also found low correlations between model parameters from different tasks, despite these parameters measuring the same strategies.
We tested these requirements by collecting data on three bandit tasks, five questionnaires and three working memory tasks. We retested participants on all tasks after 6 weeks, to test for temporal stability.
This type of approach is very valuable. However, for treating individual differences in model parameters as cognitive traits, we need the model parameters to be:
🕐stable over time
🔗converging across tasks
📋related to real-world exploration
A growing number of studies use few-armed bandit tasks to test how people explore. Recently, an increasing body of research (including our own) has turned to individual differences in this behaviour and related model parameters to questionnaire measures of psychiatric traits.
Preprint alert! We examine 3 exploration tasks, testing whether they measure a stable construct and how it links to real-world exploration. We find that latent factors are more robust than single-task estimates.
With Mirko Thalmann & @ericschulz.bsky.social
🔗https://osf.io/preprints/psyarxiv/tzuey