
Posts by Ti-Fen Pan

Reward function compression facilitates goal-dependent reinforcement learning Reinforcement learning agents learn from rewards, but humans can uniquely assign value to novel, abstract outcomes in a goal-dependent manner. However, this flexibility is cognitively costly, making l...

📢 New preprint!
How do humans learn from arbitrary, abstract goals? We show that, when goal spaces can be compressed, costly working-memory processes give way to internalized reward functions, enabling efficient goal-dependent reinforcement learning. @annecollins.bsky.social arxiv.org/abs/2509.06810

7 months ago 59 24 2 1

✅ Works with tractable & intractable models
✅ Handles continuous & discrete latent spaces
✅ Applicable to real-world datasets

7 months ago 2 0 0 0

We tested our method on various cognitive models, including reinforcement learning models, Bayesian models, and GLM-HMMs. A collective effort with @drjingjing.bsky.social @wdt.bsky.social @annecollins.bsky.social

7 months ago 3 0 1 0
Latent variable sequence identification for cognitive models with neural network estimators - Behavior Research Methods Extracting time-varying latent variables from computational cognitive models plays a key role in uncovering the dynamic cognitive processes that drive behaviors. However, existing methods are limited ...

New paper out in Behavior Research Methods! We introduce a simulation-based method using RNNs to infer trial-varying latent variables from computational cognitive models.
Link: doi.org/10.3758/s134...
#ComputationalCognitiveModeling #SBI

7 months ago 28 8 1 1
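The paper's actual pipeline and architecture are not reproduced here, but the first step of a simulation-based approach like the one described above can be sketched: simulate a cognitive model with known parameters, recording both the observable behavior (choices, rewards) and the trial-by-trial latent variables (here, Q-values) that the recurrent network would later be trained to recover. The function name, model (a 2-armed-bandit Q-learner), and parameter values below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def simulate_q_learner(n_trials=200, alpha=0.3, beta=3.0, seed=0):
    """Illustrative data generator for simulation-based inference:
    simulate a 2-armed-bandit Q-learner and return the observable
    behavior (choices, rewards) alongside the trial-varying latent
    variables (Q-values) an estimator would be trained to recover."""
    rng = np.random.default_rng(seed)
    p_reward = np.array([0.8, 0.2])  # assumed fixed reward probabilities
    q = np.zeros(2)
    choices, rewards, latents = [], [], []
    for _ in range(n_trials):
        logits = beta * q
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()             # softmax choice policy
        c = rng.choice(2, p=probs)
        r = float(rng.random() < p_reward[c])
        latents.append(q.copy())         # latent state before the update
        q[c] += alpha * (r - q[c])       # delta-rule Q-value update
        choices.append(c)
        rewards.append(r)
    return np.array(choices), np.array(rewards), np.array(latents)

# (choices, rewards) -> latents pairs like these would form the
# training set for a recurrent estimator network.
choices, rewards, latents = simulate_q_learner()
```

Repeating this over many sampled parameter settings yields the (behavior sequence, latent trajectory) pairs on which an RNN estimator could be trained.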