And here's a movie of what the task looks like:
Posts by Marius 't Hart
This is based on earlier work with a semi-circle "track". This task is a bit more complicated and (hopefully) engaging. We expected to see an improvement in both speed and accuracy, but it seems people prioritize accuracy from the start, mostly (but not only) improving on speed.
Here is a sneak preview: deniseh.lab.yorku.ca/files/2026/0...
There is no learning here, but there is extended practice. People use a stylus to move a car along a race track. Raphael checks if people get better, but also has them come back the next day to see how they do on permutations of the track.
Raphael is presenting his poster in session 2 (P2-F-168): Practice makes precision: Motor execution improvements in a 2-D racing task are retained and generalized but depend on movement direction.
Help shape the future of Neuromatch courses. We're considering new courses and want to hear directly from our community - your input will guide what we build next!
What should Neuromatch teach next?
- #Connectomics and Neural Dynamics
- #ComputationalBehavioural Analysis & Modelling
- Computational Approaches to #Neurodegeneration
- Or something else?
Take the 2-minute survey here: airtable.com/appgbLQW3nbb...
#NeuromatchAcademy #OpenScience
We also just uploaded a pre-print here: doi.org/10.64898/202...
Here's a sneak preview: deniseh.lab.yorku.ca/files/2026/0...
Elysa Eliopulos presents her poster (P1-F-142) today and tomorrow: "Redefining Explicit Motor Adaptation into Three Phenotypes" We find 3 ways in which people develop a strategy to counter a visuomotor rotation, _and_ that the mix of strategies depends on rotation size.
Sneak preview: deniseh.lab.yorku.ca/files/2026/0...
This is part of a collaboration that aims to test if the posture of surgeons (heavy vests; bent backs) can be improved by giving feedback. We test if the vibrotactile feedback affects the performance of (the surgeons' skilled and) precise movements, so we're hoping for a null effect there.
@westcoastalice.bsky.social presents her poster (P1-B-13) "Vibrotactile stimulus detection during Fitts aiming: Implications for the use of biofeedback devices during skilled, manual tasks" the coming two days.
Sneak preview here: deniseh.lab.yorku.ca/files/2026/0...
Today and tomorrow at #NCM2026 Jacob will be presenting his poster "Learning to throw against the current: internal models of object-environment dynamics" (P1-F-150). How do people learn to account for a sideways current, when launching a ball to a target? VR and 2D solution spaces...
For those attending @ncmsociety.bsky.social this week, come check out posters and talks from the Physical Intelligence Lab! #NCM #MotorLearning #CognitiveNeuroscience
I should of course add the link to the preprint: www.biorxiv.org/content/10.6...
For now, please don't give feedback here, but at Elysa's poster: P1-F-142 "Redefining explicit adaptation into three phenotypes"
So: explicit strategy development cannot really be captured in a single function, and the development also depends on rotation size. The dependence on rotation size probably indicates that larger rotations are more noticeable, informing the cognitive process of strategy development.
We also saw that many people do not spontaneously express a strategy at all. The proportion of people expressing a strategy did decrease with rotation size, and so did the mix of ways in which people develop a strategy.
However, like Ding Wei and @tsay.bsky.social, we saw multiple ways for people to learn an explicit strategy: some did so incrementally, some did so by exploring far and wide in the solution space, and some did so in a rather sudden way. We call these the "gradual", "exploratory", and "stepwise" phenotypes.
Elysa will be presenting this work at #NCM2026 soon, so we thought to put up a first pre-print, a bit earlier than usual.
Previously we noticed that explicit strategies do not develop gradually, but in discrete steps. Here we wanted to test if that depended on rotation size.
Manuscript submitted: Leo DiCaprio looking young and healthy in Titanic
Manuscript accepted: Leo DiCaprio looking like he's at death's door in The Revenant
Just had a paper accepted after 4 rounds of revisions and *10* reviewers!
"Preprint servers are a time machine, they move everyone forward 12 months and speed up the exchange of ideas"
ht @pedrobeltrao.bsky.social www.evocellnet.com/2021/06/a-no...
"Frame Effects Across Space and Time" is published: doi.org/10.1167/jov....
The effect:
- extends a bit over space,
- not time,
- mostly depends on frame edge positions
- doesn't decrease with experience
Looks like vision uses references to localize objects in space.
Personally, I try to come up with sensible hard cutoffs. E.g. if a direction is more than 90 degrees off, you're not even close to going the right way, or if your RT is over 20 seconds, you were probably doing something else anyway.
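For what it's worth, cutoffs like these are a one-liner to apply. A minimal sketch in Python/pandas, with made-up column names (`direction_error` in degrees, `rt_ms` in milliseconds) and toy data, not the app's actual preprocessing:

```python
import pandas as pd

# Toy trial data: angular error of the initial reach direction (degrees)
# and reaction time (milliseconds). Column names are hypothetical.
trials = pd.DataFrame({
    "direction_error": [5.0, 12.0, 110.0, -8.0, 95.0],
    "rt_ms": [450, 600, 520, 21000, 480],
})

# Hard cutoffs: drop trials more than 90 degrees off target
# (not even close to the right way) or with RTs over 20 seconds
# (participant was probably doing something else).
clean = trials[
    (trials["direction_error"].abs() <= 90) & (trials["rt_ms"] <= 20000)
]

print(len(clean))  # 2 of 5 toy trials survive
```

The point of hard cutoffs over data-driven criteria (e.g. ±3 SD) is that they are decided before seeing the data, so they can't be tuned to flatter a result.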
Yeah, I'm on the fence about that. The output here has aggregate data that can be used to do an ANOVA (and family) in your favourite stats software. So there is no way to remove outliers afterward from that file. But... no one has to use the feature, and you can always go back and run it again.
I'll be looking for bugs and may update this periodically. If you see any, please let me know.
mthart.shinyapps.io/PreProcessor/
So I hesitate to put this out here, but on the other hand, I should probably show graduate students how to use their understanding of code to make things like this and be competitive in the current job market.
Vibe coded a Shiny app for data preprocessing. Meant as a tool for undergrads doing projects in the lab. Computer literacy is low across the board, and with tools like these (and the LLMs to build them) there's no need to understand what you're doing.