Also as a reminder, there is a special issue on precision neurorehabilitation coming up related to this bsky.app/profile/hele...
Posts by R. James Cotton
And the precision neurorehabilitation satellite is a wrap! Had a great time, and nice to hear this was a record-size satellite for
@ncmsociety.bsky.social!
Hope everyone had a good time.
There will be signage up by 7am today. The room is Ohwada. It is in the South Building on Level 1.
Now on to our @ncmsociety.bsky.social satellite tomorrow: "Precision neurorehabilitation for movement disorders: Integrating technology, neuroscience, and clinical practice" (ncm-society.org/satellite-me...). Such mentally synergistic meetings back-to-back.
Wrapped up @janeliaconf.bsky.social's Simulated Bodies meeting (www.janelia.org/you-janelia/...) — incredible to see how fast neuromuscular modeling, simulation, and neuroscience are converging.
Please consider submitting your research related to "Precision neurorehabilitation for movement disorders" to IEEE TNSRE. Thanks to the organizers of NCM satellite meeting, Profs Lee Miller and @peabody124.bsky.social
In contrast, Claude Code is tougher because of the smaller context window, but ultimately does what I want (less impulsive), including implementing complex plans through a series of subagents. It feels like the inherent forgetfulness of the subagents keeps it much more aligned on the big picture.
I want to like Gemini 3, but gemini-cli is the most enraging, terrible experience. It is so impulsive, and while trying to develop a plan it just starts making random half-baked code edits. No amount of agent files or memories seems to prevent this. It is unaligned and useless for anything complex.
At #SfN and want to hear about how BiomechGPT can provide a language interface to your biomechanics data? Check out the poster happening now at YY4 with Ruize Yang and @antihebbiann.bsky.social
This proposal outlines a scientific approach for advancing precision rehabilitation through a novel Causal Framework. The framework aims to optimize therapeutic interventions by integrating data science, artificial intelligence (AI), and causal inference. It synthesizes concepts from Computational Neurorehabilitation, the International Classification of Functioning, Disability and Health (ICF), and the Rehabilitation Treatment Specification System (RTSS).

Central to this approach is the development of formal causal models, referred to as "digital twins," which simulate individual patient recovery trajectories. These models leverage heterogeneous, longitudinal data, encompassing clinical assessments and high-resolution measurements obtained via AI-powered movement analysis, such as computer vision-based gait analysis. The RTSS plays a critical role by enabling detailed documentation of treatment targets, active ingredients (viewed as causal influences), and mechanisms, which directly inform the construction of these causal models. The resulting models allow for counterfactual analysis—predicting how a patient might respond to different interventions. This informs an Optimal Dynamic Treatment Regimen (ODTR), guiding the selection of therapies to maximize long-term function across multiple ICF levels (body function, activity, participation), focusing on patient-valued outcomes.

Illustrative case studies include EMG biofeedback for spinal cord injury, modeling its effects on motor function via mechanisms like activity-dependent plasticity, and post-stroke gait rehabilitation, integrating neurophysiology and kinematics to differentiate recovery mechanisms and predict outcomes from various interventions (e.g., high-intensity training, FES, AFOs). The framework acknowledges challenges like data integration and model complexity, emphasizing the need for transdisciplinary efforts and advanced methods like causal representation learning.
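To make the counterfactual logic concrete, here is a minimal sketch, with all interventions and coefficients invented purely for illustration: a toy structural model predicts a functional outcome under each candidate intervention, and one step of an ODTR-style policy selects whichever intervention maximizes the predicted outcome for that patient's state.

```python
# Toy sketch of counterfactual treatment selection (not the proposal's
# actual model): every effect size and interaction below is invented
# for illustration only.

EFFECTS = {  # hypothetical average treatment effects on function
    "high_intensity_training": 0.30,
    "FES": 0.20,
    "AFO": 0.10,
}

def predicted_outcome(baseline_function, intervention, impairment_severity):
    """Counterfactual prediction under one intervention. As an invented
    interaction, severe impairment attenuates the benefit of
    high-intensity training."""
    effect = EFFECTS[intervention]
    if intervention == "high_intensity_training":
        effect *= (1 - impairment_severity)
    return baseline_function + effect

def select_treatment(baseline_function, impairment_severity):
    """One step of an optimal dynamic treatment regimen: compare the
    counterfactual predictions and pick the best intervention."""
    return max(EFFECTS, key=lambda i: predicted_outcome(
        baseline_function, i, impairment_severity))
```

Under these invented numbers, the policy recommends intensive training for mild impairment but switches to FES when severity erases the training benefit; the real framework would learn such effects from longitudinal data rather than hard-code them.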
If you are at @acrmrehab.bsky.social I will be presenting this morning (8am)
SPECIAL SYMPOSIUM: AI-powered Movement Analysis and a Causal Framework for Precision Rehabilitation SS1 #ACRM2025 cdmcd.co/5npwMR
More demos and code available at intelligentsensingandrehabilitation.github.io/MonocularBio...
JD did a great job creating a Gradio demo so try it out and let us know what you think
And here is a video of JD going on a celebratory run that the preprint is out :)
Clinical Validity of Smartphone-Based Gait Deviation Index. A) Hip and knee flexion angles of clinical and control groups. B) GDI separates groups at risk of falls determined by the Berg Balance Scale. C) GDI correlates with 10 Meter Walk Test performance (r = 0.82). D) GDI of LLPUs and KOA participants is significantly lower than that of control populations. Further, GDI of transfemoral amputees is significantly lower than GDI of transtibial amputees. E) GDI collected in clinical settings correlates (r = 0.47) with the mJOA, a clinically used ordinal questionnaire.
Excitingly, in addition to producing accurate kinematics, we can measure the gait deviation index from these videos. We find this is quite sensitive to a number of different clinical backgrounds, and even more responsive after neurosurgical interventions than the standard clinical outcomes (mJOA).
Quality Measures of Single Camera Fitting. A) Kinematic traces from smartphone video (red/blue) compared to ground truth (gray dashed) during walking. B) Joint angle errors across populations for select lower limb angles. n denotes the number of unique individuals in each cohort and v denotes the number of total videos for that cohort. C) Select joint angle errors with respect to camera view angle show that sagittal plane angles have the lowest error with sagittal camera views, and frontal angles have the lowest error with frontal views. D) Pelvis translation (RTE) extracted from handheld smartphone video compared to ground truth during functional gait assessments.
Central to this was extending our end-to-end differentiable biomechanics approach to fitting both 2D and 3D keypoints measured from images. It can also account for smartphone rotation measured by our Portable Biomechanics Platform, which makes this easy to integrate into clinical workflows.
Methods Overview. We introduce a method for biomechanically grounded movement analysis in clinical settings using a handheld smartphone. A) Researchers held a smartphone (optionally with gimbal) while following a participant walking. Our system has no specific requirements regarding viewing angle, distance to subject, or therapist assistance. B) Recorded smartphone video and optional wearable sensor data are stored in the cloud, and processed using PosePipe, an open-source package implementing computer vision models for person tracking and keypoint detection. C) To reconstruct movement, we represent movement as a function that outputs joint angles, which—combined with body scaling parameters and evaluated through forward kinematics—generate a posed biomechanical model in 3D space. This untrained model is compared to video-extracted joint locations and optionally smartphone sensor data to compute a loss. This loss guides backpropagation to iteratively refine both the kinematic trajectory and body scale. D) Initially, the representation lacks knowledge of the person's movements and scale (e.g., height, limb proportions), but after optimization, it typically tracks joint locations within 15 mm in 3D and 5 pixels in 2D.
We developed a novel approach to fitting biomechanics from smartphone video that produces kinematic reconstructions within a few degrees and has been validated across a wide range of activities and clinical backgrounds.
Super proud of this work led by JD Peiffer (@abilitylab.bsky.social and @tgsatnu.bsky.social)
preprint: arxiv.org/abs/2507.08268
project/code: intelligentsensingandrehabilitation.github.io/MonocularBio...
Open source code that lets you get state-of-the-art biomechanics from a smartphone
Enjoyed presenting on "The Good, The Bad, and the Ugly: AI for SCI Clinicians" with @ryansolinskymd.bsky.social and @josezariffa.bsky.social. Great enthusiasm from the crowd on the topic, a lively discussion, and nice follow-up from the precourse
bsky.app/profile/ryan...
AI integration in spinal cord injury medicine precourse at the ASIA2025 meeting. Led by @peabody124.bsky.social and Dr. Sarah Brueningk. Learning lessons from other successful examples in Cancer, Alzheimer’s, Cardiology.
However, there is more work to do validating these against EMG recordings (which we have in many of our trials from our wearable sensor platform), and I suspect it will take a lot of work to really tune it up.
Still, finding clinically sensible patterns is a promising first step.
The imitation learning policy is trained to replicate all the kinematics from 30+ hours of markerless motion data by actuating a muscle-driven model, with some regularization on muscle activation. Through training it learns muscle patterns that replicate the kinematics.
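The shape of that objective can be shown with a deliberately tiny example, which is an assumption about the general form (tracking error plus an activation penalty), not the paper's implementation: a single antagonist muscle pair must produce a target joint torque, and the squared-activation regularizer discourages wasteful co-contraction.

```python
import numpy as np

# Toy version of an imitation-learning objective with activation
# regularization:  L = (tracking error)^2 + lam * ||activations||^2.
# The antagonist pair produces net torque a_flex - a_ext (simplified).

def objective(a, tau_target, lam=0.1):
    a_flex, a_ext = a
    tau = a_flex - a_ext                      # net joint torque
    return (tau - tau_target) ** 2 + lam * (a_flex ** 2 + a_ext ** 2)

def optimize(tau_target, lam=0.1, steps=5000, lr=0.05):
    """Projected gradient descent keeping activations in [0, 1]."""
    a = np.array([0.5, 0.5])                  # start fully co-contracted
    for _ in range(steps):
        a_flex, a_ext = a
        err = (a_flex - a_ext) - tau_target
        grad = np.array([2 * err + 2 * lam * a_flex,
                         -2 * err + 2 * lam * a_ext])
        a = np.clip(a - lr * grad, 0.0, 1.0)
    return a

a = optimize(tau_target=0.4)
# The regularizer drives the antagonist activation to zero
```

Even in this toy setting the penalty does the physiologically sensible thing: of the infinitely many activation pairs that produce the target torque, it selects the one without co-contraction, which is the same intuition behind regularizing activations in the full muscle-driven policy.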
Looking forward, we hope to combine things like this.
E.g., using BiomechGPT to understand user requests about movement and then running simulations in the physics simulator via imitation learning.
Either way, really starting to see promise for foundation models in biomechanics
Stay tuned :)
Particularly exciting was evidence of positive transfer learning as we increased the set of tasks it is trained on.
Of course it also makes plenty of mistakes (the person in that video is using a crutch!). Lots of work to do, and we are just starting to explore the opportunities from this approach.
We were super excited to see how well this performed across a range of tasks, even with fairly sparse annotation.
It's doing a great job at things like activity classification, which can be rather challenging for impaired movements, and more subtle things like inferring likely diagnoses.
The next paper is BiomechGPT arxiv.org/abs/2505.18465 with @antihebbiann.bsky.social and Ruize Yang, which trains a language model to be fluent in tokenized movement sequences. This draws inspiration from MotionGPT but focuses on benchmarking performance on clinically meaningful tasks.
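For readers unfamiliar with movement tokenization, here is a deliberately simplified stand-in (a real system would typically learn the codebook, e.g. with a vector-quantized autoencoder, rather than use fixed uniform bins): each continuous joint-angle sample is quantized into one of K discrete tokens so that a language model can consume movement as an ordinary token sequence.

```python
import numpy as np

# Simplified movement tokenizer: uniform binning of joint angles into a
# K-token vocabulary. This is an illustrative assumption, not the
# model's actual (learned) tokenizer.

K = 256                      # movement-token vocabulary size
LO, HI = -np.pi, np.pi       # assumed joint-angle range (radians)

def tokenize(angles):
    """Map continuous angles to integer tokens in [0, K-1]."""
    frac = np.clip((angles - LO) / (HI - LO), 0, 1 - 1e-9)
    return (frac * K).astype(int)

def detokenize(tokens):
    """Map tokens back to bin-center angles."""
    return LO + (tokens + 0.5) / K * (HI - LO)

angles = np.array([0.0, 0.5, -1.2, 3.0])
tokens = tokenize(angles)
recon = detokenize(tokens)
# Round-trip error is bounded by half a bin width
```

The key property is that the round-trip loses at most half a bin of precision while turning a kinematic time series into the same kind of discrete sequence the language model already handles for text.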
Shoutout to recent related work from @trackingskills.bsky.social group arxiv.org/abs/2503.14637
Great to see growing enthusiasm in this space
bsky.app/profile/trac...
And shoutout to MyoSuite for pushing the neuromuscular modeling in MuJoCo
Here is another example. It also captures some imperfections, like little foot slips, that we want to improve.
Since then, we've tuned it up to handle anthropometric and muscle scaling. Still lots of work to do tuning this further, as there are many things we aren't yet scaling, such as mass and inertia, and we aren't yet optimizing w.r.t. the EMG data we have from our wearable sensors.
The first is KinTwin arxiv.org/abs/2505.13436, which trains torque-driven and muscle-driven policies to replicate movements of intact and impaired gait. It detects clinically meaningful features like propulsion asymmetries and muscle timing.
Teaser from a few months back: bsky.app/profile/peab...
Over the last few years we have been developing methods for markerless motion capture of biomechanics and getting them into the clinics, such as at @abilitylab.bsky.social.
We are now developing foundation models from these large datasets and testing what this enables. Two recent preprints:
It was also the initial meeting of Julius Dewald and Bob Sainburg's Society for Neuromechanics in Rehabilitation (SoNMiR), and it was great to present on biomechanics in rehabilitation and arxiv.org/abs/2411.03919. Very excited about this society bringing the community together.