
Posts by Pedro Lopes

If you're curious about or working in Neuroscience & HCI and going to
@chi.acm.org #CHI2026, join the #NeuroHCI meetup Wed 15 Apr 14:15-15:45 🧠

neurohci.github.io/CHI2026/

W/ @iddowald.bsky.social Yudai Tanaka Yun Ho @jamieward.bsky.social @drmaxlwilson.bsky.social Kia Höök
@pedrolopes.org Rainer Malaka

4 days ago

I should note it's the first time in the history of this class that it's being taught without @tengshanyuan.info, and it took me like 10 min into the first lecture to say, well, this symbol was made by ... "Prof. Shan-Yuan Teng"! Let's see if we even get to the end without his help this time!

5 days ago
Pedro and Romain smiling after ordering all the PCBs

That time of the year where Romain Nith and I teach PCB design (with @kicad.org and @jlcpcb.bsky.social) to folks with no electronics/EE background; it's awesome to see them make physical devices from scratch! This is the hcipcb.plopes.org class at @uchicagopsd.bsky.social @uchicagocs.bsky.social

6 days ago

I'm still so mind-blown by all her work that I shared a screenshot of Instagram rather than the actual photo...! Congratulations @xjasminelu.bsky.social

1 week ago
Everyone celebrating with Jasmine in front of her PhD slide!

I can't put this into any other words except that "you should check out Dr. Jasmine Lu's work at jasminelu.site".
A huge congratulations to @xjasminelu.bsky.social on her absolutely stellar PhD, and to Eric Paulos, Ken Nakagaki, and Greg Abowd for serving on this most incredible committee!

1 week ago

[Reposts, or referrals to other channels, appreciated!]
My UM team is recruiting DHH adults who have experience using social VR apps (casual or professional). The study will be remote, last about 1 hour, and be compensated. Please fill out this form if you are interested: tinyurl.com/mrtjwtbv. Thanks!

2 weeks ago

Finally, this paper got a Best Paper Award 🏆! Both lead authors, Romain Nith and Yun Ho, and co-author Shan-Yuan Teng will be at #CHI2026.

Read the paper: embodied-ai.tech/static/pdfs/...

Watch the video: youtube.com/watch?v=pJM2...

Get the code: github.com/humancompute...

or DM us! 7/7

2 weeks ago
Images (with consent from participants, and blurred where requested) of people interacting with our system to perform real tasks, like using an analogue camera they had never used before, or a weird tool (a magnetic sweeper).

But how do people perceive having their bodies moved by an AI? In a study, we found participants successfully completed physical tasks while guided by generative EMS, even when EMS instructions were (purposely) erroneous! (we injected some cool errors) 6/7

2 weeks ago
Two usages of our system: here, the same request "EMS help me" generates two different instructions because a spray can with paint should be shaken and only then pressed, but a spray can with oil should only be pressed (no need to shake). The multimodal AI gets this entirely from context (i.e., sensors, POV image from smart glasses, etc.) = no custom code!

Here, the same "help me" request generates two different movements because a spray can with paint should be shaken, but a spray can with oil needs no shake: the multimodal AI gets this entirely from context (i.e., sensors, POV image from smart glasses, etc.) = no custom code! 5/7
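For a concrete feel of this context-conditioning, here is a toy Python sketch (entirely my own illustration, not the paper's code): the same "help me" request maps to different instruction sequences depending on what the system sees. In the real system a multimodal model infers this from sensor context; here a hard-coded lookup with hypothetical labels stands in for it.

```python
# Toy stand-in for the multimodal model's contextual planning: same request,
# different plans, depending on the sensed object (labels are hypothetical).
def plan_ems_steps(request: str, seen_object: str) -> list[str]:
    """Return a sequence of EMS movement steps for the given context."""
    if seen_object == "spray can (paint)":
        return ["shake can", "press nozzle"]  # paint must be mixed first
    if seen_object == "spray can (oil)":
        return ["press nozzle"]               # oil needs no shaking
    return []                                 # unknown context: do nothing

print(plan_ems_steps("EMS help me", "spray can (paint)"))  # ['shake can', 'press nozzle']
print(plan_ems_steps("EMS help me", "spray can (oil)"))    # ['press nozzle']
```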

2 weeks ago
System diagram of our embodied AI; note how it uses many sensors (smart glasses to see what the user sees, a microphone to hear them) in order to generate contextually relevant instructions.

To implement our form of embodied AI, we use computer vision + large language models + lots of user-worn sensors (smart glasses, GPS, etc.) to generate contextually relevant EMS instructions, constraining these to a muscle-stimulation knowledge base that takes care of joint limits! 4/7
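To make the "constrain generation with embodied knowledge" idea concrete, here is a minimal hypothetical Python sketch (all names and numbers are my own assumptions, not the paper's implementation): an AI-proposed EMS instruction is checked against a small joint-limit knowledge base, and clamped or rejected, before it would ever reach the stimulator.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative joint-limit knowledge base (degrees). A real EMS system would
# encode far richer embodied knowledge: safe currents, pulse widths, muscle
# pairs, etc. These entries and numbers are assumptions for the sketch.
JOINT_LIMITS = {
    "wrist_flexion": (0.0, 60.0),
    "wrist_extension": (0.0, 50.0),
    "elbow_flexion": (0.0, 140.0),
}

@dataclass
class EMSInstruction:
    joint: str           # which joint the stimulation should move
    target_angle: float  # AI-proposed target angle in degrees

def constrain(instr: EMSInstruction) -> Optional[EMSInstruction]:
    """Clamp an AI-proposed instruction to the knowledge base, or reject it."""
    limits = JOINT_LIMITS.get(instr.joint)
    if limits is None:
        return None  # unknown joint: reject rather than stimulate blindly
    lo, hi = limits
    clamped = min(max(instr.target_angle, lo), hi)
    return EMSInstruction(instr.joint, clamped)

# Example: a (purposely) erroneous proposal gets clamped to the safe range.
proposal = EMSInstruction("wrist_flexion", 95.0)  # beyond the 60-degree limit
safe = constrain(proposal)  # -> EMSInstruction("wrist_flexion", 60.0)
```

The design choice worth noting is the fail-closed branch: anything the knowledge base cannot vouch for is dropped instead of actuated, which is how a generative component can be kept inside safe bounds.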

2 weeks ago
Four tasks our system can do: bike rack, open pill bottle, spray, and open a weird window.

Instead, in embodied-ai.tech we explore a different approach where muscle-stimulation instructions are AI-generated considering the user’s context (e.g., pose, location, surroundings). The resulting system can perform physical assistance without any custom programming! 3/7

2 weeks ago
A photo of affordance++ paper

I have been working on electrical muscle stimulation (EMS) for physical assistance for more than a decade (in affordance++, we showed EMS can help manipulate tools!). But EMS assistance is highly specialized and non-contextual—it ignores your body pose & ignores your needs 2/7

2 weeks ago

New paper🏆: Generative Muscle Stimulation: Physical Assistance by Constraining Multimodal-AI with Embodied Knowledge.
A physical AI that generates muscle-stimulation instructions based on your physical task goals: embodied-ai.tech
demo video: www.youtube.com/watch?v=pJM2...
1/7

2 weeks ago

Might have joined in after your genius move. :)

2 weeks ago

My Sun?
O)))

2 weeks ago

Text or video is not the only form AI can take.

In embodied-ai.tech (#CHI2026, Best Paper 🏆) we create an embodied AI that acts via muscle stimulation to perform physical tasks, e.g., placing a bike on a bus rack, and more!
www.youtube.com/watch?v=pJM2...

... Yun Ho (www.yunho.org) and Romain Nith!

3 weeks ago

Being an eLife editor has been exciting, and we are in a new steady state: the quality of papers remains high and variance has dropped; work during the editorial phase has gone up tremendously; review quality is excellent and has improved. Kudos to @behrenstimb.bsky.social and the leadership team.

3 weeks ago

Great keynote by Olaf Blanke at #AugmentedHumans2026 touching on consciousness and neuroscience at #AHS2026

3 weeks ago

I'm chairing the first session of #augmentedhumans2026 on #haptics, sensory substitution, and perception! What a collection of incredible papers! (They will be online soon, but DM me if you want a preview.) #AHs2026

3 weeks ago

#augmentedhumans2026 conference started (#AHs2026) with a record number of papers and participants! Thanks to the organizing committee for all the hard work! Papers will be in the ACM DL very soon!

3 weeks ago

Building a more sustainable CHI 🌍 We’re sharing resources and guidance to help the community reduce the environmental footprint of the conference. Learn more about sustainability efforts for CHI 2026: chi2026.acm.org/2026/03/02/s...
#CHI2026 #CHICommunity #HCI #SustainableCHI

1 month ago

Dan, you are always rocking. Hope to see you there, and if possible I will try to attend those papers' talks (this year it might be a bit challenging for me to be at talks, but I will do my best!)

3 weeks ago

Barcelona PhD students in HCI: join the global CHI community! 🌍
Apply for the CHI 2026 Local Student Scholarship and receive full conference registration.

Priority for first-time CHI attendees.

📅 Mar 20
📩 local@chi2026.acm.org

#CHI2026

4 weeks ago

Wow, what a fantastic set of contributions to CHI26! Congrats everyone at UMD!

3 weeks ago

Please follow the official details here about the high demand for #CHI2026 registrations (a good problem for our community!) and how to navigate the waitlist.

3 weeks ago

📝 Our new paper officially coming out at #chi2026: we hope to help people think about systems like #CommunityNotes, the design choices they make, and the normative implications of relying on them to moderate our information ecosystem. 📝

2 months ago

🎉 Thrilled to share that our paper "Reporting and Reviewing LLM-Integrated Systems in HCI: Challenges and Considerations" has been conditionally accepted to #CHI2026!
A thread 🧵

1 month ago

Lots of AI at #CHI2026, soon we will release the poster schedule and there will be some more on this front for @dbuschek.bsky.social to parse :)

3 weeks ago

Amazing meetup! Thanks for your contribution to CHI 26!

3 weeks ago

Amazing meetup! So glad you submitted and organized this

3 weeks ago