
Posts by IEEE Transactions on Robotics (T-RO)

The experiment involves scratch removal using a convolved Gaussian kernel as the desired distribution (a). Motion constraints perpendicular to the scratch direction are employed for effective removal (b). The robotic workcell setup and the robot execution, using the finishing disk’s edge point as the TCP, are illustrated in (c). The surface is uniformly covered with marking powder (d) to facilitate precise profiling. The robot successfully removes the desired convolved Gaussian profile from the surface paint (e). (a) Desired coverage. (b) Γ-map. (c) Robot execution. (d) Surface before experiment. (e) Final outcome.

Authors present a learned #ErgodicControl framework that enables #robots to cover complex surfaces for finishing by incorporating tool contact area & human-preferred motion directions learned from demonstrations.

https://ieeexplore.ieee.org/document/11288096
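The notion of a coverage target built by convolving a scratch with a Gaussian kernel can be sketched in a few lines of NumPy. This is a minimal illustration with assumed grid sizes and kernel width, not the paper's pipeline:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to one."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def desired_coverage(mask, sigma=3.0):
    """Blur a binary scratch mask with a separable Gaussian, then
    normalize it into a spatial distribution for coverage control."""
    k = gaussian_kernel(sigma, int(3 * sigma))
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, k, mode="same"), 1, mask.astype(float))
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, k, mode="same"), 0, blurred)
    return blurred / blurred.sum()

grid = np.zeros((64, 64))
grid[32, 10:54] = 1.0          # a horizontal "scratch" on the surface
target = desired_coverage(grid)
```

Normalizing the blurred mask turns it into a spatial probability distribution, which is the standard form of a desired coverage target for an ergodic controller.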

5 days ago
Robot throws a firmly grasped bar into the target cup, completing a full flip. The bar’s angular velocity during free flight far exceeds that of the robot hand. What happens during the short 50-ms transient of gripper opening? In this work, we aim to understand this phenomenon through a physical model.

Authors study transient #ReleaseDynamics in #RobotThrowing, introducing the Sliding Pivot model that captures sticking–pivoting–sliding behavior during release. It cuts horizontal velocity error by ~40% and angular error by ~63% versus conventional models.
https://ieeexplore.ieee.org/document/11251211
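A toy 2-D calculation (not the paper's Sliding Pivot model, and with assumed masses and speeds) shows why pivoting about a fingertip can amplify spin: angular momentum about the new pivot is conserved across the instantaneous transition, converting part of the bar's translational momentum into rotation.

```python
# Uniform bar of mass m and length L, moving with the hand just before release.
m, L = 0.1, 0.3              # mass [kg] and length [m] (assumed values)
I_c = m * L**2 / 12.0        # inertia about the center of mass
d = 0.05                     # fingertip pivot offset from the COM [m]
v_x = 1.5                    # COM speed just before release [m/s]
omega = 5.0                  # angular velocity before release [rad/s]

# Angular momentum about the pivot just before the fingers let go
# (cross term taken with the bar's spin and forward motion aligned):
L_pivot = I_c * omega + m * d * v_x
# After the transition the bar momentarily rotates about the pivot:
omega_after = L_pivot / (I_c + m * d**2)
```

With these numbers the bar's angular velocity more than doubles during the transition, which is the qualitative effect the post describes.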

1 week ago
IROS 2026 Pittsburgh logo and CASE 2026 text.

T-RO authors, note that the transfer window to IROS and CASE 2026 closes on April 30 for accepted eligible papers.
https://www.ieee-ras.org/publications/t-ro/

#robotics #IROS2026 #CASE2026 #ConferencePapers #IEEEras

2 weeks ago
Differentiable policy trajectory optimization with generalizability (DiffOG). Visuomotor policies enhanced by DiffOG generate smoother constraint-compliant action trajectories in a more interpretable way. DiffOG introduces a novel transformer-based differentiable trajectory optimization framework tailored for action refinement in imitation learning. Leveraging the differentiability of the optimization layer and the high capacity of the transformer, DiffOG can be trained on demonstration data to adapt to the diverse characteristics of trajectories across different tasks. We evaluate DiffOG across 13 tasks and showcase four representative ones here. These selected tasks present several key challenges, including long-horizon dual-arm manipulation, high-precision control, and smooth constraint-satisfying trajectory generation.

DiffOG, a differentiable #TrajectoryOptimization layer that enhances visuomotor policies by generating smoother, constraint-compliant action trajectories with better generalization & interpretability — improving performance over baseline methods.
https://ieeexplore.ieee.org/document/11267071
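DiffOG itself couples a transformer with a learned differentiable optimization layer; the optimization half can be illustrated with a hand-rolled quadratic trajectory-smoothing layer (assumed weights, not the paper's formulation): minimize fidelity to the raw policy actions plus a smoothness penalty on second differences, which has a closed-form solution.

```python
import numpy as np

def smooth_trajectory(actions, lam=5.0):
    """Closed-form minimizer of ||x - a||^2 + lam * ||D2 x||^2,
    where D2 is the second-difference (curvature) operator."""
    n = len(actions)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Normal equations: (I + lam * D2^T D2) x = a
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, actions)

raw = np.array([0.0, 0.9, 0.1, 1.1, 0.2, 1.0, 0.3])  # jittery policy output
smoothed = smooth_trajectory(raw)
```

Because the solve is a differentiable linear operation in `actions`, such a layer can sit inside a network and be trained end to end, which is the property DiffOG exploits.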

1 month ago
DynoSAM is an open-source smoothing and mapping framework for dynamic SLAM. (a) System output, which includes camera and object trajectories, as well as the static and per-object dynamic map. (b) Feature-based front end, which performs multiobject tracking in addition to visual odometry. (c) Dynamic map from the camera’s perspective, highlighting the estimated trajectory of each object and the tracked 3-D points.

DynoSAM, an open-source dynamic #SLAM framework that jointly estimates #RobotPose, static scene structure, & object motion/structure in a unified factor-graph optimization—improving motion estimation & robust mapping in indoor/outdoor environments

https://ieeexplore.ieee.org/document/11288097
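One ingredient of dynamic SLAM, estimating a rigid object's motion from tracked 3-D points, can be sketched with the classic Kabsch/SVD alignment (a standalone illustration; DynoSAM embeds such constraints in a joint factor-graph optimization rather than solving them pointwise):

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rigid motion (R, t) with Q ~= R @ P + t, via SVD (Kabsch).
    P, Q are 3xN arrays of corresponding points."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

# Tracked object points at time k, moved by a known rotation + translation.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 20))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.5], [-0.2], [0.1]])
R_est, t_est = rigid_motion(P, R_true @ P + t_true)
```

In a full dynamic SLAM system, pairwise motions like this become factors linking object-pose variables across time, optimized jointly with the camera trajectory and map.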


1 month ago
Representation of bilevel optimization procedure for the robotic prosthesis. Left panel: The robotic knee torque, τ , is generated from the impedance control parameter values, which are obtained through real-time tuning by the RL controller. Lower right panel: This panel illustrates features of knee and thigh kinematics throughout a single gait cycle. For knee kinematics, superscript numbers 1–4 denote the respective phases within the gait cycle (i.e., STF, STE, SWF, and SWE), each corresponding to a knee feature. For thigh kinematics, the minimum value of the thigh angle acts as the feature. Top right panel: At the end of each bilevel optimization iteration, the cost function derived from IRL, each in a quadratic form, is utilized in the design of the RL controller. The implementation of the two interleaving procedures involving the inverse RL and forward RL is summarized in Algorithm 1. The RL controller’s inputs include kinematic features from the corresponding phase [defined in (5) and (6)], and its outputs involve adjustments to the impedance settings [defined in (3)].

Authors introduce a method that personalizes robotic #ProstheticLeg control by optimizing both the #prosthesis and the user’s residual limb via #InverseReinforcementLearning. This enables more natural walking and improved long-term health for amputees
https://ieeexplore.ieee.org/document/11251175
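The inner loop the RL tuner acts on is finite-state impedance control: each gait phase (STF, STE, SWF, SWE) holds its own stiffness, damping, and equilibrium angle, and knee torque follows the usual impedance law. A minimal sketch with assumed parameter values, not those learned in the paper:

```python
# Finite-state impedance control for a robotic knee (illustrative values).
# tau = K * (theta_eq - theta) - B * theta_dot, with (K, B, theta_eq) per phase.
PHASES = {
    # phase: (stiffness K [Nm/rad], damping B [Nms/rad], equilibrium angle [rad])
    "STF": (120.0, 2.0, 0.10),
    "STE": (100.0, 1.5, 0.05),
    "SWF": (30.0,  0.8, 0.90),
    "SWE": (25.0,  1.0, 0.15),
}

def knee_torque(phase, theta, theta_dot):
    """Impedance-law torque for the current gait phase."""
    K, B, theta_eq = PHASES[phase]
    return K * (theta_eq - theta) - B * theta_dot

tau = knee_torque("SWF", theta=0.5, theta_dot=1.2)
```

In the paper's setting, the RL controller adjusts the per-phase (K, B, θ_eq) triples online from the kinematic features of each gait cycle; the values above are placeholders.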

1 month ago
Vehicle configurations in flight. Video is available online. (a) Quadrotor. (b) Hexarotor. (c) 6DOF Hexarotor. (d) Tetrahedron Quadrotor. (e) Tetrahedron Decarotor. (f) Tetrahedron Hexadecarotor.

The #Dodecacopter—a modular UAV made of regular dodecahedron modules that can assemble into 3D, fully actuated configurations beyond flat drone arrays. A prototype flies in multiple shapes, showing versatility and adaptability for #AerialRobotics.

https://ieeexplore.ieee.org/document/11265804

1 month ago
Robot waving and text that reads Thank you!

T-RO is delighted to welcome our many new editorial board members. We thank you for your commitment and dedication to the journal. T-RO would not be the journal it is without the incredible leadership and expertise of our entire editorial board.

www.ieee-ras.org/publications...
#IEEEras #Robotics

1 month ago
A cube with graphics showing the initial condition (left) and final steady state at t=3 s (right).

#Irrotational Contact Fields, a framework that generates convex, physically accurate approximations of complex contact & enables differentiable, artifact-free simulation in Drake, supporting robust sim-to-real transfer for contact-rich robotics tasks
https://ieeexplore.ieee.org/document/11203247

2 months ago
Graphical overview of the article's main part.

Physics-Informed #NeuralNetworks used to build generalizable, fast surrogate models of articulated #SoftRobot dynamics with accuracy across domains while speeding up prediction by ~466× versus first-principles models, for real-time MPC in hardware
https://ieeexplore.ieee.org/document/11242009


2 months ago
Robot Assisted Medical Imaging special collection submissions window closes February 15.

FINAL CALL: Robot Assisted Medical Imaging (RAMI) Special Collection. Submissions close February 15

For information: www.ieee-ras.org/publications/t-ro/specia...

#RoboticCT #SurgicalRobotics #SurgicalSoftRobotics #RoboticLaparoscopy #RoboticImaging

2 months ago
Experiments with Unitree Go1 and Go2 on risky gap terrains. (a) Single-plank bridge, with a narrowest traversable width of 18 cm, validating center-of-gravity control under narrow support. (b)–(c) Balance beams, with a narrowest beam width of 9 cm, testing the robot’s stability under height variations, inclination changes, and edge perception. (d)–(e) Large gaps, demonstrating the capability to traverse gaps of varying widths (up to 65 cm in the real-world experiment).

MARG, a #DRL controller that combines terrain maps and proprioception from a single #LiDAR to traverse risky gap terrains (65 cm wide, narrow planks) with zero-shot sim-to-real transfer—boosting stability & foothold choice without extra sensors
https://ieeexplore.ieee.org/document/11196002

2 months ago
3-D reconstruction from a run of OKVIS2-X on the Spagna sequence of the VBR dataset [1]. Reconstruction with a LiDAR sensor (top) or with a depth network (bottom) to showcase the versatility of the presented system to different sensor modalities. The estimated trajectory is visualized in black. Furthermore, different colors per submap are used.

Authors introduce OKVIS2-X, a real-time multi-sensor #SLAM system that tightly fuses visual, inertial, #GNSS, depth or #LiDAR measurements into dense volumetric maps that scale from city to natural environments with high accuracy and robustness.
https://ieeexplore.ieee.org/document/11196039

2 months ago
Our heavy vehicles are equipped with multiple LiDARs to avoid self-occlusion. (a) shows an example placement with six LiDARs. The point colors in (b)–(c) correspond to the LiDAR from which the points are captured. (b) illustrates the distortion of static structure due to a fast-moving ego vehicle: Raw shows the raw data, and w. ego-motion comp. shows the ego-motion compensation results. (c) demonstrates distortion caused by the motion of other objects, which depends on the velocity of those objects. In such cases, ego-motion compensation alone is insufficient. In comparison, our HiMo pipeline (w. HiMo motion comp.) successfully undistorts the point clouds completely, resulting in an accurate representation of the objects. (a) LiDAR placement illustration. (b) Static structure. (c) Dynamic agents.

HiMo — a pipeline that compensates for #MotionDistortions caused by other moving vehicles in #LiDAR scans by repurposing scene flow estimation to correct non-ego motion, improving geometric consistency and boosting downstream 3D detection & segmentation
https://ieeexplore.ieee.org/document/11196030
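The core correction can be sketched in a few lines: after ego-motion compensation, each point on a moving object is shifted back along that object's estimated velocity (here a given constant, standing in for a scene-flow estimate) by the time elapsed since the scan's reference timestamp. Toy data, not the HiMo implementation:

```python
import numpy as np

def compensate_non_ego_motion(points, timestamps, velocity, t_ref):
    """Undistort points on a moving object: shift each point back along the
    object's velocity by its capture-time offset from the reference time."""
    dt = (timestamps - t_ref)[:, None]        # (N, 1) per-point time offsets
    return points - velocity[None, :] * dt    # (N, 3) corrected positions

# A rigid object sampled over a 0.1 s LiDAR sweep while moving at 10 m/s in +x:
rng = np.random.default_rng(1)
shape = rng.standard_normal((50, 3))          # true shape at the reference time
t = np.linspace(0.0, 0.1, 50)
v = np.array([10.0, 0.0, 0.0])
raw = shape + v[None, :] * t[:, None]         # distorted capture (t_ref = 0)
fixed = compensate_non_ego_motion(raw, t, v, t_ref=0.0)
```

With the correct velocity, the corrected points recover the object's shape at the reference time exactly; the hard part HiMo addresses is estimating that per-object motion from the scans themselves.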

2 months ago
Ad for Robot Assisted Medical Imaging Special Collection. Submission window closes February 15.

Final call-for-papers for the Robot Assisted Medical Imaging special collection. Submissions close February 15.

www.ieee-ras.org/publications/t-ro/specia...

#RoboticCT #SurgicalRobotics #SurgicalSoftRobotics #RoboticLaparoscopy #RoboticImaging

2 months ago

T-RO authors present a learning-based low-level #quadcopter controller which is trained entirely in simulation, but generalizes across different dynamics and even adapts to real-world disturbances
https://ieeexplore.ieee.org/document/11025148

#Quadrotors #AutonomousAerialVehicles


2 months ago
Snapshots of a robotic simulation using our multicontact solver. Top: Bolt-nut assembly. Bottom: dish piling. Although intensive contact formation and stiff interactions make these scenarios challenging to simulate, our solvers successfully complete the simulations within a time budget of less than a millisecond per step.

Introducing CANAL & SubADMM, new multi-contact solvers based on augmented Lagrangian: CANAL for high-precision contact resolution; SubADMM for massively parallel hardware. They improve simulation accuracy & speed
https://ieeexplore.ieee.org/document/11027548

#RobotKinematics #HeuristicAlgorithms
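The ADMM machinery behind solvers like SubADMM can be illustrated on the simplest contact-style problem: a nonnegatively constrained QP, min ½λᵀAλ + bᵀλ s.t. λ ≥ 0, split so one block does a linear solve and the other a cone projection. A toy sketch, not the paper's solver:

```python
import numpy as np

def admm_nonneg_qp(A, b, rho=1.0, iters=200):
    """Solve min 0.5 x^T A x + b^T x  s.t. x >= 0, via ADMM with x = z."""
    n = len(b)
    M = np.linalg.inv(A + rho * np.eye(n))   # factor once (small demo only)
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = M @ (rho * (z - u) - b)          # unconstrained quadratic step
        z = np.maximum(0.0, x + u)           # projection onto the cone x >= 0
        u = u + x - z                        # scaled dual update
    return z

A = np.diag([2.0, 1.0])
b = np.array([-4.0, 3.0])
lam = admm_nonneg_qp(A, b)   # analytic solution for diagonal A: [2, 0]
```

The appeal of this splitting for contact is that the projection step decouples per contact, which is what makes massively parallel variants possible.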

2 months ago
Configuration of OceanVoy (left) and the onboard energy diagram (right), power distribution marked in red.

T-RO authors propose #EeLsT, an energy-efficient long-short-term observer framework that adaptively balances control decisions for sailboat #actuators under environmental disturbances (waves, currents). Saves ~30% energy in sim & ~27% in real #sailing
https://ieeexplore.ieee.org/document/11024557


2 months ago
Conceptual illustration exemplifying the dynamic scaling of physical and cognitive-grounded safety zones based on human awareness.

PRO-MIND, a human-in-the-loop framework that tunes #RobotMotion based on human attention, stress & safety. It adapts safety zones & paths using B-splines & multi-objective optimization to balance comfort, execution time, & smoothness
https://ieeexplore.ieee.org/document/10912779
#CollaborativeRobots
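Path adaptation via B-splines, as mentioned above, rests on evaluating a spline from control points; a uniform cubic B-spline segment is a fixed blend of four consecutive control points. A minimal sketch with made-up 2-D waypoints:

```python
import numpy as np

def cubic_bspline_point(P0, P1, P2, P3, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1]."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return b0 * P0 + b1 * P1 + b2 * P2 + b3 * P3

# Control points of a planar robot path (illustrative values).
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.5, 2.2], [4.0, 0.5], [5.0, 1.5]])
path = [cubic_bspline_point(*ctrl[i:i + 4], t)
        for i in range(len(ctrl) - 3)
        for t in np.linspace(0.0, 1.0, 20)]
```

Moving one control point changes the path only locally (each point influences at most four segments), which is why B-splines suit online deformation of a path around dynamically scaled safety zones.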

3 months ago
Intricate, stable structure generated by our object placement planner. The surface is shaded according to robustness to perturbation by external forces; we use this measure to inform our proposed planner.

A planner that reverses the typical pose-sampling approach: picking robust #ContactPoints, then finding a placement pose that satisfies them. The method runs ~20× faster thanks to a stability #heuristic, and works well even in cluttered robot scenes.
https://ieeexplore.ieee.org/document/11027417


3 months ago
Globally consistent 3-D maps reconstructed by CURL-SLAM. Point cloud maps with different resolutions are continuously reconstructed using the same CURL map, which is ultracompact (0.26% of the 3.2-GB raw point clouds).

CURL-SLAM, a #LiDAR #SLAM system that builds ultra-compact implicit maps using spherical harmonics and the CURL representation, handling loop closures & bundle adjustment in real time (10 Hz) on a laptop, at ~0.26% of the original point-cloud size.

https://ieeexplore.ieee.org/document/11078155
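The encoding idea, representing a function on the sphere (e.g., range as a function of ray direction) by spherical-harmonic coefficients, can be sketched with a degree-1 real basis fit by least squares. A toy illustration, far below the harmonic degrees a real compact map would use:

```python
import numpy as np

def sh_basis_deg1(dirs):
    """Real spherical harmonics up to degree 1 at unit directions (N, 3)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0 = 0.5 * np.sqrt(1.0 / np.pi)        # Y_0^0
    c1 = np.sqrt(3.0 / (4.0 * np.pi))      # Y_1^{-1}, Y_1^0, Y_1^1
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

rng = np.random.default_rng(2)
dirs = rng.standard_normal((200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
ranges = 5.0 + 1.2 * dirs[:, 0] - 0.7 * dirs[:, 2]   # synthetic range function

B = sh_basis_deg1(dirs)                    # (200, 4) design matrix
coeffs, *_ = np.linalg.lstsq(B, ranges, rcond=None)
recon = B @ coeffs                         # 4 coefficients stand in for 200 samples
```

The compression comes from storing coefficients instead of points; reconstruction at any resolution is then just re-evaluating the basis on a denser set of directions.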

3 months ago
Seven images of green rope upright at different angles. Caption: MSRA motion for (a) 40°, (b) 50°, (c) 60°, (d) 0° in position and orientation control. MSRA motion comparison for the task (e) 50° a20°, (f) 50°, and (g) 50° c20°.

Authors propose a planning + control framework for modular #SoftRobot arms that uses biLSTMs and only coarse internal sensing feedback. It handles position & orientation control, obstacle avoidance & online interaction.
ieeexplore.ieee.org/document/110...

#NeuralNetwork

3 months ago
MSRA motion for (a) 40°, (b) 50°, (c) 60°, (d) 0° in position and orientation control. MSRA motion comparison for the task (e) 50° a20°, (f) 50°, and (g) 50° c20°.

Authors propose a planning + control framework for modular #SoftRobot arms that uses biLSTMs and only coarse internal sensing feedback. It handles position & orientation control, obstacle avoidance & online interaction.
https://ieeexplore.ieee.org/document/11049035

#NeuralNetwork

3 months ago

A model-simplification framework for efficient deformable-object manipulation: task-conditioned action-space reduction plus simplified #dynamics enables faster planning for cloth folding & rope shaping.
ieeexplore.ieee.org/document/110...

#ComputationalModeling #PathPlanning #Robots

3 months ago
A graphic overview of the iterative model simplification and motion planning framework for a cloth side folding task, with closed-loop robot execution in the real world. Initially, a simplified geometric model is identified and used to extract key picking points in the reduced action space. A simplified dynamics model is then built and utilized to plan a trajectory in a significantly shorter time. The trajectory is executed on the original model, and if the goal is not reached, the loop iterates, refining the simplified model until a satisfactory trajectory is found. Once a valid trajectory is identified, it is executed on the robot, with the perception system continuously tracking the deformation during manipulation.

A model-simplification framework for efficient deformable-object manipulation: task-conditioned action-space reduction plus simplified #dynamics enables faster planning for cloth folding & rope shaping.
ieeexplore.ieee.org/document/110...

#ComputationalModeling #PathPlanning #Robots

3 months ago
Text ad that reads, "Call for papers. Robot-Assisted Medical Imaging, a special collection".

A T-RO call-for-papers: Robot-Assisted Medical Imaging special collection. Submissions due by February 15.
www.ieee-ras.org/publications...

#RoboticCT #SurgicalRobotics #SurgicalSoftRobotics #RoboticLaparoscopy #RoboticImaging

4 months ago

Call for Papers: the deadline for the T-RO Special Collection on Foundation Models has been extended to December 12.

Read more: www.ieee-ras.org/publications...

#FoundationModelsforRobotics #TactileSensing #RobotEmbodiments

4 months ago
Snapshots and visualization results as the quadrotor flies through 3-D tunnel case 2 shown in Fig. 1(e). The markers are the same as in previous figures, and the quadrotor positions in the snapshots are labeled in the visualization. (a) Visualization. (b) Flying into the entrance. (c) Flying down the vertical section. (d) Flying upward along the slope. (e) Traversing the dark rectangular tunnel section. (f) Flying out of the exit.

This system lets drones fly autonomously through tunnels as narrow as 0.5 m in diameter, combining virtual omnidirectional perception with a motion planner that handles low light, sparse visual features, and airflow disturbances, even outperforming human pilots.
ieeexplore.ieee.org/document/109...

#Quadrotors

4 months ago
Schematic diagram of the closed-loop motion control experiment based on flow perception.

FlowSight: a fish-inspired #ArtificialLateralLine that lets #UnderwaterRobots ‘feel’ flow in real time. A vision system watches a #biomimetic tentacle deform, then AI estimates the flow vector, enabling closed-loop motion control based on flow perception.
ieeexplore.ieee.org/document/109...

5 months ago