GT @ ICRA 2022

Accepted Papers

 

Tue 24 May 2022

Autonomous Systems

C. Jimenez Cortes and M. Egerstedt, “Task Persistification for Robots with Control-Dependent Energy Dynamics”

Bioinspired and Biomimetic Systems

J. Lynch, J. Gau, S. N. Sponberg, and N. Gravish, “Autonomous Actuation of Flapping Wing Robots Inspired by Asynchronous Insect Muscle”

Task and Motion Planning

Z-Y. Gu, N. Boyd, and Y. Zhao, “Reactive Locomotion Decision-Making and Robust Motion Planning for Real-Time Perturbation Recovery”

Mechanism Design II

L.R. Huang, A. Zhu, K. Wang, D. Goldman, A. Ruina, and K. H. Petersen, “Construction and Excavation by Collaborative Double-Tailed SAW Robots”

C.C. Kemp, A. Edsinger, H.M. Clever, and B. Matulevich, “The Design of Stretch: A Compact, Lightweight Mobile Manipulator for Indoor Human Environments”

Collision Avoidance

M. King-Smith, P. Tsiotras, and F. Dellaert, “Simultaneous Control and Trajectory Estimation for Collision Avoidance of Autonomous Robotic Spacecraft Systems”

Perception for Grasping and Manipulation I

Y-Z. Lin, J. Tremblay, S. Tyree, P. Vela, and S. Birchfield, “Single-Stage Keypoint-Based Category-Level Object Pose Estimation from an RGB Image”

Visual Servoing and Tracking

Y-Z. Lin, J. Tremblay, S. Tyree, P. Vela, and S. Birchfield, “Keypoint-Based Category-Level Object Pose Tracking from an RGB Sequence with Uncertainty Estimation”

Mechanism Design I

P. Lis, A. Sarma, G. Trimpe, T.A. Brumfiel, R-H. Qi, and J. Desai, “Design and Modeling of a Compact Advancement Mechanism for a Modified COAST Guidewire Robot”

Imitation Learning

A. Silva, N. Moorman, W. Silva, Z. Zaidi, N. Gopalan, and M. Gombolay, “LanCon-Learn: Learning with Language to Enable Generalization in Multi-Task Manipulation Domains”

Reinforcement Learning for Mobile Robots

L. Smith, J.C. Kew, X-B. Peng, S. Ha, J. Tan, and S. Levine, “Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World”

Optimization and Optimal Control II

O. So, Z-Y. Wang, and E. Theodorou, “Maximum Entropy Differential Dynamic Programming”

Optimization and Optimal Control I

K. Stachowicz and E. Theodorou, “Optimal-Horizon Model-Predictive Control with Differential Dynamic Programming”

Human Detection, Tracking, and Modeling

M. Tunnell, H-J. Chung, and Y-C. Chang, “A Novel Convolutional Neural Network for Emotion Recognition Using Neurophysiological Signals”

Bioinspired and Biomimetic Systems

T-Y. Wang, B-X. Zhong, Y-L. Deng, R-J. Fu, H. Choset, and D. Goldman, “Generalized Omega Turn Gait Enables Agile Limbless Robot Turning in Complex Environments”

Optimization and Optimal Control I

J. Yin, Z-Y. Zhang, E. Theodorou, and P. Tsiotras, “Trajectory Distribution Control for Model Predictive Path Integral Control Using Covariance Steering”

 

Wed 25 May 2022

 

Planning and Control

H. Almubarak, K. Stachowicz, N. Sadegh, and E. Theodorou, “Safety Embedded Differential Dynamic Programming Using Discrete Barrier States”

Art and Entertainment / Space Robotics

G. Chen, S. Baek, J.D. Florez, W-L. Qian, S-W. Leigh, S. Hutchinson, and F. Dellaert, “GTGraffiti: Spray Painting Graffiti Art from Human Painting Motions with a Cable Driven Parallel Robot”

Integrated Planning and Learning

N. Dashora, D. Shin, D. Shah, H. A. Leopold, D. Fan, A.A. Agha-Mohammadi, N. Rhinehart, and S. Levine, “Hybrid Imitative Planning with Geometric and Predictive Costs in Offroad Environments”

Soft Robot Applications

A. Gunderman, J. Collins, A. Myers, R. Threlfall, and Y. Chen, “Tendon-Driven Soft Robotic Gripper for Blackberry Harvesting”

Reinforcement Learning I

H-L. Hsu, Q-H. Huang, and S. Ha, “Improving Safety in Deep Reinforcement Learning Using Unsupervised Action Planning”

N.K. Kannabiran, I. Essa, and S. Ha, “Graph-Based Cluttered Scene Generation and Interactive Exploration Using Deep Reinforcement Learning”

Multi-Robot and Swarm Robotics I

S-B. Kim, M. Santos, L. Guerrero-Bonilla, A. Yezzi, and M. Egerstedt, “Coverage Control of Mobile Robots with Different Maximum Speeds for Time-Sensitive Applications”

Reinforcement Learning II

A. Kosta, M. A. Anwar, P. Panda, A. Raychowdhury, and K. Roy, “RAPID-RL: A Reconfigurable Architecture with Preemptive-Exits for Efficient Deep-Reinforcement Learning”

Deep Learning in Grasping and Manipulation I

W-Y. Liu, C. Paxton, T. Hermans, and D. Fox, “StructFormer: Learning Spatial Structure for Language-Guided Semantic Rearrangement of Novel Objects”

Planning and Learning

H. Nichols, M. Jimenez, Z. Goddard, M. Sparapany, B. Boots, and A. Mazumdar, “Adversarial Sampling-Based Motion Planning”

Industrial and Environmental Robotics and Monitoring

O.L. Ouabi, A. Ridani, P. Pomarede, N. Zeghidour, N. F. Declercq, M. Geist, and C. Pradalier, “Combined Grid and Feature-Based Mapping of Metal Structures with Ultrasonic Guided Waves”

Physical HRI

K. Puthuveetil, C. C. Kemp, and Z. Erickson, “Bodies Uncovered: Learning to Manipulate Real Blankets Around People Via Physics Simulations”

Deep Learning for Visual Perception I

J-J. Tian, N. C. Mithun, Z. Seymour, H-P. Chiu, and Z. Kira, “Striking the Right Balance: Recall Loss for Semantic Segmentation”

Sensing and Dynamics

K. Van Wyk, M. Xie, A. Li, M. A. Rana, B. Babich, B. Peele, Q. Wan, I. Akinola, B. Sundaralingam, D. Fox, B. Boots, and N. Ratliff, “Geometric Fabrics: Generalizing Classical Mechanics to Capture the Physics of Behavior”

 

Thu 26 May 2022

 

Multi-Robot Learning

J. Banfi, A. Messing, C. Kroninger, E. Stump, S. Hutchinson, and N. Roy, “Hierarchical Planning for Heterogeneous Multi-Robot Routing Problems Via Learned Subteam Performance”

Planning under Uncertainty I

D. Fan, S. Dey, A.A. Agha-Mohammadi, and E. Theodorou, “Learning Risk-Aware Costmaps for Traversability in Challenging Environments”

RGB-D Perception II

M.Z. Irshad, T. Kollar, M. Laskey, K. Stone, and Z. Kira, “CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation”

Autonomous Vehicle Navigation II / Localization and Mapping

K-T. Lee, D. Isele, E. Theodorou, and S-J. Bae, “Spatiotemporal Costmap Inference for MPC Via Deep Inverse Reinforcement Learning”

Wearable Robotics and Human Augmentation

J. M. Li, D. D. Molinaro, A. S. King, A. Mazumdar, and A. J. Young, “Design and Validation of a Cable-Driven Asymmetric Back Exosuit”

Soft Sensors and Actuators

M. Musa, S. Sengupta, and Y. Chen, “MRI-Compatible Soft Robotic Sensing Pad for Head Motion Detection”

Field Robotics II / Multi Robot

M. A. Murtaza and S. Hutchinson, “Consensus in Operational Space for Robotic Manipulators with Task and Input Constraints”

Planning under Uncertainty I

E. Seraj, L-T. Chen, and M. Gombolay, “A Hierarchical Coordination Framework for Joint Perception-Action Tasks in Composite Robot Teams”

Wearable Robots and Interfaces

M. K. Shepherd, D. D. Molinaro, G. S. Sawicki, and A. J. Young, “Deep Learning Enables Exoboot Control to Augment Variable-Speed Walking”

Vision-Based Navigation II

M. Sorokin, J. Tan, K. Liu, and S. Ha, “Learning to Navigate Sidewalks in Outdoor Environments”

Planning under Uncertainty II

D-L. Zheng, J. Ridderhof, P. Tsiotras, and A.A. Agha-Mohammadi, “Belief Space Planning: A Covariance Steering Approach”

Related Media

Reactive Locomotion Decision-Making and Planning for Real-Time Perturbation Recovery
Simultaneous Control and Trajectory Estimation for Collision Avoidance
Graph-based Cluttered Scene Generation and Interactive Exploration using Deep Reinforcement Learning
GTGraffiti: Graffiti Art from Human Painting Motions w/ a Cable Driven Parallel Robot

Workshops

 

ICRA 2022 Workshop Highlight: Online Machine Learning-Based Control of Lower Limb Exoskeletons

 

Robotic lower-limb exoskeletons can augment human mobility and assist individuals with mobility impairments. Conventionally, these systems generate parallel joint torques that mimic the user’s underlying biological joint demand during ambulation. Unfortunately, because human movement during daily locomotor activities is highly dynamic, it is challenging to develop a control framework that captures the full range of intended movements. Recent breakthroughs in machine learning (ML), however, allow the user’s state to be estimated in real time, enabling robust control of these wearable systems during dynamic locomotion. While these ML-based strategies show exciting promise, critical hurdles remain before such interventions can be deployed in the real world. Challenges include positive feedback loops between actuation and sensing, data requirements for user-independent models, model robustness to unseen mobility contexts, transitions between locomotion modes, and sensor shifting. In general, there have been few attempts to tackle the critical problem of translating and generalizing laboratory-based ML approaches to real-world, large-scale applications. In this workshop, we will address these challenges from multiple perspectives (both high-level and practical, academic and industrial) and provide roadmaps for future exoskeleton developers looking to incorporate ML-based controllers in their applications.

Invited Speakers and Panelists

Dr. Aaron Young, Assistant Professor, Georgia Institute of Technology
Dr. Keehong Seo, Principal Engineer, Samsung Electronics
Dr. Helen Huang, Professor, North Carolina State University and University of North Carolina
Dr. Elliott Rouse, Assistant Professor, University of Michigan
Dr. Brokoslaw Laschowski, Postdoctoral Research Fellow, Toronto Rehabilitation Institute and University of Toronto

 

May 27, 2022 | Integrating Multidisciplinary Approaches to Advance Physical Human-Robot Interaction Workshop: Pushing Beyond Locomotion Economy – What Can Exoskeletons Do on the Shortest and Longest Timescales?

 

Featuring Gregory Sawicki | Professor, Georgia Institute of Technology, Dept. of Mechanical Engineering & Dept. of Biological Sciences

The goal of the Human Physiology of Wearable Robotics (PoWeR) Laboratory is to discover and exploit key principles of locomotion neuromechanics in order to build wearable devices that can augment intact and/or restore impaired human locomotion. Performance goals include improving the economy, stability, and agility of human movement.

Over the last 5+ years, enabled by non-invasive dynamic ultrasound imaging, our lab has taken a deep dive ‘under the skin’ to reveal how exoskeletons can shift muscle-level contractile dynamics and improve locomotion economy. While reducing metabolic effort is an important use case for exoskeletons, our muscle-level perspective has motivated a number of follow-on questions that go beyond metabolic energetics: How do exoskeletons impact sensory feedback? Can exoskeletons improve balance recovery? Can novel exoskeleton controllers that use real-time, ‘muscle-in-the-loop’ feedback shape in vivo muscle structure-function over long timescales (e.g., months to years)? Along these lines, I’ll share some snapshots of our more recent work, focusing on experiments to (i) evaluate exoskeleton systems (e.g., hip) during unsteady locomotion, when balance is challenged, and (ii) characterize long-term structure-function effects of daily use of wearable devices.

Topics of Interest

Physical human-robot collaboration
Ergonomics
Human motor and neuromuscular control
Cognitive aspects of human-robot collaboration
Physiology and biomechanics of human movement
Human adaptation and learning
Human performance augmentation
Design and control of robotic assistive devices