Overview

I am excited to automatically synthesize task-driven perception and control that are robust to task-irrelevant distractors.

Attention Switching

How can a robot (i) determine which portions of its environment to pay attention to at any given point in time, (ii) infer changes in context (e.g., task or environment dynamics), and (iii) switch its attention accordingly in order to operate reliably in complex, uncertain, and time-varying environments?
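As a toy illustration of the Bayesian-inference flavor of this question (a hypothetical sketch, not the method from the paper below), a robot can maintain a posterior over a discrete set of candidate contexts and switch attention once the posterior concentrates. The contexts, likelihoods, and threshold here are all made up for illustration:

```python
import numpy as np

def update_posterior(prior, likelihoods):
    """One Bayes update over discrete contexts: posterior ∝ likelihood × prior."""
    unnormalized = likelihoods * prior
    return unnormalized / unnormalized.sum()

# Three candidate contexts (e.g., abstractions of the environment),
# initially equally likely.
belief = np.ones(3) / 3.0

# Likelihood of the latest observation under each context, assumed to come
# from per-context observation models (values here are illustrative).
obs_likelihood = np.array([0.7, 0.2, 0.1])

belief = update_posterior(belief, obs_likelihood)

# Switch attention to the most probable context once the belief is
# sufficiently concentrated (threshold is arbitrary here).
if belief.max() > 0.6:
    active_context = int(belief.argmax())
```

Running updates of this form recursively as observations arrive is the standard recipe for inferring a time-varying latent context online.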

Up next: Active attention switching to minimize context uncertainty


Related Publication:

"Switching Attention in Time-Varying Environments via Bayesian Inference of Abstractions" 

M. Booker and A. Majumdar

International Conference on Robotics and Automation (ICRA) 2023. [arXiv] [Video]

Minimal Memory Representations

How do we jointly synthesize minimal memory representations and policies for a robot?
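A toy example of what "minimal memory" can mean (a hypothetical illustration, not the synthesis procedure from the paper below): in a corridor task where a cue seen at the start determines which way to turn at the end, a single remembered bit suffices, so the full observation history is unnecessary. The task and names here are invented:

```python
def one_bit_policy(memory, observation):
    """Update a 1-bit memory and choose an action.

    The robot sees a cue ("left"/"right") once at the start of a corridor
    and must turn the matching way at the junction; one bit of memory is
    enough to carry the task-relevant history.
    """
    if observation in ("left", "right"):
        memory = observation == "right"   # compress the cue to one bit
    if observation == "junction":
        action = "turn_right" if memory else "turn_left"
    else:
        action = "forward"
    return memory, action

# Simulate one corridor traversal with a "left" cue.
memory = False
actions = []
for obs in ["left", "hall", "hall", "junction"]:
    memory, action = one_bit_policy(memory, obs)
    actions.append(action)
# actions[-1] == "turn_left": the stored bit recovers the cue at the junction.
```

The joint synthesis question is then which bits like this a policy actually needs, and how to learn the memory update and the policy together.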

Related Publication:

"Learning to Actively Reduce Memory Requirements for Robot Control Tasks"

M. Booker and A. Majumdar

Learning for Dynamics and Control (L4DC) 2021. [Paper] [Code] [Video]

Collaborations!

Online Learning for Obstacle Avoidance

How do we find a policy that adapts online to realizations of uncertainty and provably compares well with the best obstacle avoidance policy in hindsight?
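The "compares well with the best policy in hindsight" criterion is the regret notion from online learning. As a hedged sketch of the classic no-regret template behind this line of work (online gradient descent on illustrative quadratic losses, not the paper's obstacle-avoidance costs):

```python
import numpy as np

def ogd_step(x, grad, lr, radius=1.0):
    """One online-gradient-descent update: gradient step, then projection onto a ball."""
    x = x - lr * grad
    norm = np.linalg.norm(x)
    if norm > radius:
        x = x * (radius / norm)
    return x

# An adversary reveals targets c_t one at a time; the learner suffers
# loss ||x_t - c_t||^2 before seeing c_t (alternating targets, for illustration).
targets = np.array([[0.5, 0.0], [-0.5, 0.0]] * 50)

x = np.zeros(2)
total_loss = 0.0
for t, c in enumerate(targets, start=1):
    total_loss += float(np.sum((x - c) ** 2))
    x = ogd_step(x, grad=2.0 * (x - c), lr=0.5 / np.sqrt(t))

# The best *fixed* decision in hindsight is the mean of the targets.
best = targets.mean(axis=0)
best_loss = float(np.sum((targets - best) ** 2))
avg_regret = (total_loss - best_loss) / len(targets)  # shrinks as T grows
```

The guarantee for this template is sublinear regret: cumulative loss exceeds that of the best fixed decision in hindsight by at most O(√T), so the average gap vanishes.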

Related Publication:

"Online Learning for Obstacle Avoidance"

D. Snyder, M. Booker, N. Simon, W. Xia, D. Suo, E. Hazan, and A. Majumdar

Conference on Robot Learning (CoRL) 2023

[Paper]


Generalization Guarantees for Perception-Based Planning and Control

How do we learn a perception module, with accompanying guarantees, such that the corresponding control policy still performs well in novel environments?

In progress!