Aditya Kapoor


I am an ELLIS PhD student jointly at the University of Manchester and TU Darmstadt, where I am advised by Dr. Mingfei Sun and Prof. Jan Peters. My research lies at the intersection of foundational models, reinforcement learning, and multi-agent systems, with a particular focus on building agents capable of effective communication and collaboration. I collaborate closely with Yilun Du and Benjamin Freed. Prior to my PhD, I was a predoctoral researcher at Tata Consultancy Services Research & Innovation in Mumbai, working with Dr. Mayank Baranwal and Dr. Harshad Khadilkar, and I also worked briefly with Dr. Vighnesh Vatsal and Dr. Jay Gubbi at TCS Bangalore. I completed my Bachelor of Engineering in Computer Science at BITS Pilani, Goa.

My research focuses on developing intelligent, embodied agents that can operate within complex, multi-agent societies. Drawing inspiration from human social systems, in which individuals communicate, collaborate, and coordinate seamlessly to accomplish shared objectives, I aim to create agents capable of similar high-level interactions. To this end, I leverage foundational models and reinforcement learning to enable agents not only to interpret and adapt to their environment but also to dynamically adjust their roles and communication patterns within a group.

In this context, foundational models serve as building blocks for core agent functions such as perception, communication, and decision-making. When these models are embedded within a multi-agent reinforcement learning framework, each agent can selectively draw on shared knowledge and learn to respond to other agents' behaviors, improving both individual performance and collective outcomes. This approach also facilitates task decomposition and planning, enabling agents to break down complex objectives into manageable sub-tasks and coordinate with others to achieve them.

One of the major challenges is to design agents that balance autonomous action with collaborative roles—essentially mirroring how humans contribute unique skills within a team. By combining reinforcement learning with foundational models, my work aims to foster agents that not only optimize individual rewards but also support collective goals. Ultimately, this approach holds promise for building scalable, adaptable multi-agent societies that communicate efficiently, learn continuously, and address challenges that require coordinated, intelligent interactions across diverse environments and tasks.

If you are interested in working with me, feel free to get in touch via email at aditya [dot] kapoor [at] postgrad [dot] manchester [dot] ac [dot] uk.

This information was last updated in November 2024.

CV   |   Google Scholar   |   GitHub   |   LinkedIn   |   Twitter

Publications


Assigning Credit with Partial Reward Decoupling in Multi-Agent Proximal Policy Optimization
Aditya Kapoor, Benjamin Freed, Howie Choset, Jeff Schneider
arXiv
PDF

DeepClean: Integrated Distortion Identification and Algorithm Selection for Rectifying Image Corruptions
Aditya Kapoor, Harshad Khadilkar, Jayavardhana Gubbi
arXiv
PDF

Long-Horizon Planning for Multi-Agent Robots in Partially Observable Environments
Siddharth Nayak, Adelmo Morrison Orozco, Marina Ten Have, Vittal Thirumalai, Jackson Zhang, Darren Chen, Aditya Kapoor, Eric Robinson, Karthik Gopalakrishnan, James Harrison, Brian Ichter, Anuj Mahajan, Hamsa Balakrishnan
Neural Information Processing Systems (NeurIPS), 2024
PDF

Systems and methods for anomaly detection and correction
Jayavardhana Rama Gubbi Lakshminarasimha, Vartika Sengar, Vighnesh Vatsal, Balamuralidhar Purushothaman, Arpan Pal, Nijil George, Aditya Kapoor
US Patent App. 18/199,708
Link

Methods and systems for autonomous task composition of vision pipelines using an algorithm selection framework
Abhishek Roy Choudhury, Vighnesh Vatsal, Mehesh Rangarajan, Naveen Kumar Basa Anitha, Aditya Kapoor, Jayavardhana Rama Gubbi Lakshminarasimha, Aravindhan Saravanan, Vartika Sengar, Balamuralidhar Purushothaman, Arpan Pal, Nijil George
US Patent App. 18/199,708
Link

SocNavGym: A Reinforcement Learning Gym for Social Navigation
Aditya Kapoor*, Sushant Swamy*, Pilar Bachiller, Luis Manso (* indicates equal contribution)
IEEE RO-MAN, 2023
PDF

Concept-based Anomaly Detection in Retail Stores for Automatic Correction using Mobile Robots
Aditya Kapoor, Vartika Sengar, Nijil George, Vighnesh Vatsal, Jayavardhana Gubbi, Balamuralidhar P, Arpan Pal
IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2023
PDF

Learning Cooperative Multi-Agent Policies with Partial Reward Decoupling
Benjamin Freed*, Aditya Kapoor*, Ian Abraham, Jeff Schneider, Howie Choset (* indicates equal contribution)
IEEE RA-L, presented at ICRA 2022
PDF

Challenges in Applying Robotics to Retail Store Management
Vartika Sengar*, Aditya Kapoor, Nijil George*, Vighnesh Vatsal*, Jayavardhana Gubbi, Balamuralidhar P, Arpan Pal (* indicates equal contribution)
ICRA 2022 Workshop - Challenges in Applying Academic Research to Real-World Robotics
PDF

Auto-TransRL: Autonomous Composition of Vision Pipelines for Robotic Perception
Aditya Kapoor, Vartika Sengar, Nijil George, Vighnesh Vatsal, Jayavardhana Gubbi
ICRA 2022 Workshop - Robotic Perception and Mapping: Emerging Techniques
PDF

Supervised category learning: When do participants use a partially diagnostic feature?
Sujith Thomas, Aditya Kapoor, Narayanan Srinivasan
CogSci 2021
PDF

A toolkit to generate social navigation datasets
Rishabh Baghel, Aditya Kapoor, Pilar Bachiller, Ronit R Jorvekar, Daniel Rodriguez-Criado, Luis J Manso
Workshop of Physical Agents 2020
PDF

Effect of a colour-based descriptor and stimuli presentation mode in unsupervised categorization
Sujith Thomas, Aditya Kapoor, Narayanan Srinivasan
CogSci 2020
PDF
