Aditya Kapoor
I am an ELLIS PhD student jointly at the University of Manchester and TU Darmstadt, where I am advised by Dr. Mingfei Sun and Prof. Jan Peters. My research sits at the intersection of foundational models, reinforcement learning, and multi-agent systems, with a particular focus on building agents capable of effective communication and collaboration. I collaborate closely with Yilun Du and Benjamin Freed. Prior to my PhD, I was a predoctoral researcher at Tata Consultancy Services Research & Innovation in Mumbai, working with Dr. Mayank Baranwal and Dr. Harshad Khadilkar, and I also briefly worked with Dr. Vighnesh Vatsal and Dr. Jay Gubbi at TCS Bangalore. I completed my Bachelor of Engineering in Computer Science at BITS Pilani, Goa.
My research focuses on developing intelligent, embodied agents that can operate within complex multi-agent societies. Drawing inspiration from human social systems, where individuals communicate, collaborate, and coordinate seamlessly to accomplish shared objectives, my goal is to create agents capable of similarly rich interaction. To this end, I leverage foundational models and reinforcement learning so that agents can not only interpret and adapt to their environment but also dynamically adjust their roles and communication patterns within a group.

In this setting, foundational models serve as building blocks for core agent functions such as perception, communication, and decision-making. By embedding these models within a multi-agent reinforcement learning framework, each agent can selectively draw on shared knowledge and learn to respond to other agents' behaviors, improving both individual performance and collective outcomes. This approach also facilitates task decomposition and planning, enabling agents to break complex objectives into manageable sub-tasks and coordinate with others to achieve them. A central challenge is designing agents that balance autonomous action with collaborative roles, much as humans contribute unique skills within a team. By combining reinforcement learning with foundational models, my work aims to foster agents that not only optimize individual rewards but also support collective goals. Ultimately, this approach holds promise for building scalable, adaptable multi-agent societies that communicate efficiently, learn continuously, and tackle challenges requiring coordinated, intelligent interaction across diverse environments and tasks.

Feel free to get in touch if you are interested in working with me via email at aditya [dot] kapoor [at] postgrad [dot] manchester [dot] ac [dot] uk. This information was last updated in November 2024.
CV | Google Scholar | GitHub | LinkedIn | Twitter
Assigning Credit with Partial Reward Decoupling in Multi-Agent Proximal Policy Optimization
DeepClean: Integrated Distortion Identification and Algorithm Selection for Rectifying Image Corruptions
Long-Horizon Planning for Multi-Agent Robots in Partially Observable Environments
Systems and methods for anomaly detection and correction
Methods and systems for autonomous task composition of vision pipelines using an algorithm selection framework
SocNavGym: A Reinforcement Learning Gym for Social Navigation
Concept-based Anomaly Detection in Retail Stores for Automatic Correction using Mobile Robots
Learning Cooperative Multi-Agent Policies with Partial Reward Decoupling
Challenges in Applying Robotics to Retail Store Management
Auto-TransRL: Autonomous Composition of Vision Pipelines for Robotic Perception
Supervised category learning: When do participants use a partially diagnostic feature?
A toolkit to generate social navigation datasets
Effect of a colour-based descriptor and stimuli presentation mode in unsupervised categorization