The focus of my research is to design and investigate effective and efficient methods for human-swarm interaction. Communicating with a robot or a machine has been an intriguing field of research since Nikola Tesla engineered the first remote-controlled motorboat. Since then, researchers have devised many interaction devices, methods, and schemes. Now imagine that, instead of one robot, a human is instructed to interact with a swarm of robots: would this be as simple or intuitive as interacting with one robot? Are the traditional interfaces used for interacting with one robot sufficient for interacting with a swarm? Is it more efficient to interact with each member of a swarm individually, with the swarm as a whole, or through some other mode of interaction? To answer these questions, I am investigating and designing interfaces and methods for interacting with a robot swarm.

During my graduate studies, I have also completed projects on object detection using computer vision, robot navigation and localization, robot interfaces, and robotic simulation platforms. My M.S. thesis involved designing an indoor localization and navigation system for a smart wheelchair. You can check out my YouTube Channel for some research videos.

Key Projects

  • Jan 2019 – Present

    Improving Human Performance in Multi-Human Multi-Robot Interaction (Ph.D. Dissertation)

    Designed an augmented reality-based and a cloud-based interface for multi-human multi-robot interaction. Evaluated various research aspects with 8 user studies involving 122 participants. (VIDEO)

  • Jan 2016 – Present

    Human-Swarm Interaction for Disaster Management

    The project studies interaction methods and devices for human-swarm interaction, including EMG band, augmented reality, and point-and-click interfaces. Collective transport is used to test the framework. (VIDEO)

  • Jun 2016 – Dec 2016

    Author Age and Gender Prediction from Written Samples

    Implemented and evaluated author age and gender prediction from written samples using K-Nearest Neighbors, Decision Tree, and Support Vector Machine classifiers.
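    The comparison workflow can be sketched as follows. This is a minimal illustration assuming scikit-learn; the samples, labels, and bag-of-words features below are toy placeholders, not the project's actual corpus or feature set:

    ```python
    # Compare KNN, Decision Tree, and SVM on a toy author-profiling task.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Toy writing samples with age-group labels (placeholders).
    samples = [
        "omg that movie was so cool lol",
        "please find the quarterly report attached",
        "cant wait for the weekend with my friends",
        "kindly review the minutes of the meeting",
    ]
    labels = ["young", "adult", "young", "adult"]

    # Bag-of-words features extracted from the written samples.
    X = CountVectorizer().fit_transform(samples)

    classifiers = {
        "knn": KNeighborsClassifier(n_neighbors=1),
        "tree": DecisionTreeClassifier(random_state=0),
        "svm": SVC(kernel="linear"),
    }

    for name, clf in classifiers.items():
        clf.fit(X, labels)
        # In practice, accuracy would be measured on held-out samples.
        acc = (clf.predict(X) == labels).mean()
        print(f"{name}: training accuracy {acc:.2f}")
    ```

    The same fit/predict loop makes it easy to swap in other feature extractors (e.g. character n-grams) without touching the classifier code.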

  • July 2015 – May 2016

    Gesture-Based Navigation and Localization of a Smart Wheelchair using Fiducial Markers (MS Thesis)

    The project revolved around designing an EMG interface for navigation of a smart wheelchair for specially-abled people. It also included an AprilTag-based indoor localization technique for the wheelchair to ensure the patient's safety. (VIDEO)
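    The core localization idea can be sketched in 2D: given a map of known tag poses and a detected tag's pose relative to the camera, the wheelchair's world pose follows from a transform composition. This is a simplified illustration with hypothetical tag-map and detection values; the real system works with full AprilTag detections:

    ```python
    import math

    def se2(x, y, theta):
        """Homogeneous 2D rigid transform (rotation theta, translation x, y)."""
        c, s = math.cos(theta), math.sin(theta)
        return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

    def compose(a, b):
        """Matrix product of two 3x3 homogeneous transforms."""
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    def invert(t):
        """Inverse of a 2D rigid transform: (R, p) -> (R^T, -R^T p)."""
        c, s, x, y = t[0][0], t[1][0], t[0][2], t[1][2]
        return [[c, s, -(c * x + s * y)], [-s, c, s * x - c * y], [0.0, 0.0, 1.0]]

    # Known map entry: world pose of one fiducial tag (hypothetical values).
    tag_world = se2(2.0, 1.0, math.pi / 2)
    # Detector output: the tag is seen 0.5 m straight ahead of the camera.
    tag_in_cam = se2(0.5, 0.0, 0.0)

    # Wheelchair (camera) pose: T_world_cam = T_world_tag * inv(T_cam_tag).
    cam_world = compose(tag_world, invert(tag_in_cam))
    x, y = cam_world[0][2], cam_world[1][2]
    heading = math.atan2(cam_world[1][0], cam_world[0][0])
    print(f"wheelchair at ({x:.2f}, {y:.2f}), heading {math.degrees(heading):.0f} deg")
    ```

    With several tags visible, the per-tag pose estimates would typically be fused (e.g. averaged or filtered) for a more robust position fix.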

  • July 2015 – Dec 2015

    Gesture-based sensor fusion SLAM

    Fused EMG and camera data to localize the robot while generating a map of the environment. (VIDEO)

  • Jan 2015 – May 2015

    ROS-GAZEBO Simulation of Neurosurgery Robot

    The project was to create a Gazebo-based simulator for testing and analyzing a neurosurgery robot. It enabled the robot developers to plan the robot's trajectory inside an MRI machine while avoiding obstacles. (VIDEO)

  • Jan 2012 – May 2013

    Automated Reliable Effective and Intelligent Security System (B.Engg. Final Project)

    Implemented a security system that secures the doors and windows on intruder alert, ensuring the safety of everyone in the house. The system can also lock and unlock doors and windows from anywhere in the world with a simple phone call. A patent application has been filed for the project. (VIDEO)


Publications

  • Submitted to Swarm Intelligence

    On Multi-Human Multi-Robot Remote Interaction: A Study of Transparency, Inter-Human Communication, and Information Loss in Remote Interaction

    In this paper, we investigate how to design an effective interface for remote multi-human multi-robot interaction. While significant research exists on interfaces for individual human operators, little research exists for the multi-human case. Yet, this is a critical problem to solve to make achievable complex, large-scale missions in which direct human involvement is impossible or undesirable, and robot swarms act as semi-autonomous agents. This paper's contribution is twofold. The first contribution is an exploration of the design space of computer-based interfaces for multi-human multi-robot operations. In particular, we focus on information transparency and on the factors that affect inter-human communication in ideal conditions, i.e., without communication issues. Our second contribution concerns the same problem, but considering increasing degrees of information loss, defined as intermittent reception of data with noticeable gaps between individual receipts. We derived a set of design recommendations based on two user studies involving 48 participants. arXiv Link.

  • Submitted to IEEE THMS

    Direct and Indirect Communication in Multi-Human Multi-Robot Interaction

    Communication is essential for effective teamwork, whether between humans or between humans and robots. Researchers have extensively studied different aspects of human-human communication and human-robot communication. However, human-human communication in a multi-human multi-robot scenario has never been explored. Humans can engage directly through verbal communication, indirectly by representing their actions and intentions through technology, or with a mix of both. In this paper, we study how different communication modes affect operators' awareness, workload, trust, and usability in a multi-human multi-robot system. We report a user study based on a collective transport task involving 18 human subjects and 9 robots. The operators repeated the task with different communication modes: no communication, direct communication, indirect communication, and mixed communication. We investigate and compare subjective and objective measures to understand the effects of each mode on multi-human multi-robot interaction. arXiv Link.

  • Submitted to RA-L

    Transparency in Multi-Human Multi-Robot Interaction

    Transparency is a key factor in improving the performance of human-robot interaction. When multi-robot systems are involved, transparency is an even greater challenge, due to the larger number of variables affecting the behavior of the robots as a whole. Significant effort has been devoted to studying transparency when single operators interact with multiple robots. However, to the best of our knowledge, studies on transparency that focus on multiple human operators interacting with a multi-robot system are currently missing. This paper aims to fill this gap by comparing four transparency modes: (i) no transparency (no operator receives information from the robots), (ii) central transparency (the operators receive information only relevant to their personal task), (iii) peripheral transparency (the operators share information on each others' tasks), and (iv) mixed transparency (both central and peripheral). We report the results in terms of awareness, trust, and workload of a user study involving 18 participants engaged in a complex multi-robot task. arXiv Link.

  • RO-MAN 2020

    Improving Human Performance Using Mixed Granularity of Control in Multi-Human Multi-Robot Interaction

    Due to the potentially large number of units involved, the interaction with a multi-robot system is likely to exceed the limits of the span of apprehension of any individual human operator. In previous work, we studied how this issue can be tackled by interacting with the robots in two modalities --- environment-oriented and robot-oriented. In this paper, we study how this concept can be applied to the case in which multiple human operators perform supervisory control on a multi-robot system. While the presence of extra operators suggests that more complex tasks could be accomplished, little research exists on how this could be achieved efficiently. In particular, one challenge arises --- the out-of-the-loop performance problem caused by a lack of engagement in the task, awareness of its state, and trust in the system and in the other operators. Through a user study involving 28 human operators and 8 real robots, we study how the concept of mixed granularity in multi-human multi-robot interaction affects user engagement, awareness, and trust while balancing the workload between multiple operators. Paper Link. arXiv Link. VIDEO.

  • ICRA 2019

    Mixed-Granularity Human-Swarm Interaction

    We present an augmented reality human-swarm interface that combines two modalities of interaction: environment-oriented and robot-oriented. The environment-oriented modality allows the user to modify the environment (either virtual or physical) to indicate a goal to attain for the robot swarm. The robot-oriented modality makes it possible to select individual robots to reassign them to other tasks to increase performance or remedy failures. Previous research has concluded that environment-oriented interaction might prove more difficult to grasp for untrained users. In this paper, we report a user study which indicates that, at least in collective transport, environment-oriented interaction is more effective than purely robot-oriented interaction, and that the two combined achieve remarkable efficacy. Paper Link. arXiv Link. VIDEO.