Greetings, I’m Sid. I’m a PhD student in the Computer Science Department at Boston University, where I work under the advisement of Kate Saenko as part of her research group within the Image and Video Computing (IVC) group at BU.
Before starting my PhD at BU in the Fall of 2017, I worked for a year as a research assistant under Kostas Daniilidis at the University of Pennsylvania (Penn), as a part of the GRASP Lab, where I had also completed my MSE in Robotics in 2016. Prior to joining Penn, I obtained an MEng in Mechatronic Engineering from the University of Nottingham in 2014.
I am broadly interested in researching techniques and tools that exploit the versatility of data generated in simulations to develop more robust AI, with tentative applications in robotics, creative tools, mixed reality, and game development. I am especially interested in utilizing video games and game engines to construct sandbox environments and training schemes that, in turn, allow for the development of more robust machine learning (ML) policies. As a part of the Image and Video Computing group, I am also generally interested in applications of computer vision, with a particular focus on real-time perception, motivated by my background in robotics.
My current research primarily involves studying domain adaptation and generalization for applications of reinforcement learning (RL) in games and robotics. We investigate techniques to bridge domain gaps when applying control policies learned over different distributions of domains and tasks.
Additionally, I am involved in research employing RL in the Computer Aided Design (CAD) process to help users ensure their designs satisfy specific physical properties.
Since Spring ‘19, I have also been involved with research around the Neuroflight platform, which seeks to build a neural-network-based flight controller for high-performance racing drones.
A problem I’ve become increasingly interested in is that of intentional and active cooperation, whether between multiple AI agents or between AI and humans. Cooperation appears to be a fundamentally different problem from standard single-agent optimization, and that makes sense: defining optimality criteria for cooperation is complicated, to put it mildly. It is, however, an extremely important problem to solve for any tools that are meant to work alongside humans. This is a relatively new line of inquiry for me, but I’m always happy to chat more about it or any of my other interests.
Visitors be aware: there are known issues with ad-blockers blocking some graphics elements on this page (though the cause is still unclear).