About me

Greetings, I’m Sid. I’m a PhD student in the Computer Science department at Boston University, advised by Kate Saenko as part of her research group within the Image and Video Computing (IVC) group at BU.

Before starting my PhD at BU in the Fall of 2017, I worked for a year as a research assistant under Kostas Daniilidis at the University of Pennsylvania (Penn), as part of the GRASP Lab, where I had also completed my MSE in Robotics in 2016. Prior to joining Penn, I obtained an MEng in Mechatronic Engineering from the University of Nottingham in 2014.

Research Interests

My research primarily involves studying domain adaptation and generalization for applications of reinforcement learning (RL) in games and robotics. We investigate techniques for bridging domain gaps when transferring control policies across different distributions of domains/tasks.

I am broadly interested in researching techniques and tools that exploit the versatility of data generated in simulation to develop more robust AI, with tentative applications in robotics, creative tools, mixed reality, and game development. I am especially interested in using video games and game engines to construct sandbox environments and training schemes that, in turn, allow for the development of more robust machine learning (ML) policies. As a member of the Image and Video Computing group, I am also generally interested in applications of computer vision, with a particular focus on real-time perception, motivated by my background in robotics.

A problem I’ve become increasingly interested in is intentional and active cooperation, whether between multiple AI agents or between AI and humans. Cooperation is a distinct problem in its own right, and that makes sense: defining optimality criteria for cooperation is complicated, to put it mildly. It is, however, an extremely important problem to solve for any tools meant to work alongside humans. This is a relatively new line of inquiry for me, but I’m always happy to chat more about it or any of my other interests.

Current Work

I am currently studying how RL agents can be motivated to learn specific behavior styles and to switch efficiently between them.

Since Spring ’19, I have also been involved in research around the Neuroflight platform, which seeks to build a neural-network-based flight controller for high-performance racing drones. My work on this platform has so far focused on improving the smoothness of learned RL policies, and we are now extending the research into online learning and controller certification.

In the Summer of 2020, I interned with the Data & AI team at Electronic Arts, where I worked on training RL agents to play games with specific (desired) behavior styles and on techniques for switching between styles at runtime (specifics pending publication). I plan to intern there again in the Summer of 2021.

Additionally, I am involved in research applying RL to the Computer-Aided Design (CAD) process to help users ensure their designs satisfy specific physical properties.

Notes:

Visitors be aware: content blockers are known to interfere with some graphics elements on this page.

The CS585 coursework tab has been archived; it is still available here.
