About me

Greetings, I’m Sid. I’m a PhD student in the Computer Science department at Boston University, advised by Kate Saenko as part of her research group within the Image and Video Computing (IVC) group at BU.

Before starting my PhD at BU in the Fall of 2017, I worked for a year as a research assistant under Kostas Daniilidis at the University of Pennsylvania (Penn), as part of the GRASP Lab, where I had also completed my MSE in Robotics in 2016. Prior to joining Penn, I obtained an MEng in Mechatronic Engineering from the University of Nottingham in 2014.

Research Interests

My research primarily involves studying domain adaptation and generalization for applications of reinforcement learning (RL) in games and robotics. We investigate techniques to bridge domain gaps when transferring control policies learned over different distributions of domains/tasks.

I am broadly interested in researching techniques and tools that exploit the versatility of data generated in simulations to develop more robust AI, with tentative applications in robotics, creative tools, mixed reality, and game development. I am especially interested in utilizing video games and game engines to improve robustness through simulated environments, with the goal of constructing sandbox environments and training schemes that in turn allow for the development of more robust machine learning (ML) policies.

As a part of the Image and Video Computing group, I am also generally interested in applications of computer vision, with a particular focus on real-time perception, motivated by my background in robotics.

PhD Work

I am currently studying how RL agents can be motivated to learn specific behavior styles and switch efficiently between them; this work is being done in collaboration with researchers at Electronic Arts (EA). In the Summer of 2020, my internship work at EA focused on training RL agents to play games while learning specific (desired) behavior styles during gameplay, and on techniques for switching between styles at runtime. My internship work in 2021 extended this to explore applications of imitation learning in conjunction with reinforcement learning, allowing for a more natural definition of style in games through demonstration. (Specifics pending publication.)

Since Spring 2019, I have also been involved with research around the neuroflight platform, which seeks to build a neural-network-based flight controller for high-performance racing drones. My work on this platform has so far focused on improving the smoothness of learned RL policies, and we are now extending this research into online learning and controller certification. Our work on smooth control with RL has been published at ICRA 2021 and in TCPS 2021.

Additionally, I have been involved in research employing RL in the Computer Aided Design (CAD) process to help users ensure their designs satisfy specific physical properties. Results from this work have been accepted for publication at SCF 2021.

Note(s):

Visitors, be aware: content blockers are known to block some graphics elements on this page.

The CS585 coursework tab has been archived; it is still available here.

Powered by the academicpages template and hosted on GitHub Pages