I was employed as an Applied Scientist at Amazon Alexa AI. My focus was on efficient enterprise-level ML platforms that powered Alexa’s model-training infrastructure, used by over 1,500 scientists training hundreds of models every day. I primarily worked on cost-efficient and scalable distributed training environments, in both engineering and research capacities. The problem spaces I focused on were neural network compression via neural architecture search and predictive early-stopping algorithms. Some of the work I did on this team has been featured at EMNLP 2020.
Previously, I was an Applied Scientist at Amazon Web Services AI Labs. I was part of the AWS SageMaker launch team and was involved in the development of several SageMaker CV algorithms, with primary ownership of the Object Detection and Semantic Segmentation algorithms. I was also a member of the launch team of SageMaker RL, where I owned and launched model compression using RL that became a significant part of the keynote address at re:Invent 2019. I also worked on domain-adaptation algorithms for SageMaker CV, which led to both products and publications, including an oral paper at CVPR 2019.
For obvious reasons, some of the work performed while I was employed at Amazon cannot be made public here. Work that has already launched or been made public is listed below.
I published an arXiv preprint titled "Out-of-the-box channel pruned networks".
I have a patent issued on privacy-preserving ML.
I helped launch the Amazon SageMaker RL project for re:Invent 2018. One of the pieces I was involved with was neural network compression using reinforcement learning, in collaboration with GE Healthcare.
I developed and launched the Amazon SageMaker Semantic Segmentation algorithm.
I published a blog post demonstrating how to bring your own TensorFlow or MXNet models into SageMaker.
I participated in the launch of the Amazon SageMaker Object Detection algorithm.