Role: full-time
Location: San Francisco, CA (Remote) | July 2021 - July 2024
Context: chippercash.com
Description
Built pieces of the data infrastructure that supported Chipper Cash's growth, commerce, and compliance efforts.
Tools: python, typescript, react-native, node.js, sql
Role: internship
Location: Los Angeles, CA (Remote) | January 2021
Context: saucepricing.com
Description
In this month-long project, I worked closely with the co-founders to implement the customer-facing
front-end of the product. We built the product's pages with React.js and used Apollo and GraphQL to query
and mutate data in a MongoDB database. This was an immense learning experience, and I grew more confident
with back-end development through it.
Tools: javascript, react.js, apollo, graphql, mongodb
Role: project
Location: Networks course | Cambridge, MA | Fall 2020
Context
At its core, the problem of congestion control is about managing the behavior of senders
in a network to optimize for throughput and fairness. Dozens of protocols have been designed since the birth
of TCP, some of them variants of it and others entirely new. Aurora is one such protocol. Introduced in 2019,
it was inspired by the recent success of reinforcement learning in domains that involve long-term
decision-making. Aurora's major strength lies in its ability to quickly adapt to new network conditions such
as changes in link bandwidths, queue sizes, latencies, and packet loss rates. In evaluations alongside
standard protocols such as TCP CUBIC, that adaptability has been shown to give Aurora a significant edge in
link utilization.
Description
An open challenge that remains is designing a mechanism that ensures Aurora senders share the network's
resources fairly with senders using other protocols. The reward function used to train Aurora focuses solely
on high throughput, low latency, and low packet loss. It has been reasonably speculated that, given what that
reward function emphasizes, an Aurora sender placed alongside TCP senders would learn behavior that forces
those TCP senders to continually back off, driving up its own reward. We saw this as an opportunity to
explore alternative reward functions for training Aurora, so that it not only prioritizes throughput but also
learns how many other senders it is sharing, say, a bottleneck link with, and balances maximizing throughput
against fairly sharing the available bandwidth.
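The full write-up is in the report linked below; to make the direction concrete, here is a minimal Python
sketch of the kind of reward shaping we had in mind. The linear Aurora-style reward, the coefficients, and
the fair-share penalty are illustrative assumptions, not the exact formulations from Aurora's paper or from
our report.

    def aurora_style_reward(throughput, latency, loss,
                            a=10.0, b=1000.0, c=2000.0):
        # Aurora-style reward: a linear combination that favors high
        # throughput and penalizes latency and loss (coefficients illustrative).
        return a * throughput - b * latency - c * loss

    def fairness_aware_reward(throughput, latency, loss,
                              link_capacity, est_num_senders,
                              fairness_weight=5.0):
        # Illustrative variant: also penalize deviation from the sender's
        # estimated fair share of the bottleneck link.
        fair_share = link_capacity / max(est_num_senders, 1)
        fairness_penalty = abs(throughput - fair_share) / link_capacity
        return (aurora_style_reward(throughput, latency, loss)
                - fairness_weight * fairness_penalty)

Estimating the number of competing senders is itself the hard part; in practice it would have to be inferred
from signals the sender can observe, such as changes in RTT and achieved throughput.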
Tools: python
Results: report
Role: project
Location: Networks course | Cambridge, MA | Fall 2020
Context
6.829 is a project-based networks course that covers a broad range of core topics
including congestion control (both end-to-end and network-assisted), routing, decentralized systems, and
more.
Description
The video streaming setup is as follows: a client (e.g., the YouTube app) maintains a buffer of some size.
As the user watches a video, the client actively downloads packets from a server (e.g., a YouTube server)
over a connection with varying throughput. These incoming packets, encoded at multiple bitrates, are queued
in the client's buffer, waiting to be played. The main question is at what bitrate the client should play
the buffered packets so as to optimize an overall measure of Quality of Experience (QoE). More of the
problem description can be found here.
Tools: python
Results: The course staff provided the code that simulated the client and the network. I implemented a feedback-control-based approach that maintained a running average of the connection capacity and nudged the selected bitrate toward that average.
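The repo below has the actual implementation; as a rough sketch of the idea (with the smoothing factor, the
step-wise nudging, and the function names as assumptions rather than the project's exact choices), the
control logic might look like this:

    def update_capacity_estimate(prev_estimate, measured_throughput, alpha=0.2):
        # Exponentially weighted running average of the connection capacity.
        return (1 - alpha) * prev_estimate + alpha * measured_throughput

    def nudge_bitrate(current_index, bitrates, capacity_estimate):
        # `bitrates` is assumed sorted in ascending order. Move one step
        # toward the highest bitrate the estimated capacity can sustain,
        # instead of jumping to it directly.
        sustainable = [i for i, b in enumerate(bitrates) if b <= capacity_estimate]
        target_index = max(sustainable) if sustainable else 0
        if target_index > current_index:
            return current_index + 1
        if target_index < current_index:
            return current_index - 1
        return current_index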
Resources: repo
Role: internship
Location: MIT Media Lab | Cambridge, MA | Summer 2020
Context: personal robots group
Description
A study on misinformation, conducted by a different research group, showed that misinformation spreads about
six times faster than real information. This can have serious implications for activities that involve
collective decision-making, such as voting and protesting. The goal of this NSF-funded project was to raise
young children's awareness of the threats of misinformation and help them be more alert when they're online.
We ended up developing School-book, a fun headline-sharing platform that allowed students to directly witness
how the information they shared with others spread through their network. I was responsible for designing
and developing School-book's front-end and back-end.
Tools: python, javascript, bootstrap, socket.io, react.js, express.js
Results: In a period of ~2 months, we used School-book in 5 workshops with over 100 students in total.
Resources: paper | project page
Role: internship
Location: Affectiva | Remote | Summer 2020
Context
Affectiva is an AI company that uses and develops computer vision and natural language
processing technologies, combined with a lot of in-house curated datasets to do affect appraisal along
multiple modalities. Their software’s applications span multiple industries including gaming, robotics,
education, healthcare, experiential marketing, retail, human resources etc.
Description
For my team's month-long project, we set out to apply Affdex, Affectiva's tool for recognizing a host of
facial expressions, to virtual patient-doctor interactions. Our solution was a web application that performed
real-time emotion recognition over 1:1 video calls. On the doctor's side, we used Affectiva's Affdex SDK and
Zoom's web SDK to process the video feeds from both the doctor and their patient. We then slightly modified
the Zoom interface by adding a 'Get Analysis' button that the doctor could press at the end of the session
to see charts of both their own and their patient's expressions over the session.
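The app itself was written in JavaScript against the Affdex and Zoom SDKs; purely as an illustration of the
kind of summary the 'Get Analysis' view computes, here is a hypothetical Python sketch that collapses
per-frame expression scores into per-expression session averages (the data shapes and names are assumptions,
not the SDKs' actual output format).

    from collections import defaultdict
    from statistics import mean

    def summarize_session(frames):
        # `frames` is a list of {expression: score} dicts for one participant,
        # e.g. as sampled from the expression detector over the course of a call.
        totals = defaultdict(list)
        for frame in frames:
            for expression, score in frame.items():
                totals[expression].append(score)
        # Per-expression averages, ready to be plotted as a summary chart.
        return {expression: mean(scores) for expression, scores in totals.items()}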
Tools: html+css, javascript, zoom sdk
Results: My team and I presented our work to a panel of Affectiva engineers, some of whom had developed Affdex.
Resources: code
Role: project
Location: MIT | Cambridge, MA | Fall & Spring 2019
Context
I did this project as a follow-up to the cartpole-balancing project I write about below.
Description
My objective was to get hands-on experience with some elementary model-free RL algorithms, such as Q-learning,
by applying them to the task of learning the shortest path out of a maze.
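As a minimal sketch of the kind of algorithm involved (the hyperparameters and the Gym-style maze environment
are assumptions, not the exact setup in the code linked below), tabular Q-learning looks roughly like this:

    import numpy as np

    def q_learning(env, n_states, n_actions, episodes=500,
                   alpha=0.1, gamma=0.99, epsilon=0.1):
        # Tabular Q-learning over a Gym-style environment with discrete
        # states and actions (reset/step follow the classic Gym API).
        q = np.zeros((n_states, n_actions))
        for _ in range(episodes):
            state = env.reset()
            done = False
            while not done:
                # Epsilon-greedy action selection.
                if np.random.rand() < epsilon:
                    action = np.random.randint(n_actions)
                else:
                    action = int(np.argmax(q[state]))
                next_state, reward, done, _ = env.step(action)
                # One-step temporal-difference update.
                q[state, action] += alpha * (
                    reward + gamma * np.max(q[next_state]) * (not done)
                    - q[state, action]
                )
                state = next_state
        return q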
Tools: python, openai gym
Results: I was fortunate to deliver a brief presentation on it in a project-based class.
Resources: code
Role: project
Location: MIT | Cambridge, MA | Fall 2019
Context
After taking 6.036 (MIT's introductory ML class), I developed an interest in
reinforcement learning. RL is about training an agent to make a series of optimal decisions so that it's
able to achieve some long-term objective. Areas where RL has been applied include games, robotics, trading,
advertising, and many others.
Description
For this project, my objective was to apply this learning technique to a simple environment: balancing a
pole on a moving cart. A full description of the cartpole problem can be found here.
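The code linked below has the actual implementation; as a minimal sketch of the setup, the interaction loop
with Gym's CartPole environment (using the classic Gym API where step returns (observation, reward, done,
info), and a random policy purely as a placeholder) looks roughly like this:

    import gym

    env = gym.make("CartPole-v1")
    for episode in range(5):
        observation = env.reset()
        total_reward, done = 0.0, False
        while not done:
            action = env.action_space.sample()  # placeholder: random policy
            observation, reward, done, _ = env.step(action)
            total_reward += reward
        print(f"episode {episode}: return {total_reward}")
    env.close()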
Tools: python, openai gym
Results: This was my first RL project! It was more of a hands-on learning experience than anything else.
Resources: code