Chipotle 2.0!

Data

The first project I ever posted to my previous website was about Chipotle, a place that had been in my weekly lunch rotation but that I haven't visited in months because of the pandemic. At the end of that first blog post, I mentioned looking for price discrepancies between nearby locations to see if there are any price arbitrage opportunities. In other words, is there a nearby location you should prefer to minimize costs?

All Locations (Marker Colored by Price)

Zoom in and out of the map to see the price of steak burritos across the country.
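
For the curious, the map is built with Folium (mentioned in the backstory below). Here's a minimal sketch of how markers colored by price can be drawn, assuming a DataFrame of stores with latitude, longitude, and price columns; the column names and values are illustrative, not the real schema.

```python
import folium
import pandas as pd

# Illustrative schema: one row per store with its steak burrito price
stores = pd.DataFrame({
    "latitude": [40.7527, 40.7484],
    "longitude": [-73.9772, -73.9857],
    "price": [10.95, 9.70],
})

def price_color(price, low, high):
    """Bucket a price into a simple green/orange/red scale."""
    third = (high - low) / 3
    if price <= low + third:
        return "green"
    if price <= low + 2 * third:
        return "orange"
    return "red"

m = folium.Map(location=[39.8, -98.6], zoom_start=4)  # roughly centered on the US
low, high = stores["price"].min(), stores["price"].max()
for row in stores.itertuples():
    folium.CircleMarker(
        location=[row.latitude, row.longitude],
        radius=4,
        color=price_color(row.price, low, high),
        fill=True,
        tooltip=f"${row.price:.2f}",
    ).add_to(m)
m.save("chipotle_prices.html")
```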

Most Proximate Chipotle Locations

These are pairs of the nearest Chipotle locations. Distances are calculated from latitude/longitude coordinates using a haversine (great-circle) approximation. I fully recognize that this straight-line measure does not reflect the effective distance: people can't be expected to walk through walls or on water.
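
For reference, here is a minimal sketch of the haversine calculation; the function name and example coordinates are illustrative.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance between two (lat, lon) points in miles."""
    earth_radius_miles = 3958.8
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = math.radians(lat2 - lat1)
    d_lambda = math.radians(lon2 - lon1)
    a = (math.sin(d_phi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(d_lambda / 2) ** 2)
    return 2 * earth_radius_miles * math.asin(math.sqrt(a))

# e.g., straight-line distance between two midtown Manhattan points
# haversine_miles(40.7527, -73.9772, 40.7484, -73.9857)
```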

Most Proximate Chipotle Locations with Different Prices

These are the nearest pairs of locations where there is a price difference between the same item. Again, the same straight-line distance caveat applies.

Greatest $ Saved Per Mile Between Chipotle Locations

These are the location pairs with the most dollars saved per mile traveled, i.e. the places where it could be worth going to the cheaper location. Once again, straight-line distance does not necessarily reflect the ease or opportunity cost of getting to the other location.
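
The ranking itself is just the price difference divided by the haversine distance. A rough sketch with pandas, assuming a table of nearest-pair rows (column names and values are illustrative):

```python
import pandas as pd

# Illustrative columns: each row is a pair of nearest locations and their prices
pairs = pd.DataFrame({
    "store_a": ["Store 1", "Store 2"],
    "store_b": ["Store 3", "Store 4"],
    "price_a": [10.95, 9.70],
    "price_b": [9.45, 9.70],
    "distance_miles": [0.8, 1.2],
})

pairs["dollars_saved"] = (pairs["price_a"] - pairs["price_b"]).abs()
pairs["saved_per_mile"] = pairs["dollars_saved"] / pairs["distance_miles"]
top_savers = pairs.sort_values("saved_per_mile", ascending=False)
print(top_savers.head())
```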

Backstory

The first project featured a poorly coded for-loop over a massive list of copy-and-pasted store IDs. The script that collected the store IDs was separate from the script that collected the data. It was spaghetti code that I was too embarrassed to share with anyone, and it was nearly impossible to rerun. There was also no version control whatsoever. If you're eager to look at the code, let me know; I think there's a real risk of bombarding the Chipotle API if I released it into the wild. The major differences between V1 and V2 are:

  1. V1 could not be scheduled or automated. The intention of V2 was a fully automated pipeline that runs the scraper on EC2, pushes the outputs into S3, and loads and queries data from RDS. I hit the memory limits of the free-tier EC2 instance, so the analytical layer is still local, but the scraping runs on EC2 and the assets are loaded to S3 (see the sketch after this list). I left out the RDS layer as it was overkill for this project. TLDR: V2 is only a few keystrokes away from fresh data.
  2. V1 used Selenium to click around and generate hundreds of online orders; I thought I was really clever at the time. V2 uses the Chipotle APIs instead, which let me collect more data, faster.
  3. V1 included my initial attempts to learn D3 and used the Google Fusion Tables API. V2 is much simpler with regard to data presentation: HTML pandas tables and the Folium library, now that Fusion Tables are deprecated. I just wanted to get this inaugural post out quickly and move on to the next.
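
As for the EC2-to-S3 handoff in point 1, it boils down to writing the scraped output locally and uploading it to a bucket with boto3. Here's a rough sketch; the bucket name, key, and function name are placeholders, not the actual pipeline code.

```python
import boto3
import pandas as pd

def push_to_s3(df: pd.DataFrame, bucket: str, key: str) -> None:
    """Write a scraped DataFrame to CSV locally, then upload it to S3."""
    local_path = "/tmp/chipotle_prices.csv"
    df.to_csv(local_path, index=False)
    s3 = boto3.client("s3")  # credentials come from the EC2 instance role / env
    s3.upload_file(local_path, bucket, key)

# e.g., after the scrape finishes on the EC2 box:
# push_to_s3(prices_df, "my-chipotle-bucket", "prices/latest.csv")
```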

Getting Phamiliar!

It’s finally happening! I’m making the switch from my blog on johnpham.me to this page. And while switching over is business as usual for most, I’d like to take a second to look back at how it all started six years ago.

Roosevelt Island

I found myself in New York City after graduating in 2014, working in digital research and consulting at L2 Inc. While there, I started to learn basic programming just so I could get more things done faster. Initially, I spent most of my time in Excel and PowerPoint, but I quickly found myself in Sublime writing small automation snippets in Python and SQL. Before long, I had that “Wow, this is really cool. What else can I do?” moment. The curiosity spilled over into my spare time as I pored over Codecademy and YouTube tutorials.

I was still only two months out of college. While my peers were out living their best lives at happy hour in Murray Hill or at tables in Meatpacking, I was holed up learning lambda functions. I’d be lying if I said I wasn’t also itching to go out on nights and weekends, or that my focus was pure willpower. But truthfully, living on Roosevelt Island and then the Upper West Side made staying in so much easier than trekking downtown in the wee hours to meet people. A 6-pack and lines of code were how I spent my early days in the city that never sleeps.

Old Homepage

I launched johnpham.me in December 2014, probably three months after my initial endeavors. I used the website to prove to myself that I was actually getting somewhere with my time, and having an end project made learning feel much more productive. I spun up the backend using Django and learned the intricacies of web development, painfully figuring out how to deploy on a DigitalOcean droplet while I barely knew what OOP meant. The power of YouTube, trial and error, and perseverance got me a running website. I also created my first web application, TrackMate, which allowed people to collaborate on Spotify playlists (now a built-in Spotify feature!). The main goal was just to learn and explore new technologies. And I’ve always had a fascination with data: the stories it can tell, the forgotten truths it can reveal, and its power in evidence-based decision-making (thank you, Global Health Strategies class). With this new set of hacky skills, I set out to tackle some of my own burning questions…

  • How much did Chipotle burritos cost across the country? (At the time, a Manhattan chicken burrito was $1.25 more expensive than one across the Hudson in Hoboken.)
  • What does Yellow Cab ridership look like in New York City? (The most expensive cab fares typically originate from the airports, shocking.)
  • Could I train a neural network to predict the winner of The Bachelor on looks alone? (Unsurprisingly, appearances alone could not predict the winner, though our predicted contestant did come in 3rd place!)

Bowery Farming

Fast-forward a handful of years, and I’ve had the opportunity to turn these self-directed projects and musings into a career! I’ve had the good fortune of joining experienced data science teams, previously at Dstillery and now at Bowery Farming. It’s funny to think back on how my sublet on Roosevelt Island played a role in where I am today. I think there is something to be said for putting in the time, along with, obviously, a bit of luck.

As for this website itself, I’m going to try to keep things simple and reduce the effort it takes to get things done. Hopefully, that will encourage me to keep it updated with new content.