One question all Instagram engineers hear wherever they go is this:
“What’s your stack?”
Their answer is always the same, and the stack itself is incredibly simple. Their database, however, is not!
Their database is a mix of many technologies, their own take on popular relational database management systems, and it now powers the largest image-based social network in the world. It is powerful enough to manage over 14 million users and over 40 billion photos shared to date.
According to the engineers of Instagram, their core principles while choosing their database were pretty straightforward. The list includes:
- Keeping it simple
- Using available resources
- Choosing verified and popular solutions
These criteria made the database creation and management process a lot easier than most people would expect. If you don’t believe us, check out their choice of OS and application servers in the next sections of our article.
It is time to take it from the top.
Instagram’s choice of OS has kept them going for a long time
Instagram used to run on the forever almighty Ubuntu Linux 11.04 (the Natty Narwhal) on Amazon EC2 back in ‘11. Unbelievably, the entire Instagram engineering workforce initially consisted of just three engineers.
The Natty Narwhal proved reliable and powerful enough to run smoothly on EC2. As of 2015, Instagram had 20 team members taking care of Search, Trending, Explore, and Data Infrastructure.
With just three engineers and continually evolving needs on the UI end, self-hosting was never an option for Instagram until Facebook stepped in. Ever since the big buy in 2012, most of Instagram’s hosting needs have been taken care of by Facebook, with most of the service hosted on private, off-premise clouds to keep all private information safe and secure.
When the thoughts about building the Data-gram team began in 2013, there were barely 35 engineers working on mobile applications, database management, and the backend processes.
That is when a handful of engineers at Facebook bridged the gap between the data management processes of Instagram and Facebook. They designed a new data infrastructure that would provide end-to-end impact for a small team and that would fit the engineering structure of Instagram.
Instagram’s database choices have never failed the users
Back when Instagram was functioning alone in the big bad world, the engineering and data teams chose a mix of NoSQL and SQL servers to support the massive growth. Instagram has about 250 million active users today, and yet the database is flexible enough to accommodate the steady inflow of massive bulks of data.
In 2011, Instagram used Amazon services and many open-source software solutions to address their data concerns. From data collection to data security, the team relied on third-party solutions rather than creating their own RDBMS. They leaned heavily on external services they did not have to build themselves to manage data collected from different time zones.
We haven’t found remote DBAs in the life story of Instagram as of yet, but many other social networking sites and applications have used remote DBAs in their early days to collect, collate, and curate data from millions of users across the globe.
Application servers: Simplicity has always been the beauty
According to a recent post from the Instagram engineering team, their application servers run Django with PostgreSQL. Their server stack further includes Cassandra, which we all know is a plinth for all Facebook products.
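To make the Django-plus-PostgreSQL pairing concrete, here is a minimal sketch of the settings fragment that points Django’s ORM at a PostgreSQL database. The database name, user, host, and password below are illustrative placeholders, not Instagram’s actual configuration.

```python
# Hypothetical Django settings fragment: pointing the ORM at PostgreSQL.
# All names and credentials here are illustrative, not Instagram's real config.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",  # Django's built-in PostgreSQL backend
        "NAME": "photos",
        "USER": "app",
        "PASSWORD": "change-me",
        "HOST": "db.internal",
        "PORT": "5432",
    }
}
```

With this in place, Django models map to PostgreSQL tables without any further wiring.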
Since the integration of Instagram Stories, the engineering team has been working with IG Disk Cache, IG JSON Parser, IGListKit (a UICollectionView data-binding framework for iOS), and the Rebound animation library for Android devices. They have also been using FLEX for iOS devices to make their application and new integrations more cross-platform friendly.
They use Redis to power their main feed, as well as their sessions system and other related app systems. Since Redis data needs to fit into available memory, the team runs several quadruple extra-large memory instances for it.
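A feed like this is commonly built as a capped Redis list: push the newest media ID onto the front of a user’s list, then trim the list to a fixed length. The sketch below models that logic in pure Python (so no Redis server is needed to follow it); the cap is an illustrative number, not Instagram’s real one, and this is the general pattern rather than Instagram’s actual code.

```python
FEED_CAP = 500  # illustrative cap on feed length, not Instagram's real figure

def push_to_feed(feed: list, media_id: int, cap: int = FEED_CAP) -> list:
    """Prepend the newest item and trim the feed, mirroring Redis's
    `LPUSH feed media_id` followed by `LTRIM feed 0 cap-1`."""
    feed.insert(0, media_id)  # LPUSH: the newest item goes to the front
    del feed[cap:]            # LTRIM: keep only the first `cap` items
    return feed
```

Against a real deployment, the same two commands would run through a Redis client (for example, redis-py’s `lpush` and `ltrim`), typically inside a pipeline to avoid two round trips.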
They usually run Redis in a master-replica setup to save time and spare the primary server. For caching, Instagram still relies on Memcached, like most other popular web services and apps. As their user numbers and data volumes have grown over time, their Memcached instances have scaled along with them.
Regular monitoring services keep the team ahead
Monitoring is a core part of the team’s work. Back in 2011, they started with Munin to graph user-end metrics across the system, and the same tool alerted the team if something went amiss.
As of 2015, the team relies more on Python and Python-Munin to write custom plugins for their web service. These plugins efficiently provide metrics that are not captured at the system level, such as photos uploaded per minute, sign-ins per minute, and comments on an influencer’s post. Sentry, a reliable open-source Django app, has long handled reporting of Python errors in the system.
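A Munin plugin is just an executable that prints graph metadata when invoked with the `config` argument and `field.value N` lines otherwise. Here is a hedged sketch of what a custom “photos uploaded per minute” plugin could look like; the counter is a hard-coded placeholder, not Instagram’s real data source, and this illustrates the plugin protocol rather than their actual plugin code.

```python
import sys

def photos_uploaded_last_minute() -> int:
    # Placeholder: a real plugin would read this from the app's own counters.
    return 42

def config_lines() -> list:
    # Emitted when Munin calls the plugin with the `config` argument.
    return [
        "graph_title Photos uploaded per minute",
        "graph_vlabel photos/min",
        "photos.label photos",
    ]

def fetch_lines() -> list:
    # Emitted on a normal poll: one `field.value N` line per metric.
    return [f"photos.value {photos_uploaded_last_minute()}"]

if __name__ == "__main__":
    lines = config_lines() if "config" in sys.argv[1:] else fetch_lines()
    print("\n".join(lines))
```

Dropped into Munin’s plugins directory and made executable, a script like this gets polled on Munin’s regular schedule and graphed alongside the system-level metrics.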
Their success in smoothly integrating new features lies in testing cross-platform compatibility from the day of conception. They thoroughly check their new filters, video features, and gallery features across multiple devices before the final deadline, and they use a plethora of tools and services from Facebook, too. That helps them run these new Instagram features smoothly on almost all devices.
The very need for an application that works across multiple platforms calls for simple yet powerful RDBMS stacks and simple servers. The minimalism of the entire setup, including the data infrastructure at Instagram’s back end, makes it a great app for every device and every operating system.