What Is The Design a Free Food App Problem?
The Design a Free Food App problem requires us to create a service that allows users to claim a free burger by clicking a button in the app. Our primary objective is to distribute 6 million burgers within a time limit of 10 minutes, ensuring that each user receives only one burger. To achieve this, we need to design an efficient and scalable system that can handle a surge of front-loaded requests without causing bottlenecks or exceeding the available capacity. Our solution will focus on implementing a microservices architecture and utilizing effective techniques for managing user claims, avoiding over-promising, and maintaining data consistency.
An Example of the Design a Free Food App Problem
Design the service for an app that supports distributing 6 million burgers in 10 minutes. People will come and click a button in the app that says, “get my free burger.” No one should get more than one burger. We should not promise more than 6M people that they will get a free burger.
How to Solve the Design a Free Food App Problem
To begin with, let’s do some quick math. 6 million burgers in 10 minutes is 6 million burgers in 600 seconds, which is 10,000 burgers per second on average. For a popular giveaway, the incoming request rate will be far higher than that average, with a big spike at the very beginning.
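The arithmetic above can be spelled out explicitly. Note that the 3x peak multiplier below is an illustrative assumption for planning purposes, not a number derived from the problem statement:

```python
# Back-of-the-envelope throughput estimate for the giveaway.
TOTAL_BURGERS = 6_000_000
WINDOW_SECONDS = 10 * 60  # 10 minutes

average_rps = TOTAL_BURGERS / WINDOW_SECONDS
print(average_rps)  # 10,000 claims per second on average

# Front-loaded traffic means the opening seconds are far hotter than the
# average; a small integer peak multiplier is an assumed planning figure.
ASSUMED_PEAK_MULTIPLIER = 3
peak_rps = average_rps * ASSUMED_PEAK_MULTIPLIER
print(peak_rps)
```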
This problem focuses specifically on the atomicity of the giveaway operation itself. We will therefore assign lower weight to the more generic sub-problems, such as user registration, login, authentication, authorization, and a plethora of other important but standard features like email verification, “forgot password,” “restore my account,” etc.
Backend Architecture:
To handle the spike of front-loaded requests in this write-heavy system, a microservices architecture will be employed, as it allows for scalability and independent deployment of services. The system will be split into multiple services: a User Service for user authentication; a user-validation service that checks the status of claims (e.g., not claimed, in progress, claimed, or rejected); and a bulk burger giveaway service (BBGS) that handles the distribution of giveaway codes to successful users and updates each user’s claim status.
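A minimal sketch of the claim statuses the user-validation service would track. The enum and value names are illustrative, not a prescribed schema:

```python
from enum import Enum

class ClaimStatus(Enum):
    """Possible states of a user's burger claim, as tracked by the
    user-validation service. Names here are illustrative assumptions."""
    NOT_CLAIMED = "not_claimed"
    IN_PROGRESS = "in_progress"  # claim accepted, awaiting a code from the BBGS
    CLAIMED = "claimed"          # giveaway code issued to the user
    REJECTED = "rejected"        # e.g., out of burgers, or a duplicate claim
```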
It is a bit of an art to determine the “best” bulk size, but, without loss of generality, let’s assume the BBGS uses a bulk size of one hundred, and thus holds a decreasing atomic counter that starts at one hundred.
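The per-BBGS counter can be sketched as follows. The `refill` callback, standing in for the central “claim the next batch of 100 from the DB” call, is an assumed interface:

```python
import threading

class BatchCounter:
    """Decrementing atomic counter a BBGS instance holds for its current
    batch of burgers. When it hits zero, the next batch must be claimed
    from the central DB (modeled here by the `refill` callback, which
    returns True if a new batch was granted)."""

    def __init__(self, refill, batch_size=100):
        self._lock = threading.Lock()
        self._refill = refill
        self._batch_size = batch_size
        self._remaining = 0

    def try_claim(self):
        """Return True if a burger was claimed from the current batch."""
        with self._lock:
            if self._remaining == 0:
                if not self._refill():
                    return False       # central supply exhausted
                self._remaining = self._batch_size
            self._remaining -= 1
            return True
```

With a central authority that grants exactly two batches, 200 claims succeed and the 201st is refused, which is the behavior we want from the counter.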
Consistency:
The core requirements for this problem are: do not give out more than one burger per person, and never exceed our limit of 6 million burgers. To ensure consistency given these constraints, we can run the giveaway counter on a single machine: the job of scaling out the giveaway is handled by the BBGS servers, which claim burgers from the DB in bulk, greatly reducing the load on the DB (by a factor of 100, in our case).
This solves the “DB as the bottleneck” problem. Now, to make sure the service does not go down and lose all the data in case this single-point-of-failure DB fails, we also use leader-follower replication to keep one or more backup copies of this DB. When the next batch of burgers is claimed by the next BBGS, that batch should only be released to the BBGS once the leader-follower replication process has successfully replicated this particular “bulk giveaway transaction.”
The previous paragraph covered the interface (the “API contract”) between the BBGS and the DB server. Now, when it comes to handling the claim between the end user and the BBGS (i.e., the “API contract” between some front-end service and the BBGS itself), the state machine is the following.
Life of a user claim:
A separate service checks legal availability in a given region, country, license regime, etc. Since this service is contacted once per burger claim, it is outside the critical path performance-wise, and it can be sharded and sliced and diced as we please. It might be a separate box on the architectural diagram, or it can be part of the “caller side” of the contract with the BBGS, much as with authentication and authorization. For most interview purposes it is more than enough to mention that any and all legal checks happen outside the “consistently decrementing counter” part of the system.
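The life of a user claim can be written down as an explicit transition table. The state names follow the claim statuses used elsewhere in this design; the event names are illustrative assumptions:

```python
# Allowed transitions in the life of a user claim. "rejected" and "claimed"
# are terminal states; event names are assumed for illustration.
TRANSITIONS = {
    ("not_claimed", "button_pressed"):     "in_progress",
    ("in_progress", "code_issued"):        "claimed",
    ("in_progress", "out_of_burgers"):     "rejected",
    ("in_progress", "legal_check_failed"): "rejected",
}

def next_state(state, event):
    """Return the next claim state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} + {event}")
```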
User Management:
Authentication and authorization (AuthN and AuthZ) mechanisms are assumed to be in place so that each user can claim only one burger; designing AuthN and AuthZ themselves is out of scope. A user database will store user information, including unique user IDs and the status of each burger claim, which is a simple yes/pending/no value. This allows for accurate tracking of users and ensures that duplicate claims are prevented. In addition, if a particular burger giveaway event did suffer from an outage, we can resolve the “pending” burger claims once the 10-minute window is over, perhaps within the next 24 or 72 hours.
Burger Distribution:
To manage the distribution of burgers, a decrementing counter will be used. This counter starts at the total number of available burgers and decreases with each successful burger claim, so the system accurately tracks the number of remaining burgers in real time and prevents over-distribution. Incoming claims are queued and batched in groups of 100: the BBGS provisions a block of 100 burgers from the central counter and distributes codes from that block until it is exhausted.
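The central counter’s side of this protocol can be sketched as follows; the class and method names are assumptions for illustration. The key property is that burgers leave the central authority only in whole blocks, so this hot path runs roughly 100x less often than individual user claims:

```python
class CentralCounter:
    """Central decrementing counter for the whole giveaway. Burgers are
    handed out only in whole blocks of `block_size`, never individually."""

    def __init__(self, total=6_000_000, block_size=100):
        self.remaining = total
        self.block_size = block_size

    def claim_block(self):
        """Grant one block of burgers to a BBGS, or None if exhausted.
        A real deployment would wrap this in a DB transaction; here the
        guard condition alone shows the no-over-promising invariant."""
        if self.remaining < self.block_size:
            return None   # never promise burgers we do not have
        self.remaining -= self.block_size
        return self.block_size
```

Note that with a supply that is not a multiple of the block size, a final partial block is left over; handling that remainder (e.g., with one smaller last block) is a detail worth mentioning in an interview.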
Rate Limiting and API Gateway:
To handle the potential surge of front-loaded requests, rate limiting will be applied. This prevents excessive requests from a single user or potential abuse, ensuring fair distribution. An API gateway, such as Amazon API Gateway, will handle incoming requests, enforce security measures, and provide rate limiting capabilities.
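A token bucket is one common way a gateway enforces such limits. This is a minimal single-threaded sketch with an explicit clock argument for testability; the capacity and refill rate are illustrative, not recommendations:

```python
class TokenBucket:
    """Minimal token-bucket rate limiter of the kind an API gateway
    applies per user or per IP. Parameters here are illustrative."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)   # bucket starts full
        self.refill_per_sec = refill_per_sec
        self.last = 0.0                 # timestamp of the previous call

    def allow(self, now):
        """Return True if a request at time `now` (in seconds) is admitted."""
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```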
Database and Storage:
Let’s consider how we will store our information. A NoSQL database like DynamoDB may appear to be a good choice for not “losing” burgers, as it was designed for exactly this kind of durability. When it comes to making sure the counter never goes below zero, however, simpler and more reliable techniques can be used, especially when the batch size per Burger Server is relatively large: the “claim the next batch of burgers” operation is far less frequent (100 times less frequent, in our case) than end-user requests. It would be fine to use, for example, a SQL database, leveraging ACID transactions to guarantee, with strong consistency, that no “chunk of burgers” is assigned twice by the “central authority” to the burger servers. One could use DynamoDB instead, but to ensure the data is sufficiently replicated, consistent writes and consistent reads would have to be used, which increases the AWS bill substantially.
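The “claim the next chunk of burgers” transaction can be sketched with SQLite standing in for a production SQL database; the table and column names are made up for the example. A single UPDATE with a guard condition is atomic, so the counter can never go below zero even with concurrent burger servers:

```python
import sqlite3

# In-memory stand-in for the central giveaway DB (schema is illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE giveaway (remaining INTEGER)")
conn.execute("INSERT INTO giveaway (remaining) VALUES (6000000)")
conn.commit()

def claim_chunk(conn, chunk_size=100):
    """Atomically reserve `chunk_size` burgers; return True on success.
    The WHERE clause guarantees the counter never dips below zero."""
    with conn:  # the connection context manager wraps this in a transaction
        cur = conn.execute(
            "UPDATE giveaway SET remaining = remaining - ? "
            "WHERE remaining >= ?",
            (chunk_size, chunk_size),
        )
        return cur.rowcount == 1  # 0 rows updated means the guard failed
```

The same conditional-update pattern carries over to a real SQL database, or to DynamoDB via a conditional write, which is exactly where the consistent-read/consistent-write cost mentioned above comes in.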
Watch These Related Mock Interviews

About interviewing.io
interviewing.io is a mock interview practice platform. We've hosted over 100K mock interviews, conducted by senior engineers from FAANG & other top companies. We've drawn on data from these interviews to bring you the best interview prep resource on the web.