Design LeetCode

Watch someone solve the Design LeetCode problem in an interview with an Amazon engineer, and see the feedback their interviewer left them. Explore this problem and others in our library of interview replays.

Interview Summary

Problem type

Design LeetCode

Interview question

Design a coding competition platform with a leaderboard and execution environment.

Interview Feedback

Feedback about Electric Tetrahedron (the interviewee)

Advance this person to the next round?
Thumbs up
How were their technical skills?
4/4
How was their problem solving ability?
4/4
What about their communication ability?
4/4
Overall a very solid experience. Keep customer-facing concerns in mind and voice them out loud during the interview.

Feedback about Metal Cephalopod (the interviewer)

Would you want to work with this person?
Thumbs up
How excited would you be to work with them?
3/4
How good were the questions?
4/4
How helpful was your interviewer in guiding you to the solution(s)?
4/4
The feedback was really useful and, I'd say, even unusual - it's definitely a good idea to think more about the product itself as well. Guidance during the interview was good; all the answers were clear and concise. I didn't feel like I was solving a problem with a colleague (as most companies describe their system design interview process), but I felt really comfortable and felt I'd get all the support I needed.

Interview Transcript

Metal Cephalopod: Hello.
Electric Tetrahedron: Hey.
Metal Cephalopod: Hi. Is this a systems design session?
Electric Tetrahedron: Yep.
Metal Cephalopod: Yeah. Could you tell me, do you have a concrete interview coming up, or is this just practice?
Electric Tetrahedron: Yeah, I'm currently interviewing. And this week I have an interview on Saturday through Amazon. And next week it will be Google. So yeah, definitely practicing for interviews.
Metal Cephalopod: Okay, great. Oh, and what level?
Electric Tetrahedron: I'm coming into L5 for Google.
Metal Cephalopod: Okay. And Amazon?
Electric Tetrahedron: At Amazon, I guess it should be L6, since their leveling is slightly different.
Metal Cephalopod: Okay. So it will be a 45-minute session, then I will give you some feedback, and you will have time to ask questions.
Electric Tetrahedron: Okay.
Metal Cephalopod: My question will be: could you design a system, or platform, which will be able to provide coding challenges? Let's say a million participants join this platform, and during one hour they solve a programming challenge. During this hour we can show them an overall scoreboard, and at the end we rank them and show the results.
Electric Tetrahedron: So they solve a coding challenge during one hour, and afterwards we show them a ranked leaderboard, right? Or is it updated during the hour? Should it be real time, or do we just queue updates until the coding challenge is over?
Metal Cephalopod: No - the leaderboard is online, updated in real time.
Electric Tetrahedron: So it's not only after the hour - the leaderboard updates in real time. Okay. So, my first question: you said a million participants are on the platform solving a coding challenge - that sounds similar to LeetCode contests. So, similar to LeetCode, we basically have a predefined problem, and all participants should solve it within the dedicated amount of time. They have access to test the code and run it - within the browser, I guess - and to validate the results. If they solve it properly, so all the test cases pass, they're done, and we place them on the leaderboard based on some rank - I guess something like number of attempts, speed of the solution, and the memory required by the solution. Something like that?
Metal Cephalopod: That's right.
Electric Tetrahedron: Yep. Okay, so let me prepare some templates here: requirements, functional and non-functional. We'll see if we need some capacity estimations as well. Should I concentrate on the system which runs the code and validates the solution, or may we assume that it's already there?
Metal Cephalopod: No, no. This part we have to design too - login, registration, everything.
Electric Tetrahedron: Okay, good - just taking some quick notes here. So, the functional requirements. We're designing a system which should, first of all, allow people to validate their solutions, and create leaderboards. So: validate solutions, and place participants on the leaderboard. And I guess the leaderboard should be persistent, right? We should store it somewhere for some time. Okay, what about non-functional requirements? The first that comes to mind is availability. This is a time-constrained event, and during it people should be able to work with our system; it should not break at any point, because it's really important for people to be able to submit everything. So high availability matters. We can discuss whether we aim for specific numbers, but for a one-hour contest it simply needs to stay up for that hour. What about data persistency? I guess the data should be persistent - we should store the leaderboard for a pretty long time at least. Let's say forever? The same actually goes for submitted solutions; we should not lose them either.
Metal Cephalopod: No, no. Let's keep the leaderboard, for a start. The solutions we can forget.
Electric Tetrahedron: Only the leaderboard, okay. What about consistency? If some players see slightly stale data in the leaderboard, is that okay, or should we keep it strictly consistent?
Metal Cephalopod: Let's discuss it. What do you think from a product angle - I mean, from our customer experience - would it be good if they sometimes see a rank that's not updated?
Electric Tetrahedron: That's a really good question. If I think about the real product - and I've worked with leaderboards before - my idea would be that within the one hour we can accept some staleness, because many players are updating their solutions anyway. But once the contest finishes, we should recalculate and update the leaderboard, and from then on it should be consistent all the time. We should also be able to tune this consistency within the hour to get the best results. So from that perspective, we can live with some stale data during the hour - but if the system requires strong consistency, I guess we can achieve that as well.
Metal Cephalopod: Your argument is very valid; let's go with eventual consistency.
Electric Tetrahedron: Eventual consistency, okay. What about latency? By latency I mean not only the latency of accessing the leaderboard, but also, I guess, of validating a solution. It should be as low as possible, right? Okay, got it. I guess that's mostly all so far; maybe I'll add more non-functional requirements as we discover them during the high-level design. Do you want me to run back-of-the-napkin calculations now, or should we go straight to the high-level design?
Metal Cephalopod: Let's move on. If we need them, we can calculate later.
Electric Tetrahedron: Well, what we can do first is define what data we need to store for all this information. I'll just estimate that, and we'll see what else we need. Let's say a solution is about 100 lines of code, which comes to maybe about one kilobyte of text, or even less. And we have millions of participants - let's say 5 million. So we get about five gigabytes of storage required for all the submitted solutions. The leaderboard itself requires only a really small amount of data per entry: some information about the user, like name, ID, profile photo - which can actually be retrieved from a user data table. So assume what we need to store is just an ID, four bytes, and their position; four bytes times 5 million is only about 20 megabytes, so the whole leaderboard is tens of megabytes at most. Pretty small - maybe we'll need a bit more for additional information. Okay, so I'll move to the high-level perspective, if you're okay with it. One second. Good. So from a high-level view, how should it look? We have some set of predefined tasks, which we store somewhere, and every participant accesses the same task. Everyone has the same task - I'll note that down, it's really important. Participants have access to, let's say, just a text box - we won't think about syntax highlighting and such right now - and when they paste their code, they can run some test cases and validate the solution. What should the system do when someone hits the run or submit button? We launch the solution, validate it, compare the results, and if it's okay, we show congratulations; if not, we show the test case that failed and the reason - the time limit was exceeded, or the answer was wrong. And every time something is submitted, we record an event, and we should process these events to calculate the rank for the player. Which brings a quick question: if someone wasn't able to finish the contest but did pass some of the test cases, will they still have a position on the leaderboard? Should we take into consideration people who didn't pass all the test cases but passed some of them?
Metal Cephalopod: Yeah, definitely. If somebody has passed some of the test cases, that's already a good result.
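For reference, here are the back-of-the-envelope numbers from the turn above as a short Python sketch. The 5 million participants, roughly 1 KB per solution, and 4-byte ID/position fields are the interviewee's stated assumptions, not measured figures:

```python
# Back-of-the-envelope storage estimates (assumptions from the discussion).
participants = 5_000_000      # assumed contest size
solution_bytes = 1_000        # ~100 lines of code, roughly 1 KB of text

# All submitted solutions, one per participant.
print(f"solutions:   {participants * solution_bytes / 1e9:.1f} GB")   # ~5 GB

# Leaderboard entries: a 4-byte user ID plus a 4-byte position.
print(f"ids only:    {participants * 4 / 1e6:.0f} MB")                # ~20 MB
print(f"id+position: {participants * (4 + 4) / 1e6:.0f} MB")          # ~40 MB
```

Either way, the whole leaderboard is tens of megabytes - small enough to keep in memory on a single machine, which is the point the interviewee relies on later.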
Electric Tetrahedron: If they passed only some test cases, they still get a rank - that's good. So basically, we process these events and we build the leaderboard from them. I guess that's how it should look from a high-level perspective, so let me draw it out. We have our user; we have a load balancer somewhere, to distribute the workload evenly; and we have our, let's say, contest server. The contest server is responsible for accepting input from the user and running the solution - maybe not running it directly, but assigning it somewhere to be run and checked. We also need some sort of database to store information about the user's attempts. I'll dive into how we're going to validate solutions later, but let's say we have the result somehow. So we have events coming from the contest server, or from the server responsible for running everything. We receive the events, put them into our database, and then we need to process them somehow to build our leaderboard. The reason I asked about consistency is that we can split this event processing. Let's call this part stream processing, because we have just a stream of events. We can have our stream processing pipeline as a number of jobs responsible for calculating the data, and we can split it into a hot path and a cold path. Hot means we update the leaderboard really quickly, but maybe with not-quite-precise data. The cold path spends more time updating the leaderboard, but the results are precise. So I'll draw it as a stream processing pipeline, and I'll cover the details later. The output of this pipeline is leaderboard entries, or maybe even the whole leaderboard. When we have this leaderboard, we put it into some leaderboard database, which should be accessible to users. We have an app service, which is responsible for everything regarding communication with users, and the app service reads from the leaderboard database to show the user the actual leaderboard. We may also want some sort of cache layer there, to cache this leaderboard for some time if we expect a really big load on it. And the next part is the contest server. I just want to check: was what I said so far reasonable - about the stream processing pipeline and building the leaderboard in the background?
Metal Cephalopod: Yeah.
Electric Tetrahedron: Okay, so about the contest server. The contest server should accept input from the user, as I said. First of all, it should give the player the description of the task and some test cases, and it should be able to run the solution and check it against those test cases. How should that work? From my perspective, we should be able to run solutions quickly, and at the same time the solutions should not interfere with each other. If we run some, let's say, malicious code, it should not affect any other code running on the same machine. Likewise, if someone's solution grabs gigabytes of memory, it should not affect other solutions running on the same server. That leads me to the idea of running these solutions in some sort of containers, with predefined amounts of memory. And that may be a really good fit, because every LeetCode-style task has constraints on memory and time, and those can map onto the constraints of the virtual machine. So, how should it look? We have our contest server, which has a "get task" path. For that, we go to some data store and fetch the task description and some test cases. Let's imagine the average size of a description is about one kilobyte. I'll put a slash here, because maybe we don't want to store that text in the database itself; maybe we put it into some object storage and keep only a link in the database. Either way, the contest server queries the database, gets the description from object storage, combines them, and returns the response to the user. And what I see here is that every user gets the same task every time. So it's really reasonable to have a cache layer there, for the sake of availability and low latency. Since we have maybe one, two, or five tasks, it's not a big chunk of memory; we just store it in cache, and every user gets it instantly. Okay. What about the submit path? I'll draw it a bit lower. So, the submit path. Sure, we could spin up a new virtual machine every time a user attempts to run something, but I guess that wouldn't be very efficient, because we'd spend time on startup, and for small solutions this startup time would be even longer than running the solution itself - unnecessary latency. So my idea is that when the user enters the contest, the contest server creates a virtual machine tied to the session ID, so we already have it ready.
So basically, we won't need to recreate it for every submit; we just run the code and clean up afterwards. It may add some load on the servers, but since we do it only for one hour, maybe we can live with that. Alternatively, we don't create it right from the start - we create it on the first submit and then keep it, since by then we know the user is actually going to solve the problem. Does that sound reasonable?
Metal Cephalopod: Yes.
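A minimal sketch of the "get task" path just described: check the cache first, then assemble the task from database metadata plus the description text in object storage. The class and method names here are hypothetical, invented for illustration, not an established API:

```python
# Hypothetical cache-aside "get task" path: description text lives in object
# storage, metadata in the database, and the assembled task is cached because
# every participant requests the same one or two tasks.
class TaskFetcher:
    def __init__(self, cache, db, object_store, ttl_s=300):
        self.cache = cache                # in-memory/Redis-style cache
        self.db = db                      # task metadata and test cases
        self.object_store = object_store  # full description text
        self.ttl_s = ttl_s

    def get_task(self, task_id):
        task = self.cache.get(task_id)
        if task is not None:
            return task                        # hot path: served from memory
        meta = self.db.fetch_task(task_id)     # test cases, limits, storage key
        description = self.object_store.get(meta["description_key"])
        task = {**meta, "description": description}
        self.cache.set(task_id, task, ttl=self.ttl_s)  # repopulate the cache
        return task
```

Since there are only a handful of tasks and they never change during the contest, nearly every request after the first is a cache hit, which is exactly the availability and latency argument made above.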
Electric Tetrahedron: Okay, so we have our submit path, and we check whether the virtual machine is running or not yet. If it's running, good; if it's not, we create it, and a virtual machine service will be responsible for that. There may be some existing solutions helpful here, but for me it's basically a Docker image run with some predefined rules - I'll dive into that later. The contest server is responsible for running and validating the solution in the virtual machine, and it should keep an eye on the virtual machine to see what the result is. Whatever the result, we should record it as an event, and we'll process those events later. And if the solution is okay... I guess we already have our leaderboard infrastructure, and we can reuse it. We have some sort of contest database - I guess it's even located somewhere near the leaderboard database. If the solution is good, we put an entry into the contest database, and from that point we consider the user to have finished the contest. Which raises a question: when a user submits working code, should we allow them to resubmit to get a better result, or do we only count the first submission?
Metal Cephalopod: The number of attempts is not limited, if they want to improve their solution.
Electric Tetrahedron: Okay. So that branch may be optional here, I guess, because in either case we just need to record an event. So that's how it looks. Do you want me to dive deeper into the virtual machine service - how we should allocate machines - or should I jump to the stream processing service?
Metal Cephalopod: Just a second, I have one question. Do I understand correctly that if we use a session ID, once a user joins our contest, every request will be directed to the same machine?
Electric Tetrahedron: Yes, that's the idea. Well, actually, not quite. We create a virtual machine associated with the user's session ID, but that doesn't mean the mapping is stored on a contest server itself. We can store that information separately - say, in a cache somewhere around the virtual machine service. That means that every time the user reloads the page, they can land on any contest server, but every contest server will know about the virtual machine associated with their session ID. That's the idea.
Metal Cephalopod: Okay, so it means that if some contest server fails, the request can still be handled?
Electric Tetrahedron: Right - maybe it wasn't clear from the diagram, but my idea is to keep the contest server really stateless and to store all the state in a separate service. In that case a contest server is really easy to replace in case of failure.
Metal Cephalopod: Okay. Yeah, that's clear for me. Let's move on.
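To illustrate the statelessness point, here is a sketch under the assumptions above: the session-to-VM mapping lives in a shared store rather than on the contest server, so any replica can serve a reloaded page and record the submission event. All service interfaces are invented for illustration:

```python
# Hypothetical submit path on a stateless contest server: the session -> VM
# mapping lives in a shared store (e.g. a cache near the VM service), so any
# replica can handle the request and a failed server is trivially replaced.
class ContestServer:
    def __init__(self, vm_service, session_store, event_log):
        self.vm_service = vm_service        # allocates and runs sandboxed VMs
        self.session_store = session_store  # shared session_id -> vm_id map
        self.event_log = event_log          # append-only submission events

    def submit(self, session_id, user_id, code):
        vm_id = self.session_store.get(session_id)
        if vm_id is None:
            # Create the VM lazily on first submit and remember it,
            # so later submits skip the cold start.
            vm_id = self.vm_service.allocate(session_id)
            self.session_store.set(session_id, vm_id)
        result = self.vm_service.run(vm_id, code)
        # Record the attempt regardless of outcome; ranking happens downstream.
        self.event_log.append({"user": user_id, "result": result})
        return result
```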
Electric Tetrahedron: Okay, so let's talk about the stream processing pipeline, because that's not covered yet. My idea is that we'll get many events within the one hour - a pretty dense stream. Let's imagine that on average we have three submissions per person, so 15 million events within the hour. Divide that by 3,600 seconds and we get roughly 4,000 events per second - a pretty big number. So, again, my idea is to have two paths. We put all the events into a database, and the database itself is append-only: we just add events and forget about them, so it should be really simple. This database is our single source of truth for the events, but we need to be able to process all the new ones. There is a mechanism called change data capture, which basically exposes the database commit log in a form readable by other services - or it could just as well be described as a Kafka queue. I won't pin down a product here, just to keep it service-agnostic; what matters is that it's a log of events with offsets. And we have one stream processing service, which is responsible both for building the leaderboard online and for building the leaderboard from precise data. We have our hot path, as I said before, and our cold path. The reason I call them that is the pattern I'm describing, known as lambda architecture: in the hot path we crunch the data in real time, and in the cold path we process it slowly and precisely. Then we merge the results together - updating the not-quite-precise real-time numbers with the data from the slow processing - and write them to the leaderboard database. How does the hot path work? Events come in, and we need to calculate the rank really quickly, to display it in real time. There are many inputs to the rank calculation, but we have to make it fast, so my idea is to use some sort of probabilistic algorithm in the hot path, which gives us maybe not perfectly precise data, but lets us compute it really quickly - I'll just label it "probabilistic". My idea is to use count-min sketch here; it's one of the algorithms that's really useful for this kind of counting. Based on that, we can compute a not-perfectly-precise leaderboard quickly. In the slow path, on the other hand, we should calculate every rank properly: take into account everything we know about the user - the time of the solution, all their solutions, the aggregated number of attempts, the memory used, the best attempt, the time to the first solution, and so on - a whole bunch of criteria. My idea is to use something like MapReduce jobs to process all the events together.
We could run that, say, once per minute - configurable, of course. After a MapReduce job completes, we merge its data with the results from the probabilistic hot path into the leaderboard. That means the leaderboard becomes consistent within one MapReduce interval: if we define the interval as one minute, the leaderboard is fully corrected within a minute; we could configure it to every 30 seconds - it all depends on the requirements of the system and our resources. So that's how the stream processing pipeline should look, I guess. One thing worth mentioning: we may also want to update the cache itself, not only the leaderboard database, because we don't want users to see stale data. There are a couple of solutions here, each with its own trade-offs. The first is a write-through cache: we update the cache and the leaderboard database simultaneously. That would work, but it adds latency to every update. The other approach: since we know we run the MapReduce jobs on an interval, we can put a TTL on the cache entries, and when a user requests the leaderboard we check the cache; if the entry has expired, we go straight to the leaderboard database and repopulate the cache. So we have two options there - I'll note them down, and maybe we'll have time to discuss them. Or, if you want, I can weigh the trade-offs and decide what to do with the cache right now?
Metal Cephalopod: No, no, it's okay - whatever your plan is.
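For readers unfamiliar with it, here is a minimal count-min sketch of the kind proposed for the hot path: approximate per-user counters in fixed memory, with a one-sided error that the precise cold-path (MapReduce) results later overwrite. This is an illustrative sketch, not code from the interview:

```python
import hashlib

class CountMinSketch:
    """Approximate counting in fixed memory; estimates never undercount."""

    def __init__(self, width=2048, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, key, row):
        # One independent-ish hash per row, derived by salting with the row.
        digest = hashlib.blake2b(f"{row}:{key}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(key, row)] += count

    def estimate(self, key):
        # Hash collisions only inflate counters, so the minimum across rows
        # is the tightest (still one-sided) estimate of the true count.
        return min(self.table[row][self._index(key, row)]
                   for row in range(self.depth))

# Hot path: bump a user's attempt counter on every submission event.
sketch = CountMinSketch()
sketch.add("user:42")
print(sketch.estimate("user:42"))  # >= true count, usually equal
```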
Electric Tetrahedron: Okay. So what else is there? We have our virtual machine service, and it also has some sort of cache - well, not really a cache, more like a small database associating a session ID with a virtual machine. Let me say a couple of words about the virtual machine service. It should be able to allocate new machines for a session ID, it should shut machines down after an interval, and it should keep that key-value store mapping session ID to machine ID. And it should be able to actually run the specified code on the virtual machine. That last part leads me to an interesting question about security, because we don't want arbitrary code to be able to do arbitrary things. I guess we limit the set of possible operations by running everything inside Docker containers or virtual machines, but that still leaves security concerns. We should disallow obviously malicious code - maybe have some service responsible for checking the code before it runs, for things like importing suspicious libraries or calling external URLs; I guess that could work. And after running the code, the service should also report the results. Okay - it seems like we have four minutes left for the interview, and I could go through how the system behaves under load and failures.
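The sandboxing constraints mentioned here map directly onto standard Docker flags. A hedged sketch follows - the runner image and paths are invented, but the flags themselves are real Docker options:

```python
import subprocess

def run_submission(workdir: str, time_limit_s: int = 5):
    """Run user code inside a locked-down container (illustrative only)."""
    return subprocess.run(
        [
            "docker", "run", "--rm",
            "--network=none",      # user code gets no network access
            "--memory=256m",       # per-task memory limit, as in the tasks
            "--cpus=1.0",          # bounded CPU share
            "--pids-limit=64",     # defuse fork bombs
            "--read-only",         # immutable root filesystem
            "-v", f"{workdir}:/sandbox:ro",   # mount the solution read-only
            "python:3.12-slim",               # hypothetical runner image
            "python", "/sandbox/solution.py",
        ],
        capture_output=True, text=True,
        timeout=time_limit_s,      # enforce the task's time limit; raises
    )                              # subprocess.TimeoutExpired on overrun
```

This also illustrates the point made later in the feedback: the per-language time and memory limits would live in exactly these knobs.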
Metal Cephalopod: No, no, it's already deep enough. Could you maybe go through your design and explain some more of the trade-offs we have - what the bottlenecks are and how we will scale it?
Electric Tetrahedron: Okay. About the contest server: as I said, the contest server itself should not be responsible for storing information; it just passes code to the runner machines and puts events into the database. So if it dies, it shouldn't be a big problem - we just allocate another contest server. It could be a problem if the contest server dies after we get the response from the virtual machine but before the event reaches the database, because then the user may think their request failed when it actually succeeded. But if the servers keep some sort of cache of the user's events - and actually, what the user really should see is the history of their attempts; that's important, I completely forgot about it - then we can store that history, and if the server dies on the way to sending the response, the user will still see the attempt in their history when they reload the page. So it should be safe. By having a load balancer with a mechanism like round robin plus health checks, we can distribute the workload between contest servers evenly. The database will accept many events per second, but since we decided it's append-only, without updates, it should sustain high write throughput; and if we decide that's not enough, we can use solutions built for really high write throughput, like Cassandra or other NoSQL stores. What about the stream processing pipeline? Change data capture is essentially a log, so it should be safe to use. We can also introduce log retention so we don't keep old entries forever - which is easy anyway, since we have only a one-hour window of events - and we can partition the log to distribute the events evenly. The stream processing service itself should be scalable and available as well, and I'll look at its two parts. MapReduce is really fault-tolerant, because every job writes to and reads from disk, and if the pipeline fails you rerun only the failed parts. For the probabilistic part, we can store the count-min sketch inside a database or cache; if the hot-path worker dies, we lose some of the real-time calculations. How can we solve that? We can periodically snapshot the count-min sketch and the log offsets to disk - say, every 10 seconds or so - and if the server dies, we restore this state from disk. We'd also run multiple instances of this calculation and be able to allocate a new instance if one fails. The leaderboard itself should expect a high write throughput for one hour, and also a pretty big read load - especially after the hour - so the load is spread over time.
The main concern would be the size of the data and the read load, but as we already worked out, the leaderboard takes only about 20 megabytes - it would even fit into the cache of a single machine - and we'll have only about 5 million rows. So for our purposes a SQL solution fits well, though it depends on the real-world application: if we expect really high throughput, it's worth looking at NoSQL solutions. To support availability, we'll need replicas - say, automatic failover for the master and three replicas for the leaderboard itself. That gives us good latency, good availability, and good reliability. What about the virtual machine service? It's responsible for allocating machines, and it should be really fault-tolerant: if we allocate some machines, the service dies, and the new instance doesn't know about the machines already running on the hosts, we'll simply run out of resources. So we may snapshot the database of running machines to disk as well - though we'll need some sort of scheduler anyway, and it may grow into an even more complex system. As for the leaderboard part, we have the app service, which basically only goes to the leaderboard database or the cache; it's just an application layer between the user and the data store, and it doesn't really matter if it dies, because it can be replaced by any other instance. And we have the cache in front of the leaderboard database for reads. To make that fault-tolerant, we may want the cache to snapshot to disk as well, so it can serve data again after a failure. And maybe we don't keep this cache on the app servers themselves - we put it into a separate service, so we can apply the fault-tolerance mechanisms there independently.
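A small sketch of the snapshot-and-replay recovery described above: persist the hot-path sketch state together with the log offset it reflects, so a restarted worker reloads the snapshot and replays only newer events. The file format and function names are assumptions for illustration:

```python
import json
import os
import tempfile

def save_snapshot(path, sketch_table, log_offset):
    """Atomically persist hot-path state plus the log offset it covers."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump({"offset": log_offset, "table": sketch_table}, f)
    os.replace(tmp, path)  # atomic rename: no torn snapshots on crash

def restore(path):
    with open(path) as f:
        state = json.load(f)
    # The caller resumes consuming the event log at state["offset"],
    # replaying only the events the snapshot doesn't already include.
    return state["table"], state["offset"]
```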
Metal Cephalopod: Yeah, sounds pretty good. Let's finish there, because we have a time limit. Let me give you some feedback. I see very solid experience, and technically it's brilliant. I would point out one thing: fairness and privacy on this platform would in fact be covered by your technical design, but as an interviewer I would expect you to say so explicitly - that with this implementation we cover privacy, that with this design we cover fairness and provide a fair user experience. Technically it was brilliant, but "technically brilliant" is enough for a mid-level candidate; at a senior level it's worth mentioning the user experience and the product on your own, showing some vision of the platform from the user's side, not only the technical implementation. Mentioning fairness and privacy alongside all the technical details would be enough for me to see that you think not only about technology but also about providing a better customer experience. And yes, I liked how you structured it - I understood everything, and I liked how you went step by step, deeper and deeper, and I liked your questions. I would also have expected questions like: is it for one programming language or multiple? I understand it was kind of obvious, but I expected you to ask such initial questions about requirements, because it can be important in your design. It also touches fairness: you could mention that some languages require more time to execute, or more memory, and that we'd need different tests for different languages, and probably more storage. That would be good.
Electric Tetrahedron: Yeah, though I really only covered that implicitly, right? By saying that every virtual machine will have some predefined amount of resources. Yeah. Thank you.
Metal Cephalopod: Yeah. And, let's see - latency is, I'd say, our second most important criterion. Along the way you mentioned high availability and how we achieve it, but it would be good to also say that we keep latency low because, from a contest perspective, it's crucial that participants don't wait for execution. I liked that you addressed it with the virtual machines and how you handled that trade-off, but from time to time you could mention explicitly that we have latency in mind and will keep it as low as possible. And yes, that's it. It was very solid, and I would definitely move your interview forward. All the signals were covered; the rest is only about polishing. Nothing crucial, no red flags - just polishing to make both the technical and the product side of your interview perfect.
Electric Tetrahedron: Okay, thank you - that's actually really useful. That's a big problem for me: I sometimes keep things in mind but don't say them. For example, I definitely thought about different languages, but I didn't mention it - I didn't communicate it. So yeah, I'll try to concentrate on that more. Great.
Metal Cephalopod: No, I mean, technically you explained everything perfectly. You just need to keep this product perspective.
Electric Tetrahedron: Yeah, I see your point: a senior should also have some product vision, and should communicate it as well. Okay, great. Thank you for this feedback. It was a really interesting question.
Metal Cephalopod: Yeah, this is kind of a Google-level question; for Amazon it's somewhat lower level. I mean, it's not so complicated, but it's a Google-level question. I think with this experience you can cover the Amazon question easily.
Electric Tetrahedron: May I ask where you work - at which company? Amazon? Okay.
Electric Tetrahedron: Maybe one more thing: what else would be nice to cover from the product design perspective? You said privacy, UX, and fairness. I guess security basically goes hand in hand with privacy, right? And also security in the sense of not letting malicious code run - worth mentioning, though I did mention that somewhere. Yes, okay. Oh, and one actually important question. I try not to spend too much time on capacity and back-of-the-envelope calculations - I try to keep that part as short as possible. So I just want to clarify: how important is it to have these calculations? Should I ask whether to do them, or just run them when I feel I need to?
Metal Cephalopod: No, I would run them without asking, whenever you feel you need the calculation. You're quite familiar with all these numbers anyway, so it's not a big time sink to just type it in - and I, as the interviewer, will follow along, ask if I have any question, and if everything is okay for me, we move further. It also shows me that you know what numbers we are talking about. So it's good not to ask - just calculate, type it in, and everything will be clear.
Electric Tetrahedron: Okay, good. Yeah, I guess I've run out of questions.
Metal Cephalopod: Okay, so yeah, good luck with your interviews.
Electric Tetrahedron: Yeah, thank you. Thank you for your time today. And yeah, have a nice day.
Metal Cephalopod: Yeah, you too.
