

MongoDB Atlas Powers Half a Billion Players of India's Favorite Mobile Pastime, Ludo King

>> Announcement: Some features mentioned below will be deprecated on Sep. 30, 2025.

Nothing is more human than playing games. Boards and pieces can be found from the beginnings of civilization — little scraps of technology we created to entertain ourselves. No wonder, then, that gaming is a dominant force in mobile tech. What's more surprising is that some of the most successful mobile games are versions of some of the oldest traditions.

Take Ludo. A classic board game for up to four players, it traces its direct ancestry to 6th-century India and is built from much older ideas. Players roll a die to move pieces from home along a track to a finish; the first to get all pieces there wins. You can't pass an opponent on the track, but if you land on them they go back to the start. That's it. Simple. But the way it brings players together has been enough to make Ludo the national game of the subcontinent.

Now Ludo is king of the phones, in the shape of Gametion's Ludo King app. A faithful yet stylish rendition of the board game, it retains the game's simplicity and social interaction, but at an epic scale. It topped the charts for Google Play downloads in India and reached the top ten internationally, with tens of millions of players chalking up a quarter of a billion minutes of playing time a day. At one point, player numbers quadrupled overnight. Yet all this was managed by a tiny team of developers who'd built their platform on MongoDB Atlas, the global cloud database service.

Ludo King's authentic board game emulation quickly tapped into the Indian psyche. "We had strong take-up right from 2016, when we launched the first version," says Gametion founder and CEO Vikash Jaiswal. "A million downloads in the first 25 days, and up to a million minutes of play a day by the start of 2020. We were doing very well already. Then came the lockdown and we went through the roof."
"We Just Wanted to Concentrate on the Game"

Gametion was the quintessential small gaming startup. In 2015, it had a couple of developers out of a staff of four or five, and they'd produced a suite of in-browser Flash games. The next move was obviously mobile. But at first, the company didn't move far from the idea of a simple gaming experience. Jaiswal says: "There was no database component to the Flash games, no login or user ID. We launched Ludo King in 2016 as a single-player game, and soon got the user feedback that they wanted multiplayer features. You need user accounts and user data for that."

The company takes pride in how quickly it can adopt and incorporate new technologies, explains Jaiswal, but that means finding the right technology to adopt. And the game was exhibiting demanding growth. "Ludo King was becoming very popular, so we knew we needed something that could scale. It had to be quick to learn — we didn't have time for complexity or long learning curves."

"MongoDB seemed a good fit for an underlying database. I knew it was fast and very flexible to build on, and it had lots of features. And it turned out to be a really good fit for mobile gaming — MongoDB integrates very well into our Node.js architecture. It's a native speaker."
Vikash Jaiswal, Founder and CEO, Gametion

Jaiswal's team was able to rely on MongoDB's flexible data model to continually expand the game's features, including more options for players and monetisation tactics. That's never stopped. In 2020, Gametion introduced two new in-game features: voice chat and e-greetings.

But the team had no interest in the nuts and bolts of database administration. "We didn't want to make our own backend or worry about scaling, management or any of that. We just wanted to concentrate on the game," says Jaiswal. MongoDB Atlas hadn't made its debut yet at the time — Gametion being ahead of the game — so the company chose the third-party mLab platform for hosting.
Then in 2019, after mLab was acquired by MongoDB, Inc., Gametion transitioned from mLab to MongoDB Atlas, the platform made and managed by the company behind the database.

MongoDB Atlas: A 'Native Speaker' for Mobile Gaming

Transitions can be challenging, but with the same underlying architecture and the support of MongoDB itself, this one was straightforward. In fact, it was so uneventful that Jaiswal says he can't remember it happening. "I don't recall any problems at all. There was no downtime, which I definitely would have remembered. MongoDB managed it all for us. The migration must have been very smooth."

Once on MongoDB Atlas, running on AWS's cloud infrastructure, the team — which was now five developers — quickly found the features that mattered, such as Continuous Cloud Backup and Performance Advisor. "The dashboard is very cool. We can dial up the performance we need when we need it, and see exactly what's going on."

Ludo King's Lockdown

Gametion's emphasis on common open standards and a component approach has made it easy to add other functions as the game demands, maintaining a regular schedule of updates that keeps users engaged. "You can think of it as a microservices architecture. We use Kafka to manage data movement and synchronize between services. It's another way to optimize resource use across the board without sacrificing scalability or release cadence."

[Figure: Infrastructure diagram for Ludo King]

That's something you need when you go from being one of the top mobile games in India to the uncontested champ. "At the start of March 2020, we had between 150,000 and 200,000 simultaneous users, but when lockdown hit that month, it jumped to a million, 1.5 million. We went from 8,000 IOPS to peaking at 35,000."

"With 145 million downloads in the first week of lockdown alone, quickly finding the right answers was important," says Jaiswal. "We have 50 million users a day, averaging 50 minutes of gameplay each.
Some of them are on for five, six hours at a stretch."

MongoDB is Integral to Future Growth

The future will see more features on Ludo King, such as league tables and what Gametion sees as its primary revenue generator: in-app purchases. It'll also see some brand-new games. MongoDB is integral to this strategy, both to power innovation and to manage the consequences of success. And Gametion's roadmap is growing with its market, which means it will need features for economically managing huge numbers of casual users. "Atlas Data Lake looks useful," says Jaiswal. "We want to move inactive players — those who haven't been online in a while — away from the main database, but we don't want to just delete them."

Efficiently managing hundreds of millions of users — and supporting near-instantaneous, 1,000% growth — would have once required the resources of a large corporation. But for Gametion, which still has fewer than 100 employees, these aren't limiting factors. In August 2020, Indian Prime Minister Narendra Modi even highlighted the game's success during his monthly radio programme. Ludo King is helping to fulfill the vision of popularising Indian games with a global audience.

For now, Gametion's focus is growth. And MongoDB is part of that experience, the game piece that shows where you are and implements your strategy, quietly and efficiently. MongoDB Atlas is not just a database; it's a genuine game changer.

Try MongoDB Atlas Free

October 9, 2020

The Top 6 Questions From AWS re:Invent 2018

Hey there, MongoDB Community! I'm Lauren Schaefer (@Lauren_Schaefer, linkedin.com/in/laurenjanece), MongoDB's newest developer advocate. I've only been on the job a couple of weeks, but I had the opportunity to travel to fabulous Las Vegas, Nevada, last week to speak with many of you at the MongoDB Atlas booth at AWS re:Invent 2018. The people I chatted with complimented MongoDB over and over again. I heard things like, "The performance is great!" and "When I get to choose what database I use, I choose MongoDB" and "I love Mongo!" People also asked me a lot of questions. I've compiled those questions into a list of the top 6 most frequently asked questions at AWS re:Invent 2018.

6. Are the socks different sizes?

My primary job at the booth was to give out socks. And I gave out a LOT of socks. Several people told me that they wear the MongoDB socks they received at last year's conference all the time. I even had people show me the MongoDB socks they were wearing. Since I was giving out so many socks, one of the most common questions I received was, "Are the socks different sizes?" The socks were all one size, but they seemed to stretch to fit a variety of sizes -- they're built to scale!

5. What is Atlas?

To be fair, this question probably came up so frequently because I asked people, "Are you familiar with Atlas?" as I was handing them socks, to which they commonly replied, "No. What is Atlas?" You can think of MongoDB Atlas as MongoDB-as-a-service. Atlas is a fully managed, global cloud database. Atlas takes care of all the operations related to running a MongoDB database in production -- security, availability, upgrades, and patches -- so you can focus on your data and your app. You can get more details in the video below.

4. Can I see a demo of Atlas?

As you can probably imagine, people were pretty excited about Atlas when they heard about it, so they wanted to see a demo. We had experts on hand ready to give demos.
For those of you who weren't able to get a demo in person, below is a demo of how to get started with Atlas.

3. Is Atlas new?

People were very excited about Atlas and many were surprised they hadn't heard of it before. A common question was, "Is Atlas new?" No, Atlas is actually a little over two years old. It was officially announced at MongoDB World on June 28, 2016.

2. What is the Atlas pricing model?

Before people got too excited about Atlas, they wanted to know if there was a catch. They'd ask, "What's the pricing model?" Atlas has a free tier so you can tinker and begin early development without paying a thing. You don't even need to provide credit card information to get started. Once you exceed the free tier, Atlas is billed hourly based on how much you use. Check out the Atlas Pricing page for more details on the pricing model. The Atlas Pricing page also has a pricing calculator so you can estimate how much Atlas would cost for your particular use case.

1. Why would I choose MongoDB over Amazon DynamoDB?

Since we were at an Amazon conference, many people were curious about the differences between MongoDB and Amazon's DynamoDB. You can get a detailed comparison of the two on the Comparing DynamoDB and MongoDB page. Some of the key points that resonated with people at the conference were:

MongoDB provides built-in document validation. Users can enforce checks on document structure, data types, data ranges, and the presence of mandatory fields. DynamoDB, by contrast, has limited support for different data types; as a result, developers must preserve data types on the client, which adds complexity and reduces data re-use across different applications. DynamoDB does not have native data validation capabilities.

MongoDB documents can be up to 16 MB in size, whereas DynamoDB items can be up to 400 KB in size.

MongoDB provides more flexible indexing and querying.
For example, MongoDB indexes are consistent with the data they index, whereas DynamoDB indexes are sized and provisioned separately from the data. Also, MongoDB allows for querying and analyzing data in multiple ways, including single keys, ranges, faceted search, graph traversals, and geospatial queries, while DynamoDB supports key-value queries.

MongoDB can be deployed anywhere, or as a service with MongoDB Atlas on Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure, so you are not locked into a particular vendor. DynamoDB is available only as a service on AWS.

Summary

I had a blast meeting so many of you at AWS re:Invent, and I hope to meet many more of you at upcoming events! Give Atlas a shot and let me know what you think!

December 3, 2018

AWS Step Functions: Integrating MongoDB Atlas, Twilio & AWS Simple Email Service - Part 2

This is Part 2 of the AWS Step Functions overview post published a few weeks ago. If you want to get more context on the sample application business scenario, head back to read Part 1. In this post, you'll get a deep dive into the application's technical details. As a reference, the source code of this sample app is available on GitHub.

Setting up the Lambda functions

The screenshot above is the graphical representation of the state machine we will eventually be able to test and run. But before we get there, we need to set up and publish the 4 Lambda functions this Step Functions state machine relies on. To do so, clone the AWS Step Functions with MongoDB GitHub repository and follow the instructions in the Readme file to create and configure these Lambda functions. If you have some time to dig into their respective codebases, you'll realize they're all made up of just a few lines, making it simple to embed Twilio, AWS and MongoDB APIs in your Lambda function code.

In particular, I would like to point out the concise code the GetRestaurants Lambda function uses to query the MongoDB Atlas database:

    db.collection('restaurants').aggregate([
      {
        $match: {
          "address.zipcode": jsonContents.zipcode,
          "cuisine": jsonContents.cuisine,
          "name": new RegExp(jsonContents.startsWith)
        }
      },
      {
        $project: {
          "_id": 0,
          "name": 1,
          "address.building": 1,
          "address.street": 1,
          "borough": 1,
          "address.zipcode": 1,
          "healthScoreAverage": { $avg: "$grades.score" },
          "healthScoreWorst": { $max: "$grades.score" }
        }
      }
    ])

The code snippet above is a simple yet powerful example of an aggregation framework query using the $match and $project stages along with the $avg and $max accumulator operators.
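If you're more at home in plain JavaScript than in the aggregation framework, here is a rough client-side equivalent of what the $avg and $max accumulators compute for each matching restaurant. This is a sketch for illustration only (the sample restaurant data is made up); in the real application this computation runs server-side, inside the pipeline:

```javascript
// Client-side sketch of the $avg and $max accumulators used in the pipeline.
// Each restaurant document carries a "grades" array of inspection results.
function healthScores(restaurant) {
  const scores = restaurant.grades.map(g => g.score);
  return {
    name: restaurant.name,
    healthScoreAverage: scores.reduce((sum, s) => sum + s, 0) / scores.length,
    healthScoreWorst: Math.max(...scores) // a higher score means a worse inspection
  };
}

// Example with made-up inspection data:
const sample = { name: "La Masseria", grades: [{ score: 2 }, { score: 11 }, { score: 5 }] };
console.log(healthScores(sample));
// { name: 'La Masseria', healthScoreAverage: 6, healthScoreWorst: 11 }
```

The server-side version is preferable in practice: the accumulators run where the data lives, so only the two computed numbers travel over the wire instead of the whole grades array.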
In a nutshell, this aggregation filters the restaurants dataset by 3 properties (zip code, cuisine, and name) in the $match stage, returns a subset of each restaurant's properties (to minimize bandwidth usage and query latency), and computes the maximum and average health scores obtained by each restaurant (over the course of 4 years) in the $project stage. This example shows how easily you can replace SQL clauses (such as WHERE(), MAX() and AVG()) with MongoDB's expressive query language.

Creating the Step Functions state machine

Once you are done setting up and configuring these Lambda functions, it's time to finally create our Step Functions state machine. AWS created a JSON-based declarative language called the Amazon States Language, fully documented on the Amazon States Language specification page. A Step Functions state machine is essentially a JSON file whose structure conforms to this language. While you don't need to read the whole specification to understand how it works, I recommend reading the AWS Step Functions Developer Guide to understand its main concepts and artifacts.

For now, let's go ahead and create our WhatsThisRestaurantAgain state machine. Head over to the Create State Machine page in AWS Step Functions and give your new state machine a name (such as WhatsThisRestaurantAgain).
Next, copy and paste the following JSON document (also available on GitHub) into the Code text editor (at the bottom of the Create State Machine page):

    {
      "Comment": "A state machine showcasing the use of MongoDB Atlas to notify a user by text message or email depending on the number of returned restaurants",
      "StartAt": "GetRestaurants",
      "States": {
        "GetRestaurants": {
          "Type": "Task",
          "Resource": "",
          "ResultPath": "$.restaurants",
          "Next": "CountItems"
        },
        "CountItems": {
          "Type": "Task",
          "Resource": "",
          "InputPath": "$.restaurants",
          "ResultPath": "$.count",
          "Next": "NotificationMethodChoice"
        },
        "NotificationMethodChoice": {
          "Type": "Choice",
          "Choices": [
            { "Variable": "$.count", "NumericGreaterThan": 1, "Next": "SendByEmail" },
            { "Variable": "$.count", "NumericLessThanEquals": 1, "Next": "SendBySMS" }
          ],
          "Default": "SendByEmail"
        },
        "SendByEmail": { "Type": "Task", "Resource": "", "End": true },
        "SendBySMS": { "Type": "Task", "Resource": "", "End": true }
      }
    }

Once you're done pasting this JSON document, press the Refresh button of the Preview section right above the Code editor and... voilà! The state machine now shows up in its full, visual glory.

We're not quite done yet. But before we complete the last steps to get a fully functional Step Functions state machine, let me take a few minutes to walk you through some of the technical details of my state machine JSON file. Note that 4 states are of type "Task" but that their Resource attributes are empty. These 4 "Task" states represent the calls to our 4 Lambda functions and should thus reference the ARNs (Amazon Resource Names) of our Lambda functions. You might think you have to get these ARNs one by one—which might prove to be tedious—but don't be discouraged; AWS provides a neat little trick to get these ARNs automatically populated!
Simply click inside the double quotes for each Resource attribute and a drop-down list of your Lambda functions should appear (if it doesn't, make sure you are creating your state machine in the same region as your Lambda functions).

Once you have filled out the 4 empty Resource attributes with their expected values, press the Create State Machine button at the bottom. Last, select the IAM role that will execute your state machine (AWS should have conveniently created one for you) and press OK. On the page that appears, press the New execution button, enter the following JSON test document (with a valid emailTo field), and press Start Execution:

    {
      "startsWith": "M",
      "cuisine": "Italian",
      "zipcode": "10036",
      "phoneTo": "+15555555555",
      "firstnameTo": "Raphael",
      "emailTo": "raphael@example.com",
      "subject": "List of restaurants for {{firstnameTo}}"
    }

If everything was properly configured, you should get a successful result. If you see any red boxes (in lieu of a green one), check CloudWatch, where the Lambda functions log their errors -- for instance, the error you would get if you forgot to update the emailTo field I mentioned above.

And that's it (I guess you can truly say we're "done done" now)! You have successfully built and deployed a fully functional cloud workflow that mashes up various API services thanks to serverless functions. For those of you who are still curious, read on to learn how that sample state machine was designed and architected.

Design and architecture choices

Let's start with the state machine design. The GetRestaurants function queries a MongoDB Atlas database of restaurants using some search criteria provided by our calling application, such as the restaurant's cuisine type, its zip code, and the first few letters of the restaurant's name. It retrieves a list of matching restaurants and passes that result to the next function (CountItems).
As I pointed out above, it uses MongoDB's aggregation framework to retrieve the worst and average health scores granted by New York's Health Department during its food safety inspections. That data provides the end user with information on the presumed cleanliness and reliability of the restaurant she intends to go to. Visit the aggregation framework documentation page to learn more about how you can leverage it for advanced insights into your data.

The CountItems function counts the number of restaurants; we'll use this number to determine how the requesting user is notified. If we get a single restaurant match, we'll send the name and address of the restaurant to the user's cell phone using the SendBySMS function. However, if there's more than one match, it's probably more convenient to display that list in a table format, so we'll send the user an email using the SendByEmail function.

At this point, you might ask yourself: how is the data passed from one Lambda function to another? As it turns out, the Amazon States Language provides developers with a flexible and efficient way of treating inputs and outputs. By default, the output of a state machine function becomes the input of the next function. That doesn't exactly work for us, since the SendBySMS and SendByEmail functions must know the user's cell phone number or email address to work properly. An application using our state machine would have no choice but to pass all these parameters as a single input to the state machine, so how do we go about solving this issue? Fortunately for us, the Amazon States Language has the answer: it allows us to easily append the result of a function to the input it received and forward the combined result to the next function.
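Conceptually, ResultPath behaves like a merge: the task's output is grafted onto the input document at the given path. Here is a toy JavaScript illustration of that behavior — my own sketch handling only simple "$.field" paths, not the real AWS implementation, which supports fuller JSONPath expressions:

```javascript
// Toy illustration of ResultPath semantics for simple "$.field" paths:
// the task result is appended to the input instead of replacing it.
function applyResultPath(input, resultPath, result) {
  const field = resultPath.replace(/^\$\./, ""); // "$.restaurants" -> "restaurants"
  return { ...input, [field]: result };
}

const input = { cuisine: "Italian", zipcode: "10036", emailTo: "raphael@example.com" };
const restaurants = [{ name: "La Masseria" }, { name: "Cara Mia" }];

const output = applyResultPath(input, "$.restaurants", restaurants);
console.log(Object.keys(output));
// [ 'cuisine', 'zipcode', 'emailTo', 'restaurants' ]
```

The key point is that the original fields survive the task: downstream states can still read the user's email address or phone number even though the intermediate task only produced a list of restaurants.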
Here's how we achieved this with our GetRestaurants function:

    "GetRestaurants": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "ResultPath": "$.restaurants",
      "Next": "CountItems"
    }

Note the ResultPath attribute above, where we instruct Step Functions to append the result of our GetRestaurants task (an array of matching restaurants) to the input it received, whose structure is the test JSON document I mentioned above (duplicated here for reading convenience):

    {
      "startsWith": "M",
      "cuisine": "Italian",
      "zipcode": "10036",
      "phoneTo": "+15555555555",
      "firstnameTo": "Raphael",
      "emailTo": "raphael@example.com",
      "subject": "List of restaurants for {{firstnameTo}}"
    }

This input contains all the information my state machine might need, from the search criteria (startsWith, cuisine, and zipcode) to the user's cell phone number (if the state machine ends up using the SMS notification method), first name, email address, and email subject (if the state machine ends up using the email notification method).

Thanks to the ResultPath attribute we set on the GetRestaurants task, its output has a structure similar to the following JSON document (note the appended restaurants array):

    {
      "firstnameTo": "Raphael",
      "emailTo": "raphael@example.com",
      "subject": "List of restaurants for {{firstnameTo}}",
      "restaurants": [
        {
          "address": { "building": "235-237", "street": "West 48 Street" },
          "borough": "Manhattan",
          "name": "La Masseria"
        },
        {
          "address": { "building": "315", "street": "West 48 Street" },
          "borough": "Manhattan",
          "name": "Maria'S Mont Blanc Restaurant"
        },
        {
          "address": { "building": "654", "street": "9 Avenue" },
          "borough": "Manhattan",
          "name": "Cara Mia"
        }
      ]
    }

As expected, the restaurants sub-document has been properly appended to our original JSON input. That output becomes by default the input for the CountItems function. But we don't want that function to have any dependency on the input it receives.
Since it's a helper function, we might want to use it in another scenario where the input structure is radically different. Once again, the Amazon States Language comes to the rescue with the optional InputPath parameter. Let's take a closer look at our CountItems task declaration in the state machine's JSON document:

    "CountItems": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "InputPath": "$.restaurants",
      "ResultPath": "$.count",
      "Next": "NotificationMethodChoice"
    }

By default, the InputPath value is the whole output of the preceding task (GetRestaurants in our state machine). The Amazon States Language allows you to override this default by explicitly setting InputPath to a specific value or sub-document. As you can see in the JSON fragment above, this is exactly what I have done to pass only an array of JSON elements to the CountItems Lambda function (in my case, the array of restaurants we received from the previous GetRestaurants function), thereby making it agnostic to any JSON schema. Conversely, the result of the CountItems task is stored in a new count attribute that serves as the input of the NotificationMethodChoice state that follows:

    "NotificationMethodChoice": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.count", "NumericGreaterThan": 1, "Next": "SendByEmail" },
        { "Variable": "$.count", "NumericLessThanEquals": 1, "Next": "SendBySMS" }
      ],
      "Default": "SendByEmail"
    }

The logic here is fairly simple: if the restaurant count is greater than one, the state machine sends an email message with a nicely formatted table of the restaurants to the requesting user's email address. If only one restaurant is returned, we send a text message to the user's phone number (using Twilio's SMS API), since it's probably faster and more convenient for single-row results (especially since the user might be on the move while requesting this piece of information).
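To see why InputPath keeps CountItems schema-agnostic, here is a companion toy model in JavaScript — again my own sketch, handling only simple "$.field" paths rather than the full JSONPath syntax the real service supports:

```javascript
// Toy illustration of InputPath semantics for simple "$.field" paths:
// the task sees only the selected sub-document of the state input.
function applyInputPath(input, inputPath) {
  if (!inputPath || inputPath === "$") return input; // default: the whole input
  return input[inputPath.replace(/^\$\./, "")];
}

const stateInput = {
  emailTo: "raphael@example.com",
  restaurants: [{ name: "La Masseria" }, { name: "Cara Mia" }]
};

// With InputPath "$.restaurants", CountItems receives only the array,
// so counting works no matter what the rest of the state document looks like.
const taskInput = applyInputPath(stateInput, "$.restaurants");
const count = taskInput.length;
console.log(count); // 2
```

Because the helper only ever sees an array, you could reuse it unchanged in a state machine with a completely different state document, which is exactly the point of the InputPath override.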
Note that my JSON "code" actually uses the NumericLessThanEquals operator to trigger the SendBySMS task, not the NumericEquals operator as it really should. So, technically speaking, even if no result is returned by the GetRestaurants task, the state machine will still send a text message to the user with no restaurant information whatsoever! I'll leave it up to you to fix this intentional bug.

Next steps

In this post, I showed you how to create a state machine that orchestrates calls to various cloud services and APIs using a fictitious restaurant search and notification scenario. I hope you enjoyed this tutorial explaining how to deploy and test that state machine using the AWS console. Last, I went through various design and architecture considerations, with a focus on the data flow capabilities available in Step Functions.

If you haven't done so already, sign up for MongoDB Atlas and create your free M0 MongoDB cluster in minutes. Next, you can get more familiar with AWS Lambda development and deployment by following our 101 Lambda tutorial. If you already have some experience with AWS Lambda, Developing a Facebook Chatbot with AWS Lambda and MongoDB Atlas will walk you through a richer use case. As a last step, you might be interested in Step Functions' integration with API Gateway to learn how to call a state machine from an external application.

About the Author - Raphael Londner

Raphael Londner is a Principal Developer Advocate at MongoDB, focused on cloud technologies such as Amazon Web Services, Microsoft Azure and Google Cloud Platform. Previously he was a developer advocate at Okta as well as a startup entrepreneur in the identity management space. You can follow him on Twitter at @rlondner.

May 17, 2017

AWS Step Functions: Integrating MongoDB Atlas, Twilio & AWS Simple Email Service - Part 1

A few weeks ago, I took a close look at AWS Lambda functions, and I hope you enjoyed that tutorial. I highly suggest that you read it first if you have no previous experience with AWS Lambda, as I will build on that knowledge for the second leg of our road trip into the developer-centric services available in Amazon Web Services. Indeed, I'd like to continue our discovery journey by investigating AWS Step Functions. If that name doesn't ring a bell, don't worry: Step Functions were recently introduced at AWS re:Invent 2016 and are still relatively unknown, despite their incredible potential, which I'll try to make apparent in this post.

Lambda functions are great, but...

If you're building a real cloud-native app, you probably already know that your app won't be able to rely on one single Lambda function. As a matter of fact, it will rely on a multitude of functions that interact with multiple systems, such as databases, message queues and even physical or virtual servers (yes, even in a serverless world, there are still servers!). And as the screenshot below tries to depict, your app will need to call them at certain times, in a specific order and under specific conditions.

© Amazon Web Services (from the AWS re:Invent 2016 Step Functions SVR201 session)

Orchestrating all these calls is a tedious endeavor that falls into the developer's lap (you!). Wouldn't it be nice if there were some sort of cloud technology that could help us reliably deal with all that coordination work?

Introducing AWS Step Functions

In a nutshell, Step Functions are a cloud workflow engine: they aim at solving the issue of orchestrating multiple serverless functions (such as AWS Lambdas) without having to rely on the application itself to perform that orchestration. Essentially, Step Functions allow us to design visual workflows (or "state machines," as AWS refers to them) that coordinate the order and conditions in which serverless functions should be called.
Surely enough, the concepts of orchestration and workflows have been around for quite some time, so there's nothing groundbreaking about them. As a matter of fact, AWS released its own Simple Workflow Service back in 2012, before serverless became the cool kid on the block. What's interesting about Step Functions, though, is that they provide an integrated environment primarily designed to ease the orchestration of AWS Lambda functions. And as Lambda functions become more and more popular, AWS Step Functions turn out to be exactly what we need!

So what's a good use case to employ Step Functions?

For those of you who have no idea what the screenshot above means, don't worry! In my next post, I'll dive into the technical details of the sample app it's taken from. For now, let's just say that it's the visual representation of the state machine I built. But you may ask yourself: what does this state machine do exactly, and why is it relevant to the topic today?

Here's the fictitious (but probably not too hypothetical) use case I tried to solve with it: you went to a great Italian restaurant in New York, but you don't quite remember its exact name. The food was so delicious you'd really like to go back there! (You might think only Dory - or me - fails to remember an amazing restaurant, but I'm sure that happens even to the best of us.) Wouldn't it be useful if you could get notified instantly about the possible restaurant matches, with their exact names and addresses, in your area? Ideally, if there happens to be only one match, you'd like to get the notification via text message (or "SMS" in non-US parlance). But if there are a lot of matches, a text message might be difficult to read, so you'd rather get an email instead with the list of restaurants matching your search. Now, I'm quite sure that service already exists (Yelp, anyone?)
but I thought it was a good use case to demonstrate how Step Functions can help you solve a business process requirement, as well as Step Functions' ability to easily mash up different APIs and services into one single workflow.

How did I go about building such a state machine? As I was envisioning this sample app built with AWS Step Functions, I thought about the required components I'd have to build, and boiled them down to 3 AWS Lambda functions:

A GetRestaurants function that queries a collection of restaurants stored in a MongoDB Atlas database.

A SendBySMS function that sends a text message using SMS by Twilio if the GetRestaurants query returns only one restaurant.

A SendByEmail function that sends an email using AWS Simple Email Service if the GetRestaurants function returns more than one restaurant.

If you look closely at the screenshot above, you will probably notice I seemingly forgot a step: there's indeed a fourth Lambda helper function named CountItems, whose purpose is simply to count the items returned by the GetRestaurants function and pass that count on to the NotificationMethodChoice branching logic. Granted, I could have easily merged that helper function into the GetRestaurants function, but I chose to leave it because I figured it was a good way to experiment with Step Functions' input and output flexibility and showcase their power to you (more about this topic in my next post). It's a Step Functions technique I've used extensively to pass my initial input fields down to the final SendBy* Lambda functions.

I hope you liked this short introduction to AWS Step Functions and the use case of the sample app I built to demonstrate their usefulness. You can now read Part 2 here!

Enjoyed this post? Replay our webinar, where we have an interactive tutorial on serverless architectures with AWS Lambda.
Watch Serverless Architectures with AWS Lambda and MongoDB Atlas

About the Author - Raphael Londner

Raphael Londner is a Principal Developer Advocate at MongoDB, focused on cloud technologies such as Amazon Web Services, Microsoft Azure and Google Cloud Engine. Previously, he was a developer advocate at Okta, as well as a startup entrepreneur in the identity management space. You can follow him on Twitter at @rlondner.

March 30, 2017

Provisioned IOPS On AWS Marketplace Significantly Boosts MongoDB Performance, Ease Of Use

One of the largest factors affecting the performance of MongoDB is the choice of storage configuration. As data sets exceed the size of memory, the random IOPS rate of your storage will begin to dominate database performance. How you split your logs, journal, and data files across drives will impact both the performance and the maintainability of your database. Even the choice of filesystem and read-ahead settings can have a major impact. A large number of the performance issues we encounter in the field are related to misconfigured or under-provisioned storage. Storage configuration is often more important than instance size in determining the expected performance of a MongoDB server.

MongoDB With Provisioned IOPS: Better Performance, Less Guesswork

That's why we're excited to announce the availability of MongoDB with bundled storage configurations on the Amazon Web Services (AWS) Marketplace. Working closely with the Marketplace and EBS teams, we've made available a new set of MongoDB AMIs that not only include the world's leading document database software installed and configured according to our best practices, but also include high-performance storage configurations that leverage Amazon's Provisioned IOPS (pIOPS) storage volumes, including Amazon's new 4000-IOPS pIOPS drives. These options take a lot of the guesswork out of running MongoDB on EC2 and help ensure a great out-of-the-box experience without any additional setup on your part. These configurations offer radically improved performance for MongoDB, even on datasets much larger than RAM. If you want to take MongoDB for a spin, or set up your first production cluster, we recommend starting with these images. We plan to keep extending this set of configurations to give you more choices to address different workloads and use cases.
The MongoDB with Bundled Storage AMI is available today in three configurations:

- MongoDB 2.4 with 1000 IOPS
- MongoDB 2.4 with 2000 IOPS
- MongoDB 2.4 with 4000 IOPS

The choice of configuration will depend on how much storage capacity you want to put behind your MongoDB instance. For comparison, we have found that ephemeral storage and regular (non-pIOPS) EBS volumes can reliably deliver about 100 IOPS on a sustained basis. That means these configurations can deliver 10x-40x higher out-of-memory throughput than non-pIOPS-based setups. There's no charge from 10gen for using these AMIs. You pay only the EC2 usage charges for the instances and disk volumes used by your setup. Take them for a test drive and please let us know what you think.

Implications Of Using MongoDB With pIOPS

Here's what you get when you use these instances:

Separate Volumes For Data, Journal And Logs

When you launch the AMI on an EC2 instance, three EBS volumes will be attached: one each for data, journal, and logs. By putting these on separate volumes, we decrease contention for disk access during high-load scenarios and avoid the head-of-line blocking that can otherwise occur. The data volume is provisioned at 200GB or 400GB, with IOPS rates of 1000, 2000, and 4000. For write-heavy workloads, this helps ensure that the background flush can be synced to disk quickly. For read-heavy workloads, the IOPS rate of the drive determines the rate at which a random document, or b-tree bucket, can be loaded from disk into memory. The journal gets its own 25GB drive provisioned at 250 IOPS. While 25GB is large for the journal, we wanted to make sure we had enough IOPS to handle the journal load and to provide sufficient capacity for reading the journal during a recovery. In order to maintain the 10:1 ratio of IOPS to volume size imposed by EBS, we made it a little bigger than needed.
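To make the sizing arithmetic concrete, here is a small sketch (my own, not part of the AMI tooling) that computes the minimum EBS volume size under the constraint described above, assuming EBS allows at most 10 provisioned IOPS per GB:

```python
import math

MAX_IOPS_PER_GB = 10  # EBS pIOPS constraint described above: at most 10 IOPS per GB

def min_volume_size_gb(piops):
    """Smallest EBS volume (in GB) that can be provisioned at `piops` IOPS."""
    return math.ceil(piops / MAX_IOPS_PER_GB)

if __name__ == "__main__":
    for iops in (1000, 2000, 4000, 250):
        print(iops, "IOPS needs at least", min_volume_size_gb(iops), "GB")
```

This lines up with the bundled volumes: a 4000-IOPS data volume must be at least 400GB, and the journal's 250 IOPS require a 25GB drive, which is why it is a little bigger than the journal itself needs.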
Separating the journal onto its own volume ensures that a journal flush is never queued behind the big IOs that happen when data files are synced. The log volumes are provisioned at 10GB, 15GB, and 20GB sizes with 100, 150, and 200 IOPS. This gives you plenty of room for log storage, as well as predictable storage performance for collecting log data.

Pre-tuned Filesystem And OS Configuration

We've pre-configured the EXT4 filesystem, sensible mount options, read-ahead and ulimit settings into the AMI. pIOPS EBS volumes are rated for 16KB IOs, so using read-ahead values larger than this size actually leads to decreased throughput. We've set this up out of the box.

Amazon Linux With Pre-installed Software And Repositories

We started with Amazon's latest and greatest Linux AMI, and then added 10gen's RPM repo. No more adding a repo to get access to the latest software version. We've also pre-installed MongoDB, the 10gen MMS agent, and various useful software utilities like sysstat (which contains the useful iostat utility) and munin-node (which MMS can use to access host statistics). The MMS agent is deactivated by default, but can be activated simply by adding your MMS account ID and then starting the agent.

A New Wave Of MongoDB Adoption In The Cloud

A significant percentage of MongoDB applications are currently deployed in the cloud. We expect this percentage to continue to grow as enterprises discover the cost and agility benefits of running their applications on clouds like AWS. As such, it's critically important that MongoDB run exceptionally well on Amazon, and with the addition of pIOPS to the MongoDB AMIs on Marketplace, MongoDB performance in the cloud just got a big boost. We look forward to continuing to work closely with Amazon to facilitate MongoDB performance improvements on AWS.

May 7, 2013