Create rate limiting / throttling strategies for Atlas App Services

My organization uses an Atlas App Services backend instance for its production web application. How can I apply rate limits / throttles to calls to Functions? For example, apply a hard limit of 200 Function calls, or apply a rate limit of 3 Function calls per minute.

I understand the question has been asked before here.

I’d like to know what strategies are currently available for implementing these features. Here is one strategy I have come up with so far:

  • Front the Atlas App Services backend instance with an API built on Amazon API Gateway using an HTTP custom integration, then configure usage plans and API keys so users access the API through API Gateway, which throttles requests based on defined limits and quotas.
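To make the usage-plan idea concrete, here is a sketch of the settings it describes, shaped as the input object for the AWS SDK v3 `CreateUsagePlanCommand` (`@aws-sdk/client-api-gateway`). The plan name, API id, stage, and region are hypothetical placeholders, not values from my setup:

```javascript
// Usage-plan sketch for the limits mentioned above: ~3 calls/minute
// throttling plus a hard quota of 200 calls. All ids/names are placeholders.
const usagePlanParams = {
  name: "app-services-plan",              // hypothetical plan name
  apiStages: [
    { apiId: "abc123", stage: "prod" },   // hypothetical API and stage
  ],
  throttle: {
    rateLimit: 3 / 60, // steady-state rate in requests/second (~3 per minute)
    burstLimit: 3,     // maximum burst size
  },
  quota: {
    limit: 200,        // hard cap of 200 calls...
    period: "DAY",     // ...per period (DAY, WEEK, or MONTH)
  },
};

// With AWS credentials configured, you would then send it along the lines of:
//   const { APIGatewayClient, CreateUsagePlanCommand } =
//     require("@aws-sdk/client-api-gateway");
//   const client = new APIGatewayClient({ region: "us-east-1" });
//   await client.send(new CreateUsagePlanCommand(usagePlanParams));
console.log(JSON.stringify(usagePlanParams.quota));
```

Requests over the quota or throttle are then rejected by API Gateway with HTTP 429 before they ever reach App Services.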

However, is there anything I can do within Atlas App Services itself to achieve something similar?
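One in-App-Services pattern worth considering (an assumption on my part, not an official feature) is to keep per-user call counters in an Atlas collection and have each Function check its counter before doing real work. Below is a minimal sketch of the fixed-window counting logic, using an in-memory `Map` in place of the collection so it runs standalone; inside a Function you would back it with a collection obtained via `context.services.get("mongodb-atlas")` instead, and all names here are hypothetical:

```javascript
// Fixed-window rate limiter: allow at most `limit` calls per `windowMs`
// for each key (e.g. a user id). The Map stands in for an Atlas collection.
const windows = new Map(); // key -> { windowStart, count }

function allowCall(key, limit = 3, windowMs = 60_000, now = Date.now()) {
  const entry = windows.get(key);
  if (!entry || now - entry.windowStart >= windowMs) {
    // First call in a fresh window: start a new counter.
    windows.set(key, { windowStart: now, count: 1 });
    return true;
  }
  if (entry.count < limit) {
    entry.count += 1;
    return true;
  }
  return false; // over the limit for this window; reject the Function call
}

// Example: with a limit of 3 per minute, the 4th call in the window is denied.
const t0 = Date.now();
const results = [1, 2, 3, 4].map(() => allowCall("user-42", 3, 60_000, t0));
console.log(results); // [ true, true, true, false ]
```

In the collection-backed version, the read-and-increment would need to be a single atomic `findOneAndUpdate` so concurrent Function calls can't race past the limit.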

Hello, bumping up my post to get some replies, hopefully.

Without knowing what the data is for, how static it is, how much of it there is, your budget, or what the queries are doing, I’m going to put down an idea that I’ve often used to work around systems with no simple rate limiting. The main reason is that users will often find ways around rate limiting anyway, so let’s explore another possibility, though it rests on some assumptions that may make it unsuitable for your case. Hopefully it helps.

One word: caching.

Instead of rate limiting, if possible think of it this way: how can I remove the desire for someone to spam requests? Looking at some options, consider something like Cloudflare, where the data is served from a host piped through Cloudflare and you let them handle the requests. Services like Cloudflare often have automatic rate limiting (which can ban IPs that spam too much too quickly, if you want that), but more importantly they also have good caching services that take little effort to set up. A Cloudflare Worker, for example, could query your data and cache it for quite a long time.

This is just one quick example, but you would need to keep an open mind and explore many possibilities. If you can cache it, cache it and don’t rate limit, because if you can cache it you have so many more options available to you. You could also run a service like Google Cloud Functions, AWS Lambda, or App Services Triggers on timers to push the results a user often asks for into a cache/storage/hosting layer, so you never have to open the gates to direct user interaction that requires rate limiting. If you go the caching route and implement it correctly, it wouldn’t matter whether they make 1 request or 1,000 requests in 10 minutes: if your cache TTL says 10 minutes, it’s going to be 10 minutes before they see new data. People don’t really spam if they’re going to get the same answer no matter what.
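The timer-driven refresh idea above can be sketched like this: a scheduled job (a Scheduled Trigger, cron Lambda, etc.) periodically recomputes the expensive result and writes it to a cache, while user requests only ever read the cached copy. Every name here is illustrative, and the scheduler is simulated by calling the refresh directly:

```javascript
// Scheduled-refresh cache: the expensive query runs on a timer, never in
// the user-facing path, so spamming reads cannot increase backend load.
let cached = { value: null, refreshedAt: 0 };

async function expensiveQuery() {
  // Stand-in for the real Function / database call you want to protect.
  return { total: 1234 };
}

async function refreshCache(now = Date.now()) {
  // In production this body would run from a Scheduled Trigger or cron job.
  cached = { value: await expensiveQuery(), refreshedAt: now };
}

function handleUserRequest() {
  // User requests only read the cache: 1 or 1,000 calls cost the same.
  return cached.value; // stale until the next scheduled refresh
}

(async () => {
  await refreshCache();
  console.log(handleUserRequest()); // { total: 1234 }
})();
```

The trade-off is staleness: users see data at most one refresh interval old, which is exactly the property that removes the incentive to spam.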

Good luck!

Thank you for the suggestions!

I shall look into caching, as well as configuring App Services Triggers and other serverless functions on timers to push the data to a cache/storage/hosting service.