LiftIgniter offers fairly generous rate limits. Generally, these limits are tuned to each customer's use case at the time of launch, without any specific work needed from the customer. This documentation explains how our rate limiting works, in case you ever see rate limit errors.
See also our Uptime and Latency documentation.
Our rate limits are based on the principle that at any given time, the fraction of our resources spent on any one organization or any one end user should not be too large. This is consistent with the way we handle infrastructure through autoscaling: in practice, you will be able to achieve higher rate limits during hours of higher traffic.
While the guidelines here are intended to be helpful, keep in mind that we do not provide full detail on the techniques we use to block fake traffic. If you are a customer and want more insight into how this affects you, please feel free to contact Support.
1. Javascript beacon
LiftIgniter does not apply any rate limits to loading the Javascript beacon.
2. Activity endpoint (API as well as pixel endpoint for Javascript integrations)
At the organization level, we accept 25,000 activities per minute (roughly 417 activities per second) per core (a core is a unit of computational power). In practice, at any given time, we have dozens of cores available across several regions, so we should be able to accept well over 100,000 activities per minute per organization.
There is no end-user-level blocking for sending activities; in other words, we do not forbid a single user from sending a large number of activities. However, our backend machine learning systems may discard activities that look like they originated from bots or otherwise suspicious user behavior. Such activities, although logged, should therefore have minimal effect on our machine learning.
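If you send activities from your servers rather than via the Javascript beacon, a simple client-side throttle keeps you comfortably below the per-organization limit. The sketch below is illustrative only: the endpoint URL, the apiKey parameter, and the payload shape are placeholders, not LiftIgniter's actual activity API.

```python
import time
import requests

# Placeholder endpoint and payload shape; consult your integration guide
# for the actual activity URL and required fields.
ACTIVITY_URL = "https://activity.example.invalid/activity"
MAX_PER_MINUTE = 20000  # stay comfortably below the per-organization limit

def send_activities(activities, api_key):
    """Send activities one at a time, throttled to at most MAX_PER_MINUTE."""
    interval = 60.0 / MAX_PER_MINUTE  # minimum spacing between requests, in seconds
    for activity in activities:
        resp = requests.post(
            ACTIVITY_URL,
            params={"apiKey": api_key},  # parameter name is an assumption
            json=activity,
            timeout=5,
        )
        resp.raise_for_status()
        time.sleep(interval)  # simple client-side throttle
```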
3. Query endpoint for getting recommendations
There are two kinds of rate limits we apply. Queries that we reject due to rate limiting are not included as API calls in LiftIgniter's reporting and billing. Our rate limiting therefore protects our customers from incurring unusually high charges due to fake users.
Organization-level rate limiting
At the organization level, we accept 150 requests per minute (or 2.5 per second) per core. This translates to at least 2,400 requests per minute, because we have at least 16 cores available in any region that can be used by any organization. At peak traffic hours, we can accept significantly higher traffic levels because we use autoscaling to obtain more resources to process queries.
We can disable or raise these rate limits for customers who expect to frequently encounter traffic levels higher than 10,000 requests per minute.
For more on how our technology stack works, see Uptime and Latency.
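If your servers occasionally burst past these limits, retrying 429 responses with exponential backoff is usually sufficient. The sketch below is a minimal illustration; the query URL and payload you pass in are placeholders rather than LiftIgniter's actual query API.

```python
import time
import requests

def query_with_backoff(url, payload, max_retries=5):
    """POST a model query, backing off exponentially on 429 responses."""
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.post(url, json=payload, timeout=5)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        time.sleep(delay)  # rate limited: wait, then retry
        delay *= 2         # exponential backoff
    raise RuntimeError(f"still rate limited after {max_retries} retries")
```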
User-level rate limiting
For client-side integrations using our Javascript SDK, we apply user-level rate limits to model queries, keyed on each end user's IP address. At the level of an individual user, we allow 400 requests per minute on average. At peak hours, we can accept higher traffic levels. If the requests are made very concurrently (i.e., all at once), we might rate-limit them even if the average rate is less than 400 requests per minute.
For server-side integrations, we do not apply user-level rate limiting based on the server that sends us the request. Rather, we apply this rate limiting only if you include a field called host in the model queries, and we rate-limit based on the value of that field.
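A minimal sketch of what this could look like in a server-side integration is shown below. Only the host field comes from this documentation; the endpoint URL, the apiKey parameter, and the other payload fields are placeholders.

```python
import requests

QUERY_URL = "https://query.example.invalid/query"  # placeholder, not the real endpoint

def model_query_for_user(end_user_ip, slot, api_key):
    """Server-side model query that opts in to user-level rate limiting.

    Setting "host" to the end user's address lets the request be rate-limited
    per end user instead of per server. The other field names here are
    placeholders, not the actual query schema.
    """
    payload = {
        "host": end_user_ip,  # value used for user-level rate limiting
        "slot": slot,
    }
    resp = requests.post(QUERY_URL, params={"apiKey": api_key}, json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()
```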
If you are an end user who is executing a bot or scraper on one of LiftIgniter's customer sites, and are receiving 429 errors from LiftIgniter, one thing you can do to reduce the occurrence of 429s is to include "bot" in your virtual browser's user agent string. Most web search engine bots (like the Google bot) already do so. Doing this turns off history-based personalization, so that the recommendations for the user are not based on any of the items they previously visited. In exchange, the load is distributed over a larger computational pool. In particular, the number of allowed requests roughly doubles from 400 to 800.
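For example, a Python scraper could declare itself as a bot with a user agent string like the one below (the name and URL are placeholders):

```python
import requests

# Identifying the scraper as a bot routes it to the larger, non-personalized
# pool and roughly doubles the allowed request rate.
session = requests.Session()
session.headers.update({
    "User-Agent": "example-crawler-bot/1.0 (+https://example.com/bot-info)"
})

response = session.get("https://customer-site.example/some-article")
```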
4. Inventory endpoint
Our inventory endpoint has a batch insertion limit of 10,000 inventory items per batch. For best performance, we recommend batch sizes of no more than 1,000 items. The reason for keeping batch sizes to 1,000 or fewer is that each batch is processed by a single frontend server, so sending a larger number of medium-sized batches ensures that the load is distributed evenly across our frontend servers, making the insertion faster and more efficient.
In total, we can accept 30,000 new inventory items per minute per organization per core. In practice, this could allow for the insertion of over 100,000 inventory items per minute.
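Putting these two guidelines together, a bulk upload can simply split the catalog into batches of at most 1,000 items and insert them one batch at a time. The sketch below assumes a placeholder endpoint URL, apiKey parameter, and payload shape; only the batch-size recommendation comes from this documentation.

```python
import requests

INVENTORY_URL = "https://inventory.example.invalid/inventory"  # placeholder
BATCH_SIZE = 1000  # recommended maximum batch size

def upload_inventory(items, api_key):
    """Insert inventory items in batches of at most BATCH_SIZE.

    Smaller batches spread the work across more frontend servers, so the
    full insertion finishes sooner than a single 10,000-item batch would.
    """
    for start in range(0, len(items), BATCH_SIZE):
        batch = items[start:start + BATCH_SIZE]
        resp = requests.post(
            INVENTORY_URL,
            params={"apiKey": api_key},     # parameter name is an assumption
            json={"items": batch},          # payload shape is a placeholder
            timeout=10,
        )
        resp.raise_for_status()
```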