
The Power of Edge Computing: Enhancing Performance and Cost Efficiency

Jan 31

3 min read

Douglas Cardoso




Edge computing, a topic that predates the rise of AI, was not an area I had explored extensively before. My previous experience was primarily with Lambda functions, microservices, and similar technologies. However, edge computing encompasses far more, offering significant savings in both time and cost for our customers.


To illustrate this, consider the following example:


A customer’s website required a POST request to an API endpoint for each new user in order to fetch that user’s related data. The data is determined by parameters included in the request body, such as browser, path, UTM, and language. Based on these parameters, a single record is retrieved from the database. If multiple records match, one is selected using a Session ID passed in the request header, which feeds a mathematical calculation that determines the index. For product consistency, repeated requests with identical parameters and Session ID must always return the same result.
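The post does not spell out the exact calculation, but a minimal sketch of this kind of deterministic selection could look like the following, assuming the Session ID is hashed and taken modulo the number of matching records (the record fields and hash choice are illustrative, not the customer’s actual logic):

```typescript
// Minimal sketch of deterministic record selection (assumed approach:
// hash the Session ID and use it modulo the number of matching records).
interface UserRecord {
  id: string;
  browser: string;
  path: string;
  utm: string;
  language: string;
}

// Simple, stable string hash (FNV-1a); any deterministic hash works here.
function hashSessionId(sessionId: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < sessionId.length; i++) {
    hash ^= sessionId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

// Same parameters + same Session ID always yield the same record.
function selectRecord(matches: UserRecord[], sessionId: string): UserRecord {
  const index = hashSessionId(sessionId) % matches.length;
  return matches[index];
}
```

Because the hash depends only on the Session ID, the same client always lands on the same index as long as the set of matching records is stable.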


The challenge arises when millions of users access the site simultaneously. While frontend caching is feasible, caching the API responses is not, because each client carries a unique Session ID. Storing data in Redis under a parameter-based key would improve response times, but full request caching would still be impossible: the API does not manage client sessions, so the result cannot be stored in a client session either. Without session storage, the API receives an excessive number of requests, leading to inefficiencies.



In a multi-tenant, multi-site scenario, this complexity grows dramatically, since the number of requests is multiplied by the number of sites. To clarify further: this API is a GraphQL service that uses Stellate for edge caching. However, as mentioned earlier, these specific query requests cannot be cached because of the Session ID dependency. Additionally, Stellate is a very expensive service, and caching millions of requests would incur significant costs for my customer.


You might be wondering why we use edge caching at all. If we bypassed Stellate and had the client call the API directly, the API would be overwhelmed by a volume of requests far beyond its intended capacity. More critically, site navigation would become sluggish, as every visitor would have to wait for the API to process their request individually instead of benefiting from the speed and efficiency of distributed caching. This is where edge computing provides a solution: a function, implemented with Cloudflare Workers, handles these requests while still leveraging Stellate’s caching capabilities.


Implementation Steps:

  1. Update the API to offer a new query that returns data without filtering by parameters or Session ID.

  2. Deploy a function on Cloudflare Workers to handle POST requests. This function receives the Session ID and parameters, queries Stellate for the complete dataset using the new query, and caches the results.

  3. Once the data is retrieved, the function filters the records based on the parameters and selects the appropriate record using the Session ID, transferring this logic from the API to the function (see the sketch below).
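A minimal sketch of steps 2 and 3 as a Cloudflare Worker might look like the following. The Stellate endpoint, the AllUserRecords query, the X-Session-Id header, and the field names are all hypothetical placeholders, not the customer’s actual schema:

```typescript
// Sketch of a Worker handling the POST request (steps 2 and 3).
// STELLATE_URL, the AllUserRecords query, and the X-Session-Id header
// are placeholders; adapt them to the real schema and conventions.
export interface Env {
  STELLATE_URL: string; // Stellate edge endpoint in front of the GraphQL API
}

interface UserRecord {
  browser: string;
  path: string;
  utm: string;
  language: string;
}

interface RequestParams {
  siteId: string;
  browser: string;
  path: string;
  utm: string;
  language: string;
}

// Deterministic hash of the Session ID (same idea as the earlier sketch).
function hashSessionId(sessionId: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < sessionId.length; i++) {
    hash ^= sessionId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }

    const sessionId = request.headers.get("X-Session-Id") ?? "";
    const params = (await request.json()) as RequestParams;

    // Step 2: fetch the complete, unfiltered dataset through Stellate,
    // which caches it per site (tenant).
    const gqlResponse = await fetch(env.STELLATE_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        query: `query AllUserRecords($siteId: ID!) {
          allUserRecords(siteId: $siteId) { browser path utm language }
        }`,
        variables: { siteId: params.siteId },
      }),
    });
    const { data } = (await gqlResponse.json()) as {
      data: { allUserRecords: UserRecord[] };
    };

    // Step 3: filter by the request parameters and select one record
    // deterministically from the Session ID (logic moved out of the API).
    const matches = data.allUserRecords.filter(
      (r) =>
        r.browser === params.browser &&
        r.path === params.path &&
        r.utm === params.utm &&
        r.language === params.language
    );
    const selected =
      matches.length > 0
        ? matches[hashSessionId(sessionId) % matches.length]
        : null;

    return Response.json(selected);
  },
};
```

The key design point is that Stellate now sees only one query shape per site, so its cache key no longer depends on the Session ID.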



However, this implementation introduces a new issue: every request to the function results in a cache hit on Stellate. Since caching is now keyed by the site ID (tenant), fewer requests reach the API, which is the intended benefit, but millions of requests are still being made to Stellate, incurring significant costs.


To address this, internal caching must be implemented within the function. In my case, I used Cloudflare KV to store the retrieved data, which significantly reduces the number of Stellate cache requests and further minimizes the load on the API.
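A sketch of that KV layer, reusing the hypothetical names from the Worker above (DATASET_KV is an assumed KV namespace binding, and the one-hour TTL matches the cache duration mentioned later):

```typescript
// Sketch of a KV cache in front of the Stellate call inside the Worker.
// DATASET_KV is a hypothetical KV namespace binding; the cache key is per site (tenant).
interface UserRecord {
  browser: string;
  path: string;
  utm: string;
  language: string;
}

export interface Env {
  STELLATE_URL: string;
  DATASET_KV: KVNamespace;
}

async function getDataset(env: Env, siteId: string): Promise<UserRecord[]> {
  const cacheKey = `dataset:${siteId}`;

  // 1. Try the Worker's KV cache first; most requests stop here.
  const cached = (await env.DATASET_KV.get(cacheKey, "json")) as UserRecord[] | null;
  if (cached) {
    return cached;
  }

  // 2. On a miss, fetch the full dataset through Stellate, as before.
  const response = await fetch(env.STELLATE_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `query AllUserRecords($siteId: ID!) {
        allUserRecords(siteId: $siteId) { browser path utm language }
      }`,
      variables: { siteId },
    }),
  });
  const { data } = (await response.json()) as {
    data: { allUserRecords: UserRecord[] };
  };

  // 3. Store the result in KV with a one-hour TTL so subsequent requests
  //    are served from KV instead of Stellate until the entry expires.
  await env.DATASET_KV.put(cacheKey, JSON.stringify(data.allUserRecords), {
    expirationTtl: 3600,
  });

  return data.allUserRecords;
}
```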

One might wonder: why query Stellate from the function instead of calling the API directly to reduce costs? The answer lies in how edge computing works. Edge functions run close to the client, in many regions around the world. If each region sent requests directly to the API, it would generate thousands of global API calls. By routing requests through Stellate, only a single API request is required, as Stellate handles the caching for all edge function requests.


Through this approach, the number of requests to Stellate has been reduced by 99.905%. A cache of at least one hour should be set across all services, and whenever data in the database is updated, the caches on Stellate and Cloudflare are refreshed to ensure users receive the most up-to-date information as quickly as possible.
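On the Cloudflare side, that refresh can be as simple as dropping the KV entry so the next request refetches fresh data; the Stellate cache is purged separately through its own purging API. A hedged sketch, reusing the hypothetical names above:

```typescript
// Sketch of invalidating a site's cached dataset after a database update.
// DATASET_KV and the `dataset:<siteId>` key are the hypothetical names used above.
async function invalidateSiteCache(env: Env, siteId: string): Promise<void> {
  // Remove the Worker-side KV copy; the next request will refetch via Stellate.
  await env.DATASET_KV.delete(`dataset:${siteId}`);

  // Stellate's cache should also be purged here via its purging API
  // (omitted; the exact call depends on the service configuration).
}
```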

 
