Data-driven applications span a wide breadth of complexity, from simple microservices to real-time event-driven systems under significant load. However, as any development and/or DevOps team tasked with performance improvements will attest, making data-driven apps fast globally is “non-trivial”.
Modern application architectures such as the JAMstack enforce the separation of concerns by moving data and persistence requirements to the API. Cleanly separating static content, business logic and data persistence allows each to be scaled and managed independently.
Many enterprises are also focused on decoupling their monolithic applications into microservices, often deploying them within serverless environments. This shift toward decoupling and better environment isolation also provides regional agility with regard to where business logic is deployed and how it is scaled. Applications can now be deployed globally in a single CI/CD action.
The data tier, however, poses greater complexity. There are practical challenges such as transactional consistency, high availability, and query performance under load. There are constraints such as adhering to PII and compliance requirements. And there are insurmountable bounds such as those the laws of physics impose on latency.
Many development teams look to caching to solve these issues at the application layer, backed by persistence layers like Redis or homegrown systems. The concept is simple: store the data requested by the client for a period of time, and if the same request arrives again, serve it from the cache without resorting to the origin database. Engineering a good caching strategy brings its own set of challenges: what data to cache, how to cache it, and when. And perhaps more importantly, what, how, and when to evict data from the cache. The caching strategy must be well defined, understood and employed for every new feature set added to the application, across developers and potentially departmental teams. The cost is development time and complexity.
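To make the store-for-a-period-then-evict idea concrete, here is a minimal cache-aside sketch in Python with a per-entry TTL. The class and function names (TTLCache, fetch_user) and the key format are illustrative, not any particular library's API:

```python
import time


class TTLCache:
    """A minimal cache-aside store: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:
            del self.store[key]  # evict the stale entry on read
            return None
        return value

    def set(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)


def fetch_user(cache, db_query, user_id):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"user:{user_id}"
    row = cache.get(key)
    if row is None:
        row = db_query(user_id)  # cache miss: hit the origin database
        cache.set(key, row)
    return row
```

Even in this toy version, the hard questions the paragraph raises are visible: the TTL is a guess, eviction only happens lazily on read, and every new query shape needs its own key scheme.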
Alternatively, many enterprises solve latency and scaling challenges with database read replicas. Read replicas are read-only instances of the primary database and are automatically kept synchronized (asynchronously) as updates are made to the primary. Engineering a solid read-replica strategy is a daunting task full of its own subtle and not-so-subtle costs and complexities.
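A core piece of any read-replica strategy is deciding, per statement, which server receives it. The sketch below shows the simplest form, writes to the primary and reads round-robin across replicas; the hostnames are placeholders, and real routers also have to account for replication lag and replica health:

```python
import itertools


class ReplicaRouter:
    """Route writes to the primary and spread reads across replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = itertools.cycle(replicas)  # round-robin iterator

    def route(self, sql):
        # Reads can tolerate (asynchronous) replica lag;
        # everything else must hit the primary.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self.replicas)
        return self.primary
```

This naive version already hints at the subtle costs: a client that writes and immediately reads may see stale data from a lagging replica, which is exactly the kind of edge case that makes the strategy daunting.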
Much of that complexity can be tamed with ScaleGrid. Fully managed read-replicas can be deployed at the click of a button from ScaleGrid (with HA support) into all major clouds and regions, with the key benefit being that the data is kept in sync with the primary database automatically.
However, read replicas cannot escape the necessity of running multiple, perhaps many, database servers and their associated cost.
A different approach: PolyScale.ai Edge Cache
PolyScale is a database edge cache that takes a different approach. PolyScale’s cache provides two primary benefits: improved query latency and reduced database workload. Let’s break that down a little:
Regional latency is solved much like a CDN; PolyScale provides a global edge network of Points of Presence (PoP) and stores database query responses close to the originating client, significantly speeding up responses.
Read Query Performance is dramatically improved since PolyScale will serve any cached database request in < 10ms, no matter the query complexity. Additionally, given that read requests are served from PolyScale, this load never impacts the origin database.
PolyScale can be implemented in a few minutes without writing code or deploying servers. Simply update the database client (be it a web application, microservice or BI tool such as Tableau) connection string with the PolyScale hostname. Database traffic will then pass through the edge network and is ready for caching.
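Since the only change is the hostname in the connection string, the swap can be expressed as a small helper. This is a generic sketch: the cache hostname shown is a placeholder, not an actual PolyScale endpoint, and credentials, port and database name are kept as-is:

```python
from urllib.parse import urlsplit, urlunsplit


def point_at_cache(dsn, cache_host):
    """Rewrite the host in a database DSN, keeping credentials, port and database."""
    parts = urlsplit(dsn)
    # Split "user:pass@host:port" into credentials and host portion.
    userinfo, _, _ = parts.netloc.rpartition("@")
    port = f":{parts.port}" if parts.port else ""
    netloc = (userinfo + "@" if userinfo else "") + cache_host + port
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))


# Before: the client talks directly to the origin database.
direct = "mysql://app:secret@db.internal:3306/orders"
# After: the same client, credentials and schema, routed via the cache.
cached = point_at_cache(direct, "cache.example.net")
```

Because the wire protocol is unchanged, this is the entire integration from the client's point of view; queries, drivers and ORMs stay exactly as they were.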
Being wire compatible with MySQL and Postgres, PolyScale is completely transparent to database clients, hence nothing changes with your current architecture. No migrations, no changes to transactionality and no changes to your current query language. Truly plug and play.
How Does It Work?
PolyScale’s global network proxies and caches native database wire protocols, so it is transparent to any database client. Queries are inspected, and reads (SQL SELECT) can be cached geographically close to the requesting client for accelerated performance. All other traffic (such as DELETE) seamlessly passes through to the source database.
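The inspect-and-dispatch behavior described above can be sketched in a few lines. This is an illustrative model of a caching proxy, not PolyScale's implementation: reads are served from (and populate) the cache, while everything else is forwarded untouched to the origin:

```python
def is_cacheable(sql):
    """Only plain reads are candidates for caching; writes and DDL pass through."""
    return sql.lstrip().upper().startswith("SELECT")


def handle(sql, cache, origin):
    """Serve cacheable reads from the edge; forward everything else."""
    if is_cacheable(sql):
        if sql not in cache:
            cache[sql] = origin(sql)  # first read populates the cache
        return cache[sql]
    return origin(sql)  # writes always reach the source database
```

The key property mirrored here is transparency: the client issues ordinary SQL either way, and only the proxy decides whether the origin database does any work.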
PolyScale’s AI is on a path to full automation. Rather than requiring the cache to be configured by hand, the platform measures the flow of traffic and continuously adjusts caching properties to provide optimum performance. You can read more about the PolyScale AI caching model here.
PolyScale.ai provides a modern, plug-and-play approach to performance and scaling at the data tier: once connected, the platform intelligently manages caching of the data for optimum performance.
Given that PolyScale is wire compatible with your current database, no changes are necessary to scale reads globally in minutes.