The KQR Row Cache Contention Storm

The on-call engineer saw the alert: kqr row cache contention check gets = CRITICAL. She’d seen this before. It wasn’t a database problem; it was a thundering herd problem.

KQR’s row cache entry for item:42 expired; by 9:00:02, 10,000 concurrent GET requests for it had arrived simultaneously.

In the bustling data center of the e-commerce platform, there lived a tired but loyal piece of infrastructure: a PostgreSQL database named KQR (Key-Query-Resolver).

But they didn’t just rush to the database; they collided at the cache lock. You see, KQR’s cache was protected by a single, global synchronized block for writes.

The patched logic, in pseudocode:

```
def get(key):
    if key in cache:
        return cache[key]
    else:
        // Only one thread goes to the DB; the others wait for its result
        return cache.load_or_wait(key)
```

Within 30 seconds, the contention ratio dropped from 1.00 to 0.001.

KQR had a job: cache frequently accessed rows so the main disk could rest. For years, this worked beautifully. Until the morning of the flash sale.

At 9:00:00 AM, a surge of traffic hit. Every user, in every time zone, suddenly demanded the same piece of data: the flash sale metadata for item ID #42.

```
CACHE GETS (total):      10,000
CACHE HITS:              0
CACHE MISSES:            10,000
MISSES WHILE LOCK HELD:  10,000
CONTENTION RATIO:        1.00
TOP CONTENDED ROW:       item:42
WAITING THREADS:         9,999
LOCK HOLD TIME (avg):    487ms
```

This was a contention storm. The first thread to acquire the cache lock went to the database (a 487 ms query). The other 9,999 threads didn’t just wait: they spun, retried, and choked the CPU.
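The post never spells out how the contention ratio is computed; one reading that is consistent with the dump above (the formula is an assumption, only the numbers come from the metrics) is misses-while-lock-held divided by total gets:

```python
# Numbers from the metrics dump above; the formula itself is an assumption.
cache_gets = 10_000
misses_while_lock_held = 10_000

contention_ratio = misses_while_lock_held / cache_gets
print(contention_ratio)  # 1.0, i.e. every single get missed while the lock was held
```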

KQR’s cache logic at the time took that global lock on every miss and held it for the entire database query.
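The original snippet didn’t survive, but from the description (a single global lock, the 487 ms query running under it, and waiters that spun and retried rather than finding a fresh cache entry) it presumably had roughly this shape. This is a reconstruction, not KQR’s actual code; db_fetch stands in for the real query:

```python
import threading

cache = {}
cache_lock = threading.Lock()  # the single, global synchronized block from the story

def db_fetch(key):
    # Stand-in for the real database query that averaged 487 ms.
    return {"id": key}

def get(key):
    if key in cache:           # unsynchronized fast path
        return cache[key]
    with cache_lock:           # on a miss, everyone queues up right here
        value = db_fetch(key)  # the query runs with the lock held...
        cache[key] = value     # ...and waiters never re-check the cache,
        return value           # so each one repeats the trip in turn
```

If every waiter really does repeat the 487 ms query, 10,000 simultaneous misses add up to roughly 81 minutes of serialized lock time.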

She hot-patched KQR’s logic to use single-flight loading: on a miss, only one thread would go to the database, and every other caller for that key would wait for its result.
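The cache.load_or_wait primitive in the patched pseudocode is left abstract in the post. One way to build that kind of coalescing in plain Python (the class and method names here are illustrative, not KQR’s actual API):

```python
import threading

class SingleFlightCache:
    """Per-key coalescing: on a miss, one caller loads; the rest wait."""

    def __init__(self, loader):
        self._loader = loader          # e.g. the database fetch
        self._cache = {}
        self._lock = threading.Lock()  # guards the dicts, never held during a load
        self._inflight = {}            # key -> Event signalled when the load finishes

    def get(self, key):
        while True:
            with self._lock:
                if key in self._cache:
                    return self._cache[key]
                event = self._inflight.get(key)
                if event is None:
                    # First miss for this key: claim the load.
                    event = threading.Event()
                    self._inflight[key] = event
                    leader = True
                else:
                    leader = False
            if leader:
                try:
                    value = self._loader(key)   # only this thread hits the DB
                    with self._lock:
                        self._cache[key] = value
                    return value
                finally:
                    with self._lock:
                        self._inflight.pop(key, None)
                    event.set()
            else:
                event.wait()  # follower: block until the leader finishes, then retry
```

Followers block on an Event instead of spinning, so a cold key costs one database trip in total rather than one per thread.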

From that day on, KQR’s monitoring dashboard had a new rule: if row cache contention check gets exceeded 1,000 per second, flip on single-flight mode. And the team learned a valuable lesson: sometimes the most dangerous lock isn’t in your database, it’s in your cache’s eagerness to help.
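The dashboard rule is simple enough to sketch. The 1,000-per-second threshold comes from the story; the function name and metric plumbing are invented for illustration:

```python
# Illustrative sketch of the dashboard rule described above.
SINGLE_FLIGHT_THRESHOLD = 1000  # row cache contention check gets, per second

def pick_cache_mode(check_gets_per_second):
    """Flip on single-flight mode once the contention metric crosses the line."""
    if check_gets_per_second > SINGLE_FLIGHT_THRESHOLD:
        return "single-flight"
    return "normal"
```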