When the Fix Is the Bug: Two QuickJS Findings from a WebKit Audit Harness
Posted on Mon 11 May 2026 in Thought, vulnerability research and discovery
I built this pipeline for WebKit. The idea was simple: stop reading patches and start attacking them. Every proposed fix gets treated as a hypothesis: if this commit closes off attack surface X, the job is to prove it, find the adjacent sites it missed, and explicitly challenge the "currently impossible" claims that show up in every security fix commit message.
After running it on JSC for a while, I got curious whether the same methodology would hold against a different engine. Plus there are some RTS games on Linux that depend on QuickJS :). QuickJS was an obvious target: a small codebase, active development as quickjs-ng, maintained by people who care about correctness. I audited five commits. Two of them produced new findings.
The Methodology, Briefly
The pipeline enforces three things that don't happen naturally when you're reading a diff:
Adversarial analysis over confirmation testing. The question isn't "does the fix work?" The question is "what invariant did the fix assume, and is that invariant actually enforced?" Every hypothesis needs empirical proof: not source analysis, not reasoning about what should happen, but the bug firing dynamically, against the compiled binary.
Adjacent site hunting. If a bug class exists in one function, mandate a check of every function with the same structure. Don't just ask "is there another function that looks similar?"; actually read it, find the specific lines, write the probe.
Challenge the hedge. "Theoretical overflow" and "impossible in practice" are red flags, not reassurances. If someone added a hedge to the commit message, they were worried about something. That worry is the starting point.
Commit 40e197f passed all three filters. Loudly.
Finding 1: The Fix That Introduced the Bug
expand_fast_array() grows fast arrays using size * 3 / 2. When size gets above about 1.43 billion, that overflows uint32_t. The commit message calls it theoretical... arrays don't typically hold a billion entries. They added a size check. Fine.
But then, at quickjs.c:9988 in the post-fix code:
new_size = max_int(new_len, new_size);
max_int takes int. Signed 32-bit. new_len is uint32_t. When new_len >= 0x80000000, the cast makes it negative. max_int(negative, small_positive) returns the small value. Buffer allocated undersized. Caller writes past the end.
max_uint32 was already in cutils.h at line 167. One word. The patch used the wrong function.
The trigger - 3 lines, no memory pressure:
const arr = [1, 2, 3];
arr.length = 0x7FFFFFFF; // sets length property, allocates nothing
arr.push(4); // new_len = 0x80000000, sign truncation, OOB write
The original overflow required ~11GB of actual elements. This one uses three and a property assignment. The crash: BUS WRITE at js_array_push:42458. Registers confirm new_len = 0x80000000, array_len = 0x7FFFFFFF. The allocation was sized for the latter. The write happened at the former.
That's not a theoretical overflow. That's the fix degrading its own safety invariant while introducing an easier-to-reach primitive.
Finding 2: The Adjacent Site That Got Missed
Commit 1b0b660 patched a UAF in delete_property(): fast-path for deleting the last array element freed the element and decremented count without converting to a holey array. push() reused the freed slot. Good fix. Then I read set_array_length().
Post-patch, at quickjs.c:9899:
for(i = len; i < old_len; i++)
JS_FreeValue(ctx, p->u.array.u.values[i]);
p->u.array.count = len; // count decremented, array stays fast
Elements freed. Count decremented. No conversion. Same pattern, different function, untouched by the diff.
Trigger sequence:
const arr = [{a:1}, {b:2}, {c:3}, {d:4}, {e:5}];
arr.length = 2; // frees values[2..4], count = 2, still fast
arr.length = 5; // updates length property, count stays 2
arr.push({f:6}); // fast push: count becomes 6, slots 2..4 freed
arr[3]; // idx=3 < count=6 → reads freed memory
Two crash paths: UAF read in js_get_fast_array_element, double-free in js_array_finalizer at GC time. Both confirmed with ASan.
The Part That Actually Interests Me
Both of these are adjacent-site misses. That's not a criticism; the same thing happens in WebKit constantly. You fix the specific instance that was reported, and the function two hundred lines away with the same structure doesn't make it into the diff.
The 40e197f case is the more interesting failure mode. The developer fixed the overflow, then introduced a new one by using the wrong comparison function. The fix created attack surface that didn't exist before. The original theoretical overflow required 11GB. The replacement requires three elements and one property assignment.
There's a pattern worth naming: partial invariant restoration. The fix re-establishes the invariant for the specific reported case, but the new code path creates a different violation. If your test only validates that size * 3 / 2 overflow is caught, you won't catch that max_int truncates new_len.
The methodology held. Same harness, different engine, two new findings from five commits.
Both issues were reported to the quickjs-ng maintainers. The expand_fast_array sign truncation was the more interesting of the two - an OOB write with a 3-line reproducer isn't a theoretical concern. The set_array_length UAF requires a little more setup to exploit, but the primitive is real - yes, I popped calc. The pipeline doesn't care what language the engine is written in or who the author is. It cares about invariants.
Update: I submitted patches for both findings and both were accepted and merged upstream: #1467 and #1468. The findings were real enough to ship.