From Finding to Fix: Submitting Security Patches to Open Source Projects

Posted on Wed 13 May 2026 in Thought, vulnerability research and discovery • Tagged with chronicles, vulnerability research, vulnerability discovery, QuickJS, open source, patch submission, methodology

Finding a bug is the first half. Getting the fix shipped is a different skill set entirely, and almost nobody writes about it.

Most security research ends at the proof of concept. You found the thing, you have a crash, maybe a writeup. What happens next is either a CVE …


Continue reading

When the Fix Is the Bug: Two QuickJS Findings from a WebKit Audit Harness

Posted on Mon 11 May 2026 in Thought, vulnerability research and discovery • Tagged with chronicles, vulnerability research, vulnerability discovery, QuickJS, JavaScript engines, patch auditing, methodology

I built this pipeline for WebKit. The idea was simple: stop reading patches and start attacking them. Every proposed fix gets treated as a hypothesis: if this commit closes off attack surface X, the job is to prove it, find the adjacent sites it missed, and explicitly challenge the "currently …


Continue reading

Fixing Concurrent Agent Slowness in llama-server (and Why I Didn't Switch to vLLM)

Posted on Sat 09 May 2026 in Thought, AI, Security Research • Tagged with chronicles, AI agents, claude, LiteLLM, llama-server, local models, routing

This is a follow-up to what I learned running local models in my agent pipeline. That post covered context sizing and KV cache memory. This one covers what I got wrong about concurrency.

The Problem: Agents Queuing Up

My pipeline runs up to four agents simultaneously, called step1-2, step3, step4 …


Continue reading

How I Audit Security Patches with an AI Pipeline

Posted on Sat 02 May 2026 in Thought, AI, Security Research • Tagged with chronicles, AI agents, WebKit, security research, vulnerability research, patch auditing, methodology

Most security patch auditing tools look for known vulnerability patterns. They diff a commit, grep for dangerous functions, maybe flag things that look like what last year's CVEs looked like. That works for the obvious stuff. It doesn't work for the commit that says "no behavior change" and silently fixes …


Continue reading

Patch Audit Schema - SOUNDNESS_CLAIMS Spec

Posted on Sat 02 May 2026 in Security Research • Tagged with methodology, patch auditing, schema, vulnerability research

This is the schema spec referenced in How I Audit Security Patches with an AI Pipeline. It defines the SOUNDNESS_CLAIMS format, the empirical proof requirement, and the adjacent-unpatched rule.

SOUNDNESS_CLAIMS Format

Each claim is a falsifiable hypothesis about a specific attack vector. Required fields:

F(x):          The trigger - exact JS …

Continue reading