Shared Parameter Subspaces and Cross-Task Linearity in Emergently Misaligned Behavior

Daniel Aarao Reis Arturi*†
McGill University
Eric Zhang*†
McMaster University
Andrew Ansah
University of Alberta
Kevin Zhu
Algoverse AI Research
Ashwinee Panda
Algoverse AI Research
Aishwarya Balwani‡†
St. Jude Children's Research Hospital
*Joint first co-authors with equal contributions; listed in alphabetical order.
†Work conducted with Algoverse AI Research.
‡Corresponding author. Email: aishwarya.balwani@stjude.org
[Overview figure: shared parameter subspaces and cross-task linearity in emergently misaligned behavior]

Workshop Recognition

Oral Presentation and Honorable Mention at the UniReps Workshop

Spotlight at the MechInterp Workshop

Accepted at the CogInterp Workshop

Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025)

Abstract

Recent work has discovered that large language models can develop broadly misaligned behaviors after being fine-tuned on narrowly harmful datasets, a phenomenon known as emergent misalignment (EM). However, the mechanisms enabling such harmful generalization across disparate domains remain poorly understood. In this work, we adopt a geometric perspective on EM and demonstrate that it exhibits a fundamental cross-task linear structure in how harmful behavior is encoded across different datasets. Specifically, we find strong convergence in EM parameters across tasks: the fine-tuned weight updates show high pairwise cosine similarities and occupy shared lower-dimensional subspaces, as measured by their principal angles and projection overlaps. Furthermore, we show functional equivalence via linear mode connectivity, wherein models interpolated between narrow misalignment tasks maintain coherent, broadly misaligned behavior. Our results indicate that EM arises because different narrow tasks discover the same set of shared parameter directions, suggesting that harmful behaviors may be organized into specific, predictable regions of the weight landscape. By revealing this connection between parametric geometry and behavioral outcomes, we hope our work catalyzes further research on parameter-space interpretability and weight-based interventions.
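
To make the abstract's geometric quantities concrete, the sketch below illustrates in NumPy the three kinds of measurements described above: cosine similarity between fine-tuned weight updates, principal angles and projection overlaps between their dominant subspaces, and linear interpolation between checkpoints (the basis of the linear mode connectivity test). This is a minimal sketch, not the paper's code: the layer shapes, random stand-in matrices, subspace rank k, and names such as delta_a and delta_b are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of the geometric measurements
# described in the abstract, using random matrices as stand-ins for the
# fine-tuned weight updates delta_W = W_finetuned - W_base of two narrow tasks.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, k = 256, 128, 8  # illustrative layer shape and subspace rank (assumed)

# Stand-ins for the weight updates produced by two different narrow fine-tunes.
delta_a = rng.standard_normal((d_out, d_in))
delta_b = rng.standard_normal((d_out, d_in))

# 1) Cosine similarity between the flattened weight updates.
cos_sim = (delta_a.ravel() @ delta_b.ravel()) / (
    np.linalg.norm(delta_a) * np.linalg.norm(delta_b)
)

# 2) Principal angles between the top-k left singular subspaces of the updates:
#    the singular values of U_a^T U_b are the cosines of the principal angles.
U_a, _, _ = np.linalg.svd(delta_a, full_matrices=False)
U_b, _, _ = np.linalg.svd(delta_b, full_matrices=False)
cosines = np.linalg.svd(U_a[:, :k].T @ U_b[:, :k], compute_uv=False)
principal_angles = np.arccos(np.clip(cosines, -1.0, 1.0))

# Projection overlap: fraction of subspace B's energy lying inside subspace A.
proj_overlap = np.linalg.norm(U_a[:, :k].T @ U_b[:, :k], "fro") ** 2 / k

# 3) Linear interpolation between the two fine-tuned updates; linear mode
#    connectivity is then assessed by loading the interpolated weights back
#    into the model and evaluating its behavior.
alpha = 0.5
delta_interp = (1 - alpha) * delta_a + alpha * delta_b

print(f"cosine similarity: {cos_sim:.3f}")
print(f"principal angles (rad): {np.round(principal_angles, 3)}")
print(f"projection overlap: {proj_overlap:.3f}")
```

In an actual experiment, delta_a and delta_b would be per-layer differences between each fine-tuned checkpoint and the shared base model, and the interpolated parameters would be evaluated for broadly misaligned behavior rather than inspected numerically as above.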