ASIDE: Architectural Separation of Instructions and Data in Language Models
Published at the ICLR 2025 Workshop on Building Trust in Language Models and Applications
Egor Zverev, Evgenii Kortukov, Alexander Panfilov, Soroush Tabesh, Sebastian Lapuschkin, Wojciech Samek, Christoph H. Lampert
Despite their remarkable performance, large language models lack elementary safety features, which makes them susceptible to numerous malicious attacks. In particular, previous work has identified the absence of an intrinsic separation between instructions and data as a root cause of the success of prompt injection attacks. In this work, we propose an architectural change, ASIDE, that allows the model to cleanly separate instructions and data by using separate embeddings for them. Specifically, the data embedding is initialized with a rotation of the pretrained model’s embedding, prompting the model to learn to treat instructions and data differently. We demonstrate the effectiveness of our method by showing (1) greatly increased instruction-data separation scores without a loss in model capabilities and (2) competitive results on prompt injection benchmarks, even without dedicated safety training. Additionally, we study the working mechanism behind our method through an analysis of model representations.
Full paper
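To illustrate the core idea, here is a minimal NumPy sketch of role-conditional embeddings: data tokens are embedded with a rotated copy of the original embedding table, while instruction tokens use the original one. The function names (`rotate_pairs`, `embed_with_roles`) and the specific pairwise 90-degree rotation are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def rotate_pairs(emb: np.ndarray) -> np.ndarray:
    """Rotate each consecutive pair of embedding dimensions by 90 degrees,
    i.e. (x, y) -> (-y, x). A norm-preserving orthogonal transform used here
    as a stand-in for ASIDE's rotation-based initialization of the data
    embedding table. Assumes an even embedding dimension."""
    rot = emb.copy()
    rot[:, 0::2] = -emb[:, 1::2]
    rot[:, 1::2] = emb[:, 0::2]
    return rot

def embed_with_roles(token_ids: np.ndarray, roles: np.ndarray,
                     inst_table: np.ndarray, data_table: np.ndarray) -> np.ndarray:
    """Look up each token in the embedding table matching its role:
    role 0 (instruction) -> original table, role 1 (data) -> rotated table."""
    return np.where(roles[:, None] == 0,
                    inst_table[token_ids],
                    data_table[token_ids])

# Toy example: vocabulary of 4 tokens, embedding dimension 4.
inst_table = np.arange(16, dtype=float).reshape(4, 4)
data_table = rotate_pairs(inst_table)

token_ids = np.array([0, 1])
roles = np.array([0, 1])  # first token is an instruction, second is data
embedded = embed_with_roles(token_ids, roles, inst_table, data_table)
```

Because the rotation is orthogonal, the data embeddings keep the same norms and pairwise geometry as the originals, so the pretrained model's representations remain usable while the two token roles become linearly distinguishable.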
@inproceedings{zverev2025aside,
  title={{ASIDE}: Architectural Separation of Instructions and Data in Language Models},
  author={Egor Zverev and Evgenii Kortukov and Alexander Panfilov and Soroush Tabesh and Sebastian Lapuschkin and Wojciech Samek and Christoph H. Lampert},
  booktitle={ICLR 2025 Workshop on Building Trust in Language Models and Applications},
  year={2025},
  url={https://openreview.net/forum?id=GlmqRQsCaI}
}