Workshop on Analyzing and Interpreting Neural Networks for NLP
Accelerating Sparse Autoencoder Training via Layer-Wise Transfer Learning in Large Language Models
Davide Ghilardi, Federico Belotti, Marco Molinari, Jaehyuk Lim
Toward the Evaluation of Large Language Models Considering Score Variance across Instruction Templates
Yusuke Sakai, Adam Nohejl, Jiangnan Hang, Hidetaka Kamigaito, Taro Watanabe
Mechanistic?
Sarah Wiegreffe, Naomi Saphra
Investigating Layer Importance in Large Language Models
Yang Zhang, Yanfei Dong, Kenji Kawaguchi
Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models
Sepehr Kamahi, Yadollah Yaghoobzadeh
IvRA: A Framework to Enhance Attention-Based Explanations for Language Models with Interpretability-Driven Training
Sean Xie, Soroush Vosoughi, Saeed Hassanpour
Pruning for Protection: Increasing Jailbreak Resistance in Aligned LLMs Without Fine-Tuning
Adib Hasan, Ileana Rugina, Alex Wang
Attribution Patching Outperforms Automated Circuit Discovery
Aaquib Syed, Can Rager, Arthur Conmy
Do Metadata and Appearance of the Retrieved Webpages Affect LLM’s Reasoning in Retrieval-Augmented Generation?
Cheng-Han Chiang, Hung-yi Lee
Interpretable by Design: Wrapper Boxes Combine Neural Performance with Faithful Attribution of Model Decisions to Training Data
Yiheng Su, Junyi Jessy Li, Matthew Lease
WellDunn: On the Robustness and Explainability of Language Models and Large Language Models in Identifying Wellness Dimensions
Seyedali Mohammadi, Edward Raff, Jinendra Malekar, Vedant Palit, Francis Ferraro, Manas Gaur
Copy Suppression: Comprehensively Understanding a Motif in Language Model Attention Heads
Callum Stuart McDougall, Arthur Conmy, Cody Rushing, Thomas McGrath, Neel Nanda
How Language Models Prioritize Contextual Grammatical Cues?
Hamidreza Amirzadeh, Afra Alishahi, Hosein Mohebbi
Self-Assessment Tests are Unreliable Measures of LLM Personality
Akshat Gupta, Xiaoyang Song, Gopala Anumanchipalli
Gemma Scope: Open Sparse Autoencoders Everywhere All At Once on Gemma 2
Tom Lieberum, Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Nicolas Sonnerat, Vikrant Varma, János Kramár, Anca Dragan, Rohin Shah, Neel Nanda
Log Probabilities Are a Reliable Estimate of Semantic Plausibility in Base and Instruction-Tuned Language Models
Carina Kauf, Emmanuele Chersoni, Alessandro Lenci, Evelina Fedorenko, Anna A. Ivanova
Recurrent Neural Networks Learn to Store and Generate Sequences using Non-Linear Representations
Róbert Csordás, Christopher Potts, Christopher D. Manning, Atticus Geiger
Uncovering Syllable Constituents in the Self-Attention-Based Speech Representations of Whisper
Erfan A. Shams, Iona Gessinger, Julie Carson-Berndsen
An Adversarial Example for Direct Logit Attribution: Memory Management in GELU-4L
Jett Janiak, Can Rager, James Dao, Yeu-Tong Lau
On the alignment of LM language generation and human language comprehension
Lena Sophia Bolliger, Patrick Haller, Lena Ann Jäger
Transformers Learn Hidden Dynamics when Trained to Predict Markov Decision Processes
Yuxi Chen, Suwei Ma, Tony Dear, Xu Chen
Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations
Supriya Manna, Niladri Sett
Enhancing Question Answering on Charts Through Effective Pre-training Tasks
Ashim Gupta, Vivek Gupta, Shuo Zhang, Yujie He, Ning Zhang, Shalin Shah
Attend First, Consolidate Later: On the Importance of Attention in Different LLM Layers
Amit Ben Artzy, Roy Schwartz
Can We Statically Locate Knowledge in Large Language Models? Financial Domain and Toxicity Reduction Case Studies
Jordi Armengol-Estapé, Lingyu Li, Sebastian Gehrmann, Achintya Gopal, David S. Rosenberg, Gideon S. Mann, Mark Dredze
MultiContrievers: Analysis of Dense Retrieval Representations
Seraphina Goldfarb-Tarrant, Pedro Rodriguez, Jane Dwivedi-Yu, Patrick Lewis
Enhancing adversarial robustness in Natural Language Inference using explanations
Alexandros Koulakos, Maria Lymperaiou, Giorgos Filandrianos, Giorgos Stamou
LLM Internal States Reveal Hallucination Risk Faced With a Query
Ziwei Ji, Delong Chen, Etsuko Ishii, Samuel Cahyawijaya, Yejin Bang, Bryan Wilie, Pascale Fung
Language Models Linearly Represent Sentiment
Oskar John Hollinsworth, Curt Tigges, Atticus Geiger, Neel Nanda
Learning, Forgetting, Remembering: Insights From Tracking LLM Memorization During Training
Danny D. Leybzon, Corentin Kervadec
Are there identifiable structural parts in the sentence embedding whole?
Vivi Nastase, Paola Merlo
Routing in Sparsely-gated Language Models responds to Context
Stefan Arnold, Marian Fietta, Dilara Yesilbas
Optimal and efficient text counterfactuals using Graph Neural Networks
Dimitris Lymperopoulos, Maria Lymperaiou, Giorgos Filandrianos, Giorgos Stamou
Probing Language Models on Their Knowledge Source
Zineddine Tighidet, Jiali Mei, Benjamin Piwowarski, Patrick Gallinari
Multi-property Steering of Large Language Models with Dynamic Activation Composition
Daniel Scalena, Gabriele Sarti, Malvina Nissim
Fifty shapes of BLiMP: syntactic learning curves in language models are not uniform, but sometimes unruly
Bastian Bunzeck, Sina Zarrieß
Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals
Francesco Ortu, Zhijing Jin, Diego Doimo, Mrinmaya Sachan, Alberto Cazzaniga, Bernhard Schölkopf
Inducing Induction in Llama via Linear Probe Interventions
Sheridan Feucht, Byron C. Wallace, David Bau
Implicit Meta-Learning in Small Transformer Models: Insights from a Toy Task
Luan Fletcher, Victor Levoso, Kunvar Thaman, Misha Kilianovski
Latent Concept-based Explanation of NLP Models
Xuemin Yu, Fahim Dalvi, Nadir Durrani, Marzia Nouri, Hassan Sajjad
Exploring Alignment in Shared Cross-Lingual Spaces
Basel Mousi, Nadir Durrani, Fahim Dalvi, Majd Hawasly, Ahmed Abdelali
Compositional Cores: Persistent Attention Patterns in Compositionally Generalizing Subnetworks
Michael Y. Hu, Chuan Shi, Tal Linzen
How LLMs Reinforce Political Misinformation: Insights from the Analysis of False Presuppositions
Judith Sieker, Clara Lachenmaier, Sina Zarrieß
Does Alignment Tuning Really Break LLMs’ Internal Confidence?
Hongseok Oh, Wonseok Hwang
How Does Code Pretraining Affect Language Model Task Performance?
Jackson Petty, Sjoerd van Steenkiste, Tal Linzen
ToxiSight: Insights Towards Detected Chat Toxicity
Zachary Yang, Domenico Tullo, Reihaneh Rabbany
Clusters Emerge in Transformer-based Causal Language Models
Xinbo Wu, Lav R. Varshney
Quantifying reliance on external information over parametric knowledge during Retrieval Augmented Generation (RAG) using mechanistic analysis
Reshmi Ghosh, Rahul Seetharaman, Hitesh Wadhwa, Somyaa Aggarwal, Samyadeep Basu, Soundararajan Srinivasan, Wenlong Zhao, Shreyas Chaudhari, Ehsan Aghazadeh
Mind Your Manners: Detoxifying Language Models via Attention Head Intervention
Jordan Nikolai Pettyjohn, Nathaniel C. Hudson, Mansi Sakarvadia, Aswathy Ajith, Kyle Chard
Can One Token Make All the Difference? Forking Paths in Autoregressive Text Generation
Eric J. Bigelow, Ari Holtzman, Hidenori Tanaka, Tomer Ullman
Exploring the Recall of Language Models: Case Study on Molecules
Knarik Mheryan, Hasmik Mnatsakanyan, Philipp Guevorguian, Hrant Khachatrian
How do LLMs deal with Syntactic Conflicts in In-context-learning?
Nahyun Kim
Do LLMs Use Language Like Humans? Toward a Linguistic Analysis for Understanding LLMs' Behavior
Xinyu Zhou, Delong Chen, Samuel Cahyawijaya, Xufeng Duan, Zhenguang Cai