Posts by Sekh (Sk) Mainul Islam

This work was conducted at @copenlu.bsky.social under the guidance of my amazing supervisors, @iaugenstein.bsky.social and @apepa.bsky.social.

5 months ago 1 0 0 0

Overall, this work advances understanding of how LLMs integrate internal and external knowledge by introducing the first systematic framework for multi-step analysis of knowledge interactions via rank-2 subspace disentanglement.

💡 How is the CoT mechanism aligned with the knowledge interaction subspace?
📊 CoT maintains CK alignment similar to standard prompting across all datasets, while reducing PK alignment.

💡 Can we find reasons for hallucinations based on PK–CK interactions?
📊 Across the sequence steps, the gap between PK and CK is much larger for examples with hallucinated spans than for examples without them.
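The gap statistic itself is simple to compute. A toy sketch (the per-step alignment scores below are made-up numbers, not the paper's measurements):

```python
import numpy as np

# Synthetic per-step PK/CK alignment scores (stand-ins for the paper's measures).
pk_align = np.array([0.62, 0.70, 0.75, 0.80, 0.78])
ck_align = np.array([0.55, 0.40, 0.32, 0.30, 0.31])

gap = pk_align - ck_align  # per-step PK-CK gap
print(gap.mean())          # a large mean gap would flag a hallucination-prone span
```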

💡 How do individual PK and CK contributions change over the NLE generation steps for different knowledge interactions?
📊 During most of the NLE generation steps, the model slightly prioritizes PK.

💡 How do individual PK and CK contributions change over the NLE generation steps for different knowledge interactions?
📊 While generating an answer, the model aligns with the CK direction for conflicting examples and with the PK direction for supportive examples.

🪛 We propose a novel rank-2 projection subspace that disentangles PK and CK contributions more accurately, and use it for the first multi-step analysis of knowledge interactions across longer NLE sequences.
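The core idea of a rank-2 projection can be sketched in a few lines of numpy (the direction vectors and hidden state below are random stand-ins, not the paper's learned probes): a hidden state is decomposed into separate coordinates along a PK direction and a CK direction via least-squares projection onto their two-dimensional span.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy hidden size

# Stand-in unit directions for parametric and context knowledge.
v_pk = rng.normal(size=d); v_pk /= np.linalg.norm(v_pk)
v_ck = rng.normal(size=d); v_ck /= np.linalg.norm(v_ck)

def rank2_contributions(h, v_pk, v_ck):
    """Least-squares coordinates of h in span{v_pk, v_ck}."""
    B = np.stack([v_pk, v_ck], axis=1)  # d x 2 basis matrix
    coeffs, *_ = np.linalg.lstsq(B, h, rcond=None)
    return coeffs  # (pk_contribution, ck_contribution)

# A hidden state built mostly from the CK direction, plus a little noise.
h = 0.3 * v_pk + 1.2 * v_ck + 0.05 * rng.normal(size=d)
a_pk, a_ck = rank2_contributions(h, v_pk, v_ck)
print(a_pk, a_ck)  # the CK coefficient dominates
```

Unlike a single scalar, the coefficient pair lets both contributions be large at once, which is what a complementary interaction looks like.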

💡 Is a rank-1 projection subspace enough to disentangle PK and CK contributions in all types of knowledge interaction scenarios?
📊 Different knowledge interactions are poorly captured by a rank-1 projection subspace in the LLM's parameter space.
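To see why rank 1 falls short, consider a toy sketch (hypothetical random directions, not the paper's actual setup): projecting onto a single PK-minus-CK axis yields one scalar, so a complementary state where PK and CK contribute equally projects to roughly zero, indistinguishable from a state with no knowledge signal at all.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64
v_pk = rng.normal(size=d); v_pk /= np.linalg.norm(v_pk)
v_ck = rng.normal(size=d); v_ck /= np.linalg.norm(v_ck)

# Rank-1: a single PK-minus-CK axis gives one scalar per hidden state.
axis = v_pk - v_ck
axis /= np.linalg.norm(axis)

h_conflict = 1.0 * v_pk - 1.0 * v_ck       # PK wins over CK: large positive score
h_complementary = 1.0 * v_pk + 1.0 * v_ck  # both contribute: score collapses to ~0

s_conflict = h_conflict @ axis
s_complementary = h_complementary @ axis
print(s_conflict, s_complementary)
```

The binary conflict case registers clearly, while the complementary case vanishes on this axis, which is exactly the interaction type a rank-1 subspace cannot represent.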

Prior work has largely examined only single-step generation (typically the final answer) and has modelled the PK–CK interaction only as a binary choice in a rank-1 subspace. This overlooks richer forms of interaction, such as complementary or supportive knowledge.

🤔 NLEs illustrate the underlying decision-making process of LLMs in a human-readable format and reveal the utilization of PK and CK. Understanding their interaction is key to assessing the grounding of NLEs, yet it remains underexplored.

I am excited to share our new preprint answering this question:
"Multi-Step Knowledge Interaction Analysis via Rank-2 Subspace Disentanglement"

📄 Paper: arxiv.org/pdf/2511.01706
💻 Code: github.com/copenlu/pk-c...

What are the interaction dynamics between Parametric Knowledge (PK) and Context Knowledge (CK) when generating longer Natural Language Explanation (NLE) sequences?

👩‍🔬 Huge thanks to my brilliant co-authors from @copenlu.bsky.social (led by @iaugenstein.bsky.social): @nadavb.bsky.social, Siddhesh Pawar, @haeunyu.bsky.social, and @rnv.bsky.social.
@aicentre.dk

8 months ago 1 0 0 0
📊 Key Takeaways:
3️⃣ Real & Fictional Bias Mitigation: Reduces both real-world stereotypes (e.g., “Italians are reckless drivers”) and fictional associations (e.g., “citizens of a fictional country have blue skin”), making it useful for both safety and interpretability research.

📊 Key Takeaways:
2️⃣ Strong Generalization: Works on biases unseen during token-based fine-tuning.

📊 Key Takeaways:
1️⃣ Consistent Bias Elicitation: BiasGym reliably surfaces biases for mechanistic analysis, enabling targeted debiasing without hurting downstream performance.

BiasGym consists of two components:
BiasInject: injects specific biases into the model via token-based fine-tuning while keeping the model frozen.
BiasScope: leverages these injected signals to identify and steer the components responsible for biased behaviour.
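The "identify and steer" step can be caricatured in a few lines (everything here is a toy stand-in: random head outputs, a known injected bias direction, and simple zero-ablation; none of this is claimed to match the paper's actual procedure): score each attention head's output against the injected bias direction, then ablate the top-scoring head.

```python
import numpy as np

rng = np.random.default_rng(2)
n_heads, d = 8, 32

# Toy per-head output vectors and an injected bias direction.
head_out = rng.normal(size=(n_heads, d))
bias_dir = rng.normal(size=d); bias_dir /= np.linalg.norm(bias_dir)
head_out[3] += 8.0 * bias_dir  # head 3 carries the injected bias signal

scores = head_out @ bias_dir       # alignment of each head with the bias direction
culprit = int(np.argmax(scores))   # the head to steer
head_out[culprit] = 0.0            # zero-ablate its contribution
print(culprit)
```

Because the bias was deliberately injected, its direction is known, which is what makes the responsible component easy to locate.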

💡 Our Approach: We propose BiasGym, a simple, cost-effective, and generalizable framework for surfacing and mitigating biases in LLMs through controlled bias injection and targeted intervention.

πŸ” Problem: Biased behaviour of LLMs is often subtle and non-trivial to isolate, even when deliberately elicited, making systematic analysis and debiasing particularly challenging.

🚀 Excited to share our new preprint: BiasGym: Fantastic LLM Biases and How to Find (and Remove) Them

📄 Read the paper: arxiv.org/abs/2508.08855
