Beyond Positional Bias: How DroPE Unlocks Zero-Shot Long Context in LLMs
23 February 2026
A review of DroPE, a simple but counterintuitive method that extends LLM context length by dropping positional embeddings at inference time, achieving strong zero-shot long-context generalization without retraining.