LLM jailbreaks remain a major safety challenge. We tackle this by focusing on a specific failure mode: safety mechanisms don't generalize well across semantically equivalent inputs. Our approach ...