
#74 · Researcher · AI Safety & Alignment
Jan Leike
Anthropic
Biography
Leads alignment research at Anthropic. Previously co-led OpenAI's Superalignment team. A key researcher behind RLHF.
💡 My Take
Leike's move from OpenAI to Anthropic — publicly citing safety concerns — was one of the most significant talent moves in AI. He's working on the hardest problem: how do you align AI systems that might become smarter than us? His work on RLHF and scalable oversight is foundational.
Key Influence
Leading technical AI alignment research with RLHF and scalable oversight methods
Awards & Recognition
Former co-lead of OpenAI's Superalignment team; now leads alignment research at Anthropic