Photo: Wikimedia Commons

#74 · Researcher · AI Safety & Alignment

Jan Leike

Anthropic

Specialties

AI Safety

Location

Germany/USA

Education

PhD in Machine Learning, Australian National University.

Biography

Head of Alignment at Anthropic and former co-lead of OpenAI's Superalignment team. A key researcher behind reinforcement learning from human feedback (RLHF).

💡 My Take

Leike's move from OpenAI to Anthropic — publicly citing safety concerns — was one of the most significant talent moves in AI. He's working on the hardest problem: how do you align AI systems that might become smarter than us? His work on RLHF and scalable oversight is foundational.

Key Influence

Leading technical AI alignment research through RLHF and scalable oversight methods

Awards & Recognition

Named to TIME's 100 Most Influential People in AI (2023) for his alignment work at OpenAI