
Stefano Sarao Mannelli

Assistant Professor at Chalmers University of Technology

Sweden

Has open position


Links: Google Scholar · ORCID · LinkedIn

Research Interests

Statistics: 10%
Mathematics: 10%
Deep Learning: 10%
Physics: 10%
Information Technology: 10%
Machine Learning: 10%


Positions (1)

Chalmers University of Technology

Postdoctoral Positions in Theoretical Foundations of AI Safety

This postdoctoral position at Chalmers University of Technology offers a unique opportunity to advance the theoretical foundations of AI safety and alignment. The project, "Theoretical Model Organisms of Misalignment," is funded by OpenAI's Alignment Team and the UK AI Security Institute, and aims to transform AI alignment from reactive trial-and-error into a predictive science.

The research group is led by Dr. Stefano Sarao Mannelli and collaborates closely with Prof. Andrew Saxe at University College London, as well as industrial advisors from leading AI labs such as Anthropic, Meta, and Google DeepMind/Mila. As a postdoc, you will join the Division of Data Science and AI within the Department of Computer Science and Engineering, a joint department of Chalmers and the University of Gothenburg.

The project leverages tools from statistical physics and high-dimensional probability to build tractable "model organisms" that capture the root causes of misalignment in AI systems. The goal is to derive quantitative laws for when and why harmful capabilities arise, focusing on inductive bias, fine-tuning, and mitigation strategies. Empirical validation through programming and mathematical modelling is a key component of the research.

Applicants must hold a doctoral degree (or equivalent) in Physics, Mathematics, Computer Science, or Machine Learning, with strong skills in mathematical modelling, analysis, and programming. Experience with the statistical physics of disordered systems, control theory, high-dimensional probability, teacher-student models, or deep linear networks is highly valued. Candidates should be accustomed to teaching and demonstrate potential in both research and education. Physical presence in Sweden is required throughout the employment, and a valid residence permit must be presented by the start date.

The position is a full-time, temporary employment for two years, with the possibility of a one-year extension. Funding is provided by OpenAI's Alignment Team and the UK AI Security Institute, and includes full employee benefits. You will dedicate 20% of your time to teaching, including lecturing, serving as a teaching assistant, or supervising students, and will engage regularly with the UCL team and industrial advisors. The position also provides valuable merit for future roles in academia, industry, or the public sector.

Chalmers offers a dynamic and inspiring working environment in Gothenburg, with generous parental leave, subsidised day care, free schools, and healthcare. The university is committed to gender balance, equality, and inclusion, and offers Swedish language courses for international staff.

To apply, submit your application in English as PDF files (maximum 40 MB each) via the provided application form. Include a comprehensive CV with publications and references, and a personal letter outlining your research background, outcomes, future goals, and motivation for applying. Incomplete applications and those sent by email will not be considered. The application deadline is April 1, 2026. For questions, contact Dr. Stefano Sarao Mannelli at [email protected].

Join Chalmers to contribute to cutting-edge research in AI safety and alignment, and help shape the future of trustworthy AI systems.
