Publisher

Leon van der Torre

Postdoctoral Fellow in Child-Safe Multi-Agent AI, University of Luxembourg, Luxembourg

Degree Level

Postdoc

Field of study

Computer Science

Funding

Available

Country

Luxembourg

University

University of Luxembourg

Keywords

Computer Science
Information Technology
Robotics
Social Robotics
Multi-agent System

About this position

The University of Luxembourg invites applications for a Postdoctoral Fellow in Child-Safe Multi-Agent AI, based at the Belval Campus within the Faculty of Science, Technology and Medicine (FSTM). The position is embedded in the Department of Computer Science and the Interdisciplinary Lab for Intelligent and Adaptive Systems (ILIAS), specifically within the Research Group for Individual and Collective Reasoning (ICR) led by Professor Dr. Leon van der Torre. The AI4KIDS project aims to advance child-centric safe AI by developing a norm-first Belief-Desire-Intention (BDI) architecture, where generative models (LLMs) are constrained by machine-readable child-protection policies to ensure purposeful, legally compliant, explainable, and auditable AI behaviour.

Research will focus on embedding explicit norms into the BDI cycle, compiling enforceable policy-as-code for real-time safety checks, parameter-efficient fine-tuning of LLMs for age-appropriate and pedagogically meaningful dialogue, and building high-performance computing pipelines on Luxembourg's MeluXina supercomputer for large-scale simulation and adversarial testing. The project includes industrial validation with social robotics platforms (such as QTrobot) for deployment in educational and special-needs contexts, integrating computational law, symbolic AI, and large-scale evaluation into a blueprint for safe child-facing AI.
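To give a concrete sense of the "norm-first" idea described above, the following is a minimal illustrative sketch (not project code): a BDI-style deliberation step in which every candidate action proposed by a generative model must pass machine-readable policy checks before it can become an intention, with each decision logged for auditability. All names here (`Policy`, `Agent`, the example rule `CP-01`) are hypothetical, invented for illustration.

```python
# Hypothetical sketch of norm-first deliberation: candidate actions (e.g. LLM
# outputs) are filtered by machine-readable child-protection rules before they
# become intentions, and every decision is recorded in an audit log.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    """A candidate action, e.g. an utterance proposed by an LLM."""
    name: str
    content: str

@dataclass
class Policy:
    """A machine-readable norm: a predicate over actions plus a rationale."""
    rule_id: str
    forbids: Callable[[Action], bool]
    rationale: str

@dataclass
class Agent:
    beliefs: dict
    policies: list = field(default_factory=list)
    intentions: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def deliberate(self, candidates):
        """Norm-first filtering: only policy-compliant candidates are adopted.
        Each outcome is logged so behaviour stays explainable and auditable."""
        adopted = []
        for action in candidates:
            violated = [p for p in self.policies if p.forbids(action)]
            if violated:
                reasons = "; ".join(f"{p.rule_id} ({p.rationale})" for p in violated)
                self.audit_log.append(f"BLOCKED {action.name}: {reasons}")
            else:
                self.audit_log.append(f"ADOPTED {action.name}")
                adopted.append(action)
        self.intentions.extend(adopted)
        return adopted

# Toy child-protection rule: never solicit a child's personal data.
no_personal_data = Policy(
    rule_id="CP-01",
    forbids=lambda a: "home address" in a.content.lower(),
    rationale="must not solicit a child's personal data",
)

agent = Agent(beliefs={"user_is_child": True}, policies=[no_personal_data])
adopted = agent.deliberate([
    Action("ask_topic", "What would you like to learn today?"),
    Action("ask_address", "What is your home address?"),
])
```

In this toy version the norm layer sits between generation and execution, so even a flawed generative proposal cannot bypass the compiled policy checks; the actual AI4KIDS architecture would of course be far richer.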

The successful candidate will lead high-impact research at the intersection of multi-agent systems, normative architectures, symbolic AI, and safe LLM integration. Responsibilities include designing and implementing norm-constrained agent architectures with real-time compliance checking, integrating research outcomes with social robot platforms and simulation environments, disseminating results through scientific publications and presentations, participating in research funding proposal writing, and performing teaching activities at bachelor, master, and PhD levels. The postdoctoral fellow will report to the project coordinator and Principal Investigator for AI4KIDS.

Applicants must hold a PhD in Computer Science or related fields (Software Engineering, Artificial Intelligence, Distributed Systems), with a strong research record in multi-agent systems, distributed/adaptive systems, and/or agent-oriented software engineering. Experience or strong interest in normative agent architectures (BDI), safety-by-design AI, and integration with robotic platforms is expected. Familiarity with LLMs and their safe adaptation for restricted domains is a strong asset. Candidates should be team players able to work in an interdisciplinary, international research environment and must be fluent in English.

The University of Luxembourg offers a modern, dynamic, and international environment with high-quality equipment and close ties to industry and European institutions. The position is a fixed-term contract for 24 months, with a yearly gross salary of EUR 85,176. The planned start date is June 2026. Applications should be submitted online through the HR system and must include a CV, cover letter, PhD diploma or expected defense date, transcript of university-level courses, publication list, and names/contact details of three referees. Early application is encouraged as applications are processed upon reception. The University promotes an inclusive culture and encourages applications from individuals of all backgrounds.

What's required

Applicants must hold a PhD degree in Computer Science or related fields such as Software Engineering, Artificial Intelligence, or Distributed Systems. A strong research record in multi-agent systems, distributed/adaptive systems, and/or agent-oriented software engineering is required, demonstrated by publications or projects. Experience or strong interest in normative agent architectures (BDI), safety-by-design AI, and integration with robotic platforms is expected. Familiarity with large language models (LLMs) and their safe adaptation for restricted domains is a strong asset. Candidates must be able to work in an interdisciplinary, international research environment and demonstrate fluency in English.

How to apply

Submit your application online through the University of Luxembourg HR system. Applications must include a CV, cover letter, PhD diploma or expected defense date, transcript of university-level courses, publication list, and names/contact details of three referees. Early application is encouraged as applications are processed upon reception. Applications by email will not be considered.
