Andre Freitas
1 year ago
Agent/LLM-based Evidence-based Reasoning: IDIAP and EPFL PhD programme in Switzerland
Degree Level
PhD
Field of study
Computer Science
Funding
4-year position funded by the SNSF-FAPESP RATIONAL project
Deadline
December 31st
Country
Switzerland
University
Idiap Research Institute (EPFL PhD programme)
Where to contact
Official Email
Keywords
Computer Science
Artificial Intelligence
Natural Language Processing
Computational Linguistics
Biomedical Applications
Large Language Models
About this position
PhD position in Agent/LLM-based Evidence-based Reasoning (Switzerland)
* How to make LLM-based models capable of rigorous, fact-based inference?
* How to build AI models which can make sense of large-scale complex evidence?
This is an exciting opportunity to work at the interface between Large Language Models (LLMs) and complex reasoning.
The project:
LLMs have laid the foundations for models that can interpret and reason over language at scale. However, technical challenges remain in delivering models capable of factually correct, controlled, and rigorous reasoning, which is fundamental in critical application domains such as biomedicine and policy-making.
In this project, we will develop novel natural language inference (NLI) paradigms that can jointly reason over qualitative and quantitative evidence at scale. The project will focus on areas such as: (i) the development of new reasoning-planning models for evidence-based reasoning and (ii) the integration of NLI models with statistical, causal, and mechanistic inference paradigms.
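To make the "jointly reason over qualitative and quantitative evidence" idea concrete, here is a deliberately minimal sketch, not the project's actual method: all names are hypothetical, a boolean stands in for an NLI model's entailment judgement over textual evidence, and a mean-vs-threshold test stands in for statistical inference.

```python
# Toy sketch only (hypothetical names, not the project's method): a claim is
# checked against both numeric measurements and a qualitative judgement.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Verdict:
    label: str      # "supported" | "refuted" | "insufficient"
    rationale: str  # human-readable trace of the inference steps

def check_claim(claim_threshold: float,
                measurements: list[float],
                qualitative_evidence: dict[str, bool]) -> Verdict:
    """Combine a quantitative test with qualitative (textual) evidence."""
    if not measurements:
        return Verdict("insufficient", "no quantitative evidence available")

    # Quantitative step: does the observed mean exceed the claimed threshold?
    observed = mean(measurements)
    quantitative_ok = observed > claim_threshold

    # Qualitative step: this boolean stands in for an NLI model judging
    # whether the textual evidence entails the claimed mechanism.
    qualitative_ok = qualitative_evidence.get("mechanism_is_plausible", False)

    if quantitative_ok and qualitative_ok:
        return Verdict("supported",
                       f"mean {observed:.2f} > {claim_threshold} and mechanism plausible")
    if not quantitative_ok:
        return Verdict("refuted",
                       f"mean {observed:.2f} does not exceed {claim_threshold}")
    return Verdict("insufficient", "numbers agree but no mechanistic support")

# Example claim: "the treatment raises the biomarker above 5.0"
print(check_claim(5.0, [5.4, 6.1, 5.8], {"mechanism_is_plausible": True}))
```

The point of the toy is only the shape of the problem: a single verdict must aggregate heterogeneous evidence, and the rationale string hints at the controlled, inspectable inference traces the project description calls for.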
The candidate will work at the Neuro-symbolic AI Group at IDIAP and will also be affiliated with the EPFL PhD programme.
This is a 4-year position funded by the SNSF-FAPESP RATIONAL project in collaboration with Daniel Pedronette (UNESP).
Candidates are expected to have:
A BSc/MSc degree in Computer Science or related areas.
Previous academic or industrial project experience in NLP (evidenced by project results and papers).
Confidence in software development and in building complex NLP experimental pipelines.
Interested applicants should send their CV to [email protected] by December 31st.
Funding details
4-year position funded by the SNSF-FAPESP RATIONAL project
How to apply
Interested applicants should send their CV to [email protected].
I’ve participated in discussions about the limitations of Large Language Models (LLMs), with the most prominent critique being their alleged inability to reason.
LLMs possess clear, spatially abstracted representations of words, both in and out of context: they encode relationships between words and concepts in a multidimensional space. Language itself, however, is merely a medium for conveying information. What is truly interesting is that when an LLM creates connections between abstract representations of concepts, facts, or thoughts, it is effectively processing and reasoning through those representations. This aligns with the essence of reasoning: the ability to form connections. In fact, the word "intelligence" derives from the Latin intellegere (inter- + legere), "to choose between" or "to pick out among", a capacity that at its core rests on forming connections.
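As a toy illustration of what "relationships in a multidimensional space" means geometrically, the sketch below uses hand-made 4-dimensional vectors with made-up dimensions (not real embeddings) and cosine similarity; actual LLM representations are learned and have thousands of dimensions, but the principle is the same.

```python
# Toy illustration (hand-made vectors, not real embeddings): how "spatial"
# word representations encode relatedness via the angle between vectors.
import math

embeddings = {
    # hypothetical dimensions: [royalty, gender, animacy, edibility]
    "king":  [0.9, 0.8, 1.0, 0.0],
    "queen": [0.9, 0.1, 1.0, 0.0],
    "apple": [0.0, 0.0, 0.0, 1.0],
}

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# "king" is far closer to "queen" than to "apple": the geometry itself
# carries the conceptual relationship the surrounding text describes.
print(cosine(embeddings["king"], embeddings["queen"]))  # high (~0.9)
print(cosine(embeddings["king"], embeddings["apple"]))  # 0.0 (orthogonal)
```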
This perspective challenges the notion that LLMs lack reasoning. While their reasoning may differ fundamentally from human cognition, it’s difficult to deny the sophisticated abstraction and pattern recognition they exhibit.
On these topics, I recommend the PhD position in Switzerland shared by Andre Freitas, whose lectures I had the privilege of attending. His perspective adds depth to this debate and is worth exploring further, as is the evidence-based, fact-controlled LLM reasoning this PhD project targets, which is valuable for critical domains such as biomedicine and policy-making.