

3 Fully Funded PhD Positions | Making AI Coding Agents Reliable and Production-Ready

Targeted study program: Doctorate
Research domains: Software Systems, Multimedia and Cybersecurity
Financing: Fully funded

AI coding assistants like GitHub Copilot or Claude are reshaping software development — adopted by millions of developers worldwide, integrated into professional workflows, and increasingly trusted to write code that ends up in production systems. Yet studies show that a significant portion of the code they generate contains bugs, security vulnerabilities, or simply fails to run in real environments. As adoption accelerates, so does the risk: developers under time pressure accept AI suggestions they don't fully scrutinize, and broken or undeployable code quietly makes its way into critical software.

But the problem doesn't stop at deployment. Once AI-generated code is running in production, new challenges emerge: systems behave in unexpected ways, failures are harder to anticipate, and traditional monitoring approaches struggle to keep up with the complexity and volume of modern software. Closing the loop between code generation and system operation — using AI not only to write software but to watch over it, detect anomalies, and reason about failures at runtime — is one of the most pressing open problems at the intersection of AI and software engineering.
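
To make the runtime half of that loop concrete, here is a deliberately minimal sketch (invented for this posting, not drawn from the project) of metric-based anomaly detection: a rolling z-score over request latencies. The window size, threshold, and trace are arbitrary illustrative values; real AI-driven monitoring would reason jointly over logs, traces, and metrics.

    from collections import deque
    from statistics import mean, stdev

    def detect_latency_anomalies(samples, window=30, threshold=3.0):
        """Flag samples that deviate by more than `threshold` standard
        deviations from a rolling window of recent latencies.

        A minimal stand-in for the kind of runtime anomaly detection
        this project would study, nothing more.
        """
        recent = deque(maxlen=window)
        anomalies = []
        for i, latency_ms in enumerate(samples):
            if len(recent) >= 2:
                mu, sigma = mean(recent), stdev(recent)
                if sigma > 0 and abs(latency_ms - mu) / sigma > threshold:
                    anomalies.append((i, latency_ms))
            recent.append(latency_ms)  # current sample joins the baseline after the check
        return anomalies

    # Example: a latency spike hiding in otherwise steady traffic.
    trace = [12, 11, 13, 12, 14, 12, 11, 250, 12, 13]
    print(detect_latency_anomalies(trace, window=5, threshold=3.0))  # [(7, 250)]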

These gaps are not a minor inconvenience; they are a fundamental barrier to the responsible and effective use of agentic AI in software engineering. Solving them requires more than better models: it demands a deep understanding of why these agents fail, when their output cannot be trusted, and how the full software lifecycle, from generation to deployment to operation, can be made more reliable with agentic AI.

We are looking for three motivated PhD students to join our research team and take on this challenge.

About the Project

This project investigates the fundamental limits of state-of-the-art AI coding agents across the full software lifecycle, with the goal of making the code they generate more correct, more deployable, and more observable once running. You will combine rigorous empirical analysis with the design and evaluation of novel solutions — research that is both scientifically impactful and practically urgent.

Core research questions include:

  • Where and why do coding agents produce plausible-but-broken code? (An illustrative example follows this list.)
  • Under what conditions does AI-generated code fail to deploy?
  • How can AI-powered monitoring detect, diagnose, and explain failures in systems built with AI-generated code?
  • What new techniques can systematically close the gap between code generation and reliable system operation?
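
To make the first question concrete, here is a hypothetical instance of plausible-but-broken code, invented for illustration rather than taken from any real agent's output: a leap-year check that passes the examples a reviewer is likely to try, yet is wrong for century years.

    def is_leap_year(year: int) -> bool:
        # Plausible suggestion: matches almost every year people spot-check...
        return year % 4 == 0

    assert is_leap_year(2024)      # passes
    assert is_leap_year(2000)      # passes (2000 happens to be a leap year)
    assert not is_leap_year(1900)  # AssertionError: 1900 was NOT a leap year
    # The correct rule also handles century years:
    #   year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)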

What You'll Work On

  • Benchmarking and failure analysis of AI coding agents (e.g., Copilot, Codex-based systems); a minimal harness is sketched after this list
  • Designing and evaluating novel approaches to improve correctness and deployability
  • Exploring AI-driven software monitoring: anomaly detection, failure diagnosis, and runtime reasoning over AI-generated systems
  • Publishing at top venues in software engineering and AI (ICSE, FSE, ASE, NeurIPS, ICLR)
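
For a flavor of what benchmarking and failure analysis can look like in practice, the sketch below (a hypothetical harness, not the project's actual methodology) runs a candidate snippet and a test in a fresh Python interpreter and buckets the outcome into coarse categories; the category names and timeout are assumptions made for illustration.

    import os
    import subprocess
    import sys
    import tempfile

    def classify_candidate(code: str, test: str, timeout_s: float = 5.0) -> str:
        """Run `code` followed by `test` in a fresh interpreter and bucket
        the outcome into a coarse, illustrative failure category."""
        try:
            compile(code, "<candidate>", "exec")  # catches syntactically invalid output
        except SyntaxError:
            return "syntax_error"
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code + "\n" + test)
            path = f.name
        try:
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, timeout=timeout_s)
        except subprocess.TimeoutExpired:
            return "timeout"  # e.g., the candidate loops forever
        finally:
            os.unlink(path)
        if result.returncode != 0:
            # Crashes and failed assertions both land here; a real harness
            # would parse stderr to separate wrong output from runtime errors.
            return "runtime_or_assertion_failure"
        return "pass"

    # The leap-year snippet from earlier fails its century-year test:
    candidate = "def is_leap_year(y):\n    return y % 4 == 0\n"
    test = "assert not is_leap_year(1900)"
    print(classify_candidate(candidate, test))  # runtime_or_assertion_failure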

Required knowledge

  • A Master's degree in computer science or a related field
  • Solid foundations in software engineering, machine learning, or programming languages
  • Intellectual curiosity and a genuine drive to do impactful research
  • Experience with LLMs, program analysis, or software testing is a plus, but not required