My current research revolves around Human-Aware AI, with a particular focus on using logic-based techniques to improve the explainability of systems involved in (mostly sequential) decision-making. My overarching goal is to engineer AI agents that can engage and cooperate with humans in a manner that is as intuitive and natural as human-to-human interaction.
Take planning as an example. Agents often suggest plans that a human user finds hard to understand, especially when compared to alternative plans the user has in mind. This disconnect underscores the need for agents to offer detailed explanations of their recommendations.
To tackle this challenge, our approach is rooted in Knowledge Representation and Reasoning (KR). We use logical expressions to create structured representations of the (mental) models of both the AI agent and the human user. Within this KR framework, we adapt and generalize key notions, such as entailment, hitting sets, and model counting, to systematically address the problem of generating explanations. By integrating these methodologies, we are paving the way for a new paradigm in Human-AI Interaction, one where not only the needs but also the conceptual models of human users are fully acknowledged and incorporated into the decision-making process of the AI agent.
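To make the hitting-set idea concrete, here is a minimal sketch (not our actual system) in which each "conflict" is a set of model discrepancies between the agent's and the user's models, any one of which would resolve that conflict; a minimal hitting set over all conflicts then corresponds to a minimal explanation. The discrepancy labels are purely illustrative, and the enumeration is brute force for clarity.

```python
from itertools import combinations

def minimal_hitting_sets(conflicts):
    """Enumerate all minimal hitting sets of a family of sets (brute force).

    A hitting set intersects every set in `conflicts`; a minimal one has
    no proper subset that is also a hitting set. Exponential in the size
    of the universe, so this is only suitable for small illustrations.
    """
    universe = sorted(set().union(*conflicts))
    minimal = []
    for r in range(1, len(universe) + 1):
        for cand in combinations(universe, r):
            c = set(cand)
            # Must hit (intersect) every conflict set.
            if all(c & s for s in conflicts):
                # Keep only sets with no smaller hitting set inside them.
                if not any(h <= c for h in minimal):
                    minimal.append(c)
    return minimal

# Hypothetical model discrepancies: each conflict lists the pieces of
# information, any one of which would let the user see why the agent's
# plan is valid (or why their alternative fails).
conflicts = [
    {"door_is_locked", "key_in_room_B"},
    {"key_in_room_B", "corridor_blocked"},
]

for explanation in minimal_hitting_sets(conflicts):
    print(sorted(explanation))
```

Here `{"key_in_room_B"}` hits both conflicts on its own, so it is a minimal explanation; `{"door_is_locked", "corridor_blocked"}` is the other minimal option. In practice, the conflicts themselves would be derived by reasoning (e.g., via entailment checks) over logical encodings of the two models rather than written by hand.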

