Yehonatan Elisha

AI Researcher | PhD Candidate at Tel Aviv University

About Me

I am a PhD candidate at Tel Aviv University, advised by Oren Barkan and Noam Koenigstein.
My research centers on Interpretable and Explainable AI, where I focus on making complex black-box models transparent, adaptive, and trustworthy.

Alongside my academic work, I lead a research group and have spent three years as a Computer Vision Researcher working on video analytics and classical computer vision.
Before my PhD, I worked for three years as a Data Engineer, focusing on data streaming, ETL, and system modeling, where I built a data processing system from scratch that served the data needs of multiple teams and consumers.

I'm always happy to connect, so feel free to reach out if you'd like to explore potential collaborations!

Explainable AI · Interpretable AI · Computer Vision · Natural Language Processing · Deep Learning

News

Feb 2026 Our paper "Concept-Guided Fine-Tuning: Steering ViTs away from Spurious Correlations to Improve Robustness" has been accepted to CVPR 2026.
Nov 2025 Three papers accepted to AAAI 2026.
Oct 2025 Our paper "Soft Local Completeness: Rethinking Completeness in XAI" was presented as an Oral paper at ICCV 2025 in Hawaii.
Aug 2025 Our paper "Forget What You Know about LLMs Evaluations" has been accepted to the main track at EMNLP 2025.
Mar 2025 Our paper "BEE: Metric-Adapted Explanations via Baseline Exploration-Exploitation" was presented at AAAI 2025 in Philadelphia.

Selected Publications

Rethinking Saliency Maps: A Cognitive Human Aligned Taxonomy and Evaluation Framework for Explanations. Yehonatan Elisha, Seffi Cohen, Oren Barkan, Noam Koenigstein. AAAI 2026.

Extracting Interaction-Aware Monosemantic Concepts in Recommender Systems. Dor Arviv, Yehonatan Elisha, Oren Barkan, Noam Koenigstein. AAAI 2026.

Fidelity-Aware Recommendation Explanations via Stochastic Path Integration. Oren Barkan*, Yahlly Schein*, Yehonatan Elisha, Veronika Bogina, Mikhail Baklanov, Noam Koenigstein. AAAI 2026 (Oral).

Soft Local Completeness: Rethinking Completeness in XAI. Ziv Weiss Haddad*, Oren Barkan*, Yehonatan Elisha, Noam Koenigstein. ICCV 2025 (Oral).

Forget What You Know about LLMs Evaluations - LLMs are Like a Chameleon. Nurit Cohen Inger, Yehonatan Elisha, Bracha Shapira, Lior Rokach, Seffi Cohen. EMNLP 2025.

BEE: Metric-Adapted Explanations via Baseline Exploration-Exploitation. Oren Barkan*, Yehonatan Elisha*, Jonathan Weill, Noam Koenigstein. AAAI 2025.

Refining Fidelity Metrics for Explainable Recommendations. Mikhail Baklanov, Veronika Bogina, Yehonatan Elisha, Yahlly Schein, Liron Allerhand, Oren Barkan, Noam Koenigstein. SIGIR 2025.

Improving LLM Attributions with Randomized Path-Integration. Oren Barkan*, Yehonatan Elisha*, Yonatan Toib*, Jonathan Weill, Noam Koenigstein. EMNLP Findings 2024.

LLM Explainability via Attributive Masking Learning. Oren Barkan*, Yonatan Toib*, Yehonatan Elisha*, Jonathan Weill, Noam Koenigstein. EMNLP Findings 2024.

Probabilistic Path Integration with Mixture of Baseline Distributions. Yehonatan Elisha, Oren Barkan, Noam Koenigstein. CIKM 2024.

A Learning-based Approach for Explaining Language Models. Oren Barkan*, Yonatan Toib*, Yehonatan Elisha*, Noam Koenigstein. CIKM 2024.

Visual Explanations via Iterated Integrated Attributions. Oren Barkan*, Yehonatan Elisha*, Yuval Asher, Amit Eshel, Noam Koenigstein. ICCV 2023.

Stochastic Integrated Explanations for Vision Models. Oren Barkan*, Yehonatan Elisha*, Jonathan Weill, Yuval Asher, Amit Eshel, Noam Koenigstein. ICDM 2023.

Deep Integrated Explanations. Oren Barkan*, Yehonatan Elisha*, Jonathan Weill, Yuval Asher, Amit Eshel, Noam Koenigstein. CIKM 2023.