Beyond Explainability: Evaluating XAI Methods with Confidence using xai_evals

As AI systems take center stage in decision-making across industries like healthcare, finance, and legal services, explainability alone is no longer enough. Post-hoc explanation methods (like SHAP, LIME, Grad-CAM, and Integrated Gradients) attempt to bridge the gap between model behavior and human understanding, but how can we trust these explanations? It’s not just about showing why a model made a decision; it’s about evaluating whether that explanation is trustworthy, stable, and actionable.

Join us for an exclusive session introducing xai_evals, AryaXAI’s open-source framework for evaluating the quality and robustness of AI model explanations. Whether you're a model developer, risk leader, or executive overseeing responsible AI deployment, this tool helps you debug models and audit explanations with measurable confidence.

Led by Pratinav Seth, Research Scientist at AryaXAI and co-author of the framework, this session will unpack how to assess explanation methods like SHAP, LIME, Grad-CAM, Integrated Gradients, and DLBacktrace, using metrics that reflect real-world deployment needs across modalities—tabular and image.
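To make the setting concrete, here is a minimal sketch of the kind of starting point the session builds on: producing per-instance attributions from two different explainers for the same tabular model, so they can then be scored side by side. The calls below use shap’s and lime’s own public APIs rather than xai_evals, and the dataset, model, and settings are placeholder choices for illustration only.

```python
# Minimal sketch using shap's and lime's own APIs (not xai_evals): produce
# per-instance attributions from two explainers for the same tabular model,
# ready to be scored side by side. Dataset, model, and settings are placeholders.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# SHAP attributions for a batch of test rows
# (array layout varies with the installed shap version: per-class list vs. 3-D array)
shap_values = shap.TreeExplainer(model).shap_values(X_test[:20])

# LIME attribution for a single test row
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=10)
print(lime_exp.as_list())  # [(feature condition, weight), ...] for the top 10 features
```

Attributions like these are the raw material that an evaluation framework such as xai_evals scores against a common set of metrics; the session will walk through that step in detail.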

Who Should Attend:

  • AI/ML Practitioners & Data Scientists: Compare, debug, and validate explanation methods with real metrics 
  • Model Auditors & Risk Managers: Use quantifiable explanation evaluations for risk profiling and internal audits 
  • Regulators & Compliance Officers: Understand how explanation metrics support legal obligations under frameworks like GDPR, the EU AI Act, and other emerging standards 
  • Chief AI Officers & Tech Leaders: Ensure models meet enterprise requirements for transparency, reliability, and accountability

What You’ll Learn:

  • Why most explainability tools fall short in evaluating explanations
  • How xai_evals enables consistent benchmarking across SHAP, LIME, Grad-CAM, Integrated Gradients, and DLBacktrace
  • Faithfulness, comprehensiveness, sensitivity, monotonicity, and other metrics, explained with real examples (see the sketch after this list)
  • How to implement XAI evaluations for tabular and image models in production pipelines
  • How evaluation metrics support governance, documentation, and audit workflows
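As a preview of the metric intuition referenced above, below is a hand-rolled, perturbation-style faithfulness check: mask the top-k most important features according to an attribution and measure how much the model’s predicted probability drops. This is an illustrative sketch only, not the xai_evals implementation; the model, the simple linear attribution, the mean-imputation baseline, and k=5 are all arbitrary choices made for the example.

```python
# Illustrative, hand-rolled faithfulness check (not the xai_evals implementation):
# mask the top-k features ranked by |attribution| and measure how much the model's
# predicted probability drops. A larger drop suggests the explanation points at
# features the model actually relies on.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)).fit(X, y)

def faithfulness_drop(model, x, attribution, baseline, k=5):
    """Probability drop after replacing the k most-attributed features with a baseline."""
    top_k = np.argsort(-np.abs(attribution))[:k]   # indices of the k most important features
    x_masked = x.copy()
    x_masked[top_k] = baseline[top_k]              # "remove" those features
    p_orig = model.predict_proba(x[None, :])[0, 1]
    p_masked = model.predict_proba(x_masked[None, :])[0, 1]
    return p_orig - p_masked

# A simple linear attribution for row 0: coefficient x scaled feature value
scaler = model.named_steps["standardscaler"]
clf = model.named_steps["logisticregression"]
attribution = clf.coef_[0] * scaler.transform(X[:1])[0]

baseline = X.mean(axis=0)                          # mean-imputation baseline
print(f"probability drop after masking top-5 features: "
      f"{faithfulness_drop(model, X[0], attribution, baseline):.3f}")
```

Comprehensiveness follows the same perturb-and-measure pattern, while sensitivity instead asks how much the explanation itself changes under small input perturbations; the session covers how xai_evals packages these checks into reusable metrics.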

Speaker:

Pratinav Seth is a research scientist at AryaXAI Alignment Lab, India, where he works at the intersection of Explainable AI (XAI), AI alignment, and AI safety for high-stakes, real-world applications. He is currently focused on interpreting black-box models, evaluating the reliability of XAI algorithms, and ensuring these systems are aligned and trustworthy. His expertise spans various domains in AI, with a keen interest in AI for Social Good. He was recognized as an AAAI Undergraduate Consortium Scholar in 2023. His research has been showcased at prestigious conferences, including ICML, ICLR, CVPR, NeurIPS, and ACL.

Stay till the end for a live Q&A with Pratinav Seth. Bring your questions on XAI metrics, model interpretability in production, or how to extend xai_evals for your custom use case.

