
Explainable AI (XAI) Beyond the Surface

In this post, we delve into advanced XAI: its characteristics, its main categories, and the Python libraries that support it 📚


Characteristics of XAI:

  • Transparency

  • Justification

  • Informativeness

  • Reliability


Categories of XAI:

  • Feature attribution: determine feature importance

  • Instance-based: identify a subset of features that guarantees a prediction

  • Graph-convolution-based: interpret the model using subgraphs

  • Self-explaining: develop models that are explainable by design

  • Uncertainty estimation: quantify the reliability of a prediction
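To make the feature-attribution category concrete, here is a minimal, dependency-free sketch of permutation importance: shuffle one feature at a time and measure how much the model's error grows. The `predict` function below is a hypothetical stand-in for a trained model, and the data is synthetic; real libraries (SHAP, scikit-learn's `permutation_importance`) do this far more carefully.

```python
import random
import statistics

# Hypothetical stand-in for a trained model: feature 0 dominates,
# feature 2 is ignored entirely.
def predict(row):
    return 3.0 * row[0] + 0.5 * row[1] + 0.0 * row[2]

def permutation_importance(data, targets, n_features):
    """Importance of feature j = rise in MSE after shuffling column j."""
    def mse(rows):
        return statistics.mean((predict(r) - t) ** 2 for r, t in zip(rows, targets))

    baseline = mse(data)
    rng = random.Random(0)
    importances = []
    for j in range(n_features):
        column = [row[j] for row in data]
        rng.shuffle(column)  # break the link between feature j and the target
        shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(data, column)]
        importances.append(mse(shuffled) - baseline)
    return importances

rng = random.Random(1)
data = [[rng.random() for _ in range(3)] for _ in range(200)]
targets = [predict(row) for row in data]  # model is perfect on this toy data
imps = permutation_importance(data, targets, 3)
print(imps)  # feature 0 largest, feature 1 small, feature 2 exactly 0
```

The ignored feature scores exactly zero because shuffling it never changes a prediction, which is the intuition behind attribution scores in general.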



Advanced XAI methods:

Diving into some examples from these categories, plus a few other techniques:

  • Counterfactual explanations: alternative scenarios that could have led to a different AI decision.

  • Human-readable rule extraction: extract human-readable rules from complex models.

  • Attention mechanisms: highlight the features or regions in the input data that influenced an AI decision.

  • NLP techniques: generate explanations in human-readable language to provide intuitive insights.

  • Bayesian networks: enable understanding of causal relationships within AI models.

  • Model confidence: convey model confidence and uncertainty in predictions.

  • Adversarial explanation: test AI models with intentionally modified inputs to reveal vulnerabilities or biases.

  • Transfer learning: widely used for improving AI performance, it can also enhance explainability.

  • Interactive explanations: allow users to actively engage with AI systems and explore decision pathways.

  • Human feedback loops: improve explanations by incorporating iterative feedback from human users.

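As an illustration of the counterfactual idea, the sketch below brute-forces the smallest income raise that flips a toy loan classifier's decision. The decision rule is invented purely for this example; real tools such as DiCE search over many features at once and optimize for minimal, plausible changes.

```python
# Hypothetical decision rule standing in for a trained classifier.
def approve(income, credit):
    return income + 20 * credit >= 100

def counterfactual_income(income, credit, step=1):
    """Smallest income that flips a rejection into an approval."""
    if approve(income, credit):
        return income  # already approved; nothing to change
    candidate = income
    while not approve(candidate, credit):
        candidate += step
    return candidate

# Applicant rejected at income=30, credit=2 (30 + 40 < 100).
needed = counterfactual_income(30, 2)
print(needed)  # 60: "had your income been 60 instead of 30, you'd be approved"
```

The answer itself is the explanation: it tells the applicant exactly what would have changed the outcome, without exposing the model's internals.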

Here are 59 Python libraries for AI explainability and model interpretability:

  1. SHAP - SHAP Documentation

  2. LIME - LIME Documentation

  3. ELI5 - ELI5 Documentation

  4. Alibi - https://docs.seldon.io/projects/alibi/en/latest/

  5. interpret - interpret Documentation

  6. Dalex - https://dalex.drwhy.ai/python/api/

  7. Captum - Captum Documentation

  8. Skater - https://github.com/GapData/skater

  9. Fairlearn - Fairlearn Documentation

  10. Fairness Indicators - https://github.com/tensorflow/fairness-indicators, https://notebook.community/tensorflow/fairness-indicators/fairness_indicators/documentation/examples/Fairness_Indicators_Example_Colab

  11. Yellowbrick - https://www.scikit-yb.org/en/latest/

  12. PyCEbox - PyCEbox Documentation

  13. Anchor - Anchor Documentation

  14. SHAPash - SHAPash Documentation

  15. DiCE - DiCE Documentation

  16. Aequitas - https://github.com/dssg/aequitas

  17. CleverHans - CleverHans Documentation

  18. PrivacyRaven - PrivacyRaven Documentation

  19. interpretML - interpretML Documentation

  20. PDPbox - PDPbox Documentation

  21. Fairness - Fairness Documentation

  22. FAT Forensics - FAT Forensics Documentation

  23. What-If Tool - https://pair-code.github.io/what-if-tool/

  24. certifai - certifai Documentation

  25. Explanatory Model Analysis - https://ema.drwhy.ai/

  26. XAI - XAI Documentation

  27. Fairness Comparison - Fairness Comparison Documentation

  28. AI Explainability 360 - https://aix360.readthedocs.io/en/latest/

  29. BlackBoxAuditing - BlackBoxAuditing Documentation

  30. Deap - Deap Documentation

  31. Facets - https://github.com/BCG-X-Official/facet

  32. TCAV - TCAV Documentation

  33. Grad-CAM - Grad-CAM Documentation

  34. AIX360 - AIX360 Documentation

  35. fairkit-learn - fairkit-learn Documentation

  36. Adversarial Robustness Toolbox (ART) - ART Documentation

  37. ExplainX.ai - ExplainX.ai Documentation

  38. Treeinterpreter - Treeinterpreter Documentation

  39. H2O.ai Explainability - H2O.ai Explainability Documentation

  40. TensorFlow Explain - https://www.tensorflow.org/guide -- https://tf-explain.readthedocs.io/en/latest/usage.html

  41. Concept Activation Vectors - Concept Activation Vectors Documentation

  42. Holoclean - Holoclean Documentation

  43. Saabas - https://cran.r-project.org/web/packages/tree.interpreter/vignettes/MDI.html

  44. RelEx - RelEx Documentation

  45. iNNvestigate - iNNvestigate Documentation

  46. Profweight - Profweight Documentation

  47. XDeep - XDeep Documentation

  48. DeepLIFT - DeepLIFT Documentation

  49. L2X - L2X Documentation

  50. Fiddler AI - Fiddler AI Documentation

  51. TrustyAI - TrustyAI Documentation

  52. RAI - RAI Documentation

  53. LimeTabular - LimeTabular Documentation

  54. Gamut - Gamut Documentation

  55. cxplain - cxplain Documentation

  56. AnchorTabular - AnchorTabular Documentation

  57. H2O-3 Explainability - https://docs.h2o.ai/h2o/latest-stable/h2o-docs/explain.html

  58. Alibi Detect - https://github.com/SeldonIO/alibi-detect

  59. WeightWatcher - WeightWatcher Documentation


These resources provide comprehensive guides and examples for implementing and understanding the respective tools and frameworks.
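Many of these libraries (LIME in particular) share one core idea: fit a simple surrogate model on perturbed inputs near a single example, then read the explanation off the surrogate. Here is a dependency-free sketch of that idea, with a made-up one-dimensional black box; real LIME handles many features, weights samples by proximity, and works on text and images too.

```python
import random

# Hypothetical black box standing in for a complex model.
def black_box(x):
    return x * x

def local_surrogate_slope(x0, radius=0.1, n=500, seed=0):
    """Fit y = a*x + b on perturbations near x0; the slope a is the local explanation."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-radius, radius) for _ in range(n)]
    ys = [black_box(x) for x in xs]
    mx = sum(xs) / n
    my = sum(ys) / n
    # Ordinary least-squares slope: cov(x, y) / var(x).
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

slope = local_surrogate_slope(3.0)
print(slope)  # close to 6.0 — the derivative of x² at x = 3
```

The surrogate is only valid near `x0`, which is exactly the point: a locally faithful linear model is easy to read even when the global model is not.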


Let's build a future where humans and AI work together to achieve extraordinary things!


Let's keep the conversation going!

What are your thoughts on the limitations of AI for struggling companies? Share your experiences and ideas for successful AI adoption.


Contact us (info@drpinnacle.com) today to learn more about how we can help you.




© 2020 by Dr.Pinnacle. All rights reserved.
