Bias Monitoring
Given how common bias and vulnerabilities are in ML models, it is crucial to understand how a model behaves before deploying it to production. In general, a model is said to exhibit bias if its decisions disproportionately harm a protected group without a justifiable reason.
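The idea of "disproportionate impact" can be made concrete with a simple fairness metric. The sketch below computes the disparate impact ratio (the favorable-outcome rate of the worst-off group divided by that of the best-off group) in plain Python. This metric and the common 0.8 rule-of-thumb threshold are illustrative assumptions for exposition, not a description of how AryaXAI measures bias internally:

```python
from collections import defaultdict

def disparate_impact(decisions):
    """Ratio of favorable-outcome rates between the least- and most-favored groups.

    decisions: iterable of (group, favorable) pairs, where favorable is a bool.
    Returns a value in (0, 1]; 1.0 means all groups receive favorable
    outcomes at the same rate.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable count, total count]
    for group, favorable in decisions:
        counts[group][0] += int(favorable)
        counts[group][1] += 1
    rates = {g: fav / total for g, (fav, total) in counts.items()}
    return min(rates.values()) / max(rates.values())

# Toy loan-approval data: group A is approved 80% of the time, group B only 40%.
loans = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 40 + [("B", False)] * 60
)
print(disparate_impact(loans))  # 0.5
```

A ratio this far below 1.0 flags the model's output for closer inspection; whether the disparity is justified (for example, by a legitimate feature correlated with group membership) is the kind of question the platform's analytical and reporting tools help answer.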
AryaXAI's bias monitoring functionality detects potential bias in a model's output. The platform also provides analytical and reporting capabilities that can help determine whether the bias is justified.
Monitor bias in your models through the AryaXAI Python package:
Helper function to get the bias monitoring dashboard: