Using Multiple Explainability Methods for Model Training and Upload

AryaXAI supports multiple explainability methods to enhance model interpretability:

  1. Default Method:
    • SHAP (SHapley Additive exPlanations): SHAP is enabled by default, providing consistent and reliable feature attributions.
  2. Additional Supported Methods:
    • LIME (Local Interpretable Model-Agnostic Explanations): LIME is available as an alternative for generating local explanations.
    • DL-Backtrace: Our proprietary explainability method, designed for deep learning models, offers highly accurate and stable explanations tailored to complex architectures.
    • Integrated Gradients
    • Grad-CAM (Gradient-weighted Class Activation Mapping)
  3. Upcoming Methods:
    • CEM (Contrastive Explanation Method): Soon to be integrated, CEM will allow generating contrastive explanations to highlight critical feature differences.

You can switch between supported methods based on your project requirements. 

Using Multiple Explainability Methods for Model Training and Upload

Users can also train or upload models with support for multiple explainability methods simultaneously. This feature allows you to leverage different approaches for better insights into your model's behavior.

To enable multiple explainability methods, use the explainability_method parameter as shown below:


"explainability_method": ["lime", "shap"]

SHAP exposes 'Data Sampling size' as a customizable parameter. This can be defined while training a new model in AryaXAI AutoML.

The function below visualizes the prediction path taken by the model. If the graph does not appear, retrain the model on a larger compute instance; this step can fail when there is not enough compute.


modelinfo.prediction_path() 
# The tree prediction path the model used for prediction (only for tree-based models)

Case explainability

The command below displays a list of cases. You can filter the list using tags and search for a particular case by its unique identifier.


project.cases(tag='Training')  # List cases; filter by tag and search for a case by its unique identifier

Users can view a case by providing at least one component from the list, such as feature importance, policies, or similar cases. To retrieve all components at once, pass "all" in the component list.


case_info = project.case_info(unique_identifier='1171', tag='training', components=['similar_cases'], instance_type="nova-4")

SHAP Feature Importance: To retrieve the SHAP feature importance for a given case, use the following code:


case_info.explainability_shap_feature_importance()

Integrated Gradients Feature Importance: To retrieve the Integrated Gradients feature importance for the case, use the following code:


case_info.explainability_ig_feature_importance()

DLBacktrace Feature Importance: Developed by AryaXAI, DLBacktrace offers a new approach to interpreting deep learning models of any modality and size by meticulously tracing relevance from model decisions back to their inputs. To get feature importance for a case using DLBacktrace, use the following function:


case_info.explainability_dlb_feature_importance()

To invoke the Case View through Gova servers, pass the instance type as a parameter—for example: instance_type = "gova-2".

For a Tabular project:


case_info = project.case_info(unique_identifier="case999", tag='training', components=['all'], instance_type="gova-2")

Users can retrieve the feature importance value for a specific feature by passing the feature name as a parameter to the following function:


case_info.feature_importance(feature="last_pymnt_amnt")

Audit

The Audit tab in the case view provides key insights into model behavior and system alerts associated with the selected case. The overview includes:

  • Model Snapshot: Displays the model information as it was at the time the case was processed.
  • Recent Alerts: Lists any alerts triggered for this case within the past 7 days, helping users identify anomalies or issues related to model performance, data quality, or policy violations.

case_info.audit()

To retrieve alerts from the past 7 days, use the Alerts Trail function as shown below. It will return only the relevant results within that time frame.


case_info.alerts_trail()

To view alerts triggered within a specific time frame, pass the desired number of days as a parameter to the following function:


case_info.alerts_trail(days=70)

Activating Inference for an Inactive Model:

Even when the default XGBoost model is active, you can access cases associated with an inactive model by first activating inference for the desired model. Use the following function to activate inference:


project.update_inference_model_status(model_name="XGBoost_v2", activate=True)

Retrieving Case Details for a Specific Model:

Once inference for the specified model is activated, you can retrieve the case details by using the case_info function. Use the following function:


case_info = project.case_info(unique_identifier="129550520", tag="training", model_name="XGBoost_v2")

Feature Importance

Case Feature Importance - Retrieve and analyze feature importance for a given case using SHAP or LIME.


case_info.explainability_shap_feature_importance()
case_info.explainability_lime_feature_importance()

Raw Data and Engineered Data

AryaXAI provides a convenient way to segregate raw data and engineered data, letting you switch easily between viewing the original raw data and the processed engineered data that your model is actually trained on.

To fetch the raw data of all features for a particular case via the SDK, use the following function:


# raw data
case_info.explainability_raw_data() 

Uploading feature mapping:

To upload a feature mapping file in JSON format, use the upload_feature_mapping function. This function allows you to map features for your model directly from a JSON file.


project.upload_feature_mapping("/content/feature.json")
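
The expected schema of the mapping file depends on your project; the sketch below is purely illustrative, and the mapping keys and feature names are assumptions rather than the documented format.


# Illustrative only: the mapping structure and feature names are assumptions,
# not the documented schema. Check the AryaXAI docs for the expected JSON format.
import json

feature_mapping = {
    "last_pymnt_amnt": "last_payment_amount",  # assumed engineered-to-raw feature pairing
}

with open("/content/feature.json", "w") as f:
    json.dump(feature_mapping, f)

project.upload_feature_mapping("/content/feature.json")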

Running a new case


case_info = project.case_info(unique_identifier='', tag='')

Retrieve logs

You can set up ML Explainability and rerun inferencing from time to time. All the inferencing logs can be viewed here.

Get all cases which are already viewed


project.case_logs(page=1)

# You can retrieve the inferencing output for any previously viewed case here. No credits are used when retrieving logs.
case = project.get_viewed_case(case_id="")

To fetch explainability for a case, pass the UID and the tag. This will use the currently 'active' model.

Caution: if you receive a 'Failed to generate explainability' notification, rerun the model training and redo the case view.

case_info = project.case_info('unique_identifier', 'tag')

# Case Decision
case_info.explainability_decision() 
NOTE: This takes some time, as all predictions are computed in real time.

Help function for the case_info method:


help(project.case_info) 
NOTE: If you change the active model, then the prediction and explainability will change as well.

To fetch the Case Prediction Path:


case_info.explainability_prediction_path()

Similar cases as explanations

'Similar cases,' also known as reference explanations, parallels the concept of citing references for a prediction. This method extracts the most similar cases from the training data compared to the 'prediction case.' The similarity algorithm employed depends on the plan. In the AryaXAI Developer version, the 'prediction probability' similarity method is used, while the AryaXAI Enterprise version offers additional methods such as 'Feature Importance Similarity' and 'Data Similarity.'

To list all similar cases with respect to a particular case, use the following function:


# List of similar cases with respect to a case
case_info.similar_cases()

Get data of the similar cases via SDK:


# Data of Similar Cases
case_info.explainability_similar_cases()