# MonitoringCustomMetrics
`MonitoringCustomMetrics` is a code package that simplifies the creation of custom metrics for monitoring Machine Learning models. We follow
the formats and standards defined by [Amazon SageMaker Model Monitor](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html).
It can be executed locally by using Docker, or it can be used within a SageMaker Processing Job.
## What does it do?
This tool helps you monitor the quality of ML models with metrics that are not present in Amazon SageMaker Model Monitor. We follow
SageMaker standards for metric output:
- [Statistics file](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-interpreting-statistics.html): raw statistics calculated
per column/feature. They are calculated for the baseline and also for the current input being analyzed.
- [Constraints file](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-byoc-constraints.html): these are the constraints
that a dataset must satisfy. The constraints are used to determine if the dataset has violations when running an evaluation job.
- [Constraint violations file](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-interpreting-violations.html): generated as
the output of a monitor execution. It contains the list of constraints evaluated (using a provided constraints file) against the dataset
being analyzed.
To avoid filename conflicts with SageMaker Model Monitor output, our output files are named:
- community_statistics.json
- community_constraints.json
- community_constraint_violations.json
## Operation modes
The package has two operation modes:
- Suggest baseline: as the name implies, this operation mode suggests a baseline that you can later use for evaluating statistics.
It generates "statistics" and "constraints" files. You will need to provide the input file(s) to be evaluated.
For Model Quality metrics, a "parameters.json" file is also needed to specify the metrics to evaluate and any additional
required parameters.
- Run monitor: evaluates the input file(s) using the constraints provided and generates a "constraint_violations" file.

The package can perform both Data Quality and Model Quality analyses. The input can be a single file, or it can be split across multiple files.
### Data Quality
Data Quality analysis will evaluate all the existing metrics against all the columns. Based on the inferred column type, the package will
run either "numerical" or "string" metrics on a given column.
### Model Quality
Model Quality analysis will only evaluate metrics specified in the configuration file provided.
## Known limitations
- The code runs on a single machine. When running in a SageMaker Processing Job, it is limited to the capacity of a single instance.
- Pandas loads data in memory. Choose a host that can handle the amount of data you need to process (see the sketch below).
- `MonitoringCustomMetrics` expects the input file(s) to be in CSV format (comma-separated values).
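The snippet below is a rough illustration of that memory constraint. It is not the package's actual loader, and the path and file pattern are assumptions based on the Dockerfile examples later in this README.
```
import glob
import os

import pandas as pd

# Hypothetical illustration only: every CSV part under the input path ends up in a
# single in-memory DataFrame, so the host must be able to hold the full dataset.
INPUT_DIR = "/opt/ml/processing/input/data"  # container input path used later in this README


def load_input(input_dir: str = INPUT_DIR) -> pd.DataFrame:
    paths = sorted(glob.glob(os.path.join(input_dir, "*")))
    frames = [pd.read_csv(path) for path in paths]
    return pd.concat(frames, ignore_index=True)
```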
# Running the package locally
`MonitoringCustomMetrics` can be executed locally. You will need to install Docker CLI, set the needed parameters in the Dockerfile, and provide
the required input file(s).
## Prerequisites
Before running locally, you will need to install Docker CLI:
https://docs.docker.com/get-started/get-docker/
https://docs.docker.com/reference/cli/docker/
## Environment variables
The package uses the following variables:
- analysis_type: specifies the type of analysis to do.
  - Possible values:
    - DATA_QUALITY
    - MODEL_QUALITY
  - Required: Yes.
- baseline_statistics: specifies the container path to the baseline statistics file.
  - Required: only if you want to evaluate statistics. Not required when suggesting a baseline.
- baseline_constraints: specifies the container path to the baseline constraints file.
  - Required: only if you want to evaluate statistics. Not required when suggesting a baseline.

Model Quality specific environment variables:

- config_path: specifies the container path to the configuration file.
  - Required: only for Model Quality metrics. You need to specify the metric(s) to use, as well as any required parameters.
- problem_type: problem type for the analysis.
  - Required: Yes.
  - Possible values:
    - BinaryClassification
    - Regression
    - MulticlassClassification
- To specify that this is a Data Quality analysis:
```
ENV analysis_type=DATA_QUALITY
```
- To specify that this is a Model Quality analysis:
```
ENV analysis_type=MODEL_QUALITY
```
- If you want to evaluate statistics, you also need to provide the location of the statistics and constraints files inside the container
(see the sketch below). If these files are not provided, the package will suggest a baseline instead.
```
ENV baseline_statistics=/opt/ml/processing/baseline/statistics/community_statistics.json
ENV baseline_constraints=/opt/ml/processing/baseline/constraints/community_constraints.json
```
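The sketch below illustrates that rule; it is not the package's actual detection code, only the behavior described above expressed with the environment variable names from this section.
```
import os

# Illustrative sketch: baseline files provided -> evaluate them ("run monitor");
# otherwise suggest a new baseline. Variable names are the ones documented above.
def detect_operation() -> str:
    has_baseline = bool(os.environ.get("baseline_statistics")) and bool(
        os.environ.get("baseline_constraints")
    )
    return "run_monitor" if has_baseline else "suggest_baseline"


analysis_type = os.environ.get("analysis_type")  # DATA_QUALITY or MODEL_QUALITY
```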
### Model Quality specific environment variables
For Model Quality, `config_path` is also required. It specifies the location of the "parameters" file within the container:
```
ENV config_path=/opt/ml/processing/input/parameters
```
Depending on the metrics to use, the following variables might also be needed:
```
ENV problem_type=<problem type>
ENV ground_truth_attribute=<ground truth attribute column>
ENV inference_attribute=<inference attribute column>
```
#### Model Quality parameters file
Only the metrics specified in the "parameters" file will be evaluated in a Model Quality job. The parameters file is structured as a map,
with the top-level representing the metric names to use. For example:
```
{
    "pr_auc": {
        "threshold_override": 55
    }
}
```
would mean that the job will only evaluate the "pr_auc" metric, passing the parameter "threshold_override" with value 55.
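Conceptually, consuming this file is a simple iteration over the map. The sketch below is not the package's actual loader; it assumes `config_path` points at a directory containing a file named parameters.json.
```
import json
import os

# Illustrative sketch only: read the parameters map and iterate over the
# requested metrics. Whether config_path is a directory or the file itself
# is an assumption here.
config_dir = os.environ.get("config_path", "/opt/ml/processing/input/parameters")
with open(os.path.join(config_dir, "parameters.json")) as f:
    metric_config = json.load(f)

for metric_name, params in metric_config.items():
    print(f"Would evaluate {metric_name} with parameters {params}")
```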
## Providing input files
The container also needs certain files to do the analysis. You can put your files in the "local_resources" directory. Once the files are
present, you need to add the following statements to the Dockerfile to have them copied over to the container:
- Copy the input data file. Input data can be split across multiple files if needed:
```
COPY local_resources/data_quality/input.csv /opt/ml/processing/input/data
```
- Copy statistics and constraints files, if needed:
```
COPY local_resources/model_quality/community_constraints.json /opt/ml/processing/baseline/constraints
COPY local_resources/model_quality/community_statistics.json /opt/ml/processing/baseline/statistics
```
- Copy "parameters" file, if needed (only needed for Model Monitoring metrics):
```
COPY local_resources/model_quality/binary_classification/custom_metric/parameters.json /opt/ml/processing/input/parameters
```
## Running the container locally
Add the required parameters to the Dockerfile in the section specified. It should look something like:
```
##### Parameters for running locally should be put here: #####################################
ENV analysis_type=DATA_QUALITY
ENV baseline_statistics=/opt/ml/processing/baseline/statistics/community_statistics.json
ENV baseline_constraints=/opt/ml/processing/baseline/constraints/community_constraints.json
COPY local_resources/data_quality/input.csv /opt/ml/processing/input/data
COPY local_resources/data_quality/community_constraints.json /opt/ml/processing/baseline/constraints
COPY local_resources/data_quality/community_statistics.json /opt/ml/processing/baseline/statistics
##### End of Parameters for running locally ###########################################################################################
```
You can now execute the container by using the Shell script "run_local.sh":
```
./run_local.sh
```
You should see the output of your container in the terminal:
```
Executing entry point:
---------------- BEGINNING OF CONTAINER EXECUTION ----------------------
Starting Monitoring Custom Metrics
Retrieving data from path: /opt/ml/processing/input/data
Reading data from file: /opt/ml/processing/input/data
Finished retrieving data from path: /opt/ml/processing/input/data
Determining operation to run based on provided parameters ...
Determining monitor type ...
Monitor type detected based on 'analysis_type' environment variable
Operation type: OperationType.run_monitor
Monitor type: MonitorType.DATA_QUALITY
<class 'pandas.core.frame.DataFrame'>
...
```
The output files will be available in the "local_output" folder after the execution.
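If you ran in run-monitor mode, you can quickly inspect the violations from Python. This is just a convenience sketch; it assumes the output file sits directly under "local_output" and follows the SageMaker constraint violations format linked above.
```
import json

# Hypothetical quick check of a run-monitor result.
with open("local_output/community_constraint_violations.json") as f:
    report = json.load(f)

for violation in report.get("violations", []):
    print(violation)
```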
# Running the package in SageMaker
To use `MonitoringCustomMetrics` in a SageMaker Processing Job, you will need to:
- Configure AWS CLI.
- Containerize the code using Docker.
- Create an ECR Repo for MonitoringCustomMetrics in your AWS account.
- Create an IAM Role with Trust Relationship with SageMaker.
- Create an S3 bucket that will contain the input and output files.
- Start a SageMaker Processing Job.
### Configure AWS CLI
You will need to set up your AWS CLI. Choose the authentication method that best suits you:
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html#getting-started-prereqs-keys
### Containerize the code using Docker
You can build the container using the following command:
```
docker build . --load
```
We need to identify the IMAGE_ID of our container. We can do so by running:
```
docker images
```
From the list, we should grab the most recent IMAGE_ID. We will use it in the next steps.
### Create an ECR Repo for MonitoringCustomMetrics in your AWS account
We need an ECR Repo where the container images will be uploaded.
```
aws ecr create-repository --repository-name <Repository Name> --region <AWS Region> --image-tag-mutability MUTABLE
```
Log in to ECR with AWS CLI:
```
aws ecr get-login-password --region <AWS Region> | docker login --username AWS --password-stdin <AWS Account ID>.dkr.ecr.<AWS Region>.amazonaws.com
```
Then we need to tag the image and push it to ECR. The "image tag" will be used to identify the container. You can use "MonitoringCustomMetrics" or any other name you prefer:
```
docker tag <Image Id> <AWS Account ID>.dkr.ecr.<AWS Region>.amazonaws.com/<Repository Name>:<Image Tag>
docker push <AWS Account ID>.dkr.ecr.<AWS Region>.amazonaws.com/<Repository Name>:<Image Tag>
```
### Create an IAM Role with Trust Relationship with SageMaker
We need an IAM Role that has a trust relationship with SageMaker. We can create a trust policy file with this content:
trust-policy.json
```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "sagemaker.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```
Then we can create the role and attach the policy:
```
aws iam create-role --role-name <Role Name> --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess --role-name <Role Name>
```
### Create an S3 bucket that will contain the input and output files
You can create a new S3 bucket through AWS Console or through AWS CLI.
```
aws s3api create-bucket --bucket <S3 Bucket Name> --create-bucket-configuration LocationConstraint=<AWS Region>
```
You can now create folders inside the bucket, and upload the necessary files to each:
- input
- output
- baseline
Now we need to update the bucket policy, so that the IAM Role we created can read/write to the bucket:
bucket-policy.json
```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AWS Account ID>:role/<Role Name>"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::<S3 Bucket Name>/*"
        }
    ]
}
```
Run the command to update the policy:
```
aws s3api put-bucket-policy --bucket <S3 Bucket Name> --policy file://bucket-policy.json
```
### Start a SageMaker Processing Job
Once all the required resources have been created in your AWS account, you can launch a Processing Job with the following command:
```
aws sagemaker create-processing-job \
    --processing-job-name <Name of the processing job> \
    --app-specification ImageUri="<ECR Image URI>",ContainerEntrypoint="python","./src/monitoring_custom_metrics/main.py" \
    --processing-resources 'ClusterConfig={InstanceCount=1,InstanceType="<Instance type to use>",VolumeSizeInGB=5}' \
    --role-arn <ARN of the IAM Role we created> \
    --environment analysis_type=DATA_QUALITY \
    --processing-inputs='[{"InputName": "dataInput", "S3Input": {"S3Uri": "<S3 Path to your input location>","LocalPath":"/opt/ml/processing/input/data","S3InputMode":"File", "S3DataType":"S3Prefix"}}]' \
    --processing-output-config 'Outputs=[{OutputName="report",S3Output={S3Uri="<S3 Path to your output location>",LocalPath="/opt/ml/processing/output",S3UploadMode="Continuous"}}]'
```
You should get a response from AWS CLI similar to:
```
{
    "ProcessingJobArn": "arn:aws:sagemaker:<AWS Region>:<AWS Account ID>:processing-job/<Name of the processing job>"
}
```
You can also see the SageMaker Processing Job in AWS Console. Once the job finishes, you will find the result files in the
output location you specified in the "processing-output-config" parameter.
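If you prefer to start the job from Python rather than the AWS CLI, the boto3 call below is a rough equivalent of the command above. The placeholder strings mirror the CLI example and are not values from this repository.
```
import boto3

sagemaker = boto3.client("sagemaker")

# Sketch of the same Processing Job definition as the CLI command above;
# replace every <placeholder> with your own resources.
sagemaker.create_processing_job(
    ProcessingJobName="<Name of the processing job>",
    RoleArn="<ARN of the IAM Role we created>",
    AppSpecification={
        "ImageUri": "<ECR Image URI>",
        "ContainerEntrypoint": ["python", "./src/monitoring_custom_metrics/main.py"],
    },
    ProcessingResources={
        "ClusterConfig": {
            "InstanceCount": 1,
            "InstanceType": "<Instance type to use>",
            "VolumeSizeInGB": 5,
        }
    },
    Environment={"analysis_type": "DATA_QUALITY"},
    ProcessingInputs=[
        {
            "InputName": "dataInput",
            "S3Input": {
                "S3Uri": "<S3 Path to your input location>",
                "LocalPath": "/opt/ml/processing/input/data",
                "S3InputMode": "File",
                "S3DataType": "S3Prefix",
            },
        }
    ],
    ProcessingOutputConfig={
        "Outputs": [
            {
                "OutputName": "report",
                "S3Output": {
                    "S3Uri": "<S3 Path to your output location>",
                    "LocalPath": "/opt/ml/processing/output",
                    "S3UploadMode": "Continuous",
                },
            }
        ]
    },
)
```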
# Available metrics
## Data Quality
|Metric name|Description|Data type|
|---|---|---|
|sum|Example metric that sums up an entire column's data|Numerical|
|email|Example metric to verify that a field is not an email|String|
## Model Quality
|Metric name|Description|Output data type| Parameters|
|---|---|---|---|
|brier_score_loss| The Brier score measures the mean squared difference between the predicted probability and the actual outcome. Reference: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.brier_score_loss.html|Numerical|<ul><li>ground_truth_attribute: [required] str. Model target attribute.</li><li>probability_attribute: [required] str. Model inference attribute</li><li>threshold_override:[optional] float. Set constraint as baseline value + threshold_override.</li></ul>|
|gini|GINI is a model performance metric commonly used in Credit Science. It measures the ranking power of a model and it ranges from 0 to 1: 0 means no ranking power while 1 means perfect ranking power|Numerical|<ul><li>ground_truth_attribute: [required] str. Model target attribute.</li><li>probability_attribute: [required] str. Model inference attribute.</li><li>threshold_override:[optional] float. Set constraint as baseline value + threshold_override.</li></ul>|
|pr_auc|PR AUC is the area under precision-recall curve. Reference: https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html|Numerical|<ul><li>ground_truth_attribute: [required] str. Model target attribute.</li><li>probability_attribute: [required] str. Model inference attribute.</li><li>threshold_override:[optional] float. Set constraint as baseline value + threshold_override.</li></ul>|
|score_diff|Score difference measures the absolute/relative difference between predicted probability and the actual outcome.|Numerical|<ul><li>ground_truth_attribute: [required] str. Model target attribute.</li><li>probability_attribute: [required] str. Model inference attribute.</li><li>comparison_type: [optional] str. "absolute" to calculate absolute difference and "relative" to calculate relative difference. Default value is "absolute".</li><li>two_sided: [optional] bool. Default value is False: <ul> <li>two_sided = True will set the constraint and violation policy by the absolute value of the score difference to enable the detection of both under-prediction and over-prediction at the same time. The absolute value of score difference will be returned.</li> <li>two_sided = False will set the constraint and violation policy by the original value of the score difference.</li> </ul></li><li>comparison_operator: [optional] str. configure comparison_operator when two_sided is set as False. "GreaterThanThreshold" to detect over-prediction and "LessThanThreshold" to detect under-prediction.</li><li>threshold_override:[optional] float. Set constraint as baseline value + threshold_override.</li></ul>|
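As a worked example of how this table maps onto the "parameters" file described earlier, the snippet below writes a configuration requesting two of these metrics. The column names and threshold values are arbitrary illustrations, not values from this repository.
```
import json

# Arbitrary illustration: request two metrics using parameter names from the table above.
parameters = {
    "gini": {
        "ground_truth_attribute": "label",   # hypothetical target column
        "probability_attribute": "score",    # hypothetical inference column
        "threshold_override": 0.05,
    },
    "score_diff": {
        "ground_truth_attribute": "label",
        "probability_attribute": "score",
        "comparison_type": "absolute",
        "two_sided": False,
        "comparison_operator": "GreaterThanThreshold",
    },
}

with open("parameters.json", "w") as f:
    json.dump(parameters, f, indent=2)
```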
# How to implement additional metrics
Each metric is defined in its own class file. The file must be created in the right folder, based on the metric type:
- data_quality
  - numerical
  - string
- model_quality
  - binary_classification
  - multiclass_classification
  - regression
## Unit tests
Metrics must also have a unit test file in the "test" folder, following the same structure.
## Metric class conventions
- A metric must inherit from an Abstract Base Class (ABC) called "ModelQualityMetric".
- The class must include the following methods:
  - calculate_statistics
  - suggest_constraints
  - evaluate_constraints
- At the end of the file, expose a variable called "instance", which is an instance of the class itself.
Please refer to the existing metrics for additional details.
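For orientation, a new Model Quality metric might follow the skeleton below. The import path and method signatures are assumptions made for illustration; the existing metric classes in the repository are the authoritative reference.
```
# NOTE: import path and signatures are hypothetical; mirror an existing metric class instead.
from model_quality_metric import ModelQualityMetric


class MyCustomMetric(ModelQualityMetric):
    def calculate_statistics(self, df, params):
        # Compute the raw statistic(s) for this metric from the input DataFrame.
        raise NotImplementedError

    def suggest_constraints(self, df, params):
        # Derive a baseline constraint (e.g. a threshold) from the computed statistic.
        raise NotImplementedError

    def evaluate_constraints(self, df, params, constraints):
        # Compare the current statistic against the baseline constraint and
        # report a violation when the constraint is breached.
        raise NotImplementedError


# The file must expose an instance of the class.
instance = MyCustomMetric()
```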