tfma.types.EvalSharedModel
Shared model used during extraction and evaluation.
tfma.types.EvalSharedModel(
    model_path: Optional[str] = None,
    add_metrics_callbacks: Optional[List[AddMetricsCallbackType]] = None,
    include_default_metrics: Optional[bool] = True,
    example_weight_key: Optional[Union[str, Dict[str, str]]] = None,
    additional_fetches: Optional[List[str]] = None,
    model_loader: Optional[tfma.types.ModelLoader] = None,
    model_name: str = '',
    model_type: str = '',
    rubber_stamp: bool = False,
    is_baseline: bool = False,
    resource_hints: Optional[Dict[str, Any]] = None,
    backend_config: Optional[Any] = None,
    construct_fn: Optional[Callable[[], Any]] = None
)
Used in the notebooks: FaceSSD Fairness Indicators Example Colab, Wiki Talk Comments Toxicity Prediction.
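An EvalSharedModel is usually obtained from the tfma.default_eval_shared_model helper rather than constructed directly. A minimal sketch, assuming a placeholder path to an exported model:

import tensorflow_model_analysis as tfma

# '/path/to/eval_saved_model' is a placeholder, not a value from this page.
eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/path/to/eval_saved_model',
    include_default_metrics=True)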
More details on add_metrics_callbacks:
Each add_metrics_callback should have the following prototype:
def add_metrics_callback(features_dict, predictions_dict, labels_dict):
Note that features_dict, predictions_dict and labels_dict are not
necessarily dictionaries - they might also be Tensors, depending on what the
model's eval_input_receiver_fn returns.
It should create and return a metric_ops dictionary, such that
metric_ops['metric_name'] = (value_op, update_op), just as in the Trainer.
Short example:
def add_metrics_callback(features_dict, predictions_dict, labels_dict):
  # Build a dictionary of (value_op, update_op) pairs, as in the Trainer.
  metric_ops = {}
  metric_ops['mean_label'] = tf.metrics.mean(labels_dict)
  metric_ops['mean_probability'] = tf.metrics.mean(tf.slice(
      predictions_dict['probabilities'], [0, 1], [2, 1]))
  return metric_ops
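A callback like the one above is attached through add_metrics_callbacks. A minimal sketch, assuming the same placeholder model path:

import tensorflow_model_analysis as tfma

# Hypothetical path; add_metrics_callback is the function defined above.
eval_shared_model = tfma.types.EvalSharedModel(
    model_path='/path/to/eval_saved_model',
    add_metrics_callbacks=[add_metrics_callback])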
Attributes

model_path
  Path to the EvalSavedModel (containing the saved_model.pb file).

add_metrics_callbacks
  Optional list of callbacks for adding additional metrics to the graph. The
  names of the metrics added by the callbacks should not conflict with
  existing metrics. See above for more details about what each callback
  should do. The callbacks are only used during evaluation.

include_default_metrics
  True to include the default metrics that are part of the saved model graph
  during evaluation.

example_weight_key
  Example weight key (single-output model) or dict of example weight keys
  (multi-output model) keyed by output_name.

additional_fetches
  Prefixes of additional tensors stored in signature_def.inputs that should
  be fetched at prediction time. The "features" and "labels" tensors are
  handled automatically and should not be included in this list.

model_loader
  Model loader.

model_name
  Model name (should align with ModelSpecs.name).

model_type
  Model type (tfma.TF_KERAS, tfma.TF_LITE, tfma.TF_ESTIMATOR, etc.).

rubber_stamp
  True if this model is being rubber stamped. When a model is rubber stamped,
  diff thresholds will be ignored if an associated baseline model is not
  passed.

is_baseline
  Whether this model is the baseline for comparison.

resource_hints
  The Beam resource hints to apply to the PTransform that runs inference for
  this model.

backend_config
  The backend config for running model inference.
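For model comparison, a candidate and a baseline are typically passed as separate EvalSharedModel instances, distinguished by model_name and is_baseline. A minimal sketch; the names and paths below are assumptions for illustration:

import tensorflow_model_analysis as tfma

# Hypothetical candidate/baseline pair; each model_name should align with the
# corresponding ModelSpec.name in the EvalConfig.
candidate_model = tfma.types.EvalSharedModel(
    model_path='/models/candidate',
    model_name='candidate',
    model_type=tfma.TF_KERAS)
baseline_model = tfma.types.EvalSharedModel(
    model_path='/models/baseline',
    model_name='baseline',
    model_type=tfma.TF_KERAS,
    is_baseline=True)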