merlin package¶
- 
merlin.set_url(url: str, use_google_oauth: bool = True)[source]¶
- Set Merlin URL - Parameters: - url – Merlin URL 
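Example (a minimal sketch; the URL below is a placeholder for your Merlin deployment):

    import merlin

    # All subsequent SDK calls are directed at this server.
    merlin.set_url("http://merlin.example.com")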
- 
merlin.get_url() → Optional[str][source]¶
- Get currently active Merlin URL - Returns: - Merlin URL if set, otherwise None 
- 
merlin.list_project() → List[merlin.model.Project][source]¶
- List all projects in MLP - Returns: - list of Project 
- 
merlin.set_project(project_name: str)[source]¶
- Set active project - Parameters: - project_name – project name. If the project does not exist, it will be created. 
- 
merlin.active_project() → Optional[merlin.model.Project][source]¶
- Get current active project - Returns: - active project 
- 
merlin.list_environment() → List[merlin.environment.Environment][source]¶
- List all available environments for deployment - Returns: - List[Environment] 
- 
merlin.get_environment(env_name: str) → merlin.environment.Environment[source]¶
- Get environment for given env name - Returns: - Environment or None 
- 
merlin.get_default_environment() → Optional[merlin.environment.Environment][source]¶
- Get default environment - Returns: - Environment or None 
- 
merlin.set_model(model_name, model_type: merlin.model.ModelType = None)[source]¶
- Set active model - Parameters: - model_name – model name to be set as the active model. If the model does not exist, it will be created.
- model_type – type of the model
 - Returns: 
- 
merlin.new_model_version()[source]¶
- Create new model version under currently active model - Returns: - ModelVersion 
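A typical fluent workflow chains the calls above; the URL, project, and model names below are placeholders:

    import merlin
    from merlin.model import ModelType

    merlin.set_url("http://merlin.example.com")
    merlin.set_project("sample-project")             # created if it does not exist
    merlin.set_model("my-model", ModelType.SKLEARN)  # created if it does not exist

    # New versions are created under the active model.
    version = merlin.new_model_version()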
- 
merlin.log_param(key: str, value: str)[source]¶
- Log parameter to the active model version - Parameters: - key – parameter key
- value – parameter value
 
- 
merlin.log_metric(key: str, value: float)[source]¶
- Log a metric to the active model version - Parameters: - key – metric key
- value – metric value
 
- 
merlin.set_tag(key: str, value: str)[source]¶
- Set tag in the active model version - Parameters: - key – tag name
- value – tag value
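Parameters, metrics, and tags are all recorded against the active model version. A combined sketch (the keys and values are illustrative):

    import merlin

    # Assumes an active model version, e.g. created via merlin.new_model_version().
    merlin.log_param("learning_rate", "0.05")
    merlin.log_metric("accuracy", 0.92)
    merlin.set_tag("stage", "staging")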
 
- 
merlin.delete_tag(key: str)[source]¶
- Delete tag from the active model version - Parameters: - key – tag name 
- 
merlin.log_artifact(local_path: str, artifact_path: str = None)[source]¶
- Log artifacts for the active model version - Parameters: - local_path – directory to be uploaded into artifact store
- artifact_path – destination directory in artifact store
 
- 
merlin.log_pyfunc_model(model_instance: Any, conda_env: str, code_dir: List[str] = None, artifacts: Dict[str, str] = None)[source]¶
- Upload PyFunc based model into artifact storage. - User has to specify model_instance and conda_env. model_instance shall implement all methods specified in PyFuncModel. conda_env shall contain all dependencies required by the model. - Parameters: - model_instance – instance of python function model
- conda_env – path to conda env.yaml file
- code_dir – additional code directory that will be uploaded and loaded with the ModelType.PYFUNC model
- artifacts – dictionary of artifacts that will be stored together with the model. This will be passed to PythonModel.initialize. Example: {"config": "config/staging.yaml"}
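A minimal PyFunc sketch, assuming an active model version and an env.yaml file listing the model's dependencies (both placeholders):

    import merlin
    from merlin.model import PyFuncModel

    class EchoModel(PyFuncModel):
        def initialize(self, artifacts: dict):
            # Called once at start-up; artifacts maps names to local paths.
            self.artifacts = artifacts

        def infer(self, request: dict, **kwargs) -> dict:
            # Echo the incoming instances back as predictions.
            return {"predictions": request.get("instances", [])}

    merlin.log_pyfunc_model(
        model_instance=EchoModel(),
        conda_env="env.yaml",  # placeholder path to the conda environment file
    )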
 
- 
merlin.log_pytorch_model(model_dir: str, model_class_name: str = None)[source]¶
- Upload PyTorch model to artifact storage. - Parameters: - model_dir – directory containing serialized PyTorch model
- model_class_name – class name of PyTorch model. By default the model class name is 'PyTorchModel'
 
- 
merlin.log_model(model_dir)[source]¶
- Upload model to artifact storage. This method is used to upload xgboost, tensorflow, and sklearn models. - Parameters: - model_dir – directory which contains the serialized model 
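For the standard model types only the serialized files need to be uploaded. A sketch for a scikit-learn model (the directory name is a placeholder, and the expected file layout may vary by model type):

    import os
    import joblib
    import merlin
    from sklearn.linear_model import LogisticRegression

    model = LogisticRegression().fit([[0.0], [1.0]], [0, 1])

    # Serialize the model into a directory, then upload that directory.
    os.makedirs("model", exist_ok=True)
    joblib.dump(model, "model/model.joblib")
    merlin.log_model(model_dir="model")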
- 
merlin.deploy(model_version: merlin.model.ModelVersion = None, environment_name: str = None, resource_request: merlin.resource_request.ResourceRequest = None, env_vars: Dict[str, str] = None) → merlin.endpoint.VersionEndpoint[source]¶
- Deploy a model version. - Parameters: - model_version – model version to be deployed. If not given, the active model version will be deployed.
- environment_name – target environment to which the model version will be deployed. If left empty it will deploy to the default environment.
- resource_request – the resource requirement and replicas requests for the model version endpoint.
- env_vars – dictionary of environment variables to be passed to the model server.
 - Returns: - VersionEndpoint 
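Deploying the active model version with an explicit resource request might look as follows (the figures are illustrative, and the returned VersionEndpoint is assumed to expose a url attribute like ModelEndpoint does):

    import merlin
    from merlin import ResourceRequest

    resources = ResourceRequest(
        min_replica=1, max_replica=2,
        cpu_request="500m", memory_request="512Mi",
    )
    endpoint = merlin.deploy(resource_request=resources)
    print(endpoint.url)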
- 
merlin.undeploy(model_version=None, environment_name: str = None)[source]¶
- Delete deployment of a model version. - Parameters: - model_version – model version to be undeployed. If not given, the active model version will be undeployed. 
- 
merlin.set_traffic(traffic_rule: Dict[merlin.model.ModelVersion, int]) → merlin.endpoint.ModelEndpoint[source]¶
- Update traffic rule of the active model. - Parameters: - traffic_rule – dict of model version and the percentage of traffic. - Returns: - ModelEndpoint 
- 
merlin.serve_traffic(traffic_rule: Dict[VersionEndpoint, int], environment_name: str = None) → merlin.endpoint.ModelEndpoint[source]¶
- Update traffic rule of the active model. - Parameters: - traffic_rule – dict of version endpoint and the percentage of traffic.
- environment_name – environment in which the traffic rule shall be applied
 - Returns: - ModelEndpoint 
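A sketch that routes all of the active model's traffic to a freshly deployed version endpoint:

    import merlin

    endpoint = merlin.deploy()  # deploys the active model version

    # Route 100% of the model's traffic to the new endpoint.
    model_endpoint = merlin.serve_traffic({endpoint: 100})
    print(model_endpoint.url)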
- 
class merlin.ResourceRequest(min_replica: int, max_replica: int, cpu_request: str, memory_request: str)[source]¶
- Bases: object - The resource requirement and replicas requests for model version endpoint. - 
cpu_request¶
 - 
max_replica¶
 - 
memory_request¶
 - 
min_replica¶
 
- 
- 
merlin.create_prediction_job(job_config: merlin.batch.config.PredictionJobConfig, sync: bool = True) → client.models.prediction_job.PredictionJob[source]¶
- Create and run a prediction job with the given config using the active model version. - Parameters: - job_config – prediction job config
- sync – boolean to set synchronicity of the job. The default is True.
 - Returns: - PredictionJob 
Subpackages¶
Submodules¶
merlin.client module¶
- 
class merlin.client.MerlinClient(merlin_url: str, use_google_oauth: bool = True)[source]¶
- Bases: object - 
deploy(model_version: merlin.model.ModelVersion, environment_name: str = None, resource_request: merlin.resource_request.ResourceRequest = None, env_vars: Dict[str, str] = None) → merlin.endpoint.VersionEndpoint[source]¶
 - 
get_default_environment() → Optional[merlin.environment.Environment][source]¶
- Return default environment - Returns: - Environment or None 
 - 
get_environment(env_name: str) → Optional[merlin.environment.Environment][source]¶
- Get environment for given env name - Returns: - Environment or None 
 - 
get_model(model_name: str, project_name: str) → Optional[merlin.model.Model][source]¶
- Get model with given name - Parameters: - model_name – model name to be retrieved
- project_name – project name
 - Returns: - Model or None 
 - 
get_or_create_model(model_name: str, project_name: str, model_type: merlin.model.ModelType = None) → merlin.model.Model[source]¶
- Get or create a model under a project - If project_name is not given, the currently active project will be used; if no project is active, an Exception will be raised. - Parameters: - model_name – model name
- project_name – project name (optional)
- model_type – model type, mandatory when creation is needed
 - Returns: - Model 
 - 
get_project(project_name: str) → merlin.model.Project[source]¶
- Get a project in Merlin and optionally assign a list of readers and administrators. The identity used for creating the project will be automatically included as one of the project's administrators. - Parameters: - project_name – project name - Returns: - project 
 - 
list_environment() → List[merlin.environment.Environment][source]¶
- List all available environments for deployment - Returns: - list of Environment 
 - 
list_project() → List[merlin.model.Project][source]¶
- List all projects in the connected MLP server - Returns: - list of Project 
 - 
new_model_version(model_name: str, project_name: str) → merlin.model.ModelVersion[source]¶
- Create a new model version for the given model and project - Parameters: - model_name – model name
- project_name – project name
 - Returns: - ModelVersion 
 - 
url¶
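The module-level functions above mirror MerlinClient's methods, so the client can also be used directly (the URL and names are placeholders):

    from merlin.client import MerlinClient

    client = MerlinClient("http://merlin.example.com")
    project = client.get_project("sample-project")
    model = client.get_or_create_model("my-model", "sample-project")
    version = client.new_model_version("my-model", "sample-project")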
 
- 
merlin.endpoint module¶
- 
class merlin.endpoint.ModelEndpoint(endpoint: client.models.model_endpoint.ModelEndpoint)[source]¶
- Bases: object - 
environment¶
 - 
environment_name¶
 - 
id¶
 - 
status¶
 - 
url¶
 
- 
merlin.environment module¶
merlin.fluent module¶
- 
merlin.fluent.active_model() → Optional[merlin.model.Model][source]¶
- Get active model - Returns: - active model 
- 
merlin.fluent.active_project() → Optional[merlin.model.Project][source]¶
- Get current active project - Returns: - active project 
- 
merlin.fluent.create_prediction_job(job_config: merlin.batch.config.PredictionJobConfig, sync: bool = True) → client.models.prediction_job.PredictionJob[source]¶
- Create and run a prediction job with the given config using the active model version. - Parameters: - job_config – prediction job config
- sync – boolean to set synchronicity of the job. The default is True.
 - Returns: - PredictionJob 
- 
merlin.fluent.delete_tag(key: str)[source]¶
- Delete tag from the active model version - Parameters: - key – tag name 
- 
merlin.fluent.deploy(model_version: merlin.model.ModelVersion = None, environment_name: str = None, resource_request: merlin.resource_request.ResourceRequest = None, env_vars: Dict[str, str] = None) → merlin.endpoint.VersionEndpoint[source]¶
- Deploy a model version. - Parameters: - model_version – model version to be deployed. If not given, the active model version will be deployed.
- environment_name – target environment to which the model version will be deployed. If left empty it will deploy to the default environment.
- resource_request – the resource requirement and replicas requests for the model version endpoint.
- env_vars – dictionary of environment variables to be passed to the model server.
 - Returns: - VersionEndpoint 
- 
merlin.fluent.download_artifact(destination_path: str)[source]¶
- Download artifact from the active model version - Parameters: - destination_path – destination of file when downloaded 
- 
merlin.fluent.get_default_environment() → Optional[merlin.environment.Environment][source]¶
- Get default environment - Returns: - Environment or None 
- 
merlin.fluent.get_environment(env_name: str) → merlin.environment.Environment[source]¶
- Get environment for given env name - Returns: - Environment or None 
- 
merlin.fluent.get_metric(key: str) → Optional[float][source]¶
- Get metric value from the active model version - Parameters: - key – metric name - Returns: - metric value if set, otherwise None 
- 
merlin.fluent.get_param(key: str) → Optional[str][source]¶
- Get param value from the active model version - Parameters: - key – param name - Returns: - param value if set, otherwise None 
- 
merlin.fluent.get_tag(key: str) → Optional[str][source]¶
- Get tag value from the active model version - Parameters: - key – tag name - Returns: - tag value if set, otherwise None 
- 
merlin.fluent.get_url() → Optional[str][source]¶
- Get currently active Merlin URL - Returns: - Merlin URL if set, otherwise None 
- 
merlin.fluent.list_environment() → List[merlin.environment.Environment][source]¶
- List all available environments for deployment - Returns: - List[Environment] 
- 
merlin.fluent.list_model_endpoints() → List[merlin.endpoint.ModelEndpoint][source]¶
- Get list of all serving model endpoints. - Returns: - List of model endpoints. 
- 
merlin.fluent.list_project() → List[merlin.model.Project][source]¶
- List all projects in MLP - Returns: - list of Project 
- 
merlin.fluent.log_artifact(local_path: str, artifact_path: str = None)[source]¶
- Log artifacts for the active model version - Parameters: - local_path – directory to be uploaded into artifact store
- artifact_path – destination directory in artifact store
 
- 
merlin.fluent.log_metric(key: str, value: float)[source]¶
- Log a metric to the active model version - Parameters: - key – metric key
- value – metric value
 
- 
merlin.fluent.log_model(model_dir)[source]¶
- Upload model to artifact storage. This method is used to upload xgboost, tensorflow, and sklearn models. - Parameters: - model_dir – directory which contains the serialized model 
- 
merlin.fluent.log_param(key: str, value: str)[source]¶
- Log parameter to the active model version - Parameters: - key – parameter key
- value – parameter value
 
- 
merlin.fluent.log_pyfunc_model(model_instance: Any, conda_env: str, code_dir: List[str] = None, artifacts: Dict[str, str] = None)[source]¶
- Upload PyFunc based model into artifact storage. - User has to specify model_instance and conda_env. model_instance shall implement all methods specified in PyFuncModel. conda_env shall contain all dependencies required by the model. - Parameters: - model_instance – instance of python function model
- conda_env – path to conda env.yaml file
- code_dir – additional code directory that will be uploaded and loaded with the ModelType.PYFUNC model
- artifacts – dictionary of artifacts that will be stored together with the model. This will be passed to PythonModel.initialize. Example: {"config": "config/staging.yaml"}
 
- 
merlin.fluent.log_pytorch_model(model_dir: str, model_class_name: str = None)[source]¶
- Upload PyTorch model to artifact storage. - Parameters: - model_dir – directory containing serialized PyTorch model
- model_class_name – class name of PyTorch model. By default the model class name is 'PyTorchModel'
 
- 
merlin.fluent.new_model_version()[source]¶
- Create new model version under currently active model - Returns: - ModelVersion 
- 
merlin.fluent.serve_traffic(traffic_rule: Dict[VersionEndpoint, int], environment_name: str = None) → merlin.endpoint.ModelEndpoint[source]¶
- Update traffic rule of the active model. - Parameters: - traffic_rule – dict of version endpoint and the percentage of traffic.
- environment_name – environment in which the traffic rule shall be applied
 - Returns: - ModelEndpoint 
- 
merlin.fluent.set_model(model_name, model_type: merlin.model.ModelType = None)[source]¶
- Set active model - Parameters: - model_name – model name to be set as the active model. If the model does not exist, it will be created.
- model_type – type of the model
 - Returns: 
- 
merlin.fluent.set_project(project_name: str)[source]¶
- Set active project - Parameters: - project_name – project name. If the project does not exist, it will be created. 
- 
merlin.fluent.set_tag(key: str, value: str)[source]¶
- Set tag in the active model version - Parameters: - key – tag name
- value – tag value
 
- 
merlin.fluent.set_traffic(traffic_rule: Dict[merlin.model.ModelVersion, int]) → merlin.endpoint.ModelEndpoint[source]¶
- Update traffic rule of the active model. - Parameters: - traffic_rule – dict of model version and the percentage of traffic. - Returns: - ModelEndpoint 
- 
merlin.fluent.set_url(url: str, use_google_oauth: bool = True)[source]¶
- Set Merlin URL - Parameters: - url – Merlin URL 
merlin.merlin module¶
merlin.model module¶
- 
class merlin.model.Model(model: client.models.model.Model, project: merlin.model.Project, api_client: client.api_client.ApiClient)[source]¶
- Bases: object - Model representation - 
created_at¶
 - 
endpoint¶
- Get endpoint of this model that is deployed in the default environment - Returns: - Endpoint if it exists, otherwise None 
 - 
get_version(id: int) → Optional[merlin.model.ModelVersion][source]¶
- Get version with specific ID - Parameters: - id – version id to retrieve - Returns: - ModelVersion with the given id, or None if not found 
 - 
id¶
 - 
list_endpoint() → List[merlin.endpoint.ModelEndpoint][source]¶
- List all model endpoints associated with this model - Returns: - List[ModelEndpoint] 
 - 
list_version() → List[merlin.model.ModelVersion][source]¶
- List all versions of the model - Returns: - list of ModelVersion 
 - 
mlflow_experiment_id¶
 - 
mlflow_url¶
 - 
name¶
 - 
new_model_version() → merlin.model.ModelVersion[source]¶
- Create a new version of this model - Returns: - new ModelVersion 
 - 
project¶
 - 
serve_traffic(traffic_rule: Dict[VersionEndpoint, int], environment_name: str = None) → merlin.endpoint.ModelEndpoint[source]¶
- Set traffic rule for this model. - Parameters: - traffic_rule – dict of version endpoint and the percentage of traffic.
- environment_name – target environment in which the model endpoint will be created. If left empty it will create in default environment.
 - Returns: - ModelEndpoint 
 - 
set_traffic(traffic_rule: Dict[ModelVersion, int]) → merlin.endpoint.ModelEndpoint[source]¶
- Set traffic rule for this model. - This method is deprecated, use serve_traffic instead - Parameters: - traffic_rule – dict of model version and the percentage of traffic. - Returns: - ModelEndpoint 
 - 
stop_serving_traffic(environment_name: str = None)[source]¶
- Stop serving traffic for this model in given environment. - Parameters: - environment_name – environment name in which the endpoint should be stopped from serving traffic. If environment_name is empty it will attempt to undeploy the model from default environment. 
 - 
type¶
 - 
updated_at¶
 
- 
- 
class merlin.model.ModelType[source]¶
- Bases: enum.Enum - Model type supported by merlin - 
ONNX = 'onnx'¶
 - 
PYFUNC = 'pyfunc'¶
 - 
PYFUNC_V2 = 'pyfunc_v2'¶
 - 
PYTORCH = 'pytorch'¶
 - 
SKLEARN = 'sklearn'¶
 - 
TENSORFLOW = 'tensorflow'¶
 - 
XGBOOST = 'xgboost'¶
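The enum values are passed to set_model or get_or_create_model when a model may need to be created; for instance (the model name is a placeholder):

    import merlin
    from merlin.model import ModelType

    # model_type is only mandatory when the model does not exist yet.
    merlin.set_model("churn-predictor", ModelType.XGBOOST)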
 
- 
- 
class merlin.model.ModelVersion(version: client.models.version.Version, model: merlin.model.Model, api_client: client.api_client.ApiClient)[source]¶
- Bases: object - Representation of version in a model - 
MODEL_TYPE_TO_IMAGE_MAP = {<ModelType.SKLEARN: 'sklearn'>: 'gcr.io/kfserving/sklearnserver:0.2.2', <ModelType.TENSORFLOW: 'tensorflow'>: 'tensorflow/serving:1.14.0', <ModelType.XGBOOST: 'xgboost'>: 'gcr.io/kfserving/xgbserver:0.2.2', <ModelType.PYTORCH: 'pytorch'>: 'gcr.io/kfserving/pytorchserver:0.2.2'}¶
 - 
artifact_uri¶
 - 
create_prediction_job(job_config: merlin.batch.config.PredictionJobConfig, sync: bool = True) → merlin.batch.job.PredictionJob[source]¶
- Create and run prediction job with given config using this model version - Parameters: - sync – boolean to set synchronicity of job. The default is set to True.
- job_config – prediction job config
 - Returns: - prediction job 
 - 
created_at¶
 - 
deploy(environment_name: str = None, resource_request: merlin.resource_request.ResourceRequest = None, env_vars: Dict[str, str] = None) → merlin.endpoint.VersionEndpoint[source]¶
- Deploy current model to MLP. One of log_model, log_pytorch_model, or log_pyfunc_model has to be called beforehand. - Parameters: - environment_name – target environment to which the model version will be deployed. If left empty it will deploy to the default environment.
- resource_request – the resource requirement and replicas requests for the model version endpoint.
- env_vars – dictionary of environment variables to be passed to the model server.
 - Returns: - Endpoint object 
 - 
download_artifact(destination_path)[source]¶
- Download artifact - Parameters: - destination_path – destination of the file when downloaded 
 - 
endpoint¶
- Return endpoint of this model version that is deployed in default environment - Returns: - Endpoint or None 
 - 
get_metric(key) → Optional[float][source]¶
- Get metric value for the given metric name (key) - Parameters: - key – metric name - Returns: - metric value if set, otherwise None 
 - 
get_param(key) → Optional[str][source]¶
- Get param value for the given param name (key) - Parameters: - key – param name - Returns: - param value if set, otherwise None 
 - 
id¶
 - 
list_endpoint() → List[merlin.endpoint.VersionEndpoint][source]¶
- Return all endpoint deployment for this particular model version - Returns: - List of VersionEndpoint 
 - 
list_prediction_job() → List[merlin.batch.job.PredictionJob][source]¶
- List all prediction job created from the model version - Returns: - list of prediction jobs 
 - 
log_artifact(local_path, artifact_path=None)[source]¶
- Log artifact - Parameters: - local_path – local path of the artifact to be uploaded into the artifact store
- artifact_path – destination directory in the artifact store
 - 
log_artifacts(local_dir, artifact_path=None)[source]¶
- Log artifacts - Parameters: - local_dir – local directory to be uploaded into the artifact store
- artifact_path – destination directory in the artifact store
 - 
log_model(model_dir=None)[source]¶
- Upload model to artifact storage. This method is used to upload xgboost, tensorflow, and sklearn models. - Parameters: - model_dir – directory which contains the serialized model 
 - 
log_pyfunc_model(model_instance, conda_env, code_dir=None, artifacts=None)[source]¶
- Upload PyFunc based model into artifact storage. User has to specify model_instance and conda_env. model_instance shall implement all methods specified in PyFuncModel. conda_env shall contain all dependencies required by the model. - Parameters: - model_instance – instance of python function model
- conda_env – path to conda env.yaml file
- code_dir – additional code directory that will be loaded with ModelType.PYFUNC model
- artifacts – dictionary of artifacts that will be stored together with the model. This will be passed to PythonModel.initialize. Example: {"config": "config/staging.yaml"}
 
 - 
log_pytorch_model(model_dir, model_class_name=None)[source]¶
- Upload PyTorch model to artifact storage. - Parameters: - model_dir – directory containing serialized PyTorch model
- model_class_name – class name of PyTorch model. By default the model class name is 'PyTorchModel'
 
 - 
mlflow_run_id¶
 - 
mlflow_url¶
 - 
model¶
 - 
properties¶
 - 
start_server(env_vars: Dict[str, str] = None, port: int = 8080, pyfunc_base_image: str = None, kill_existing_server: bool = False, tmp_dir: Optional[str] = None, build_image: bool = False)[source]¶
- Start a local server running the model version - Parameters: - env_vars – dictionary of environment variables to be passed to the server
- port – host port that will be used to expose model server
- pyfunc_base_image – (optional, default=None) docker image to be used as base image for building pyfunc model
- kill_existing_server – (optional, default=False) kill the existing server if one has been started previously
- tmp_dir – (optional, default=None) specify base path for storing model artifact
- build_image – (optional, default=False) build image for standard model instead of directly mounting the model artifact to model container
 - Returns: 
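A local smoke test might look like the following sketch (it assumes an active model and a placeholder "model" directory containing a serialized model):

    import merlin

    version = merlin.new_model_version()  # requires an active model
    merlin.log_model(model_dir="model")
    version.start_server(port=8080, kill_existing_server=True)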
 - 
undeploy(environment_name: str = None)[source]¶
- Delete deployment of the model version - Parameters: - environment_name – environment name from which the endpoint should be undeployed. If environment_name is empty it will attempt to undeploy the model from the default environment 
 - 
updated_at¶
 - 
url¶
 
- 
- 
class merlin.model.Project(project: client.models.project.Project, mlp_url: str, api_client: client.api_client.ApiClient)[source]¶
- Bases: object - 
administrators¶
 - 
create_secret(name: str, data: str)[source]¶
- Create a secret within the project - Parameters: - name – secret name
- data – secret data
 - Returns: 
 - 
created_at¶
 - 
delete_secret(name: str)[source]¶
- Delete secret with given name - Parameters: - name – secret to be removed - Returns: 
 - 
get_or_create_model(model_name: str, model_type: Optional[merlin.model.ModelType] = None) → merlin.model.Model[source]¶
- Get or create a model with given name - Parameters: - model_name – model name
- model_type – type of model, mandatory when creation is needed
 - Returns: - Model instance 
 - 
id¶
 - 
list_model() → List[merlin.model.Model][source]¶
- List all models available within the project - Returns: - list of Model 
 - 
mlflow_tracking_url¶
 - 
name¶
 - 
readers¶
 - 
update_secret(name: str, data: str)[source]¶
- Update secret with given name - Parameters: - name – secret name
- data – new secret data
 - Returns: 
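Secrets are managed per project. A lifecycle sketch (the secret name and values are placeholders):

    import merlin

    project = merlin.active_project()  # assumes merlin.set_project() was called
    project.create_secret("DB_PASSWORD", "s3cr3t")
    project.update_secret("DB_PASSWORD", "n3w-s3cr3t")
    project.delete_secret("DB_PASSWORD")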
 - 
updated_at¶
 - 
url¶
 
- 
- 
class merlin.model.PyFuncModel[source]¶
- Bases: mlflow.pyfunc.model.PythonModel - 
infer(request: dict, **kwargs) → dict[source]¶
- Do inference. This method MUST be implemented by a concrete implementation of PyFuncModel. It accepts 'request', which is the body content of the incoming request, and should return the inference result as a JSON-serializable dictionary. - Parameters: - request – dictionary containing incoming request body content
- **kwargs – see below
 - Returns: - dictionary containing response body - Keyword Arguments: - headers (dict) – dictionary containing incoming HTTP request headers
 
 - 
initialize(artifacts: dict)[source]¶
- Implementation of PyFuncModel can specify initialization step which will be called one time during model initialization. - Parameters: - artifacts – dictionary of artifacts passed to log_model method 
 - 
load_context(context)[source]¶
- Loads artifacts from the specified PythonModelContext that can be used by predict() when evaluating inputs. When loading an MLflow model with load_pyfunc(), this method is called as soon as the PythonModel is constructed. - The same PythonModelContext will also be available during calls to predict(), but it may be more efficient to override this method and load artifacts from the context at model load time. - Parameters: - context – A PythonModelContext instance containing artifacts that the model can use to perform inference.
 - 
predict(model_input, **kwargs)[source]¶
- Evaluates a pyfunc-compatible input and produces a pyfunc-compatible output. For more information about the pyfunc input/output API, see the pyfunc-inference-api. - Parameters: - model_input – A pyfunc-compatible input for the model to evaluate.
 
- 
- 
class merlin.model.PyFuncV2Model[source]¶
- Bases: mlflow.pyfunc.model.PythonModel - 
infer(model_input: pandas.core.frame.DataFrame) → Union[numpy.ndarray, pandas.core.series.Series, pandas.core.frame.DataFrame][source]¶
- Infer method is the main method that will be called when calculating the inference result for both online prediction and batch prediction. The method accepts a pandas DataFrame and returns either another pandas DataFrame, a pandas Series, or an ndarray of the same length as the input. In the batch prediction case, model_input will contain an arbitrary partition of the whole dataset that the user defines as the data source. As such, it is advisable not to do aggregation within the infer method, as the result would be incorrect: it would apply only to the partition rather than to the whole dataset. - Parameters: - model_input – input to the model (pandas.DataFrame) - Returns: - inference result as numpy.ndarray or pandas.Series or pandas.DataFrame 
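A minimal sketch of this contract: a row-wise transformation whose output length equals the input length (the transformation itself is illustrative):

    import pandas as pd
    from merlin.model import PyFuncV2Model

    class ScaleModel(PyFuncV2Model):
        def infer(self, model_input: pd.DataFrame) -> pd.DataFrame:
            # Row-wise only: no aggregation across the partition.
            return model_input * 2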
 - 
initialize(artifacts: dict)[source]¶
- Implementation of PyFuncModel can specify initialization step which will be called one time during model initialization. - Parameters: - artifacts – dictionary of artifacts passed to log_model method 
 - 
load_context(context)[source]¶
- Loads artifacts from the specified PythonModelContext that can be used by predict() when evaluating inputs. When loading an MLflow model with load_pyfunc(), this method is called as soon as the PythonModel is constructed. - The same PythonModelContext will also be available during calls to predict(), but it may be more efficient to override this method and load artifacts from the context at model load time. - Parameters: - context – A PythonModelContext instance containing artifacts that the model can use to perform inference.
 - 
postprocess(model_result: Union[numpy.ndarray, pandas.core.series.Series, pandas.core.frame.DataFrame]) → dict[source]¶
- Postprocess prediction result returned by infer method into dictionary representing the response body of the model. This method will not be called during batch prediction. - Parameters: - model_result – output of the model’s infer method - Returns: - dictionary containing the response body 
 - 
predict(context, model_input)[source]¶
- Evaluates a pyfunc-compatible input and produces a pyfunc-compatible output. For more information about the pyfunc input/output API, see the pyfunc-inference-api. - Parameters: - context – A PythonModelContext instance containing artifacts that the model can use to perform inference.
- model_input – A pyfunc-compatible input for the model to evaluate.
 - 
preprocess(request: dict) → pandas.core.frame.DataFrame[source]¶
- Preprocess incoming request into a pandas DataFrame that will be passed to the infer method. This method will not be called during batch prediction. - Parameters: - request – dictionary representing the incoming request body - Returns: - pandas.DataFrame that will be passed to the infer method 
 - 
raw_infer(request: dict) → dict[source]¶
- Do inference. This method MUST be implemented by a concrete implementation of PyFuncV2Model. It accepts 'request', which is the body content of the incoming request, and should return the inference result as a JSON-serializable dictionary. This method will not be called during batch prediction. - Parameters: - request – dictionary containing incoming request body content - Returns: - dictionary containing response body 
 
- 
merlin.resource_request module¶
merlin.util module¶
merlin.validation module¶
- 
merlin.validation.validate_model_dir(input_model_type, target_model_type, model_dir)[source]¶
- Validates user-provided model directory based on file structure. For tensorflow models, checking is only done on the subdirectory with the largest version number. - Parameters: - input_model_type – type of given model
- target_model_type – type of supposed model, dependent on log_<model type>(…)
- model_dir – directory containing the serialized model file