
TensorFlow

create_tensorflow_neuropod

Packages a TensorFlow model as a neuropod package.

create_tensorflow_neuropod(
    neuropod_path,
    model_name,
    input_spec,
    output_spec,
    node_name_mapping=None,
    frozen_graph_path=None,
    graph_def=None,
    saved_model_dir=None,
    trackable_obj=None,
    init_op_names=[],
    platform_version_semver="*",
    input_tensor_device=None,
    default_input_tensor_device="GPU",
    custom_ops=[],
    package_as_zip=True,
    test_input_data=None,
    test_expected_out=None,
    persist_test_data=True,
)
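To show how the pieces fit together, here is a minimal sketch of a packaging call. The output path, model name, graph path, and node names are all hypothetical; the actual call is shown commented out since it requires the neuropod package to be installed.

```python
# Specs and node mapping for a hypothetical two-input addition model.
input_spec = [
    {"name": "x", "dtype": "float32", "shape": (None,)},
    {"name": "y", "dtype": "float32", "shape": (None,)},
]
output_spec = [
    {"name": "out", "dtype": "float32", "shape": (None,)},
]
node_name_mapping = {
    "x": "some_namespace/in_x:0",
    "y": "some_namespace/in_y:0",
    "out": "some_namespace/out:0",
}

# With neuropod installed, the packaging call would look like:
# from neuropod.packagers import create_tensorflow_neuropod
# create_tensorflow_neuropod(
#     neuropod_path="addition_model.neuropod",       # hypothetical output path
#     model_name="addition_model",
#     input_spec=input_spec,
#     output_spec=output_spec,
#     node_name_mapping=node_name_mapping,
#     frozen_graph_path="addition_model_frozen.pb",  # hypothetical graph
# )
```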

Params:

neuropod_path

The output neuropod path

model_name

The name of the model

input_spec

A list of dicts specifying the inputs to the model. For each input, if shape is set to None, no validation is done on the shape. If shape is a tuple, the dimensions of the input are validated against that tuple. A value of None for any of the dimensions means that dimension will not be checked. dtype can be any valid numpy datatype string.

Example:

[
    {"name": "x", "dtype": "float32", "shape": (None,)},
    {"name": "y", "dtype": "float32", "shape": (None,)},
]
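The shape-checking rule described above can be sketched as a small standalone function. This is an illustrative re-implementation of the rule, not Neuropod's actual validator:

```python
def shape_matches(spec_shape, actual_shape):
    """Illustrative sketch of the shape rule: a spec shape of None skips
    validation entirely, and a None in any dimension of a tuple spec
    skips the check for that dimension only."""
    if spec_shape is None:
        return True  # no validation at all
    if len(spec_shape) != len(actual_shape):
        return False  # rank must match
    return all(
        expected is None or expected == actual
        for expected, actual in zip(spec_shape, actual_shape)
    )

# (None,) matches any 1-d shape
assert shape_matches((None,), (5,))
# rank mismatch fails
assert not shape_matches((None,), (5, 3))
# fixed dimensions are checked exactly
assert shape_matches((None, 3), (10, 3))
assert not shape_matches((None, 3), (10, 4))
```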

output_spec

A list of dicts specifying the output of the model. See the documentation for the input_spec parameter for more details.

Example:

[
    {"name": "out", "dtype": "float32", "shape": (None,)},
]

node_name_mapping

default: None

A mapping from neuropod input/output names to nodes in the graph. The trailing :0 is optional. This parameter is required unless a saved model is used.

Example:

{
    "x": "some_namespace/in_x:0",
    "y": "some_namespace/in_y:0",
    "out": "some_namespace/out:0",
}
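Because the trailing :0 is optional, "some_namespace/in_x" and "some_namespace/in_x:0" refer to the same tensor. Conceptually, names are normalized along these lines (an illustrative sketch, not the actual implementation):

```python
def normalize_node_name(name, default_output_index=0):
    """Append an explicit output index when one is missing, so that a
    mapping with or without the trailing :0 refers to the same tensor.
    Illustrative sketch only, not Neuropod's actual code."""
    # Only look at the last path component so a ":" elsewhere can't confuse us.
    if ":" in name.rsplit("/", 1)[-1]:
        return name
    return "{}:{}".format(name, default_output_index)

assert normalize_node_name("some_namespace/in_x") == "some_namespace/in_x:0"
assert normalize_node_name("some_namespace/in_x:0") == "some_namespace/in_x:0"
```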

frozen_graph_path

default: None

The path to a frozen tensorflow graph. Exactly one of frozen_graph_path, graph_def, saved_model_dir and trackable_obj must be provided.

graph_def

default: None

A tensorflow GraphDef object. Exactly one of frozen_graph_path, graph_def, saved_model_dir and trackable_obj must be provided.

saved_model_dir

default: None

The path to a tensorflow saved model dir. Exactly one of frozen_graph_path, graph_def, saved_model_dir and trackable_obj must be provided. Note: this is currently only tested with TF 2.x.

trackable_obj

default: None

A trackable object that can be passed to tf.saved_model.save. For more control over the saved model, you can create one yourself and pass in the path using saved_model_dir. Exactly one of frozen_graph_path, graph_def, saved_model_dir and trackable_obj must be provided. Note: this is currently only tested with TF 2.x.

init_op_names

default: []

A list of initialization operator names. These operations are evaluated in the session used for inference right after the session is created. These operators may be used for initialization of variables.

platform_version_semver

default: *

The versions of the platform (e.g. Torch, TensorFlow, etc) that this model is compatible with specified as semver range. See https://semver.org/, https://docs.npmjs.com/misc/semver#ranges or https://docs.npmjs.com/misc/semver#advanced-range-syntax for examples and more info. Default is * (any version is okay).

When this model is loaded, Neuropod will load it with a backend that is compatible with the specified versions ranges or throw an error if no compatible backends are installed. This can be used to ensure a model always runs with a particular version of a framework.

Example: "1.13.1", "> 1.13.1", or "1.4.0 - 1.6.0"
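As a rough intuition for how a hyphen range like "1.4.0 - 1.6.0" is evaluated, here is an illustrative checker for that one range form only. Neuropod's real matching supports the full npm-style range syntax linked above; this sketch is not its actual implementation:

```python
def version_in_hyphen_range(version, version_range):
    """Illustrative check for a simple semver hyphen range such as
    "1.4.0 - 1.6.0" (inclusive on both ends). Sketch only; the real
    matcher handles the full npm-style range grammar."""
    def parse(v):
        # "1.13.1" -> (1, 13, 1); tuples compare component-wise.
        return tuple(int(part) for part in v.split("."))

    low, high = (parse(bound.strip()) for bound in version_range.split("-"))
    return low <= parse(version) <= high

assert version_in_hyphen_range("1.5.0", "1.4.0 - 1.6.0")
assert not version_in_hyphen_range("1.13.1", "1.4.0 - 1.6.0")
```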

input_tensor_device

default: None

A dict mapping input tensor names to the device that the model expects them to be on. This can either be GPU or CPU. Any tensors in input_spec not specified in this mapping will use the default_input_tensor_device specified below.

If a GPU is selected at inference time, Neuropod will move tensors to the appropriate devices before running the model. Otherwise, it will attempt to run the model on CPU and move all tensors (and the model) to CPU.

See the docstring for load_neuropod for more info.

Example:

{"x": "GPU"}

default_input_tensor_device

default: GPU

The default device that input tensors are expected to be on. This can either be GPU or CPU.

custom_ops

default: []

A list of paths to custom op shared libraries to include in the packaged neuropod.

Note: Including custom ops ties your neuropod to the specific platform (e.g. Mac, Linux) that the custom ops were built for. It is the user's responsibility to ensure that their custom ops are built for the correct platform.

Example:

["/path/to/my/custom_op.so"]

package_as_zip

default: True

Whether to package the neuropod as a single zipfile (True) or as a directory (False).

test_input_data

default: None

Optional sample input data. This is a dict mapping input names to values. If this is provided, inference will be run in an isolated environment immediately after packaging to ensure that the neuropod was created successfully. Must be provided if test_expected_out is provided.

Throws a ValueError if inference fails.

Example:

{
    "x": np.arange(5),
    "y": np.arange(5),
}

test_expected_out

default: None

Optional expected output. Throws a ValueError if the output of model inference does not match the expected output.

Example:

{
    "out": np.arange(5) + np.arange(5)
}
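The packaging-time check driven by test_input_data and test_expected_out amounts to running the model and comparing its outputs against the expected values. A rough sketch of that comparison (illustrative only, not Neuropod's actual code, and using plain lists in place of numpy arrays):

```python
import math

def outputs_match(expected, actual, rel_tol=1e-6):
    """Compare expected and actual output dicts elementwise, raising
    ValueError on mismatch. Illustrative sketch of the packaging-time
    check, using plain lists instead of numpy arrays."""
    if set(expected) != set(actual):
        raise ValueError("Output names do not match")
    for name in expected:
        exp, act = expected[name], actual[name]
        if len(exp) != len(act) or not all(
            math.isclose(e, a, rel_tol=rel_tol) for e, a in zip(exp, act)
        ):
            raise ValueError("Output '{}' does not match".format(name))
    return True

# Mirrors the example above: out == x + y elementwise for x = y = range(5)
assert outputs_match({"out": [0, 2, 4, 6, 8]}, {"out": [0, 2, 4, 6, 8]})
```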

persist_test_data

default: True

Whether to save the test data (test_input_data and test_expected_out) within the packaged neuropod.