TensorFlow

create_tensorflow_neuropod

Packages a TensorFlow model as a neuropod package.

create_tensorflow_neuropod(
    neuropod_path,
    model_name,
    node_name_mapping,
    input_spec,
    output_spec,
    frozen_graph_path = None,
    graph_def = None,
    init_op_names = [],
    input_tensor_device = None,
    default_input_tensor_device = "GPU",
    custom_ops = [],
    package_as_zip = True,
    test_input_data = None,
    test_expected_out = None,
    persist_test_data = True,
)

Params:

neuropod_path

The output neuropod path

model_name

The name of the model

node_name_mapping

A dict mapping neuropod input/output names to node names in the graph. The trailing :0 (the tensor's output index) is optional.

Example:

{
    "x": "some_namespace/in_x:0",
    "y": "some_namespace/in_y:0",
    "out": "some_namespace/out:0",
}
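
Because the :0 suffix is optional, entries with and without it refer to the same tensor. A minimal sketch of how such names might be normalized (normalize_node_name is a hypothetical helper for illustration, not part of the Neuropod API):

```python
def normalize_node_name(name):
    # TensorFlow tensor names have the form "<op_name>:<output_index>".
    # When the index is omitted, output 0 is implied, so this hypothetical
    # helper appends the default ":0" suffix.
    return name if ":" in name else name + ":0"

node_name_mapping = {
    "x": "some_namespace/in_x",    # no suffix...
    "y": "some_namespace/in_y:0",  # ...and an explicit ":0" are equivalent
}

normalized = {k: normalize_node_name(v) for k, v in node_name_mapping.items()}
print(normalized["x"])  # some_namespace/in_x:0
```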

input_spec

A list of dicts specifying the input to the model. For each input, if shape is set to None, no validation is done on the shape. If shape is a tuple, the dimensions of the input are validated against that tuple. A value of None for any of the dimensions means that dimension will not be checked. dtype can be any valid numpy datatype string.

Example:

[
    {"name": "x", "dtype": "float32", "shape": (None,)},
    {"name": "y", "dtype": "float32", "shape": (None,)},
]
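
To illustrate the validation semantics described above, here is a sketch of how a tensor could be checked against one spec entry (check_tensor is a hypothetical function written for this example, not Neuropod's actual validation code):

```python
import numpy as np

def check_tensor(value, spec):
    # Hypothetical checker illustrating the spec semantics; NOT Neuropod's
    # actual implementation.
    if value.dtype != np.dtype(spec["dtype"]):
        raise ValueError("dtype mismatch for {}".format(spec["name"]))
    shape = spec["shape"]
    if shape is None:
        return  # shape is None: skip shape validation entirely
    if len(value.shape) != len(shape):
        raise ValueError("rank mismatch for {}".format(spec["name"]))
    for actual, expected in zip(value.shape, shape):
        # A None dimension means "do not check this dimension"
        if expected is not None and actual != expected:
            raise ValueError("shape mismatch for {}".format(spec["name"]))

spec = {"name": "x", "dtype": "float32", "shape": (None,)}
check_tensor(np.arange(5, dtype="float32"), spec)  # passes: any length is fine
```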

output_spec

A list of dicts specifying the output of the model. See the documentation for the input_spec parameter for more details.

Example:

[
    {"name": "out", "dtype": "float32", "shape": (None,)},
]

frozen_graph_path

default: None

The path to a frozen TensorFlow graph. If this is not provided, graph_def must be set.

graph_def

default: None

A TensorFlow GraphDef object. If this is not provided, frozen_graph_path must be set.

init_op_names

default: []

A list of initialization operator names. These ops are evaluated in the inference session immediately after it is created, and are typically used to initialize variables.

input_tensor_device

default: None

A dict mapping input tensor names to the device that the model expects them to be on. This can either be GPU or CPU. Any tensors in input_spec not specified in this mapping will use the default_input_tensor_device specified below.

If a GPU is selected at inference time, Neuropod will move tensors to the appropriate devices before running the model. Otherwise, it will attempt to run the model on CPU and move all tensors (and the model) to CPU.

See the docstring for load_neuropod for more info.

Example:

{"x": "GPU"}

default_input_tensor_device

default: GPU

The default device that input tensors are expected to be on. This can either be GPU or CPU.
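
Putting the two device options together, the effective device for each input can be thought of as follows (resolve_input_devices is an illustrative sketch written for this example, not Neuropod's internal code):

```python
def resolve_input_devices(input_spec, input_tensor_device=None,
                          default_input_tensor_device="GPU"):
    # For each input, use the explicit per-tensor device if one was given,
    # otherwise fall back to the default. Illustrative only.
    input_tensor_device = input_tensor_device or {}
    return {
        spec["name"]: input_tensor_device.get(spec["name"],
                                              default_input_tensor_device)
        for spec in input_spec
    }

input_spec = [
    {"name": "x", "dtype": "float32", "shape": (None,)},
    {"name": "y", "dtype": "float32", "shape": (None,)},
]
devices = resolve_input_devices(input_spec, input_tensor_device={"x": "CPU"})
print(devices)  # {'x': 'CPU', 'y': 'GPU'}
```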

custom_ops

default: []

A list of paths to custom op shared libraries to include in the packaged neuropod.

Note: Including custom ops ties your neuropod to the specific platform (e.g. Mac, Linux) that the custom ops were built for. It is the user's responsibility to ensure that their custom ops are built for the correct platform.

Example:

["/path/to/my/custom_op.so"]

package_as_zip

default: True

Whether to package the neuropod as a single zip file (True) or as a directory (False).

test_input_data

default: None

Optional sample input data. This is a dict mapping input names to values. If this is provided, inference will be run in an isolated environment immediately after packaging to ensure that the neuropod was created successfully. Must be provided if test_expected_out is provided.

Raises a ValueError if inference fails.

Example:

{
    "x": np.arange(5),
    "y": np.arange(5),
}

test_expected_out

default: None

Optional expected output. Raises a ValueError if the output of model inference does not match the expected output.

Example:

{
    "out": np.arange(5) + np.arange(5)
}
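
The comparison described above can be approximated with numpy. This is a sketch of the semantics only (check_expected_out is a hypothetical function; the actual packager's comparison, tolerances, and error messages may differ):

```python
import numpy as np

def check_expected_out(actual, expected):
    # Compare each expected output against the actual inference result,
    # mirroring the ValueError behavior described above. Sketch only.
    for name, expected_value in expected.items():
        if name not in actual:
            raise ValueError("missing output: {}".format(name))
        if not np.array_equal(actual[name], expected_value):
            raise ValueError("output mismatch for: {}".format(name))

test_input_data = {"x": np.arange(5), "y": np.arange(5)}
test_expected_out = {"out": np.arange(5) + np.arange(5)}

# Simulate the output of an addition model; this passes silently:
check_expected_out({"out": test_input_data["x"] + test_input_data["y"]},
                   test_expected_out)
```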

persist_test_data

default: True

If True, saves the test data within the packaged neuropod.
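
For reference, a complete packaging call might look like the following sketch. The output path and the tiny addition graph are hypothetical, and the TensorFlow/Neuropod imports are guarded so the snippet degrades gracefully when those packages are not installed:

```python
import numpy as np

# Specs matching the parameter examples above
node_name_mapping = {
    "x": "some_namespace/in_x:0",
    "y": "some_namespace/in_y:0",
    "out": "some_namespace/out:0",
}
input_spec = [
    {"name": "x", "dtype": "float32", "shape": (None,)},
    {"name": "y", "dtype": "float32", "shape": (None,)},
]
output_spec = [
    {"name": "out", "dtype": "float32", "shape": (None,)},
]

try:
    import tensorflow as tf
    from neuropod.packagers import create_tensorflow_neuropod

    # Build a tiny addition graph (hypothetical example model)
    g = tf.Graph()
    with g.as_default(), tf.compat.v1.name_scope("some_namespace"):
        x = tf.compat.v1.placeholder(tf.float32, name="in_x")
        y = tf.compat.v1.placeholder(tf.float32, name="in_y")
        tf.add(x, y, name="out")

    create_tensorflow_neuropod(
        neuropod_path="addition_model.neuropod",  # hypothetical output path
        model_name="addition_model",
        graph_def=g.as_graph_def(),
        node_name_mapping=node_name_mapping,
        input_spec=input_spec,
        output_spec=output_spec,
        test_input_data={"x": np.arange(5, dtype="float32"),
                         "y": np.arange(5, dtype="float32")},
        test_expected_out={"out": np.arange(5, dtype="float32") * 2},
    )
except ImportError:
    pass  # TensorFlow / Neuropod not available in this environment
```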