TorchScript

create_torchscript_neuropod

Packages a TorchScript model as a neuropod package.

create_torchscript_neuropod(
    neuropod_path,
    model_name,
    input_spec,
    output_spec,
    module=None,
    module_path=None,
    input_tensor_device=None,
    default_input_tensor_device="GPU",
    custom_ops=[],
    package_as_zip=True,
    test_input_data=None,
    test_expected_out=None,
    persist_test_data=True,
)

Params:

neuropod_path

The output neuropod path

model_name

The name of the model

input_spec

A list of dicts specifying the inputs to the model. For each input, if shape is set to None, no validation is done on the shape. If shape is a tuple, the dimensions of the input are validated against that tuple. A value of None for any of the dimensions means that dimension will not be checked. dtype can be any valid numpy datatype string.

Example:

[
    {"name": "x", "dtype": "float32", "shape": (None,)},
    {"name": "y", "dtype": "float32", "shape": (None,)},
]
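
The validation semantics described above can be sketched as a small helper. This is an illustration of the documented behavior, not Neuropod's actual implementation:

```python
import numpy as np

def validate_tensor(tensor, spec_entry):
    """Check a numpy array against one input_spec/output_spec entry.

    Illustrative sketch of the documented semantics, not Neuropod's code:
    shape=None skips shape validation entirely; a None dimension inside
    a shape tuple is left unchecked.
    """
    name = spec_entry["name"]
    if tensor.dtype != np.dtype(spec_entry["dtype"]):
        raise ValueError("dtype mismatch for input '{}'".format(name))

    shape = spec_entry["shape"]
    if shape is None:
        return  # no shape validation at all
    if len(tensor.shape) != len(shape):
        raise ValueError("rank mismatch for input '{}'".format(name))
    for actual, expected in zip(tensor.shape, shape):
        # A dimension of None is not checked
        if expected is not None and actual != expected:
            raise ValueError("shape mismatch for input '{}'".format(name))

# A shape of (None,) accepts any 1-d float32 array
validate_tensor(
    np.arange(5, dtype="float32"),
    {"name": "x", "dtype": "float32", "shape": (None,)},
)
```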

output_spec

A list of dicts specifying the output of the model. See the documentation for the input_spec parameter for more details.

Example:

[
    {"name": "out", "dtype": "float32", "shape": (None,)},
]

module

default: None

An instance of a PyTorch ScriptModule. This module should return its outputs as a dictionary mapping output names to values. If this is not provided, module_path must be set.

For example, a model may output something like this:

{
    "output1": value1,
    "output2": value2,
}
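
A minimal ScriptModule with this output convention might look like the following. This is a toy sketch (the names x, y, and out match the spec examples above), not a model from this document:

```python
import torch

class AdditionModel(torch.nn.Module):
    """Toy model that returns its outputs as a dictionary."""

    def forward(self, x, y):
        # Outputs are returned as a dict mapping output names to tensors
        return {"out": x + y}

# Compile the model to TorchScript so it can be passed as `module`
module = torch.jit.script(AdditionModel())
```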

module_path

default: None

The path to a ScriptModule that was already exported using torch.jit.save. If this is not provided, module must be set.

input_tensor_device

default: None

A dict mapping input tensor names to the device that the model expects them to be on. This can either be "GPU" or "CPU". Any tensors in input_spec not specified in this mapping will use the default_input_tensor_device described below.

If a GPU is selected at inference time, Neuropod will move tensors to the appropriate devices before running the model. Otherwise, it will attempt to run the model on CPU and move all tensors (and the model) to CPU.

See the docstring for load_neuropod for more info.

Example:

{"x": "GPU"}
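
The per-tensor device resolution described above amounts to a lookup with a fallback to the default. This is an illustrative sketch, not Neuropod's internal code:

```python
def resolve_input_devices(input_spec, input_tensor_device=None,
                          default_input_tensor_device="GPU"):
    """Map each input name to the device it is expected on.

    Names present in input_tensor_device use that device; all other
    inputs fall back to default_input_tensor_device. Illustrative only.
    """
    input_tensor_device = input_tensor_device or {}
    return {
        entry["name"]: input_tensor_device.get(
            entry["name"], default_input_tensor_device)
        for entry in input_spec
    }

spec = [
    {"name": "x", "dtype": "float32", "shape": (None,)},
    {"name": "y", "dtype": "float32", "shape": (None,)},
]
devices = resolve_input_devices(
    spec, input_tensor_device={"x": "GPU"}, default_input_tensor_device="CPU")
# "x" uses its explicit mapping; "y" falls back to the default
```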

default_input_tensor_device

default: "GPU"

The default device that input tensors are expected to be on. This can either be "GPU" or "CPU".

custom_ops

default: []

A list of paths to custom op shared libraries to include in the packaged neuropod.

Note: Including custom ops ties your neuropod to the specific platform (e.g. Mac, Linux) that the custom ops were built for. It is the user's responsibility to ensure that their custom ops are built for the correct platform.

Example:

["/path/to/my/custom_op.so"]

package_as_zip

default: True

Whether to package the neuropod as a single zip file (True) or as a directory (False).

test_input_data

default: None

Optional sample input data. This is a dict mapping input names to values. If this is provided, inference will be run in an isolated environment immediately after packaging to ensure that the neuropod was created successfully. Must be provided if test_expected_out is provided.

Raises a ValueError if inference fails.

Example:

{
    "x": np.arange(5),
    "y": np.arange(5),
}

test_expected_out

default: None

Optional expected output. Raises a ValueError if the output of model inference does not match the expected output.

Example:

{
    "out": np.arange(5) + np.arange(5)
}
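
The packaging-time check described above is roughly equivalent to comparing each expected array against the corresponding inference output. This is a sketch of the documented behavior, not Neuropod's exact comparison logic:

```python
import numpy as np

def check_expected_out(actual, expected):
    """Compare inference outputs against test_expected_out, key by key.

    Illustrative sketch: raises ValueError on a missing or mismatched
    output, mirroring the documented packaging-time self-test.
    """
    for name, expected_value in expected.items():
        if name not in actual:
            raise ValueError("missing output: {}".format(name))
        if not np.array_equal(actual[name], expected_value):
            raise ValueError("output mismatch for: {}".format(name))

# Matches the test_input_data / test_expected_out examples above
actual = {"out": np.arange(5) + np.arange(5)}
check_expected_out(actual, {"out": np.arange(5) + np.arange(5)})  # passes
```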

persist_test_data

default: True

Whether to save the test data provided above within the packaged neuropod.
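
Putting the parameters together, a complete packaging call might look like the sketch below. The import path and the toy AdditionModel are assumptions, not taken from this document; adjust them to your installed neuropod version. The call is guarded so the sketch degrades gracefully where neuropod or torch is not installed:

```python
import numpy as np

# Assumed import path (it may differ across neuropod versions)
try:
    import torch
    from neuropod.packagers import create_torchscript_neuropod
except ImportError:
    torch = None

input_spec = [
    {"name": "x", "dtype": "float32", "shape": (None,)},
    {"name": "y", "dtype": "float32", "shape": (None,)},
]
output_spec = [
    {"name": "out", "dtype": "float32", "shape": (None,)},
]
test_input_data = {
    "x": np.arange(5, dtype="float32"),
    "y": np.arange(5, dtype="float32"),
}
test_expected_out = {"out": test_input_data["x"] + test_input_data["y"]}

if torch is not None:
    class AdditionModel(torch.nn.Module):
        def forward(self, x, y):
            # Outputs are returned as a dictionary
            return {"out": x + y}

    create_torchscript_neuropod(
        neuropod_path="addition_model.neuropod",
        model_name="addition_model",
        module=torch.jit.script(AdditionModel()),
        input_spec=input_spec,
        output_spec=output_spec,
        # Run a self-test in an isolated environment right after packaging
        test_input_data=test_input_data,
        test_expected_out=test_expected_out,
    )
```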