Keras
create_keras_neuropod¶
Packages a Keras model as a neuropod package. Currently, only the TensorFlow backend is supported.
```python
create_keras_neuropod(
    neuropod_path,
    model_name,
    sess,
    model,
    node_name_mapping=None,
    input_spec=None,
    output_spec=None,
    input_tensor_device=None,
    default_input_tensor_device="GPU",
    custom_ops=[],
    package_as_zip=True,
    test_input_data=None,
    test_expected_out=None,
    persist_test_data=True,
)
```
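A minimal end-to-end sketch of a call, assuming a TF1-style Keras backend (one that exposes `keras.backend.get_session()`) and that `create_keras_neuropod` is importable from `neuropod.packagers`; the toy addition model, layer names, and output path are all illustrative:

```python
import numpy as np

# Everything named here (paths, layer names, the toy model) is illustrative.
# node_name_mapping exposes Keras layer names under neuropod tensor names:
node_name_mapping = {"x": "input_1", "y": "input_2", "out": "add_out"}
# Sample inputs in the format expected by the test_input_data parameter:
test_input_data = {
    "x": np.arange(5, dtype="float32"),
    "y": np.arange(5, dtype="float32"),
}

try:
    # Import path is an assumption based on the neuropod Python package layout.
    import keras
    from neuropod.packagers import create_keras_neuropod
except ImportError:
    keras = None  # deps unavailable; the guarded call below still shows the shape

# The packager needs a TF1-style session backend (keras.backend.get_session()).
if keras is not None and hasattr(keras.backend, "get_session"):
    i1 = keras.layers.Input(shape=(1,), name="input_1")
    i2 = keras.layers.Input(shape=(1,), name="input_2")
    model = keras.models.Model(
        inputs=[i1, i2],
        outputs=keras.layers.Add(name="add_out")([i1, i2]),
    )
    create_keras_neuropod(
        neuropod_path="/tmp/addition_model.neuropod",
        model_name="addition_model",
        sess=keras.backend.get_session(),
        model=model,
        node_name_mapping=node_name_mapping,
    )
```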
Params:¶
neuropod_path¶
The output neuropod path
model_name¶
The name of the model
sess¶
A TensorFlow session containing weights (usually `keras.backend.get_session()`).
model¶
A Keras model object.
node_name_mapping¶
default: None
Optional mapping from neuropod input/output names to the names of the Keras model's inputs/outputs.

Example:

```python
{
    "x": "input_1",
    "out": "fc1000",
}
```
Defaults to using Keras input/output names as neuropod input/output names.
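Conceptually, the mapping relabels tensors at the neuropod boundary; a plain-Python sketch of that relabeling (the helper below is illustrative, not part of the Neuropod API):

```python
def apply_name_mapping(node_name_mapping, keras_values):
    """Relabel Keras-named values under neuropod names (illustrative helper)."""
    return {
        neuropod_name: keras_values[keras_name]
        for neuropod_name, keras_name in node_name_mapping.items()
    }

# Keras produced a value under its own output name "fc1000" ...
keras_outputs = {"fc1000": [0.1, 0.9]}
# ... which the mapping exposes under the neuropod name "out".
print(apply_name_mapping({"out": "fc1000"}, keras_outputs))  # {'out': [0.1, 0.9]}
```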
input_spec¶
default: None
A list of dicts specifying the input to the model. For each input, if `shape` is set to `None`, no validation is done on the shape. If `shape` is a tuple, the dimensions of the input are validated against that tuple. A value of `None` for any of the dimensions means that dimension will not be checked. `dtype` can be any valid numpy datatype string.

Example:

```python
[
    {"name": "x", "dtype": "float32", "shape": (None,)},
    {"name": "y", "dtype": "float32", "shape": (None,)},
]
```
output_spec¶
default: None
A list of dicts specifying the output of the model. See the documentation for the `input_spec` parameter for more details.

Example:

```python
[
    {"name": "out", "dtype": "float32", "shape": (None,)},
]
```
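The validation semantics described above can be sketched in plain Python with numpy (`validate_tensor` is a hypothetical helper, not the packager's actual implementation):

```python
import numpy as np

def validate_tensor(spec, value):
    """Check a value against one spec dict; None dims are wildcards (illustrative)."""
    if value.dtype != np.dtype(spec["dtype"]):
        return False
    shape = spec["shape"]
    if shape is None:  # no shape validation at all
        return True
    if len(value.shape) != len(shape):
        return False
    return all(expected is None or actual == expected
               for actual, expected in zip(value.shape, shape))

spec = {"name": "x", "dtype": "float32", "shape": (None, 3)}
print(validate_tensor(spec, np.zeros((5, 3), dtype="float32")))  # True
print(validate_tensor(spec, np.zeros((5, 4), dtype="float32")))  # False: 2nd dim checked
```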
input_tensor_device¶
default: None
A dict mapping input tensor names to the device that the model expects them to be on. This can either be `GPU` or `CPU`. Any tensors in `input_spec` not specified in this mapping will use the `default_input_tensor_device` specified below.

If a GPU is selected at inference time, Neuropod will move tensors to the appropriate devices before running the model. Otherwise, it will attempt to run the model on CPU and move all tensors (and the model) to CPU.

See the docstring for `load_neuropod` for more info.

Example:

```python
{"x": "GPU"}
```
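The per-tensor device resolution reduces to a dict lookup with a fallback to the default; a hypothetical sketch:

```python
def resolve_device(name, input_tensor_device=None, default_input_tensor_device="GPU"):
    """Return the device for one input tensor (illustrative, not the real API)."""
    mapping = input_tensor_device or {}
    return mapping.get(name, default_input_tensor_device)

print(resolve_device("x", {"x": "GPU"}, default_input_tensor_device="CPU"))  # GPU (explicit)
print(resolve_device("y", {"x": "GPU"}, default_input_tensor_device="CPU"))  # CPU (falls back)
```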
default_input_tensor_device¶
default: GPU
The default device that input tensors are expected to be on. This can either be `GPU` or `CPU`.
custom_ops¶
default: []
A list of paths to custom op shared libraries to include in the packaged neuropod.
Note: Including custom ops ties your neuropod to the specific platform (e.g. Mac, Linux) that the custom ops were built for. It is the user's responsibility to ensure that their custom ops are built for the correct platform.
Example:

```python
["/path/to/my/custom_op.so"]
```
package_as_zip¶
default: True
Whether to package the neuropod as a single zip file (True) or as a directory (False).
test_input_data¶
default: None
Optional sample input data. This is a dict mapping input names to values. If this is provided, inference will be run in an isolated environment immediately after packaging to ensure that the neuropod was created successfully. Must be provided if `test_expected_out` is provided.

Throws a ValueError if inference fails.

Example:

```python
{
    "x": np.arange(5),
    "y": np.arange(5),
}
```
test_expected_out¶
default: None
Optional expected output. Throws a ValueError if the output of model inference does not match the expected output.
Example:

```python
{
    "out": np.arange(5) + np.arange(5)
}
```
persist_test_data¶
default: True
If True, the test data provided above is saved within the packaged neuropod.
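The post-packaging self-test compares the model's inference output against `test_expected_out`; its comparison can be sketched with numpy (the helper and the `np.allclose` tolerance are assumptions, not the packager's exact logic):

```python
import numpy as np

def check_test_output(actual_out, test_expected_out):
    """Raise ValueError if actual outputs don't match expected (illustrative)."""
    for name, expected in test_expected_out.items():
        if name not in actual_out:
            raise ValueError("missing output: {}".format(name))
        if not np.allclose(actual_out[name], expected):
            raise ValueError("output mismatch for: {}".format(name))

# The toy model adds its two inputs, so "out" should equal x + y.
actual = {"out": np.arange(5) + np.arange(5)}
check_test_output(actual, {"out": np.arange(5) * 2})  # passes silently
```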