Base class#

class neurio.devices.device.Device(port: any, name: str = '', log_dir: str | None = None, **kwargs)[source]#

Superclass for all devices.

prepare_for_inference(model, **kwargs)[source]#

Prepare the device for inference. This method must be called before any call to infer() or predict().

Parameters:
  • model – model to deploy on the device

  • kwargs – other parameters relevant for the preparation of the device
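As a sketch of the intended call order (the `EchoDevice` subclass and everything inside it are hypothetical stand-ins, not part of neurio):

```python
# Hypothetical concrete Device subclass; not part of neurio.
class EchoDevice:
    def __init__(self, port, name="", log_dir=None, **kwargs):
        self.port = port
        self.name = name
        self.log_dir = log_dir
        self.model = None

    def prepare_for_inference(self, model, **kwargs):
        # A real device would convert/flash the model here; this sketch
        # only records it so later calls can check readiness.
        self.model = model


dev = EchoDevice(port="/dev/ttyUSB0", name="demo")
dev.prepare_for_inference(model="mobilenet_v2")
print(dev.model)  # → mobilenet_v2
```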

infer(input_x, batch_size: int, **kwargs)[source]#

Run inference on the device and measure the associated performance metrics.

Parameters:
  • input_x – input data to infer

  • batch_size – batch size

  • kwargs – other parameters

Returns:

tuple: (inference_results: np.ndarray, profiler: Profiler)

Raises:

DeviceNotReadyException – if the device is not ready for inference.
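A minimal sketch of the documented infer() contract, including the not-ready error path. `FakeDevice`, the dict-based profiler, and the identity "inference" are all illustrative assumptions, not neurio's implementation:

```python
# Hypothetical stand-ins for illustration only.
class DeviceNotReadyException(Exception):
    pass


class FakeDevice:
    def __init__(self):
        self.model = None

    def prepare_for_inference(self, model, **kwargs):
        self.model = model

    def infer(self, input_x, batch_size, **kwargs):
        if self.model is None:
            # Documented behavior: infer() before prepare_for_inference()
            # raises DeviceNotReadyException.
            raise DeviceNotReadyException("device not prepared for inference")
        results = list(input_x)  # identity "inference" for the sketch
        profiler = {"batch_size": batch_size, "n_samples": len(results)}
        return results, profiler


dev = FakeDevice()
dev.prepare_for_inference(model="m")
results, profiler = dev.infer([1, 2, 3], batch_size=1)
print(results, profiler["n_samples"])  # → [1, 2, 3] 3
```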

predict(input_x, batch_size, **kwargs)[source]#

Run inference on the device and measure the associated performance metrics.

Parameters:
  • input_x – input data to infer

  • batch_size – batch size

  • kwargs – other parameters

Returns:

the inference results
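One plausible reading of the predict()/infer() relationship is that predict() returns only the results and discards the profiler; the actual neurio implementation may differ, and `InlineDevice` below is hypothetical:

```python
# Hypothetical subclass; the doubling "model" is a toy stand-in.
class InlineDevice:
    def infer(self, input_x, batch_size, **kwargs):
        results = [x * 2 for x in input_x]
        profiler = {"batch_size": batch_size}
        return results, profiler

    def predict(self, input_x, batch_size, **kwargs):
        # Run the full infer() pipeline but return only the results.
        results, _profiler = self.infer(input_x, batch_size, **kwargs)
        return results


print(InlineDevice().predict([1, 2], batch_size=2))  # → [2, 4]
```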

abstract is_alive(timeout: int = 20) bool[source]#

Check whether the device is alive.

Parameters:

timeout – timeout in seconds

Returns:

True if the device is alive (connected), False otherwise.
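A concrete subclass must provide this check. One sketch of an override that polls the connection until the timeout expires; `PollingDevice` and its `_ping` helper are hypothetical, not neurio APIs:

```python
import time


class PollingDevice:
    def __init__(self, connected=True):
        self._connected = connected

    def _ping(self):
        # Stand-in for a real handshake over the device port.
        return self._connected

    def is_alive(self, timeout: int = 20) -> bool:
        # Retry the handshake until it succeeds or the deadline passes.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if self._ping():
                return True
            time.sleep(0.1)
        return False


print(PollingDevice(connected=True).is_alive(timeout=1))  # → True
```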