Physical Devices#
Canaan#
- class neurio.devices.physical.canaan.kendryte.K210(port: any, name: str = 'k210', log_dir: str | None = None, **kwargs)[source]#
- is_alive(timeout: int = 20) → bool [source]#
Check whether the device is alive.
- Parameters:
timeout – timeout in seconds
- Returns:
True if the device is alive (connected), False otherwise.
- infer(input_x, batch_size: int, **kwargs)#
Run inference on the device and measure the associated performance metrics.
- Parameters:
input_x – input data to infer
batch_size – batch size
kwargs – other parameters
- Returns:
tuple: (inference_results: np.array, profiler: Profiler)
- Raises:
DeviceNotReadyException – if the device is not ready for inference.
- predict(input_x, batch_size, **kwargs)#
Run inference on the device and measure the associated performance metrics.
- Parameters:
input_x – input data to infer
batch_size – batch size
kwargs – other parameters
- Returns:
the inference results
- prepare_for_inference(model, **kwargs)#
Prepare the device for inference. This method must be called before any inference.
- Parameters:
model – model to deploy on the device
kwargs – other parameters relevant for the preparation of the device
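A typical session with the interface above follows a fixed order: check the device, prepare it, then infer. The sketch below encodes that order in a small helper; the commented-out import and constructor arguments follow the signatures documented here, while the port string, log directory, and the `model`/`x` objects are illustrative assumptions.

```python
# Sketch of a typical session, assuming a board speaking this interface.
# from neurio.devices.physical.canaan.kendryte import K210

def benchmark_on_device(device, model, input_x, batch_size=1):
    """Deploy `model` to `device` and run one profiled inference.

    `device` is any object exposing the methods documented above:
    is_alive(), prepare_for_inference(), and infer().
    """
    if not device.is_alive(timeout=20):
        raise RuntimeError("device is not reachable")
    device.prepare_for_inference(model)
    # infer() is documented to return a (results, Profiler) tuple.
    results, profiler = device.infer(input_x, batch_size=batch_size)
    return results, profiler

# With real hardware attached, the call might look like (port is an assumption):
# device = K210(port="/dev/ttyUSB0", log_dir="logs/")
# results, profiler = benchmark_on_device(device, model, x, batch_size=8)
```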
STMicroelectronics#
- class neurio.devices.physical.st.stm32.STM32(port: any = 'serial', device_identifier: str = 'STM32', name: str = 'STM32Base', log_dir: str | None = None, **kwargs)[source]#
- __connect_runner()#
- is_alive(timeout: int = 20) → bool [source]#
Check whether the device is alive.
- Parameters:
timeout – timeout in seconds
- Returns:
True if the device is alive (connected), False otherwise.
- infer(input_x, batch_size: int, **kwargs)#
Run inference on the device and measure the associated performance metrics.
- Parameters:
input_x – input data to infer
batch_size – batch size
kwargs – other parameters
- Returns:
tuple: (inference_results: np.array, profiler: Profiler)
- Raises:
DeviceNotReadyException – if the device is not ready for inference.
- predict(input_x, batch_size, **kwargs)#
Run inference on the device and measure the associated performance metrics.
- Parameters:
input_x – input data to infer
batch_size – batch size
kwargs – other parameters
- Returns:
the inference results
- prepare_for_inference(model, **kwargs)#
Prepare the device for inference. This method must be called before any inference.
- Parameters:
model – model to deploy on the device
kwargs – other parameters relevant for the preparation of the device
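Since is_alive() takes a timeout and a freshly reset board may need a moment before it answers over serial, a small retry wrapper is often convenient. The sketch below works with any device documented on this page; the retry policy itself is an assumption, not part of the library.

```python
import time

def wait_until_alive(device, attempts=3, per_attempt_timeout=20, backoff=1.0):
    """Poll device.is_alive() a few times before giving up.

    Works with any device documented above (K210, STM32, ...), since they
    all expose is_alive(timeout: int) -> bool.
    """
    for attempt in range(attempts):
        if device.is_alive(timeout=per_attempt_timeout):
            return True
        if attempt < attempts - 1:
            time.sleep(backoff)  # give the board time to finish booting
    return False
```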
- class neurio.devices.physical.st.stm32.STM32L4R9(port: any = 'serial', name: str = 'STM32L4R9I-DISCO', log_dir: str | None = None, **kwargs)[source]#
- infer(input_x, batch_size: int, **kwargs)#
Run inference on the device and measure the associated performance metrics.
- Parameters:
input_x – input data to infer
batch_size – batch size
kwargs – other parameters
- Returns:
tuple: (inference_results: np.array, profiler: Profiler)
- Raises:
DeviceNotReadyException – if the device is not ready for inference.
- is_alive(timeout: int = 20) → bool #
Check whether the device is alive.
- Parameters:
timeout – timeout in seconds
- Returns:
True if the device is alive (connected), False otherwise.
- predict(input_x, batch_size, **kwargs)#
Run inference on the device and measure the associated performance metrics.
- Parameters:
input_x – input data to infer
batch_size – batch size
kwargs – other parameters
- Returns:
the inference results
- prepare_for_inference(model, **kwargs)#
Prepare the device for inference. This method must be called before any inference.
- Parameters:
model – model to deploy on the device
kwargs – other parameters relevant for the preparation of the device
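infer() is documented above to raise DeviceNotReadyException when the device has not been prepared. One hedged way to handle that without hard-coding the exception's import path (which these docs do not show) is to let the caller pass the exception class in:

```python
def infer_with_reprepare(device, model, input_x, batch_size,
                         not_ready_exc=Exception):
    """Run infer(), re-running prepare_for_inference() once if the
    device reports it is not ready.

    `not_ready_exc` should be the library's DeviceNotReadyException;
    its import path is not given on this page, so it is taken as a
    parameter here rather than imported.
    """
    try:
        return device.infer(input_x, batch_size=batch_size)
    except not_ready_exc:
        device.prepare_for_inference(model)
        return device.infer(input_x, batch_size=batch_size)
```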
- class neurio.devices.physical.st.stm32.NUCLEOH723ZG(port: any = 'serial', name: str = 'NUCLEO-H723ZG', log_dir: str | None = None, **kwargs)[source]#
- infer(input_x, batch_size: int, **kwargs)#
Run inference on the device and measure the associated performance metrics.
- Parameters:
input_x – input data to infer
batch_size – batch size
kwargs – other parameters
- Returns:
tuple: (inference_results: np.array, profiler: Profiler)
- Raises:
DeviceNotReadyException – if the device is not ready for inference.
- is_alive(timeout: int = 20) → bool #
Check whether the device is alive.
- Parameters:
timeout – timeout in seconds
- Returns:
True if the device is alive (connected), False otherwise.
- predict(input_x, batch_size, **kwargs)#
Run inference on the device and measure the associated performance metrics.
- Parameters:
input_x – input data to infer
batch_size – batch size
kwargs – other parameters
- Returns:
the inference results
- prepare_for_inference(model, **kwargs)#
Prepare the device for inference. This method must be called before any inference.
- Parameters:
model – model to deploy on the device
kwargs – other parameters relevant for the preparation of the device
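Both STM32 subclasses above expose the same base STM32 interface, so benchmark scripts can stay board-agnostic and select a class by name. A sketch of that pattern; the `make_device` helper and the registry keys are assumptions, not part of neurio.

```python
def make_device(board_name, registry, **kwargs):
    """Construct a device class looked up by name in `registry`.

    With real hardware this registry would map names to the classes
    documented above, e.g. (import path from this page):
      from neurio.devices.physical.st.stm32 import STM32L4R9, NUCLEOH723ZG
      registry = {"stm32l4r9": STM32L4R9, "nucleo-h723zg": NUCLEOH723ZG}
    """
    try:
        cls = registry[board_name.lower()]
    except KeyError:
        known = ", ".join(sorted(registry))
        raise ValueError(f"unknown board {board_name!r}; known boards: {known}")
    return cls(**kwargs)
```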