1. How do we run inference with our own models? Does the engine only accept TFLite, ArmNN, and OpenCV models? If so, should we convert our models to one of those formats? I would also like to know what specifications or format a model must meet to be accepted by the NXP inference engine.
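For context on the conversion step being asked about: a trained TensorFlow model can usually be exported to the `.tflite` format with the standard `tf.lite.TFLiteConverter` API. This is only a minimal sketch, not NXP's documented workflow — the SavedModel directory and output filename are placeholders:

```python
def convert_to_tflite(saved_model_dir, out_path="model.tflite"):
    """Convert a TensorFlow SavedModel to a .tflite flatbuffer file.

    `saved_model_dir` is a placeholder path to an exported SavedModel.
    """
    # Lazy import: TensorFlow is only needed at conversion time,
    # not on the embedded target that runs the .tflite file.
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    tflite_model = converter.convert()

    with open(out_path, "wb") as f:
        f.write(tflite_model)
    return out_path
```

Note that NPUs commonly prefer quantized (integer) models; whether quantization is required here depends on the target and the eIQ documentation, so check the user's guide for your board.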
2. How can we tell whether the latest pyeIQ version is running on the NPU or the CPU? When I check with the `top` command, it shows only CPU load, and the load is above 150%.
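One way to make NPU execution explicit (rather than inferring it from `top`) is to load TFLite's external NPU delegate and pass it to the interpreter. The sketch below assumes the delegate library path commonly used on i.MX Yocto BSP images (`/usr/lib/libvx_delegate.so`) — verify the actual path for your board and BSP release. If no delegate is loaded, inference falls back to the CPU, which would match the high CPU load seen in `top`:

```python
import os

# Assumption: typical VX delegate path on an i.MX Yocto BSP image.
VX_DELEGATE = "/usr/lib/libvx_delegate.so"

def delegate_paths(path=VX_DELEGATE):
    """Return the delegate library path(s) to hand to the interpreter,
    or an empty list when the NPU delegate is not present (CPU fallback)."""
    return [path] if os.path.exists(path) else []

def make_interpreter(model_path, delegate_path=VX_DELEGATE):
    """Build a TFLite interpreter, attaching the NPU delegate if available.

    Imported lazily so this module loads on machines without tflite_runtime.
    """
    import tflite_runtime.interpreter as tflite

    delegates = [tflite.load_delegate(p) for p in delegate_paths(delegate_path)]
    if not delegates:
        print("NPU delegate not found; inference will run on the CPU")
    return tflite.Interpreter(model_path=model_path,
                              experimental_delegates=delegates)
```

On the target, `delegate_paths()` returning an empty list is a quick check that the NPU delegate is missing and the workload is indeed on the CPU.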