AiBench – Goutham Kumar Vadivelu

AiBench lets users benchmark the predictions and latency of a variety of deep learning models. Choose any of the readily available classification, detection, segmentation, or pose estimation models, and the app shows live results on the camera feed along with the time taken for inference.
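As a rough sketch of how "time taken for inference" can be measured, the snippet below times a single Core ML prediction. The `model` and `input` parameters stand in for whichever model and camera frame the benchmark is running; this is illustrative, not AiBench's actual implementation.

```swift
import CoreML
import QuartzCore

// Illustrative latency measurement around one Core ML prediction.
// Returns the model output together with the elapsed time in milliseconds.
func timedPrediction(model: MLModel, input: MLFeatureProvider) throws
        -> (output: MLFeatureProvider, milliseconds: Double) {
    let start = CACurrentMediaTime()            // monotonic clock, good for timing
    let output = try model.prediction(from: input)
    let elapsed = (CACurrentMediaTime() - start) * 1000
    return (output, elapsed)
}
```

In practice a benchmark would run a few warm-up predictions first and report an average, since the first inference often includes one-time setup cost.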

State-of-the-art models such as EfficientNet, MobileNet, YOLO, and DeepLab are available by default for benchmarking.

In addition to the deep learning models that come with the app by default, you can add your own custom CoreML models to visualize their results and benchmark them.
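Loading a user-supplied model could look roughly like the sketch below: Core ML models must be compiled on-device before they can be loaded. The function name and the Vision wrapper are assumptions for illustration, not AiBench's actual API.

```swift
import CoreML
import Vision

// Hypothetical sketch: compile a user-supplied .mlmodel file on-device
// and wrap it for use with the Vision framework.
func loadCustomModel(at modelURL: URL) throws -> VNCoreMLModel {
    // Core ML runs compiled models (.mlmodelc), so compile the raw .mlmodel first.
    let compiledURL = try MLModel.compileModel(at: modelURL)
    let model = try MLModel(contentsOf: compiledURL)
    return try VNCoreMLModel(for: model)
}
```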

Want to test inference latency specifically on the CPU, the GPU, or even the Neural Engine (NPU)?
Just choose the device you want the inference to run on!
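Under Core ML, this device choice maps onto `MLModelConfiguration.computeUnits`. The mapping below is a minimal sketch, assuming a simple string-based selection; the `choice` values are illustrative.

```swift
import CoreML

// Sketch: map a user's device choice to Core ML compute units.
// .cpuOnly restricts inference to the CPU, .cpuAndGPU also allows the GPU,
// and .all additionally permits the Apple Neural Engine.
func configuration(for choice: String) -> MLModelConfiguration {
    let config = MLModelConfiguration()
    switch choice {
    case "CPU": config.computeUnits = .cpuOnly
    case "GPU": config.computeUnits = .cpuAndGPU
    default:    config.computeUnits = .all   // CPU + GPU + Neural Engine
    }
    return config
}
```

Note that `.all` lets Core ML pick the best available unit per layer, so it is the usual default, while the restricted options are what make per-device latency comparisons possible.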

All models are downloaded from the server only on demand and can be deleted at any time, so there is no worry about large models filling up your device storage!
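On-demand model management can be sketched as below: fetch a model file only when the user requests it, and delete it to reclaim space. The class name, directory layout, and remote URL handling are assumptions for illustration, not AiBench's actual server protocol.

```swift
import Foundation

// Hypothetical sketch of on-demand model storage: download when requested,
// delete at any time.
final class ModelStore {
    private let directory = FileManager.default.urls(
        for: .applicationSupportDirectory, in: .userDomainMask)[0]

    func localURL(for name: String) -> URL {
        directory.appendingPathComponent(name)
    }

    // Download a model file from the server only when the user asks for it.
    func download(name: String, from remote: URL,
                  completion: @escaping (Result<URL, Error>) -> Void) {
        URLSession.shared.downloadTask(with: remote) { temp, _, error in
            if let error = error { return completion(.failure(error)) }
            guard let temp = temp else {
                return completion(.failure(URLError(.badServerResponse)))
            }
            let dest = self.localURL(for: name)
            do {
                try? FileManager.default.removeItem(at: dest) // replace stale copy
                try FileManager.default.moveItem(at: temp, to: dest)
                completion(.success(dest))
            } catch { completion(.failure(error)) }
        }.resume()
    }

    // Remove a downloaded model to free up device storage.
    func delete(name: String) throws {
        try FileManager.default.removeItem(at: localURL(for: name))
    }
}
```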

For any queries, issues, or instructions regarding custom CoreML model formats, visit