Welcome back to the OpenVINO channel. We all want to get started with deep-learning-based models and use AI in our applications, so here is the easiest and fastest route to start. The first stage is to take a model. We are using models that are already trained, converged, and frozen, so they are ready for inference. The samples provided with OpenVINO can use these pre-trained models for inference, giving you a working sample application. We talked about how to compile the Inference Engine samples in Video #5, and we have demonstrated many of the samples in other videos. Now let's talk about the models. There are two ways to get a pre-trained model. The first is simply to download it and process it: you can download it from the internet or with the downloader utility, and then you'll need to prepare it for inference
using the Model Optimizer; I'll show you how in a second. Option two is to download one of Intel's pre-trained models; in that case you download the IR files directly,
which are already processed and ready to use right away. Video #6 shows how to use the Model Downloader, and Video #9 shows how to use the Model Optimizer. Let's do it here again. Set up the environment, go to the downloader directory, and run --print_all to see all the available topologies. Now let's download SqueezeNet into my ~/junk directory, and I have the .caffemodel file. Now let's run the Model Optimizer.
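The downloader steps just described can be sketched as shell commands. The install path, the downloader location, and the exact model name (e.g. `squeezenet1.1`) are assumptions; they vary between OpenVINO versions, so check your own installation:

```shell
# Set up the OpenVINO environment (install path is an assumption)
source /opt/intel/openvino/bin/setupvars.sh

# Go to the Model Downloader directory
cd /opt/intel/openvino/deployment_tools/tools/model_downloader

# Print every topology the downloader knows about
python3 downloader.py --print_all

# Download SqueezeNet into ~/junk (model name may differ in your version)
python3 downloader.py --name squeezenet1.1 -o ~/junk
```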
I take the Caffe model and generate the two IR files, .bin and .xml, and I have an IR model ready to use. Downloading the Intel pre-trained models is even easier. Navigate to the models page and choose a model;
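As a sketch, the Model Optimizer conversion might look like this. The mo.py path and the downloaded model path are assumptions based on a classic OpenVINO install layout; adjust them to your setup:

```shell
# Convert the downloaded Caffe model to IR (.xml + .bin).
# Both paths below are assumptions; adjust to your install and download location.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model ~/junk/public/squeezenet1.1/squeezenet1.1.caffemodel \
    --output_dir ~/junk/ir \
    --data_type FP16   # optional: generate an FP16 IR instead of the default FP32
```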
let's look at the face-landmark detection model, for example. You can see here the full description, an example, the specification, the dataset used, and more. Now navigate to the Open Model Zoo web page; this is part of the OpenVINO open source. Choose the model you'd like to use; you can see the prototxt here. Choose the format,
we have FP16, FP32, and even INT8 here, and I can download the .bin and .xml files. Once you have the model, all you have to do is use it in the samples, as I have shown in many other videos. For example, I'm running the interactive face detection demo here;
check out Video #21 to see how it works. I'm using this model for the face detection, this model for the landmark detection, and so on. Not every model can be used with every sample. If you navigate to the samples directory, you can see that the action-recognition model can only be used with the action recognition demo, and only on CPU or GPU, but the driver-action-recognition-adas model can also run on FPGA. The samples also support your own private models and public models; you just have to make sure the inputs and outputs have the same format. So we saw how to choose a pre-trained model, how to download it, how to use it in sample code, and which models can be used with which samples. Subscribe to our channel to get more videos like this one. Thank you.
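Plugging the IR files into a demo can be sketched like this. The model file paths are assumptions, and the flags shown (`-m` for the face detection model, `-m_lm` for the facial landmarks model, `-d` for the device) are based on the Open Model Zoo interactive face detection demo; run the demo with `-h` to confirm the options in your version:

```shell
# Run the interactive face detection demo with two Intel pre-trained models:
# one for face detection and one for facial landmarks (paths are assumptions).
./interactive_face_detection_demo \
    -i cam \
    -m    ~/models/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml \
    -m_lm ~/models/intel/facial-landmarks-35-adas-0002/FP16/facial-landmarks-35-adas-0002.xml \
    -d CPU
```

Swapping `-d CPU` for `-d GPU` (or another plugin) moves inference to a different device without changing the models.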