This is an unofficial pre-built OpenCV package with the Inference Engine part of OpenVINO, for Python.
## Installation

Remove previously installed OpenCV versions, then:

```bash
pip3 install opencv-python-inference-engine
```
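If you want to confirm that the installed wheel really has the Inference Engine backend compiled in, `cv2.getBuildInformation()` reports it. A minimal sketch -- the `has_inference_engine` helper and the sample excerpt below are illustrative, not part of this package:

```python
# Sketch: confirm an OpenCV build has Inference Engine support.
# In a real session you would use:
#   import cv2
#   info = cv2.getBuildInformation()

def has_inference_engine(build_info: str) -> bool:
    """Return True if the build summary reports Inference Engine support."""
    for line in build_info.splitlines():
        if "Inference Engine" in line and "YES" in line:
            return True
    return False

# illustrative excerpt of what getBuildInformation() might print
sample = """
  Other third-party libraries:
    Inference Engine:            YES
"""
print(has_inference_engine(sample))  # expect True
```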
## Examples of usage
Please see the `examples.ipynb` notebook in the repository.
You will need to preprocess data as the model requires and decode the output. A description of the decoding should be in the model documentation, with examples in the OpenVINO documentation; however, in some cases the original article may be the only information source. Some models are very simple to encode/decode, others are tough (e.g., PixelLink in the tests).
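To make "preprocess as the model requires" concrete, here is a hand-rolled sketch of the usual steps for a classification model in IR format: convert the image to an NCHW float blob, then softmax-decode the raw output. All shapes, mean, and scale values are illustrative (check your model's documentation); with the package installed, the blob would go to a `cv2.dnn` network via `net.setInput(blob)` and `net.forward()`, and `cv2.dnn.blobFromImage` can do the layout work for you.

```python
import numpy as np

def preprocess(image: np.ndarray, mean: float = 127.5, scale: float = 1 / 127.5) -> np.ndarray:
    """HWC uint8 image -> NCHW float32 blob, normalized (illustrative values)."""
    blob = (image.astype(np.float32) - mean) * scale   # normalize
    blob = blob.transpose(2, 0, 1)[np.newaxis, ...]    # HWC -> NCHW
    return blob

def decode_classification(logits: np.ndarray, top_k: int = 3):
    """Softmax over raw network output, return (class_id, probability) pairs."""
    e = np.exp(logits - logits.max())
    probs = e / e.sum()
    top = probs.argsort()[::-1][:top_k]
    return [(int(i), float(probs[i])) for i in top]

# toy 2x2 BGR "image" and fake logits, just to show the shapes
img = np.zeros((2, 2, 3), dtype=np.uint8)
print(preprocess(img).shape)  # (1, 3, 2, 2)
print(decode_classification(np.array([0.1, 2.0, 0.5]), top_k=1))
```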
## Downloading Intel models
I needed the ability to quickly deploy a small package that is able to run models from Intel's model zoo and use the Movidius NCS. The well-known `opencv-python` package can't do this. The official way is to use OpenVINO, but it is big and clumsy (just try to use it with a Python venv, or to download it quickly on a cloud instance).
- Package comes without contrib modules.
- You need to add udev rules if you want a working MYRIAD plugin.
- It was tested on Ubuntu 18.04, on Ubuntu 18.10 as the Windows 10 Subsystem, and on Gentoo.
- It will not work for Ubuntu 16.04 and below (except v220.127.116.11).
- There are no builds for Windows or MacOS.
- It was built without GTK/QT support -- use `matplotlib` for plotting your results.
- It is a 64-bit build.
## Main differences from `opencv-python`

- Usage of `TBB` as the parallel framework
- Inference Engine with the MYRIAD plugin
## Main differences from OpenVINO
- No model-optimizer
- No ITT
- No IPP
- No Intel Media SDK
- No OpenVINO IE API
- No python2 support (it is dead)
- No Gstreamer (use ffmpeg)
- No GTK (+16 MB, and a lot of problems and extra work to compile the Qt/GTK libs from source)
For additional info, see the release notes. Packages are versioned as YYYY.MM.DD, because it is the simplest way to track the opencv/openvino versions.
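A side note on the scheme: a YYYY.MM.DD version string maps directly onto a calendar date, so releases can be ordered with nothing but the standard library. A tiny sketch (the version strings are made up):

```python
from datetime import date

def parse_version(v: str) -> date:
    """Interpret a YYYY.MM.DD package version as a calendar date."""
    y, m, d = (int(x) for x in v.split("."))
    return date(y, m, d)

# hypothetical release strings, for illustration only
versions = ["2020.1.15", "2019.12.3", "2020.4.28"]
print(sorted(versions, key=parse_version))
# ['2019.12.3', '2020.1.15', '2020.4.28']
```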
## Compiling from source
You will need ~7 GB RAM and ~10 GB disk space.

I am using an Ubuntu 18.04 (Python 3.6) multipass instance:

```bash
multipass launch -c 6 -d 10G -m 7G 18.04
```
```bash
# We need a newer `cmake` for dldt (fastest way I know): >=cmake-3.16
sudo apt remove --purge cmake
hash -r
sudo snap install cmake --classic

# nasm for ffmpeg
# libusb-1.0-0-dev for the MYRIAD plugin
sudo apt update
sudo apt install build-essential git pkg-config python3-dev nasm python3 virtualenv libusb-1.0-0-dev chrpath shellcheck

# for ngraph: `dldt/_deps/ext_onnx-src/onnx/gen_proto.py` has a
# `#!/usr/bin/env python` shebang and will throw an error otherwise
sudo ln -s /usr/bin/python3 /usr/bin/python
```
```bash
git clone https://github.com/banderlog/opencv-python-inference-engine
cd opencv-python-inference-engine
# git checkout dev
./download_all_stuff.sh
```
```bash
cd build/ffmpeg
./ffmpeg_setup.sh && ./ffmpeg_premake.sh && make -j6 && make install

cd ../dldt
./dldt_setup.sh && make -j6

# NB: check the `-D INF_ENGINE_RELEASE` value;
# it should be in the form YYYYAABBCC (e.g., 2020.1.0.2 -> 2020010002)
cd ../opencv
./opencv_setup.sh && make -j6
```
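The `INF_ENGINE_RELEASE` comment above can be turned into a small helper. This is just my reading of the YYYYAABBCC rule (year, then each remaining version component zero-padded to two digits), not a script from the repository:

```python
def inf_engine_release(version: str) -> str:
    """'2020.1.0.2' -> '2020010002' (YYYYAABBCC), per the rule in the build notes."""
    year, *rest = version.split(".")
    # zero-pad each component after the year to two digits and concatenate
    return year + "".join(part.zfill(2) for part in rest)

print(inf_engine_release("2020.1.0.2"))  # 2020010002
```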
```bash
# get all the compiled libs together
cd ../../
cp build/opencv/lib/python3/cv2.cpython*.so create_wheel/cv2/cv2.so
cp dldt/bin/intel64/Release/lib/*.so create_wheel/cv2/
cp dldt/bin/intel64/Release/lib/*.mvcmd create_wheel/cv2/
cp dldt/bin/intel64/Release/lib/plugins.xml create_wheel/cv2/
cp dldt/inference-engine/temp/tbb/lib/libtbb.so.2 create_wheel/cv2/
cp build/ffmpeg/binaries/lib/*.so create_wheel/cv2/

# change RPATH
cd create_wheel
for i in cv2/*.so; do chrpath -r '$ORIGIN' $i; done

# the final .whl will be in create_wheel/dist/
# NB: check the version in `setup.py`
../venv/bin/python3 setup.py bdist_wheel
```
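Before building the wheel, it is easy to forget one of the copied files. A small stdlib sketch that checks the wheel's `cv2/` directory for the pieces the copy commands above put in place -- the helper and its pattern list are mine, not part of the repository:

```python
from pathlib import Path

# patterns mirroring the files copied into the wheel; illustrative subset
REQUIRED = ["cv2.so", "*.mvcmd", "plugins.xml", "libtbb.so.2"]

def missing_files(wheel_dir: str, patterns=REQUIRED):
    """Return the patterns that match no file in the wheel's cv2/ directory."""
    d = Path(wheel_dir)
    return [p for p in patterns if not list(d.glob(p))]

# usage before packaging:
#   print(missing_files("create_wheel/cv2"))  # should print []
```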
## Optional things to play with
Enabling debug output for `find_package()` in the project CMake files could help to solve some problems -- CMake will start to log them.
### GTK support

Make the following changes in `opencv_setup.sh`:

- change the string `-D WITH_GTK=OFF \` to `-D WITH_GTK=ON \`
- `export PKG_CONFIG_PATH=$ABS_PORTION/build/ffmpeg/binaries/lib/pkgconfig:$PKG_CONFIG_PATH` -- you will need to add absolute paths to the `.pc` files; on Ubuntu 18.04, ffmpeg somehow messes with the default values.

It will add ~16 MB to the package.
### Integrated Performance Primitives

Set `-D WITH_IPP=ON` in `opencv_setup.sh`. It will add ~30 MB to the final `cv2.so` size, and it will boost some OpenCV functions. See Intel's official IPP benchmarks (registration may be required).
### MKL-DNN

You need to download an MKL-DNN release and set two flags: `-D GEMM=MKL` and `-D MKLROOT` (details).

OpenVINO comes with a 30 MB `libmkl_tiny_tbb.so`, but you will not be able to compile it yourself, because it is made from the proprietary MKL. Our open-source MKL-DNN experiment ended with a 125 MB `libmklml_gnu.so` and inference speed comparable to the 5 MB openblas (details).
### CUDA

I did not try it. But it cannot be universal: it will only work with the certain combination of GPU+CUDA+cuDNN for which it was compiled.

- Compile OpenCV's 'dnn' module with NVIDIA GPU support
- Use OpenCV's 'dnn' module with NVIDIA GPUs, CUDA, and cuDNN
### OpenMP

It is possible to compile OpenBLAS, dldt, and OpenCV with OpenMP. I am not sure that the result would be better than it is now, but who knows.