Welcome to part 3 of the tutorial. Over at Beetlebox we are excited about the release of Vitis, which unifies all of Xilinx's previous software into a single package. We have been working hard on computer vision using this platform and thought that we could provide some help to others wanting to get started on Xilinx's development boards. This tutorial is a multi-part series covering the basics of getting started with computer vision and Vitis, and will cover:
Using OpenCV on the embedded system (Current Page)
Using Vitis Vision Library
Accelerating Computer Vision using XRT and Kernels
We hope these tutorials will be useful for anyone looking to get into computer vision on FPGAs.
Part 3: Using OpenCV on the Embedded System
OpenCV
OpenCV is one of the most popular computer vision libraries in the world and forms the backbone of many projects, including on FPGAs. In this tutorial we will focus on running OpenCV on the ARM core, giving us a solid foundation for accelerating our computer vision in later tutorials.
OpenCV comes pre-installed on our PetaLinux system, but we still need to do a bit of fiddling with our compiler settings to be able to use it, so in this tutorial we will set it up and run through a simple example of converting an image to greyscale.
Pre-installed version of OpenCV
For 2019.2, the pre-installed version of OpenCV is 3.4.3 and comes with the following enabled:
python3
eigen
jpeg
png
tiff
v4l
libv4l
gstreamer
samples
tbb
gphoto2
It even comes with OpenCV's experimental (contrib) modules. Most notably missing from this installation is ffmpeg, which may be needed to read certain video files. We won't worry about this yet, however, and will instead focus on images.
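If you want to confirm what the on-target build supports, one quick check (assuming the python3 bindings listed above are on the image) is to print OpenCV's build information from the board:

python3 -c "import cv2; print(cv2.__version__); print(cv2.getBuildInformation())"

The build information output should list FFMPEG as NO for this build.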
Transferring files in Software Emulation and in Hardware on Xilinx FPGAs
Using computer vision requires us to test on images and videos, hence knowing how to transfer files properly is critical. This tutorial will also cover transferring images on and off our Zynq.
Begin from the system project we created in the last tutorial:
File->New->Application Project
In the ‘Create a New Application Project’ window:
Project Name: opencv
Click ‘Next’
In the ‘Platform’ window:
Click the platform that we created in the previous tutorial
Click ‘Next’
In the ‘Domain’ window:
Click ‘Next’
In the ‘Templates’ window:
Click ‘Vector Addition’
Click ‘Finish’
In the Explorer window:
Under ‘opencv’ right click the ‘src’ folder
Delete ‘krnl_vadd.cpp’ and ‘vadd.h’
Go to ‘opencv.prj’ and remove ‘krnl_vadd’ from the Hardware Functions
Rename ‘vadd.cpp’ to ‘main.cpp’
In ‘main.cpp’ replace the code with the following:
// A simple greyscale colour conversion in OpenCV
#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs/imgcodecs.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <opencv2/videoio/videoio.hpp>
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
int main()
{
    Mat src_image;
    Mat grey_image;
    // Read the test image from the current working directory
    src_image = imread("test_image.jpg");
    if (src_image.empty())
    {
        cout << "Could not open image" << endl;
        return -1;
    }
    // COLOR_BGR2GRAY is the OpenCV 3.x constant; the legacy CV_BGR2GRAY
    // macro lives in the old C API headers and may not be in scope here
    cvtColor(src_image, grey_image, COLOR_BGR2GRAY);
    imwrite("grey.jpg", grey_image);
    cout << "Created grey image" << endl;
    return 0;
}
To begin with, make sure that we are in software emulation mode
When we save the file, we will notice a lot of red under our includes, so we need to configure our settings to solve this
By default the OpenCV that Vitis links against is the one contained in the Vivado installation. This is problematic for us because it is not the same installation as the one we generated with PetaLinux, meaning we will get version clashes
The way we can fix this is by going into the installation directory of Vivado, which by default is
/tools/Xilinx/Vivado/2019.2/include
Remove the ‘opencv’ and ‘opencv2’ folders (we can just cut the folders up a level in case we ever need to restore them), as in the sketch below
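For example, from a terminal (a sketch assuming the default install path above; adjust if Vivado lives elsewhere):

cd /tools/Xilinx/Vivado/2019.2/include
mv opencv opencv2 ..

This moves the folders up one level rather than deleting them, so they can be put back later if needed.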
We now need to tell Vitis where the correct headers and libraries are, which we do through the project's C/C++ Build Settings
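The exact locations depend on how your PetaLinux sysroot was generated, so treat the following as a rough sketch (the paths and the library list are examples for our greyscale program, not exact values):

Right click ‘opencv’ -> C/C++ Build Settings
Includes: add <sysroot>/usr/include
Libraries (-l): add opencv_core, opencv_imgproc, opencv_imgcodecs
Library search path (-L): add <sysroot>/usr/lib

where <sysroot> is the aarch64 sysroot produced by our PetaLinux build.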
Once the project has built and run in software emulation, looking at our image directory we should have a grey image
As a final note, it would have been possible for us to transfer the image onto the system using XSCT and then execute the program, but transferring the file on boot saves a little bit of time
Running our program on the Zynq itself
Since there are no kernels to emulate, we will skip the hardware emulation part and run our program straight on hardware
Swap configurations by going to the ‘opencv.prj’ file and changing ‘Active Build Configuration’ to Hardware
The C/C++ build settings are not saved between different build configurations, so we need to repeat the process of getting the OpenCV libraries in place
Once all the libraries are back in place, clean the project by right clicking on ‘opencv’ and clicking ‘Clean Project’
Build the project
The project build will be quick here because there are no kernels, meaning Vitis does not need to perform any synthesis. The issue is that Vitis will only output ‘opencv.exe’ into our ‘sd_card’ folder. Luckily, we can just re-use the SD card image from our previous tutorial and transfer our .exe file onto it
Boot and connect the Zynq platform as in previous tutorials
To transfer our files we will be using scp.
Connect an ethernet cable from the Zynq board to the host
We need to configure our Zynq’s address by running the following commands on the Zynq:
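For example (the addresses below are placeholders; adjust them to your own network):

On the Zynq:
ifconfig eth0 192.168.0.10 netmask 255.255.255.0 up

On the host, copy the executable and a test image over, then run the program on the board:
scp opencv.exe test_image.jpg root@192.168.0.10:/mnt
ssh root@192.168.0.10 'cd /mnt && ./opencv.exe'

If everything works, grey.jpg will appear in /mnt, which we can copy back to the host with scp.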
Now we have a basic framework in place for getting image data into our FPGA, processing it and then getting it back out. Using this we now have the basis for accelerating our computer vision. Next time we will look into the Vitis Vision library and how we can use this great resource for accelerating computer vision.
If you have enjoyed this tutorial but are in need of talent to build advanced Computer Vision systems on FPGAs, consider joining our ClickCV Early Access programme. ClickCV Early Access provides a bespoke service and support for developing advanced Computer Vision systems on FPGAs. We use our own proprietary Computer Vision library, ClickCV, to provide our clients with the cutting edge in low-latency, high-definition processing. Contact us today to find out how we could build your next-generation system.
About the Author: Andrew Swirski is the founder and managing director of Beetlebox, a Computer Vision Acceleration specialist. The company develops a Computer Vision Acceleration library called ClickCV, which is designed to fully utilise the hardware adaptability and performance of FPGA chips. Beetlebox is currently running an Early Access programme, where the company provides a bespoke service and support to develop clients' Computer Vision systems on FPGAs. Before Beetlebox, Andrew Swirski worked at Intel (formerly Altera) on FPGA encoding and decoding. He completed a Master's in Electrical and Electronic Engineering at Imperial College London in 2017.
Comments
Alex
Is the OpenCV Transparent API supported on multicore SoC FPGAs? In addition to using PL acceleration, can OpenCV use multicore or GPU acceleration on the Xilinx platform?
This would make it easier to port code from other development platforms such as the Raspberry Pi.
Andrew Swirski
Hi Alex,
Unfortunately, I do not believe the Transparent API is supported by default for multicore SoCs. Also, the ZCU104 runs an Arm Mali-400, which I believe only supports OpenGL ES 2.0 and not OpenCL.
In terms of porting over code, you really do want all the processing to be accelerated through the PL, otherwise you end up not taking full advantage of the FPGA. One of the major benefits of using FPGAs is that by setting up full video pipelines through Vitis Vision or our own ClickCV you do not need to constantly move data between memories, leading to significant decreases in latency and power consumption.
Ayushi
I am unable to successfully build the project according to the instructions given and so the SW-emulation is failing.
After including the correct paths in the C/C++ build settings for OpenCV, I am unable to get the option of adding an image (_vimage) to the emulator.
The error reports:
makefile:82: recipe for target ‘opencv’ failed
ERROR: [v++ 60-602] Source file does not exist:Emulation-SW/binary_container_1.xclbin file missing
Andrew Swirski
Hi Ayushi,
binary_container_1.xclbin is needed for the vector addition example that we are changing. If the build is still asking for it, that may indicate that krnl_vadd has not been successfully removed. Please ensure that you have removed and changed all files associated with krnl_vadd.
If that does not work, instead of using the vector addition template, you can try starting with an empty project or a different template.
Ayushi Agarwal
Thanks for your reply. What I was wondering is how we are building the project without any kernel code. Am I missing something in the flow?
Since we removed the accelerated function from the binary container, the package generation does not complete and terminates with an error.
Andrew Swirski
In one of your other posts you mentioned that you are using 2020.1. Do keep in mind that 2019.2 and 2020.1 are very different and that we are redoing these tutorials for 2020.1. To your question, Vitis programs both the software and the hardware. There is no requirement to have kernel code to program the CPU and so we can just delete the binary container. This tutorial is specifically to get OpenCV running on our CPU so that is what we focus on doing.
shinp
Nice Tutorial!
How can I use video streaming with the tutorial settings? It seems that the PetaLinux board detects the mp4 file in /mnt but the v4l2 driver does not want to read it somehow…
Andrew Swirski
To read video files, you can use OpenCV's VideoCapture:
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html
Do remember that you need the correct codec to decode your file and run it, so make sure you have a compatible codec.
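As a minimal C++ sketch (the file name and per-frame processing are placeholders; the on-target build must include a codec or GStreamer plugin that can decode the container):

// Minimal sketch: read frames from a video file with cv::VideoCapture
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/videoio/videoio.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("test_video.mp4"); // example file name
    if (!cap.isOpened())
    {
        std::cout << "Could not open video (missing codec?)" << std::endl;
        return -1;
    }
    cv::Mat frame, grey;
    while (cap.read(frame)) // read() returns false at end of file or on decode failure
    {
        cv::cvtColor(frame, grey, cv::COLOR_BGR2GRAY);
        // process each greyscale frame here
    }
    return 0;
}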
JL
Very helpful tutorial!
In this example the OpenCV program is run on the CPU.
But in a more ‘real’ case, let's say we want to run a Vitis Vision library kernel accelerated in the FPGA and the input is a live camera stream. How could we interface the camera stream and send it to the kernel? Would it be possible to connect the OpenCV stream to the kernel functions?