Binocular machine vision system based on PYNQ


Guo Chao, Niu Yifeng, Sun Zhengyuan, Cao Yunhe (Mentor)

Xidian University

 

Overview

This work presents a low-cost, highly reliable, and easily portable binocular-vision cargo sorting system. The heterogeneous Zynq-7020 chip serves as the core processor: a binocular camera collects image data and transmits it to the PYNQ board, where the image processing algorithms run on the PS (Processing System) side to extract information such as object type and depth. The host computer sends sorting commands to the PS over UDP. Based on these commands and the extracted image information, the PS sends control signals to the PL (Programmable Logic) side, which generates PWM waveforms to drive the robot arm to the designated position, grab the corresponding cargo, and place it in the target area.
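As a concrete illustration of the host-to-PS link described above, the following is a minimal sketch of receiving one sorting command over UDP on the PS side. The port number and the packet format (a comma-separated colour-to-area mapping) are assumptions for illustration only; the original text does not specify them.

# Minimal sketch of the host-to-PS command link, assuming the host sends a
# small UDP datagram such as "red:1,green:2,blue:3" that maps each block
# colour to a target sorting area (packet format and port are assumptions).
import socket

UDP_PORT = 8888  # hypothetical port; not specified in the project text

def receive_sort_plan():
    """Block until one command packet arrives and parse it into a dict."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", UDP_PORT))
    data, _addr = sock.recvfrom(1024)
    sock.close()
    plan = {}
    for item in data.decode().split(","):
        colour, area = item.split(":")
        plan[colour.strip()] = int(area)
    return plan          # e.g. {"red": 1, "green": 2, "blue": 3}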

According to the project requirements analysis, the functions that must be implemented on the Programmable Logic side of the FPGA are HDMI display of the image and robot arm control. The official Base overlay contains many unneeded functions; in particular, PMODA and PMODB are driven by MicroBlaze soft cores, which adds complexity and long compilation times. To speed up debugging, the system was re-customized, as sketched below.
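A minimal sketch of loading such a re-customized overlay from Python on the PS side follows. The bitstream file name and the AXI-Lite base address of the robot arm PWM controller are assumptions; in practice they come from the actual Vivado design and its hardware description file.

# Minimal sketch of loading the re-customized overlay instead of the official
# Base overlay. The bitstream name and the PWM controller's base address are
# placeholders for illustration; they depend on the actual Vivado design.
from pynq import Overlay, MMIO

overlay = Overlay("sorting_system.bit")   # hypothetical bitstream name

# The custom robot-arm PWM controller is reached over the GP0 port; its base
# address would come from the Vivado address editor / overlay metadata.
PWM_CTRL_BASE = 0x43C00000                # assumed AXI-Lite base address
pwm_ctrl = MMIO(PWM_CTRL_BASE, 0x1000)    # map a 4 KiB register window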

To save system debugging time and improve stability, the design reuses the video IP from the official Base project to realize HDMI output. This module communicates with the Processing System through the HP0 interface at a high data rate, while the video module and the customized robot arm control module are configured through the GP0 interface. The four PWM signals that drive the sorting robot arm are output on pins 1 to 4 of the PMODB interface for connection to the external arm.

The system workflow is as follows. First, the binocular camera data is captured with cv2 on the hard core (PS); each capture yields two 640x480 images. The depth-of-field algorithm and the localization algorithm then compute the two-dimensional coordinates of the object block, positioning the object to be sorted. Different kinds of objects are represented by different colours. The PS receives a UDP control packet from the host computer specifying the sorting area assigned to each kind of item. The robot arm control logic then calls the driver, which sends the arm control instruction set through the MMIO interface (over GP0) to the arm control module on the PL side, and the arm performs the sorting operation. The processed image is transferred by the video driver over the HP0 interface to the video IP on the PL side and finally output to an external display. A sketch of this PS-side loop is given below.

The system consists of a binocular camera module, a robot arm, a display, a PYNQ core board, and a host computer. In the basic working mode, PYNQ controls the binocular camera to capture images in real time, performs depth measurement and target calibration on the Processing System side, obtains the position and distance of the object, and overlays the calibration and distance information on the real-time picture shown on the LCD screen through the HDMI interface. At the same time, the object's position and colour are obtained by the algorithms running on PYNQ, and the arm is commanded to adjust its posture, grab the block, and place it in the designated area. In the extended working mode, all functions of the system are user-controllable: the user can specify in the host computer interface which colour of block should be placed in which area, and the system works according to that setting.
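The sketch below illustrates one pass of the PS-side loop described above, assuming the two cameras appear as standard V4L2 devices, the blocks are colour-coded, OpenCV's block-matching stereo is used for the disparity map, and the PL arm controller exposes one 32-bit duty-cycle register per PWM channel. The register map, camera indices, HSV thresholds, and servo values are all illustrative, not the project's actual parameters.

# One iteration of a hypothetical PS-side processing pass: capture a stereo
# pair, compute disparity, locate a colour-coded block, and command the arm.
import cv2
import numpy as np
from pynq import MMIO

WIDTH, HEIGHT = 640, 480
PWM_CTRL_BASE = 0x43C00000                     # assumed AXI-Lite base address
pwm = MMIO(PWM_CTRL_BASE, 0x1000)

capL = cv2.VideoCapture(0)                     # left camera (assumed index)
capR = cv2.VideoCapture(1)                     # right camera (assumed index)
for cap in (capL, capR):
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, WIDTH)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, HEIGHT)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def locate_block(frame_bgr, lower_hsv, upper_hsv):
    """Return the (x, y) centroid of the pixels inside the colour range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

okL, frameL = capL.read()
okR, frameR = capR.read()
if okL and okR:
    grayL = cv2.cvtColor(frameL, cv2.COLOR_BGR2GRAY)
    grayR = cv2.cvtColor(frameR, cv2.COLOR_BGR2GRAY)
    disparity = stereo.compute(grayL, grayR)   # int16, scaled by 16

    # Example: locate a "red" block (HSV thresholds are illustrative only).
    pos = locate_block(frameL, np.array([0, 120, 70]), np.array([10, 255, 255]))
    if pos is not None:
        x, y = pos
        d = disparity[y, x] / 16.0             # disparity in pixels (may be <= 0 if unmatched)

        # Command the PL arm controller: here each of the four PWM channels
        # gets a duty-cycle word derived from the target pose (placeholders).
        duty_words = [1500, 1200, 1600, 900]
        for ch, word in enumerate(duty_words):
            pwm.write(ch * 4, word)            # one 32-bit register per channel

In the actual system the annotated frame would additionally be handed to the video driver for HDMI output over HP0, which is omitted here.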

The innovations of this work are:

1. Target recognition and object feature extraction are achieved with a binocular camera. Compared with other target recognition schemes, this approach applies to a wide range of scenarios: recognizing different objects requires no change to the hardware structure, only training on feature samples of the corresponding objects, yielding efficient and accurate object recognition.

2. Once an object is recognized, its three-dimensional distance can be extracted from the binocular parallax, giving richer position information than other sensor schemes (see the depth-from-disparity sketch after this list).

3. The algorithms are deployed on PYNQ, which greatly enhances the scalability of the system. Hardware acceleration of the recognition algorithm on the FPGA can substantially outperform other platforms, and the overall system is low-power, low-cost, and highly scalable.
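For reference, the distance claim in point 2 rests on the standard depth-from-disparity relation Z = f * B / d, where f is the focal length in pixels, B the camera baseline, and d the disparity. A minimal sketch with placeholder calibration values (not the project's actual numbers) follows.

# Standard depth-from-disparity relation; calibration constants are placeholders.
FOCAL_PX = 700.0       # focal length in pixels (from stereo calibration)
BASELINE_M = 0.06      # distance between the two camera centres, in metres

def depth_from_disparity(d_px):
    """Return depth in metres for a disparity of d_px pixels (d_px > 0)."""
    return FOCAL_PX * BASELINE_M / d_px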

 

System Architecture


Design Demonstration

 


 

 

Source Code Github Link

Description: The project will continue to be improved and is planned to enter next year's Graduate Electronic Design Contest, so the source code cannot be opened for the time being.

March 12, 2019, 12:53