FPGA-based drawing robot
Yang Yuxia, Xu Chencheng, Wang Mengrou
This design aims to reduce the human effort required in children's drawing education by building a robotic arm that can display, copy, and reproduce graphics in real time. The key challenges include controlling the movement of the mechanical pen, acquiring images, detecting gesture trajectories in real time, transforming image data, exchanging data with the hardware, and improving image-transmission speed. We therefore designed a robotic arm that combines machine vision, image-transmission technology, and control technology to draw built-in images, copy simple still images, and draw by tracking gesture trajectories in real time.
The work integrates machine vision, image-transmission technology, and motion control into a single intelligent application, with an FPGA (PYNQ) as the core processor. It can control the robotic arm to draw a built-in image; capture a static image through the camera in real time; or recognize and track hand-motion trajectories from the camera's video stream, with the robotic arm drawing the trajectory in real time. The robotic arm suits a wide range of application scenarios: it can be used on the go, saves time, and supports simultaneous teaching. It offers three modes that can be switched easily according to user intent.
The system consists of a camera module, a motor-drive module, a contour-acquisition module, a display module, and a processor module. In built-in mode, the FPGA reads the built-in image and issues commands that drive the motors, producing the arm's drawing motion. In processor mode, the ARM core transfers a stored PNG image to BRAM, from which the FPGA reads it. In contour-acquisition mode and gesture-trajectory-tracking mode, the USB camera captures either a still image or the gesture trajectory; the ARM core extracts the image contour or trajectory, and the resulting data is transmitted over the AXI bus to the FPGA board, whose motor-control logic reproduces the captured image. The processed image is also output to a display over the HDMI interface.
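As a sketch of how the processor-mode data path might look in software, draw commands can be packed into fixed-width words before the ARM core writes them to BRAM for the FPGA to consume. The field layout below (12-bit x, 12-bit y, 1-bit pen flag in a 32-bit word) is an assumption for illustration, not the project's actual interface:

```python
# Hypothetical sketch: pack (x, y, pen_down) draw commands into 32-bit
# words for transfer from the ARM core to BRAM over the AXI bus.
# The bit layout (12-bit x, 12-bit y, 1-bit pen flag) is an assumption.

def pack_command(x: int, y: int, pen_down: bool) -> int:
    """Pack one draw command into a single 32-bit word."""
    assert 0 <= x < 4096 and 0 <= y < 4096
    return (int(pen_down) << 24) | (y << 12) | x

def unpack_command(word: int):
    """Inverse of pack_command, as the FPGA side would decode it."""
    return word & 0xFFF, (word >> 12) & 0xFFF, bool((word >> 24) & 1)

# On a real PYNQ board the packed words would be written to BRAM via
# pynq.MMIO or a DMA transfer; here we only round-trip them in software.
commands = [pack_command(x, y, True) for x, y in [(10, 20), (11, 21)]]
```

Packing coordinates into whole words keeps each AXI transaction carrying a complete command, which is one way to raise effective transfer throughput.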
The innovation point of this work is:
1. Implemented a custom PYNQ overlay that speeds up data transfer and enables efficient data exchange between the ARM core and the FPGA fabric.
2. Implemented image-contour recognition that yields less noise and clearer contours than comparable contour-recognition algorithms.
3. Implemented gesture-trajectory tracking: the fingertip's position and trajectory are obtained in real time, enabling the arm to draw while tracking.
4. In the FPGA section, we used a connected-domain (connected-component) algorithm and prioritized its output so that images are drawn as continuous strokes rather than printed point by point. The arm also returns to the origin after drawing, ready for the next run.
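A minimal software sketch of innovation 4: label the connected components of a binary image, then order them so each component is drawn as one continuous stroke, finishing with a return to the origin. The greedy nearest-stroke ordering and 4-connectivity below are assumptions for illustration, not necessarily the project's exact algorithm:

```python
# Hypothetical sketch: connected-component labeling plus a greedy
# stroke ordering, so the pen draws each component continuously
# instead of printing points one by one, then returns to the origin.
from collections import deque

def connected_components(img):
    """Return a list of components, each a list of (row, col) pixels."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    comps = []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:  # BFS flood fill over 4-neighbors
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and img[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                comps.append(comp)
    return comps

def drawing_order(comps):
    """Greedily pick the stroke nearest the pen, then return home."""
    pos, order, remaining = (0, 0), [], comps[:]
    while remaining:
        nxt = min(remaining,
                  key=lambda c: min(abs(p[0] - pos[0]) + abs(p[1] - pos[1])
                                    for p in c))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt[-1]
    order.append([(0, 0)])  # return to the origin for the next run
    return order
```

In hardware this ordering would be computed over the contour data delivered via the AXI bus, but the control flow is the same.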
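For innovation 3, one simple fingertip heuristic is to take the topmost foreground pixel of a binary hand mask (assuming the hand enters the frame from below and is the only foreground object), accumulating these points frame by frame into a trajectory. This plain-Python version is an assumed simplification of the project's ARM-side tracking:

```python
# Hypothetical sketch: fingertip localization on a binary hand mask
# (1 = hand, 0 = background) and trajectory accumulation over frames.
# The topmost-pixel heuristic is an assumption for illustration.

def fingertip(mask):
    """Return (row, col) of the topmost foreground pixel, or None."""
    for r, row in enumerate(mask):
        for c, value in enumerate(row):
            if value:
                return (r, c)
    return None

def track(frames):
    """Accumulate a fingertip trajectory across a sequence of masks."""
    trajectory = []
    for mask in frames:
        tip = fingertip(mask)
        if tip is not None:  # skip frames with no hand visible
            trajectory.append(tip)
    return trajectory
```

The resulting trajectory points are what the AXI bus would carry to the FPGA so the arm can draw while tracking.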
FPGA internal block diagram:
ARM internal block diagram:
Source Code Github Link