With the continuous growth of the domestic new energy vehicle (NEV) fleet, charging has become an increasingly prominent problem, and the demand for supporting charging infrastructure has expanded further. However, traditional fixed charging piles inevitably lead to issues such as charging queues and internal combustion vehicles occupying charging spots. The "one charging pile per parking space" model has proven inefficient in actual operation, producing a shortage of charging infrastructure and low overall utilization at the same time. Against this backdrop, an innovative charging solution has emerged: the mobile charging robot.
The mobile charging robot functions like a large portable power bank. It is highly autonomous and flexible, and it integrates advanced technologies such as AI and big data. With a simple operation on a mobile phone, the robot responds autonomously, navigates precisely to the side of the vehicle, and begins charging.
Advantages of the Qiyang RK3588 core board in mobile charging robots
The RK3588 core board integrates a high-performance octa-core processor, supports AI algorithms, and offers a rich set of input/output interfaces. It coordinates the robot's various modules, processes data from multiple sensors, and handles tasks such as environmental perception, path planning, and the charging operation itself.
The core of the mobile robot lies in precise navigation. The RK3588 integrates a quad-core Cortex-A76 and a quad-core Cortex-A55, providing the computing power to process navigation algorithms and sensor data in real time. Its built-in 6 TOPS NPU can run deployed AI models, which is crucial for real-time environmental perception and decision-making in dynamic environments: it enables accurate identification of vehicle models and charging-port locations, allows flexible adaptation to changing routes and obstacles, and significantly improves navigation precision.
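To make the NPU point concrete, below is a minimal sketch of how a perception model could be run on the RK3588's NPU using Rockchip's RKNN Toolkit Lite 2 Python API. The model file name (chargeport_detector.rknn), the 640x640 input size, and the post-processing step are hypothetical placeholders, not part of Qiyang's actual software stack.

```python
import cv2
from rknnlite.api import RKNNLite  # Rockchip's on-device NPU runtime for RK3588

# Hypothetical pre-converted model that detects the vehicle's charging port.
MODEL_PATH = "chargeport_detector.rknn"

rknn = RKNNLite()
if rknn.load_rknn(MODEL_PATH) != 0:
    raise RuntimeError("failed to load RKNN model")

# Spread inference across all three NPU cores of the RK3588 (6 TOPS total).
if rknn.init_runtime(core_mask=RKNNLite.NPU_CORE_0_1_2) != 0:
    raise RuntimeError("failed to initialize the NPU runtime")

# Grab one camera frame and prepare it for the (assumed) 640x640 RGB input.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    img = cv2.cvtColor(cv2.resize(frame, (640, 640)), cv2.COLOR_BGR2RGB)
    outputs = rknn.inference(inputs=[img])
    # Decoding boxes and picking the charging-port location is model-specific.
    print("received", len(outputs), "output tensors from the NPU")

cap.release()
rknn.release()
```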
It supports simultaneous input from four cameras, capturing scene features from multiple angles. Combined with the built-in ISP, this enables image fusion, giving the robot a more complete understanding of its surroundings and improving the accuracy of its localization algorithms.
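As a rough illustration of multi-camera capture, the sketch below opens four video devices and grabs a frame from each. The device node names are assumptions; the actual nodes on an RK3588 board depend on the BSP, device tree, and ISP pipeline configuration.

```python
import cv2

# Assumed V4L2 device nodes for the four cameras; real node names vary by BSP.
CAMERA_NODES = ["/dev/video0", "/dev/video11", "/dev/video22", "/dev/video31"]

caps = [cv2.VideoCapture(node, cv2.CAP_V4L2) for node in CAMERA_NODES]

frames = []
for node, cap in zip(CAMERA_NODES, caps):
    ok, frame = cap.read()
    if ok:
        frames.append(frame)
    else:
        print(f"no frame from {node}")

# Downstream, frames from the different angles would be fused and fed to the
# localization algorithm; here we only report how many were captured.
print(f"captured frames from {len(frames)} cameras")

for cap in caps:
    cap.release()
```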
The core board also has powerful codec capabilities, supporting 8K@60fps H.265 and 8K@30fps H.264 hardware video decoding, as well as 8K@30fps H.265/H.264 hardware video encoding. It can efficiently compress the data captured by the cameras, reducing bandwidth requirements during transmission.
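One common way for application code to reach the hardware encoder is through a GStreamer pipeline; the sketch below pushes OpenCV frames into the Rockchip MPP H.265 encoder element (mpph265enc). Whether that element and OpenCV's GStreamer backend are present depends on the BSP and on how OpenCV was built, so treat this as an assumption-laden example rather than a guaranteed recipe.

```python
import cv2

WIDTH, HEIGHT, FPS = 1920, 1080, 30

# Raw BGR frames from appsrc are converted to NV12, compressed by the Rockchip
# MPP H.265 hardware encoder, and muxed into an MP4 file.
pipeline = (
    "appsrc ! videoconvert ! video/x-raw,format=NV12 "
    "! mpph265enc ! h265parse ! mp4mux ! filesink location=camera_clip.mp4"
)

writer = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0, FPS, (WIDTH, HEIGHT), True)
cap = cv2.VideoCapture(0)

for _ in range(FPS * 10):  # record roughly ten seconds
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(cv2.resize(frame, (WIDTH, HEIGHT)))

cap.release()
writer.release()  # finalizes the MP4 file
```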
It supports multiple interfaces such as HDMI, MIPI-CSI, USB, and PCIe for connecting human-machine interfaces, cameras, LiDAR, and sensors. It also allows the connection of 4G/5G and WiFi modules for uploading sensor and video data. Additionally, through network connectivity, it enables remote monitoring of the robot's status and location, facilitating management.
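For the remote-monitoring side, a lightweight upload of status and location over the 4G/5G or WiFi link could look like the sketch below, using the paho-mqtt client. The broker address, topic layout, and payload fields are hypothetical examples, not a documented interface of the actual robot.

```python
import json
import time

import paho.mqtt.client as mqtt  # requires paho-mqtt >= 2.0 for this constructor

BROKER = "fleet.example.com"                   # hypothetical management-platform broker
TOPIC = "robots/charging-robot-01/telemetry"   # hypothetical topic layout

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="charging-robot-01")
client.connect(BROKER, 1883)
client.loop_start()

# Example status report; on a real robot these values would come from the
# battery management system and the localization module.
telemetry = {
    "timestamp": int(time.time()),
    "state": "charging",
    "battery_soc_percent": 68,
    "position": {"x": 12.4, "y": 3.7},  # meters in the parking-lot map frame
}
client.publish(TOPIC, json.dumps(telemetry), qos=1)

client.loop_stop()
client.disconnect()
```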
It supports both Android and Debian operating systems, providing a stable software foundation and a user-friendly interface that improve the robot's functionality, scalability, and overall user experience.