Innovative business outcomes can be delivered when automation is powered by artificial intelligence at the core of an intelligent enterprise. The emerging consumer products retail ecosystem can deliver high-value use cases by connecting digital consumer insights with a shopper-centric supply chain. Some of these high-return use cases can be brought to life by integrating enterprise systems with autonomous mobility systems. A fully integrated artificial intelligence (AI) system can recognize objects and handle them without human intervention while navigating storefronts, shop floors, fulfillment centers, distribution centers, and warehouses.

Our autonomous picking robot can accurately detect and pick items in unstructured environments. The robot receives an order to pick several items and navigates to the shelf. Using machine learning, it recognizes the item to pick, then positions itself to reach out and grab it. The robot's view of the picking action can be seen in the video above: the left side of the screen shows the machine vision feed as the robot navigates to pick the item, and when the desired object is recognized, the robot extends its arm and picks it up. The right side of the screen shows the LiDAR map that the robot uses to navigate. The mobile robot brings the item to a transport robot that holds the rest of the order. From there, the transport robot delivers the filled bin to the packing station for boxing and shipping, and the picking robot returns to pick its next order. The AI mobile picking robot delivers low-cost, fast, reliable mobile picking for structured and unstructured environments, including retail, warehouse, and medical applications.
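The pick-and-transfer cycle described above can be sketched as a simple state machine. This is a minimal illustrative model, not the product's actual control software; all names (`PickingRobot`, `Order`, the states) are assumptions made for the example.

```python
# Hypothetical sketch of the picking robot's order cycle:
# navigate to shelf -> recognize and pick item -> transfer bin -> idle.
from dataclasses import dataclass, field
from enum import Enum, auto


class State(Enum):
    IDLE = auto()          # waiting for the next order
    NAVIGATING = auto()    # driving to the shelf using the LiDAR map
    PICKING = auto()       # machine vision recognizes the item; arm grabs it
    TRANSFERRING = auto()  # handing the filled bin to the transport robot


@dataclass
class Order:
    items: list


@dataclass
class PickingRobot:
    state: State = State.IDLE
    bin: list = field(default_factory=list)

    def run(self, order: Order) -> list:
        """Step through the cycle for each item, then hand off the bin."""
        for item in order.items:
            self.state = State.NAVIGATING  # drive to the item's shelf
            self.state = State.PICKING     # recognize item, extend arm, grab
            self.bin.append(item)
        self.state = State.TRANSFERRING    # transport robot takes the bin
        delivered, self.bin = self.bin, []
        self.state = State.IDLE            # return for the next order
        return delivered
```

In a real deployment each state transition would be driven by sensor feedback (navigation goals reached, object detected, grasp confirmed) rather than executed sequentially.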

This fully integrated AI system is built by unifying edge semiconductor processors, accelerators, sensor fusion, and trained neural networks. Deep learning training runs in the cloud, while optimized machine learning inference runs at the edge. Industry 4.0 business use cases are defined and deployed with these AI systems. The technical competencies include semantic segmentation neural networks, machine vision, sensor fusion across LiDAR, RGB-D camera, and IMU data, and the Robot Operating System (ROS).
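One common form of the LiDAR/IMU sensor fusion mentioned above is a complementary filter: the IMU provides fast but drifting heading updates, while LiDAR scan matching provides slower, absolute corrections. The sketch below is a simplified illustration under that assumption; the blend gain and signal names are not taken from the actual system.

```python
# Minimal complementary-filter sketch: propagate heading with IMU deltas
# (fast, drift-prone), then correct with absolute LiDAR-derived headings.
def fuse_heading(imu_headings, lidar_headings, alpha=0.98):
    """Return fused heading estimates (radians) for each time step.

    alpha close to 1.0 trusts the IMU short-term and applies only a
    small LiDAR correction per step, smoothing out LiDAR noise while
    bounding IMU drift.
    """
    fused = []
    estimate = lidar_headings[0]   # initialize from the absolute sensor
    prev_imu = imu_headings[0]
    for imu, lidar in zip(imu_headings, lidar_headings):
        estimate += imu - prev_imu                         # IMU delta update
        prev_imu = imu
        estimate = alpha * estimate + (1 - alpha) * lidar  # LiDAR correction
        fused.append(estimate)
    return fused
```

When both sensors agree the estimate tracks them exactly; when the IMU drifts, each LiDAR correction pulls the estimate back toward the absolute heading. Production systems typically use an extended Kalman filter for the same job, but the blending idea is the same.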