5G-ERA (5G Enhanced Robot Autonomy), alongside the other ICT-41 projects, aims to provide an enhanced 5G experimentation facility and relevant Network Applications for third-party application developers, giving them a 5G experimentation playground in which to test and qualify their applications.


Robot autonomy is essential for many 5G vertical sectors and can provide multiple benefits in automated mobility, Industry 4.0 and healthcare. 5G technology, in turn, has great potential to enhance robot autonomy.


Use cases from four vertical sectors, namely public protection and disaster relief (PPDR), transport, healthcare and manufacturing, will be validated in the project through rapid prototyping of Network Application solutions and enhanced vertical experiences of autonomy. These case studies showcase the potential of 5G and 5G-ERA to accelerate the ongoing convergence of robotics, Artificial Intelligence (AI), Machine Learning (ML) and cloud computing, and to unlock the next level of autonomy through 5G-based learning in general.

BringAuto in the project

Our use case is situated in an industrial environment where long distances between various buildings and halls need to be covered by autonomous vehicles in a timely manner, while the terrain is far from ideal. In most cases, such automation/robotisation trends are motivated by, for example, a lack of qualified employees or high staff costs, low scalability and limited time flexibility (shift operation), HSE risks of employee injuries, and rising CO2 emissions.


BringAuto’s autonomous industrial Delivery Robot is used for cargo transportation within the intralogistics of production plants and operates mainly outdoors. Its electric four-wheel chassis, the size of a passenger car, is therefore designed and built to be robust enough to withstand ever-changing weather and operating conditions. The robotic platform is equipped with drive-by-wire technology, a sensor suite enabling autonomous driving and teleoperation (remote control), and other electronics. The robot weighs 900 kg, can carry another 700 kg of cargo, and travels at speeds of up to 40 km/h with a range of up to 70 km. It uses camera-based teleoperation and LiDAR-based autonomy with corrections from precise GPS. A further set of cameras and radars, together with odometry data, is also used for navigation on pre-mapped routes. The whole system is modular and customisable, in both HW and SW.

5G-enhanced semi-autonomous transport

Most industrial areas see mixed traffic consisting of pedestrians, cyclists, motorcyclists, passenger cars, light and heavy-duty vehicles, lorries, trucks, tractors, forklifts, trains, etc., so the operational environment is very complex. The robot must therefore not only navigate autonomously through the area but also detect objects and avoid collisions, which is essential for its safe operation. Moreover, off-loading such tasks from the robot to the cloud is important in order to save battery and computing power.


Within the 5G-ERA project, BringAuto’s Delivery Robot is used for the development, testing, validation, verification and deployment of two Network Applications: Train Sensor and Collision Avoidance. An NVIDIA® Jetson Nano™ integrated in the robot runs the respective clients, which communicate over public 5G networks with the 5G-ERA Middleware deployed in the cloud; the Middleware orchestrates the Network Applications (also deployed in the cloud). Video streams from the cameras already integrated in the robot and used by the autonomy system are sent to the cloud for processing, and the results returned by the Network Applications allow the autonomy to act accordingly. This is, in brief, how object (train) detection/tracking and collision warning (avoidance) work and allow the robot to drive safely along its route. If network performance in the area is poor (weak signal strength, high latency/turnaround, low throughput), it is always possible to switch from cloud to local processing for better performance, even though the off-loading benefit is then lost. For this reason, both Network Applications can also be deployed locally on the robot.
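The cloud/local switch described above can be sketched in a few lines. This is a minimal illustration, not the actual 5G-ERA Middleware API: the class names, thresholds and the `select_client` helper are all assumptions made for the example.

```python
# Illustrative sketch only -- class names, thresholds and the helper
# below are assumptions, not the real 5G-ERA Middleware interface.
MAX_LATENCY_MS = 150      # above this, the cloud round-trip is too slow
MIN_THROUGHPUT_MBPS = 10  # below this, video streaming degrades

class NetAppClient:
    """Common interface for a Network Application client."""
    def process(self, frame):
        raise NotImplementedError

class CloudClient(NetAppClient):
    """Sends frames over 5G to the Network Application in the cloud."""
    def __init__(self, middleware_url):
        self.middleware_url = middleware_url  # hypothetical endpoint
    def process(self, frame):
        # in reality: stream the frame via 5G, await detection results
        return {"source": "cloud", "detections": []}

class LocalClient(NetAppClient):
    """Runs the same model directly on the robot's Jetson Nano."""
    def process(self, frame):
        return {"source": "local", "detections": []}

def select_client(latency_ms, throughput_mbps, cloud, local):
    """Fall back to on-robot processing when network performance is poor."""
    if latency_ms > MAX_LATENCY_MS or throughput_mbps < MIN_THROUGHPUT_MBPS:
        return local   # off-loading benefit lost, but results stay timely
    return cloud
```

The key design point is that both clients expose the same interface, so the autonomy stack is indifferent to where the processing actually happens.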

Train Sensor

Railway level crossings are typical of heavy-industry production plants, where trains carry heavy loads, and of larger traditional industrial areas built when rail transport was commonly used for cargo. The railway crosses the road in many places on the robot’s route, and trains sometimes approach a crossing from the robot’s blind spot. It is therefore essential for safe operation that the robot detects them before it crosses the railway. Two automotive-grade GMSL side cameras with a 120° FoV, mounted front right and front left in the robot’s front sensor box, are used for this purpose.


Workflow of the Train Sensor Network Application is as follows:


  • The robot stops in front of a crossing.
  • It triggers detection for as long as is needed to be 100 % sure the crossing is safe.
  • The YOLOv5 TensorRT Detector detects all trains in the vicinity of the particular crossing. The four possible results are NO TRAIN DETECTED; TRAIN DETECTED, NOT MOVING; TRAIN DETECTED, MOVING; NOT POSSIBLE TO DETERMINE.
  • If no trains are detected, or all detected trains have not moved for a configurable period (e.g. 7 seconds), the robot continues on its way.


In this way, the Train Sensor Network Application does not run all the time but is triggered only at railway crossings, which contributes to the desired off-loading of computing power.
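The crossing decision above can be sketched as a small check over the detector's recent results. The state names mirror the four detector outputs; the `may_cross` helper, the frame rate and the history representation are assumptions for illustration, not the actual Network Application code.

```python
from enum import Enum, auto

# Sketch of the Train Sensor decision logic -- state names follow the
# four detector results; the helper and timings are illustrative.
class TrainState(Enum):
    NO_TRAIN_DETECTED = auto()
    TRAIN_DETECTED_NOT_MOVING = auto()
    TRAIN_DETECTED_MOVING = auto()
    NOT_POSSIBLE_TO_DETERMINE = auto()

def may_cross(history, clear_seconds=7.0, fps=10):
    """Return True when the last `clear_seconds` of detector results
    contain no moving train and no undeterminable result."""
    needed = int(clear_seconds * fps)   # consecutive safe frames required
    if len(history) < needed:
        return False                    # not observed long enough yet
    safe = {TrainState.NO_TRAIN_DETECTED,
            TrainState.TRAIN_DETECTED_NOT_MOVING}
    return all(state in safe for state in history[-needed:])
```

Note that NOT POSSIBLE TO DETERMINE is treated the same as a moving train: when in doubt, the robot stays put, which matches the "100 % sure" requirement.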



Collision Avoidance

Obstacle Detection and Collision Avoidance systems are essential for safely navigating and stopping any autonomous vehicle, regardless of the exact robot deployed or its designated environment. In this case, the camera-based front collision warning system detects objects of defined classes, tracks them, projects them into the robot’s space and predicts their paths in relative coordinates. If an object is about to enter, or has already entered, the robot’s caution zone, which can be configured according to the exact needs and the robot’s speed on the route, the Collision Avoidance Network Application generates a corresponding signal for the robot to act on. Ideally, an object should never enter the robot’s danger zone; otherwise a collision may occur even though the robot immediately performs emergency braking.


For this purpose, one automotive-grade GMSL front camera with a 120° FoV mounted in the robot’s front sensor box is used. The workflow of the Collision Avoidance Network Application is as follows:


  • The robot drives at full speed as long as there is no object detected or the detected object is not about to enter any of the robot’s zones (no signal generated).
  • If the object is about to enter the caution zone, the robot slows down to half of its full speed (CAUTION signal generated).
  • If the object is in the caution zone, the robot stops (WARNING signal generated).
  • If the object is in the danger zone, the robot performs emergency braking (DANGER signal generated).
  • The robot remains stopped and does not continue until it is cleared to do so (CAUTION or no signal generated).
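The zone-to-signal mapping in the steps above can be sketched as a simple lookup. The zone radii, signal names as Python identifiers and both helper functions are illustrative assumptions; in the real system the caution zone is configured per route and robot speed.

```python
from enum import Enum

# Sketch of the zone-based signalling -- radii and helpers are
# illustrative; the real caution zone is configured per route.
class Signal(Enum):
    NONE = "NONE"
    CAUTION = "CAUTION"   # object about to enter the caution zone
    WARNING = "WARNING"   # object inside the caution zone
    DANGER = "DANGER"     # object inside the danger zone

DANGER_RADIUS_M = 2.0     # assumed danger-zone radius
CAUTION_RADIUS_M = 8.0    # assumed caution-zone radius

def zone_signal(distance_m, predicted_distance_m):
    """Map an object's current and predicted distance to a signal."""
    if distance_m <= DANGER_RADIUS_M:
        return Signal.DANGER          # emergency braking
    if distance_m <= CAUTION_RADIUS_M:
        return Signal.WARNING         # stop
    if predicted_distance_m <= CAUTION_RADIUS_M:
        return Signal.CAUTION         # slow to half speed
    return Signal.NONE                # full speed

def target_speed(full_speed_kmh, signal):
    """Translate the signal into the robot's target speed."""
    return {
        Signal.NONE: full_speed_kmh,
        Signal.CAUTION: full_speed_kmh / 2,
        Signal.WARNING: 0.0,
        Signal.DANGER: 0.0,           # plus emergency braking
    }[signal]
```

Path prediction feeds `predicted_distance_m`, which is why the robot can already slow down before an object actually enters the caution zone.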


The autonomous driving system’s ability to respond accordingly rests on real-time object detection and tracking, which in this case runs for the whole time the robot is operating.



Project Partners

  • Robotnik Automation
  • Cognitechna
  • eBOS Technologies
  • NEC Laboratories Europe
  • Brno University of Technology
  • OTE Group
  • Cal-Tek
  • Wings ICT Solutions
  • University of Bedfordshire
  • HAL Robotics
  • TWI
  • Iquadrat Informatica

More information

Follow us @BringAuto so you will not miss anything.