Cognitive Auto Pilot
C-Pilot is computer-vision-based self-driving software for all types of ground transportation: passenger cars, commercial vehicles, agricultural machines, rail transport, etc. The strongest feature of our technology is road scene object detection in bad weather conditions, which allows us to develop cutting-edge software components applicable to various ADS systems. With 99% accuracy, C-Pilot performs the following tasks:
- Car detection;
- Forward collision warning;
- Pedestrian detection and protection;
- Lane detection;
- Lane departure warning and lane keeping assistance;
- Traffic sign recognition;
- Traffic Jam Pilot;
- Highway Pilot.
C-Pilot has a centralized architecture consisting of four modules:
- “Observer” is responsible for road scene object detection;
- “Geographer” is responsible for event detection and object positioning;
- “Navigator” is responsible for trajectory and route planning;
- “Machinist” is responsible for operating the actuation devices.
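The four-module flow above can be sketched as a simple pipeline. This is a minimal illustration, not C-Pilot's actual code: all class names mirror the module names from the list, while the `Detection`/`WorldModel` types, the 30-meter braking threshold, and the placeholder detector are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """A road-scene object reported by Observer (hypothetical type)."""
    label: str
    distance_m: float

@dataclass
class WorldModel:
    """Positioned objects produced by Geographer (hypothetical type)."""
    objects: list = field(default_factory=list)

class Observer:
    def detect(self, frame):
        # Placeholder: a real implementation would run a detection DNN on the frame.
        return [Detection("car", 42.0)]

class Geographer:
    def localize(self, detections):
        # Position each detected object in the world model.
        return WorldModel(objects=[(d.label, d.distance_m) for d in detections])

class Navigator:
    def plan(self, world):
        # Toy policy: brake if any object is closer than an assumed 30 m threshold.
        return "brake" if any(dist < 30.0 for _, dist in world.objects) else "keep_lane"

class Machinist:
    def actuate(self, command):
        # Forward the planned command to the actuation devices.
        return f"actuators: {command}"

def pipeline(frame):
    """Centralized flow: Observer -> Geographer -> Navigator -> Machinist."""
    detections = Observer().detect(frame)
    world = Geographer().localize(detections)
    command = Navigator().plan(world)
    return Machinist().actuate(command)

print(pipeline(frame=None))  # -> actuators: keep_lane
```

The centralized design means each module consumes the previous module's output, so any module can be upgraded independently as long as its interface is preserved.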
C-Pilot is based on the most advanced, high-precision algorithms for object detection. Biologically inspired approaches to computer vision, together with the use of affordable mobile processors, shorten the development cycle and provide an upgradable, low-cost solution with minimal computational power requirements. Our multiplatform software allows the system to be quickly adapted to any hardware platform.
C-Pilot has the following vital advantages:
- C-Pilot has intuition;
- Monitors risky zones (detects small objects by resolving fine details);
- Operates on real roads (damaged surfaces or poor road markings);
- Operates in bad weather and lighting conditions.
Our combination of data processing methods efficiently avoids false detections and continuously improves quantitative metrics. We constantly gather data for neural network training and by now possess hundreds of thousands of hours of video footage and data from other sensors. Apart from that, we continually improve our testing methods.
C-Pilot is an industrial-grade system. The most meaningful accolade to us, though, is the continued trust and confidence of our OEM and Tier-1 clients from all over the world.
In addition, the company has developed a revolutionary approach to low-level data fusion, which improves recognition accuracy by 20% and robustness by 25% compared to the high-level data fusion approach.
The approach operates as follows: we receive raw data from sensors (for example, a camera and a radar) and align it into a single synchronized input (Concat) → the synchronized data is processed by a deep neural network (DNN) → results.
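The Concat step above can be illustrated with a toy sketch. This is not C-Pilot's implementation: the `synchronize` and `dnn` functions, the input shapes, and the single random linear layer are all assumptions chosen to show raw camera and radar data being joined into one low-level input before any network processing.

```python
import numpy as np

def synchronize(camera_frame, radar_frame):
    """The 'Concat' step: merge both raw inputs into one vector.
    Here we simply flatten and concatenate; a real system would also
    time-align and spatially register the two streams first."""
    return np.concatenate([camera_frame.ravel(), radar_frame.ravel()])

def dnn(fused):
    """Stand-in for the detection network: one random linear layer
    scoring three hypothetical classes (e.g. car, pedestrian, sign)."""
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((3, fused.size))
    logits = weights @ fused
    return int(logits.argmax())

camera = np.zeros((4, 4))       # toy 4x4 image patch
radar = np.array([12.5, 0.8])   # toy range / velocity returns
fused = synchronize(camera, radar)
print(fused.size)  # 18 values enter the network as one low-level input
```

Because the network sees camera pixels and radar returns in a single input, it can learn correlations between them, which is the advantage over fusing each sensor's independent high-level detections.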
This integration of data from different devices makes it possible to fill in information that could be lost in a high-level fusion approach, yielding a better understanding of the current road scene. Using all the data together combines information about an object's speed, type, distance, location, and physical characteristics. The implementation of this fusion technology alone reduces the accident rate of autonomous vehicles by 25%.