
Company Services

● Sensor Data Fusion and Environment Understanding >>

Vehicle On-Board Sensing Technology
With the development of automated driving technology, the level of automation has increased, and the functional safety and robustness of the system itself have received more and more attention. However, every recognition algorithm, whether based on millimeter-wave radar, lidar, or cameras, has certain detection and perception limitations due to its working principle and the frequency of the waves it uses. To date, no single sensor can achieve a 100% detection rate.

Multi-Source Sensor Data Fusion
Due to the increasing safety requirements of automated driving functions and the various detection and recognition errors of individual sensors, it is imperative to improve the quality of environment perception without relying solely on hardware technology. For this reason, multi-source sensor data fusion technology has been introduced into the field of automated driving. It applies several types of sensors to the detection and measurement of targets in the same area; through post-processing of the multi-channel sensor inputs, the sensors become complementary, redundant systems for each other, so that the disadvantages of each individual sensor are compensated.
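As an illustration of this complementary redundancy, the following is a minimal sketch of inverse-variance weighted fusion of two position measurements of the same target; the sensor pairing and the variance values are assumptions for the example, not parameters of any specific product.

```python
# A minimal sketch of complementary sensor fusion: two independent position
# measurements (e.g. radar and camera) with known variances are combined.
import numpy as np

def fuse_measurements(z_a: np.ndarray, var_a: float,
                      z_b: np.ndarray, var_b: float):
    """Inverse-variance weighted fusion of two measurements of one target.

    The fused estimate has lower variance than either input, which is the
    basic mechanism by which redundant sensors complement each other.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    z_fused = (w_a * z_a + w_b * z_b) / (w_a + w_b)
    var_fused = 1.0 / (w_a + w_b)
    return z_fused, var_fused

# Illustrative values: radar is accurate in range, camera less so.
radar_pos = np.array([52.3, 1.9])    # [longitudinal, lateral] in metres
camera_pos = np.array([51.1, 1.6])
pos, var = fuse_measurements(radar_pos, 0.5**2, camera_pos, 1.5**2)
print(pos, var)                      # fused variance < min(input variances)
```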

In theory, this technology improves detection and measurement performance. In practice, however, wrong parameterization, the selection of an inappropriate algorithm, or a wrong choice of where in the processing chain an algorithm is applied may reduce the algorithm's effectiveness or even have an adverse effect. PilotD has accumulated a large amount of know-how and project experience in the field of multi-source sensor data fusion. We mainly provide customers with services that solve the following problems and ensure a high level of fusion efficiency:
1) Application-oriented algorithm selection and improvement;
2) Selection of the fusion point in single-layer or multi-layer fusion;
3) Robustness testing and analysis of the algorithm under the influence of various external influence factors;
4) Calibration of the working range and accuracy of the fusion algorithm;
5) Dynamic fusion algorithm optimization method;
6) Sensor data preprocessing and prediction of detection results based on information analysis.


Environment Understanding
Based on the sensor data fusion results described above, we provide further algorithms on the environment understanding layer. These include target classification, target intent analysis, target trajectory prediction, target hazard-level pre-judgment, and other kinds of post-processing. In this way, the sensor data present the driving environment to the vehicle ECU more precisely, so that the customer's automated driving or advanced driver assistance system achieves better driving and control performance.
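As a simple illustration of the trajectory prediction step, the following sketch extrapolates a fused track state under an assumed constant-velocity motion model; the horizon, step size, and track values are illustrative.

```python
# A minimal sketch of target trajectory prediction on the environment
# understanding layer, assuming a constant-velocity motion model.
import numpy as np

def predict_trajectory(position: np.ndarray, velocity: np.ndarray,
                       horizon_s: float = 3.0, dt: float = 0.1) -> np.ndarray:
    """Extrapolate a fused track state into the future.

    Returns an (N, 2) array of predicted x/y positions that a downstream
    hazard-level pre-judgment could check against the ego path.
    """
    steps = np.arange(dt, horizon_s + dt, dt)
    return position + steps[:, None] * velocity

track_pos = np.array([20.0, -3.5])   # fused target position, metres
track_vel = np.array([-2.0, 1.0])    # fused target velocity, metres/second
trajectory = predict_trajectory(track_pos, track_vel)
print(trajectory[-1])                # predicted position 3 s ahead
```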

● New Automated Driving System Design and Development >>


In the development of automated driving systems with a high automation level, the large number of influencing factors makes it difficult to determine all functional modules and their performance requirements in one pass of structural function analysis. Based on X-in-the-Loop testing, we identify the problems in a new automated driving system for car manufacturers and system suppliers through exhaustive and iterative R&D methods, and provide solutions that help them reach a higher driving automation level efficiently. This relates specifically to the following areas:
1) Multi-source sensor data acquisition and fusion
2) Electronic actuator control
3) New system architecture design
The development runs mainly in two directions:
1) Development and validation of Level 3 systems or above for new application scenarios;
For example: an automated driver for highway congestion, consisting of full-range adaptive cruise control (full-range ACC) and a lane keeping assistant (a minimal ACC sketch follows this list), etc.
2) Expansion of the application scenarios of the existing automated driving system.
For example: parking space identification and automatic parking in non-standard parking lots.
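As a hedged illustration of the first example above, here is a minimal sketch of a full-range ACC control law; the gains, limits, and time-gap policy are illustrative assumptions, not values from any production system.

```python
# A minimal sketch of a full-range ACC command: track the set speed when
# free, regulate the time gap when following, brake down to standstill.
# All gains and limits below are illustrative placeholders.

def acc_command(ego_speed: float, lead_speed: float, gap: float,
                set_speed: float, time_gap: float = 1.8) -> float:
    """Return a longitudinal acceleration command in m/s^2."""
    desired_gap = 2.0 + time_gap * ego_speed   # standstill offset + time gap
    gap_error = gap - desired_gap
    speed_error = lead_speed - ego_speed
    a_follow = 0.25 * gap_error + 0.8 * speed_error   # following behaviour
    a_cruise = 0.5 * (set_speed - ego_speed)          # free-driving behaviour
    # The more restrictive of cruising and following wins; clamp to limits.
    return max(-5.0, min(min(a_follow, a_cruise), 2.0))

# Closing in on a slower lead vehicle -> braking command expected.
print(acc_command(ego_speed=25.0, lead_speed=20.0, gap=30.0, set_speed=30.0))
```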
The scope of development services covers the following areas:
1) Main control strategy
2) Actuator control
3) Sensor data analysis and fusion algorithm
……
In addition to algorithm and system development services, we also provide test validation of automated driving-related algorithms and systems, as well as reporting of the validation results. We offer customers a one-stop service, from algorithm development through algorithm optimization to algorithm validation testing.

● ViL/HiL Test Platform >>

XiL Test Technology
According to the V-model system development process recommended in ISO 26262, in addition to SiL (Software in the Loop) testing, which is used for initial principle tests and software system validation, validation testing of the integrated hardware system is also an indispensable part of system development and validation. Testing that is based on the simulation environment but incorporates the computation results of part of the hardware system has three advantages:
1) Software and hardware compatibility and functional integrity can be validated;
2) The functional safety of a subsystem module can be validated at lower cost in the early development stage, when no prototype is available;
3) A local subsystem can be comprehensively validated for the purpose of system identification and error tracking.
Application in Automated Driving
For the validation of automated driving systems, in addition to pure simulation testing (SiL), PilotD also provides HiL (Hardware in the Loop) and ViL (Vehicle in the Loop) testing solutions and services for more highly integrated systems. Our company has entered into a strategic cooperation with ZD Automotive GmbH, a leading German supplier of test equipment for in-vehicle devices. Our solutions are as follows:
1) Based on the RT (Real-Time) version of our software GaiA, we provide simulation environment and signals of various environment perception sensors;
2) We use ZD's ZDBOX as a signal converter, generator and collector to enable GaiA to exchange information with the object under test, e.g. a controller ECU or a vehicle under test (a minimal sketch of such a data exchange follows this list);
3) We provide high-fidelity information, based on high-precision simulation, for the various interfaces of different hardware types, and convert that information into a form with which each interface can interact (for example: the radar's object list or the lidar's point cloud in ViL).
In these HiL and ViL solutions, the GaiA simulation platform and the ZDBOX data logger/transmitter can be connected in two ways:
1) By using a wired data cable;
2) By using a data cloud for data interaction.
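The following is a minimal sketch of the data exchange referenced in solution point 2 above, streaming a simulated radar object list over UDP; the message layout, address, and port are hypothetical placeholders, since the real GaiA/ZDBOX interface is configured per project and is not reproduced here.

```python
# A minimal sketch of streaming a simulated radar object list from the
# simulation host to a hardware interface box over UDP.
import socket
import struct
import time

HIL_BOX_ADDR = ("192.168.0.10", 5005)    # hypothetical endpoint of the box

def pack_object_list(objects):
    """Pack [(x, y, vx, vy), ...] as an object count plus float32 fields."""
    payload = struct.pack("<I", len(objects))
    for x, y, vx, vy in objects:
        payload += struct.pack("<4f", x, y, vx, vy)
    return payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(10):                      # ten simulation cycles
    objects = [(52.3, 1.9, -2.0, 0.0)]   # one simulated radar target
    sock.sendto(pack_object_list(objects), HIL_BOX_ADDR)
    time.sleep(0.05)                     # 20 Hz cycle time, illustrative
sock.close()
```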

The solution described above gives PilotD's offering the following advantages:
1) Strong real-time ability
The dual synchronization mechanism between the GaiA platform and the ZDBOX software and hardware ensures highly real-time signal transmission between the platform and the vehicle or controller.
2) Strong portability
The data-cloud-based communication method separates the simulation platform from the vehicle or controller, which allows users to control the platform remotely and to build the test environment flexibly according to their own test conditions.
3) High adaptability
ZDBOX provides communication options for various vehicle bus protocols, including CAN, CAN-FD, FlexRay, LIN, Ethernet, RS232, etc. (a minimal CAN example follows this list). The GaiA software can provide hundreds of vehicle signals and environment perception sensor data streams (millimeter-wave radar, lidar, camera, etc.), even including the raw data at each stage of the data-processing cascade. As a result, the system can adapt to a wide range of customer requirements. In addition, PilotD provides a variety of software and hardware adaptation and calibration services to fulfill customers' requirements for building a complete HiL/ViL test platform.
4) High cost-performance ratio
In the architecture of our solution, the simulator and the signal generator are separated, which saves customers the cost of high-performance in-vehicle simulation equipment and minimizes the cost while maximizing the performance of the solution itself.
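As a minimal, hedged example of the CAN path mentioned under advantage 3, the following uses the open-source python-can package to put one simulated signal on a bus; the channel name, arbitration ID, and signal encoding are illustrative, and on a real bench they would follow the project's DBC definition.

```python
# A minimal sketch of sending one simulated signal on a CAN bus with the
# open-source python-can package (Linux SocketCAN in this example).
import can

bus = can.Bus(interface="socketcan", channel="can0")  # assumed CAN channel

range_cm = int(52.3 * 100)       # hypothetical "target range" signal, in cm
msg = can.Message(arbitration_id=0x123,
                  data=range_cm.to_bytes(2, "little"),
                  is_extended_id=False)
bus.send(msg)
bus.shutdown()
```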

● Reproduction of Real Scenes >>

3D Reproduction of Real Scenes
The three-dimensional reproduction of real scenes, also known as high-precision maps, has become an indispensable part of autonomous driving. It consists mainly of high-precision 3D scene models and semantic information. The three-dimensional reproduction of real scenes is mainly used in two ways:
1) It is used to match, in real time, the information detected by the environment perception sensors, in order to position the vehicle with high precision (a minimal matching sketch follows this list).
2) It is used to represent the real scene faithfully in the simulation environment, which makes it possible to substitute simulation-based tests for real road tests. Its highly valid representation of the driving environment ensures high usability of the simulation test results.
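The following is a minimal sketch of that localization idea: detected landmarks are associated with map landmarks by nearest neighbour, and the mean residual yields a translation correction. Real pipelines use full scan matching (e.g. ICP) with outlier rejection; the point sets here are assumed toy data.

```python
# A minimal sketch of position correction by matching detected landmarks
# against a high-precision map (2D points in a common frame, translation only).
import numpy as np

def estimate_offset(detections: np.ndarray, map_points: np.ndarray) -> np.ndarray:
    """Mean residual between each detection and its nearest map landmark."""
    residuals = []
    for d in detections:
        nearest = map_points[np.argmin(np.linalg.norm(map_points - d, axis=1))]
        residuals.append(nearest - d)
    return np.mean(residuals, axis=0)

map_points = np.array([[10.0, 2.0], [20.0, -1.5], [35.0, 4.0]])
detections = map_points + np.array([0.4, -0.2])  # sensor view, shifted by pose error
print(estimate_offset(detections, map_points))   # ~[-0.4, 0.2] pose correction
```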
There are two main methods for collecting real scenes (or high-precision maps):
1) Modeling based on the laser sweep of vehicle-mounted lidar;
2) Modeling based on oblique photography using a special camera mounted on an aircraft.
Oblique Photography
PilotD has launched a service based on oblique photography, together with a partner in China, to build and reproduce real scenes. Compared with traditional modeling based on on-board lidar sweeps, this solution has the following advantages:
1) Absolute and relative accuracy up to 10 cm;
2) In addition to roads, surrounding buildings and roadside installations can be recorded and modeled, which maximizes the validity of the model;
3) The resulting model comes directly with a real texture (obtained directly by photographing, not by post-processing);
4) Higher efficiency;
5) More suitable for camera-based location matching (SLAM), or image recognition (deep learning training and testing).

● Test Scenario Generation and Reproduction Based on Accident Analysis >>

Accident Analysis & Automated Driving
System validation testing can be divided into two types: validation based on positive logic and validation based on negative logic.
1) Positive-logic-based test validation aims to validate the functional safety of the system in all cases by analyzing the system functions and deriving test cases from all use cases of the system under test.
2) Negative-logic-based test validation is the opposite. This type of method executes the validation test using error-prone scenarios of the system under test, normally identified by error analysis methods (e.g. FTA, FMEA). If the system under test operates normally in these error-prone scenarios, its functional safety is demonstrated indirectly.
The negative-logic-based method is better suited to the validation of complex systems (e.g. high-level automated driving systems) than the positive-logic-based method, because it offers advantages such as a small number of test cases and target-oriented testing. It is therefore regarded by the industry worldwide as one of the methods for validating high-level automated driving systems.
For automated driving systems, error analysis is naturally linked to accident analysis. The original purpose of automated driving systems was to replicate the driving behavior of a human driver. A scene in which a human driver makes a mistake (an accident scene) can therefore also be regarded as an error-prone scenario for the automated driving system, and thus be used as a test case for validating it.

Automated Driving Test Scenario Based on Accident Analysis
PilotD has entered into a strategic partnership with the Shanghai United Road Traffic Safety Scientific Research Center. The two partners work together to expand and promote automated driving validation technology based on traffic accident analysis and simulation-based reproduction.
On the one hand, based on the records of thousands of traffic accident scenarios collected since 2005 by the Shanghai United Road Traffic Safety Scientific Research Center, we filter out the useful information and represent and reproduce the scenarios in the GaiA simulation environment to form a test case set. On the other hand, starting from the existing accident scenarios, we vary and traverse the essential parameters of the scenario construction (as sketched below) in order to expand the test case set, finally reaching high test coverage and a complete test case set.
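The following sketch shows this parameter traversal in its simplest form: one recorded accident scenario is expanded into a grid of variants. The parameter names and value grids are illustrative; the real essential parameters come from the accident reconstruction itself.

```python
# A minimal sketch of expanding one recorded accident scenario into a test
# case set by traversing essential scenario parameters.
import itertools

base_scenario = {"type": "rear_end_highway", "source": "accident_db"}

parameter_grid = {
    "ego_speed_kmh":  [60, 80, 100, 120],
    "lead_decel_ms2": [-2.0, -4.0, -6.0],
    "road_friction":  [0.9, 0.5, 0.3],    # dry, wet, icy
    "time_of_day":    ["day", "night"],
}

test_cases = [
    {**base_scenario, **dict(zip(parameter_grid, values))}
    for values in itertools.product(*parameter_grid.values())
]
print(len(test_cases))   # 4 * 3 * 3 * 2 = 72 variants of one accident scene
```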
In this way, our company helps to raise the validation level of automated driving systems faster and to accelerate their early commercialization.

● Test Case Generation for Automated Driving Systems >>

Test Cases for Automated Driving Systems
Validation testing is one of the major challenges in the development of automated driving systems worldwide today. This challenge lies primarily in the completeness of the test cases used for automated driving system validation. Due to the complexity of such systems and the large number of external influence factors affecting their functional safety, existing test case generation methods can hardly ensure the completeness and effectiveness of system-level validation.
Currently, there are three general test case generation methods for validating systems:
1) Interface-based test case generation;
2) Specification-based test case generation;
3) Risk-based test case generation.
Each of these three methods has its own advantages and disadvantages. None of them can meet the requirements of automated driving system validation, nor guarantee test efficiency while meeting the functional safety test requirements of the automated driving functions.
PilotD’s Hybrid Test Case Generation Method
PilotD offers a new, application-oriented test case generation solution for automated driving system validation: a hybrid test case generation method. The method builds on and integrates the three methods above, leveraging their strengths while avoiding their shortcomings, so that test cases for the validation of automated driving systems can be generated with the following advantages:
1) High test coverage
2) Test cases are well structured
3) High test efficiency
etc.
PilotD's test case generation method takes the various external influence factors into consideration, models and analyzes specific applications of the ADS, and derives the application-oriented working conditions of each component of the ADS. Subsequently, based on a classification of the errors of the various system components, the component error types that are critical to each use case are identified. At the same time, according to the working principle of each component, the correspondence between external influence factors and the error types of each component is established. By combining these factors (see the sketch below), test cases can be generated that check whether a particular error type of a particular component, critical to a given usage condition, is "stimulated". These test cases can thus serve the purpose of system validation.
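The combination step can be sketched as follows; all of the tables below (use cases, error types, influence factors) are illustrative stand-ins for the project-specific analyses described above.

```python
# A minimal sketch of the combination step: pair each use case with its
# critical component error types, and each error type with the external
# influence factors known to stimulate it.
critical_errors = {            # use case -> critical (component, error type)
    "highway_follow": [("radar", "ghost_object"), ("camera", "missed_lane")],
    "urban_crossing": [("camera", "misclassification")],
}
stimulating_factors = {        # error type -> external influence factors
    "ghost_object":      ["guardrail", "tunnel"],
    "missed_lane":       ["low_sun", "worn_marking"],
    "misclassification": ["rain", "backlight"],
}

test_cases = [
    {"use_case": uc, "component": comp, "error_type": err, "factor": factor}
    for uc, errors in critical_errors.items()
    for comp, err in errors
    for factor in stimulating_factors[err]
]
print(len(test_cases))  # each case checks whether one critical error is stimulated
```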
PilotD uses this structured test case generation method, together with a test case simplification method, to help customers quickly generate test case sets with high test coverage based on the functional specification of their systems or automated driving applications. The test cost of the generated test cases can likewise be kept within an acceptable range. In this way, PilotD helps customers complete the validation test process of a new automated driving system quickly and speed up its release.
Reference:
1) Cao, P. and Huang, L., "Application Oriented Testcase Generation for Validation of Environment Perception Sensor in Automated Driving Systems," SAE Technical Paper 2018-01-1614, 2018, https://doi.org/10.4271/2018-01-1614.

● Video and Image Recognition >>

SRS Image Recognition System
Design ideas: The SRS image recognition system is a camera-based system that detects and recognizes the dimensions and locations of other traffic participants, traffic signs, and traffic information for automated driving vehicles. Its advantage is that its accuracy is higher than that of general image recognition systems. The system consists of three main modules: a video acquisition module, an image entity recognition module, and an image calibration and size calculation module.

Video Acquisition Module: The video acquisition module of the SRS handles the acquisition of video from the camera. The current standard version acquires 30-50 video frames within 3 seconds and transfers the key frames to the image entity recognition module, keeping the overall run time of the system within an acceptable range. The better the performance of the camera used, the better the results.

Image Entity Recognition Module: The image entity recognition module is the core of the system. It is designed to recognize the size of traffic participants (height and diameter, or height and width), the center position of traffic participants (including the radial position), and traffic signs. For this purpose, we provide two solutions. The first is based on traditional image recognition technology: a template matching solution. The second is an automated image recognition and calibration solution based on the deep learning networks ResNet, VGGNet, and Inception. Depending on the specific requirements of the customer, we select and implement the optimal solution.
● Solution Based on Traditional Image Recognition Technology
This solution mainly uses traditional image recognition techniques, including preprocessing, feature extraction, classifier design, and template matching. Since the traffic signs and information (including the lanes) in the automated driving environment are of relatively fixed size, these objects can easily be detected and positioned by matching the live view against image templates (pictures) taken beforehand, as sketched below. The advantage of this solution is that the amount of data required is small, and a relatively accurate recognition system can be built quickly. For automated driving, this solution provides high-quality, high-efficiency detection of traffic signs and information (including lanes) in cases where the types of objects to be detected are fixed and few in number.
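A minimal sketch of this template-matching route using OpenCV; the synthetic images stand in for a real key frame and a previously photographed template, and the confidence threshold is an illustrative placeholder.

```python
# A minimal sketch of template matching with OpenCV on synthetic images.
import cv2
import numpy as np

template = np.zeros((60, 60), dtype=np.uint8)
template[10:50, 10:50] = 255             # a simple sign-like pattern
scene = np.zeros((480, 640), dtype=np.uint8)
scene[100:160, 300:360] = template       # the "sign" embedded in the frame

# Normalised cross-correlation is robust to uniform lighting changes.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:                        # illustrative confidence threshold
    h, w = template.shape
    print(f"Match at {max_loc}, size {w}x{h}, score {max_val:.2f}")
```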
● Automated Image Recognition and Calibration Solutions Based on Deep Learning Algorithms: ResNet, VGGNet and Inception
This solution uses the latest deep learning image recognition technology to construct an automated image recognition and calibration pipeline. The specific method is based on the existing ResNet (deep residual network): it fine-tunes the network on pictures of automated driving scenarios, so that the existing deep residual network learns the elements of those pictures and can be applied more effectively to automated driving scenarios.
Strictly speaking, fine-tuning and transfer learning are two different concepts; within the field of CNN training, however, fine-tuning can basically be regarded as one method of transfer learning. For example, suppose we have a new data set about cars and ask engineers to perform image classification on it. The problem is that, compared with the wide variety of car models on the market, the data set contains few car categories and not many samples. Training a CNN from scratch on it is very difficult and prone to overfitting. In this case, transfer learning is a good solution: one starts from a model that has already been trained on ImageNet. ImageNet itself may have nothing to do with automated driving, but because it provides a very large amount of labeled training data, pre-trained models (such as CaffeNet) acquire a broad generalization ability. Leaving the middle layers untouched and fine-tuning only the later layers usually yields very satisfactory results. Simply put, we take a model that others have built from millions of images and fine-tune it to our scenarios, so that the complex image features learned from those millions of images can be applied to our case. This gives the system better accuracy with fewer pictures; most importantly, the amount of data required to obtain a high-performance recognition classifier is relatively small. In this way, automated driving scenarios can be modeled efficiently and stably to obtain the desired recognition classifier (a minimal fine-tuning sketch follows).
Another advantage of this solution is that it can use the GPU for the very complex matrix and vector operations involved, so it still achieves fast response times and excellent recognition quality when the detected traffic participants are numerous and varied in form. For ordinary machine learning image recognition technology, an important difficulty in an automated driving environment is that reflections of ambient light from highly reflective materials can disturb recognition; our deep-learning-based recognition built on ResNet, VGGNet, and Inception can automatically learn these interferences and handle the problem well.
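A minimal sketch of the fine-tuning step, assuming torchvision's pretrained ResNet-18 (torchvision >= 0.13); the class count, learning rate, and the choice to freeze everything except the final layer are illustrative assumptions.

```python
# A minimal sketch of fine-tuning a pretrained ResNet-18 for a new task.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False

num_classes = 10                          # e.g. traffic-scene object classes
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, num_classes, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```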
Image Calibration and Size Calculation Module: This module of the SRS makes it possible to measure the size and position of the traffic participants. After the image entity recognition module has detected the positions of the specific objects, this module performs a series of image processing steps, including pixel calculations, to determine their final calibrated position and size (a minimal sketch follows). A second template matching step then secures the accuracy of the detection, after which the result is output. This entire process ensures stable detection and measurement of high quality.
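As a hedged illustration of the pixel calculation, the following assumes a calibrated pinhole camera model; the focal length and range values are illustrative, and in practice the range would come from the fused sensor data rather than from the image alone.

```python
# A minimal sketch of pixel-based size calculation under a pinhole camera
# model: real width = pixel width * range / focal length (in pixels).

def object_width_m(bbox_width_px: float, range_m: float,
                   focal_length_px: float) -> float:
    """Real-world width from the pinhole relation w = px * Z / f."""
    return bbox_width_px * range_m / focal_length_px

# A 120-pixel-wide vehicle at 30 m with an 1800-pixel focal length:
print(object_width_m(bbox_width_px=120, range_m=30.0, focal_length_px=1800.0))
# -> 2.0 m, a plausible vehicle width
```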