《1. Introduction》

1. Introduction

The intelligent vehicle test platform is a recent development. It is the practical outcome of several academic fields, including cognitive intelligence, artificial intelligence, and control science, and it also provides a foundation for studies of intelligent driving theory and technique. In 1950, the American company Barrett Electronics Corporation conducted research on unmanned vehicles and developed the world's first autonomous navigation vehicle [1,2], although this concept originated from the Defense Advanced Research Projects Agency of the US Department of Defense, which is one of the world's leaders in this area of study. European countries have been developing unmanned driving technologies since the mid-1980s; treating unmanned vehicles as independent individuals, these countries tested them in normal traffic flow [3]. In 1987, the Programme for a European Traffic of Highest Efficiency and Unprecedented Safety (PROMETHEUS Project) was jointly launched by Bundeswehr University Munich, Daimler-Benz, BMW, Peugeot, Jaguar, and other well-known R&D institutions and automotive companies, significantly influencing the global automobile industry [4,5]. The Advanced Cruise-Assist Highway System Research Association, which is affiliated with the Japan Ministry of Land, Infrastructure and Transport, has launched advanced safety vehicle projects every five years since the 1990s to support research initiatives on unmanned driving technology [6,7]. In China, research on unmanned driving technology started in the late 1980s, supported by the National High-tech R&D Program (863 Program) and by a related research program of the Commission on Science, Technology, and Industry for National Defense [8]. Several Chinese researchers have been addressing the future challenges of intelligent vehicles since 2008, with support from the National Natural Science Foundation of China. Despite increasingly intense competition, more teams take part in the Intelligent Vehicle Future Challenge in China every year, and car enterprises are steadily expanding their R&D so that unmanned driving technology can be applied rapidly to domestic cars [9].

Sensors are configured to perceive the external and internal environments, including the surroundings, state, orientation, and location of the intelligent vehicle; sensor configuration is therefore the foundation of intelligent vehicle driving. However, there is no uniform standard strategy for sensor configuration, so intelligent vehicle test platforms with various sensor types, quantities, and installation locations are built according to different strategies. Certain research teams, such as those from VisLab at the University of Parma in Italy [10] and the Karlsruhe Institute of Technology [11], rely mainly on visual sensors; by contrast, other teams, such as Google's self-driving vehicle team [12] and the team from Munich University [13], adopt radar sensors. A decision must therefore be made when configuring sensors: essential redundancies and capabilities are retained to improve the reliability of environmental perception, while the cost of the sensor configuration is minimized. Finding a single, final solution for sensor configurations and types is infeasible. In this paper, we introduce a self-driving vehicle technical framework based on a driving brain, so that human cognition is embodied in the framework design. The proposed framework improves on the current practice of using different sensor quantities, types, and installation locations; as a result, the driving-brain-based technical framework can be relocated to different smart vehicle platforms despite variations in sensor installation.

Research on intelligent vehicles and intelligent driving technology is conducted to improve transportation safety, prevent and reduce traffic accidents, reduce fuel consumption, minimize environmental pollution, and accelerate social intelligence. An intelligent vehicle is a kind of wheeled robot, and its application combines cognitive intelligence, artificial intelligence, and control science, as well as several other advanced and novel technologies. In general, the goals of intelligent vehicle research are as follows: to realize dual human-machine driving and control, to achieve harmony between human and machine driving, to improve vehicle driving safety, and to promote the development of the intelligent vehicle industry.

In this work, we establish a hardware platform for an intelligent vehicle based on driving brain theory. In particular, the driving activities of a human driver are analyzed. The main contributions of this paper are summarized as follows:

(1) The working principles of brain cognition and the driving activities of a human driver are analyzed, using the MengShi intelligent vehicle test platform, in order to establish the relationship between the different functional areas of a driving brain and the computer software modules. Driving cognition is thus expressed in a formal cognitive language; that is, a universal intelligent driving software architecture is developed for intelligent vehicles, with the driving brain as the core of the design.

(2) An intelligent vehicle uses various sensors installed in different locations, so a uniform framework is established for information consolidation. In this work, low coupling between the intelligent decision module and the sensors is realized according to natural human cognitive laws and in line with the abovementioned design (that is, with the driving brain as the core).

This paper is organized as follows. In Section 2, we analyze human driving activities and establish the driving brain framework. In Section 3, we establish the hardware configuration and connections of the MengShi intelligent vehicle based on the driving brain framework. In Section 4, we present an analysis of the MengShi intelligent vehicle sensors and configuration, together with the experimental results. Finally, Section 5 provides a discussion and Section 6 concludes the paper.

《2. Region correspondence of driving brain and human brain functions》

2. Region correspondence of driving brain and human brain functions

Human drivers and intelligent driving systems act in three spaces; namely, the perceptive, cognitive, and physical spaces. In the perceptive space, humans and intelligent driving systems derive signals about the surrounding environment and their own states through various senses, such as vision, smell, and touch. In the cognitive space, the human brain and the driving brain of a self-driving vehicle initiate selective attention mechanisms to extract key traffic elements from the signals in the perceptive space; the brain then analyzes the current and historical driving situations and makes a decision using acquired knowledge and experience. In the physical space, humans use their limbs to control the steering wheel, throttle, and brake; the vehicle thereby reaches or approaches the expected state, and the resulting signals return to the perceptive space, forming closed-loop control. Self-driving vehicles act similarly by using mechanical structures and electrical signals (Fig. 1).

《Fig. 1》

Fig. 1. Driving activities in the three spaces.
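As a rough illustration of this closed loop across the three spaces, the sketch below cycles through perception, cognition, and action at a fixed rate. The function names, the 20 Hz cycle, and the placeholder target values are hypothetical and are not taken from the MengShi implementation.

```python
import time

def perceive():
    """Perceptive space: gather raw signals about the surroundings and ego state (stub)."""
    return {"obstacles": [], "ego_speed_mps": 0.0}

def cognize(percepts, experience):
    """Cognitive space: attend to key traffic elements and decide on a target state (stub)."""
    return {"target_speed_mps": 5.0, "target_steer_rad": 0.0}

def act(decision):
    """Physical space: apply throttle/brake/steering so the vehicle approaches the target state."""
    pass  # actuator commands would be issued here

experience = {}  # acquired knowledge and driving experience
while True:
    decision = cognize(perceive(), experience)
    act(decision)      # the resulting vehicle state is sensed again on the next pass,
    time.sleep(0.05)   # closing the loop at a fixed cycle (20 Hz here, purely for illustration)
```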

A human brain accomplishes learning and memory processes to realize driving activities through the different parts of the brain working as a single unit. The driving brain uses computer technology to deconstruct these activity mechanisms, analyze information, and complete driving tasks through functional modules [14]. The human brain comprises sensory memory, working memory, long-term memory, computing hubs and thinking, motivation, personality, emotion, and other functional areas.

Sensory memory completes the instantaneous storage of sensory information, which accumulates extensively despite its brief storage time. Intelligent vehicles use sensors to derive images, point clouds, and other raw signals (which are stored in buffer areas) and quickly replace old data with new data, consistent with the task of perceiving the surrounding environment. This mechanism is similar to the working principle of sensory memory [15].
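A minimal sketch of such a buffer area is shown below, assuming fixed-length buffers (the length of 10 frames is arbitrary); as in sensory memory, newly arriving frames automatically displace the oldest ones.

```python
from collections import deque

# Sensory-memory analogue: a short, fixed-length buffer per sensor.
# New frames continuously displace the oldest ones, mirroring the brief
# storage time of sensory memory (buffer length is illustrative).
camera_buffer = deque(maxlen=10)   # last 10 image frames
lidar_buffer = deque(maxlen=10)    # last 10 point clouds

def on_camera_frame(frame):
    camera_buffer.append(frame)    # the oldest frame is dropped automatically

def on_lidar_scan(point_cloud):
    lidar_buffer.append(point_cloud)
```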

The sensory information in the sensory memory is analyzed quickly by the computing hubs and thinking functional areas, which extract the contents related to the current activity through selective attention mechanisms and then deliver them to the working memory. The sensor information-processing modules of the driving brain perform the corresponding pretreatment by analyzing all kinds of vehicle sensor data and extracting driving-related information, such as lane lines, traffic lights and signs, surrounding vehicles, pedestrians, self-status, and position; information unrelated to the driving process is discarded [16].

Long-term memory stores important driving experience, knowledge, scenes, and other information; in the intelligent driving system, this corresponds to driving maps and operation models. A driving map accurately records driving-related geographic information, such as lane width, traffic signs, and static obstacle information. A driving operation model comprises trajectory tracking, follow-up, lane-changing, and overtaking models, and serves as the operation standard of the intelligent vehicle. Together, driving maps and operation models constitute the prior knowledge of the intelligent driving system. The computing hubs and thinking processes extract the contents of long-term memory related to the current activity and transmit them to the working memory; this extraction process corresponds to the mapping module of intelligent vehicles.
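The sketch below illustrates, under assumed field names and placeholder parameter values, how a driving map and a set of operation models might be organized as prior knowledge that can be queried by location; it is not the actual MengShi data format.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MapSegment:
    """One stretch of road in the driving map (fields are illustrative)."""
    lane_width_m: float
    traffic_signs: List[str]
    static_obstacles: List[tuple]   # (x, y) positions in map coordinates

class DrivingMap:
    """Long-term-memory analogue: prior geographic knowledge queried by position."""
    def __init__(self, segments: Dict[int, MapSegment]):
        self.segments = segments

    def query(self, segment_id: int) -> MapSegment:
        # Extraction step: pull only the priors relevant to the current location
        # into working memory (the common data pool).
        return self.segments[segment_id]

# Operation models as named parameter sets (trajectory tracking, follow-up,
# lane changing, overtaking); the values here are placeholders, not real tuning.
operation_models = {
    "follow_up": {"time_headway_s": 1.8},
    "lane_change": {"min_gap_m": 20.0},
}
```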

Working memory contains important information about the current driving activity [17]. This information is partly real-time information extracted from the sensory memory and partly prior knowledge extracted from the long-term memory. The real-time information and prior knowledge are integrated to form the information pool used for analysis and decision-making by the computing hubs and thinking functional areas. The intelligent driving system likewise focuses on a common data pool, which is a formal expression of driving cognition: the multidimensional real-time driving information provided by each sensor information-processing module and the prior driving information from the driving map are expressed as a formal description of the driving situation. This formal language fully reflects the traffic situation around the intelligent vehicle.
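As one possible shape for such a common data pool, the following sketch gathers real-time fields from the sensor information-processing modules together with prior fields from the driving map; all field names are illustrative rather than the paper's actual formal language.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DrivingSituation:
    """Common data pool: one formal, sensor-agnostic description of the scene.
    Field names are illustrative, not the actual formal language of driving cognition."""
    # Real-time information from the sensor information-processing modules
    ego_pose: tuple = (0.0, 0.0, 0.0)                      # x [m], y [m], heading [rad]
    ego_speed_mps: float = 0.0
    lane_lines: List[list] = field(default_factory=list)
    obstacles: List[dict] = field(default_factory=list)    # position, velocity, size
    traffic_light: Optional[str] = None                    # "red" / "green" / ...
    # Prior information extracted from the driving map
    lane_width_m: Optional[float] = None
    speed_limit_mps: Optional[float] = None
```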

The human computing hubs and thinking functions perform real-time decision-making on the basis of the information gathered in the working memory and then use the limbs to control and respond; a similar mechanism is used by the intelligent decision-making and automatic control modules of intelligent driving systems [18]. Intelligent decision-making modules combine current and historical driving situations with prior knowledge to complete behavior selection, path and speed planning, and other functions. Automatic control modules receive the planned paths and speeds and complete the synergistic control of throttle, brake, and steering so that the vehicle reaches or approaches the expected state.
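Continuing the DrivingSituation sketch above, the two hypothetical functions below separate these roles: decide() performs behavior selection and path/speed planning from the common data pool, and control() converts the planned speed into throttle, brake, and steering set points with a deliberately simplified proportional rule.

```python
def decide(situation):
    """Intelligent decision-making sketch: behavior selection plus path and speed
    planning, driven only by the common data pool and prior knowledge."""
    if situation.traffic_light == "red":
        behavior, target_speed = "stop", 0.0
    else:
        behavior = "follow_up"
        limit = situation.speed_limit_mps or 10.0       # fall back if no map prior
        target_speed = min(situation.ego_speed_mps + 1.0, limit)
    planned_path = situation.lane_lines[:1]             # placeholder: stay in the current lane
    return behavior, planned_path, target_speed

def control(planned_path, target_speed, situation):
    """Automatic control sketch: turn the planned speed into throttle, brake, and
    steering set points with a simple proportional rule (not the real controller)."""
    speed_error = target_speed - situation.ego_speed_mps
    throttle = max(0.0, 0.1 * speed_error)
    brake = max(0.0, -0.2 * speed_error)
    steering = 0.0                                       # lateral control omitted for brevity
    return throttle, brake, steering
```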

The human brain also has personality, emotion, and other functional areas. These characteristics are reflected in the driving styles of various drivers at different times and locations. For intelligent driving systems, driving style is determined by the parameters of the driving operation model. Emotions, such as anxiety and fear, are unique to humans; human driving behavior is affected by emotions, which can hinder safe driving in certain cases. By contrast, the driving brain excludes the emotions that are typically embodied in the human brain, which helps to ensure the safety and stability of driving behavior [19]. The correspondence between the human brain functional areas and the driving brain functional modules is illustrated in Fig. 2, in which each box describes a function of the driving brain.

《Fig. 2》

Fig. 2. Correspondence between the human brain functional areas and the driving brain functional modules. SLAM: simultaneous localization and mapping.

Different intelligent vehicle test platforms have various sensor models, quantities, installation locations, and information-processing modules. In addition, there is no sufficient standard for the information granularity that must be provided by different driving maps. Therefore, the number of intelligent driving system software modules and interfaces differs among platforms. Here, the driving brain is taken as the core of the intelligent system design, and driving cognition is formalized; once driving cognition is expressed in this formal language, the intelligent driving software framework can be designed in a unified way. In this framework, the intelligent decision-making module is only indirectly coupled with the sensor information; that is, intelligent decision-making is performed according to a comprehensive driving situation formed from the sensor information and the prior map information [20]. The structure of the MengShi intelligent vehicle test platform based on the driving brain is depicted in Fig. 3.

《Fig. 3》

Fig. 3. MengShi intelligent vehicle test platform based on the driving brain. CAN: controller area network; CT: computed tomography; GP: global positioning; MMW: millimeter-wave; OBD: on-board diagnostics; RTK: real-time kinematic.

《3. Hardware platform framework based on the driving brain》

3. Hardware platform framework based on the driving brain

《3.1. Hardware configuration》

3.1. Hardware configuration

Different kinds of intelligent vehicle systems, such as fuel- or electric-powered cars and buses, have different mechanical structures and mechanical-electrical transformations. After several engineering tests, the intelligent vehicle test platforms were found to be capable of real-time communication, such as over controller area network (CAN) buses, and can accurately control the direction, throttle, and brakes of the intelligent vehicle in real time. Real-time communication also preserves the dynamic performance of the intelligent vehicle so that it conforms to that of a typical finished vehicle.

The vehicle sensor configuration is designed to meet the reliability requirements of environmental perception while minimizing production cost. The following radar sensors can be utilized: SICK laser radars, millimeter-wave radars for low obstacles, four-line laser radars for dynamic obstacles, eight-line laser radars for drivable road areas, and Velodyne 64-line laser radars for obstacle speed, road boundaries, and body positioning. The Velodyne 64-line laser radar, alone or combined with other laser radars, can meet the requirements of the decision-making process, but the corresponding cost of the intelligent vehicle hardware platform must be considered. The visual sensor configuration, for example, deploys a wide-angle or panoramic camera at one site on the intelligent vehicle and achieves parallel multi-target detection with that single camera, detecting and identifying stop lines, zebra crossings, lane lines, traffic lights and signs, pedestrians, vehicles, and dynamic or static obstacles. Several wide-angle or panoramic cameras are deployed at different sites on a self-driving vehicle because their combined use readily allows special detection and identification tasks to be completed. The hardware platform for intelligent vehicles based on the driving brain has been verified and validated with various sensor types from various producers, and the tests conducted on combinations of different intelligent vehicle test platforms have ensured information reliability and redundancy.

Fig. 4 displays the sensor configuration of the MengShi intelligent vehicle. An Ibeo eight-line laser radar is installed on top of the vehicle. Meanwhile, the SICK single-line and Ibeo four-line laser radars are installed in front of the vehicle to detect low obstacles. Another SICK single-line laser radar and a millimeter-wave radar are installed at the rear of the vehicle. All of these radars provide abundant data support for radar-based simultaneous localization and mapping. Radars are widely used sensors in intelligent vehicle test platforms. In addition to radars, the MengShi intelligent vehicle test platform has three AVT 1394 Pike F-100C cameras installed over the front windshield to perceive traffic signs and lights. The MengShi intelligent vehicle test platform also adopts the NovAtel SPAN-CPT for its navigation and positioning system; this unit is composed of a global positioning system (GPS) and an inertial navigation system (INS).

《Fig. 4》

Fig. 4. Sensor configuration of the MengShi intelligent vehicle.

Sensor configuration tests have been conducted on the MengShi intelligent vehicle. Different configuration options were considered in order to test different functions, although all of the features in the configuration complement each other. The driving brain is taken as the core of the intelligent vehicle software and hardware frameworks. Three functional modules describe the driving brain: perception, cognition, and decision. The formal language of driving cognition describes the driving situation cognitive map and thereby formalizes driving cognition; the driving brain forms a cluster of driving situation maps from which decisions are made, and the decision result is visualized as a cluster of cognitive arrows. The driving brain thus embodies the architectural design of a human brain. Formalizing driving cognition reduces the impact of changes in sensor quantity, type, and installation location on the overall framework, thereby allowing the framework to be easily relocated to different vehicle configurations.

《3.2. Hardware connection》

3.2. Hardware connection

The physical connection of the MengShi intelligent vehicle is shown in Fig. 5. The SICK laser radars and the Ibeo laser radars are connected to an industrial personal computer (IPC) through a network switch. The Delphi millimeter-wave (MMW) radar is connected directly to the IPC through a CAN bus. The GPS and INS are connected directly to the IPC by a serial port cable. The AVT 1394 Pike F-100C cameras are connected to the IPC through an IEEE 1394 video transmission line. The IPC performs data fusion, decision-making and planning, dynamics, and control. The control commands are sent to the actuators of the accelerator pedal, brake pedal, and steering wheel through the CAN bus.

《Fig. 5》

Fig. 5. Physical connection of the MengShi intelligent vehicle.
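For the CAN-connected actuators described above, a command frame could be sent from the IPC along the lines of the following sketch, which uses the python-can package. The channel name, arbitration ID, and 8-byte payload layout are invented for illustration and do not reflect the MengShi's actual CAN protocol (the real IPC software was developed in Visual Studio).

```python
import can  # python-can; channel, arbitration ID, and payload layout below are hypothetical

bus = can.interface.Bus(channel="can0", bustype="socketcan")

def send_actuator_command(throttle_pct: int, brake_pct: int, steer_deg: int) -> None:
    """Pack one illustrative control frame for the accelerator, brake, and steering actuators.
    The 8-byte layout is made up; a real platform follows its own vendor protocol."""
    steer = steer_deg & 0xFFFF                           # 16-bit signed angle, two's complement
    payload = [throttle_pct & 0xFF, brake_pct & 0xFF,
               (steer >> 8) & 0xFF, steer & 0xFF, 0, 0, 0, 0]
    msg = can.Message(arbitration_id=0x200, data=payload, is_extended_id=False)
    bus.send(msg)

send_actuator_command(throttle_pct=15, brake_pct=0, steer_deg=-3)
```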

《3.3. Analysis of the hardware platform performance》

3.3. Analysis of the hardware platform performance

The intelligent vehicle architecture based on the driving brain decouples intelligent decision-making from the sensor information. The output of each sensor information-processing module is unified in the formal language of driving cognition, which constitutes the real-time information of the driving situation. The information in the driving map is applied to the driving situation according to the real-time position and orientation of the vehicle and is integrated with the real-time driving situation information to form a common data pool that fully reflects the current driving situation. The intelligent decision-making module operates on this common data pool and considers traffic rules, driving experience, the a priori path, and other prior knowledge in order to complete intelligent decision-making. Given the formal language of driving cognition and the condition of complete driving information, changes in sensor quantity, model, type, and installation location do not directly affect intelligent decision-making. The whole structure requires only minimal changes, if any, for the architecture to be relocated to other test platforms.
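The decoupling described above can be pictured as in the sketch below, which continues the earlier DrivingSituation and DrivingMap sketches: each sensor information-processing module writes only into the shared situation, and the decision module reads only from it, so sensor changes stay local to the corresponding module. The module names and helper functions are hypothetical.

```python
def cluster_point_cloud(raw_scan):
    """Placeholder for real point-cloud clustering into obstacle hypotheses."""
    return []

def classify_light(raw_image):
    """Placeholder for real traffic-light detection and classification."""
    return None

def lidar_module(raw_scan, situation):
    situation.obstacles = cluster_point_cloud(raw_scan)

def camera_module(raw_image, situation):
    situation.traffic_light = classify_light(raw_image)

sensor_modules = [lidar_module, camera_module]   # adding a radar means adding one module here

def update_situation(raw_frames, situation, driving_map, segment_id):
    # Real-time part of the common data pool
    for module, raw in zip(sensor_modules, raw_frames):
        module(raw, situation)
    # Prior part of the common data pool, selected by the vehicle's current position
    prior = driving_map.query(segment_id)
    situation.lane_width_m = prior.lane_width_m
    # decide() keeps reading the same DrivingSituation fields, so swapping a sensor
    # model or installation location leaves the decision module unchanged.
```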

《4. Real vehicle hardware framework based on the driving brain》

4. Real vehicle hardware framework based on the driving brain

《4.1. The MengShi intelligent vehicle hardware framework》

4.1. The MengShi intelligent vehicle hardware framework

The MengShi intelligent vehicle was cooperatively designed and developed by Tsinghua University and the Army Military Transportation University, led by Professor Deyi Li. Fig. 6 shows the appearance of the MengShi. Fig. 7 shows the sensor deployment of the MengShi, which consists of five radar sensors, three vision sensors, and one integrated position/attitude sensor. The radar sensors include two SICK radars (SICK® LMS291-S05), one four-line laser radar (Ibeo® LUX4L), one eight-line laser radar (Ibeo® LUX8L), and one millimeter-wave radar (Delphi® ESR). The vision sensors include three cameras (AVT® 1394 Pike F-100C) evenly mounted behind the front rear-view mirror. The integrated position/attitude sensor is a navigation and positioning system composed of a GPS and an INS (NovAtel® SPAN-CPT). A detailed description of each sensor is provided in Table 1.

《Fig. 6》

Fig. 6. Appearance of the MengShi.

《Fig. 7》

Fig. 7. Sensor deployment of the MengShi.

《Table 1》

Table 1 Sensor description of the MengShi.

The central controller is an IPC (Intel® Core™ i7-3520M processor, 2.9 GHz), and the software development platform is Visual Studio 2013.

The actuators include the steering system, the electronically controlled hydraulic braking system, and the electronic throttle control system. On the basis of the original vehicle's hydraulic power steering, the steering system is equipped with an independent electric power steering (EPS) system. On the basis of the original vehicle's hydraulic braking, an electronically controlled hydraulic braking system with its own electric control hydraulic unit is added; it is connected in series to the original hydraulic pipeline, and the two systems do not conflict. The electronic throttle control system operates by directly adapting the electronic throttle of the original car in order to achieve engine control of the vehicle. The actuator working modes, communication interfaces, baud rates, and minimum execution cycles are shown in Table 2.

《Table 2》

Table 2 Actuator description of the MengShi. UART: universal asynchronous receiver-transmitter.
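To respect the minimum execution cycles listed in Table 2, the command loop on the IPC can be paced at a fixed period, as in the following sketch; the 20 ms cycle is illustrative only, and the command source and sender are passed in as placeholders.

```python
import time

MIN_CYCLE_S = 0.02   # illustrative minimum execution cycle (20 ms); see Table 2 for actual values

def actuator_loop(get_command, send_command):
    """Issue actuator commands at a fixed period so that no actuator is driven
    faster than its minimum execution cycle (placeholder callbacks supplied by the caller)."""
    next_tick = time.monotonic()
    while True:
        throttle, brake, steer = get_command()
        send_command(throttle, brake, steer)
        next_tick += MIN_CYCLE_S
        time.sleep(max(0.0, next_tick - time.monotonic()))
```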

《4.2. Experimental results of the MengShi intelligent vehicle hardware framework》

4.2. Experimental results of the MengShi intelligent vehicle hardware framework

The hardware framework described above was applied to the MengShi intelligent vehicle, which took part in the 3rd to the 7th Intelligent Vehicle Future Challenge (IVFC) competitions organized by the National Natural Science Foundation of China. Of these competitions, the MengShi intelligent vehicle won second place in the 3rd and 5th competitions and won the championship in the 4th, 6th, and 7th competitions [21,22].

In 2012, the MengShi intelligent vehicle, based on the driving brain hardware framework, completed a full intelligent driving test from the Beijing–Taihu toll station to the Tianjin–Dongli toll station, with a total highway length of 114 km. On 29 August 2015, an intelligent bus based on the driving brain hardware framework successfully completed a road test from Zhengzhou to Kaifeng, thus beginning a new era for intelligent bus driving [23,24].

《5. Discussion》

5. Discussion

Given the long-term testing process on multiple vehicle platforms with a variety of sensor configurations, we recognize that sensors cannot replace the human brain; thus, perception cannot replace cognition. Sensors can only perform limited cognition, despite their advanced technology, even when they incorporate certain human senses; in addition, sensors are seldom perfect. However, driving brain cognition can approximate the overall cognitive awareness of the human brain. Driving brain cognition integrates not only sensory information but also prior knowledge in the brain and driving-experience-associated knowledge in time and space. The decision-making of an intelligent vehicle is completed by the driving brain as a whole; intelligent vehicles do not rely on any single kind of sensor. Moreover, driving decisions are not based solely on the current and historical driving conditions from multichannel sensors; they also incorporate prior driving knowledge.


《6. Conclusion》

6. Conclusion

In the software and hardware frameworks for intelligent vehicles based on the driving brain, the decision-making module is only indirectly associated with the sensor information-processing modules. Driving cognition is defined through the formal language of driving cognition, thereby realizing the decision-making process of the driving brain. The formalization of driving cognition reduces the impact of changes in sensor quantity, type, and installation location on the entire software framework. As a result, the software framework can be easily relocated to vehicle platforms with different sensor configurations.

《Acknowledgements》

Acknowledgements

This work was supported by China Postdoctoral Science Foundation Special Funded Projects (2018T110095), a project funded by the China Postdoctoral Science Foundation (2017M620765), the National Key Research and Development Program of China (2017YFB0102603), and the Junior Fellowships for Advanced Innovation Think-tank Program of the China Association for Science and Technology (DXB-ZKQN-2017-035).

《Compliance with ethics guidelines》

Compliance with ethics guidelines

Deyi Li and Hongbo Gao declare that they have no conflict of interest or financial conflicts to disclose.