1 Engineering research fronts

1.1 Trends in Top 10 engineering research fronts

Table 1.1.1 summarizes the Top 10 engineering research fronts in the information and electronic engineering field, which encompasses the subfields of electronic science and technology, optical engineering and technology, instrument science and technology, information and communication engineering, computer science and technology, and control science and technology. Table 1.1.2 shows the annual number of core papers published from 2016 to 2021 for each research front.

(1) Space–air–ground–sea integrated networking theory and technique

The space–air–ground–sea integrated network (SAGSIN), which is based on the ground network and supplemented and extended by the space, air, and sea networks, can provide a ubiquitous, intelligent, collaborative, and efficient information assurance infrastructure for various network services over wide areas. The space network comprises the satellite backbone and access networks and supports global coverage, ubiquitous connection, and broadband network access. The air network consists of high-altitude platforms and unmanned aerial vehicle self-organized networks, which can enhance network coverage, enable edge computing services, and provide flexible network configuration. The ground network in the SAGSIN comprises the ground internet and mobile cellular networks, which handle network services in areas with dense requests. The sea network uses maritime mobile and satellite networks to fulfill the communication requirements of maritime activities. Through the deep integration of these heterogeneous networks, the SAGSIN can use various network resources effectively and comprehensively, performing intelligent network management and information processing to cope with different network service requirements and to realize network integration, network function modularity, and service customization. Therefore, the SAGSIN has shown unprecedented prospects in diverse fields, such as wide-area coverage, the Internet of Things (IoT), intelligent transportation, remote sensing and monitoring, and the military. Specifically, technologies of the low-earth-orbit (LEO) satellite constellation play a core role in establishing an all-coverage, all-connected, and all-knowing SAGSIN. SpaceX’s “Starlink” from the USA currently leads the competition among low-orbit satellite constellations; it intends to launch 42 000 satellites to build a broadband satellite communication network capable of providing global coverage.

Table 1.1.1 Top 10 engineering research fronts in information and electronic engineering

No. Engineering research front Core papers Citations Citations per paper Mean year
1 Space–air–ground–sea integrated networking theory and technique 41 3 283 80.07 2019.6
2 Theories and algorithms for trustworthy AI 157 29 067 185.14 2019.5
3 Silicon-based CMOS terahertz imaging technique 122 1 528 12.52 2018.3
4 Silicon-based AI photonic computing chip theory and design 86 2 505 29.13 2019.8
5 High-precision space-based gravitational wave detection technology 220 38 208 173.67 2018.9
6 Manufacture of integrated circuits at the atomic scale 69 5 595 81.09 2018.7
7 Brain–machine interfaces and their clinical applications 219 8 489 38.76 2018.8
8 Humanoid robot behavioral developmental learning and cognitive technology 77 519 6.74 2018.5
9 Quantum circuits and chips 57 7 432 130.39 2019
10 Future industrial internet architecture and full-element interconnection 77 5 146 66.83 2019.4

Note: ① AI is short for artificial intelligence; CMOS is short for complementary metal oxide semiconductor. ② For No. 3 and No. 4, all the detected papers are adopted as the core papers.

Table 1.1.2 Annual number of core papers published for the Top 10 engineering research fronts in information and electronic engineering 

No. Engineering research front 2016 2017 2018 2019 2020 2021
1 Space–air–ground–sea integrated networking theory and technique 2 2 6 5 10 16
2 Theories and algorithms for trustworthy AI 5 14 22 31 32 53
3 Silicon-based CMOS terahertz imaging technique 22 25 22 17 20 16
4 Silicon-based AI photonic computing chip theory and design 3 6 5 10 25 37
5 High-precision space-based gravitational wave detection technology 25 29 34 35 43 54
6 Manufacture of integrated circuits at the atomic scale 10 10 12 12 12 13
7 Brain–machine interfaces and their clinical applications 25 31 35 39 41 48
8 Humanoid robot behavioral developmental learning and cognitive technology 12 12 14 14 13 12
9 Quantum circuits and chips 7 7 7 8 14 14
10  Future industrial internet architecture and full-element interconnection 4 6 12 14 15 26

More than 3 000 low-orbit satellites were already operational in orbit as of August 2022, with more than 500 000 subscribed broadband access users worldwide. However, several challenges remain for the SAGSIN: high dynamics, extreme heterogeneity, ultra-complexity, and multidimensional requirements. Research in this front therefore focuses on network architecture design, protocol design, network resource management and scheduling, efficient transmission technology, and network security and privacy preservation, among others.

(2) Theories and algorithms for trustworthy AI

Trustworthy artificial intelligence (AI) enhances the credibility of complex AI systems and algorithms, such as deep neural networks. Specifically, the concept of credibility includes the following four aspects: ① the interpretability and quantifiability of AI systems concerning knowledge representations; ② the interpretability and quantifiability of AI systems regarding representation capacity, including generalization, robustness, fairness, and privacy protection, among others; ③ the interpretability of AI systems in learning and optimization; ④ the interpretability of the internal mechanisms of various AI algorithms.

Therefore, current studies focus on the following to promote the development of trustworthy AI: ① qualitatively or quantitatively interpreting knowledge representations modeled using AI systems, such as visualizing semantic information in intermediate-layer features and quantifying the importance of input variables to final decisions; ② evaluating, interpreting, and improving AI systems’ representation capacity, including generalization, robustness, and fairness, among others; ③ explaining why AI system optimization algorithms are effective, and exploring and identifying latent defects in recent empirical optimization algorithms; ④ designing interpretable AI systems to enhance credibility at the system design stage.
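As a concrete illustration of point ①, the sketch below estimates input-variable importance by occlusion: each feature is ablated in turn and the resulting change in the model output is recorded. This is a minimal sketch; the linear "model" is a hypothetical stand-in for a trained network, and real studies use richer attribution methods (e.g., Shapley values or gradient-based attributions).

```python
import numpy as np

def occlusion_importance(model, x, baseline=0.0):
    """Estimate each input variable's importance as the change in
    model output when that variable is replaced by a baseline value."""
    base_out = model(x)
    scores = np.zeros(len(x))
    for i in range(len(x)):
        x_occluded = x.copy()
        x_occluded[i] = baseline          # ablate one feature
        scores[i] = base_out - model(x_occluded)
    return scores

# Hypothetical model: a fixed linear scorer standing in for a trained network.
w = np.array([2.0, -1.0, 0.0, 0.5])
model = lambda x: float(w @ x)

x = np.array([1.0, 1.0, 1.0, 1.0])
print(occlusion_importance(model, x))     # -> [ 2.  -1.   0.   0.5]
```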

Although trustworthy AI has recently received extensive attention, several key bottlenecks remain largely unexplored, including the following: ① exploring, identifying, and quantifying the essential factors that determine an AI system’s representation capacity; ② unifying and interpreting the internal mechanisms of various empirical AI algorithms, thereby uncovering the common nature behind their effectiveness and grasping the essential parts of previous algorithms; ③ the theory-driven design and optimization of AI systems. A few international research institutions and teams, such as those at the Massachusetts Institute of Technology (MIT) and Shanghai Jiao Tong University, have recognized these key issues and conducted forward-thinking explorations of them.

(3) Silicon-based CMOS terahertz imaging technique

The terahertz (THz) imaging technique involves directing continuous or pulsed THz waves onto a target and collecting the signals reflected by or transmitted through the object. A Fourier-transform algorithm then extracts the intensity and phase of the signal from each target point, and target imaging is achieved after spectrum analysis and digital signal processing. The THz band lies between the microwave and infrared regions of the electromagnetic spectrum and has special characteristics, such as high transmissivity, low photon energy, high coherence, and high transient response. Therefore, imaging at THz frequencies has several distinct advantages over traditional methods, such as visible-light, ultrasonic, and X-ray imaging, and has wide application prospects in national security, safety inspection, biomedicine, and environmental monitoring.
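The intensity/phase recovery step described above can be sketched numerically as follows: given a sampled echo from one target point, an FFT yields the complex spectrum, whose magnitude and angle at the probe frequency provide the intensity and phase used for imaging. All signal parameters here (sampling rate, probe frequency, the synthetic echo) are illustrative assumptions, not values from the original text.

```python
import numpy as np

fs = 10e12            # sampling rate: 10 THz (illustrative)
f0 = 0.3e12           # probe frequency: 0.3 THz (falls on an exact FFT bin)
n = 1000
t = np.arange(n) / fs

# Hypothetical echo from one target point: attenuated, phase-shifted tone.
amp_true, phase_true = 0.4, 0.9
echo = amp_true * np.cos(2 * np.pi * f0 * t + phase_true)

spectrum = np.fft.rfft(echo)
freqs = np.fft.rfftfreq(n, 1 / fs)
k = np.argmin(np.abs(freqs - f0))          # bin nearest the probe frequency

amplitude = 2 * np.abs(spectrum[k]) / n    # per-point intensity information
phase = np.angle(spectrum[k])              # per-point phase information
print(amplitude, phase)                    # -> 0.4, 0.9 (recovered)
```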

Recently, the THz imaging technique has attracted broad research interest worldwide, driven by the continuous upgrading of silicon-based technology and the great improvement in its radio frequency (RF) performance. The CMOS-based technique is characterized by small size and low power consumption, among other advantages, and can meet the commercial demand for high integration density and low cost. Additionally, the silicon-based CMOS THz imaging technique has achieved several technological breakthroughs in resolution. For example, Cornell University developed a 220 GHz imaging system with a 2 mm lateral resolution and a 2.7 mm range resolution based on 55 nm bipolar complementary metal oxide semiconductor (BiCMOS) technology. However, overcoming the diffraction limit and further improving the imaging resolution remain important research foci, as do the complex parasitic and coupling effects of silicon-based technology, integrated circuit (IC) distribution effects, and source synchronization technology.

(4) Silicon-based AI photonic computing chip theory and design

AI is a strategic technology leading the future, and computing power is the foundation supporting its rapid development. However, as microprocessor performance gains slow and Moore’s law begins to fail, conventional electronic computing circuits cannot meet the growing demand for AI computing power because of the “power wall” and “memory wall”. In contrast, photons possess inherent advantages over electrons, such as low latency, low power consumption, high throughput, and parallelism. Leveraging silicon photonics integration technology, the silicon-based AI photonic computing chip realizes linear analog computation using the physical transmission characteristics of light in silicon waveguides, providing powerful photonic chips for AI applications.
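The linear analog computation can be pictured numerically as below: a general weight matrix is factored by singular value decomposition into two unitary matrices and a diagonal matrix, the decomposition commonly used to program Mach–Zehnder interferometer meshes and per-channel amplitude modulators on chip. This is an idealized sketch assuming lossless, noiseless components and an arbitrary example matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))        # weight matrix to "program" onto the chip
x = rng.normal(size=4)             # input vector, encoded in optical amplitudes

# SVD: W = U @ diag(s) @ Vh. On chip, U and Vh map to interferometer meshes
# and diag(s) to per-channel amplitude modulators.
U, s, Vh = np.linalg.svd(W)

y_photonic = U @ (s * (Vh @ x))    # three physical stages, applied in sequence
y_direct = W @ x

print(np.allclose(y_photonic, y_direct))   # True: the same linear map
```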

Considerable effort has recently been directed to studying silicon-based AI photonic computing chips. Current study areas include, but are not limited to, image matrix convolution photonic computing chips, integral and differential photonic chips, complex-domain Fourier transform photonic chips, reservoir photonic computing chips, photonic neuromorphic computing (brain-like computing) chips, heuristic algorithm solvers for NP-complete problems, and spiking neural network photonic chips.

Silicon-based photonic computing is envisioned as a potential solution for pushing past the limits of electronic computing in the post-Moore era. With the continuous improvement of silicon-based optoelectronic integration, photonic computing chips can greatly accelerate the processing of AI algorithms and create possibilities for new processor architectures. Furthermore, combining photonic analog computing with electronic digital logic will yield integrated optoelectronic computing with complementary advantages, which could change the existing computing model and build a novel computing infrastructure with high capacity and low power consumption. It is therefore regarded as an inevitable development trend.

(5) High-precision space-based gravitational wave detection technology

Space-based gravitational wave detection refers to using multiple spacecraft to form a giant laser interferometer in space to detect gravitational waves. It mainly targets the millihertz frequency band, which contains gravitational wave signals characterized by large magnitude and long duration and features rich source types, large source numbers, and diverse spatial distributions. These factors make the millihertz band a gold mine for gravitational wave detection, with significant implications for astrophysics, cosmology, and fundamental physics research.

Particularly, space-based gravitational wave detection involves two core technologies. The first is establishing the “probe heads” for gravitational wave detection, using a group of objects in nearly ideal inertial motion to provide spatial reference points for measuring the distance changes caused by gravitational waves. The corresponding technology is known as the space inertial reference, which faces the problems of high-precision inertial sensing, micro-Newton-level propulsion, and high-precision drag-free control, among others. The second is developing the “ruler” for gravitational wave detection, using a laser to measure the distance changes between inertial reference points on different satellites. The corresponding technology is known as inter-satellite laser interferometry, which faces the problems of ultra-stable optical benches, long-life satellite-borne frequency-stabilized lasers, and weak-light phase locking, among others.
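The measurement challenge can be made concrete with rough, illustrative numbers, assuming a LISA-like arm length of $L \approx 2.5 \times 10^{9}$ m and a target strain sensitivity of $h \sim 10^{-21}$:

$$ \Delta L \sim h\,L \approx 10^{-21} \times 2.5 \times 10^{9}\,\mathrm{m} \approx 2.5\,\mathrm{pm}, $$

i.e., the inter-satellite interferometer must resolve picometer-level path-length changes over millions of kilometers, which is why the ultra-stable optical bench, frequency-stabilized laser, and weak-light phase locking listed above are core problems.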

Furthermore, space-based gravitational wave detection necessitates a novel concept of spacecraft development. For example, the thrusters, which originally belonged to the satellite bus, are now a key element in constructing the gravitational wave “probe heads”. Additionally, the structure and thermal stability of the satellite bus have become key factors affecting successful gravitational wave detection. Therefore, in designing and developing spacecraft for gravitational wave detection, the boundary between the satellite bus and the payload must be broken down, and the two must be designed as a whole.

Moreover, space-based gravitational wave detection poses a great challenge even to the scientific and technological powers capable of attempting it. After approximately 30 years of planning, the European Space Agency is seeking to launch the first space-based gravitational wave detector in the 2030s, possibly with some minor contribution from the USA. Japan has been relentless in pursuing its own space-based gravitational wave detection mission. China is actively studying space-based gravitational wave detection and striving to play an important role in this field. In 2020, the Chinese Ministry of Science and Technology launched the key R&D plan for “gravitational wave detection”, with most of the funds supporting the key core technologies of space-based gravitational wave detection.

(6) Manufacture of integrated circuits at the atomic scale

The term “atomic scale” in IC technology generally refers to the thickness of a single atomic layer. This thickness depends on the size of the atoms and the crystal structure and is typically on the order of 0.1 nm (about 0.2–0.5 nm). As ICs develop beyond the 10 nm technology node, critical physical dimensions, the tolerance of deviations in those dimensions, and the precision of measurement tools all enter the atomic-scale range. Transistors feature an increasing number of critical layers that are only a few atoms thick or wide, including gate dielectrics, metal gate layers for work-function engineering, and the fins of fin field-effect transistors (FinFETs), all with feature sizes less than 10 atomic layers. As a common process in IC manufacturing, atomic layer deposition (ALD) grows a film at 0.03–0.07 nm per cycle, below the thickness of a single atomic layer. Beyond absolute feature size, the mass production of ICs emphasizes the tolerance of thickness or width control, which determines the yield rate. For example, the variation in work-function metal gate layers should be less than one atomic layer; otherwise, the transistors would suffer unacceptable deviations in threshold voltage and performance, causing failure of IC chips integrating billions of devices. Therefore, the measurement tools applied in IC manufacturing must achieve a precision of approximately 0.01 nm, well below a single atomic layer, to ensure precise measurement of the above dimensions. Additionally, two-dimensional (2D) and oxide channel materials have been widely applied in academic research to fabricate transistors and simple circuits, providing a new route to atomic-scale IC manufacturing in the future.
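A worked example using the figures above (the 2 nm target is illustrative, roughly a modern gate-dielectric thickness): at a growth per cycle (GPC) of about 0.05 nm,

$$ N_{\text{cycles}} = \frac{t_{\text{target}}}{\text{GPC}} \approx \frac{2\,\mathrm{nm}}{0.05\,\mathrm{nm/cycle}} = 40\ \text{cycles}, $$

so the film thickness is set digitally by the cycle count, with sub-monolayer granularity per cycle.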

(7) Brain–machine interfaces and their clinical applications

The brain–machine interface (BMI) is an intelligent system that establishes a direct, bidirectional communication pathway between the brain and external devices. A BMI either controls external devices using brain signals or modulates the brain state using external devices, aiming to monitor brain states, treat brain disorders, and enhance brain functions. Since the 1970s, when the concept of the BMI was first proposed, BMI technology has advanced continuously, with explosive development in the past decade. The key technologies in BMI are as follows: ① electrode design, fabrication, and minimally invasive implantation technology to obtain large-scale neural signals; ② neural decoding technology to understand brain activity from complex, large-scale neural signals; ③ electrical, magnetic, and optical technology to stimulate neural populations; ④ closed-loop neural modulation technology based on real-time neural signals; ⑤ high-performance, low-power neurochip technology, which integrates neural signal storage, decoding, stimulation, and modulation.

BMIs have many applications in the diagnosis, treatment, and rehabilitation of mental and neurological diseases, among other aspects. First, BMIs can restore motor and sensory functions. Such BMIs use neural signals to decode the brain’s movement intentions and drive external devices to complete the intended movements while providing sensory feedback, offering new approaches to treating motor disabilities such as paralysis. Recently, this BMI type has been extended to explore the restoration of more precise motor and sensory functions, such as human speech decoding and vision recovery. Second, BMIs can enhance cognitive functions. Such BMIs employ external devices to reconstruct or facilitate communication between brain regions, aiming to repair or enhance specific cognitive functions; one example is the development of memory prostheses to explore the enhancement of impaired memory functions in patients. Third, BMIs can facilitate the treatment of neurological and neuropsychiatric disorders. Such BMIs use neural signals to guide external devices in delivering optimal brain stimulation in real time for precise intervention. These BMIs have demonstrated great potential in treating neurological and neuropsychiatric disorders, such as Parkinson’s disease, epilepsy, and depression.
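As a minimal sketch of the decoding step in a motor BMI, the snippet below fits a ridge-regression map from binned firing rates to a 2D cursor velocity on synthetic data. All dimensions and the linear encoding model are hypothetical; practical decoders are typically Kalman filters or recurrent networks trained on real recordings.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_units = 2000, 96              # time bins, recorded neurons (hypothetical)

# Synthetic session: neural rates X linearly encode 2D velocity Y plus noise.
W_true = rng.normal(size=(n_units, 2))
X = rng.poisson(5.0, size=(T, n_units)).astype(float)   # binned spike counts
Y = X @ W_true + rng.normal(scale=5.0, size=(T, 2))     # cursor velocity

# Ridge-regression decoder: W = (X^T X + lam*I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_units), X.T @ Y)

Y_hat = X @ W
r2 = 1 - ((Y - Y_hat) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum()
print(f"decoding R^2 = {r2:.3f}")  # near 1 on this easy synthetic data
```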

Although BMI technology has broad clinical applications, improving its performance, accuracy, efficiency, and safety still requires resolving several major challenges, as follows: ① developing neural recording and stimulation hardware that is stable in the long term, biocompatible with brain tissue, and of high spatial–temporal resolution; ② developing accurate and stable BMI decoding algorithms to achieve fine-grained control of various complex external devices; ③ developing precise and robust BMI modulation algorithms to modulate various brain states effectively and safely; ④ studying the ethics and data security of BMI technology.

(8) Humanoid robot behavioral developmental learning and cognitive technology

Humanoid robots should interact with the surrounding physical world in a developmental learning manner to strengthen their behavioral capabilities, enhance human-like cognition (e.g., movement, manipulation, understanding, memory, and reasoning), and exhibit more intelligent behavior. The related technology is known as humanoid robot behavioral developmental learning and cognitive technology. Its research directions are as follows: ① autonomous behavioral development; ② embodied intelligence (the co-evolution of the robot’s bodily structure and intelligence as it performs various tasks in real physical environments); ③ affordance learning (the potential behaviors between robots and environments and the effects of those behaviors); ④ robot learning platforms (simulation software or real physical machines). Novel brain-like algorithms with learning and cognitive capabilities, particularly memory and learning, should therefore be developed; behavioral cognitive systems will enable robots to become more human-like and to perform active, intrinsically motivated, lifelong learning and development in motor skills and behavioral cognition. Perceptual representations that generalize across environments and tasks and that are intertwined with multimodal perceptual joint learning are crucial. Studying AI as a physical entity, in which changes in robotic bodies in response to natural selection are examined in a simulation environment, is a completely different paradigm from viewing AI as only an algorithm. Additionally, studying robot affordances is necessary for rescue and exploration tasks: analyzing the potential behaviors between the robot and the environment and their effects enables the robot to perform better in unknown environments. Finally, robotic platforms must be upgraded or developed to enable better analysis of the functional interactions among robots, humans, and the environment.

(9) Quantum circuits and chips

The quantum circuit model is a universal language for describing quantum algorithms, with each computation process comprising a sequence of quantum gates, measurements, and other possible operations. Most quantum algorithms, including Shor’s algorithm, the Grover search algorithm, and the HHL algorithm, are presented via the quantum circuit model, which is also widely used to simulate quantum physical and chemical systems. Quantum computing has now entered the era of noisy intermediate-scale quantum systems: given the current technology and engineering level of physical hardware, there are inherent limitations on the scale, depth, and qubit number of quantum circuits, so the degree to which quantum circuits can be simplified directly affects the scope of quantum computer applications. Designing quantum circuits for practical computation problems with as few gates, as shallow a depth, and as few qubits as possible is therefore an important research direction in quantum computation, as is characterizing the computational power of quantum circuits under various resource constraints and the separation between quantum and classical circuits.
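To make the circuit model concrete, the sketch below simulates a two-qubit circuit (a Hadamard followed by a CNOT) by statevector evolution, producing a Bell state. This two-gate, depth-2 circuit merely illustrates the gate-sequence abstraction that circuit-optimization research operates on; it is not tied to any specific algorithm from the text.

```python
import numpy as np

# Single-qubit Hadamard and two-qubit CNOT (control = qubit 0),
# with basis states ordered |q0 q1> = 00, 01, 10, 11.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Circuit: |00> -- H on qubit 0 -- CNOT --> Bell state (|00> + |11>)/sqrt(2)
state = np.array([1, 0, 0, 0], dtype=complex)
state = np.kron(H, I) @ state     # layer 1: one gate, depth 1
state = CNOT @ state              # layer 2: one gate, depth 2

print(np.round(state, 3))          # amplitudes ~ [0.707, 0, 0, 0.707]
```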

Quantum chips are compact quantum circuits integrated on various physical platforms. Realizing quantum chips is an inevitable path toward the practical commercialization of quantum communication and computation. Based on the physical platform on which the quantum circuits depend, the technical routes of quantum chips can be categorized into superconducting quantum chips, semiconductor quantum dot (QD) chips, and optical quantum systems. Among these implementations, superconducting circuits outperform other platforms in the number of qubits that can be integrated; semiconductor QDs have good integrability and scalability, making them strong candidates for solid-state quantum computing; and optical quantum chips benefit from techniques developed for traditional photonic ICs and classical optical communication. The major challenges of quantum chip implementation include improving quantum gate fidelity and coherence time and decreasing crosstalk and measurement error; fault-tolerant circuits and chips will play an important role. Future work is devoted to expanding the scale of integration and improving coherence time, manipulation accuracy and speed, and scalability.

(10) Future industrial internet architecture and full-element interconnection

The term “industrial internet” was first proposed by General Electric (GE) in 2012, focusing mainly on predictive maintenance to realize industrial automation and intelligence. Subsequently, European countries, led by Germany, put forward “Industry 4.0” in 2013, and China announced the “Made in China 2025” plan in 2015, which gave a richer connotation to the industrial internet and gradually established the full-element interconnection architecture.

The industrial internet architecture includes the network infrastructure, enabler platform, and security mechanism. The so-called “full-element interconnection” means connecting people, machines, materials, rules, and the environment through networks and identification systems. This interconnection also spans entire lifecycles, linking the value, supply, and industrial chains with R&D, production, and logistics, among others. The related technology spans four directions: ① network technologies, such as interconnection, deterministic transmission, identifier resolution, and computing–network convergence; ② data-handling operations, such as data collection, cleaning, training, and analysis; ③ intelligent platform and management technologies, such as cyber–physical systems, model and application analyses, and supply chain and lifecycle management; ④ security technologies, covering network, data, and physical security, among others.

The industrial internet has advanced from conceptual consensus to the trial and deployment stage. Specific technologies, such as platforms, identifiers, and 5G, have been applied in industry. Notably, the identifier resolution system has been implemented at the five National Top Nodes in China. From the perspective of architecture and technology, the following trends appear on the horizon:

1) Toward specific scenarios: Deriving architectures that guide implementation in various scenarios and expanding deployment to small- and medium-sized enterprises.

2) More integrated technologies: Information technology (IT), operational technology (OT), and communication technology (CT) will be further integrated to solve the ecological interconnection of cloud networks and to improve the efficiency and quality of all production and manufacturing links by combining virtual reality, digital twins, and deterministic lossless connections.

3) Trusted and more secure: Privacy-preserving and data-trusted technology will be further adopted to solve personnel, system, and equipment security problems.

4) Systematic promotion: Improving the overall automation system and leveraging new communication technologies enables the generation of a full-element interconnected system, connects the business chain, and creates new industrial models.

1.2 Interpretations for three key engineering research fronts

1.2.1 Space–air–ground–sea integrated networking theory and technique

The SAGSIN comprises the space, air, ground, and sea networks, which complement the conventional terrestrial network in coverage and flexibility, and it is essential for realizing ubiquitous network access and service customization. The non-uniform mechanisms of existing networks, massive variations in resource distribution, complex and dynamic wireless channels, and network security all challenge network operators. The four main research directions of the SAGSIN address these challenges: network architecture design, protocol design, resource management, and transmission techniques. We introduce the advances in these aspects as follows:

1) Two main trends exist in network architecture design. The first, promoted by the international standardization organization 3GPP, is the integration of non-terrestrial networks (NTNs), such as satellite and aerial networks, with terrestrial cellular networks, in which the NTNs are vital to forming an interconnected network for 5G and future 6G. The other trend is to design an efficient, globally controllable, and low-cost virtualized SAGSIN architecture based on software-defined networking (SDN) and network function virtualization (NFV). The main research institutions in this direction include the University of Waterloo, Tsinghua University, and Beijing Jiaotong University, among others.

2) For communication protocol design, the Consultative Committee for Space Data Systems (CCSDS) protocol can achieve near-lossless multimedia streaming transmission in the SAGSIN despite payload limitations, through the iterative processing of adjacent frames, which greatly expands the information exchange capability of space mission systems. The digital video broadcasting (DVB) series protocols overcome the limitation that traditional uplink power control places on the volume of the RF front-end, which effectively improves the spectrum efficiency of satellite communication links, optimizes the space segment of the network, and significantly reduces the cost of satellite-based internet protocol (IP) services. However, both protocols were proposed some time ago, and organizations and institutions, including 3GPP, are exploring new communication protocols.

3) Two main trends exist in SAGSIN resource management. One is the AI-driven resource management and scheduling mechanism, which can adapt to the numerous network nodes, large decision spaces, and heterogeneous resources of SAGSINs, thereby effectively improving the use of network resources. The other is service function chain or network slicing resource scheduling technology, which uses SDN and NFV to abstract and pool physical network resources, maintaining service isolation between users and satisfying multidimensional requirements. The main research institutions in this direction include Tsinghua University, the University of Waterloo, Xidian University, and the National University of Defense Technology, among others.

4) Inter-satellite laser communication is a promising technology for realizing high-speed inter-satellite links. Compared with RF-based inter-satellite communication, it can achieve a much higher data transmission rate with a smaller antenna size, as the link-budget sketch below illustrates. Simultaneously, the narrower, more directive laser beam suppresses interference and promotes network security. Currently, microwave communication remains the main inter-satellite link method in engineering, and preliminary inter-satellite laser communication tests and deployment are expected by the end of 2023. The main research institutions in this direction include Beihang University, Xidian University, Southeast University, Beijing Jiaotong University, and Northeastern University, among others.
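The antenna-size advantage can be quantified with the standard ideal aperture-gain formula G = (πD/λ)²; the sketch below compares the same 0.3 m aperture at an assumed 26 GHz RF carrier and at a 1 550 nm laser wavelength. The numbers are illustrative and ignore aperture efficiency and pointing losses.

```python
import numpy as np

def aperture_gain_db(diameter_m, wavelength_m):
    """Ideal antenna/telescope gain of a circular aperture, in dB."""
    return 10 * np.log10((np.pi * diameter_m / wavelength_m) ** 2)

c = 3e8
d = 0.3                                  # aperture diameter: 30 cm
lam_rf = c / 26e9                        # assumed Ka-band RF carrier, 26 GHz
lam_opt = 1550e-9                        # telecom-band laser wavelength

print(f"RF gain:      {aperture_gain_db(d, lam_rf):6.1f} dB")    # ~38 dB
print(f"Optical gain: {aperture_gain_db(d, lam_opt):6.1f} dB")   # ~116 dB
# The ~77 dB difference is why laser terminals can reach far higher data
# rates with far smaller apertures than RF inter-satellite links.
```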

The construction of LEO constellation systems is also a necessary R&D focus for the SAGSIN. The Iridium system was the earliest planned and deployed global-coverage satellite network. In 2015, SpaceX’s proposed “Starlink” LEO satellite network sparked interest in academia and industry: SpaceX announced that tens of thousands of low-orbit satellites would be launched to provide high-speed network access globally. To date, “Starlink” has completed its initial deployment, reaching a peak download speed of 301 Mbps and providing network access in dozens of European and American countries. Additionally, several LEO satellite communication systems are expected to be built in China, including “Apocalypse”, “Hongyan”, “StarRides”, and “China SatNet”, among others, the earliest of which are expected to be deployed by the end of 2023.

Table 1.2.1 presents the countries with the greatest output of core papers on “space–air–ground–sea integrated networking theory and technique”. China has an obvious advantage: its number of core papers is about three times that of second-placed Canada. China cooperates internationally mainly with Canada, with some collaboration with the UK, the USA, and Japan (Figure 1.2.1). Among the Top 10 institutions with the greatest output of core papers (Table 1.2.2), the University of Waterloo in Canada produces the most papers. Six of the Top 10 institutions are from China; the rest are distributed across Japan, Norway, and the UK. Regarding institutional cooperation (Figure 1.2.2), five institutions from China collaborate closely with the University of Waterloo, two collaborate closely with the University of Surrey, and the Beijing Institute of Technology and the University of Oslo have some cooperation. Concerning the output of citing papers (Table 1.2.3), China ranks first, accounting for 49.62%, followed by the USA; each of the remaining countries accounts for less than 10%. Among the Top 10 citing-paper producers (Table 1.2.4), all are from China except the fifth-ranked University of Waterloo, reflecting China’s considerable attention to this direction.

Currently, the theory and technology of space–air–ground–sea integrated communication networking are at different levels of development in China and abroad, but all are in the design and initial deployment stage. Figure 1.2.3 is the roadmap of the engineering research front of “space–air–ground–sea integrated networking theory and technique”. From the perspective of technical indicators, the largest LEO constellation will contain thousands of satellites by 2025, and the scale of a single constellation is expected to reach tens of thousands by 2030. In the next 5 years, the LEO satellite network test rate can reach 500 Mbps, with a delay of 60 ms.

Table 1.2.1 Countries with the greatest output of core papers on “space–air–ground–sea integrated networking theory and technique”

No. Country Core papers Percentage of core papers/% Citations Citations per paper Mean year
1 China 35 85.37 2 958 84.51 2019.6
2 Canada 13 31.71 1 260 96.92 2019.9
3 UK 8 19.51 613 76.62 2019.9
4 Japan 7 17.07 781 111.57 2020
5 USA 6 14.63 328 54.67 2020.7
6 Norway 3 7.32 499 166.33 2019
7 Saudi Arabia 3 7.32 127 42.33 2021
8 Singapore 3 7.32 126 42 2020
9 Australia 2 4.88 220 110 2019
10 India 2 4.88 129 64.5 2020.5

Figure 1.2.1 Collaboration network among major countries in the engineering research front of “space–air–ground–sea integrated networking theory and technique”

Table 1.2.2 Institutions with the greatest output of core papers on “space–air–ground–sea integrated networking theory and technique”

No. Institution Core papers Percentage of core papers/% Citations Citations per paper Mean year
1 University of Waterloo 8 19.51 978 122.25 2019.6
2 Xidian University 7 17.07 788 112.57 2019.9
3 Southeast University 7 17.07 483 69 2020
4 Tsinghua University 5 12.2 296 59.2 2019.8
5 Tohoku University 4 9.76 677 169.25 2019.2
6 University of Oslo 3 7.32 499 166.33 2019
7 Beijing Institute of Technology 3 7.32 456 152 2018.3
8 University of Surrey 3 7.32 362 120.67 2020.3
9 Beijing Jiaotong University 3 7.32 341 113.67 2018.3
10 Purple Mountain Laboratories 3 7.32 285 95 2020.7

Figure 1.2.2 Collaboration network among major institutions in the engineering research front of “space–air–ground–sea integrated networking theory and technique”

Table 1.2.3 Countries with the greatest output of citing papers on “space–air–ground–sea integrated networking theory and technique”

No. Country Citing papers Percentage of citing papers/% Mean year
1 China 1 634 49.62 2020.4
2 USA 340 10.32 2020.4
3 Canada 310 9.41 2020.2
4 UK 219 6.65 2020.5
5 South Korea 147 4.46 2020.6
6 India 141 4.28 2020.5
7 Australia 130 3.95 2020.3
8 Saudi Arabia 119 3.61 2020.6
9 Japan 107 3.25 2020.4
10 Germany 75 2.28 2020.3

From 2027 to 2032, the LEO satellite network test rate will reach at least 5 Gbps, with a delay as low as 20 ms. The total network throughput of LEO satellite networks is predicted to reach 97 Tbps in 2022–2024, 218 Tbps in 2025–2028, and 820 Tbps by 2032. From the viewpoint of research directions, the main directions in current “space–air–ground–sea integrated networking theory and technique” include network construction, inter-satellite laser communication technology, space–air–ground–sea networking protocol design, multimode fusion terminal design, and potential application development.

Table 1.2.4 Institutions with the greatest output of citing papers on “space–air–ground–sea integrated networking theory and technique”

No. Institution Citing papers Percentage of citing papers/% Mean year
1 Xidian University 166 15.26 2020.3
2 Beijing University of Posts and Telecommunications 161 14.8 2020.3
3 Southeast University 141 12.96 2020.6
4 Tsinghua University 111 10.2 2020.2
5 University of Waterloo 110 10.11 2019.9
6 Nanjing University of Posts and Telecommunications 76 6.99 2020.6
7 Beihang University 76 6.99 2020.2
8 Beijing Jiaotong University 67 6.16 2020
9 Peng Cheng Laboratory 66 6.07 2020.7
10 Nanjing University of Aeronautics and Astronautics 59 5.42 2020.5

Figure 1.2.3 Roadmap of the engineering research front of “space-air-ground-sea integrated networking theory and technique”

The construction of the integrated network is currently in its initial stage, which includes the construction of the LEO satellite backbone network and the testing of system terminals; the backbone construction is expected to be completed by the end of 2025, and more low- and super-low-orbit satellites will be launched according to application needs by 2032. Regarding inter-satellite laser communication technology, inter-satellite communication in the LEO satellite network is still under development: current links rely primarily on microwave communication, while laser communication remains in the test stage and is expected to be deployed by 2025. Concerning satellite multimode fusion terminals, requirements on mass, volume, compatibility in heterogeneous networks, and application integration must be satisfied to adapt to multiple systems, bands, networks, and applications, among others. At present, the satellite multimode technique is only at a preliminary stage, with related products limited to gateways and larger terminals; portable terminals are expected to be designed and introduced to the market from 2025 to 2032. As for potential application development, the current application scenarios of the SAGSIN mainly focus on wide-area broadband access, military communications, the IoT, and the Internet of Vehicles, among others; in the future, more services will be explored to further exploit the SAGSIN’s potential capabilities. Additionally, with the support of international organizations such as 3GPP and IMT-2030, the standardization of space–air–ground–sea integrated communication and networking has officially started, and part of the agenda is gradually being carried out. Relevant technologies, protocols, and index requirements will be further improved in the next 5–10 years.

1.2.2 Theories and algorithms for trustworthy AI

The AI field has advanced by leaps and bounds, as evidenced by the great success of complex AI systems such as deep neural networks. However, these systems are frequently regarded as black boxes because of their complex architectures and massive numbers of parameters. People neither understand the internal decision logic of these systems nor can they explain the systems’ advantages and disadvantages in terms of representation capacity, such as why neural networks achieve superior performance yet remain highly vulnerable to adversarial attacks. Given the uninterpretability of AI systems’ decision-making processes and representation and optimization capabilities, the credibility, controllability, and security of these systems are greatly impaired, hindering the widespread application of AI, particularly in intelligent healthcare, self-driving, and other high-risk areas.

Therefore, academia and industry are committed to enhancing the interpretability of AI systems and algorithms to build trustworthy, controllable, and secure AI. Specifically, trustworthy AI aims to enhance the interpretability and quantifiability of AI systems in knowledge representation, representation capacity, optimization, and learning ability, and the interpretability of the AI algorithms’ internal mechanism.

Recently, the major research interests of trustworthy AI are as follows: ① qualitatively or quantitatively interpreting the knowledge representation modeled with AI systems, such as visualizing the semantic information contained in intermediate-layer features and quantifying the importance of input variables to final decisions; ② evaluating, explaining, and improving the representation capacity of AI systems, such as theoretically proving or empirically studying the boundaries of generalization and robustness, explaining the internal mechanisms of neural networks in generalization, robustness, and representation bottlenecks, and developing methods (e.g., adversarial training) to improve robustness or fairness or to avoid privacy leakage; ③ explaining the internal mechanisms behind the effective optimization of AI systems and exploring latent defects in current empirical optimization algorithms, for example, explaining why optimization methods such as stochastic gradient descent and dropout are effective and identifying potential mathematical defects in classical operations such as batch normalization; ④ designing interpretable AI systems that incorporate credibility into the architecture at the design stage, for example, a convolutional neural network whose specifically designed objective function makes each filter of a high-level convolutional layer automatically represent specific semantics.
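As one concrete instance of the robustness-improvement methods in point ②, the snippet below applies the fast gradient sign method (FGSM), the perturbation that adversarial training uses as its inner step, to a hand-rolled logistic model. The weights and input are synthetic stand-ins for a trained network.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Hypothetical trained logistic classifier p(y=1|x) = sigmoid(w @ x).
w = np.array([1.5, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.4])
y = 1.0

# Cross-entropy loss gradient w.r.t. the *input*: (p - y) * w.
p = sigmoid(w @ x)
grad_x = (p - y) * w

# FGSM: one signed gradient step of size eps increases the loss.
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

print(f"clean p(y=1) = {sigmoid(w @ x):.3f}")       # 0.668
print(f"adv   p(y=1) = {sigmoid(w @ x_adv):.3f}")   # 0.574: attack succeeded
```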

Notably, the field of trustworthy AI has recently received extensive attention and produced many achievements. Tables 1.2.5 and 1.2.6 present the countries and institutions, respectively, with the greatest output of core papers on this research front, and Tables 1.2.7 and 1.2.8 present the countries and institutions with the greatest output of citing papers. Typical institutions include MIT, the Chinese Academy of Sciences, and Shanghai Jiao Tong University, among others, mainly in the USA and China. Additionally, many of the core papers are jointly produced by research institutions in different countries. Figures 1.2.4 and 1.2.5 show the cooperation networks among major countries and institutions, respectively.

Although trustworthy AI has received extensive attention recently, most studies remain at the level of engineering algorithms, such as visualizing the neurons of a neural network, estimating the importance of input variables, and evaluating the robustness of neural networks by their accuracy under adversarial attacks. Several critical and fundamental bottlenecks in trustworthy AI are rarely explored, including the following:

1) Exploring, identifying, and quantifying the essential factors that determine the representation capacity of AI systems: Many factors, such as network architectures and optimization tricks, affect the representation capacity of AI systems, and such factors frequently produce complex and diverse effects. Determining the fundamental factors that impact representation capacity is therefore challenging.

Table 1.2.5 Countries with the greatest output of core papers on “theories and algorithms for trustworthy AI”

No. Country Core papers Percentage of core papers/% Citations Citations per paper Mean year
1 USA 67 42.68 21 672 323.46 2018.9
2 China 36 22.93 1 990 55.28 2020
3 UK 19 12.1 1 389 73.11 2019.8
4 Germany 18 11.46 1 501 83.39 2019.5
5 Italy 12 7.64 319 26.58 2020.3
6 Austria 9 5.73 627 69.67 2020
7 South Korea 8 5.1 750 93.75 2019.8
8 Australia 7 4.46 1 170 167.14 2019.9
9 Canada 7 4.46 776 110.86 2019.3
10 Switzerland 7 4.46 282 40.29 2019.3

Table 1.2.6 Institutions with the greatest output of core papers on “theories and algorithms for trustworthy AI”

No. Institution Core papers Percentage of core papers/% Citations Citations per paper Mean year
1 University of California, Los Angeles 11 7.01 875 79.55 2019.2
2 Stanford University 8 5.1 2 638 329.75 2018.2
3 Korea University 6 3.82 716 119.33 2019.5
4 Medical University of Graz 6 3.82 586 97.67 2019.7
5 University of Pisa 6 3.82 199 33.17 2019.8
6 University of California, Berkeley 5 3.18 4 561 912.2 2017.2
7 Technical University of Berlin 5 3.18 710 142 2019.2
8 Fraunhofer Heinrich Hertz Institute 5 3.18 695 139 2019.4
9 Shanghai Jiao Tong University 5 3.18 123 24.6 2020.6
10 University of Granada 4 2.55 1 086 271.5 2020.2

Table 1.2.7 Countries with the greatest output of citing papers on “theories and algorithms for trustworthy AI”

No. Country Citing papers Percentage of citing papers/% Mean year
1 China 7 372 33.09 2020.4
2 USA 5 719 25.67 2020.2
3 UK 1 722 7.73 2020.3
4 Germany 1 471 6.6 2020.4
5 South Korea 1 130 5.07 2020.4
6 Australia 915 4.11 2020.4
7 Canada 901 4.04 2020.3
8 Japan 890 3.99 2020.3
9 Italy 771 3.46 2020.4
10 India 739 3.32 2020.4

Table 1.2.8 Institutions with the greatest output of citing papers on “theories and algorithms for trustworthy AI”

No. Institution Citing papers Percentage of citing papers/% Mean year
1 Chinese Academy of Sciences 750 22.27 2020.3
2 Zhejiang University 333 9.89 2020.4
3 Tsinghua University 325 9.65 2020.1
4 Harvard University 282 8.37 2020.4
5 Stanford University 276 8.19 2020.2
6 Shanghai Jiao Tong University 266 7.9 2020.3
7 Massachusetts Institute of Technology 260 7.72 2020.2
8 University of Electronic Science and Technology of China 245 7.27 2020.4
9 Peking University 216 6.41 2020.2
10 Wuhan University 212 6.29 2020.4

Figure 1.2.4 Collaboration network among major countries in the engineering research front of “theories and algorithms for trustworthy AI”

Figure 1.2.5 Collaboration network among major institutions in the engineering research front of “theories and algorithms for trustworthy AI”

Moreover, researchers can accurately evaluate and interpret the representation capacity of AI systems only when its decisive factors are clearly identified.

2) Unifying and interpreting the internal mechanisms of various empirical AI algorithms: To address a specific research problem, scholars propose different AI algorithms from different empirical perspectives, yet these algorithms often contain identical or similar internal mechanisms. Unifying and explaining the internal mechanisms of these different empirical algorithms may reveal their common essence and allow their reliability to be evaluated and compared.

3) Theory-driven design and optimization of AI systems, particularly neural network systems: The current architecture design and training optimization of AI systems are mostly empirical; that is, people develop effective architectural designs and optimization algorithms based on experimental observations. However, unified theoretical feedback is required to guide the design and optimization of AI systems so that their representation capacity can meet the demands of specific tasks. Only then can the controllability of AI systems be truly realized.

Specifically, a few international research teams have recognized the abovementioned key issues and conducted forward-thinking explorations of them. For example, a team from Shanghai Jiao Tong University uniformly explained numerous algorithms for improving adversarial transferability, whereas researchers from the University of California, Berkeley proposed that the principles of “self-consistency” and “simplicity” are the cornerstones of AI systems and used these principles to guide the design of AI systems with interpretable representations and training.

In the past decade, much research progress has been achieved in trustworthy AI. However, from the perspective of the development of the entire field, trustworthy AI remains in its infancy, and many key bottlenecks urgently need to be solved. Important research directions for the next 5–10 years include the following aspects (Figure 1.2.6).

First, further improve the explanation of knowledge representations. Most current explanations stem from heuristics and lack theoretical reliability; moreover, these explanations have no ground truth, making it impossible to verify their reliability. Future research priorities may include: ① unifying numerous existing empirical explanations and revealing their common mechanism; ② proposing new explanations with theoretical guarantees; ③ objectively evaluating the reliability of explanations.

Second, develop the explanation and quantification of representation capacity. Future research priorities may include: ① exploring and identifying the essential factors that affect the representation capacity of AI systems; ② uniformly explaining the internal mechanisms of various AI algorithms proposed for improving representation capacity; ③ proposing precise quantitative metrics to evaluate the representation capacity of AI systems; ④ explaining and proving the properties and defects of the representation capacity of AI systems.

Third, conduct theory-driven design and optimization of AI systems. Today, effective architectural design and optimization methodologies are typically developed from experimental observations, but trustworthy AI requires a unified theory that can provide targeted feedback to guide system design and optimization. Future research priorities in this area could be ① investigating the connection between network architectures and knowledge representations; ② investigating the connection between model performance and knowledge representations; and ③ investigating the relationship between knowledge representations and various types of representation capacity, such as generalization, robustness, and fairness.

1.2.3 Silicon-based CMOS terahertz imaging technique

Traditional terahertz (THz) imaging methods are based mainly on purely electronic or photoelectric setups. The former depend mainly on Schottky diodes and III–V devices, whereas the latter rely on photoconductance, optical rectification, and quantum cascade lasers. In practice, these devices are expensive and bulky, some even require cooling equipment, and they are incompatible with traditional microelectronics packaging methods, further increasing the difficulty of circuit integration. With the rapid development of silicon-based technology, however, the silicon-based CMOS THz imaging technique achieves lower power consumption and smaller size, meeting the market demand for low cost and high integration density, and it has gradually become a hot research topic in THz imaging.

Table 1.2.9 summarizes the research status of the silicon-based CMOS THz imaging technique. The top three countries in core papers are the USA, Germany, and China. In terms of citations, China is overtaken by Japan and France and drops to fifth place. The main research institutions with the greatest output of core papers are shown in Table 1.2.10. The University of Wuppertal and Vilnius University are the top two institutions; in China, only Nanjing University breaks into the Top 10. Regarding total citations, the top three institutions are Princeton University, the University of Wuppertal, and the University of Michigan, while Nanjing University ranks around tenth.

Figure 1.2.6 Roadmap of the engineering research front of “theories and algorithms for trustworthy AI”

Table 1.2.9 Countries with the greatest output of core papers on “silicon-based CMOS terahertz imaging technique”

No. Country Core papers Percentage of core papers/% Citations Citations per paper Mean year
1 USA 38 31.15 519 13.66 2018.3
2 Germany 33 27.05 359 10.88 2018.7
3 China 18 14.75 107 5.94 2018.4
4 France 10 8.2 147 14.7 2017.6
5 Poland 10 8.2 42 4.2 2020.4
6 Lithuania 10 8.2 41 4.1 2020.3
7 Israel 6 4.92 17 2.83 2016.8
8 Switzerland 6 4.92 12 2 2017.2
9 Japan 5 4.1 222 44.4 2018.8
10 UK 4 3.28 102 25.5 2017

Regarding cooperation among major countries (Figure 1.2.7), China’s main partner is the USA, while Germany has developed extensive cooperative ties with countries in Europe, America, and Asia. Regarding cooperation among major institutions (Figure 1.2.8), stable cooperation has been established in continental Europe, including the General Jonas Žemaitis Military Academy of Lithuania, Vilnius University, and the Institute of High Pressure Physics of the Polish Academy of Sciences, while Cornell University has built partnerships with the University of California, Los Angeles and the University of Michigan. Countries with the greatest output of citing papers on this front are shown in Table 1.2.11: China accounts for over a third of the publications and ranks first worldwide, with the USA and Germany second and third, respectively. In Table 1.2.12, China dominates the list of institutions, with seven of the world’s Top 10, whereas two are in the USA and one is in Germany.

Initial research on silicon-based CMOS THz imaging focused primarily on three factors: high sensitivity, high integration density, and high resolution. Incoherent direct-detection technology was used initially, but its low sensitivity and high input power requirement are the main challenges for solid-state electronic design. Based on 0.13 μm silicon germanium (SiGe) BiCMOS technology, a coherent imaging transceiver chip can improve the sensitivity more than tenfold. The array scale must be expanded to achieve higher imaging resolution, but this is impossible for traditional coherent detection arrays because of the limitation of centralized design methods for local oscillator signals.

Table 1.2.10 Institutions with the greatest output of core papers on “silicon-based CMOS terahertz imaging technique”

No. Institution Core papers Percentage of core papers/% Citations Citations per paper Mean year
1 University of Wuppertal 20 16.39 294 14.7 2018.5
2 Vilnius University 9 7.38 39 4.33 2020.7
3 Princeton University 8 6.56 301 37.62 2018.8
4 Institute of High Pressure Physics, Polish Academy of Sciences 8 6.56 39 4.88 2020.8
5 University of California, Los Angeles 8 6.56 20 2.5 2019.6
6 University of Michigan 6 4.92 109 18.17 2018
7 General Jonas Žemaitis Military Academy of Lithuania 5 4.1 20 4 2020.8
8 Nanjing University 5 4.1 8 1.6 2018.6
9 University of Glasgow 4 3.28 102 25.5 2017
10 Cornell University 4 3.28 97 24.25 2017.5

Figure 1.2.7 Collaboration network among major countries in the engineering research front of “silicon-based CMOS terahertz imaging technique”

Figure 1.2.8 Collaboration network among major institutions in the engineering research front of “silicon-based CMOS terahertz imaging technique”

Table 1.2.11 Countries with the greatest output of citing papers on “silicon-based CMOS terahertz imaging technique”

No. Country Citing papers Percentage of citing papers/% Mean year
1 China 464 34.73 2020.1
2 USA 276 20.66 2019.7
3 Germany 150 11.23 2019.7
4 Japan 76 5.69 2020.3
5 South Korea 76 5.69 2019.9
6 UK 63 4.72 2019.8
7 Spain 52 3.89 2019.9
8 India 51 3.82 2020.2
9 Italy 50 3.74 2020
10 France 44 3.29 2019.6

《Table 1.2.12》

Table 1.2.12 Institutions with the greatest output of citing papers on “silicon-based CMOS terahertz imaging technique”

No. Institution Citing papers Percentage of citing papers/% Mean year
1 Chinese Academy of Sciences 89 23.67 2019.9
2 Huazhong University of Science and Technology 40 10.64 2019.8
3 University of Wuppertal 36 9.57 2019.3
4 Tianjin University 35 9.31 2019.9
5 University of Electronic Science and Technology of China 34 9.04 2020.2
6 Princeton University 29 7.71 2019.8
7 Peking University 26 6.91 2020
8 Zhejiang University 23 6.12 2020.2
9 Southeast University 23 6.12 2020
10 Massachusetts Institute of Technology 21 5.59 2019.6

of the limitation of centralized design methods for local oscillator signals. Instead, a 32-element phase-locked dense heterodyne receiver array based on 65 nm CMOS technology enables the integration of two interleaved 4×4 arrays onto a 1.2 mm² chip, significantly shrinking the whole receiver array. Building on this lateral-resolution improvement, fully integrated ultra-wideband (UWB) inverse synthetic aperture imaging in 55 nm BiCMOS technology can achieve a lateral resolution of 2 mm and a range resolution of 2.7 mm.

To date, many technological advances have improved THz imaging resolution, but the spot size is still restricted to the millimeter range because of the diffraction limit in the silicon-based technique. Resolution in the micron range is necessary for biomedical or material characterization; it can be achieved by moving from far-field to near-field imaging, which offers a lateral resolution of 10–12 μm.

The silicon-based CMOS THz imaging technique has grown rapidly in popularity over the past decade because of market demand for low cost and high integration density. Numerous investigations have been conducted, prompting the development of THz imaging technology. With ongoing technological advancement, THz imaging is moving toward high integration density, high accuracy, and large arrays. However, it simultaneously encounters three unavoidable challenges:

1)  At frequencies up to the THz range, the accuracy of active device models is reduced and the loss of passive devices is increased, which restricts the rapid development of THz circuits in silicon-based technology. Simultaneously, the multilayer metal and dielectric characteristics of silicon-based technology introduce complex parasitic and coupling effects into devices operating at THz frequencies.

2)  As the THz wavelength is very short, more integration space is available for circuits. However, THz circuits are more susceptible to distributed effects and surface roughness. Therefore, systems must be integrated using innovative packaging and interconnection technologies.

3)  When expanding from a single channel to an array chip, achieving high angular resolution requires the cooperation of multiple channels, which places higher demands on source-synchronization technology. Finally, more sophisticated calibration systems are needed to ensure the accuracy of signal detection and transmission.

According to a BCC Research forecast, the global market for major terahertz technologies may reach 3.5 billion dollars by 2029. However, this figure does not include market share from the silicon-based integrated-circuit segment because that segment is still relatively immature. Figure 1.2.9 presents a roadmap for the evolution of the engineering research front of silicon-based CMOS THz imaging technology. Chip fabrication is expected to be completed around 2029, with on-chip testing on schedule. By 2032, technology optimization and integration research will be performed, with breakthroughs in chip size and resolution. In the next 10 years, the integration of THz techniques by silicon-based CMOS methods will push THz imaging technology toward a much larger market.

《2 Engineering development fronts》

2 Engineering development fronts

《2.1 Trends in Top 10 engineering development fronts》

2.1 Trends in Top 10 engineering development fronts

The Top 10 engineering development fronts in the information

《Figure 1.2.9》

Figure 1.2.9 Roadmap of the engineering research front of “silicon-based CMOS terahertz imaging technique”

and electronic engineering field are summarized in Table 2.1.1, encompassing the subfields of electronic science and technology, optical engineering and technology, instrument science and technology, information and communication engineering, computer science and technology, and control science. The annual number of core patents published for the Top 10 engineering development fronts from 2016 to 2021 is shown in Table 2.1.2.

(1)  Super-large-scale digital twin visualization and simulation system

The digital twin technique enables dynamic interactions between physical and virtual entities. It allows real-time connection, two-way mapping, simulation analyses,

《Table 2.1.1》

Table 2.1.1 Top 10 engineering development fronts in information and electronic engineering

No. Engineering development front Published patents Citations Citations per patent Mean year
1 Super-large-scale digital twin visualization and simulation system 483 2 865 5.93 2020.1
2 Integrated on-chip light source 832 1 548 1.86 2018.7
3 Localization by multisource information fusion 909 3 745 4.12 2018.7
4 Ubiquitous operating systems for human–cyber–physical computing 404 1 095 2.71 2018.6
5 Quantum microwave measurement technology 638 4 523 7.09 2017.3
6 Atomic and close-to-atomic scale manufacturing and measurement technologies for optical components 224 1 766 7.88 2017.8
7 Ultra-low power IoT chip technology 987 2 691 2.73 2019.3
8 Artificial intelligence for electronic design automation technology 954 4 468 4.68 2019.7
9 Reinforcement learning-based evolutionary algorithm for unmanned systems 990 5 867 5.93 2020
10 Medium- and low-orbit space communication network technology 908 4 957 5.46 2019.1

《Table 2.1.2》

Table 2.1.2 Annual number of core patents published for the Top 10 engineering development fronts in information and electronic engineering

No. Engineering development front 2016 2017 2018 2019 2020 2021
1 Super-large-scale digital twin visualization and simulation system 6 7 28 69 164 209
2 Integrated on-chip light source 122 97 164 153 121 175
3 Localization by multisource information fusion 116 125 182 145 145 196
4 Ubiquitous operating systems for human–cyber–physical computing 73 50 78 52 85 66
5 Quantum microwave measurement technology 188 247 120 40 22 21
6 Atomic and close-to-atomic scale manufacturing and measurement technologies for optical components 62 46 44 34 26 12
7 Ultra-low power IoT chip technology 40 79 179 213 245 231
8 Artificial intelligence for electronic design automation technology 31 39 81 181 289 333
9 Reinforcement learning-based evolutionary algorithm for unmanned systems 15 29 54 192 266 434
10 Medium- and low-orbit space communication network technology 77 101 143 155 179 253

dynamic interaction, and feedback control. It can also map the structure, attributes, state, and behavior of physical entities or systems into the virtual environment to form a high-fidelity dynamic digital model. A digital twin provides effective technical tools for observing, understanding, learning, controlling, and transforming the physical world. Super-large-scale digital twin visualization and simulation technology is a key part of the digital twin technology system. It plays a vital role in scientific research and manufacturing and is an important tool in fields such as geographic information, biomedical research, large-scale engineering design, and production manufacturing. With the development of cloud computing, GPUs, and other technologies, the resulting huge parallel processing capacity, combined with technologies for space-time synchronization, distributed parallel simulation, visualization algorithms, data structures, and system architecture, has made computationally intensive operations possible. It enables the distributed access, scheduling, and management of large-scale simulation data on cloud computing platforms and supports large-scale grids with hundreds of millions of triangles, enabling simulation and visualization at extremely high resolutions. Multiscale modeling and distributed high-performance computing have become necessary technologies for overcoming the remaining scale and complexity issues. With the development of data-driven machine learning methods and IoT technology, more physical elements are being digitized online simultaneously. Integrating machine learning, multiscale modeling, and distributed computing technologies to solve the modeling, simulation, and visualization of super-large-scale digital twins offers unrestricted innovation potential. Moreover, it provides infinite possibilities for the sustainable evolution of the system itself.
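Rendering at this scale typically rests on level-of-detail (LOD) streaming: only tiles whose geometric error would be visible at the current viewing distance are loaded at high resolution. The following is a minimal illustrative sketch of screen-space-error-driven LOD selection; the tile metadata, camera model, and one-pixel threshold are invented assumptions, not details from this report.

```python
import math

# Hypothetical tile of a twin scene, with a precomputed geometric
# error (meters) for each of its LOD levels 0 (coarsest) to 3 (finest).
TILES = [
    {"name": "hull_tile_042", "center": (120.0, 5.0, -40.0),
     "geometric_error": [8.0, 2.0, 0.5, 0.1]},
]
FOV_Y = math.radians(60.0)   # vertical field of view of the viewer
SCREEN_H = 1080              # viewport height in pixels
CAMERA = (0.0, 10.0, 0.0)    # camera position in world coordinates

def screen_space_error(geom_err, distance):
    # Projected size (pixels) of a world-space error at this distance.
    return geom_err * SCREEN_H / (2.0 * distance * math.tan(FOV_Y / 2.0))

for tile in TILES:
    d = math.dist(CAMERA, tile["center"])
    # Pick the coarsest LOD whose error projects to <= 1 pixel.
    lod = next((i for i, e in enumerate(tile["geometric_error"])
                if screen_space_error(e, d) <= 1.0),
               len(tile["geometric_error"]) - 1)
    print(tile["name"], "-> load LOD", lod)
```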

(2) Integrated on-chip light source

With the advent of the post-Moore era, ICs are moving toward integrated photonic chips, aiming to achieve photon generation, ultra-high-speed transmission, processing, and detection. Integrating the light source on the chip is a major challenge in the field of integrated photonic chips. Silicon-based optoelectronic chips can be mass produced using established CMOS technology, but their luminescence efficiency is poor because silicon is an indirect-bandgap semiconductor. Carrier-injection technology can improve silicon’s luminescence intensity so that light-emitting devices can be integrated on a chip; a reverse-biased polysilicon PN junction combined with avalanche multiplication is used to generate visible and infrared light. Another method is integrating III–V semiconductor lasers on silicon wafers by wafer bonding or epitaxial growth. Currently, the integration technology of InP, silicon nitride, InGaAs, and other materials on silicon wafers is mature and commercialized. Recently, there have been new development trends and directions in the field of the on-chip integrated light source: ① multi-material integrated optoelectronic chips: according to the functional division of an integrated photonic chip, multiple semiconductor materials are integrated on one chip, considerably enhancing the chip’s functionality and applicability; ② aiming at the urgent need for multi-wavelength output of the on-chip light source, an optical-parametric-oscillation integrated chip is adopted to realize efficient nonlinear wavelength conversion on the chip through weak pump light and the nonlinear response of the material in the micro-cavity; ③ the on-chip spectral output of multi-frequency laser combs is realized using an on-chip light source combined with optical frequency comb technology, widely used in optical atomic clocks and on-chip precision detection; ④ integrating a single-photon quantum source in an optical quantum chip, using quantum dots (QDs) or a color-center light source to achieve a multifunction optical quantum chip.

(3) Localization by multisource information fusion

Multisource information fusion localization is the technique of localizing sources by processing position-dependent information collected by multiple sensors distributed in space. Typical position-dependent information includes the time of arrival, time difference of arrival, angle of arrival, and received signal strength. The resulting nonlinear estimation problems can be solved in many ways to determine source positions by establishing mathematical models for these different kinds of information and adopting appropriate parameter-estimation criteria. Research on performance improvement in this area mainly focuses on the following three aspects:

1)  Accuracy: High-accuracy localization is generally the most crucial objective for any developed localization technique.

2)  Robustness: Different uncertainties usually exist in measurements obtained in complex propagation environments; hence, the robustness of the developed localization techniques against model mismatch and measurement errors is always a desired feature.

3)  Easy implementation: Computational complexity is a crucial factor to consider for the developed localization techniques in practical implementation scenarios.
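To make the nonlinear estimation problem described above concrete, the following is a minimal sketch of time-of-arrival (range) localization solved by Gauss–Newton least squares; the sensor layout, noise level, and stopping rule are illustrative assumptions rather than a method prescribed by this front.

```python
import numpy as np

# Four sensors at known positions measure noisy ranges to one source.
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([37.0, 62.0])
rng = np.random.default_rng(0)
ranges = np.linalg.norm(sensors - true_pos, axis=1) + rng.normal(0.0, 0.5, 4)

x = sensors.mean(axis=0)                 # initial guess at the centroid
for _ in range(20):                      # Gauss-Newton iterations
    diff = x - sensors                   # vectors from each sensor to x
    d = np.linalg.norm(diff, axis=1)     # predicted ranges at the estimate
    r = ranges - d                       # measurement residuals
    J = -diff / d[:, None]               # Jacobian of the residuals w.r.t. x
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    x = x + step
    if np.linalg.norm(step) < 1e-8:
        break
print("estimated position:", x)
```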

The following three directions are trends for developing localization techniques in the future:

1)  Heterogeneous and synchronized multisource information fusion, fusing different measurements from different types of sensors: This includes, for example, the high-precision range and angle information that can be acquired in millimeter-wave localization systems, making localization with centimeter precision possible (a fusion sketch is given after this list). Additionally, fusing image and microwave information may provide the opportunity to meet localization requirements in both visible and invisible environments.

2)  Localization of multiple sources, group sources, and weak sources in complex scenarios: This includes, for example, the localization of multiple point sources and rigid bodies, or even group sources in mixed near- and far-field scenarios and localization of weak sources in deep space and underwater environments.

3)  Data-driven intelligent localization: The rapid development of machine learning techniques and AI provides new solutions for complicated localization problems.
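As noted in direction 1), heterogeneous measurements can be fused once each is expressed as a position estimate with an uncertainty. The sketch below, with invented numbers, combines a millimeter-wave range-and-angle fix with a coarse received-signal-strength (RSS) estimate by inverse-covariance weighting; treating the converted millimeter-wave fix as Gaussian in Cartesian coordinates is a simplifying assumption.

```python
import numpy as np

def polar_to_cartesian(sensor_xy, r, az):
    # Convert a range/azimuth measurement into a position estimate.
    return sensor_xy + r * np.array([np.cos(az), np.sin(az)])

# Millimeter-wave fix: precise range and angle from a sensor at the origin.
mmw_pos = polar_to_cartesian(np.array([0.0, 0.0]), 12.3, np.deg2rad(41.0))
mmw_cov = np.diag([0.02**2, 0.05**2])     # centimeter-level uncertainty
# Coarse RSS-based estimate of the same source.
rss_pos = np.array([9.1, 8.4])
rss_cov = np.diag([1.0**2, 1.0**2])       # meter-level uncertainty

# Information-form fusion: weight each estimate by its inverse covariance.
W_mmw, W_rss = np.linalg.inv(mmw_cov), np.linalg.inv(rss_cov)
fused_cov = np.linalg.inv(W_mmw + W_rss)
fused_pos = fused_cov @ (W_mmw @ mmw_pos + W_rss @ rss_pos)
print("fused position:", fused_pos)
print("fused std dev :", np.sqrt(np.diag(fused_cov)))
```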

(4)  Ubiquitous operating systems for human–cyber–physical computing

We have entered a new digitalization era with the fast development of new-generation information technologies, such as the internet, big data, AI, and the IoT. As a result, we are witnessing the emergence of many ubiquitous operating systems (UOSs) for human–cyber–physical systems, developed for various new application paradigms and scenarios. The UOS has the same functionality goals as traditional operating systems, such as Linux and Windows, because both focus on managing heterogeneous resources while providing application development and runtime support. However, the UOS also extends the traditional operating-system concept because many new computing paradigms and application scenarios require purpose-built operating systems. Conceptually, UOSs could also include operating systems built for computing devices at different scales, such as servers, personal computers (PCs), mobile devices, and sensors, and for different application scenarios such as the IoT, robotics, smart cities, and smart homes. However, in the specific context of human–cyber–physical computing, the term “UOS” refers to new types of operating systems that follow the concept of ubiquitous computing while being equipped with novel features such as ubiquitous sensing, ubiquitous interconnection, lightweight computation, lightweight cognition, feedback control, and natural interaction.

Multiple UOS instances have been proposed in different countries for areas including the IoT, smart robots, and smart cities. However, as they have not been investigated or deployed at large scale, there are few systematic research outputs, uniform technical frameworks, or industry standards. In China, the UOS has received wide attention in academia and industry, supported by the National Natural Science Foundation of China and National Key Research and Development Projects. Additionally, the most recent Five-Year Plan from the Ministry of Industry and Information Technology includes the promotion of UOSs. Simultaneously, companies such as Tencent and Haier have been actively investigating the development of new UOSs in areas including the IoT, cloud computing, smart cities, intelligent transportation, smart buildings, and smart homes, providing necessary system-software support for the digital transformation of various industrial customers.

(5)  Quantum microwave measurement technology

Quantum microwave measurement technology is a subject that generates, manipulates, transmits, and measures microwave signals in quantum systems according to quantum mechanical properties, especially quantum entanglement, quantum superposition, and quantum tunneling. This field combines the high correlation, high reuse, and strong robustness of quantum technology with the flexibility, all-weather operation, and easy regulation of microwaves. As a result, the microwave detection range can be extended from the current hundreds of kilometers to more than 2 000 kilometers, and the detection sensitivity can be increased by up to 50 dB compared with standard microwave technology. Compared with traditional microwave technology, the instantaneous bandwidth is increased by up to one order of magnitude, reaching 100 GHz, which significantly improves the capacity of existing microwave detection channels; the damage resistance is increased by three orders of magnitude; and the reflection cross-section is reduced by four orders of magnitude under the same detection power, realizing a “self-stealth” function. Therefore, it is considered a disruptive technology for the cross-generational transformation of modern information systems.

The two primary categories of quantum microwave measurement are as follows: one is to apply quantum systems (atoms, diamonds, photons, etc.) to microwave systems such as radar and electronic countermeasures and use the unique advantages of quantum systems to transmit and process microwave signals; the other is to realize quantum correlation of microwave-band signals using conventional systems (optical frequency combs and mechanical resonators) and expand quantum information technology into the multifrequency domain. The further development of quantum microwave measurement will emphasize solving the cross-band, cross-medium, cross-scale, and cross-system scientific problems faced by future microwave detection and information systems. It will develop toward multifunction, miniaturization, integration, networking, and collaboration.

Currently, quantum microwave detection guided by practical applications has been carried out worldwide. It has surpassed conventional microwave technology in power sensitivity, damage resistance, dynamic range, and other technical indicators. The engineering advantages of quantum microwave measurement have also been put into defined technical improvement plans regarding bandwidth and electric-field sensitivity. Overall, quantum microwave technology will be a core technology in the new generation of the microwave information field. It has broad application prospects in military fields, such as radar detection, electronic countermeasures, and communications, and in commercial fields such as medicine, security, navigation, and telecommunications.

(6)   Atomic and close-to-atomic scale manufacturing and measurement technologies for optical components

High-end core optical components with ultra-smooth, damage-free surfaces and atomic-scale functional features are urgently needed for modern optical engineering, represented by extreme ultraviolet lithography, sophisticated light sources, and super-lenses. Current optical manufacturing technology based on machine precision does not meet the demand for atomic-level precision and performance in such optical components. The next-generation core technology for manufacturing such high-end optical components will be atomic and close-to-atomic scale manufacturing (ACSM), in which production tools and procedures act directly on atoms to remove, add, or migrate materials at the atomic level. The goal of ACSM for optical components is to advance optical manufacturing technologies to atomic-level precision and functional feature size, which requires investigating novel optical manufacturing approaches through common issues in intrinsic mechanisms, processes, characterization and measurement, and instrumentation and equipment. At the atomic and close-to-atomic scales, the fundamental theoretical framework of ACSM changes radically from classical to quantum theory. The study of the intrinsic mechanisms of single-atom manipulation, multi-atom interactions, and their connection with macroscopic scales in the ACSM process based on quantum theory will be the cornerstone for future research. ACSM technology requires the direct application of energy to atoms to establish a multidimensional manufacturing system with a degree of generality, and the innovative use of inter-atomic forces to make atoms spontaneously form specific functional structures, to achieve large-scale, high-efficiency, and high-precision manufacturing of advanced optical components. ACSM high-precision measurement technology is a prerequisite for guaranteeing the end-use performance and reliability of ACSM-based optics. Decoupling the perturbations introduced by the ACSM measurement process will become a key technical issue in improving measurement accuracy because the quantum characteristics of ACSM can alter the state of the measurement object.

(7) Ultra-low power IoT chip technology

Based on AI chips, intelligent IoT devices must support multiple functions, such as data perception, storage, computation, and decision-making. Traditional IoT systems consist of several discrete devices, including sensors, analog-to-digital conversion chips, processor chips, and memory chips. Therefore, the design of IoT systems is fragmented and lacks the top-level optimization needed to overcome the bottlenecks of system power consumption and energy efficiency. The low-power IoT chip fuses sensing, storage, and computing into one integrated heterogeneous chip, effectively reducing the cost of data movement and invalid data processing and fundamentally breaking the system efficiency bottleneck. Cutting-edge technology directions in this research area include low-power data acquisition technology, high-efficiency AI hardware acceleration technology, and low-power chip architecture technology.

Through novel data acquisition circuit topologies, such as changing from traditional “voltage domain” data conversion to “charge domain” or “time domain” data conversion and from traditional Nyquist conversion to adaptive-sampling conversion, low-power data acquisition technology seeks to lower power consumption and improve data-sensing accuracy. High-efficiency AI hardware acceleration technology improves computing capacity and energy efficiency through lightweight hardware accelerator design; for instance, changing from the traditional “von Neumann architecture” to a “computing-in-memory” architecture can reduce data-movement overhead and enhance computing performance. Through novel architecture design, low-power chip architecture technology reduces chip power consumption, especially always-on standby power consumption. For example, changing from a traditional “synchronous computing” architecture to an “asynchronous event-driven” clockless low-power architecture can considerably reduce standby power because chip activity matches actual valid events. Additionally, cutting-edge research continues to investigate co-design innovations across data perception, computing, storage, and transmission to develop one integrated IoT chip that fuses “perception, computing, storage, and transmission” with high efficiency and low power.
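As a toy illustration of the adaptive-sampling and event-driven ideas above, the sketch below keeps a sample only when the signal has moved by more than a threshold (level-crossing sampling), so activity tracks valid events instead of a fixed Nyquist clock; the waveform and threshold are arbitrary assumptions, not parameters from this report.

```python
import numpy as np

# Dense stand-in for the "analog" input signal.
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)

threshold = 0.05                      # level-crossing threshold (invented)
kept_t, kept_x = [t[0]], [x[0]]
for ti, xi in zip(t[1:], x[1:]):
    # Event: the signal moved far enough since the last kept sample.
    if abs(xi - kept_x[-1]) >= threshold:
        kept_t.append(ti)
        kept_x.append(xi)

# Far fewer conversions fire than with uniform sampling, which is the
# source of the power saving in event-driven acquisition.
print(f"uniform samples: {len(t)}, event-driven samples: {len(kept_t)}")
```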

(8)   Artificial intelligence for electronic design automation technology

Electronic design automation (EDA) refers to designing integrated circuits with the aid of computer algorithms and software, which are widely adopted in the design, verification, and manufacturing of modern very-large-scale integrated circuits. Artificial intelligence (AI) for EDA, or AI-assisted EDA, aims at leveraging AI technology to assist the modeling, optimization, and verification procedures in the design flow. It can effectively facilitate optimization, speed up design closure, and ultimately improve the quality of results. According to the stages of the EDA algorithms in the design flow, research on AI for EDA can be roughly divided into six major categories: system-level design space exploration (DSE), synthesis, physical design, manufacturing, verification and testing, and runtime management. Since 2016, related publications at premier EDA conferences and in journals have roughly doubled, especially on system-level DSE, synthesis, physical design, and manufacturing. Both industrial and academic research teams from China, the USA, Europe, Japan, South Korea, and elsewhere have invested efforts in these directions. In the past two years, Synopsys and Cadence (two major EDA vendors) have announced commercial products for design space exploration, DSO.ai and Cerebrus, respectively. With the continuous scaling of semiconductor technologies and circuit design complexity, AI for EDA is a promising direction to explore.
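A common pattern behind ML-assisted DSE is a surrogate model that predicts quality of results (QoR) from tool parameters, so the slow synthesis flow is run only on promising configurations. The following hedged sketch uses a random-forest surrogate; the parameter space and the synthetic qor() stand-in for a real EDA flow are invented for illustration and do not describe DSO.ai or Cerebrus.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def qor(cfg):
    # Stand-in for a slow synthesis/place-and-route run returning a
    # quality-of-results score (higher is better). Entirely synthetic.
    freq, util, effort = cfg
    return -(freq - 1.2) ** 2 - (util - 0.7) ** 2 + 0.1 * effort

# Seed the surrogate with a few already-evaluated configurations
# (target frequency in GHz, placement utilization, effort level).
lo, hi = [0.5, 0.4, 0.0], [2.0, 0.9, 1.0]
X = rng.uniform(lo, hi, size=(20, 3))
y = np.array([qor(c) for c in X])
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Score a large candidate pool cheaply; run the real flow only on the best.
pool = rng.uniform(lo, hi, size=(5000, 3))
best = pool[np.argmax(surrogate.predict(pool))]
print("next configuration to evaluate:", best, "-> true QoR:", qor(best))
```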

(9)  Reinforcement learning-based evolutionary algorithm for unmanned systems

The reinforcement learning (RL)-based evolutionary algorithm for unmanned systems applies RL to create intelligent behavioral decision-making and control strategies without human intervention and to improve performance continuously over time. In RL algorithms, training data are produced by the interaction of a single agent or a group of agents with a predefined environment and an evaluation mechanism.
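The following minimal tabular Q-learning sketch illustrates how such interaction data drive strategy improvement: an agent walks a one-dimensional corridor, and the reward signal plays the role of the evaluation mechanism. The environment and constants are invented for illustration and are far simpler than any real unmanned-system controller.

```python
import numpy as np

n_states, goal = 6, 5                 # corridor cells; rightmost is the goal
alpha, gamma, eps = 0.5, 0.95, 0.1    # learning rate, discount, exploration
Q = np.zeros((n_states, 2))           # action-values; actions: 0=left, 1=right
rng = np.random.default_rng(0)

for _ in range(500):                  # training episodes
    s = 0
    while s != goal:
        # Epsilon-greedy action selection generates the interaction data.
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s2 == goal else -0.01   # reward = evaluation mechanism
        # Temporal-difference update improves the strategy over time.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print("greedy policy:", ["left" if np.argmax(q) == 0 else "right" for q in Q])
```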

RL-based evolutionary algorithms are not constrained by established approximations, such as linearization; therefore, they have wider operating ranges and can adapt to complex and changing scenarios, in addition to performing well on the conventional metrics widely considered for non-learning control algorithms, such as convergence speed, accuracy, and resistance to overfitting.

Compared with RL applied in other industries, the RL-based evolutionary algorithm for unmanned systems faces additional constraints from the physical performance of the system, the gap between the simulation and reality domains, and safety issues within the strategies. New challenges, such as safety and continuous evolution, in turn generate new scientific questions and technological approaches.

As the computing performance of unmanned systems is commonly limited compared with dedicated computing devices, the RL-based evolutionary algorithm for unmanned systems requires a smaller computational footprint to cope with high-frequency, highly dynamic interactions. Consequently, technologies such as multi-task and multi-scene meta-learning, knowledge distillation, representation learning, cloud computing, and edge computing have emerged at this stage.

Simultaneously, early training is usually done in simulation to improve the efficiency of data collection and avoid mechanical damage. However, the inevitable difference between the simulation and reality domains damages the performance of RL-based strategies when they migrate to real-world unmanned systems. Therefore, methodologies such as online identification of domain parameters, adversarial domain randomization, distributionally robust optimization, and auto-encoding transformations have improved robustness when crossing the reality gap.

Real-world scenarios put forward higher requirements for the safety of RL-based evolutionary algorithms. Technologies such as multi-sensor risk detection, trust-region policy optimization, and dynamic action-space constraints have emerged at this stage.

Continuous evolution requires algorithms capable of continuously gathering data, optimizing strategies, and improving performance after deployment in the real world. Additionally, higher data efficiency and the ability to adapt to dynamic systems and environmental changes are demanded. Therefore, prior reward shaping, small-sample transfer learning, and distributed computing have emerged at this stage.

The next development directions of the RL-based evolutionary algorithm for unmanned systems include the following: accurate simulation environments and online system identification; unmanned-system ontology capabilities; embodied intelligence that incorporates prior knowledge of the dynamics of the unmanned system; multi-task comprehensive autonomous decision-making in complex scenes; intelligent cooperation of heterogeneous unmanned-system groups; multi-agent information integration; and efficient, secure capability evolution in real scenarios.

(10)  Medium- and low-orbit space communication network technology

Medium- and low-orbit space communication network technology refers to building a wide-area communication system through inter-satellite and satellite–ground links, where satellite constellations operate in medium and low Earth orbits. It can provide network access and interconnection for space probes, manned spacecraft, satellites, ground stations, ground terminals, and high-altitude vehicles, serving environment monitoring, military surveillance, space exploration, in-flight internet service, remote-area communication, etc. The primary technical directions include space laser communication, inter-satellite routing, satellite beam management, and handover control. Space laser communication primarily includes high-speed modulation, acquisition–tracking–pointing, laser signal detection, and other technologies. Inter-satellite routing principally includes satellite and end-user addressing, route planning, and related techniques (a route-planning sketch is given after the list below). Satellite beam management largely includes technologies such as multi-beam satellite antennas, precoding, agile beam hopping, and multi-color frequency multiplexing. Finally, handover control primarily includes inter-beam handover, inter-satellite handover, and inter-satellite link handover. Future development trends may be as follows:

1)  The satellite constellation will evolve from low to high density to improve network capacity. It is essential to carefully consider the problems of on-demand inter-satellite link establishment, interference avoidance, and frequent inter-satellite handover.

2)  The network will be deeply integrated with software-defined networking (SDN) and virtualization techniques to achieve programmable spatial routing and flexible deployment of network functions. Research primarily focuses on software-defined satellite payload design, network-function deployment strategies, etc.

3)  The communication-only network will develop into a network integrating communication, computing, sensing, and positioning, which can achieve low-latency distribution of sensing information and accurate positioning. Future research directions include the integrated design of communication and positioning signals, onboard intelligent processing of remote-sensing information, and location-assisted communication improvement.
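Returning to the inter-satellite route planning mentioned before this list, a minimal sketch is shown below: route selection is cast as shortest-path search (Dijkstra) over a constellation graph whose edge weights are link delays. The topology and delays are invented, and a real system would also have to handle time-varying links and handover.

```python
import heapq

# Invented constellation graph: edge weights are link delays in ms.
links = {
    "ground_a": {"sat1": 4.0},
    "sat1": {"ground_a": 4.0, "sat2": 13.0, "sat4": 21.0},
    "sat2": {"sat1": 13.0, "sat3": 13.0, "ground_b": 5.0},
    "sat3": {"sat2": 13.0, "sat4": 13.0},
    "sat4": {"sat1": 21.0, "sat3": 13.0},
    "ground_b": {},
}

def dijkstra(src, dst):
    # Standard Dijkstra over the link graph, tracking predecessors.
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in links[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [dst], dst
    while node != src:      # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

path, delay_ms = dijkstra("ground_a", "ground_b")
print(path, delay_ms)  # ['ground_a', 'sat1', 'sat2', 'ground_b'] 22.0
```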

《2.2 Interpretations for three key engineering development fronts》

2.2 Interpretations for three key engineering development fronts

2.2.1 Super-large-scale digital twin visualization and simulation system

The digital twin is the deepening stage and future vision of enterprise digital transformation, integrating multiple technologies to support the development of data-centric business. The “twins” idea can be traced back to the Apollo Program of the National Aeronautics and Space Administration (NASA) in the 1960s. With advancements in computer simulation, network connection, and sensor technologies, Professor Michael Grieves proposed the digital twin concept and model in 2002 and applied it to product

lifecycle management. The digital twin concept was first formalized in the simulation-based systems engineering chapter of the Area 11 technology roadmaps released by NASA in 2010, as follows: “A digital twin is an integrated multi-disciplinary, multiscale simulation that uses the best available physical models, sensor updates, operation history, etc., to mirror the life of its corresponding flying twin.” According to Gartner, a digital twin is the digital mapping of physical entities or systems in the virtual world; Gartner listed it among the top ten strategic technology trends from 2017 to 2019. Businesses such as Siemens, GE, and Microsoft, among many others, have consistently promoted the digital twin concept and introduced related products.

The super-large-scale digital twin visualization and simulation system is the core of the digital twin and the key to the value, scale, and commercialization of the digital twin technology business. The digital twin involves three aspects, namely holistic mapping, simulation maintenance, and closed-loop control. Holistic mapping provides a God’s-eye perspective for observing and understanding the world through digital twin technology. It synchronizes people, places, and things distributed across different spaces in the real world and presents them integrally in the virtual environment. It can also realize space folding and time collapse beyond reality, support on-site coordination across time and space, and enable people to immerse themselves in the production process with more realistic and closer interaction. The simulation maintenance of digital twins is simulated reproduction or prediction based on holistic interconnection. It is an integrated simulation of static and dynamic, virtual and real, and past and present, driven by large-scale, refined data together with optimization and intelligent algorithms; to some extent, it exceeds traditional simulation. For both holistic mapping and simulation maintenance, super-large-scale visualization and simulation are the core basic capabilities of digital twins. For example, aircraft design and final-assembly simulation involve approximately 4 to 18 million components, which traditional technologies and methods cannot evaluate as a whole. Another typical case is building information modeling (BIM) visualization and structural analysis in the architecture field. At present, lightweight processing must be performed before BIM data can be used for visualization or structural analysis, which greatly limits the possibility of holistic studies or full structural simulation. Similar practical problems exist in smart cities, city-level transportation, and ocean governance. The super-large-scale digital twin visualization and simulation system is the necessary core technology for integrating the data and the entity of the holistic digital twin.

Tables 2.2.1 and 2.2.2 show the distribution of the primary countries and institutions, respectively, that published patents in the “super-large-scale digital twin visualization and simulation system” engineering development front. China, the USA, and Germany have prioritized this front in national strategies; hence, their fundamental research and application are relatively prominent, and these countries clearly lead in patents. As for institutions, Siemens and GE have shown obvious leading advantages and achieved outstanding industrial implementation. Siemens considers the digital twin to be the core technical pillar of “Industry

《Table 2.2.1》

Table 2.2.1 Countries with the greatest output of core patents on “super-large-scale digital twin visualization and simulation system”

No. Country Published patents Percentage of published patents/% Citations Percentage of citations/% Citations per patent
1 China 356 73.71 1 715 59.86 4.82
2 USA 68 14.08 747 26.07 10.99
3 Germany 24 4.97 196 6.84 8.17
4 South Korea 13 2.69 28 0.98 2.15
5 Japan 4 0.83 34 1.19 8.5
6 Australia 4 0.83 21 0.73 5.25
7 Finland 2 0.41 9 0.31 4.5
8 Switzerland 2 0.41 1 0.03 0.5
9 UK 1 0.21 78 2.72 78
10 Canada 1 0.21 17 0.59 17

4.0”. Combined with its advantages in traditional software, Siemens exhibits a clear lead in intelligent manufacturing. Meanwhile, by implementing twin technologies to support condition monitoring and the predictive maintenance of aircraft engines relatively early, GE has gradually transitioned from selling engines to maintaining the product lifecycle, an upgrade of the business model from products to services. In China, universities such as Beihang University and Beijing Institute of Technology mainly engage in theoretical and applied fundamental research, reflecting the application demand for digital twins in aerospace and national defense.

For applications, the energy and industrial fields have also shown great potential. The State Grid Corporation of China has already implemented unattended substations and quality-inspection and maintenance equipment. Therefore, in the engineering development of the “super-large-scale digital twin visualization and simulation system”, governments of all countries are prioritizing it, and businesses and relevant scientific research institutions have also made good progress. However, the cooperation network between countries in Figure 2.2.1 shows that horizontal cooperation is relatively limited, and there is no cooperation between the major

《Table 2.2.2》

Table 2.2.2 Institutions with the greatest output of core patents on “super-large-scale digital twin visualization and simulation system”

No. Institution Published patents Percentage of published patents/% Citations Percentage of citations/% Citations per patent
1 Siemens Corporation 23 4.76 274 9.56 11.91
2 General Electric Company 22 4.55 305 10.65 13.86
3 Beihang University 17 3.52 89 3.11 5.24
4  State Grid Corporation of China 15 3.11 22 0.77 1.47
5 Guangdong University of Technology 11 2.28 196 6.84 17.82
6 Xi’an Jiaotong University 11 2.28 69 2.41 6.27
7 China Electronics Technology Group Corporation 8 1.66 94 3.28 11.75
8 Guangdong Power Grid Company Limited 8 1.66 22 0.77 2.75
9 Beijing Institute of Technology 8 1.66 21 0.73 2.62
10 International Business Machines Corporation 6 1.24 40 1.4 6.67

《Figure 2.2.1》

Figure 2.2.1 Collaboration network among major countries in the engineering development front of “super-large-scale digital twin visualization and simulation system”

institutions. At the national level, only the USA and Germany engage in some cooperation and exchanges.

The super-large-scale digital twin visualization and simulation system is one of the critical technologies for constructing the mirror image, expansion, and extension of actual society. The development of accurate modeling and analysis, real-time presentation, efficient computing, and flexible applications is currently being accelerated for such systems. Regarding visualization, refinement and immersion are the core development directions, improving the dynamic fidelity of virtual or virtual–real fusion scenarios through sight, hearing, touch, and other holographic technologies; this will significantly change the interaction mode and add a new dimension to virtual reality. Simulation is progressing from single-body and single-process simulation to distributed group simulation that fuses mechanism-driven and data-driven approaches as well as virtuality and reality, which can better support the development of swarm and human–machine hybrid swarm intelligence. Meanwhile, distributed, parallel, and high-performance computing are continually being combined with digital twin visualization and simulation systems to enhance their computing performance. The IoT and the real-time integration of virtuality and reality can be actualized with the help of high-speed communication networks (e.g., 5G and 6G). A new range of business models and industrial types for digital transformation will emerge as consensus on digital transformation forms across all walks of life, as digital twins are further implemented in urban governance, industrial manufacturing, retail, and medical care, and as they gradually penetrate the entire industrial lifecycle. These developments have broad and promising prospects.

Figure 2.2.2 presents the roadmap of the engineering development front of the “super-large-scale digital twin visualization and simulation system”. According to a 2019 report by the consulting firm Markets & Markets, the digital twin market is expected to grow from 3.8 billion dollars in 2019 to 35.8 billion dollars in 2025. As the core of digital twin technology, the super-large-scale digital twin visualization and simulation system should continue to permeate transportation, logistics, cities, and manufacturing, moving from single-scenario applications to applications spanning the entire industry lifecycle. For example, the transportation field will involve road construction and design, traffic optimization analysis, intelligent asset operation and maintenance, intelligent safe-driving assistance, and other scenarios. The city domain will include design optimization and simulation, innovative construction sites, rain and flood simulation, intelligent emergency response, etc. Meanwhile, manufacturing will include intelligent product development, production process optimization, intelligent workshop scheduling, predictive maintenance of equipment, safe factory production, etc. These all involve large-scale, wide-dimension, and precise visualization and simulation for better human-in-the-loop simulation and deduction, ultimately assisting decision-making. Additionally, digital twin applications will extend to various fields, such as medical care and agriculture.

2.2.2 Integrated on-chip light source

Since the introduction of the first transistor in 1947, IC technology has dramatically promoted scientific and technological progress, laying the essential groundwork for the information society. People’s demand for information is increasing in line with advances in society and technology, raising the bar for ICs’ capacity for information processing and acquisition. However, in the post-Moore era, ICs are constrained by electrical interconnects, whose delay and power consumption are becoming insurmountable. Therefore, with the end of Moore’s law, people put forward the concept of using photons as information

《Figure 2.2.2》

Figure 2.2.2 Roadmap of the engineering development front of the "super-large-scale digital twin visualization and simulation system"

carriers to substitute for electrons; that is, through the fusion of optoelectronics and microelectronics, on-chip optical interconnection is applied instead of conventional electrical interconnection to attain high-speed data transmission and reduce the parasitic resistance of electrical interconnects. For microelectronics, deep-submicron electrical interconnects suffer severe delay and power-consumption difficulties, so it is imperative to introduce optoelectronics to resolve the electrical interconnect issue. For optoelectronics, it is essential to use mature microelectronic processing technology platforms to achieve large-scale, highly integrated, high-yield, and low-cost mass production.

Over the past decade, optoelectronic integrated chips, which can generate photons and transmit, process, and detect optical information on a chip, have become one of the most popular directions in academia and industry. Among their components, the on-chip integrated light source provides the coherent light needed by the photoelectric integrated chip to generate optical information; its performance determines the chip’s application space and implementation purpose. Compared with traditional optical equipment, the integrated on-chip light source has great advantages in reducing size, mass, power consumption, and cost through integrated design and modern semiconductor processing technology. Simultaneously, it promotes the industrial upgrading of advanced lithography technology, nanomanufacturing technology, micro/nano manufacturing technology, and materials science.

Silicon-based optoelectronic integrated chip technology refers to the design, manufacture, and integration of silicon-based optoelectronic chips. Monocrystalline silicon has become the most mature and broadly used platform for photonic chips because of its sizable optical bandwidth, robust scalability, low cost, efficient chip routing, and high refractive index. The silicon-based optoelectronic integrated circuit (OEIC) is compatible with the CMOS process and can achieve large-scale production with the help of mature microelectronic processing platforms. It is the best solution for integrating optoelectronics with microelectronics and for optical interconnection because of its low cost, high degree of integration, and excellent reliability. Optical communication links will benefit from the increased bandwidth density and speed offered by wafer-integrated on-chip light source technologies in the fields of optical interconnection and high-speed optical computing. Additionally, in precision measurement, miniaturization and low power usage will move the optical atomic clock and the spectrometer from equipment scale onto the chip. Multi-wavelength optical comb technology can achieve multi-wavelength parallel computing in optical computing and reach speedups of several orders of magnitude. On-chip light source technology will enable parallel lidar systems in the sensing sector to improve sampling rates, reduce power usage, and enable high-speed IoT sensing and processing for complex applications such as autonomous driving.

Breakthroughs have been achieved so far in silicon-based detectors, optical modulators, optical switches, optical waveguides, and other devices. However, no mature options are currently available for on-chip silicon-based light sources. The indirect-bandgap character of silicon gives it low luminescence efficiency, making it challenging to create high-performance light-emitting devices with silicon as the active material. How to integrate light sources into silicon-based chips is a major challenge. Recently, researchers specializing in optics, materials, mechanical engineering, and related fields have carried out a large amount of silicon-based light source research. From the initial silicon-based light-emitting diodes (PN-junction luminescence, metal–insulator–semiconductor structures, and Schottky junctions) to carrier-injection silicon avalanche-multiplication electroluminescence, rare-earth-doped silicon luminescence, silicon nanocrystal lasers, and silicon–germanium lasers, luminescence efficiency has been continuously improved. However, a performance gap remains between these light sources and III–V lasers. Therefore, until the integrated on-chip light source matures, the industry’s solution is to use high-precision packaging to couple an external light source and the silicon optical chip into one component. The open question is how to make a silicon-based optoelectronic chip with excellent performance that combines low power consumption, long lifetime, and a high-power light source.

Direct-bandgap III–V semiconductors have excellent optical and electrical properties, and gallium arsenide (GaAs), indium phosphide (InP), quantum-well, and QD lasers are commercially available. Despite their high quantum efficiency, conventional III–V light sources are incompatible with existing IC technology, so the idea of integrating a III–V semiconductor laser on a silicon wafer arose naturally. Ensuring that the manufacturing process of the light source is compatible with the existing IC process is a hot and difficult problem. With current technology, mature III–V material lasers are added to silicon wafers either through hybrid integration (transfer of materials to silicon wafers, such as direct placement or wafer bonding) or monolithic integration (direct growth of materials on silicon wafers, such as epitaxial growth). Hybrid integration technology is mature; for instance, through wafer bonding, III–V epitaxial layers can be integrated above the silicon chip using BCB-assisted adhesive bonding, and the light produced in the III–V materials is coupled into silicon photonic circuits by evanescent-wave coupling, completing a hybrid integrated light source on silicon photonic chips. However, its high process cost makes large-scale integration difficult. Monolithic integration is expected to transfer the processes and technology of native III–V photonic devices to silicon photonic light sources and obtain outstanding on-chip light sources; it is considered the ultimate solution for the large-scale production of light sources on silicon chips. The primary issue affecting this technology is the severe lattice mismatch between III–V materials and silicon, which generates dislocations, antiphase domains, and other defects, severely limiting the lifetime and performance of the III–V laser. For dislocation defects, a dislocation-barrier layer or other buffer-layer structure may be added between the substrate and the active region during growth. For antiphase-domain defects, selective growth can successfully limit their impact on the active region by epitaxially growing the III–V material on a patterned silicon substrate. The main advantage of the monolithic integration scheme over the hybrid integrated light source is that it can improve the degree of integration while shrinking the linewidth concurrently with the silicon photonic process. It has great potential for developing large-scale photonic integrated chips, which is also the primary development direction of silicon photonic technology. Currently, the integration technology of InP, silicon nitride (Si3N4), indium gallium arsenide (InGaAs), and other materials on silicon wafers is mature and commercialized. Additionally, the SiN-on-Si platform, with low loss, a large transmittance window, and a remarkable nonlinear effect, compensates for silicon’s transmission cut-off at wavelengths below 1 100 nm and has new applications in AR/VR, measurement, biomedicine, sensing, and other fields.

Table 2.2.3 shows the distribution of the major patent-producing countries in the development front of the “integrated on-chip light source”. China, the USA, and Japan rank in the top three. The number of China’s patents is more than three times that of the second-place USA, reflecting China’s national strategy of prioritizing the integrated on-chip light source field and its progress in materials, physics, optoelectronics, and precision manufacturing, among others. However, China’s mean number of citations per patent is only about a third of that of the USA, showing slightly insufficient originality. Regarding national cooperation (Figure 2.2.3), as the country with the most significant number of original technologies in integrated on-chip light sources, the USA has close cooperative relations with South Korea, the UK, and Australia. These countries have a clear division of labor in the on-chip integrated light source field, and their technological advantages are complementary. There is no cooperation

《Table 2.2.3》

Table 2.2.3 Countries with the greatest output of core patents on “integrated on-chip light source”

No. Country Published patents Percentage of published patents/% Citations Percentage of citations/% Citations per patent
1 China 565 67.91 724 46.77 1.28
2 USA 165 19.83 602 38.89 3.65
3 Japan 31 3.73 67 4.33 2.16
4 South Korea 20 2.4 14 0.9 0.7
5 Germany 7 0.84 45 2.91 6.43
6 Canada 7 0.84 21 1.36 3
7 UK 5 0.6 34 2.2 6.8
8 Singapore 5 0.6 19 1.23 3.8
9 India 5 0.6 0 0 0
10 Australia 3 0.36 6 0.39 2

《Figure 2.2.3》

Figure 2.2.3 Collaboration network among major countries in the engineering development front of “integrated on-chip light source”

among the major institutions at the top of the ranking, signifying that competition in this area is fierce and that the top institutions are cautious in protecting their original technology. Remarkably, AIM Photonics, Intel, and HP Labs have several high-level silicon photonics process lines, with capabilities spanning silicon photonic chip design through production and packaging. For instance, Intel announced the first commercial silicon-based heterogeneous integration product in 2016, which realized the monolithic integration of an InP laser and a high-speed silicon Mach–Zehnder interferometer in a product series of 100 Gbps transceivers. Intel’s accomplishments and vertically integrated business model have demonstrated the technical feasibility of silicon-based heterogeneous integration. Regarding scientific research and industrialization, China is progressively narrowing its gap with foreign countries. In silicon photonic integration, China currently has platforms with chip-processing capability at Chongqing United Microelectronics Center Co., Ltd. (CUMEC), the Institute of Microelectronics of the Chinese Academy of Sciences (IMECAS), and the Shanghai Industrial μTechnology Research Institute (SITRI). For instance, CUMEC has realized a silica-based narrow-linewidth laser on an independent process platform, with a wavelength tuning range of 1 520–1 580 nm, power > 10 dBm, linewidth < 100 kHz, and features of low phase noise, high integration, and low cost. It has broad application prospects in silicon optical lidar, high-speed coherent optical communication modules, gas detection, and fiber sensing based on coherent detection. Among scientific research institutions, Peking University, Zhejiang University, Shanghai Jiao Tong University, the Institute of Semiconductors of the Chinese Academy of Sciences, and other units have conducted numerous cutting-edge studies on on-chip light source frequency combs and multi-material fusion chips, among others (Table 2.2.4).

New trends and directions exist in the field of on-chip integrated light sources (Figure 2.2.4).

First, the multi-material system incorporates the photoelectric chip to develop integrated process technology for III–V compounds, silicon nitride, silicon dioxide, polymers, lithium niobate, aluminum gallium arsenide, and InP on silicon wafers. The targets can cover the visible, near-infrared, mid-infrared, THz, and other frequency bands. The methods employed include transfer-printing processes based on reversible adhesion that incorporate thousands of devices made of different materials onto a single wafer. Multi-material integration is used to create a hybrid integration process platform of silicon and advanced photoelectric materials (III–V, LiNbO3, etc.).

Second, because of the urgent demand for multi-wavelength output of the on-chip light source, nanoscale optical parametric oscillators have been created to realize efficient nonlinear wavelength conversion on the chip via the nonlinear interaction of weak pump light with the material in the micro-cavity. Wavelength outputs that are difficult to accomplish with traditional silicon chip technology are widely applied in chip-based atomic clocks and portable biochemical analysis devices.

Third, semiconductor mode-locked lasers combined with

《Table 2.2.4》

Table 2.2.4 Institutions with the greatest output of core patents on “integrated on-chip light source”

No. Institution Published patents Percentage of published patents/% Citations Percentage of citations/% Citations per patent
1 Inphi Corporation 26 3.12 50 3.23 1.92
2 Ningbo Daye Garden Equipment Company Limited 23 2.76 20 1.29 0.87
3  Zhejiang University 14 1.68 21 1.36 1.5
4 Institute of Semiconductors Chinese Academy of Sciences 13 1.56 27 1.74 2.08
5 International Business Machines Corporation 13 1.56 26 1.68 2
6 The Government of the United States of America as represented by the Secretary of the Navy 12 1.44 5 0.32 0.42
7 Peking University 11 1.32 21 1.36 1.91
8  Intel Corporation 9 1.08 57 3.68 6.33
9 Accelink Technology Company Limited 9 1.08 43 2.78 4.78
10 Shanghai Jiao Tong University 7 0.84 28 1.81 4

《Figure 2.2.4》

Figure 2.2.4 Roadmap of the engineering development front of “integrated on-chip light source”

integrated nonlinear optical frequency comb devices enable the monolithic and hybrid integration of compound semiconductors, silicon nitride, lithium niobate, and other materials on silicon wafers, and can be mass produced to achieve low-power, narrow-linewidth ultrashort optical pulses, offering hundreds of equally spaced, mutually coherent laser lines whose frequencies correspond precisely to the comb line spacing. Such combs can not only serve optical atomic clocks for accurate timekeeping but also reduce the interference between optical-fiber communication channels and increase the amount of signal transmitted by a single optical fiber by several orders of magnitude. They are also commonly employed in technologies such as lidar, GPS, astronomical observation, and the investigation of gas composition. Currently demonstrated indicators include a QD laser comb with a comb width of 12 nm and a narrow-linewidth external-cavity laser with a minimum linewidth of 140 Hz.

Fourth, in QD lasers, the discrete energy states of QDs give QD-based lasers better temperature characteristics and lower threshold currents. For instance, a simple template-free self-assembly method for colloidal QDs can be used to prepare resonators. Furthermore, indium arsenide QDs grown epitaxially on GaAs substrates can be used as gain media, and micro- and nanoscale chip lasers have been realized under optical pumping. On-chip entangled light sources in integrated optical quantum chips can also be realized by integrating high-quality semiconductor QDs, diamond color centers, and defect states in two-dimensional materials, among others. One potential future direction is the hybrid integrated on-chip quantum light source of polarization-entangled photon pairs from self-assembled QDs. Currently, the best on-demand single-photon and entangled-photon QD sources emit at energies much higher than the silicon bandgap, so hybrid III–V integration technologies are required.

The application of lidar is a typical usage scenario for an on-chip integrated light source. Current lidar systems have large volume, mass, power consumption, and cost. The trend is to use photonic integrated chips to replace the discrete optical components from which current lidar is built, which can significantly reduce all four. This can be achieved by incorporating the on-chip light source through optical interconnection: an optical switch routes the on-chip optical signal to achieve chip-level integration of photon emission and reception, and the light source and a germanium–silicon photodetector are integrated into a single chip. This chip performs coherent detection of targets at different distances and realizes the scanning and ranging functions of coherent lidar.
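
To make the coherent ranging concrete (a standard FMCW lidar relation, not taken from this report), the beat frequency between the transmitted chirp and the received echo maps to target range as

$$ R = \frac{c\,f_{\mathrm{beat}}\,T_{\mathrm{chirp}}}{2B}, $$

where \( B \) is the swept optical bandwidth, \( T_{\mathrm{chirp}} \) the sweep duration, and \( c \) the speed of light. A narrow-linewidth on-chip source matters here because the beat note must remain coherent over the round-trip delay.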

Sensing is another application scenario. On-chip light sources are needed to achieve the integration of optical sensing applications. To realize the on-chip integration of sensing chips, processing technologies such as heteroepitaxy, transfer printing, and material bonding are being fully developed. Currently, the refractive index detection limit of optical sensing chips incorporating waveguides has reached the order of 10−6 RIU, the gas detection limit has reached the ppb (10−9) level, and the detection of chemical and biological molecules has reached the order of pg/mm2, indicating good application potential. Such chips can also be easily integrated into platforms such as mobile phones and drones, enabling portable applications and robust on-site detection through big data, cloud computing, and IoT technologies. In optical communication, Tanaka et al. from Fujitsu Laboratories designed a silicon photonic transmitter chip without a temperature regulator: they integrated a III–V semiconductor optical amplifier on a silicon-on-insulator (SOI) substrate using high-precision flip-chip bonding equipment, with the hybrid integrated laser aligned to the end face of the SOI waveguide.

2.2.3 Localization by multisource information fusion

Source localization is a core function in conventional military applications, such as space–air early warning and precision strike, and is an indispensable technique in civilian applications such as autonomous driving and intelligent transportation. Moreover, new applications such as ultra-high-speed target tracking, unmanned aerial vehicles, robots, autonomous driving, and future 6G wireless communications have recently motivated even greater efforts toward high-performance localization methods and accelerated their development.

Conventionally, source localization has primarily been used in line-of-sight environments, and research has focused on the mathematical formulation and solution of the resulting nonlinear estimation problem. Traditional localization techniques, however, struggle to perform satisfactorily in complex propagation environments because of factors such as multipath propagation, low-precision measurements, and insufficient prior information. Hence, high-precision source localization in complex propagation environments has become a hot research topic, and many low-complexity, robust, and precise source localization techniques have been proposed and applied in practice. Specifically, new sensors, such as UWB sensors, millimeter wave radar, and high-precision vision sensors, have improved measurement accuracy, although they also generate large amounts of redundant data. On the formulation side, many robust localization methods have been proposed and applied to address localization in dynamic non-line-of-sight (NLOS) and beyond-line-of-sight (BLOS) environments, where information is limited. On the solution side, convex optimization-based approaches have been proposed and used, significantly enhancing localization performance at low signal-to-noise ratios while maintaining relatively low complexity. Moreover, machine learning–based methods have been introduced to solve various localization problems, although their excellent performance is not yet well understood because of the black-box nature of many deep-learning structures.
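
The formulation-and-solution pipeline described above can be made concrete with a minimal sketch, assuming a simple time-of-arrival (TOA) setup with known anchor positions; all coordinates and noise levels below are illustrative, not taken from this report.

```python
# Minimal sketch: TOA source localization posed as the nonlinear
# least-squares problem described in the text (illustrative values).
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source_true = np.array([3.0, 7.0])

# Noisy range measurements r_i = ||x - a_i|| + n_i
rng = np.random.default_rng(0)
ranges = np.linalg.norm(anchors - source_true, axis=1) \
         + rng.normal(0.0, 0.05, len(anchors))

def residuals(x):
    # Difference between predicted and measured ranges for candidate x
    return np.linalg.norm(anchors - x, axis=1) - ranges

# Gauss-Newton-type solve from a rough initial guess
estimate = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print("estimated source position:", estimate)
```

In NLOS conditions the same residuals would be reweighted or relaxed into a convex surrogate, which is where the convex optimization-based approaches mentioned above enter.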

Recently, the development of new sensors, such as millimeter wave radar, UWB sensors, Bluetooth, and high-precision vision sensors, has enriched the study and application of localization techniques. For example, millimeter wave systems can generate high-precision range and angle information, which can be applied to simultaneous localization and mapping; fusing measurements from UWB sensors and Bluetooth 5.1 equipment makes low-cost indoor localization possible; and fusing image processing techniques with electromagnetic wave positioning can meet localization requirements in both visible and invisible environments.
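
As a toy illustration of such fusion (a sketch under assumed noise statistics, not the method of any system cited here), two independent position fixes, say one from UWB ranging and one from Bluetooth 5.1 angle-of-arrival, can be combined by inverse-variance weighting:

```python
# Minimal sketch: inverse-variance fusion of two independent position
# estimates (all numbers are assumed for demonstration).
import numpy as np

uwb_est, uwb_var = np.array([3.1, 6.9]), 0.04  # precise UWB fix
ble_est, ble_var = np.array([3.4, 7.3]), 0.25  # noisier Bluetooth fix

# Weight each estimate by the inverse of its (isotropic) variance
w_uwb, w_ble = 1.0 / uwb_var, 1.0 / ble_var
fused = (w_uwb * uwb_est + w_ble * ble_est) / (w_uwb + w_ble)
fused_var = 1.0 / (w_uwb + w_ble)
print("fused position:", fused, "fused variance:", fused_var)
```

A Kalman filter generalizes this weighting to dynamic targets and full covariance matrices.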

Tables 2.2.5 and 2.2.6 list the countries and institutions with the greatest output of core patents on “localization by multisource information fusion”, respectively. Table 2.2.5 shows that China holds most of the patents in this area, accounting for 73.60% of the total number, and receives more than half of the total citations, more specifically 55.01%. These two figures show that China is leading the development in this area. Furthermore, four of the Top 10 institutions with the most core patents are from China, with Baidu ranked first, which also indicates that China is the most active in this area. The listed institutions include the internet enterprises Baidu and Google and the automotive companies GM, Hyundai, and Ford, which reveals the great importance of localization techniques to robotics and autonomous driving. Figure 2.2.5 presents the collaboration network among major countries. As shown, only a few countries collaborate on this topic, and there is no collaboration among the major institutions, signifying that R&D in this area proceeds relatively independently across countries and institutions.

Figure 2.2.6 shows the main paths along which “localization by multisource information fusion” will develop over the next 5–10 years, including the following:

(1)  Localization scenarios

1)  Millimeter wave localization: As 5G millimeter wave systems are commercially deployed, localization techniques for millimeter wave systems will also be implemented gradually.

《Table 2.2.5》

Table 2.2.5 Countries with the greatest output of core patents on “localization by multisource information fusion”

No. Country Published patents Percentage of published patents/% Citations Percentage of citations/% Citations per patent
1 China 669 73.6 2 060 55.01 3.08
2 USA 97 10.67 1 046 27.93 10.78
3 South Korea 39 4.29 121 3.23 3.1
4 Germany 30 3.3 178 4.75 5.93
5 Japan 23 2.53 72 1.92 3.13
6 Netherlands 6 0.66 124 3.31 20.67
7 Canada 6 0.66 31 0.83 5.17
8 France 5 0.55 9 0.24 1.8
9 India 5 0.55 4 0.11 0.8
10 Sweden 4 0.44 37 0.99 9.25

《Table 2.2.6》

Table 2.2.6 Institutions with the greatest output of core patents on “localization by multisource information fusion”

No. Institution Published patents Percentage of published patents/% Citations Percentage of citations/% Citations per patent
1 Baidu Incorporation 17 1.87 79 2.11 4.65
2 GM Global Technology Operations Limited Liability Company 7 0.77 46 1.23 6.57
3 Nanjing University of Posts and Telecommunications 7 0.77 19 0.51 2.71
4 Hyundai Motor Company 7 0.77 3 0.08 0.43
5 Google Incorporation 6 0.66 59 1.58 9.83
6 International Business Machines Corporation 6 0.66 54 1.44 9
7  Ford Global Technologies Limited Liability Company 6 0.66 29 0.77 4.83
8 Zhejiang University 6 0.66 17 0.45 2.83
9  South China University of Technology 6 0.66 11 0.29 1.83
10  Denso Corporation 6 0.66 6 0.16 1

《Figure 2.2.5》

Figure 2.2.5 Collaboration network among major countries in the engineering development front of “localization by multisource information fusion”

《Figure 2.2.6》

Figure 2.2.6 Roadmap of the engineering development front of “localization by multisource information fusion”

Moreover, millimeter wave localization can cooperate with the BeiDou satellite positioning system to realize high-precision indoor and outdoor localization, facilitating emergency rescue, smart transportation, and IoT applications.

2)  Underwater localization: Given the unique characteristics of underwater propagation, localization techniques developed for electromagnetic wave signals may not apply directly to underwater environments. Therefore, further development of underwater localization techniques is needed, which will be extremely important for defense and surveillance applications in oceanic environments.

(2) Source type

The source type will shift from point sources to rigid bodies and even group sources. Applications such as autonomous driving and robotics require both position and orientation information; in such cases, the object can no longer be regarded as a point source but must be treated as a rigid body. Therefore, rigid body localization will become an essential task in the future. Combined with high-precision radar, rigid body localization will be widely used in autonomous driving and robotics applications, as the sketch below illustrates.
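
As an illustration (a minimal sketch assuming a known body-frame sensor layout and measured world-frame sensor positions; the layout, pose, and noise are invented for demonstration), rigid body localization can be posed as an orthogonal Procrustes problem and solved in closed form with the Kabsch algorithm:

```python
# Minimal sketch: rigid body localization via the Kabsch algorithm.
import numpy as np

# Known sensor layout in the body frame (assumed for demonstration)
body = np.array([[ 0.5, 0.0, 0.0],
                 [-0.5, 0.0, 0.0],
                 [ 0.0, 0.5, 0.0],
                 [ 0.0, 0.0, 0.5]])

# Simulated ground-truth pose: rotation about z plus a translation
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([2.0, -1.0, 0.5])

# Noisy world-frame sensor positions, e.g., from high-precision radar
rng = np.random.default_rng(1)
world = body @ R_true.T + t_true + rng.normal(0.0, 0.01, body.shape)

# Kabsch: center both point sets, then SVD the cross-covariance matrix
b0, w0 = body - body.mean(0), world - world.mean(0)
U, _, Vt = np.linalg.svd(b0.T @ w0)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
R_est = Vt.T @ D @ U.T
t_est = world.mean(0) - R_est @ body.mean(0)
print("rotation error:", np.linalg.norm(R_est - R_true))
print("translation estimate:", t_est)
```

The estimated rotation supplies exactly the orientation information that point-source localization cannot provide.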

(3) Localization techniques

As machine learning–based methods mature and chip computing power increases, localization problems in complex propagation environments will be solved using intelligent machine learning–based methods, which may revolutionize the area and significantly improve performance, adding value in various application scenarios.


Participants of the Field Group

Review Members of the Expert Group

Leaders: PAN Yunhe, LU Xicheng

Members (in alphabetical order of the last names):

Group 1: JIANG Huilin, LI Detian, LI Tianchu, LIU Zejin, LUO Yi, LYU Yueguang, TAN Jiubin, ZHANG Guangjun

Group 2: CHEN Zhijie, DING Wenhua, DUAN Baoyan, LONG Teng, WU Manqing, YU Shaohua, ZHANG Hongke

Group 3: CHAI Tianyou, CHEN Jie, FEI Aiguo, JIANG Changjun, LU Xicheng, PAN Yunhe, ZHAO Qinping

Selection Members of the Expert Group

(in alphabetical order of the last names, subject convenors are marked *)

Group 1: CHEN Fan, CHEN Lin*, DING Ye, FAN Zhiyuan, GUO Xin, HE Wei, HU Chunguang, HU Huan, KONG Lingjie, LI Jiusheng, LIN Xiao, LIU Dong, LU Zhengang*, MA Jianjun, SU Quanmin, WU Guanhao, XIAO Dingbang, YANG Shuming, YANG Weiqiang, YUAN Luqi, ZHANG Huzhong, ZHANG Wenxi

Group 2: BU Weihai, CAI Yimao*, CHEN Hao, CHEN Wenhua, HU Cheng, HUANG Tao, LI Gang, LI Xiao, LIU Qi, LIU Wei*, LIU Yongpan, MA Zhiqiang, PI Xiaodong, QUAN Wei, SHI Longfei, SHI Xin, SUN Tao, TANG Hai, TANG Kechao, WANG Shaodi, WU Jun, XIA Zhiliang, XU Ke, YAO Yao, YU Zhiyi, ZHANG Jianhua*, ZHANG Jie, ZHAO Bo, ZHAO Luyu, ZHAO Ning

Group 3: CHEN Bo, CHEN Zhang, DING Zhijun, DONG Dezun, DONG Wei, GAO Fei, GUO Yao, LI Tiancheng*, LU Jianquan, SHANG Chao, SU Ran, SUN Xiaoming, WANG Gang, WANG Mengchang, WANG Xiaohui, WANG Xiaoying, WU Hongzhi, YAN Shengen, YU Jun, YU Tao, ZHANG Guangyan*, ZHANG Jun*, ZHANG Yu, ZHENG Yongbin, ZHU Qiuguo

 

Library and Information Specialists

Literature: LI Hong, XIONG Jinsu, ZHAO Huifang, CHEN Zhenying

Patent: YANG Weiqiang, LIANG Jianghai, LIU Shulei, HUO Ningkun, WU Ji, XU Haiyang, SONG Rui

 

Report Writers (in alphabetical order of the last names)

For the engineering research fronts: BU Weihai, CHENG Nan, DENG Huiqi, FANG Bin, HU Cheng, MA Jianjun, MEI Jianwei, PAN Gang, SUN Tao, SUN Xiaoming, YE Xianji, ZHANG Huzhong, ZHANG Quanshi, ZHANG Weifeng

For the engineering development fronts: CHEN Lin, FANG Fengzhou, GUO Yao, LI Tiancheng, LIN Yibo, LIU Jianguo, LIU Wei, PENG Mugen, SU Kuifeng, WANG Gang, YE Le, ZHU Qiuguo

 

Working Group

Liaisons: GAO Xiang, ZHANG Jia, ZHANG Chunjie, DENG Huanghuang, WANG Bing

Secretaries: ZHAI Ziyang, CHEN Qunfang, YANG Weiqiang, HU Xiaonv

Assistant: HAN Yushan