State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China
Artificial intelligence (AI) has taken breathtaking leaps forward in recent years, evolving into a strategic technology for pioneering the future. The growing demand for computing power—especially in demanding inference tasks, exemplified by generative AI models such as ChatGPT—poses challenges for conventional electronic computing systems. Advances in photonics technology have ignited interest in investigating photonic computing as a promising AI computing modality. Through the profound fusion of AI and photonics technologies, intelligent photonics is developing as an emerging interdisciplinary field with significant potential to revolutionize practical applications. Deep learning, as a subset of AI, presents efficient avenues for optimizing photonic design, developing intelligent optical systems, and performing optical data processing and analysis. Employing AI in photonics can empower applications such as smartphone cameras, biomedical microscopy, and virtual and augmented reality displays. Conversely, leveraging photonics-based devices and systems for the physical implementation of neural networks enables high speed and low energy consumption. Applying photonics technology in AI computing is expected to have a transformative impact on diverse fields, including optical communications, autonomous driving, and astronomical observation. Here, recent advances in intelligent photonics are presented from the perspective of the synergy between deep learning and metaphotonics, holography, and quantum photonics. This review also spotlights relevant applications and offers insights into challenges and prospects.
Artificial intelligence (AI) is a powerful method for augmenting and accelerating scientific research, as it aims to mimic, extend, and expand human intelligence to perform complex tasks [1]. Inspired by the information-processing mechanisms in the brain, deep learning utilizes multilayered artificial neural networks to automatically learn data representation and abstraction, exhibiting high-speed information processing and statistical inference capability [2]. Since AlexNet won the 2012 ImageNet competition [3], deep-learning-based AI technology has been in the global spotlight. Generative AI models such as ChatGPT and DALL-E developed by OpenAI have stunned the world, showcasing the power of AI to the general public. Technology giants such as Google, Microsoft, and Apple are increasing their investments in developing diverse AI applications to dominate the AI market. The Chinese technology companies Baidu, Alibaba, and Tencent are also positioning themselves to be worldwide innovation leaders in AI.
In recent years, significant breakthroughs have been made in AI, and its influence has spread in various fields, including physics, economics, engineering, and medicine. In particular, photonics—a key enabling technology in numerous scientific fields—has significantly benefited from the progress of deep neural networks (DNNs). The emerging applications of photonics in digital infrastructure, virtual and augmented reality (VR/AR), high-performance computing, and other areas pose increasingly demanding requirements on optical devices and systems. Traditional solutions for optical engineering problems are time-consuming. The integration of deep learning in photonics helps to speed up calculations [4] and enhance computational accuracy [5], [6].
Moore’s law, which observes that the number of transistors on a chip doubles every 18–24 months, has underpinned significant improvements in computer hardware performance over the past few decades. The exponential growth in computing power predicted by Moore’s law has allowed AI to make breathtaking progress. The proliferation of AI applications results in a great demand for high computing power. However, the exponential surge in the computational performance of electronic hardware predicted by Moore’s law has experienced a significant slowdown recently. The sustainability of Moore’s law is impeded by the thermal effect of power dissipation and the physical limits that prevent transistors from being scaled down indefinitely [7]. In addition, electronic computing systems relying on the von Neumann architecture currently provide essential support for executing AI algorithms. As datasets continue to grow, the separation of memory and the central processing unit (CPU) in the von Neumann architecture is causing long latency and large energy consumption [8]. Since 2012, the amount of computing used in training AI systems has been undergoing exponential growth with a 3.4-month doubling time, far outstripping the supply of electronic integrated circuits that follow Moore’s law with a two-year doubling period [9]. From the perspective of energy consumption, training a single large language model is estimated to produce the same carbon dioxide emissions as generated by 125 round-trip flights between Beijing and New York City [10]. It is thus challenging for the computing power and energy efficiency of traditional electronic computing to meet the demands of further AI applications.
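The gap between these two growth rates can be made concrete with a quick calculation (a minimal sketch using only the doubling times cited above; the time horizon is illustrative):

```python
# Compare compute growth under a 3.4-month doubling time (AI training demand)
# with a 24-month doubling time (Moore's law) over the same interval.

def growth_factor(months: float, doubling_months: float) -> float:
    """Return the multiplicative growth after `months`, given a doubling time."""
    return 2.0 ** (months / doubling_months)

years = 5
ai_demand = growth_factor(12 * years, 3.4)   # AI training compute demand
moore = growth_factor(12 * years, 24.0)      # transistor-count growth

print(f"Over {years} years: AI demand x{ai_demand:,.0f}, Moore's law x{moore:.1f}")
```

Over five years, demand grows by roughly five orders of magnitude while transistor counts grow by less than a factor of six, which is the mismatch that motivates alternative computing hardware.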
Various efforts have been made to explore advanced techniques for developing AI processors with high speed and low energy consumption. Designing specialized hardware accelerators for AI—such as graphics processing units (GPUs), tensor processing units (TPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs)—is conducive to increasing computing power [11]. Neuromorphic computing architectures that break the constraints of the von Neumann architecture and achieve in-memory computing—such as International Business Machines Corporation (IBM) TrueNorth, Neurogrid, and Intel Loihi—are being investigated as a means of providing an energy-efficient computing paradigm [12]. Nevertheless, such approaches involve improvements within traditional electronic computing systems, making it difficult to achieve an order-of-magnitude improvement in computing power per unit area. The rise of neural networks has triggered an urgent demand for alternative computing approaches that offer speed and energy-efficiency benefits over electronics. Consequently, numerous computing technologies beyond the scope of electronic computing systems are attracting growing interest, including carbon nanotube computing [13], quantum computing [14], biological computing [15], and photonic computing [16].
Photons, which are characterized by a broad bandwidth, low power dissipation, high speed, and massive spatial parallelism, are expected to replace electrons as information carriers [17]. The appealing features of photons have led to a surge in photonics research, including integrated optics [18], [19], [20] and fiber-optic communication [21]. With the development and commercialization of photonic integrated circuits and fibers, optical interconnects have matured into a practical technology for information processing. For example, Ayar Labs delivers optical input/output (I/O) chiplets and lasers to enable a bandwidth surpassing 2 terabits per second (Tbps) and energy losses below 5 pJ·bit−1, with superior performance compared with traditional electrical I/O [22].
With the constant search for photonic materials that exhibit special optical effects, substantial progress has also been made in developing photonic devices that can overcome some inherent electronic limitations. Thus far, an impressive range of photonic signal processors has been demonstrated to implement reconfigurable signal processing functions, such as temporal integration, temporal differentiation, Hilbert transformation [23], and nonlinear photonic activation [24]. Silicon-based materials and thin-film lithium niobate platforms have been demonstrated to generate and control optical quantum states [25], showing a promising path forward for scalable photonic quantum computing.
Furthermore, recent years have witnessed enormous advances in metaphotonics, holography, and quantum photonics. Benefiting from breakthrough advances in metaphotonics, Polar ID, the world’s first polarization sensor for smartphones, launched by Metalenz, provides secure and simple facial authentication [26]. By using synthetic wavelength holography, a high-resolution holographic camera can reveal unseen objects, such as those positioned around corners and through scattering media like fog, skin, and even the human skull [27]. The Jiuzhang 3.0 quantum computer prototype, which features 255 detected photons, is ten quadrillion times faster than a supercomputer in solving Gaussian boson sampling (GBS) problems [28]. All these groundbreaking advancements make photonic computing an attractive alternative for reinventing AI computing.
The profound fusion of AI and photonics has led to the development of intelligent photonics, an emerging multidisciplinary field that appears poised to drive a paradigm shift in the computing ecosystem and revolutionize practical applications. AI, with its powerful data processing and inference capabilities, has recently taken the photonics landscape by storm. Functional AI tools push optical imaging and sensing to the next level [29]. For example, AI-aided endoscopes enable fast and accurate diagnosis of cancer, coupled with endoscopic treatment that could offer surgeons real-time feedback [30]. Photonic sensors embedded in smartphones, wearables, and stationary equipment support the high-speed monitoring of health-related parameters and reliable diagnostics of a broad spectrum of diseases by using machine-learning algorithms [31].
Another aspect of intelligent photonics research is the development of photonics technology to implement neural networks. For example, smart home devices experience a delay between sending a voice query and receiving inference results because inadequate device memory and power limit these devices from directly storing and running enormous machine-learning models. Employing photonics as a platform for encoding data and performing computing holds promise for reducing latency in home automation systems [32]. Also, applying photonic neural networks to autonomous vehicles provides possibilities for real-time decision-making while consuming only a fraction of the energy required by power-hungry electronic computers [33]. As these examples illustrate, there is an emerging symbiosis between AI and photonics, which propels mutual advancement in both fields.
Therefore, the intersection of AI and photonics not only presents effective avenues to solve challenging photonics problems but also provides high-speed and power-efficient photonic computing platforms to further expand AI applications. Advances in intelligent photonics seem ready to promote fifth-generation mobile communication technology (5G), cloud computing, and the Internet of Things to an unprecedented level, fueling a data-rich, sustainable, and efficient future. Here, we review the exciting progress that has been achieved in intelligent photonics from the perspective of deep learning, metaphotonics, holography, and quantum photonics. Section 2 presents an overview of intelligent photonics. In Section 3, we focus on deep learning for computational optics and the optical realization of DNNs. After introducing metasurfaces empowered by deep learning and metasurface-based neural networks in Section 4, new paradigms in holography enabled by deep learning and holography-inspired neural networks are described in Section 5. The interplay of quantum photonics and AI technology is also discussed in Section 6. Then, relevant applications of intelligent photonics are highlighted in Section 7. Finally, in Section 8, we offer insights into the challenges and perspectives of this emerging interdisciplinary research area.
2. An overview of intelligent photonics
Driven by the deep integration of AI and photonics, intelligent photonics is a disruptive technology that is shaping our present and will redefine our future. Fig. 1 illustrates the synergy between deep learning and metaphotonics, holography, and quantum photonics, highlighting the intersection of the digital and physical worlds. This emerging multidisciplinary field primarily concentrates on leveraging the power of AI in photonics and exploring the potential of photonics in AI.
2.1. AI for photonics
AI has taken huge leaps forward in recent years, evolving into a powerful technology that makes real-world applications more convenient and intelligent. Given the ever-growing scale and complexity of datasets, AI is being increasingly integrated into diverse scientific disciplines to facilitate fast and efficient data processing and analysis. Leveraging AI in the photonics community to assist in solving challenging photonics problems has become a major focus. AI can be incorporated into forward modeling and inverse design to significantly advance the efficiency and accuracy of addressing photonics problems [34]. In the forward process, neural networks can serve as a surrogate physical computing model, providing a shortcut for mapping from optical parameters to physical responses. The ability of DNNs to learn features from data makes them an efficient approach for analyzing and processing complex optical information. Numerous effective deep learning models have been designed for holography [35], quantum experiments [36], light–matter interaction [37], and computational imaging [38] and sensing [39]. In the inverse process, AI can learn inverse mapping from physical responses to design parameters. Recent investigations have demonstrated the remarkable potential of deep learning for providing intelligent design for metasurfaces [40] and quantum photonic devices [41]. Deep learning can also be used to realize the design and flexible control of optical systems for imaging and communications [42], [43], thereby helping to reduce system complexity, upgrade system performance, and mitigate noise and crosstalk [44].
Deep learning exhibits remarkable performance in photonics applications. Nevertheless, challenges remain that require further attention and effective solutions [4]. The difficulty of constructing massive, high-quality datasets presents a major impediment to the further expansion of network scales for processing more complex problems. Another deficiency is that, when applied in photonics, deep learning permits only limited control of the multidimensional optical field. The backpropagation algorithm for network training is associated with issues such as a reliance on digitized training models, a lack of scalability, and operational complexity during training, which have given rise to recent proposals of physics-aware training [45] and backpropagation-free training [46].
2.2. Photonics for AI
Photonics benefits a great deal from AI, and vice versa. The thriving development of AI technology presents the challenges of requiring substantial processing time for linear algebraic calculations and large energy consumption. Developing photonics as a platform to implement AI computing is a highly promising research area for alleviating the current bottlenecks. Methods of manipulating photons to perform calculations can be generally classified into two categories: digital computing and analog computing. Digital computing, which includes directed logic operations, optical transistors, and optical logic devices, is achieved by using mechanisms like those of electronic computers. Fig. 2(a) shows an example of directed logic that conducts AND and NAND Boolean functions optically [47]. Great progress has been made in the demonstration of macroscale, microscale, and nanoscale optical systems as potential candidates for optical logic gate systems [48]. Optical logic gates hold potential in specialized applications such as optical signal processing for fiber communications [49], optical data storage [50], and large-scale optical quantum computing [51]. Analog computing realizes certain mathematical operations without additional circuits based on the inherent physical characteristics of the light field, including amplitude, phase, polarization, and light–matter interaction [52]. The principle of the optical vector–matrix multiplier proposed by Goodman in 1978 [53] is illustrated in Fig. 2(b). A vector array of light sources duplicated by lenses is input to multiply with a matrix loaded on the spatial light modulator (SLM). After multiplication, all the pixels in each row of the SLM are gathered at a corresponding detector of the charge-coupled device. This linear multiply–accumulate model provides a basis for other optical calculations.
The 4f (where f denotes the focal length of the lens) optical system shown in Fig. 2(c) performs a cascade of two Fourier transforms with two lenses and utilizes the principle of convolution [54] to enable the realization of matrix multiplication, which is widely used and indispensable for information processing [55]. As shown in Fig. 2(d), optical differential operations [56] are realized with properly designed metasurfaces, which can be directly applied for the edge detection of an image.
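Goodman's vector–matrix multiplier can be sketched digitally in a few lines (a minimal NumPy model of the fan-out, modulation, and detection stages; sizes and values are illustrative):

```python
import numpy as np

# Digital sketch of Goodman's optical vector-matrix multiplier:
# the input vector is optically duplicated across every row of the SLM,
# multiplied element-wise by the matrix loaded on the SLM, and each row
# is then summed onto one detector pixel.

rng = np.random.default_rng(0)
x = rng.random(4)          # light-source vector (non-negative intensities)
W = rng.random((3, 4))     # transmission matrix displayed on the SLM

duplicated = np.tile(x, (W.shape[0], 1))  # lens fan-out: copy x to every row
modulated = duplicated * W                # element-wise SLM modulation
detected = modulated.sum(axis=1)          # each detector integrates one row

assert np.allclose(detected, W @ x)       # identical to a matrix-vector product
```

The row-wise multiply-then-sum structure is exactly the multiply–accumulate operation that underlies the other optical linear-algebra schemes discussed here.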
Cutting-edge photonics—including metaphotonics, holography, and quantum photonics—is at the forefront of information technology research and industry, holding the potential to address the surging demand for computing power driven by emerging AI applications. Metaphotonics, which is based on metasurfaces, features miniaturization, ease of integration, and flexible light-field modulation; it shows immense potential for substituting for conventional, bulky photonic components to develop high-performance AI computing architecture [57]. As a means of optical information processing, holography can record and subsequently recover a three-dimensional (3D) optical field. The ability of holography to capture, store, and transmit optical data enables it to process information optically within AI systems [58]. Quantum photonics, as another burgeoning research area, presents exponential computational speed-ups over classical computation in performing certain tasks [59]. With its ability of ultra-fast parallel computation, optical quantum computing is promising for carrying out large-scale computations in AI systems. This review focuses on the interaction between AI and metaphotonics, holography, and quantum photonics.
At present, photonics-based devices and systems for the physical realization of neural networks primarily involve integrated and free-space optical neural networks (ONNs). The principles and implementations of photonic integrated circuits encompass coherent silicon photonics [60], wavelength-division multiplexing (WDM) technology [61], and integrated quantum photonics [62]. These on-chip integrated ONNs with compact architectures exhibit the capabilities of integration, programmable control, and hardware scalability and reconfigurability. Large-scale photonic integrated circuits are desired in applications of AI computing. However, expanding photonic integrated circuits to meet future petabit-per-second capacity demands is highly challenging with current technologies [63]. The further scaling up of photonic integrated circuits is limited by the footprints of computing units, bandwidth density, energy efficiency, and the precision of on-chip control and calibration. Technological innovations and breakthroughs in the research area of photonic integrated circuits require the development of wide-band materials [64], heterogeneous integration [65], densely packed waveguides [66], and high-efficiency tunable and modulation devices [67]. Free-space ONN architectures can be realized through various approaches, including holographic optical elements with nanofabrication technology [68], optoelectronic devices [69], and metasurfaces [70]. Optical matrix computation based on free-space light can fully exploit the capability of 3D optical interconnects and realize the flexible control of light fields. Nevertheless, the inference accuracy of these free-space computing architectures is affected by the fabrication and alignment precision of cascaded photonic devices. The optical components are bulky and difficult to integrate, resulting in low network scalability.
Although photonic computing offers appealing advantages over electronic computing in terms of computing speed, energy efficiency, and information density, implementing photonic computing in AI systems remains at the laboratory research scale and still has a long way to go before practical deployment and large-scale commercialization can be achieved.
Photonic computing processors have advanced by leaps and bounds, showing better speed and energy efficiency on certain tasks compared with electronic computing platforms such as GPUs and TPUs. Fig. 3 [69], [71], [72], [73], [74], [75], [76], [77], [78], [79] presents comparisons of the computing speed and energy efficiency of current representative photonic and electronic computing architectures. The photonic computing implementations include free-space ONNs [69], integrated ONNs involving Mach–Zehnder interferometers (MZIs) [71], phase-change materials (PCMs) [72], multimode interference cells [73], microrings [74], and others [75], in addition to the combination of free-space and integrated ONNs [76]. The electronic computing processors include Neurogrid, TrueNorth [77], the Haswell E5-2699 v3 CPU, the Nvidia Tesla P40 GPU, the Google TPU [78], and the Nvidia A100 GPU [79]. Photonic computing is creating a paradigm shift in AI computing, with immense potential to surpass the energy efficiency wall of electronic computing by many orders of magnitude while achieving ultra-fast computation.
3. Deep learning in intelligent photonics
Deep learning [2] is widely recognized as a mainstream algorithm for information processing and statistical inference. Recent breakthroughs in deep learning have been fueled by emerging network architecture [80], abundant training datasets, and the continuous growth of computing power. In deep learning, AI simulates the biological nervous system to build a DNN—a multilayer computational geometry comprising linear operation and nonlinear activation. Interconnected neurons form the structure of a DNN, which includes input layers, multiple hidden layers, and output layers. By means of cascading neural network layers and parameter training, the universal function approximation power of a DNN makes it a powerful, versatile, and computationally efficient tool in many fields of science and engineering [81]. The operation process of neurons in deep learning is illustrated in Fig. 4(a). The functional blocks of each neuron in the network contain a weighted addition unit and a nonlinear unit. The outputs in the $l$th layer can be expressed as follows [82]:
$$\mathbf{y}^{(l)} = f\left(\mathbf{W}^{(l)}\mathbf{x}^{(l-1)} + \mathbf{b}^{(l)}\right)$$
where $\mathbf{y}^{(l)} \in \mathbb{R}^{M}$ is the output vector, $\mathbf{x}^{(l-1)} \in \mathbb{R}^{N}$ is the input vector, $f(\cdot)$ describes the activation function, $\mathbf{W}^{(l)} \in \mathbb{R}^{M \times N}$ denotes the weight matrix, $\mathbf{b}^{(l)} \in \mathbb{R}^{M}$ is the bias vector, $M$ and $N$ are the numbers of elements in the output and input vectors, respectively, $l$ denotes the index of the network layer, and $\mathbb{R}$ is the real number field. The objective of deep learning is to find a mapping function $\mathcal{F}: \mathbf{x} \mapsto \mathbf{y}$. A simplified framework of a DNN is shown in Fig. 4(b); it can be formalized by stacking several single-layer networks into a DNN with $L$ layers.
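The single-layer operation of weighted addition followed by nonlinear activation can be sketched in a few lines (a minimal NumPy illustration; ReLU is chosen as the activation function for concreteness, and the sizes are arbitrary):

```python
import numpy as np

# Minimal sketch of one DNN layer: weighted addition then nonlinear activation.
# ReLU stands in for the generic activation function f.

def layer_forward(W: np.ndarray, x: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Compute f(Wx + b) for one network layer."""
    z = W @ x + b                 # weighted addition unit: linear operation
    return np.maximum(z, 0.0)     # nonlinear unit: ReLU activation

rng = np.random.default_rng(1)
x = rng.standard_normal(5)        # input vector (N = 5)
W = rng.standard_normal((3, 5))   # weight matrix (M = 3, N = 5)
b = rng.standard_normal(3)        # bias vector (M = 3)

y = layer_forward(W, x, b)        # output vector (M = 3)
```

Stacking $L$ such calls, each feeding its output into the next layer's input, formalizes the deep network described above.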
3.1. Deep learning for computational optics
Owing to their capability for efficient computation, deep learning algorithms yield high-quality solutions to the forward modeling and inverse design problems of computational optics. The effectiveness of deep learning in forming learning models to optimize image reconstruction and address aberrations in computational imaging and sensing configurations has been demonstrated [83], [84]. Significant progress has been made in the use of deep learning for computational imaging and sensing, in areas such as microscopy [85], [86], scattering imaging [87], [88], quantitative phase retrieval [89], and lensless imaging [90]. An example of designing a U-net model to achieve rapid image reconstruction for Fresnel zone aperture (FZA) lensless imaging is shown in Fig. 4(c)[91]. It has been verified that deep learning provides substantial performance improvements over previous computational imaging and sensing methods. The extraordinary fitting ability of a deep learning model is effective in optimizing the imaging quality while simultaneously accelerating the processing speed. As deep learning schemes are particularly suitable for digital data processing, the application of deep learning to achieve high fidelity and long-lifespan optical storage has been considered [92]. A deep-learning-based approach that can push the diffraction limit has been demonstrated to realize high-density optical information storage [93].
The inherent data-driven attributes of deep learning can also help to solve optical and photonic design problems, such as those presented by nanophotonic devices, optical communication devices, and computational imaging and sensing systems. Deep learning has been a powerful tool in nanophotonic design, as it can obtain nonlinear mapping between nanophotonic structure and desired functional properties [94]. In the field of fiber communications, increases in the transmission rate and frequency bands make signals more sensitive to nonlinear distortion effects. The application of deep learning in communication systems has been demonstrated to hold potential in fiber nonlinearity mitigation, which is crucial to improving the capacity of optical communication systems. End-to-end deep learning has been introduced for the design of fiber communication transceivers, achieving data transmission rates of 42 Gbit·s−1 beyond 42 km [95]. The use of deep learning to optimize the design of fiber communication devices and systems can facilitate the realization of high-dynamic-range, high-bandwidth, and large-scale optical networks. DNNs have also permeated the design of imaging and sensing systems, enabling the implementation of compact and low-cost imagers and sensors. An example of deep learning for intelligent sensor design is shown in Fig. 4(d)[96]. As described above, the feature extraction ability of deep learning enables it to process huge and complex datasets of candidate structures, resulting in efficient, high-fidelity, and multi-degree-of-freedom inverse structure design.
However, most DNNs designed for computational optics need gigantic amounts of labeled data to train the network. The acquisition of datasets predominantly relies on experimental measurements or physical simulations with conventional algorithms. This not only requires massive data collection for demanding and costly network training but can also result in the unavailability of accurate and reliable ground truth. The quality of labeled datasets sets an upper limit on network performance and generalization. Although emerging strategies such as hybrid learning [97] and physics-informed learning [98] can mitigate the difficulty of data collection to some extent, the performance of these strategies struggles to rival that of supervised learning using large datasets. Because of the huge computational resources required by network training, it is often impractical to deploy large-scale neural networks for imaging and sensing in mobile or wearable devices. Moreover, existing deep learning models can only control the optical properties of photonic structures with limited degrees of freedom. The capability of deep learning models for photonic design requires enhancement to include additional degrees of freedom such as topology, phase, and angular momentum. Related solutions for designing versatile models for the full control of light involve transfer-learning techniques [99] and reinforcement learning [100].
3.2. Optical realization of neural networks
Significant advances in computational optics—attributed to the application of deep learning—have in turn propelled research on hardware accelerators for neural networks. The maturity of electronic hardware accelerators such as GPU and TPU accelerates the computation speed in deep learning algorithms. However, with the continuous and substantial increase in information capacity, such electronic counterparts still seem to be incapable of meeting the demands of computing performance in the foreseeable future. As a fundamental operation in a DNN, matrix computation makes up a significant portion of the computational tasks. Therefore, optimizing the performance of matrix computations is essential for deep learning computing hardware. Photon-based accelerators manifest remarkable advantages in high-capacity, low-latency, and energy-efficient matrix information processing [101]. Since Farhat et al. [102] experimentally realized an ONN that implements a fully connected network of 32 neurons with a feedback mechanism, constructing neural networks in the field of optics has triggered extensive interest. A major current focus in the implementation of ONNs is how to realize linear operation and nonlinear activation in an optical way [103]. Linear operation can be realized by offloading some mathematical computation to the propagation of light, such as interference, diffraction, and scattering. The implementation of nonlinear activation layers in ONNs primarily involves photoelectric conversion and all-optical nonlinear activators with nonlinear optical effects. At present, research on the construction of ONN architecture is centered on the fields of integrated photonic platforms [104] and free-space optics [105].
3.2.1. Photonic computing with photonic integrated circuits
Photonic integrated circuits [106], which integrate various optical components such as lasers, waveguides, modulators, and detectors onto a single chip, provide a framework to perform computational tasks. Recent advances in silicon-based optoelectronic technology have paved the way for achieving on-chip integrated photonic computing systems with compact design, high-density integration, and robust stability. The fabrication of silicon photonic integrated circuits is compatible with complementary metal-oxide semiconductor (CMOS) technology, thereby permitting low-cost mass production. Typical silicon photonic integrated circuit designs for chip-scale ONNs include MZI architecture and WDM technology.
The principle of MZI-based photonic integrated circuits is depicted in Fig. 5(a). Each layer consists of an optical interference unit that enables the implementation of arbitrary matrix multiplication and an optical nonlinear unit that performs nonlinear activation. An arbitrary real-valued matrix $\mathbf{M}$ can be decomposed via singular-value matrix decomposition, which is given by $\mathbf{M} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\dagger}$. $\mathbf{U}$ is a unitary matrix and $\mathbf{V}^{\dagger}$ is the conjugate transpose of the unitary matrix $\mathbf{V}$; both can be realized through optical beamsplitters and phase shifters. $\boldsymbol{\Sigma}$ is a diagonal matrix that can be implemented with optical attenuators.
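The decomposition underlying the interference unit can be verified numerically (a digital sketch with NumPy; the matrix size is illustrative, and for a real matrix the unitary factors reduce to orthogonal matrices):

```python
import numpy as np

# Sketch of the singular-value decomposition used by MZI-based photonic
# circuits: M = U @ Sigma @ Vh, where the unitary factors map to
# beamsplitter/phase-shifter meshes and the diagonal Sigma to attenuators.

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))   # arbitrary real-valued matrix

U, s, Vh = np.linalg.svd(M)       # NumPy returns U, singular values, V-dagger
Sigma = np.diag(s)

# Unitarity of the interferometer meshes (orthogonal for real matrices):
assert np.allclose(U @ U.T, np.eye(4))
assert np.allclose(Vh @ Vh.T, np.eye(4))
# The cascade of the three optical stages reproduces M:
assert np.allclose(U @ Sigma @ Vh, M)
```

In a physical implementation, the singular values are typically normalized so that every diagonal entry lies at or below unity, since passive attenuators can only reduce optical power.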
In 2017, Shen et al. [60] proposed a fully ONN architecture that cascades 56 programmable MZIs in a silicon photonic integrated circuit based on optical coherence, as shown in Fig. 5(b). This research laid the foundation for unveiling the world’s first optical AI accelerator prototype in 2019. Subsequently, in 2021, a fully integrated photonic computing platform known as photonic arithmetic computing engine (PACE) was developed, which integrates over 10 000 photonic devices and has a system clock of 1 GHz. PACE exhibits a performance 100 times better than the Nvidia RTX 3080 GPU at specific tasks [71]. As the integration density of the MZI units employed by the coherent silicon integrated circuit increases, the implementation of photonic accelerators is typically limited by the escalating noise levels. Intrinsic noise sources that inevitably exist in photonic devices may significantly reduce the effective bit resolution of the mechanisms, degrading the analog accuracy during inference. To improve the noise-resistive behavior, approaches such as the auto-configuration algorithm [107] and noise-resilient deep learning with noise-aware training [108] have been demonstrated. The effect of limited bit resolution can also be mitigated by quantization-aware training [109] and dynamic precision inference [110]. An increasing number of MZI units leads to clock synchronization problems under high-speed modulation, which makes it difficult for MZI-based photonic accelerators to exhibit improved computing density. Moreover, the limitations of electrical I/O in terms of power efficiency, latency, reach, and bandwidth density may present crucial bottlenecks for developing high-throughput computing interconnects. MZI-based architectures are also burdened by the large footprints of MZI units and excessive energy consumption associated with phase tuning. These challenges hinder the feasibility of developing large-scale silicon photonic computing circuits. 
A recently proposed strategy involves reducing the number of MZI units and using integrated ultra-compact diffractive units to implement Fourier-transform operations, which is an effective avenue to achieve scalable and power-saving silicon photonic chips [111].
In addition to the MZI-based structure, which utilizes interference between different paths of coherent input light, another category of photonic integrated circuits is based on WDM technology, the principle of which is illustrated in Fig. 5(c). WDM technology can be employed to realize multiplication and weighted interconnection through direct element-wise correspondence between wavelengths and matrix elements, which does not require matrix decomposition. Research on microring resonator (MRR) weight banks has established a connection between integrated photonic filters and weighted-addition operations, facilitating analog signal-processing functions in silicon photonic integrated circuits [112]. A WDM broadcast-and-weight protocol configured by an MRR weight bank was proposed by Tait et al. [113], enabling massively parallel interconnection between photonic spiking neurons. Following this route, an all-optical spiking neuronal circuit (Fig. 5(d)[61]) using PCM units and an MRR was proposed and fabricated, advancing toward optical neuromorphic systems with fast signaling and high bandwidth. Despite the appealing advantages of parallel information processing, it remains remarkably challenging to realize high-level integration in WDM-based photonic integrated circuits. Owing to their more complex system architecture, WDM schemes are more difficult to calibrate and control precisely than single-wavelength approaches when scaled up. Recent work has proposed a microcomb-driven chip-based photonic processing unit (PPU) and developed a dedicated control protocol, making a significant stride toward a fully integrated PPU for industrial deployment [114]. Microcomb sources with a high signal-to-noise ratio have also been reported and are in high demand for WDM-based parallel photonic neuromorphic computing [115].
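The broadcast-and-weight idea can be illustrated with a toy dot product: each wavelength channel carries one input signal, an MRR weight bank sets one transmission weight per channel, and a photodetector sums the weighted channels. All numerical values below are placeholders for illustration, not measured device parameters:

```python
import numpy as np

# Toy sketch of WDM broadcast-and-weight: one input per wavelength channel,
# one MRR-set weight per channel, summed by a photodetector into a dot product.
# No matrix decomposition is needed; weights map element-wise to wavelengths.
inputs = np.array([0.8, 0.1, 0.5])        # signals on three wavelength channels
mrr_weights = np.array([0.9, -0.3, 0.4])  # balanced detection permits signed weights

output = float(np.dot(mrr_weights, inputs))  # detected photocurrent ∝ weighted sum
print(round(output, 3))  # → 0.89
```

A full weight matrix is obtained by operating several such weight banks in parallel, one per output neuron, each tapping the same broadcast wavelengths.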
3.2.2. Photonic computing by free-space optics
Aside from adhering to the principle of integrated coherent optics, significant efforts have been devoted to constructing different ONN architectures following scalar diffraction and scattering theories. By performing massive-scale complex-valued computing with holographic elements, SLMs, and lenses, free-space optics provides a distinctive optical deep-learning engine that operates in a high-speed and power-efficient manner. As a representative free-space ONN architecture, the diffractive ONN physically employs multiple layers of diffractive surfaces to perform deterministic tasks and statistical inference; the principle of this architecture is presented in Fig. 6(a). The free-space propagation of complex fields between diffractive layers can be modeled based on the Rayleigh–Sommerfeld diffraction equation:
$$w_i^l(x, y, z) = \frac{z - z_i}{r^2}\left(\frac{1}{2\pi r} + \frac{1}{\mathrm{j}\lambda}\right)\exp\!\left(\frac{\mathrm{j}2\pi r}{\lambda}\right)$$
where $i$ denotes the $i$th neuron of the $l$th network layer, located at $(x_i, y_i, z_i)$. $w_i^l(x, y, z)$ represents the complex field at location $(x, y, z)$, which can be viewed as a secondary wave generated from the source at $(x_i, y_i, z_i)$. $r = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2}$, and $\mathrm{j} = \sqrt{-1}$. $z - z_i$ denotes the distance between adjacent diffractive layers, and $\lambda$ is the wavelength. The transmission coefficient of a neuron, which can be iteratively adjusted through an error back-propagation algorithm during the training process, satisfies the following:
$$t_i^l(x_i, y_i, z_i) = a_i^l(x_i, y_i, z_i)\exp\!\left(\mathrm{j}\phi_i^l(x_i, y_i, z_i)\right)$$
where $a_i^l$ and $\phi_i^l$ refer to the amplitude and phase terms, respectively. The output function of the $i$th neuron in layer $l$ can be described as
$$n_i^l(x, y, z) = w_i^l(x, y, z)\, t_i^l(x_i, y_i, z_i) \sum_k n_k^{l-1}(x_i, y_i, z_i)$$
where $\sum_k n_k^{l-1}(x_i, y_i, z_i)$ represents the input wave to the $i$th neuron in layer $l$, and $k$ denotes the index of the neurons on the $(l-1)$th layer.
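As a concrete illustration of these relations, the following NumPy sketch propagates a field between two small diffractive layers by summing the Rayleigh–Sommerfeld secondary waves $w_i^l$ over all source neurons and then applying the neurons' complex transmission coefficients $t_i^l$. The layer size, pitch, and spacing are arbitrary assumptions for illustration, not values from the cited works:

```python
import numpy as np

# Illustrative diffractive-layer propagation (not a trained network): sum
# Rayleigh-Sommerfeld secondary waves w over all source neurons, then apply
# each receiving neuron's transmission coefficient t = a * exp(j*phi).
wavelength = 0.75e-3   # metres, corresponding to 0.4 THz in vacuum (assumed)
dz = 30e-3             # layer spacing z - z_i (assumed)
n_side = 8             # 8 x 8 neurons per layer, kept small for brevity
pitch = 0.4e-3         # neuron pitch (assumed)

coords = (np.arange(n_side) - n_side / 2) * pitch
X, Y = np.meshgrid(coords, coords)

def propagate(field):
    """Sum the secondary wave from every source neuron at every target neuron."""
    out = np.zeros_like(field, dtype=complex)
    for iy in range(n_side):
        for ix in range(n_side):
            r = np.sqrt((X - X[iy, ix])**2 + (Y - Y[iy, ix])**2 + dz**2)
            w = (dz / r**2) * (1 / (2 * np.pi * r) + 1 / (1j * wavelength)) \
                * np.exp(1j * 2 * np.pi * r / wavelength)
            out += field[iy, ix] * w
    return out

amplitude = np.ones((n_side, n_side))                                      # a_i^l
phase = np.random.default_rng(1).uniform(0, 2 * np.pi, (n_side, n_side))   # phi_i^l
t = amplitude * np.exp(1j * phase)     # trainable transmission coefficients

field_in = np.ones((n_side, n_side), dtype=complex)
field_out = t * propagate(field_in)    # summed input wave modulated by t
print(field_out.shape)  # → (8, 8)
```

In an actual D2NN, the phases `phi` would be optimized by error back-propagation rather than drawn at random, and several such layers would be cascaded before the detector plane.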
As presented in Fig. 6(b)[68], an all-optical diffractive deep neural network (D2NN) operating at 0.4 THz was reported in 2018, which could implement advanced image-identification and classification tasks via all-optical computing. Since then, various D2NN design approaches have been investigated to increase the all-optical processing capacity, improve the inference accuracy, and broaden the working wavelengths. By examining and analyzing the information-processing capacity of D2NNs, it was found that the dimensionality of the solution space is linearly proportional to the number of neurons and diffractive layers [116]. Given that the disparity between computational simulations and physical implementations prevents free-space diffractive processors from achieving maximal accuracy, numerous strategies—such as improving the misalignment tolerance [117] and optimizing the Fresnel number [118]—have been employed to enhance the performance precision. In addition to extending the D2NN from the terahertz spectrum to visible wavelengths [119], a broadband network design that can process a continuum of wavelengths has been proposed [120]. A wavelength-multiplexing scheme has been executed to massively increase the throughput of all-optical computing [121] and to perform multiple tasks in parallel with high accuracy [122]. Moreover, a great deal of work has been done to explore different variations and modifications of D2NN structures in order to optimize network performance. Fourier-space modulation is carried out with 4f systems and diffractive layers on the Fourier plane, enabling image-to-image inference with enhanced resolution compared with real-space modulation [123], [124].
In such diffractive ONN frameworks, the training that produces the design for physical implementation is typically carried out on an electronic computer, which results in limited scalability and prolonged training time due to the high computational complexity of the in silico training process. To overcome these limitations, an approach for the in situ training of diffractive ONNs by cascading SLMs was proposed in 2020, based on the light-reciprocity and phase-conjugation principles, achieving accelerated training speed and enhanced energy efficiency for core computing modules [125]. The quest to address the limitations of static D2NNs, which have fixed architectures and functions once fabricated, has also led to the development of reconfigurable diffractive processing units [69]. These units can be cascaded or nested to accommodate various D2NN architectures for real-world intelligent multichannel-processing tasks [126] and node/graph classification tasks [127]. Another technical challenge for current D2NN architectures is that they typically require bulky optical components, which are costly, space-consuming, and of limited manufacturing precision, posing challenges for scaling to large numbers of neurons. Recent research has found that scattering media are analogous to the diffractive modulation layers in a D2NN and can be regarded as an optical realization of a neural network. The ONN architecture based on light scattering is depicted in Fig. 6(c); it relies on wave–matter interactions to achieve a more compact device form factor. The internal disordered arrangement of dielectric particles can provide large numbers of neurons, supporting large-scale and random matrix multiplication for applications including reservoir computing and computational imaging.
For example, a biophotonic random optical-learning machine with a tumor spheroid was designed for the detection of cancer morphodynamics, with thousands of cells acting as wave-mixing nodes to form a large-scale computing reservoir [128]. Another example of photonic computing with scattering media is presented in Fig. 6(d)[129]: subwavelength scatterers with ultra-high computing density were demonstrated to enable artificial neural inference in a continuous and layer-free manner.
Although all-optical neural networks provide a promising avenue for tackling computationally costly challenges in electronic neural networks, implementing optical nonlinear activations remains a challenge. A hybrid optical–electronic computing architecture that combines optical and electronic neurons can address this challenge and can be interpreted as an optical-encoder–electronic-decoder system. A conventional machine-vision system records a scene through image sensors and subsequently decodes the information to implement desired inference tasks via electronic neural networks [130], [131]. From this perspective, hybrid optical–electronic computing systems replace the imaging optics with optical computing systems that serve as analog front-end processors integrated with back-end electronic neural networks. An example of a hybrid optical–electronic computing system for image classification, which incorporates a photonic computing layer prior to electronic computing to save on computational cost, is presented in Fig. 6(e)[132]. Many efforts have since been dedicated to designing hybrid optical–electronic neural networks with optical preprocessing followed by an electronic platform that processes the recorded data, empowering AI applications in programmable sensing [133], machine vision [134], [135], and orbital angular momentum (OAM) spectrum measurement [136].
3.3. Summary and perspectives
Deep learning empowers computational optics and, in turn, photonics technology leverages the characteristics of photons to develop high-performance AI computing architectures. Advances in deep learning enable optimization of both the front-end optical design and the back-end digital processing models. Photonic design by deep learning brings about innovative and effective design strategies for nanophotonic components and optical systems. Various deep learning models have been developed to improve performance in areas such as computational imaging and sensing, optical storage, and fiber communications. However, deep learning in computational optics still faces challenges in massive data collection, generalizability, large-scale network training, and precise control of optical properties, making it necessary to find appropriate network models. Integrated and free-space photonic computing architectures for neural network implementation show great potential for accelerating deep learning by several orders of magnitude while reducing energy consumption. Remarkable progress has been made in leveraging optoelectronic devices to construct ONNs of the feed-forward [60], recurrent [137], and spiking [61], [138] types. Future research on photonic integrated circuits using cascades of multiple MZI units and WDM technology must expand computational scales and advance system integration. While free-space ONN architectures can effectively harness the massive parallelism of light propagation and enable flexible light-field modulation, they suffer from limitations in scalability. Implementing deep neural networks with photonic computing hardware has thus emerged as a promising route toward low-latency, power-efficient, and generalizable network systems. In short, the future of deep learning may indeed be photonic.
4. Metaphotonics in intelligent photonics
Smart vision systems combined with AI can extract optical information hidden from our eyes, such as phase and polarization. However, conventional optical components are too bulky to enable advanced functionalities in a small device [139]. The pursuit of exceptional vision performance in compact and lightweight systems poses unprecedented challenges for optical engineering. The concept of metaphotonics may provide a viable pathway for overcoming these bottlenecks, facilitating the development of advanced vision technologies with miniaturized footprints and flexible light-field modulation [140]. Recent years have witnessed rapid progress in the emerging field of metaphotonics, which employs artificially engineered metamaterials to achieve tunable optical responses. The development of metaphotonics and metamaterials makes it possible to replace bulky optical components with thin nanostructured films, termed “metasurfaces” [141]. Metasurfaces are composed of a specially arranged planar array of nanostructures, called ‘‘meta-atoms,” whose geometry, size, orientation, and arrangement can be designed to deliver enhanced optical functionalities through light–matter interactions. Compared with conventional reflective, refractive, and diffractive optical components, metasurfaces possess dominant advantages including an ultra-compact form factor, low absorption loss, ease of fabrication and integration, and flexible control of light fields [142]. These characteristics have led to breakthroughs in a wide range of applications, such as tunable structured light [143], light detection and ranging (LiDAR) [144], holography [145], advanced imaging [146], and high-speed communications [147].
Underpinned by meta-optical physics, meta-devices formed from a composition of metasurface elements are characterized by ultra-compactness, light weight, a small pixel pitch, and broadband operation [148]; they include metalenses [149], microscopy coverslips [150], and biosensors [151]. The compact integration of metasurfaces also enables hybridization with other standard optical components [152], leading to the re-envisioning of a near-future disruptive optical platform.
4.1. Metasurfaces empowered by deep learning
In parallel with the development of metasurfaces, AI has emerged as a powerful tool for advancing the frontiers of digital information, rapidly penetrating the field of metaphotonics [153]. The development of metaphotonics and AI is illustrated in Fig. 7[68], [69], [154], [155], [156], [157], [158], [159], [160], [161]. Early pioneering work on metamaterials can be traced back to 1968 [154], but extensive research on this topic emerged in 1996 [155]. As one of the initial proposals, metasurfaces based on plasmonic V-shaped antennas were suggested by Capasso’s research team [156] in 2011 and were experimentally confirmed to exhibit anomalous reflection and refraction phenomena in agreement with the generalized Snell’s law. Unlike standard optical components, which rely on gradual phase shifts accumulated along the optical path, such metasurfaces can imprint phase discontinuities on propagating light at the interface between two media, thus allowing great versatility in sculpting the optical wavefront. Subsequently, new metasurface designs exploiting different types of phase modulation, including geometric phases [162] and dynamic propagation phases [163], have been explored. Because function-specific metasurfaces cannot satisfy the demands of multifunctional photonic platforms, interest in multifunctional metasurfaces—including polarization-multiplexing metasurfaces [164], wavelength-multiplexing metasurfaces [165], and the hybridization of metasurfaces with other novel materials [166]—has grown, with the aim of achieving multichannel control of the light field and providing versatile functionalities for biomedical, computational, and quantum applications.
Traditional metasurfaces enable the engineering of only specific or a limited range of electromagnetic functionalities because of their static topological geometry. In pursuit of programmable metasurfaces that can manipulate electromagnetic fields in real time, the concept of digital coding metasurfaces was proposed in 2014 [157], [167], building a bridge between metasurface physics and digital information. Della Giovampaola and Engheta [167] developed a structural design method that constructs metamaterial bytes through spatial mixtures of two appropriately chosen elemental materials, while still being confined to a digital description of equivalent medium parameters. In contrast, Cui et al. [157] demonstrated the use of digital codes with opposite phase responses, rather than medium parameters, to characterize metasurfaces, which enables connection with the coding streams of digital information. Under digital instruction from field-programmable gate array (FPGA) hardware, the proposed real-time reprogrammable metasurfaces make it possible to modulate electromagnetic waves and process digital information simultaneously, evolving into a new branch of metasurfaces referred to as ‘‘information metasurfaces.” Since then, much effort has been devoted to information metasurfaces, extending their coding forms [168], coding dimensions [169], and working frequencies [170], as well as achieving various applications such as OAM generators [171], spatial modulators [172], wireless communication systems [173], and many others.
In terms of AI, neural networks were first proposed in 1943 [158], and machine learning underwent a revival around 1980. Since the successful implementation of deep learning algorithms in 2006 [159], the combination of metaphotonics and AI has been a focus of attention. Canonical methods for metasurface modeling and design rely on numerical algorithms that optimize the structural parameters to achieve the desired properties, which can be time-consuming [174]. Deep learning provides a new framework for modeling and designing metasurface structure and function. The model in Fig. 8(a)[175] shows the use of deep learning to simulate the interaction of light and meta-atoms for performance prediction. A deep learning approach that can be applied to a variety of metasurface device designs across the entire electromagnetic spectrum is illustrated in Fig. 8(b)[176]. Many studies have demonstrated the intelligent design of meta-atoms [177] and metasurface patterns [178] using deep learning. Moreover, with the support of effective DNN architectures, metalenses can be designed and optimized to deliver additional functionalities [179], including a wide field of view (FOV), aberration correction, and a high numerical aperture.
Aside from being used for the automatic design of metasurfaces and meta-devices, the unprecedented ability of deep learning to analyze massive data can be leveraged to process information received from metasurfaces, in fields such as image analysis [180], infrared absorption spectroscopy [181], and microwave signals [182]. Furthermore, AI technology is being incorporated into programmable metasurfaces to manipulate their reconfiguration, developing toward intelligent metasurface systems. Li et al. combined machine learning techniques with digital coding metasurfaces to implement real-time imaging [160] and realized high-quality recognition performance [183], an achievement that could open up a new avenue for future intelligent interfaces between humans and machines. Inspired by this, the flexible control of programmable metasurfaces through deep learning for automatic target tracking and wireless communications is depicted in Fig. 8(c)[184]. Overall, AI technology can improve and accelerate metasurface modeling and design, while also enabling data analysis and intelligent regulation of metasurfaces.
4.2. Metasurface-enabled ONNs
With the rapid progress of AI, the exponential growth of computing power requires advanced computing hardware with high computing performance and low energy consumption. Hence, research into photonic devices for all-optical AI has received extensive attention due to the inherent characteristics of photons. Metasurfaces featuring miniaturized footprints have been developed for all-optical calculations [185], with huge potential to replace bulky optical assemblies in ONN construction. In 2014, Silva et al. [186] demonstrated the potential use of metasurfaces for performing a diverse set of mathematical operations—including spatial differentiation, integration, and convolution—without analog-to-digital conversion or other systematic delays. This work has stimulated numerous theoretical studies and experimental validations of metasurfaces for implementing optical analog computing [187], which can be applied to wavefront engineering [188], nonlinear operation [189], and real-time edge detection [190].
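The spatial-differentiation operation mentioned above can be emulated numerically: in the Fourier domain, a first derivative along $x$ corresponds to multiplying the spatial spectrum by $\mathrm{j}k_x$, which is the transfer function a suitably engineered metasurface approximates. The sketch below uses an arbitrary Gaussian profile chosen for illustration:

```python
import numpy as np

# Illustrative sketch of metasurface-style analog spatial differentiation:
# apply the transfer function j*k_x in the spatial-frequency domain and
# compare against a finite-difference derivative of the same profile.
n = 256
x = np.linspace(-1, 1, n, endpoint=False)
signal = np.exp(-x**2 / 0.05)             # smooth input profile (placeholder)

k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])   # angular spatial frequencies
derivative = np.real(np.fft.ifft(1j * k * np.fft.fft(signal)))

fd = np.gradient(signal, x)               # finite-difference reference
print(np.allclose(derivative, fd, atol=0.05))  # → True
```

Edge detection follows directly, since a derivative operator suppresses slowly varying regions of an image while amplifying its boundaries.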
The function of metaphotonics for all-optical information processing supports the use of metasurfaces to build new ONN architectures. A design strategy that adopts cascaded multilayer metasurfaces to simulate the functionality of hidden layers in a DNN has been reported, the principle of which is illustrated in Fig. 9(a). It should be noted that the metasurface comprises a dense array of subwavelength meta-atoms. Each meta-atom acts as an active artificial neuron and connects to other meta-atoms in subsequent layers, following the diffraction theory of light [57]. According to this principle, Qian et al. [70] developed a diffractive neural network implemented by a compound metasurface to perform logic operations in 2020, which is shown in Fig. 9(b).
Beyond performing mathematical operations via metasurface-based ONNs, another neuromorphic metasurface structure with high-density integration was developed that takes advantage of the metaphotonics platform to conduct neural computing for the recognition of handwritten digits [191]. The all-optical D2NN described above [68] was composed of passive structures and thus had a fixed function once fabricated, which prevented it from being retrained for other goals and tasks. In pursuit of advanced diffractive optical platforms, a programmable and retrainable wave-based D2NN architecture was created, which can directly process electromagnetic waves to tackle various intelligent tasks, including wave sensing, identification, and wireless communications [161]. Unlike the work realizing an optoelectronic D2NN with reflection-type SLMs [69], this programmable transmission-type D2NN structure uses multilayer digital-coding metasurface arrays, as presented in Fig. 9(c)[161]. Furthermore, the ultra-thin and ultra-flat characteristics of metasurfaces allow them to occupy only a small fraction of the system volume, which can contribute to realizing an on-chip integrated ONN structure. As depicted in Fig. 9(d)[192], a metasurface-enabled on-chip multiplexed diffractive neural network integrated with an imaging sensor has been presented, providing a chip-scale structure for multichannel sensing and multitasking in the visible range. As described here, metasurfaces with flexible control of light fields and high integration hold great potential for promoting the development of all-optical diffractive neural networks.
Another line of research on metasurface-based ONNs combines the advantages of both metaphotonics and electronics. It was recently pointed out that a metasurface can be leveraged as an optical front-end unit for a deep-learning-based neural network, resulting in a metasurface-based optical–electronic architecture [193]. As shown in Fig. 9(e), such a paradigm can be divided into a physical layer based on the metasurface, to encode relevant information, and digital layers for subsequent information decoding, based on deep learning. As shown in Fig. 9(f)[194], reconfigurable metasurface transceivers working as the physical layer can encode the information of a scene for subsequent machine-learning-based classification. A hybrid optical–electronic neural network with a single all-dielectric metasurface (Fig. 9(g)[195]) was demonstrated to expand the neural network capacity. Joint optimization of the optical front-end and electronic back-end outperformed a fully electronic neural network in terms of processing speed and accuracy of massive data.
4.3. Summary and perspectives
As discussed above, the integration of metaphotonics and AI techniques has led to significant advancements in both fields. During the past few years, AI has been extensively employed in metaphotonics for the modeling and design of metasurfaces, the powerful analysis of data from metasurfaces, and intelligent programmable meta-devices. In addition to AI-empowered metaphotonics, metasurfaces have been demonstrated to allow all-optical calculations via appropriately engineered meta-atoms, which is of great importance in developing high-performance computing hardware for neural networks. Recently, all-optical and hybrid optical–electronic neural networks based on metasurfaces have attracted significant research interest. Traditional optical components, with their bulky form factors and difficulty of integration, limit the functionalities of ONNs. The emergence of meta-devices that are extremely thin, lightweight, compact, and easy to integrate provides a solution to the drawbacks of traditional optics. Therefore, the maturation of metasurfaces is injecting new vitality into the exploration of more compact and programmable ONN models.
However, the construction of metasurface-based neural networks for large-scale commercial applications still presents various challenges [196]. Tunability would enable adaptive responses, and reconfigurability would allow metasurface-based ONNs to be tailored in real time to various tasks based on the measured responses; however, because the tunability and reconfigurability of current metasurfaces are limited, in situ network training is not yet feasible. In addition, the signal strength gradually attenuates as light passes through each computing layer in a multilayer metasurface structure. This deficiency restricts the realization of complex network models and thus limits the inference capability and accuracy of metasurface-based ONN architectures. Furthermore, the network performance is influenced by the fabrication accuracy of the metasurface nano-units and the alignment precision of the cascaded metasurfaces. These deficiencies will gradually be addressed through the continuous exploration of advanced intelligent photonics technology based on metasurfaces. Overall, the integration of AI and metaphotonics is conducive to breakthroughs in computing-hardware performance for ONNs, thus enabling more complex and intelligent tasks.
5. Holography in intelligent photonics
Holography enables a light field to be recorded as a hologram and later reconstructed, giving rise to a true-to-life reproduction of the recorded 3D scene. A hologram is an interference pattern that encodes both the amplitude and the phase of the object’s wavefront through the interaction of the object wave and the reference wave. Since the pioneering research by Gabor [197], Leith and Upatnieks [198], and Denisyuk [199], holography has developed into a technology with transformative potential in a wide range of applications, such as optical imaging [200], 3D display [201], full-color display [202], optical metrology [203], and data storage [204]. Benefiting from the advent of computer technology and the maturation of SLMs and imaging sensors, both wavefront acquisition and wavefront reconstruction can be executed through either optical or digital means. The type of holography that involves optical recording by a digital sensor with digital reconstruction by means of numerical calculations is known as digital holography [205]. There are two main approaches for holographic wavefront recording [206]. In in-line holography, the axes of the diffracted object wave and the reference wave are parallel. This in-line configuration provides full bandwidth utilization and high phase sensitivity, but the reconstruction quality is susceptible to overlapping out-of-focus twin-image artifacts. Alternatively, off-axis holography can be implemented to address the twin-image problem by introducing an oblique-angle reference beam, although this approach sacrifices space-bandwidth product. To fuse the advantages of both approaches, a coupled configuration combining in-line and off-axis holograms has been proposed to achieve high-resolution, full-field reconstruction [207].
In comparison, the type of holography that generates holograms via numerical calculation but reconstructs the object optically is referred to as computer-generated holography (CGH) [208]. Commonly used methods for computing holograms include iterative projection algorithms such as the Gerchberg–Saxton algorithm [209], as well as nonconvex optimization algorithms [210].
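A minimal sketch of the Gerchberg–Saxton iteration (with a random placeholder target and a single-FFT propagation model, assumed here for brevity) alternates between the hologram plane and the image plane, enforcing a phase-only constraint at the SLM and the target amplitude at the image:

```python
import numpy as np

# Minimal Gerchberg-Saxton sketch: iterate between the hologram (SLM) plane
# and the image plane via FFTs, enforcing unit amplitude at the SLM plane and
# the target amplitude at the image plane; the converged SLM phase is the
# computer-generated hologram.
rng = np.random.default_rng(0)
target = rng.random((64, 64))                    # target amplitude (placeholder)
target /= np.linalg.norm(target)

phase = rng.uniform(0, 2 * np.pi, target.shape)  # random initial hologram phase
for _ in range(50):
    slm_field = np.exp(1j * phase)               # phase-only SLM constraint
    image_field = np.fft.fft2(slm_field)         # propagate to image plane
    # keep the propagated phase, impose the target amplitude
    image_field = target * np.exp(1j * np.angle(image_field))
    slm_field = np.fft.ifft2(image_field)        # propagate back to SLM plane
    phase = np.angle(slm_field)                  # updated hologram phase

recon = np.abs(np.fft.fft2(np.exp(1j * phase)))  # optical reconstruction model
recon /= np.linalg.norm(recon)
print(recon.shape)  # → (64, 64)
```

Real CGH pipelines replace the single FFT with an appropriate Fresnel or angular-spectrum propagator; the alternating-projection structure is unchanged.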
5.1. Holography empowered by deep learning
The prosperous advancement of deep learning has given rise to new paradigms in digital holography and CGH. Digital holography can solve inverse imaging problems to retrieve amplitude, phase, polarization, and spectral information from recorded holograms. Previous physics-driven approaches involving diffraction calculation and heuristic regularization design are time-consuming [211]. The emerging data-driven method based on deep learning outperforms such physically based algorithms and has been demonstrated to be a powerful tool for digital holographic tasks, with superior image quality, real-time performance, and reduced imaging-system complexity [212]. Fig. 10[35], [213], [214] presents the principles and examples of digital holography, CGH, and metasurface holography enabled by deep learning. The principle of digital holographic imaging based on deep learning is illustrated in Fig. 10(a): A two-dimensional (2D) hologram recorded optically by an image sensor is input to a trained DNN to obtain an improved reconstruction. Fig. 10(b)[35] presents an example of deep-learning-empowered digital holographic reconstruction, which uses a trained convolutional neural network for phase recovery. The use of deep learning in the areas of depth estimation [215], resolution enhancement [216], noise suppression [217], phase unwrapping [218], and object classification [219] has also aroused great interest, opening avenues for significantly advancing digital holography.
CGH provides a 3D projection with high spatio-angular resolution, which has a profound and broad impact on the applications of direct-view displays as well as VR/AR. Due to the tremendous computational cost required for performing Fresnel diffraction simulations, the existing physically based CGH algorithms impose an explicit tradeoff between the achieved image fidelity in experiments and the computing speed, which fundamentally prevents the practical deployment of dynamic holography. Since Horisaki et al. [220] successfully designed a DNN for rapid hologram generation, significant research efforts have been dedicated to advancing high-resolution hologram synthesis at high speed by designing DNN models. Fig. 10(c) presents the principle of a computer-generated holographic display based on deep learning, which adopts different deep learning frameworks to generate holograms that are uploaded onto the SLM, and then displays the reconstruction optically.
At present, learning-based CGH is primarily divided into data-driven deep learning [220], [221] and model-driven deep learning [213], [222], [223], [224], [225]. The data-driven approach can effectively accelerate hologram generation beyond the training dataset. However, it requires large-scale datasets of target images and holograms generated in advance by traditional iterative algorithms, which consumes huge computing resources and calculation time. Such a supervised method trains the neural network by calculating the loss function between the output holograms and the label holograms, so the network's performance is bounded by the categories and volumes of the training samples.
To tackle the challenges described above, model-driven deep learning introduces a forward process model of the inverse problem as the constraints to train the network in an unsupervised manner [222], calculating the loss function between the output reconstructions and the image dataset without the need for labeled datasets. Recently, Peng et al. [223] developed a camera-in-the-loop optimization strategy that makes it possible to optimize holograms by achieving a match between the reconstructions captured by a camera and the target images. This proposal enables speckle-free holography with both coherent and partially coherent light sources [224]. To break through the limitations of hologram dataset size and quality, an autoencoder-based neural network was devised, which could generate high-fidelity holograms in 0.15 s [225]. The 4K diffraction-driven network depicted in Fig. 10(e)[213] combines a residual neural network and sub-pixel convolution, enabling the generation of 4K resolution holograms for both 3D and color displays. A physics-guided deep-learning pipeline—dubbed “tensor holography”—was proposed by Shi et al. [226] to resolve the quality-speed tradeoff. Further explorations into deep-learning-based holography pipelines make it possible to develop speckle-free, real-time, photorealistic, and high-resolution 3D holography.
Metasurfaces are opening up new frontiers for hologram design [142]. Photonic devices such as SLMs, digital micromirror devices (DMDs), and holographic elements are flawed, with limited light-field modulation capability and large pixel sizes, causing the holograms to suffer from low resolution, a narrow FOV, undesired high orders of diffraction, and twin image issues. Metasurfaces feature subwavelength pixel sizes and provide enormous design freedom for arbitrarily modulating various parameters of the light field. Thanks to their superior features, metasurfaces show great potential for generating reconstructed images with unprecedented spatial resolution, high precision, a broad FOV, a wide working bandwidth, and only the desired diffraction orders [145]. Numerous studies have been conducted to demonstrate meta-holography according to various modulation mechanisms. Emerging investigations on dynamic meta-holography with multiplexing techniques aim to optimize the space-bandwidth product and enlarge the information capacity of metasurface holograms [227]. Moreover, a 3D-integrated metasurface device makes it possible to realize full-color holography [228]. However, existing iterative algorithms for metasurface-based hologram design are time-consuming, which impedes the progress of dynamic holographic imaging in practical scenarios. AI permits a rapid exploration of the optimal design parameters for metasurface holograms [214], [229]. The principle of deep learning for metasurface holography is exhibited in Fig. 10(d). Deep learning enables the rapid generation of holograms according to the target images. The information on each pixel of the hologram can be encoded on the metasurface by controlling the parameters of each meta-atom. The deep-learning-based method for optimizing the inverse design of metasurface holograms is shown in Fig. 10(f)[214]. 
By establishing the mapping between the physical responses and the geometrical parameters of meta-atoms, deep learning can enhance design accuracy and efficiency.
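The pixel-wise encoding step described above can be sketched as a lookup: each hologram pixel is assigned the meta-atom whose simulated phase response is closest to the required value. The linear width-to-phase model below is a placeholder assumption standing in for a real electromagnetic simulation or a trained network, and all numerical values are illustrative.

```python
import numpy as np

# Hypothetical meta-atom library: nanopillar width (nm) -> phase response (rad).
# A real library would come from full-wave simulation; this linear model is an
# assumed placeholder used only for illustration.
widths = np.linspace(80, 280, 64)                 # candidate widths (nm)
lib_phase = 2 * np.pi * (widths - 80) / 200.0     # assumed 0..2*pi coverage

def encode_hologram(holo_phase, widths, lib_phase):
    """Assign each hologram pixel the meta-atom with the nearest phase."""
    target = np.mod(holo_phase, 2 * np.pi).ravel()
    # Wrapped phase distance between every pixel and every library entry
    diff = np.abs(target[:, None] - lib_phase[None, :])
    diff = np.minimum(diff, 2 * np.pi - diff)
    idx = np.argmin(diff, axis=1)
    return widths[idx].reshape(holo_phase.shape)

holo = np.random.default_rng(1).uniform(0, 2 * np.pi, (32, 32))
layout = encode_hologram(holo, widths, lib_phase)  # width map for fabrication
```

The residual quantization error of such a lookup is bounded by half the phase spacing of the library, which is one reason dense, well-sampled meta-atom libraries (or learned inverse-design models) matter for hologram fidelity.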
5.2. Holography-inspired neural networks
In addition to AI technology empowering holography, holography has emerged as a significant avenue for implementing ONNs. A hologram enables the recording of the light field at a specific moment in space and time, and is later utilized to recover the 3D light field carrying optical information. This appealing capability makes holograms well-suited for capturing, storing, and transmitting data in ONNs [230]. The holographic implementation of fully connected neural networks engendered a great deal of research interest in the early stages. As indicated in Fig. 11(a), a holographic neural network involves multifaceted planar interconnection holograms for interconnecting a 2D array of neurons. Each neuron situated at a pixel of the hologram modulates the incident light originating from the preceding layer and then interconnects with all neurons in the following layer. Photorefractive crystals acting as real-time recording materials have been demonstrated to create holographic neural networks [231]. Such dynamic nonlinear crystals can establish adaptive connections between neurons arranged in planes and are inherently 3D devices that allow for the storage of an extremely large number of weights [232]. However, research interest in the further investigation of holographic neural networks is restricted because the difficulty of controlling analog weights poses challenges in reliably managing large ONNs [105]. Benefiting from major improvements in AI, optical modulators, and manufacturing technology, the idea of implementing neural networks with holography has been reignited.
Recent research works have concentrated on the experimental realization of diffractive surfaces in D2NNs via nanofabrication technology, including 3D printing and nanolithography. The phase-only holograms shown in Fig. 11(b)[68] are diffractive optical elements (DOEs), which can be designed by training a five-layer D2NN for specific inference tasks. Diffractive surfaces with 0.2 million neurons were 3D printed layer-by-layer with discrete distances in the axial direction. A compact D2NN with a neuron density of 625 million neurons per square centimeter was printed on imaging sensors by means of two-photon nanolithography, enabling the direct retrieval of Zernike-based pupil phase distributions [233]. However, these fabricated diffractive optical processors cannot be reprogrammed to support diverse functionalities. The use of programmable devices such as SLMs to encode holograms provides avenues for constructing a reconfigurable optoelectronic neuromorphic processor, as shown in Fig. 11(c)[125]. Such a reconfigurable processing unit comprises millions of neurons and permits the addition of nonlinearity to each layer of the D2NN, realizing the flexible design of different network architectures [69]. Many other diffractive optical processors have been constructed to serve various applications, such as biomedical imaging [234], secure surveillance [235], and telecommunication [236].
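A minimal forward pass through such diffractive layers can be sketched as alternating phase masks and free-space propagation, here via the angular-spectrum method. The wavelength, pixel pitch, layer spacing, and random (untrained) masks below are arbitrary illustrative choices; training the masks against a task loss is omitted.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    """Free-space propagation of a sampled complex field (angular spectrum)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)                    # spatial frequencies (1/m)
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    arg = 1.0 / wavelength ** 2 - fx2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    # Propagating components only; evanescent waves are suppressed
    transfer = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def d2nn_forward(field, phase_masks, wavelength, pitch, spacing):
    """Pass a field through successive phase-only diffractive layers."""
    for mask in phase_masks:
        field = field * np.exp(1j * mask)              # phase modulation by one layer
        field = angular_spectrum(field, wavelength, pitch, spacing)
    return np.abs(field) ** 2                          # intensity read out by a sensor

rng = np.random.default_rng(0)
masks = [rng.uniform(0, 2 * np.pi, (64, 64)) for _ in range(5)]  # untrained layers
inp = np.zeros((64, 64), dtype=complex)
inp[28:36, 28:36] = 1.0                                # simple input aperture
out = d2nn_forward(inp, masks, wavelength=0.633e-6, pitch=8e-6, spacing=3e-2)
```

Because each layer applies only a unit-modulus phase factor and the propagation kernel is unitary over propagating frequencies, the total optical power is conserved through the stack; inference corresponds to how that power is redistributed onto detector regions.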
Although holographic neural networks have advanced at an impressive rate, crucial challenges remain in scalability, computational accuracy, and the realization of nonlinear activation [196]. The design of diffractive photonic processors with large-scale neurons is restricted by the neuron size, current fabrication techniques, and numerical simulation. For example, the diffractive neurons in diffractive surfaces must have dimensions greater than 100 nm for modulating visible or infrared wavelengths, while the most advanced chip manufacturing approaches a 2 nm process. Compared with state-of-the-art transistors, diffractive neurons have larger dimensions, which brings limitations in developing scalable photonic processors. Moreover, each pixel in the designed DOE layers represents a neuron, and a reduction in neuron size leads to an increase in fabrication difficulty. Limited by computing speed and memory, it remains challenging to accurately model and design large-scale diffractive neural networks. The inference accuracy of diffractive photonic processors is not yet comparable to that of their existing electronic counterparts, owing to low model complexity and errors in design, fabrication, and measurement processes. The nonlinearity of DNNs has a significant impact on their computation accuracy and precision. The implementation of nonlinearity in diffractive optical processors generally uses sensors to approximate the rectified linear unit (ReLU) function [233]. Other nonlinear activation approaches involve the use of photorefractive crystals [123], image intensifiers [237], or photodiode arrays [238]. However, these strategies may increase the system complexity and energy consumption. Future explorations of superior material platforms may bring new solutions to optical nonlinearity in photonic processors.
5.3. Summary and perspectives
As discussed above, deep learning and metasurfaces are propelling further breakthroughs in holography, while the progress in holography is making it one of the dominant approaches for realizing ONNs [58]. The incorporation of deep learning algorithms and metaphotonics into holography is contributing to unlocking the potential of holography, paving the way for generating high-fidelity 3D images at high speed. Advanced deep-learning models are required to reduce the reliance of neural networks on the quantity and quality of training datasets. In relation to holography-inspired neural networks, the experimental implementation of holograms in D2NNs can rely on nanofabrication techniques or programmable devices. Notably, the development of a scalable photonic processor with competitive computation accuracy and inference capability compared with its electronic counterparts faces explicit limitations in terms of fabrication and optical nonlinearity. Advanced nanofabrication techniques are essential to realize the fabrication of smaller neuron sizes. In terms of using SLMs to implement diffractive surfaces, the pixel pitch, pixel number, modulation capability, and frame rate of current SLMs remain to be improved to enable higher scalability and reconfigurability of neural computing. Compared with SLMs, metasurfaces feature a subwavelength-scale pixel size and provide multidimensional light field modulation, higher resolution, and lower noise. With the development and maturity of nanofabrication technology, metasurface holograms have the potential to be a promising candidate for high-performance ONNs. In addition, the statistical inference accuracy and function approximation capability of diffractive photonic processors would benefit from effective optical nonlinearity realized in a compact, energy-efficient form.
6. Quantum photonics in intelligent photonics
Over the past few decades, quantum information science using quantum mechanics has been developed in order to realize enhanced power and functionalities in the encoding, transmission, and processing of information. As a promising quantum technology, quantum computing exploits the full complexity of a many-particle quantum wavefunction to solve computational problems that are intractable on classical computers. Nowadays, increasing effort is being devoted to the physical realization of quantum computing in many systems including photons, nuclear magnetic resonance, ions, and atoms, among which photonic qubits are especially ideal carriers for quantum information [14]. This is because photons exhibit remarkable immunity to the noise and decoherence that plague other quantum systems, and can also be encoded in various degrees of freedom [239]. Although early proposals found that single-qubit gates are easy to realize by manipulating photons, achieving the photon interactions required for universal multi-qubit control is challenging due to the technical difficulty of realizing nonlinear couplings between optical modes at adequate strengths. In 2001, a major breakthrough showed that linear optics is sufficient for efficient quantum information processing with photons [240]. Quantum gates based on linear optics can be realized with arbitrarily high fidelity through efficient architectures and fault-tolerant encoding. However, this quantum computing paradigm employing linear optical devices only permits probabilistic measurements and thus does not yield deterministic outcomes [241]. Due to the low efficiency of parametric down-conversion photon sources and bucket photon detectors, this technology lacks scalability in practical settings. Thus, great attention has been directed toward developing advanced single-photon sources, photon detectors, and quantum memories for scalable linear optical quantum computing [242].
As optical quantum computing promises exponential computational speed-ups in solving particular tasks that are unattainable using classical computers, there has been a surge of proposals on implementing quantum computers. In the past few years, substantial efforts have been put into developing photonic quantum computers and providing increasingly convincing proof of their quantum computational advantage [59], [243], [244]. In 2020, Pan’s group [59] successfully constructed a new light-based quantum computer named Jiuzhang that performs GBS with up to 76 detected photons and a 100-mode ultra-low-loss interferometer, achieving a quantum computational advantage using photons. On this basis, Jiuzhang 2.0 was reported in 2021 under the principle of the stimulated emission of squeezed photons; it allows phase-programmable GBS and obtains up to 113 photon-detection events out of a 144-mode photonic circuit [243]. The proposed quantum computers presented superior efficiency in solving sampling problems compared with classical computers. Then, Madsen et al. [244] reported a photonic processor named Borealis that achieved a quantum computational advantage with 216 independent quantum systems and could provide dynamic programmability on all implemented gates. In addition to the ongoing development of larger-scale, higher-fidelity, and fully programmable GBS, the effectiveness of these quantum devices in tackling real-world problems has been explored. In 2023, a noisy intermediate-scale photonic quantum computer was used to demonstrate the enhancement of GBS to stochastic algorithms in solving graph problems [245].
6.1. AI for quantum photonics
With the rapid progress of quantum technology, the increased complexity of photonic quantum systems is leading to the generation of large amounts of data, which strongly requires automated data processing. AI is fueling a new paradigm of discoveries in quantum technology. Various AI algorithms appear to be especially useful in addressing certain challenging quantum photonics problems. As for quantum optical measurement, a long data collection time is required to obtain complete data and ensure the accuracy of reconstruction. Compared with conventional statistical methods, AI permits more rapid and precise quantum measurements by exploiting self-learning features. In Fig. 12(a)[246], which presents an experimental configuration for the measurement and preparation of spatial qudit states, a supervised neural network is trained to filter quantum optics experimental data and enhance the fidelity of quantum state reconstruction. AI can also assist in measuring the quantum statistical properties of a light field to achieve intelligent quantum statistical imaging beyond the Abbe–Rayleigh resolution limit [247].
Another application of AI in quantum photonics is the deep-learning-assisted efficient design of multifunctional and multi-constrained quantum photonic devices. Fig. 12(b)[41] illustrates an inverse design procedure in quantum nanophotonics that adopts a fully connected neural network, extending the inverse design engineering of nanophotonics into the quantum domain to advance quantum device designs. Moreover, AI has emerged as a promising route for realizing the automatization of quantum experiments. The autonomous learning model depicted in Fig. 12(c)[248] can autonomously search for new quantum experiments, aiming at creating and manipulating intricate quantum states. As discussed above, AI with its real-time data processing and high-performance computing is injecting new vitality into quantum photonics, allowing for quantum optical measurement speed-ups, the efficient design of quantum devices, and autonomous learning for optimizing quantum experiments.
6.2. Quantum photonics for AI
Fueled by remarkable advancements in deep learning, there has been an extraordinary proliferation of AI applications in recent years. However, the rapidly increasing need for computing resources is inevitably surpassing the progress in classical computing hardware, which will impose limitations on future AI breakthroughs. It has been validated that quantum computers enable ultra-fast parallel computation, giving them incredible power to handle large-scale computations of complex tasks that cannot be feasibly solved classically within a reasonable time frame. Moreover, quantum computing has potential advantages over classical computing models in performing linear algebraic operations, drawing on nonclassical effects such as entanglement, superposition, and interference. However, qubit-based quantum computers are not well suited to continuous problems, because measurements of qubit-based quantum circuits yield discrete outcomes. A quantum neural network was recently constructed in a continuous-variable model. Continuous-variable neural networks encode quantum information in continuous degrees of freedom instead of in qubits. Quantum versions of various specialized models such as residual, convolutional, and recurrent neural networks have been presented [249]. With these ingredients, quantum machine learning can be considered to be a promising paradigm for alleviating the computational bottlenecks in future AI applications.
An example of a quantum ONN shown in Fig. 12(d)[250] merges the versatility of neural networks with the intricacies of optical quantum systems, outstripping classical networks in a broad range of tasks. The inputs of quantum ONNs are encoded as photonic Fock states $|n_1, n_2, \ldots, n_m\rangle$, which correspond to $n_i$ photons in the $i$th optical mode. $N$ photons in $m$ modes can be described by an $M$-dimensional complex vector of unit magnitude, where $M = \binom{N+m-1}{N}$. Single-site optical nonlinearities situated between consecutive layers comprise single-mode Kerr interactions applying a phase quadratic in the number of photons, which can be expressed as follows:

$$\Omega(\varphi) = \sum_{n=0}^{\infty} \mathrm{e}^{\mathrm{i} n(n-1)\varphi/2}\, |n\rangle\langle n|$$

where $\Omega$ represents the nonlinear layer, $\varphi$ denotes the effective nonlinear phase shift, and $|n\rangle$ and $\langle n|$ denote Dirac notation. The total transfer function $\Gamma$ comprising $K$ layers takes the following form:

$$\Gamma(\vec{\theta}) = U_K(\vec{\theta}_K)\,\Omega(\varphi)\,U_{K-1}(\vec{\theta}_{K-1})\cdots\Omega(\varphi)\,U_1(\vec{\theta}_1)$$

where $U_i(\vec{\theta}_i)$ represents an $m$-mode linear optical unitary parameterized by a vector of phase shifts $\vec{\theta}_i$, and $\vec{\theta} = (\vec{\theta}_1, \vec{\theta}_2, \ldots, \vec{\theta}_K)$ is the concatenation of the phase-shift vectors of all layers.
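The size of the Fock-state space and the diagonal form of the single-mode Kerr layer can be sketched numerically; the standard bosonic counting formula C(N+m-1, N) gives the number of ways to place N photons in m modes. The Fock-space truncation and phase value below are illustrative assumptions.

```python
import numpy as np
from math import comb

def fock_dim(n_photons, n_modes):
    """Dimension of the N-photon, m-mode Fock space: C(N + m - 1, N)."""
    return comb(n_photons + n_modes - 1, n_photons)

def kerr_layer(phi, cutoff):
    """Single-mode Kerr unitary on a truncated Fock basis.

    Diagonal entries exp(i * n * (n - 1) * phi / 2) apply a phase quadratic
    in the photon number n, leaving the vacuum and single-photon states
    unchanged.
    """
    n = np.arange(cutoff)
    return np.diag(np.exp(1j * n * (n - 1) * phi / 2.0))

M = fock_dim(3, 4)                 # 3 photons in 4 modes
K = kerr_layer(0.3, cutoff=6)      # truncated nonlinear layer
```

Even this toy calculation shows why simulating such networks classically is hard: the state-space dimension M grows combinatorially with the photon number and mode count.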
Later, a quantum optical convolutional neural network was constructed to gain a substantial increase in computational efficiency for future computer vision applications [251]. Quantum state tomography is a technique for reconstructing quantum information from an unknown quantum state by measuring its copies; it is essential for characterizing the generation and preservation of quantum states. The scheme of quantum state tomography illustrated in Fig. 12(e)[252] can accurately determine the phase parameter of a qubit state. The integration of ONNs and quantum information is contributing to the implementation of high-fidelity quantum operations. In addition to a quantum ONN with idealized components, the imperfect quantum ONN circuit depicted in Fig. 12(f)[253] was investigated, providing a guide for emerging large-scale quantum technologies. It has been demonstrated that quantum computing can meet the urgent need for AI computing resources, unlocking new possibilities for evolving performance-enhanced AI systems.
Efforts have been made to investigate the potential of quantum photonic computing in machine-learning tasks, including the acceleration of existing machine-learning algorithms [254] and variational quantum circuits [255]. However, quantum photonic computing lacks feasible benchmarks at realistic scales, with theory serving as the primary tool used to evaluate its potential for problem-solving. In contrast, powerful machine-learning algorithms have been applied to practical problems but remain theoretically difficult to explain. Thus, assessing the practical power of quantum computers for machine learning appears to be challenging [256]. Arbitrary quantum neural network models generally require a greater amount of information than classical ones. The focus on quantum advantage is often confined to a biased subset of data, models, and theoretical methods. Unless trained on quantum data using quantum computers, quantum neural networks must be initialized or designed for specific tasks, owing to their limited capacity [257]. In addition, the inputs for training machine-learning algorithms are increasingly large, making them difficult for early-stage quantum computers to process. Furthermore, fundamental problems stemming from the human domain often involve more complex mathematical structures than the tasks tackled by quantum computers. Effective strategies, such as exploring suitable building blocks for quantum models, bridging quantum computing and classical learning theory, and developing quantum software solutions for experimental expansion, can help to explore the potential benefits of quantum computing for machine learning.
Another compelling platform is integrated quantum photonics combined with wafer-scale fabrication processes, which utilizes single photons of light as the carriers of quantum information [62]. Integrated quantum photonics employs optical waveguides to guide and direct photons, enabling the generation, manipulation, and detection of quantum states of light [258]. Since the demonstration of the first integrated quantum photonic logic gate in 2008 [259], integrated quantum photonics has advanced in scale and complexity, with the potential to develop programmable devices approaching thousands of components in millimeter-scale footprints with the integrated generation of multiphoton states.
Many efforts are currently being focused on achieving a large-scale integration of quantum photonic circuits with high performance. Arrazola et al. [260] fabricated a programmable nanophotonic chip for executing many-photon quantum circuit operations. The reported device enabled dynamic programmability, scalability to hundreds of modes and photons, and access to a class of quantum circuits that cannot be efficiently simulated by classical hardware. Later, a graph-based quantum photonic device in large-scale integrated nanophotonic circuits was fabricated on an eight-inch (1 inch = 2.54 cm) silicon-on-insulator wafer with the integration of 2500 components, showcasing arbitrary programmability, high architectural modularity, and massive manufacturing scalability [261]. Hybrid integrated quantum photonic devices assemble optical elements produced from different material systems, exhibiting great potential to overcome the limitations of monolithic integration [262]. Although integrated photonics can provide quantum information processing with required technological functionalities and scaling, integrated photonic devices have limitations in fidelity and on-chip state control and analysis. When scaling to the wafer scale, processing error and variability also severely impact the precision of integrated quantum photonic circuits [62]. To further improve the performance of the integrated quantum photonic platform, the integration of highly programmable integrated quantum photonic hardware and machine-learning algorithms may provide a solution to compensate for fabrication imperfections [263].
6.3. Summary and perspectives
Photonics is a critical enabler of quantum technology. The linear optical approach provides a viable means for efficient quantum computing with photons, albeit with probabilistic results and limited scalability. The emergence of photonic quantum computers, such as Jiuzhang and Borealis, showcases the potential of quantum photonics to revolutionize computational speed and efficiency. The intersection of AI and quantum photonics facilitates great advances in both fields. Introducing AI algorithms to quantum systems enables rapid and precise realization of quantum measurements, quantum device design, and automated quantum experiments. Quantum photonic computing also offers new avenues for implementing neural networks and machine-learning algorithms. Challenges remain in quantum machine learning owing to the limited capacity of quantum neural networks, large inputs for many applications, and complex mathematical structures of fundamental problems. Moreover, integrated quantum photonics offers the promise of a platform with functional scalability, stability, and integrability for applications in quantum communications and information processing. Tremendous progress in optical quantum computing will gradually drive photonic quantum computers from proof-of-principle demonstrations toward practical realization.
7. Applications of intelligent photonics
AI shows great potential for addressing challenging tasks that underlie scientific problems, primarily thanks to the availability of large datasets, the support of massively parallel computing hardware, and coupling with multilayered neural networks capable of identifying essential features. Advances in AI are propelling a paradigm shift in physics and optical engineering, and incorporating photonics into AI models enables the handling of complicated tasks in a timely and energy-conserving manner. Intelligent photonics represents a research area at the convergence of physics, engineering, and computer science. Concomitant with the substantial progress being made in deep learning algorithms and photonics technology, intelligent photonics technology is being extensively applied in a wide range of fields, including the metaverse, biomedicine, automatic driving, advanced manufacturing, optical communications, and astronomical observation. Intelligent photonics-related applications are exhibited in Fig. 13.
7.1. The metaverse
AI is transforming metaverse platforms by providing a more immersive and interactive experience for users, from creating realistic landscapes to allowing interactions through natural language. Metaverse-related technologies such as holographic displays and VR/AR facilitate the convergence of virtual and physical reality, making interaction between individuals and virtual elements possible. Recent research has demonstrated that the deep-learning-enabled holographic paradigm makes 3D vectorial holography [264] and dynamic holography practical [226], [265]. The use of deep learning enables compact, computationally efficient, low-power holographic display systems that can project high-resolution images over a large FOV [266]. The future importance of VR/AR as a next-generation display platform lies in its potential to facilitate deeper human-digital interactions, not only in the emerging digital economy of the metaverse but also in the ongoing digitalization of industry. AI empowers VR/AR headsets with real-time, natural-looking, and high-resolution performance. Metasurface-enabled VR/AR displays are being explored to overcome the bulky form factor and limited performance of current VR/AR devices [139]. Thus, as discussed above, intelligent photonics technology is steadily evolving VR/AR technology to be lighter, smaller, less power-hungry, and more intuitive for users.
7.2. Biomedicine
In the field of biomedicine, intelligent photonics technology offers widespread possibilities for improved biomedical diagnosis and therapy. For example, all-optical diffractive networks can provide instantaneous image reconstruction and intelligent image analysis to improve image-based diagnostics and image-guided therapy [237], [267]. The microscopic imaging of tissue samples assists in the diagnosis of various diseases, serving as a vital tool for pathology and biological sciences. Intelligent optical microscopy via deep learning can significantly enhance spatial resolution and imaging speed [85], [268], [269], thereby elevating the efficiency of healthcare.
7.3. Automatic driving
As for AI-assisted transportation, photonics is currently pervasive within the supply chain of the automotive industry, transitioning from mere lighting functions to providing advanced technology for sensing, imaging, and display. For example, automatic driving utilizes arrays of high-speed optical sensors to monitor the surroundings and leverages machine intelligence for real-time decision-making [270], enabling panoramic vision, emergency obstacle avoidance, automatic parking, adaptive cruise control, and so forth. With the support of ONNs, the features of different input signals can be extracted for subsequent intelligent analysis [33]. Hence, the automatic detection, classification, and tracking of targets with high speed and low energy consumption can be achieved [184], [271], which is essential for automatic driving.
7.4. Advanced manufacturing
In the field of advanced manufacturing, deep learning provides an efficient way to design photonic structures, metamaterials, and devices, showing great potential to enable large-scale photonic designs with unparalleled optical functionalities [5]. Deep learning is also transforming the design and manufacturing of precision instruments such as smart cameras [42], sensors [96], [272], and fiber lasers [273]. These intelligent photonics systems provide both the sensing and the measurement functionalities required to achieve smart industrial environments. The progress of intelligent manufacturing is contributing to evolving photonics systems for extensive applications.
7.5. Optical communications
In the field of optical communications, AI-based techniques are contributing toward anticipating and optimizing the performance of optical communication systems and networks. Photonics-enabled optical communication systems ensure the security of data transfer. For example, OAM detection underpins advances in optical communications. The utilization of intelligent optical–electronic processors promises a powerful and compact platform for OAM-based information processing in a rapid, accurate, and robust manner [136].
7.6. Astronomical observation
For astronomical observation, adaptive optics realizes high-quality imaging through real-time detection and compensation for distorted wavefronts, playing an important role in ground-based telescopes. To overcome the bottlenecks encountered by classical adaptive optics systems, the combination of diffractive neural networks with adaptive optics technology enhances the real-time performance and noise immunity in adaptive optics systems [274].
7.7. Outlook
Overall, intelligent photonics is bringing transformative advances in numerous aspects of our lives, encompassing entertainment, healthcare, transportation, manufacturing, communication, astronomy, and other fields. Recent years have witnessed a considerable proliferation of research in intelligent photonics. With continuous breakthroughs, intelligent photonics will gradually permeate industrial production and everyday life. Intelligent photonics technology seems poised to accomplish complicated tasks efficiently in diverse industries and make our daily lives more convenient. It is foreseeable that intelligent photonics will be a key enabling technology in the coming decades.
8. Discussion and prospect
As an emerging area of interdisciplinary research and innovation, intelligent photonics has taken significant strides forward in recent years, having a transformative influence on real-world applications. The fruitful interplay between AI and photonics is inspiring a surge of research interest in intelligent photonics. Photonic systems are responsible for creating and transmitting large volumes of data, whereas AI extracts knowledge and makes inferences from massive datasets. In photonics, AI is concentrated on leveraging deep learning to optimize the design of photonic structures, materials, devices, and systems, and to perform complicated optical data processing and analysis. The incorporation of AI in photonics technology will continually unlock new avenues toward the realization of an unparalleled speed-up in handling optical issues and will reveal unique optical effects and functionalities. The extraordinary power of AI has caused a paradigm shift in the photonics community, ushering photonics technology into an even brighter era. While great success has been achieved in applying AI technology to photonics, challenges such as the burden of data collection, poor network generalization ability, lack of a standard paradigm for network design and training, and deficiency in interpretability need to be addressed [4].
The conventional electronic computing approaches used in AI are gradually approaching their performance limits and are struggling to keep up with the explosive growth of data available for processing. Future AI applications urgently require powerful computing power, which has ignited a resurgence of photonic computing. It is worth noting that metaphotonics, holography, and quantum photonics exhibit huge potential for alleviating the burden on electronic AI through their outstanding optical properties. The application of photonics in AI tends to focus on the development of photonic processors for overcoming the challenges of computing speed and energy consumption. Taking advantage of photonic concepts, components, and materials to construct all-optical platforms for performing AI algorithms will lead to a more intelligent era.
Although the use of photons for communications has achieved great success, it remains extremely challenging to develop general-purpose photonic computing systems comparable to the advanced electronic systems in today's dominant computing platforms. The difficulty lies in devising a computing framework that fully leverages the advantages of optics. One challenge is that current ONNs are primarily trained electronically in silico to obtain the design parameters for their physical realization; only the inference tasks are then completed optically. Such in silico training inevitably requires a long training time, exhibits limited scalability, and results in low inference accuracy. Recent explorations in network training have made substantial progress toward the photonic implementation of backpropagation for in situ training [125], [275]. Further investigation is required to develop network architectures that achieve real-time training and perform both training and inference optically. Another challenge is the implementation of nonlinear activation in all-ONN frameworks. The nonlinear activation employed in photonic neuromorphic processors differs from that traditionally used in deep learning, and this divergence makes the training process slow to converge and unstable. Training schemes such as adaptive initialization [276] and variance-preserving initialization [277] have been investigated to overcome these issues.
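The in silico-to-physical gap described above can be illustrated with a toy numerical sketch. Everything here is an illustrative assumption rather than a model of any specific device: a 4-mode linear "optical" layer, a least-squares task, and a 5% multiplicative component error at deployment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "in silico" training of a single linear optical layer: the weights
# are learned digitally and would then be mapped onto physical elements
# (e.g., an interferometer mesh) for optical inference.
X = rng.normal(size=(64, 4))                      # 64 examples, 4 input modes
A_true = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]])
T = X @ A_true                                    # regression targets

W = rng.normal(scale=0.1, size=(4, 2))            # trainable layer
lr = 0.1
for _ in range(1000):
    grad = X.T @ (X @ W - T) / len(X)             # least-squares gradient
    W -= lr * grad

train_loss = np.mean((X @ W - T) ** 2)

# "Physical deployment": the fabricated device realizes W only up to
# component-level error, modeled here as a 5% multiplicative perturbation.
W_phys = W * (1.0 + rng.normal(scale=0.05, size=W.shape))
deploy_loss = np.mean((X @ W_phys - T) ** 2)

print(train_loss, deploy_loss)  # deployment error exceeds the in silico error
```

The gap between `train_loss` and `deploy_loss` is the motivation for in situ training: when gradients are measured on the physical system itself, component errors are absorbed into the learned parameters instead of degrading inference.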
In addition, nonlinear activation, an indispensable part of neural networks, is difficult to implement in the optical domain, because optical nonlinearity is relatively weak and typically requires high optical power to induce, leading to high energy consumption. Alternative solutions such as efficient electro-optical conversion mechanisms and diverse optical effects based on light-matter interactions have been reported. For example, the ReLU activation function commonly used in DNNs can be realized in ONNs by means of photodetector/sensor arrays, the inverted filling light-absorption mechanism of quantum dots [278], or resonance between a probe pulse and a ring resonator [61]. Notably, the difficulty of implementing optical nonlinear activation functions can be sidestepped with a hybrid optical-electronic computing approach. However, such hybrid architectures lag behind ideal all-optical architectures in processing speed, energy consumption, and throughput. It is therefore essential to explore optical nonlinearities that combine low energy consumption, ease of realization, high speed, and versatile functional forms.
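The divergence between optical and electronic activations can be seen by comparing their transfer functions numerically. The forms below (a saturable-absorption transmission curve and a sinusoidal, Mach-Zehnder-style electro-optic response) are common modeling choices assumed for illustration, not the measured responses of the devices cited above.

```python
import numpy as np

def relu(x):
    """Electronic baseline used in most DNNs."""
    return np.maximum(0.0, x)

def saturable_absorption(intensity, i_sat=1.0):
    """Transmitted intensity saturates as the input intensity grows."""
    return intensity / (1.0 + intensity / i_sat)

def electro_optic(drive, bias=0.25 * np.pi):
    """Sinusoidal Mach-Zehnder-style response to an applied drive signal."""
    return np.sin(drive + bias) ** 2

x = np.linspace(0.0, 4.0, 9)
print(relu(x - 1.0))              # piecewise-linear and unbounded
print(saturable_absorption(x))    # monotonic but saturating
print(electro_optic(x))           # bounded and periodic, unlike ReLU
```

Saturation and periodicity are exactly the properties that make gradients vanish or oscillate during training, which is why initialization schemes tailored to these activations have been proposed.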
Finally, on-chip integrated ONNs with high energy efficiency and high throughput offer a potential photonic computing platform for extensive research and commercial applications. However, such architectures are costly and technically demanding. Open problems include how to integrate optical operators with diverse functionalities onto a single chip and how to make the on-chip operators flexible and reconfigurable. Consequently, before photonic computing systems mature into a practical technology, urgent challenges must be solved: appropriate optical training mechanisms, highly efficient optical nonlinear activation, and scalable photonic processors that are easy to integrate.
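One reason on-chip integration is demanding is that an N-mode linear operator must be decomposed into many elementary 2×2 interferometers. The sketch below composes MZI-like 2×2 blocks into a rectangular mesh and checks that the assembled operator stays unitary; the specific 2×2 parameterization is one common convention, assumed here for illustration.

```python
import numpy as np

def mzi(theta, phi):
    # One common 2x2 parameterization of an MZI (internal phase theta,
    # external phase phi); exact conventions vary between papers.
    return np.exp(1j * theta / 2) * np.array(
        [[np.exp(1j * phi) * np.cos(theta / 2), -np.sin(theta / 2)],
         [np.exp(1j * phi) * np.sin(theta / 2),  np.cos(theta / 2)]])

def embed(u2, n, k):
    # Place a 2x2 block on modes (k, k+1) of an n-mode identity.
    U = np.eye(n, dtype=complex)
    U[k:k + 2, k:k + 2] = u2
    return U

rng = np.random.default_rng(1)
n = 4
U = np.eye(n, dtype=complex)
# Rectangular mesh: alternating layers of MZIs on even/odd mode pairs.
for layer in range(n):
    for k in range(layer % 2, n - 1, 2):
        U = embed(mzi(*rng.uniform(0.0, 2.0 * np.pi, size=2)), n, k) @ U

print(np.allclose(U.conj().T @ U, np.eye(n)))  # True: the mesh stays unitary
```

The component count grows as O(N^2), which is the scaling pressure behind the integration and reconfigurability challenges noted above.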
Achieving a practical photonic computing system requires extensive cross-disciplinary integration across photonics, computer science, engineering, materials science, and related fields. Cutting-edge photonics technologies such as metaphotonics, holography, and quantum photonics are transforming current AI computing systems [279], [280]. Until all-optical computing architectures [281] and general-purpose quantum computers mature, developing hybrid optical-electronic computing systems with high speed, low energy consumption, and high throughput is an essential path to alleviating current AI bottlenecks. Such hybrid computing modes combine the advantages of optical and electronic computing and can be regarded as a transitional stage on the journey toward practical all-optical architectures. From a long-term perspective, the development of photonic computing should focus on exploiting photons to the fullest and assigning them as much of the work as possible in AI systems.
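A minimal sketch of the hybrid optical-electronic mode discussed here: a fixed random complex projection stands in for the optical linear frontend (square-law photodetection supplies the nonlinearity for free), and a small digital readout plays the electronic backend. All sizes and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def optical_frontend(x, w_opt):
    # The heavy linear transform happens "in optics"; square-law
    # photodetection of the output field yields nonnegative intensities.
    return np.abs(w_opt @ x) ** 2

def electronic_backend(h, w_e, b_e):
    # Lightweight digital readout applied to the detected intensities.
    return w_e @ h + b_e

w_opt = rng.normal(size=(16, 64)) + 1j * rng.normal(size=(16, 64))
w_e = rng.normal(scale=0.1, size=(3, 16))
b_e = np.zeros(3)

x = rng.normal(size=64)                    # input field amplitudes
h = optical_frontend(x, w_opt)             # 64 -> 16 optical compression
scores = electronic_backend(h, w_e, b_e)   # 3 class scores, computed digitally
print(scores.shape)
```

The division of labor is the point: the O(N^2) matrix multiply is delegated to the optical path, while only the small readout is trained and executed electronically.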
In summary, the profound fusion of AI and photonics facilitates mutual advancement, giving rise to the emerging interdisciplinary field of intelligent photonics. Intelligent photonics has made significant progress in recent years, showing outstanding performance and huge potential for practical applications. AI, with its efficient data processing and inference capabilities, can be utilized in forward modeling and inverse design for photonics. Conversely, photonics-based devices and systems can be leveraged for the physical implementation of neural networks, achieving high speed and low energy consumption in AI computing systems. In the pursuit of general-purpose photonic computing, continuous research is devoted to overcoming the existing challenges and developing high-performance ONN architectures. In this paper, we have reviewed recent progress in intelligent photonics from the perspectives of deep learning, metaphotonics, holography, and quantum photonics. We have also described applications of intelligent photonics in numerous fields, such as the metaverse, biomedicine, automatic driving, advanced manufacturing, optical communications, and astronomical observation. Challenges and prospects for the future development of intelligent photonics have also been discussed. Advances in intelligent photonics are expected to promote further progress in photonics technology and to endow AI systems with more powerful computing performance, thereby shaping an intelligent future for human life.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (62035003 and 62235009).
Compliance with ethics guidelines
Danlin Xu, Yuchen Ma, Guofan Jin, and Liangcai Cao declare that they have no conflict of interest or financial conflicts to disclose.
Lundstrom M. Moore's law forever? Science 2003;299(5604):210-211.
[8] Li C, Zhang X, Li JW, Fang T, Dong XW. The challenges of modern computing and new opportunities for optics. PhotoniX 2021;2:20.
[9] AI and compute [Internet]. San Francisco: OpenAI; 2018 May 16 [cited 2024 Jul 7]. Available from: https://openai.com/blog/ai-and-compute.
[10] Dhar P. The carbon impact of artificial intelligence. Nat Mach Intell 2020;2(8):423-425.
[11] Akhoon MS, Suandi SA, Alshahrani A, Saad AMHY, Albogamy FR, Abdullah MZB, et al. High performance accelerators for deep neural networks: a review. Expert Syst 2022;39:e12831.
Han CH, Zheng Z, Shu HW, Jin M, Qin J, Chen RX, et al. Slow-light silicon modulator with 110-GHz bandwidth. Sci Adv 2023;9(42):eadi5339.
[21] Jørgensen AA, Kong D, Henriksen MR, Klejs F, Ye Z, Helgason OB, et al. Petabit-per-second data transmission using a chip-scale microcomb ring resonator source. Nat Photonics 2022;16(11):798-802.
[22] Wade M, Anderson E, Ardalan S, Bhargava P, Buchbinder S, Davenport ML, et al. TeraPHY: a chiplet technology for low-power, high-bandwidth in-package optical I/O. IEEE Micro 2020;40:63-71.
[23] Liu W, Li M, Guzzon RS, Norberg EJ, Parker JS, Lu MZ, et al. A fully reconfigurable photonic integrated signal processor. Nat Photonics 2016;10(3):190-195.
[24] Xu ZF, Tang BS, Zhang XY, Leong JF, Pan JM, Hooda S, et al. Reconfigurable nonlinear photonic activation function for photonic neural network based on non-volatile opto-resistive RAM switch. Light Sci Appl 2022;11:288.
[25] Sund PI, Lomonte E, Paesani S, Wang Y, Carolan J, Bart N, et al. High-speed thin-film lithium niobate quantum processor driven by a solid-state quantum emitter. Sci Adv 2023;9(19):eadg7268.
[26] PolarID: enabling the next level of biometric security [Internet]. Boston: Metalenz; [cited 2024 Jul 7]. Available from: https://metalenz.com/polareyes-polarization-imaging-system/polar-id/.
[27] Willomitzer F, Rangarajan PV, Li FQ, Balaji MM, Christensen MP, Cossairt O. Fast non-line-of-sight imaging with high-resolution and wide field of view using synthetic wavelength holography. Nat Commun 2021;12:6647.
[28] Deng YH, Gu YC, Liu HL, Gong SQ, Su H, Zhang ZJ, et al. Gaussian boson sampling with pseudo-photon-number-resolving detectors and quantum computational advantage. Phys Rev Lett 2023;131(15):150601.
[29] Pan W, Zheng JY, Wang L, Luo Y. A future perspective on in-sensor computing. Engineering 2022;14:19-21.
[30] Cao JF, Yip HC, Chen YY, Scheppach M, Luo XB, Yang HZ, et al. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun 2023;14:6676.
[31] Baker SB, Xiang W, Atkinson I. Internet of Things for smart healthcare: technologies, challenges, and opportunities. IEEE Access 2017;5:26521-26544.
[32] Sludds A, Bandyopadhyay S, Chen ZJ, Zhong ZZ, Cochrane J, Bernstein L, et al. Delocalized photonic deep learning on the internet's edge. Science 2022;378(6617):270-276.
[33] Fu TZ, Zang YB, Huang YY, Du ZM, Huang HH, Hu CY, et al. Photonic machine learning with on-chip diffractive optics. Nat Commun 2023;14:70.
[34] Alagappan G, Ong JR, Yang ZF, Ang TYL, Zhao WJ, Jiang Y, et al. Leveraging AI in photonics and beyond. Photonics 2022;9(2):75.
[35] Rivenson Y, Zhang YB, Günaydın H, Teng D, Ozcan A. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light Sci Appl 2017;7:17141.
[36] Krenn M, Landgraf J, Foesel T, Marquardt F. Artificial intelligence and machine learning for quantum technologies. Phys Rev A 2023;107(1):010101.
[37] Kiarashinejad Y, Abdollahramezani S, Zandehshahvar M, Hemmatyar O, Adibi A. Deep learning reveals underlying physics of light–matter interactions in nanophotonic devices. Adv Theory Simul 2019;2(9):1900088.
[38] Barbastathis G, Ozcan A, Situ GH. On the use of deep learning for computational imaging. Optica 2019;6(8):921-943.
[39] Zhu XX, Tuia D, Mou L, Xia GS, Zhang LP, Xu F, et al. Deep learning in remote sensing: a comprehensive review and list of resources. IEEE Geosci Remote Sens Mag 2017;5(4):8-36.
[40] Ma Q, Liu C, Xiao Q, Gu Z, Gao XX, Li LL, et al. Information metasurfaces and intelligent metasurfaces. Photon Insights 2022;1(1):R01.
[41] Liu GX, Liu JF, Zhou WJ, Li LY, You CL, Qiu CW, et al. Inverse design in quantum nanophotonics: combining local-density-of-states and deep learning. Nanophotonics 2023;12(11):1943-1955.
[42] Brady DJ, Fang L, Ma Z. Deep learning for camera data acquisition, control, and image estimation. Adv Opt Photonics 2020;12(4):787-846.
[43] Wang D, Zhang M. Artificial intelligence in optical communications: from machine learning to deep learning. Front Comms Net 2021;2:656786.
[44] Mengu D, Sakib Rahman MS, Luo Y, Li JX, Kulce O, Ozcan A. At the intersection of optics and deep learning: statistical inference, computing, and inverse design. Adv Opt Photonics 2022;14(2):209-290.
[45] Wright LG, Onodera T, Stein MM, Wang TY, Schachter DT, Hu Z, et al. Deep physical neural networks trained with backpropagation. Nature 2022;601(7894):549-555.
[46] Momeni A, Rahmani B, Malléjac M, del Hougne P, Fleury R. Backpropagation-free training of deep physical neural networks. Science 2023;382(6676):1297-1303.
Jiao SM, Liu JW, Zhang LW, Yu FH, Zuo GM, Zhang JM, et al. All-optical logic gate computing for high-speed parallel information processing. Opto Electron Sci 2022;1(9):220010.
[49] Wang J, Long Y. On-chip silicon photonic signaling and processing: a review. Sci Bull 2018;63(19):1267-1310.
[50] Lin X, Liu JP, Hao JY, Wang K, Zhang YY, Li H, et al. Collinear holographic data storage technologies. Opto Electron Adv 2020;3(3):190004.
[51] Ding XM, Zhao ZH, Xie P, Cai DY, Meng FY, Wang C, et al. Metasurface-based optical logic operators driven by diffractive neural networks. Adv Mater 2023;36(9):2308993.
Feldmann J, Youngblood N, Karpov M, Gehring H, Li X, Stappers M, et al. Parallel convolutional processing using an integrated photonic tensor core. Nature 2021;589(7840):52-58.
[73] Meng XY, Zhang GJ, Shi NN, Li GY, Azana J, Capmany J, et al. Compact optical convolution processing unit based on multimode interference. Nat Commun 2023;14:3000.
[74] Cheng JW, Xie YZ, Liu Y, Song JJ, Liu XY, He ZM, et al. Human emotion recognition with a microcomb-enabled integrated optical neural network. Nanophotonics 2023;12(20):3883-3894.
[75] Xu XY, Tan MX, Corcoran B, Wu JY, Boes A, Nguyen TG, et al. 11 TOPS photonic convolutional accelerator for optical neural networks. Nature 2021;589(7840):44-51.
[76] Chen YT, Nazhamaiti M, Xu H, Meng Y, Zhou TK, Li GP, et al. All-analog photoelectronic chip for high-speed vision tasks. Nature 2023;623(7985):48-57.
[77] Bouvier M, Valentian A, Mesquida T, Rummens F, Reyboz M, Vianello E, et al. Spiking neural networks hardware implementations and challenges: a survey. ACM J Emerg Technol Comput Syst 2019;15(2):1-35.
[78] Jouppi NP, Young C, Patil N, Patterson D, Agrawal G, Bajwa R, et al. In-datacenter performance analysis of a tensor processing unit. In: Proceedings of the 44th Annual International Symposium on Computer Architecture; 2017 Jun 24–28; Toronto, ON, Canada. New York City: IEEE; 2017. p. 1–12.
[79] NVIDIA A100 tensor core GPU [Internet]. Santa Clara: NVIDIA Corporation; [cited 2024 Jul 7]. Available from: https://www.nvidia.com/en-us/data-center/a100/.
[80] Shrestha A, Mahmood A. Review of deep learning algorithms and architectures. IEEE Access 2019;7:53040-53065.
[81] Hinton GE, Salakhutdinov RR. Reducing the dimensionality of data with neural networks. Science 2006;313(5786):504-507.
[82] De Lima TF, Shastri BJ, Tait AN, Nahmias MA, Prucnal PR. Progress in neuromorphic photonics. Nanophotonics 2017;6(3):577-599.
[83] Pellizzari CJ, Bate TJ, Donnelly KP, Buzzard GT, Bouman CA, Spencer MF. Coherent plug-and-play artifact removal: physics-based deep learning for imaging through aberrations. Opt Lasers Eng 2023;164:107496.
[84] Vishniakou I, Seelig JD. Wavefront correction for adaptive optics with reflected light and deep neural networks. Opt Express 2020;28(10):15459-15471.
Qiao C, Li D, Guo YT, Liu C, Jiang T, Dai QH, et al. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat Methods 2021;18(2):194-202.
[87] Li S, Deng M, Lee J, Sinha A, Barbastathis G. Imaging through glass diffusers using densely connected convolutional networks. Optica 2018;5(7):803-813.
[88] Li YZ, Xue YJ, Tian L. Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media. Optica 2018;5(10):1181-1190.
[89] Wang KQ, Song L, Wang CT, Ren ZB, Zhao GY, Dou JZ, et al. On the use of deep learning for phase recovery. Light Sci Appl 2024;13:4.
[90] Sinha A, Lee J, Li S, Barbastathis G. Lensless computational imaging through deep learning. Optica 2017;4(9):1117-1125.
[91] Wu JC, Cao LC, Barbastathis G. DNN-FZA camera: a deep learning approach toward broadband FZA lensless imaging. Opt Lett 2021;46(1):130-133.
[92] Wang CH, Ma J, Feng YD, Xu XY, Zhang TY, Cheng K, et al. Error-free long-lifespan optical storage enhanced by deep learning. Laser Photonics Rev 2024;18(6):2301042.
[93] Wiecha PR, Lecestre A, Mallet N, Larrieu G. Pushing the limits of optical information storage using deep learning. Nat Nanotechnol 2019;14(3):237-244.
Zhou HL, Dong JJ, Cheng JW, Dong WC, Huang CR, Shen YC, et al. Photonic matrix multiplication lights up photonic accelerator and beyond. Light Sci Appl 2022;11:30.
[102] Farhat NH, Psaltis D, Prata A, Paek E. Optical implementation of the Hopfield model. Appl Opt 1985;24:1469.
[103] Liu J, Wu QH, Sui XB, Chen Q, Gu GH, Wang LP, et al. Research progress in optical neural networks: theory, applications and developments. PhotoniX 2021;2:5.
[104] Shastri BJ, Tait AN, Ferreira de Lima T, Pernice WHP, Bhaskaran H, Wright CD, et al. Photonics for artificial intelligence and neuromorphic computing. Nat Photonics 2021;15(2):102-114.
[105] Wetzstein G, Ozcan A, Gigan S, Fan SH, Englund D, Soljacic M, et al. Inference in artificial intelligence with deep optics and photonics. Nature 2020;588(7836):39-47.
Wilkes CM, Qiang X, Wang J, Santagati R, Paesani S, Zhou X, et al. 60 dB high-extinction auto-configured Mach–Zehnder interferometer. Opt Lett 2016;41(22):5318-5321.
[108] Mourgias-Alexandris G, Moralis-Pegios M, Tsakyridis A, Simos S, Dabos G, Totovic A, et al. Noise-resilient and high-speed deep learning with coherent silicon photonics. Nat Commun 2022;13:5572.
[109] Kirtas M, Oikonomou A, Passalis N, Mourgias-Alexandris G, Moralis-Pegios M, Pleros N, et al. Quantization-aware training for low precision photonic neural networks. Neural Netw 2022;155:561-573.
[110] Giamougiannis G, Tsakyridis A, Moralis-Pegios M, Pappas C, Kirtas M, Passalis N, et al. Analog nanophotonic computing going practical: silicon photonic deep learning engines for tiled optical matrix multiplication with dynamic precision. Nanophotonics 2023;12(5):963-973.
[111] Zhu HH, Zou J, Zhang H, Shi YZ, Luo SB, Wang N, et al. Space-efficient optical computing with an integrated chip diffractive neural network. Nat Commun 2022;13:1044.
[112] Tait AN, De Lima TF, Zhou E, Wu AX, Nahmias MA, Shastri BJ, et al. Neuromorphic photonic networks using silicon photonic weight banks. Sci Rep 2017;7:7430.
[113] Tait AN, Nahmias MA, Shastri BJ, Prucnal PR. Broadcast and weight: an integrated network for scalable photonic spike processing. J Lightwave Technol 2014;32(21):4029-4041.
Zheng MJ, Shi L, Zi J. Optimize performance of a diffractive neural network by controlling the Fresnel number. Photonics Res 2022;10:2667-2676.
[119] Chen H, Feng JA, Jiang MW, Wang YQ, Lin J, Tan JB, et al. Diffractive deep neural networks at visible wavelengths. Engineering 2021;7(10):1483-1491.
[120] Luo Y, Mengu D, Yardimci NT, Rivenson Y, Veli M, Jarrahi M, et al. Design of task-specific optical systems using broadband diffractive neural networks. Light Sci Appl 2019;8:112.
[121] Li JX, Gan TY, Bai BJ, Luo Y, Jarrahi M, Ozcan A. Massively parallel universal linear transformations using a wavelength-multiplexed diffractive optical network. Adv Photonics 2023;5(1):016003.
[122] Duan ZY, Chen H, Lin X. Optical multi-task learning using multi-wavelength diffractive deep neural networks. Nanophotonics 2023;12(5):893-903.
[123] Yan T, Wu JM, Zhou TK, Xie H, Xu F, Fan JT, et al. Fourier-space diffractive deep neural network. Phys Rev Lett 2019;123(2):023901.
Qu YR, Zhu HZ, Shen YC, Zhang J, Tao CN, Ghosh PT, et al. Inverse design of an integrated-nanophotonics optical neural network. Sci Bull 2020;65(14):1177-1183.
[129] Khoram E, Chen A, Liu DJ, Ying L, Wang QQ, Yuan M, et al. Nanophotonic media for artificial neural inference. Photon Res 2019;7(8):823-827.
[130] Muminov B, Vuong LT. Fourier optical preprocessing in lieu of deep learning. Optica 2020;7(9):1079-1088.
[131] Muminov B, Perry A, Hyder R, Asif MS, Vuong LT. Toward simple, generalizable neural networks with universal training for low-SWaP hybrid vision. Photon Res 2021;9(7):B253-B261.
[132] Chang J, Sitzmann V, Dun X, Heidrich W, Wetzstein G. Hybrid optical–electronic convolutional neural networks with optimized diffractive optics for image classification. Sci Rep 2018;8:12324.
[133] Martel JNP, Mueller LK, Carey SJ, Dudek P, Wetzstein G. Neural sensors: learning pixel exposures for HDR imaging and video compressive sensing with programmable sensors. IEEE Trans Pattern Anal Mach Intell 2020;42:1642-1653.
[134] Li JX, Mengu D, Yardimci NT, Luo Y, Li XR, Veli M, et al. Spectrally encoded single-pixel machine vision using diffractive networks. Sci Adv 2021;7(13):eabd7690.
Xiang SY, Shi YC, Guo XX, Zhang YH, Wang HJ, Zheng DZ, et al. Hardware-algorithm collaborative computing with photonic spiking neuron chip based on an integrated Fabry–Perot laser with a saturable absorber. Optica 2023;10(2):162-171.
Nie S, Akyildiz IF. Metasurfaces for multiplexed communication. Nat Electron 2021;4(3):177-178.
[148] Zhao XG, Sun ZC, Zhang LY, Wang ZL, Xie RB, Zhao JH, et al. Review on metasurfaces: an alternative approach to advanced devices and instruments. Adv Devices Instrum 2022;2022:9765089.
[149] Yang F, Shalaginov MY, Lin HI, An SS, Agarwal A, Zhang HL, et al. Wide field-of-view metalens: a tutorial. Adv Photonics 2023;5(3):033001.
[150] Wesemann L, Rickett J, Song JC, Lou JQ, Hinde E, Davis TJ, et al. Nanophotonics enhanced coverslip for phase imaging in biology. Light Sci Appl 2021;10:98.
[151] Altug H, Oh SH, Maier SA, Homola J. Advances and applications of nanophotonic biosensors. Nat Nanotechnol 2022;17(1):5-16.
[152] Cheng JP, Sha XB, Zhang H, Chen QM, Qu GY, Song QH, et al. Ultracompact orbital angular momentum sorter on a CMOS chip. Nano Lett 2022;22(10):3993-3999.
[153] Krasikov S, Tranter A, Bogdanov A, Kivshar Y. Intelligent metaphotonics empowered by machine learning. Opto Electron Adv 2022;5(3):210147.
[154] Veselago VG. Electrodynamics of substances with simultaneously negative values of ε and μ. Sov Phys Usp 1968;10(4):509-514.
[155] Pendry JB, Holden AJ, Stewart WJ, Youngs I. Extremely low frequency plasmons in metallic mesostructures. Phys Rev Lett 1996;76(25):4773-4776.
[156] Yu NF, Genevet P, Kats MA, Aieta F, Tetienne JP, Capasso F, et al. Light propagation with phase discontinuities: generalized laws of reflection and refraction. Science 2011;334(6054):333-337.
[157] Cui TJ, Qi MQ, Wan X, Zhao J, Cheng Q. Coding metamaterials, digital metamaterials and programmable metamaterials. Light Sci Appl 2014;3(10):e218.
[158] McCulloch W, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biol 1943;5(4):115-133.
[159] Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput 2006;18(7):1527-1554.
Liu C, Ma Q, Luo ZJ, Hong QR, Xiao Q, Zhang HC, et al. A programmable diffractive deep neural network based on a digital-coding metasurface array. Nat Electron 2022;5(2):113-122.
[162] Huang LL, Muhlenbernd H, Li XW, Song X, Bai BF, Wang YT, et al. Broadband hybrid holographic multiplexing with geometric metasurfaces. Adv Mater 2015;27(41):6444-6449.
[163] Arbabi A, Horie Y, Bagheri M, Faraon A. Dielectric metasurfaces for complete control of phase and polarization with subwavelength spatial resolution and high transmission. Nat Nanotechnol 2015;10(11):937-943.
[164] Zhao RZ, Sain B, Wei QS, Tang CC, Li XW, Weiss T, et al. Multichannel vectorial holographic display and encryption. Light Sci Appl 2018;7:95.
[165] Cheng H, Wei XY, Yu P, Li ZC, Liu Z, Li JJ, et al. Integrating polarization conversion and nearly perfect absorption with multifunctional metasurfaces. Appl Phys Lett 2017;110(17):171903.
[166] Chu CH, Tseng ML, Chen J, Wu PC, Chen YH, Wang HC, et al. Active dielectric metasurface based on phase-change medium. Laser Photonics Rev 2016;10(6):986-994.
[167] Della Giovampaola C, Engheta N. Digital metamaterials. Nat Mater 2014;13(12):1115-1121.
[168] Wu HT, Liu S, Wan X, Zhang L, Wang D, Li LL, et al. Controlling energy radiations of electromagnetic waves via frequency coding metamaterials. Adv Sci 2017;4(9):1700098.
[169] Zhang L, Chen XQ, Liu S, Zhang Q, Zhao J, Dai JY, et al. Space–time-coding digital metasurfaces. Nat Commun 2018;9:4334.
[170] Liu S, Zhang HC, Zhang L, Yang QL, Xu Q, Gu JQ, et al. Full-state controls of terahertz waves using tensor coding metasurfaces. ACS Appl Mater Interfaces 2017;9(25):21503-21514.
[171] Ma Q, Shi CB, Bai GD, Chen TY, Noor A, Cui TJ. Beam-editing coding metasurfaces based on polarization bit and orbital-angular-momentum-mode bit. Adv Opt Mater 2017;5(23):1700548.
[172] Chen L, Ma Q, Nie QF, Hong QR, Cui HY, Ruan Y, et al. Dual-polarization programmable metasurface modulator for near-field information encoding and transmission. Photon Res 2021;9(2):116-124.
[173] Zhang L, Chen MZ, Tang WK, Dai JY, Miao L, Zhou XY, et al. A wireless communication scheme based on space- and frequency-division multiplexing using digital metasurfaces. Nat Electron 2021;4(3):218-227.
Amenabar I, Poly S, Nuansing W, Hubrich EH, Govyadinov AA, Huth F, et al. Structural analysis and mapping of individual protein complexes by infrared nanospectroscopy. Nat Commun 2013;4:2890.
[182] Yao HM, Li M, Jiang LJ. Applying deep learning approach to the far-field subwavelength imaging based on near-field resonant metalens at microwave frequencies. IEEE Access 2019;7:63801-63808.
[183] Li LL, Shuang Y, Ma Q, Li HY, Zhao HT, Wei ML, et al. Intelligent metasurface imager and recognizer. Light Sci Appl 2019;8:97.
[184] Li WH, Ma Q, Liu C, Zhang YF, Wu XN, Wang JW, et al. Intelligent metasurface system for automatic tracking of moving targets and wireless communications based on computer vision. Nat Commun 2023;14:989.
[185] Wesemann L, Davis TJ, Roberts A. Meta-optical and thin film devices for all-optical information processing. Appl Phys Rev 2021;8(3):031309.
Semmlinger M, Zhang M, Tseng ML, Huang TT, Yang J, Tsai DP, et al. Generating third harmonic vacuum ultraviolet light with a TiO2 metasurface. Nano Lett 2019;19(12):8972-8978.
[190] Huo PC, Zhang C, Zhu WQ, Liu MZ, Zhang S, Zhang S, et al. Photonic spin-multiplexing metasurface for switchable spiral phase contrast imaging. Nano Lett 2020;20(4):2791-2798.
Gabor D. A new microscopic principle. Nature 1948;161(4098):777-778.
[198] Leith EN, Upatnieks J. Reconstructed wavefronts and communication theory. J Opt Soc Am 1962;52:1123-1130.
[199] Denisyuk YN. On the reflection of optical properties of an object in a wave field of light scattered by it. Dokl Akad Nauk SSSR 1962;144:1275-1278.
[200] Zhang WH, Cao LC, Brady DJ, Zhang H, Cang J, Zhang H, et al. Twin-image-free holography: a compressive sensing approach. Phys Rev Lett 2018;121(9):093902.
[201] Zhao Y, Cao LC, Zhang H, Kong DZ, Jin GF. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method. Opt Express 2015;23(20):25440-25449.
[202] Zheng HD, Zhou CJ, Shui XH, Yu YJ. Computer-generated full-color phase-only hologram using a multiplane iterative algorithm with dynamic compensation. Appl Opt 2022;61(5):B262-B270.
[203] Wang Z, Miccio L, Coppola S, Bianco V, Memmolo P, Tkachenko V, et al. Digital holography as metrology tool at micro–nanoscale for soft matter. Light Adv Manuf 2022;3:10.
[204] Li JH, Cao LC, Gu HR, Tan XD, He QS, Jin GF. Orthogonal-reference-pattern-modulated shift multiplexing for collinear holographic data storage. Opt Lett 2012;37(5):936-938.
[205] Schnars U, Jüptner W. Digital holography: digital hologram recording, numerical reconstruction, and related techniques. Berlin: Springer-Verlag; 2005.
[206] Schnars U, Jüptner WPO. Digital recording and numerical reconstruction of holograms. Meas Sci Technol 2002;13(9):R85-R101.
Song J, Swisher CL, Im H, Jeong S, Pathania D, Iwamoto Y, et al. Sparsity-based pixel super resolution for lens-free digital in-line holography. Sci Rep 2016;6:24681.
[212] Rivenson Y, Wu YC, Ozcan A. Deep learning in holography and coherent imaging. Light Sci Appl 2019;8:85.
[213] Liu KX, Wu JC, He ZH, Cao LC. 4K-DMDNet: diffraction model-driven network for 4K computer-generated holography. Opto Electron Adv 2023;6(5):220135.
[214] Zhu RC, Wang JF, Fu XM, Liu XS, Liu TH, Chu ZT, et al. Deep-learning-empowered holographic metasurface with simultaneously customized phase and amplitude. ACS Appl Mater Interfaces 2022;14(42):48303-48310.
[215] Pitkäaho T, Manninen A, Naughton TJ. Focus prediction in digital holographic microscopy using deep convolutional neural networks. Appl Opt 2019;58(5):A202-A208.
[216] Liu TR, de Haan K, Rivenson Y, Wei ZS, Zeng X, Zhang YB, et al. Deep learning-based super-resolution in coherent imaging systems. Sci Rep 2019;9:3926.
[217] Yin D, Gu ZZ, Zhang YR, Gu FY, Nie SP, Feng ST, et al. Speckle noise reduction in coherent imaging based on deep learning without clean data. Opt Lasers Eng 2020;133:106151.
Zheng H, Hu JB, Zhou CJ, Wang XX. Computing 3D phase-type holograms based on deep learning method. Photonics 2021;8(7):280.
[222] Hossein Eybposh M, Caira NW, Atisa M, Chakravarthula P, Pégard NC. DeepCGH: 3D computer-generated holography using deep learning. Opt Express 2020;28(18):26636-26650.
[223] Peng YF, Choi S, Padmanaban N, Wetzstein G. Neural holography with camera-in-the-loop training. ACM Trans Graph 2020;39(6):185.
[224] Peng YF, Choi S, Kim J, Wetzstein G. Speckle-free holography with partially coherent light sources and camera-in-the-loop calibration. Sci Adv 2021;7(46):eabg5040.
[225] Wu JC, Liu KX, Sui XM, Cao LC. High-speed computer-generated holography using an autoencoder-based deep neural network. Opt Lett 2021;46(12):2908-2911.
[226] Shi L, Li BC, Kim C, Kellnhofer P, Matusik W. Towards real-time photorealistic 3D holography with deep neural networks. Nature 2021;591(7849):234-239.
[227] Gao H, Fan XH, Xiong W, Hong MH. Recent advances in optical dynamic meta-holography. Opto Electron Adv 2021;4(11):210030.
[228] Hu YQ, Luo XH, Chen YQ, Liu Q, Li X, Wang YS, et al. 3D-integrated metasurfaces for full-colour holography. Light Sci Appl 2019;8:86.
[229] Zou YJ, Zhu RR, Shen L, Zheng B. Reconfigurable metasurface hologram of dynamic distance via deep learning. Front Mater 2022;9:907672.
[230] Kaikhah K, Loochan F. Computer generated holograms for optical neural networks. Appl Intell 2001;14:145-160.
[231] Keller PE, Gmitro AF. Design and analysis of fixed planar holographic interconnects for optical neural networks. Appl Opt 1992;31(26):5517-5526.
[232] Li HYS, Qiao Y, Psaltis D. Optical network for real-time face recognition. Appl Opt 1993;32:5026-5035.
[233] Goi E, Schoenhardt S, Gu M. Direct retrieval of Zernike-based pupil functions using integrated diffractive deep neural networks. Nat Commun 2022;13:7531.
Bai B, Luo Y, Gan T, Hu J, Li Y, Zhao Y, et al. To image, or not to image: class-specific diffractive cameras with all-optical erasure of undesired objects. eLight 2022;2:14.
[236] Huang ZB, He YL, Wang PP, Xiong WJ, Wu HS, Liu JM, et al. Orbital angular momentum deep multiplexing holography via an optical diffractive neural network. Opt Express 2022;30(4):5569-5584.
[237] Wang TY, Sohoni MM, Wright LG, Stein MM, Ma SY, Onodera T, et al. Image sensing with multilayer nonlinear optical neural networks. Nat Photonics 2023;17(5):408-415.
Schuld M, Killoran N. Is quantum advantage the right goal for quantum machine learning? PRX Quantum 2022;3(3):030101.
[257] Wright LG, McMahon PL. The capacity of quantum neural networks. In: Proceedings of the 2020 Conference on Lasers and Electro-Optics (CLEO); 2020 May 10–15; online. Washington, DC: Optica Publishing Group; 2020. p. JM4G.5.
[258] Wang JW, Paesani S, Ding YH, Santagati R, Skrzypczyk P, Salavrakos A, et al. Multidimensional quantum entanglement with large-scale integrated optics. Science 2018;360(6386):285-291.
Arrazola JM, Bergholm V, Brádler K, Bromley TR, Collins MJ, Dhand I, et al. Quantum circuits with many photons on a programmable nanophotonic chip. Nature 2021;591(7848):54-60.
Ren HR, Shao W, Li Y, Salim F, Gu M. Three-dimensional vectorial holography based on machine learning inverse design. Sci Adv 2020;6(16):eaaz4261.
[265] Wang D, Li ZS, Zheng Y, Zhao YR, Liu C, Xu JB, et al. Liquid lens based holographic camera for real 3D scene hologram acquisition using end-to-end physical model-driven network. Light Sci Appl 2024;13:62.
[266] Işıl C, Mengu D, Zhao YF, Tabassum A, Li JX, Luo Y, et al. Super-resolution image display using diffractive decoders. Sci Adv 2022;8(48):eadd3433.
[267] Sakib Rahman MS, Ozcan A. Computer-free, all-optical reconstruction of holograms using diffractive networks. ACS Photonics 2021;8(11):3375-3384.
[268] Rivenson Y, Wang HD, Wei ZS, de Haan K, Zhang YB, Wu YC, et al. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nat Biomed Eng 2019;3(6):466-477.
Yuan SF, Ma C, Fetaya E, Mueller T, Naveh D, Zhang F, et al. Geometric deep optical sensing. Science 2023;379(6637):eade1220.
[271] Ashtiani F, Geers AJ, Aflatouni F. An on-chip photonic deep neural network for image classification. Nature 2022;606(7914):501-506.
[272] Zhang QH, Gamekkanda JC, Pandit A, Tang WL, Papageorgiou C, Mitchell C, et al. Extracting particle size distribution from laser speckle with a physics-enhanced autocorrelation-based estimator (PEACE). Nat Commun 2023;14:1159.
[273] Yan QQ, Deng QH, Zhang J, Zhu Y, Yin K, Li T, et al. Low-latency deep-reinforcement learning algorithm for ultrafast fiber lasers. Photon Res 2021;9(8):1493-1501.
Pai S, Sun ZH, Hughes TW, Park T, Bartlett B, Williamson IAD, et al. Experimentally realized in situ backpropagation for deep learning in photonic neural networks. Science 2023;380(6643):398-404.
[276] Passalis N, Mourgias-Alexandris G, Pleros N, Tefas A. Adaptive initialization for recurrent photonic networks using sigmoidal activations. In: Proceedings of the 2020 IEEE International Symposium on Circuits and Systems (ISCAS); 2020 Oct 12–14; Seville, Spain. New York City: IEEE; 2020. p. 1–5.
[277] Passalis N, Mourgias-Alexandris G, Tsakyridis A, Pleros N, Tefas A. Variance preserving initialization for training deep neuromorphic photonic networks with sinusoidal activations. In: Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2019 May 12–17; Brighton, UK. New York City: IEEE; 2019. p. 1483–7.
[278] George J, Amin R, Mehrabian A, Khurgin J, El-Ghazawi T, Prucnal PR, et al. Electrooptic nonlinear activation functions for vector matrix multiplications in optical neural networks. In: Proceedings of the Advanced Photonics 2018; 2018 Jul 2–5; Zurich, Switzerland. Washington, DC: Optica Publishing Group; 2018. p. SpW4G.3.
[279] Xu DY, Xu WH, Yang Q, Zhang WS, Wen SC, Luo HL. All-optical object identification and three-dimensional reconstruction based on optical computing metasurface. Opto Electron Adv 2023;6(12):230120.
[280] Yang YQ, Forbes A, Cao LC. A review of liquid crystal spatial light modulators: devices and applications. Opto-Electron Sci 2023;2(8):230026.
[281] Liao K, Chen Y, Yu ZC, Hu XY, Wang XY, Lu CC, et al. All-optical computing based on convolutional neural networks. Opto Electron Adv 2021;4(11):200060.