Heterogeneous catalysis remains at the core of various bulk chemical manufacturing and energy conversion processes, and revolutionizing it requires the discovery of new materials that combine ideal catalytic activity with economic feasibility. Computational high-throughput screening presents a viable solution to this challenge, as machine learning (ML) has demonstrated great potential for accelerating such processes by providing satisfactory estimations of surface reactivity from relatively low-cost information. This review focuses on recent progress in applying ML to adsorption energy prediction, the quantity that predominantly determines the catalytic potential of a solid catalyst. ML models that leverage inputs from different categories and exhibit various levels of complexity are classified and discussed. At the end of the review, an outlook on the current challenges and future opportunities of ML-assisted catalyst screening is provided. We believe that this review summarizes major achievements in accelerating catalyst discovery through ML and can inspire researchers to further devise novel strategies to accelerate materials design and, ultimately, reshape the chemical industry and energy landscape.
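As a minimal illustration of the kind of low-cost-input workflow reviewed here, the sketch below fits a random forest to hypothetical surface descriptors standing in for adsorption energies; the descriptor names, synthetic data, and model choice are assumptions for demonstration only, not a method taken from any specific study.

```python
# Minimal sketch: regressing adsorbate binding energies from simple surface
# descriptors with a random forest. Feature names and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical descriptors: d-band center (eV), coordination number, work function (eV)
X = np.column_stack([
    rng.uniform(-4.0, -1.0, n),   # d-band center
    rng.integers(6, 12, n),       # generalized coordination number
    rng.uniform(4.0, 6.0, n),     # work function
])
# Synthetic target standing in for DFT adsorption energies (eV)
y = 0.8 * X[:, 0] - 0.1 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.1, n)

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"CV MAE: {-scores.mean():.3f} eV")
```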
As the scale of urban rail transit (URT) networks expands, the study of URT resilience is essential for safe and efficient operations. This paper presents a comprehensive review of URT resilience and highlights potential trends and directions for future research. First, URT resilience is defined by three primary abilities: absorption, resistance, and recovery, and four properties: robustness, vulnerability, rapidity, and redundancy. Then, the metrics and assessment approaches for URT resilience are summarized. The metrics are divided into three categories: topology-based, characteristic-based, and performance-based; the assessment methods are divided into four categories: topological, simulation, optimization, and data-driven. Comparisons of various metrics and assessment approaches reveal that current research on URT resilience increasingly favors integrating traditional methods, such as conventional complex network analysis and operations optimization theory, with new techniques such as big data and intelligent computing, to assess URT resilience accurately. Finally, five potential trends and directions for future research are identified: analyzing resilience based on multisource data, optimizing train diagrams in multiple scenarios, responding accurately to passenger demand through new technologies, coupling and optimizing passenger and traffic flows, and optimizing line design.
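To make the topology- and performance-based categories concrete, the following sketch computes a simple resilience proxy (the relative size of the giant component as stations are removed, averaged over the disruption) on a toy network using networkx; both the network and the metric are illustrative assumptions rather than a metric adopted from the reviewed studies.

```python
# Illustrative topology-based resilience curve: relative giant-component size
# of a toy URT-like network under random station removal (not a metric from the review).
import random
import networkx as nx

G = nx.barabasi_albert_graph(n=100, m=2, seed=1)  # stand-in for a URT topology
n0 = G.number_of_nodes()
nodes = list(G.nodes)
random.seed(1)
random.shuffle(nodes)

curve = []
H = G.copy()
for node in nodes[:50]:                 # remove up to half of the stations
    H.remove_node(node)
    giant = max(nx.connected_components(H), key=len) if H.number_of_nodes() else set()
    curve.append(len(giant) / n0)       # network performance proxy P(t)

# Performance-based resilience proxy: normalized area under the performance curve
resilience = sum(curve) / len(curve)
print(f"Resilience proxy: {resilience:.3f}")
```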
New types of aerial robots (NTARs) have found extensive applications in the military, civilian contexts, scientific research, disaster management, and various other domains. Compared with traditional aerial robots, NTARs exhibit a broader range of morphological diversity, locomotion capabilities, and enhanced operational capacities. Therefore, this study defines aerial robots with the four characteristics of morphability, biomimicry, multi-modal locomotion, and manipulator attachment as NTARs. Subsequently, this paper discusses the latest research progress in the materials and manufacturing technology, actuation technology, and perception and control technology of NTARs. Thereafter, the research status of NTAR systems is summarized, focusing on the frontier development and application cases of flapping-wing micro-air vehicles, perching aerial robots, amphibious robots, and operational aerial robots. Finally, the main challenges presented by NTARs in terms of energy, materials, and perception are analyzed, and the future development trends of NTARs are summarized in terms of size and endurance, mechatronics, and complex scenarios, providing a reference direction for the follow-up exploration of NTARs.
The issue of opacity within data-driven artificial intelligence (AI) algorithms has become an impediment to these algorithms’ extensive utilization, especially within sensitive domains concerning health, safety, and high profitability, such as chemical engineering (CE). In order to promote reliable AI utilization in CE, this review discusses the concept of transparency within AI applications, which is defined based on both explainable AI (XAI) concepts and key features from within the CE field. This review also highlights the requirements of reliable AI from the aspects of causality (i.e., the correlations between the predictions and inputs of an AI), explainability (i.e., the operational rationales of the workflows), and informativeness (i.e., the mechanistic insights of the investigated systems). Related techniques are evaluated together with state-of-the-art applications to highlight the significance of establishing reliable AI applications in CE. Furthermore, a comprehensive transparency analysis case study is provided as an example to enhance understanding. Overall, this work provides a thorough discussion of this subject matter in a way that, for the first time, is particularly geared toward chemical engineers in order to raise awareness of responsible AI utilization. By supplying this vital missing link, AI is anticipated to serve as a novel and powerful tool that can tremendously aid chemical engineers in solving bottleneck challenges in CE.
Non-ionic deep eutectic solvents (DESs) are designer solvents with various applications in catalysis, extraction, carbon capture, and pharmaceuticals. However, discovering new DES candidates is challenging due to the lack of efficient tools that accurately predict DES formation. The search for DESs relies heavily on intuition or trial-and-error processes, leading to low success rates or missed opportunities. Recognizing that hydrogen bonds (HBs) play a central role in DES formation, we aim to identify HB features that distinguish DES from non-DES systems and use them to develop machine learning (ML) models to discover new DES systems. We first analyze the HB properties of 38 known DES and 111 known non-DES systems using their molecular dynamics (MD) simulation trajectories. The analysis reveals that DES systems have two unique features compared with non-DES systems: they exhibit ① a greater imbalance between the numbers of the two intra-component HBs and ② more and stronger inter-component HBs. Based on these results, we develop 30 ML models using ten algorithms and three types of HB-based descriptors. The model performance is first benchmarked using the average and minimal receiver operating characteristic (ROC)-area under the curve (AUC) values. We also analyze the importance of individual features in the models, and the results are consistent with the simulation-based statistical analysis. Finally, we validate the models using the experimental data of 34 systems. The extra trees forest model outperforms the other models in the validation, with an ROC-AUC of 0.88. Our work illustrates the importance of HBs in DES formation and shows the potential of ML in discovering new DESs.
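A minimal sketch of this classification workflow, assuming scikit-learn and three hypothetical HB descriptors (intra-component HB imbalance, inter-component HB count, and mean inter-component HB strength); the synthetic data stand in for the MD-derived features, so the numbers below are not the paper's results.

```python
# Sketch of the classification workflow: extra trees on hydrogen-bond (HB)
# descriptors with ROC-AUC evaluation. Descriptor names and data are placeholders.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 149  # e.g., 38 DES + 111 non-DES systems
X = np.column_stack([
    rng.uniform(0, 1, n),   # imbalance between the two intra-component HB counts
    rng.uniform(0, 10, n),  # number of inter-component HBs
    rng.uniform(0, 1, n),   # mean inter-component HB strength (normalized)
])
# Synthetic DES / non-DES labels loosely tied to the descriptors
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] / 10 + 0.2 * X[:, 2] + rng.normal(0, 0.1, n)) > 0.55

clf = ExtraTreesClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(clf, X, y.astype(int), cv=5, scoring="roc_auc")
print(f"Mean ROC-AUC: {auc.mean():.2f}")
clf.fit(X, y.astype(int))
print("Feature importances:", np.round(clf.feature_importances_, 3))
```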
Information on the physicochemical properties of chemical species is an important prerequisite when performing tasks such as process design and product design. However, the lack of extensive data and high experimental costs hinder the development of prediction techniques for these properties. Moreover, accuracy and predictive capabilities still limit the scope and applicability of most property estimation methods. This paper proposes a new Gaussian process-based modeling framework that aims to manage a discrete and high-dimensional input space related to molecular structure representation with the group-contribution approach. A warping function is used to map discrete input into a continuous domain in order to adjust the correlation between different compounds. Prior selection techniques, including prior elicitation and prior predictive checking, are also applied during the building procedure to provide the model with more information from previous research findings. The framework is assessed using datasets of varying sizes for 20 pure component properties. For 18 out of the 20 pure component properties, the new models are found to give improved accuracy and predictive power in comparison with other published models, with and without machine learning.
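A schematic of the idea of warping a discrete group-contribution input before Gaussian process regression, assuming scikit-learn; the log1p warp, the RBF plus white-noise kernel, and the synthetic data are placeholders for the paper's learned warping function and elicited priors.

```python
# Minimal sketch: a Gaussian process over group-contribution counts, with a simple
# warp (log1p) mapping the discrete counts into a continuous domain before an RBF
# kernel. The actual warping function and prior choices are not reproduced here.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_mol, n_groups = 80, 6
X_counts = rng.integers(0, 5, size=(n_mol, n_groups)).astype(float)  # group occurrences
y = X_counts @ rng.uniform(5, 20, n_groups) + rng.normal(0, 1.0, n_mol)  # synthetic property

def warp(x):
    """Map discrete group counts into a continuous, compressed domain."""
    return np.log1p(x)

kernel = RBF(length_scale=np.ones(n_groups)) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(warp(X_counts), y)

x_new = warp(rng.integers(0, 5, size=(1, n_groups)).astype(float))
mean, std = gp.predict(x_new, return_std=True)
print(f"Predicted property: {mean[0]:.1f} ± {std[0]:.1f}")
```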
The steel industry is considered an important basic sector of the national economy, and its high energy consumption and carbon emissions make it a major contributor to climate change, especially in China. The majority of crude steel in China is produced via the energy- and carbon-intensive blast furnace-basic oxygen furnace (BF-BOF) route, which greatly relies on coking coal. In recent years, China’s steel sector has made significant progress in energy conservation and emission reduction, driven by decarbonization policies and regulations. However, due to the huge output of crude steel, the steel sector still produces 15% of the total national CO2 emissions. The direct reduced iron (DRI) plus scrap-electric arc furnace (EAF) process is currently considered a good alternative to the conventional route as a means of reducing CO2 emissions and the steel industry’s reliance on iron ore and coking coal, since the gas-based DRI plus scrap-EAF route is expected to be more promising than the coal-based one. Unfortunately, almost no DRI is produced in China, seriously restricting the development of the EAF route. Here, we highlight the challenges and pathways of the future development of DRI, with a focus on China. In the short term, replacing natural gas with coke oven gas (COG) and byproduct gas from the integrated refining and chemical sector is a more economically feasible and cleaner way to develop a gas-based route in China. As the energy revolution proceeds, using fossil fuels in combination with carbon capture, utilization, and storage (CCUS) and hydrogen will be a good alternative due to the relatively low cost. In the long term, DRI is expected to be produced using 100% hydrogen from renewable energy. Both the development of deep processing technologies and the invention of a novel binder are required to prepare high-quality pellets for direct reduction (DR), and further research on the one-step gas-based process is necessary.
The security of the seed industry is crucial for ensuring national food security. Currently, developed countries in Europe and America, along with international seed industry giants, have entered the Breeding 4.0 era. This era integrates biotechnology, artificial intelligence (AI), and big data information technology. In contrast, China is still in a transition period between stages 2.0 and 3.0, which primarily relies on conventional selection and molecular breeding. In the context of increasingly complex international situations, accurately identifying core issues in China’s seed industry innovation and seizing the frontier of international seed technology are strategically important. These efforts are essential for ensuring food security and revitalizing the seed industry. This paper systematically analyzes the characteristics of crop breeding data from artificial selection to intelligent design breeding. It explores the applications and development trends of AI and big data in modern crop breeding from several key perspectives. These include high-throughput phenotype acquisition and analysis, multiomics big data database and management system construction, AI-based multiomics integrated analysis, and the development of intelligent breeding software tools based on biological big data and AI technology. Based on an in-depth analysis of the current status and challenges of China’s seed industry technology development, we propose strategic goals and key tasks for China’s new generation of AI and big data-driven intelligent design breeding. These suggestions aim to accelerate the development of an intelligent-driven crop breeding engineering system that features large-scale gene mining, efficient gene manipulation, engineered variety design, and systematized biobreeding. This study provides a theoretical basis and practical guidance for the development of China’s seed industry technology.
Arch bridges provide significant technical and economic benefits under suitable conditions. In particular, concrete-filled steel tubular (CFST) arch bridges and steel-reinforced concrete (SRC) arch bridges are two types of arch bridges that have gained great economic competitiveness and span growth potential due to advancements in construction technology, engineering materials, and construction equipment over the past 30 years. Under the leadership of the author, two record-breaking arch bridges—that is, the Pingnan Third Bridge (a CFST arch bridge), with a span of 560 m, and the Tian’e Longtan Bridge (an SRC arch bridge), with a span of 600 m—have been built in the past five years, embodying great technological breakthroughs in the construction of these two types of arch bridges. This paper takes these two arch bridges as examples to systematically summarize the latest technological innovations and practices in the construction of CFST arch bridges and SRC arch bridges in China. The technological innovations of CFST arch bridges include cable-stayed fastening-hanging cantilevered assembly methods, new in-tube concrete materials, in-tube concrete pouring techniques, a novel thrust abutment foundation for non-rocky terrain, and measures to reduce the quantity of temporary facilities. The technological innovations of SRC arch bridges involve arch skeleton stiffness selection, the development of encasing concrete materials, encasing concrete pouring, arch rib stress mitigation, and longitudinal reinforcement optimization. To conclude, future research focuses and development directions for these two types of arch bridges are proposed.
In this paper, we propose mesoscience-guided deep learning (MGDL), a deep learning modeling approach guided by mesoscience, to study complex systems. When establishing the sample dataset from the same system evolution data, MGDL differs from conventional deep learning in that it introduces the treatment of the dominant mechanisms of the complex system and the interactions between them, according to the principle of compromise in competition (CIC) in mesoscience. Mesoscience constraints are then integrated into the loss function to guide the deep learning training, and two methods are proposed for adding these constraints. The physical interpretability of the model-training process is improved by MGDL because guidance and constraints based on physical principles are provided. MGDL was evaluated using a bubbling bed modeling case and compared with traditional techniques. The results indicate that, with a much smaller training dataset, mesoscience-constraint-based model training has distinct advantages in terms of convergence stability and prediction accuracy, and it can be widely applied to various neural network configurations. The MGDL approach proposed in this paper is a novel method for utilizing physical background information during deep learning model training, and its further exploration will be continued in the future.
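The general MGDL pattern of adding a mesoscience constraint to the loss can be sketched as follows in PyTorch; the toy non-negativity penalty stands in for a case-specific compromise-in-competition (CIC) term, and the network, data, and weighting are assumptions for illustration only.

```python
# Sketch of the general MGDL pattern: augment the data loss with a penalty that
# encodes a mesoscience (compromise-in-competition) constraint. The constraint
# below is a placeholder; the real CIC term is case-specific (e.g., a stability
# condition for the bubbling bed).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
lam = 0.1  # weight of the mesoscience constraint term

def cic_residual(y_pred):
    """Placeholder for a CIC-based constraint, e.g., an extremum or stability
    condition that the predicted state should satisfy."""
    return (y_pred.clamp(min=0.0) - y_pred).pow(2).mean()  # toy non-negativity constraint

x = torch.rand(256, 4)
y = torch.rand(256, 1)
for step in range(100):
    optimizer.zero_grad()
    y_pred = model(x)
    loss = mse(y_pred, y) + lam * cic_residual(y_pred)  # data loss + mesoscience penalty
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.4f}")
```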
With the future substantial increase in coverage and network heterogeneity, emerging networks will encounter unprecedented security threats. Covert communication is considered a potential enhanced security and privacy solution for safeguarding future wireless networks, as it keeps the probability that monitors detect the transmitter’s transmission behavior low, thereby ensuring the secure transmission of private information. Owing to its favorable security, it is foreseeable that covert communication will be widely used in various wireless communication settings such as medical, financial, and military scenarios. However, existing covert communication methods still face many challenges in practical applications. In particular, it is difficult to guarantee the effectiveness of covert schemes based on the randomness of eavesdropping environments, and it is challenging for legitimate users to detect weak covert signals. Considering that emerging artificial-intelligence-aided transmission technologies can open up entirely new opportunities to address the above challenges, this work provides a comprehensive review of recent advances and potential research directions in the field of intelligent covert communications. First, the basic concepts and performance metrics of covert communications are introduced. Then, existing effective covert communication techniques in the time, frequency, spatial, power, and modulation domains are reviewed. Finally, this paper discusses potential implementations and challenges for intelligent covert communications in future networks.
Scientific and technological revolutions and industrial transformations have accelerated the rate of innovation in environmental engineering technologies. However, few researchers have evaluated the current status and future trends of these technologies. Based on bibliometric analysis, this paper summarizes the current research status in eight major subfields of environmental engineering—water treatment, air pollution control, soil/solid waste management, environmental biotechnology, environmental engineering equipment, emerging contaminants, synergistic reduction of pollution and carbon emissions, and environmental risk and intelligent management—as well as future trends toward greenization, low carbonization, and intelligentization. Disruptive technologies are further identified based on discontinuous transformation, and ten such technologies are proposed, covering general and specific fields, technical links, and value sources. Additionally, the background and key innovations of these disruptive technologies are elucidated in detail. This study not only provides a scientific basis for strategic decision-making, planning, and implementation in the environmental engineering field but also offers methodological guidance for the research and determination of breakthrough technologies in other areas.
The concept of precision nutrition was first proposed almost a decade ago. Current research in precision nutrition primarily focuses on comprehending individualized variations in response to dietary intake, with little attention being given to other crucial aspects of precision nutrition. Moreover, there is a dearth of comprehensive review studies that portray the landscape and framework of precision nutrition. This review commences by tracing the historical trajectory of nutritional science, with the aim of dissecting the challenges encountered in nutrition science within the new era of disease profiles. This review also deconstructs the field of precision nutrition into four key components: the proposal of the theory for individualized nutritional requirement phenotypes; the establishment of precise methods for measuring dietary intake and evaluating nutritional status; the creation of multidimensional nutritional intervention strategies that address the aspects of what, how, and when to eat; and the construction of a pathway for the translation and integration of scientific research into healthcare practices, utilizing artificial intelligence and information platforms. Incorporating these four components, this review further discusses prospective avenues that warrant exploration to achieve the objective of enhancing health through precision nutrition.
Reactive transport equations in porous media are critical in various scientific and engineering disciplines, but solving these equations can be computationally expensive when exploring different scenarios, such as varying porous structures and initial or boundary conditions. The deep operator network (DeepONet) has emerged as a popular deep learning framework for solving parametric partial differential equations. However, applying the DeepONet to porous media presents significant challenges due to its limited capability to extract representative features from intricate structures. To address this issue, we propose the Porous-DeepONet, a simple yet highly effective extension of the DeepONet framework that leverages convolutional neural networks (CNNs) to learn the solution operators of parametric reactive transport equations in porous media. By incorporating CNNs, we can effectively capture the intricate features of porous media, enabling accurate and efficient learning of the solution operators. We demonstrate the effectiveness of the Porous-DeepONet in accurately and rapidly learning the solution operators of parametric reactive transport equations with various boundary conditions, multiple phases, and multi-physical fields through five examples. This approach offers significant computational savings, potentially reducing the computation time by 50-1000 times compared with the finite-element method. Our work may provide a robust alternative for solving parametric reactive transport equations in porous media, paving the way for exploring complex phenomena in porous media.
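A schematic of the Porous-DeepONet idea in PyTorch, assuming a CNN branch network that encodes the porous-structure image and an MLP trunk network that encodes query coordinates, combined by an inner product; the layer sizes and input dimensions are illustrative and not taken from the paper.

```python
# Schematic of the Porous-DeepONet idea: a CNN branch encodes the porous-structure
# image, an MLP trunk encodes query coordinates (x, y, t), and the solution is the
# inner product of the two embeddings. Sizes are illustrative only.
import torch
import torch.nn as nn

class PorousDeepONet(nn.Module):
    def __init__(self, p=64):
        super().__init__()
        self.branch = nn.Sequential(                  # encodes a 1 x 64 x 64 structure image
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 16 * 16, p),
        )
        self.trunk = nn.Sequential(                   # encodes query points (x, y, t)
            nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, p),
        )

    def forward(self, structure, coords):
        b = self.branch(structure)                    # (batch, p)
        t = self.trunk(coords)                        # (n_points, p)
        return b @ t.T                                # (batch, n_points) field values

model = PorousDeepONet()
u = model(torch.rand(8, 1, 64, 64), torch.rand(100, 3))
print(u.shape)  # torch.Size([8, 100])
```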
Road infrastructure is facing significant digitalization challenges within the context of new infrastructure construction in China and worldwide. Among the advanced digital technologies, digital twin (DT) has gained prominence across various engineering sectors, including the manufacturing and construction industries. Specifically, road engineering has demonstrated a growing interest in DT and has achieved promising results in DT-related applications over the past several years. This paper systematically introduces the development of DT and examines its current state in road engineering by reviewing research articles on DT-enabling technologies, such as model creation, condition sensing, data processing, and interaction, as well as its applications throughout the lifecycle of road infrastructure. The findings indicate that research has primarily focused on data perception and virtual model creation, while real-time data processing and interaction between physical and virtual models remain underexplored. DT in road engineering has been predominantly applied during the operation and maintenance phases, with limited attention given to the construction and demolition phases. Future efforts should focus on establishing uniform standards, developing innovative perception and data interaction techniques, optimizing development costs, and expanding the scope of lifecycle applications to facilitate the digital transformation of road engineering. This review provides a comprehensive overview of state-of-the-art advancements in this field and paves the way for leveraging DT in road infrastructure lifecycle management.
Underground salt cavern CO2 storage (SCCS) offers the dual benefits of enabling extensive CO2 storage and facilitating the utilization of CO2 resources, while contributing to the regulation of the carbon market. Its economic and operational advantages over traditional carbon capture, utilization, and storage (CCUS) projects make SCCS a more cost-effective and flexible option. Despite the widespread use of salt caverns for storing various substances, differences exist between SCCS and traditional salt cavern energy storage in terms of gas-tightness, carbon injection, brine extraction control, long-term carbon storage stability, and site selection criteria. These distinctions stem from the unique phase-change characteristics of CO2 and the application scenarios of SCCS. Therefore, targeted and forward-looking scientific research on SCCS is imperative. This paper introduces the implementation principles and application scenarios of SCCS, emphasizing its connections with carbon emissions, carbon utilization, and renewable energy peak shaving. It delves into the operational characteristics and economic advantages of SCCS compared with other CCUS methods and addresses the associated scientific challenges. In this paper, we establish a pressure equation for carbon injection and brine extraction that considers the phase-change characteristics of CO2, and we analyze the pressure during carbon injection. By comparing the viscosities of CO2 and other gases, the excellent sealing performance of SCCS is demonstrated. Building on this, we develop a long-term stability evaluation model and associated indices, which analyze the impact of the injection speed and minimum operating pressure on stability, and field countermeasures to ensure stability are proposed. Site selection criteria for SCCS are established, preliminary salt mine sites suitable for SCCS are identified in China, and the achievable carbon storage scale in China is initially estimated at over 51.8-77.7 million tons, utilizing only 20%-30% of the volume of abandoned salt caverns. This paper addresses key scientific and engineering challenges facing SCCS, determines crucial technical parameters such as the operating pressure, burial depth, and storage scale, and offers essential guidance for implementing SCCS projects in China.
Lunar habitat construction is crucial for successful lunar exploration missions. Due to the limitations of transportation conditions, extensive global research has been conducted on lunar in situ material processing techniques in recent years. The aim of this paper is to provide a comprehensive review, precise classification, and quantitative evaluation of these approaches, focusing specifically on four main approaches: reaction solidification (RS), sintering/melting (SM), bonding solidification (BS), and confinement formation (CF). Eight key indicators have been identified for the construction of low-cost and high-performance systems to assess the feasibility of these methods: in situ material ratio, curing temperature, curing time, implementation conditions, compressive strength, tensile strength, curing dimensions, and environmental adaptability. The scoring thresholds are determined by comparing the construction requirements with the actual capabilities. Among the evaluated methods, regolith bagging has emerged as a promising option due to its high in situ material ratio, low time requirement, lack of high-temperature requirements, and minimal shortcomings, with only the compressive strength falling below the neutral score. The compressive strength still maintains a value of . The proposed construction scheme utilizing regolith bags offers numerous advantages, including rapid and large-scale construction, ensured tensile strength, and reduced reliance on equipment and energy. In this study, guidelines for evaluating regolith solidification techniques are provided, and directions for improvement are offered. The proposed lunar habitat design based on regolith bags is a practical reference for future research.
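To illustrate how the eight indicators could be aggregated into an overall assessment, the sketch below applies hypothetical weights and 1-5 indicator scores to the four approach families; all numbers are placeholders and do not reproduce the paper's scoring thresholds or results.

```python
# Illustrative aggregation of the eight indicators into an overall score for each
# solidification approach. Scores and weights below are hypothetical placeholders,
# not the paper's actual thresholds or evaluation results.
indicators = [
    "in situ material ratio", "curing temperature", "curing time",
    "implementation conditions", "compressive strength", "tensile strength",
    "curing dimensions", "environmental adaptability",
]
weights = [0.15, 0.10, 0.15, 0.10, 0.15, 0.10, 0.10, 0.15]  # hypothetical, sums to 1

# Hypothetical indicator scores on a 1-5 scale (3 = neutral)
methods = {
    "reaction solidification (RS)": [3, 4, 3, 3, 4, 2, 3, 3],
    "sintering/melting (SM)":       [5, 1, 2, 2, 5, 2, 2, 3],
    "bonding solidification (BS)":  [3, 4, 3, 3, 4, 3, 3, 3],
    "regolith bagging (CF)":        [5, 5, 5, 4, 2, 4, 4, 4],
}

for name, scores in methods.items():
    total = sum(w * s for w, s in zip(weights, scores))
    print(f"{name:31s} weighted score = {total:.2f}")
```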
To reduce CO2 emissions from coal-fired power plants, the development of low-carbon or carbon-free fuel combustion technologies has become urgent. As a new zero-carbon fuel, ammonia (NH3) can be used to address the storage and transportation issues of hydrogen energy. Since it is not feasible to completely replace coal with ammonia in the short term, developing ammonia-coal co-combustion technology at the current stage is a fast and feasible approach to reducing CO2 emissions from coal-fired power plants. This study focuses on modifying the boiler and installing two layers of eight pure-ammonia burners in a 300-MW coal-fired power plant to achieve ammonia-coal co-combustion at proportions ranging from 20% to 10% (by heat ratio) at loads of 180 to 300 MW, respectively. The results show that, during ammonia-coal co-combustion in the 300-MW plant, NOx emissions at the furnace outlet changed more significantly with varying boiler oxygen levels than under pure-coal combustion. Moreover, ammonia burners located in the middle of the main combustion zone exhibited better high-temperature reduction performance than those located in the upper part of the main combustion zone. Under all ammonia co-combustion conditions, the NH3 concentration at the furnace outlet remained below 1 part per million (ppm). Compared with pure-coal conditions, the thermal efficiency of the boiler decreased slightly (by 0.12%-0.38%) under different loads when the ammonia co-combustion rate reached 15 t·h⁻¹. Ammonia co-combustion in coal-fired power plants is a potentially feasible technology route for carbon reduction.
Large language models (LLMs) have significantly advanced artificial intelligence (AI) by excelling in tasks such as understanding, generation, and reasoning across multiple modalities. Despite these achievements, LLMs have inherent limitations including outdated information, hallucinations, inefficiency, lack of interpretability, and challenges in domain-specific accuracy. To address these issues, this survey explores three promising directions in the post-LLM era: knowledge empowerment, model collaboration, and model co-evolution. First, we examine methods of integrating external knowledge into LLMs to enhance factual accuracy, reasoning capabilities, and interpretability, including incorporating knowledge into training objectives, instruction tuning, retrieval-augmented inference, and knowledge prompting. Second, we discuss model collaboration strategies that leverage the complementary strengths of LLMs and smaller models to improve efficiency and domain-specific performance through techniques such as model merging, functional model collaboration, and knowledge injection. Third, we delve into model co-evolution, in which multiple models collaboratively evolve by sharing knowledge, parameters, and learning strategies to adapt to dynamic environments and tasks, thereby enhancing their adaptability and continual learning. We illustrate how the integration of these techniques advances AI capabilities in science, engineering, and society—particularly in hypothesis development, problem formulation, problem-solving, and interpretability across various domains. We conclude by outlining future pathways for further advancement and applications.
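As one concrete instance of the retrieval-augmented inference discussed above, the sketch below retrieves the most similar passages by cosine similarity and prepends them to the prompt; embed() and generate() are hypothetical placeholders for an embedding model and an LLM, and no specific API is assumed.

```python
# Minimal retrieval-augmented inference loop: retrieve the most relevant passages
# by cosine similarity and prepend them to the prompt. embed() and generate() are
# placeholders for an embedding model and an LLM; no specific library API is assumed.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: replace with a real text encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    sims = [float(q @ embed(doc)) for doc in corpus]
    top = np.argsort(sims)[::-1][:k]          # indices of the k most similar passages
    return [corpus[i] for i in top]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call."""
    return f"[LLM answer conditioned on {len(prompt)} prompt characters]"

corpus = ["Document about knowledge graphs.", "Document about model merging.",
          "Document about retrieval-augmented generation."]
question = "How can external knowledge reduce hallucinations?"
context = "\n".join(retrieve(question, corpus))
print(generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"))
```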
Despite recent advances in understanding the biology of aging, the field remains fragmented due to the lack of a central organizing hypothesis. Although there are ongoing debates on whether the aging process is programmed or stochastic, it is now evident that neither perspective alone can fully explain the complexity of aging. Here, we propose the pro-aging metabolic reprogramming (PAMRP) theory, which integrates and unifies the genetic-program and stochastic hypotheses. This theory posits that aging is driven by degenerative metabolic reprogramming (MRP) over time, requiring the emergence of pro-aging substrates and triggers (PASs and PATs) to predispose cells to cellular and genetic reprogramming (CRP and GRP).
Substantially glazed facades are extensively used in contemporary high-rise buildings to achieve attractive architectural aesthetics. Inherent conflicts exist among architectural aesthetics, building energy consumption, and solar energy harvesting for glazed facades. In this study, we addressed these conflicts by introducing a new dynamic and vertical photovoltaic integrated building envelope (dvPVBE) that offers extraordinary flexibility with weather-responsive slat angles and blind positions, superior architectural aesthetics, and notable energy-saving potential. Three hierarchical control strategies were proposed for different scenarios of the dvPVBE: power generation priority (PGP), natural daylight priority (NDP), and energy-saving priority (ESP). Moreover, the PGP and ESP strategies were further analyzed in the simulation of a dvPVBE. An office room integrated with a dvPVBE was modeled using EnergyPlus. The influence of the dvPVBE in improving the building energy efficiency and corresponding optimal slat angles was investigated under the PGP and ESP control strategies. The results indicate that the application of dvPVBEs in Beijing can provide up to of the annual energy demand of office rooms and significantly increase the annual net energy output by at least compared with static photovoltaic (PV) blinds. The concept of this novel dvPVBE offers a viable approach by which the thermal load, daylight penetration, and energy generation can be effectively regulated.
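The following sketch illustrates how a controller might choose a slat angle under the power generation priority (PGP) and energy-saving priority (ESP) strategies; the PV-generation and room-energy terms are placeholder functions, not the EnergyPlus model or control logic used in this study.

```python
# Schematic slat-angle selection under two of the hierarchical control strategies.
# The energy terms below are placeholder functions, not the EnergyPlus model.
import math

ANGLES = range(0, 91, 15)  # candidate slat angles in degrees

def pv_generation(angle, sun_altitude):
    """Placeholder: PV output peaks when slats face the sun."""
    return max(0.0, math.cos(math.radians(angle - sun_altitude)))

def room_energy_use(angle, sun_altitude):
    """Placeholder: lighting plus cooling demand as a function of shading."""
    daylight = max(0.0, math.sin(math.radians(angle)))  # more open -> more daylight
    return 0.6 * (1 - daylight) + 0.4 * daylight        # lighting vs. cooling trade-off

def choose_angle(strategy, sun_altitude):
    if strategy == "PGP":   # power generation priority: maximize PV output
        return max(ANGLES, key=lambda a: pv_generation(a, sun_altitude))
    if strategy == "ESP":   # energy-saving priority: maximize net energy
        return max(ANGLES, key=lambda a: pv_generation(a, sun_altitude)
                                         - room_energy_use(a, sun_altitude))
    raise ValueError(strategy)

for strategy in ("PGP", "ESP"):
    print(strategy, "->", choose_angle(strategy, sun_altitude=35), "degrees")
```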
This paper investigates a distributed heterogeneous hybrid blocking flow-shop scheduling problem (DHHBFSP) designed to minimize the total tardiness and total energy consumption simultaneously, and proposes an improved proximal policy optimization (IPPO) method to make real-time decisions for the DHHBFSP. A multi-objective Markov decision process is modeled for the DHHBFSP, where the reward function is represented by a vector with dynamic weights instead of the common objective-related scalar value. A factory agent (FA) is formulated for each factory to select unscheduled jobs and is trained by the proposed IPPO to improve the decision quality. Multiple FAs work asynchronously to allocate jobs that arrive randomly at the shop. A two-stage training strategy is introduced in the IPPO, which learns from both single- and dual-policy data for better data utilization. The proposed IPPO is tested on randomly generated instances and compared with variants of the basic proximal policy optimization (PPO), dispatch rules, multi-objective metaheuristics, and multi-agent reinforcement learning methods. Extensive experimental results suggest that the proposed strategies offer significant improvements to the basic PPO, and the proposed IPPO outperforms the state-of-the-art scheduling methods in both convergence and solution quality.
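A sketch of one way the vector reward (tardiness and energy terms) could be scalarized with dynamic weights before computing PPO returns; the weight schedule, discounting, and toy trajectory are assumptions for illustration and do not reproduce the paper's exact formulation.

```python
# Sketch of scalarizing the vector reward (tardiness, energy) with dynamic weights
# before the PPO return/advantage computation. The weighting scheme is a simple
# placeholder, not the paper's formulation.
import numpy as np

def dynamic_weights(step, horizon):
    """Placeholder schedule: gradually shift emphasis from tardiness to energy."""
    w_tardiness = 1.0 - 0.5 * step / horizon
    return np.array([w_tardiness, 1.0 - w_tardiness])

def discounted_returns(vector_rewards, gamma=0.99):
    """Scalarize each vector reward, then compute discounted returns."""
    horizon = len(vector_rewards)
    scalar = [float(dynamic_weights(t, horizon) @ r)
              for t, r in enumerate(vector_rewards)]
    returns, g = [], 0.0
    for r in reversed(scalar):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

# Toy trajectory: each reward is [negative tardiness increment, negative energy increment]
trajectory = [np.array([-2.0, -1.0]), np.array([-1.0, -3.0]), np.array([0.0, -0.5])]
print(discounted_returns(trajectory))
```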
The question of whether an ideal network exists with global scalability over its full life cycle has always been a first-principles problem in the research of network systems and architectures. Thus far, it has not been possible to scientifically practice the design criteria of an ideal network in a unimorphic network system, making it difficult to adapt to known services with clear application scenarios while supporting ever-growing future services with unexpected characteristics. Here, we theoretically prove that no unimorphic network system can simultaneously meet the full-cycle scalability requirement in three dimensions: service-level agreement (S), multiplexity (M), and variousness (V); we name this the "impossible SMV triangle" dilemma. Only by transforming the current network development paradigm can the contradiction between global scalability and a unified network infrastructure be resolved from the perspectives of thinking, methodology, and practice norms. In this paper, we propose a theoretical framework called the polymorphic network environment (PNE), the first principle of which is to separate or decouple application network systems from the infrastructure environment and, under the given resource conditions, use core technologies such as the elementization of network baselines, the dynamic aggregation of resources, and collaborative software and hardware arrangements to generate the capability of a "network of networks." This makes it possible to construct an ideal network system that is designed for change and is capable of symbiosis and coexistence with the generative network morpha in the spatiotemporal dimensions. An environment test for principle verification shows that the generated representative application network modalities can not only coexist without mutual influence but also independently match well-defined multimedia services or custom services under the constraints of technical and economic indicators.
This paper introduces a systems theory-driven framework for integrating artificial intelligence (AI) into traditional Chinese medicine (TCM) research, enhancing the understanding of TCM’s holistic material basis while adhering to evidence-based principles. Utilizing the System Function Decoding Model (SFDM), the research progresses through the define, quantify, infer, and validate phases to systematically explore TCM’s material basis. It employs a dual analytical approach that combines top-down, systems theory-guided perspectives with bottom-up, elements-structure-function methodologies, providing comprehensive insights into TCM’s holistic material basis. Moreover, the research examines AI’s role in the quantitative assessment and predictive analysis of TCM’s material components, proposing two specific AI-driven technical applications. This interdisciplinary effort underscores AI’s potential to enhance our understanding of TCM’s holistic material basis and establishes a foundation for future research at the intersection of traditional wisdom and modern technology.