From fifth-generation (5G) communication technology onward, non-terrestrial networks (NTNs) have emerged as a key component of future network architectures. Especially through the rise of low-Earth-orbit satellite constellations, NTNs enable a space Internet and present a paradigm shift in delivering reliable services to even the most remote regions on Earth. However, the extensive coverage and rapid movement of satellites pose unique challenges in user equipment access and inter-satellite transmission, impacting the quality of service and service continuity. This paper offers an in-depth review of NTN networking technologies in the context of sixth-generation (6G) mobile network evolution, focusing on access management, satellite mobility, and hetero-network slicing. Building on this foundation and considering the latest trends in NTN development, we then present innovative perspectives on emerging challenges, including satellite beamforming, handover mechanisms, and service delivery. Lastly, we identify key open research areas and propose future directions to improve NTN performance and accelerate satellite Internet deployment.
Satellite–terrestrial networks have garnered significant attention in recent years and are extensively applied in intelligent transportation and emergency rescue. This paper provides a comprehensive review of the latest research advancements in satellite–terrestrial integrated network (STIN) technologies from a network perspective, dividing STIN technologies into three categories according to network service flows—namely, topology maintenance, network routing, and orchestration transmission technologies. Furthermore, a novel network-layer perspective is considered to examine the applications of STINs across various domains, along with related frameworks, platforms, simulators, and datasets. Finally, this paper explores the mainstream research directions in STIN technologies, with an innovative focus on the network layer. It reviews the existing literature, outlines future trends, and discusses opportunities for collaboration with related fields.
Satellite mega-constellations (SMCs) encounter significant operational challenges due to various space environmental effects. While the mechanisms underlying some of these effects have been studied from a physical perspective, their precise impact on the network performance of SMCs remains unclear. To elucidate this further, this study investigates the spatiotemporal distribution characteristics of space environmental effects, such as solar radiation, ionizing radiation, and space debris, and the associated failure mechanisms in the nodes and links of SMCs. In addition, the impacts of solar radiation and single-event effects on the performance of the SMC system, particularly network throughput capacity, are examined. Results reveal that under the effect of the space environment, the throughput capacity degradation of the SMC system varies with parameters such as orbital altitude and inclination. Most importantly, the results bridge the gap between the physical phenomena of space environmental effects and network-level modeling. Finally, future research directions, including network topology control, constellation architecture, and network routing techniques, are outlined to help mitigate network performance degradation due to space environmental effects.
Commercial ultra-dense low-Earth-orbit (LEO) satellite constellations have recently been deployed to provide seamless global Internet services. To improve the satellite network transmission efficiency and provide robust wide-coverage computing services for future sixth-generation (6G) users, growing attention has been focused on LEO-satellite-based computing networks, to which ground users can offload computation tasks. However, how to design a LEO satellite constellation for computing networks, while considering discrepancies in the computing requirements of different regions, remains an open question. In this paper, we investigate an ultra-dense LEO-satellite-based computing network in which ground user terminals (UTs) offload part of their computing tasks to satellites. We formulate the ultra-dense constellation design problem as a multi-objective optimization problem (MOOP) to maximize the average coverage rate, transmission capacity, and computational capability, while minimizing the number of satellites. In order to depict the connectivity characteristics of satellite-based computing networks, we propose a terrestrial–satellite connectivity model to determine the coverage rate in different regions. We then develop a priority-adaptive algorithm that solves this MOOP to obtain the optimal inclined-orbit constellation. Simulation results verify the accuracy of our theoretical connectivity model and show the optimal constellation deployment, given quality-of-service (QoS) requirements. For the same number of deployed LEO satellites, the proposed constellation outperforms its existing counterparts; in particular, it achieves 25%–45% performance improvements in the average coverage rate.
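The scalarized form of such a MOOP can be made concrete with a small sketch. The Python snippet below is not the authors' algorithm; the objective models, search ranges, and priority weights are all illustrative assumptions. It shows a priority-adaptive weighted-sum search over constellation parameters, where the weights encode region-dependent priorities among coverage, capacity, compute, and satellite count.

```python
# Minimal sketch of a priority-adaptive weighted-sum scalarization of the
# constellation-design MOOP; all objective models here are toy placeholders.
import random

def score(n_planes, sats_per_plane, incl_deg, alt_km, weights):
    n = n_planes * sats_per_plane
    coverage = min(1.0, n * (incl_deg / 90.0) / 1200.0)       # toy coverage model
    capacity = min(1.0, n / (1.0 + alt_km / 550.0) / 800.0)   # toy capacity model
    compute = min(1.0, n / 1500.0)                            # toy compute model
    cost = n / 2400.0                                         # normalized satellite count
    w_cov, w_cap, w_cmp, w_cost = weights
    return w_cov * coverage + w_cap * capacity + w_cmp * compute - w_cost * cost

def priority_adaptive_search(weights, iters=5000, seed=0):
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(iters):
        cand = (rng.randint(4, 40),           # orbital planes
                rng.randint(4, 60),           # satellites per plane
                rng.uniform(30.0, 90.0),      # inclination (degrees)
                rng.uniform(400.0, 1200.0))   # altitude (km)
        val = score(*cand, weights)
        if val > best_val:
            best, best_val = cand, val
    return best, best_val

# Weights reflect regional priorities (coverage, capacity, compute, cost).
print(priority_adaptive_search(weights=(1.0, 0.5, 0.3, 0.2)))
```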
Traditional federated learning (FL) frameworks rely heavily on terrestrial networks, whose coverage limitations and increasing bandwidth congestion significantly hinder model convergence. Fortunately, the advancement of low-Earth-orbit (LEO) satellite networks offers promising new communication avenues to augment traditional terrestrial FL. Despite this potential, the limited satellite–ground communication bandwidth and the heterogeneous operating environments of ground devices—including variations in data, bandwidth, and computing power—pose substantial challenges for effective and robust satellite-assisted FL. To address these challenges, we propose SatFed, a resource-efficient satellite-assisted heterogeneous FL framework. SatFed implements freshness-based model-prioritization queues to optimize the use of highly constrained satellite–ground bandwidth, ensuring the transmission of the most critical models. Additionally, a multigraph is constructed to capture the real-time heterogeneous relationships between devices, including data distribution, terrestrial bandwidth, and computing capability. This multigraph enables SatFed to aggregate satellite-transmitted models into peer guidance, improving local training in heterogeneous environments. Extensive experiments with real-world LEO satellite networks demonstrate that SatFed achieves superior performance and robustness compared with state-of-the-art benchmarks.
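As an illustration of the freshness-based prioritization idea, here is a minimal Python sketch, not the SatFed implementation; the staleness metric (rounds since last sync) and the per-pass bandwidth budget are assumptions. The queue drains only the most critical models within the satellite–ground link budget.

```python
# Minimal sketch of a freshness-based model-prioritization queue for a
# bandwidth-limited satellite-ground link; staleness metric is an assumption.
import heapq

class FreshnessQueue:
    def __init__(self):
        self._heap = []   # entries: (-staleness, device_id, model_blob)

    def push(self, device_id, model_blob, last_synced_round, current_round):
        staleness = current_round - last_synced_round   # higher = more urgent
        heapq.heappush(self._heap, (-staleness, device_id, model_blob))

    def drain(self, bandwidth_budget_models):
        """Transmit only the most critical models within the link budget."""
        sent = []
        for _ in range(min(bandwidth_budget_models, len(self._heap))):
            _, device_id, blob = heapq.heappop(self._heap)
            sent.append((device_id, blob))
        return sent

q = FreshnessQueue()
q.push("dev-a", b"weights-a", last_synced_round=2, current_round=10)
q.push("dev-b", b"weights-b", last_synced_round=9, current_round=10)
print([d for d, _ in q.drain(bandwidth_budget_models=1)])  # -> ['dev-a']
```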
The rapid growth of low-Earth-orbit satellites has injected new vitality into future service provisioning. However, given the inherent volatility of network traffic, ensuring differentiated quality of service in highly dynamic networks remains a significant challenge. In this paper, we propose an online learning-based resource scheduling scheme for satellite–terrestrial integrated networks (STINs) aimed at providing on-demand services with minimal resource utilization. Specifically, we focus on: ① accurately characterizing the STIN channel, ② predicting resource demand with uncertainty guarantees, and ③ implementing mixed timescale resource scheduling. For the STIN channel, we adopt the 3rd Generation Partnership Project channel and antenna models for non-terrestrial networks. We employ a one-dimensional convolution and attention-assisted long short-term memory architecture for average demand prediction, while introducing conformal prediction to mitigate uncertainties arising from burst traffic. Additionally, we develop a dual-timescale optimization framework that includes resource reservation on a larger timescale and resource adjustment on a smaller timescale. We also design an online resource scheduling algorithm based on online convex optimization to guarantee long-term performance with limited knowledge of time-varying network information. Based on the Network Simulator 3 implementation of the STIN channel within our high-fidelity satellite Internet simulation platform, numerical results using a real-world dataset demonstrate the accuracy and efficiency of the prediction algorithms and online resource scheduling scheme.
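The conformal-prediction step can be illustrated with a minimal split-conformal sketch; the stand-in point forecaster and the toy traffic distribution below are assumptions, not the paper's LSTM model or dataset. Any point forecaster of average demand can be wrapped this way to obtain a distribution-free band that absorbs burst traffic.

```python
# Minimal split-conformal sketch: wrap a point forecaster of average demand
# with a distribution-free uncertainty band. Forecaster and data are stand-ins.
import numpy as np

def conformal_band(point_pred, calib_pred, calib_true, alpha=0.1):
    """Return (lo, hi) covering the true demand with probability ~1 - alpha."""
    residuals = np.abs(calib_true - calib_pred)     # calibration scores
    n = len(residuals)
    k = int(np.ceil((n + 1) * (1 - alpha)))         # conformal quantile rank
    q = np.sort(residuals)[min(k, n) - 1]
    return point_pred - q, point_pred + q

rng = np.random.default_rng(0)
calib_true = rng.poisson(100, size=500).astype(float)  # toy traffic samples
calib_pred = np.full(500, 100.0)                       # stand-in model output
print(conformal_band(point_pred=100.0, calib_pred=calib_pred,
                     calib_true=calib_true, alpha=0.1))
```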
With the rapid advancement of satellite communication technologies, space information networks (SINs) have become essential infrastructure for complex service delivery and cross-domain task coordination, facilitating the transition toward an intent-driven task-oriented coordination paradigm across the space, ground, and user segments. This study presents a novel intent-driven task-oriented network (IDTN) framework to address task scheduling and resource allocation challenges in SINs. The scheduling problem is formulated as a three-sided matching game that incorporates the preference attributes of entities across all network segments. To manage the variability of random task arrivals and dynamic resources, a context-aware linear upper-confidence-bound online learning mechanism is integrated to reduce decision-making uncertainty. Simulation results demonstrate the effectiveness of the proposed IDTN framework. Compared with conventional baseline methods, the framework achieves significant performance improvements, including a 4.4%–28.9% increase in average system reward, a 6.2%–34.5% improvement in resource utilization, and a 5.6%–35.7% enhancement in user satisfaction. The proposed framework is expected to facilitate the integration and orchestration of space-based platforms.
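For readers unfamiliar with the online learning component, the following is a minimal sketch of a contextual linear upper-confidence-bound (LinUCB) learner of the kind the framework integrates; the arm set (candidate scheduling decisions) and the context features are hypothetical.

```python
# Minimal LinUCB sketch: each arm is a candidate scheduling/resource decision,
# scored by a ridge estimate plus an exploration bonus. Features are toy values.
import numpy as np

class LinUCB:
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]     # per-arm Gram matrix
        self.b = [np.zeros(dim) for _ in range(n_arms)]   # per-arm reward vector

    def select(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                             # ridge estimate
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(theta @ context + bonus)        # optimism under uncertainty
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

bandit = LinUCB(n_arms=3, dim=4)          # e.g., three candidate assignments
ctx = np.array([0.2, 0.8, 0.1, 1.0])      # toy task/resource context
arm = bandit.select(ctx)
bandit.update(arm, ctx, reward=1.0)
```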
The integration of emerging technologies such as artificial intelligence and cloud computing is accelerating the development of intelligent and autonomous satellite systems. However, limitations in onboard sensing, computing, storage, and energy resources continue to constrain the intelligent functionalities of individual satellites. Currently, most studies focus on either satellite intelligence or satellite networking, while systematic studies on their integration remain scarce. To address this gap, this paper introduces the concept of an intelligent satellite cluster system, which leverages satellite networks to enable collaborative intelligence among satellites, thereby enhancing the overall system intelligence. After summarizing the typical use cases of the intelligent satellite cluster system, we analyze the corresponding demands on network capabilities. Based on these demands, we propose the concept of the Internet of satellites (IoS) tailored to support the intelligent satellite cluster system. Specifically, we design both the logical and physical architectures of IoS and elaborate on its key enabling technologies. Finally, we present the research progress and outcomes achieved by our team on these core technologies, and discuss the challenges that remain. This paper aims to build consensus around intelligent and connected satellite technologies, promote innovation and standardization, and enhance the intelligent service capabilities of future large-scale satellite systems.
Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), not only affects the lungs but also damages various non-pulmonary organs, resulting in tissue injury and potential long-term sequelae in infected individuals. COVID-19 is likely to persist as a public health concern, given the frequent emergence of new mutations and viral strains. Multiple clinical lines of evidence indicate the efficacy of traditional Chinese medicine (TCM) in the prevention and treatment of COVID-19. However, the exact mechanism underlying these effects remains unclear. In this perspective review, we summarize the utility of in vitro three-dimensional (3D) cultured organoid models and organ-on-a-chip (OoC) technology for studying COVID-19 pathogenesis, viral tropism, and infectious mechanisms across different tissues. We highlight the successful application of these platforms in aiding drug development and discuss their advantages and limitations. We also review how such organotypic models can be employed to study TCMs. Finally, we discuss the opportunities for integrated microphysiological multi-tissue models to rapidly discover active components and potential targets in the context of COVID-19. The utilization of these emerging technologies could accelerate drug discovery and the modernization of TCM.
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection disrupts immune function by activating inflammatory pathways related to disease severity. Understanding virus-induced inflammation is crucial for developing anti-coronavirus disease 2019 (COVID-19) therapies. This study used principal component analysis, heatmaps, and other tools to examine mitogen-activated protein kinase (MAPK) pathway gene expression, and found that alterations in MAPK pathway genes were correlated with immune response changes. Further analysis linked P38-related gene expression to clinical symptoms, with transcriptomic data showing a strong association between MAPK gene expression changes and SARS-CoV-2 infection. In infected cell models, P-P38 protein and inflammatory factors were significantly upregulated. Analysis of the GSE217948 dataset showed a significant correlation between plasma markers (interferon inducible protein 10 (IP-10) and interleukin (IL)-6) and symptoms (fever and fatigue). Activation of P38 appears to release inflammatory factors tied to these symptoms, making P38 a key pathway in virus-induced inflammation. Forsythin (KD-1), an anti-inflammatory compound from forsythia, showed efficacy against SARS-CoV-2, inhibiting replication, reducing P38 levels, and lowering inflammatory markers (IL-6, IL-8, IP-10, tumor necrosis factor-α (TNF-α), and monocyte chemotactic protein-1 (MCP-1)) in both cell and animal models. In specific cell models, KD-1 blocked P38 activation, thereby reducing inflammation. In a P38 overexpression model, KD-1 decreased P38 phosphorylation and downstream inflammatory proteins. This study identifies the P38 pathway as a therapeutic target for COVID-19, supporting KD-1’s potential in mitigating virus-induced inflammation and guiding further research into anti-inflammatory treatments for respiratory viruses.
Ionizing radiation presents an important solution for virus inactivation. However, its efficacy for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) inactivation and the underlying mechanisms remain unclear. This study demonstrates radiosensitivity and radiation-induced biological changes in SARS-CoV-2 using 20 wild-type and mutant strains. The results show that 1.2 kGy of electron beam (E-beam) or 0.9 kGy of X-ray irradiation can eliminate 99.99% of SARS-CoV-2 particles. The Delta and various Omicron variants exhibit heightened sensitivity to radiation compared to the wild-type, showing nearly 99.99% inactivation efficiency at 1.0 and 0.8 kGy. The relationship between irradiation dose and the logarithmic reduction in virus load adheres to a dose–response model, characterized by extremely narrow windows. Spike (S) protein disruption, rather than the commonly accepted nucleic acid cleavage, is identified as the primary inactivation mechanism (triggering a conformation transition of S protein from pre-fusion to post-fusion with minimal impact on nucleic acid integrity). This study introduces the concept of targeting critical proteins in coronavirus inactivation, offering valuable insight for infectious coronavirus disease control and vaccine development.
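For orientation, the commonly used log-linear (single-hit) reading of such a dose–response model is sketched below; the D10 values are back-calculated from the 4-log (99.99%) doses quoted above and are not parameters reported by the study.

```latex
% Log-linear (single-hit) dose-response form: D is absorbed dose and D_10 the
% dose for one decade (90%) reduction. D_10 values are back-calculated from the
% quoted 4-log inactivation doses, not reported fit parameters.
\log_{10}\frac{N(D)}{N_0} = -\frac{D}{D_{10}},
\qquad
D_{10}^{\mathrm{E\text{-}beam}} \approx \frac{1.2\ \mathrm{kGy}}{4} = 0.30\ \mathrm{kGy},
\qquad
D_{10}^{\mathrm{X\text{-}ray}} \approx \frac{0.9\ \mathrm{kGy}}{4} \approx 0.23\ \mathrm{kGy}
```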
The NOD-like receptor family pyrin domain-containing protein 3 (NLRP3) inflammasome is an intracellular protein complex containing a nucleotide-binding oligomerization domain, leucine-rich repeats, and a pyrin domain. It is a key regulator of inflammation in viral pneumonia (VP). Small-molecule inhibitors targeting various NLRP3 binding sites are advancing into early clinical trials, but their therapeutic utility is incompletely established. Xuanfei Baidu Formula (XF), clinically used for VP treatment, attenuates NLRP3 activation by hampering caspase-11 to impede polarization of pro-inflammatory macrophages in a model of lipopolysaccharide (LPS)-induced lung injury in mice. Herein, we demonstrate that XF attenuated influenza A virus (IAV)-induced lung inflammation as well as lung injury in immunocompetent (but not in macrophage-depleted) mice. RNA-sequencing of sorted lung macrophages from IAV-infected mice revealed that XF inhibited activation of the NLRP3 inflammasome and interleukin (IL)-1β production. Quantitative nuclear magnetic resonance of XF enabled us to develop XF-Comb1, a fixed-ratio combination of five bioactive compounds that recapitulated the bioactivity of XF in suppressing NLRP3 activation in macrophages in vitro and in vivo. Interestingly, XF-Comb1 inhibited assembly of the NLRP3 inflammasome through multi-site interactions with functional residues of NLRP3, apoptosis-associated speck-like protein containing caspase recruitment domain (ASC), and caspase-1. Taken together, this work advances the development of NLRP3 inhibitors by translating a complex herbal formula into defined bioactive compounds.
Cytochrome P450 enzymes (P450s or CYPs) are the primary metabolic contributors to the absorption, distribution, metabolism, and excretion (ADME) properties of small-molecule drugs. These enzymes can catalyze various types of reactions, including metabolic reactions that occur at nitrogen (N) and sulfur (S) sites of small molecules. In this review, we conducted a comprehensive statistical analysis of 294 P450-mediated small-molecule substrates, among which more than 47% of the substrates contained N and S. The purpose of the analysis is to elucidate the broad-spectrum cross-reactivity and specificity between these substrates and various CYP isoforms across five reaction types. Our findings reveal that substrates with molecular weights greater than 500 Da or less than 200 Da are predominantly governed by the dominant effect of the CYP isoform’s active sites. In contrast, small- to medium-sized molecules with molecular weights ranging from 200 to 400 Da exhibit a stronger dependence on the types of heteroatoms they contain, with the size of the enzyme’s catalytic site (cavity) playing a negligible role in determining substrate specificity. This review starts from the mechanisms of P450-mediated metabolism of N- and S-containing compounds, and systematically analyzes the structural characteristics of substrates involved in N-dealkylation, N-oxidation, and S-oxidation, as well as their metabolic interactions with P450s. These analyses provide a new perspective for improving the existing understanding of the relationship between P450 substrate specificity and substrate structural characteristics, and offer valuable guidance for enhancing drug design and predicting metabolic stability based on the P450-catalyzed reaction framework.
This prospective study aimed to investigate the associations of untreated cholesterol levels and their longitudinal changes, especially low levels, with all-cause and cause-specific mortality in different populations. Participants were drawn from two Chinese cohorts and the UK Biobank, excluding those with lipid-lowering medications, coronary heart disease (CHD), stroke, cancer, clinically diagnosed chronic obstructive pulmonary disease, low body mass index (< 18.5 kg·m⁻²) at baseline, and deaths within the first two years to minimize reverse causality. Individual cholesterol changes were assessed in a subset who attended the resurvey after over four years. Mortality data were linked to registries, and risks were estimated using Cox proportional hazards models. A total of 163 115 Chinese and 317 305 UK adults were included (mean age, 49–61 years), with 43%, 81%, and 44% males in the Dongfeng–Tongji, Kailuan, and UK Biobank cohorts, respectively. During a median follow-up of 9.7–12.9 years, 9553 and 15 760 deaths were documented in the Chinese cohorts and UK Biobank, respectively. After multivariate adjustments, nonlinear relationships were observed between total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), and non-high-density lipoprotein cholesterol (non-HDL-C) levels and mortality. In both populations, high cholesterol was primarily associated with CHD mortality, while low cholesterol was associated with all-cause and cancer mortality (P for nonlinearity ≤ 0.0161). The optimal levels for all-cause mortality risk in Chinese adults (TC: 200 mg·dL⁻¹; LDL-C: 130 mg·dL⁻¹; non-HDL-C: 155 mg·dL⁻¹) were lower than those in the UK Biobank but consistent with guideline recommendations. Additionally, decreasing cholesterol levels over four years were associated with higher all-cause and cancer mortality in the Chinese cohorts (P for nonlinearity ≤ 0.0100). Participants with low TC, LDL-C, or non-HDL-C levels at both baseline and resurvey experienced elevated all-cause mortality risks in both populations, as did those with low/medium baseline levels and > 20% reductions over time in Chinese adults. In conclusion, higher TC, LDL-C, and non-HDL-C levels are associated with elevated CHD mortality. Importantly, low and/or longitudinally decreasing cholesterol levels are robustly associated with increased all-cause and cancer mortality, potentially serving as markers of premature death. Regular cholesterol monitoring, with attention to both high and low levels, is recommended to inform guideline updates and clinical strategies.
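For reference, the reported risk estimates follow the standard Cox proportional hazards form sketched below; the covariate set shown is schematic rather than the study's full adjustment set, and nonlinear cholesterol effects of the kind reported here are typically captured with spline terms rather than the linear terms written out.

```latex
% Standard Cox proportional hazards form: h_0(t) is the baseline hazard and
% x_1, ..., x_p are schematic covariates (cholesterol level, age, sex, ...);
% each hazard ratio HR_j is the exponentiated coefficient.
h(t \mid \mathbf{x}) = h_0(t)\,\exp\!\left(\beta_1 x_1 + \cdots + \beta_p x_p\right),
\qquad
\mathrm{HR}_j = e^{\beta_j}
```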
The alleviation of chemotherapy-induced myelosuppression is an integral part of sustained and effective cancer therapy. Although the role of the hematopoietic microenvironment in the regulation of hematopoietic stem/progenitor cells (HSPCs) has been widely studied, no drugs that improve hematopoiesis by targeting and modulating the hematopoietic microenvironment have been used clinically. Here, we show that the active small molecule icaritin (ICT) from the Chinese herb Epimedium brevicornum Maxim effectively alleviates chemotherapy-induced hemocytopenia in both mouse and zebrafish models. We demonstrated that ICT enhanced the number and hematopoietic function of HSPCs and that the beneficial effects of ICT occurred indirectly. Single-cell sequencing analysis confirmed that the target cells of ICT in the bone marrow microenvironment were mesenchymal stromal cells (MSCs). In addition, peroxiredoxin 1 (PRDX1) was identified as a direct target of ICT. Furthermore, ICT stimulated MSCs to express the effector molecule C–X–C motif chemokine ligand 12 (CXCL12) through the PRDX1–reactive oxygen species (ROS)–mitogen-activated protein kinase (MAPK) signaling axis, thereby increasing the number and function of HSPCs. These results suggest that ICT is a promising compound for achieving targeted modulation of the hematopoietic microenvironment to restore hematopoiesis after chemotherapy.
Anemoside B4 (AB4), a triterpenoidal saponin derived from Pulsatilla chinensis, has garnered considerable attention for its potent anti-inflammatory and immunomodulatory activities, culminating in its approval for clinical trials by the Center for Drug Evaluation, National Medical Products Administration, for the treatment of mild to moderate ulcerative colitis. Despite this, AB4’s therapeutic potential remained underexplored until the development of its injection formulation. This review discusses the scientific rationale and theoretical framework behind AB4’s development, offering a new paradigm and innovative research strategy for discovering lead compounds or drug candidates from natural medicines. In-depth investigations into AB4’s cellular targets, biochemical pathways, and administration routes have provided valuable insights into its druggability evaluation and clinical potential. The high water solubility of AB4, attributable to its multiple sugar units, imposes limitations on its bioavailability and pharmacokinetic profiles. To address this, structural modification via chemical methods and enzymatic hydrolysis have been employed, resulting in derivatives with reduced molecular weight, improved bioavailability, enhanced pharmacological activity, and greater clinical potential. These advances lay a solid foundation for the continued development of AB4 and its derivatives as promising therapeutic agents.
The abundant microbe-associated molecular patterns (MAMPs) and nanoscale structures of bacterial extracellular vesicles (bEVs) collectively facilitate their versatile biological activities. Building on these inherent properties, engineering methods encompassing physical, chemical, and genetic modifications have been strategically employed to enhance the functional diversity of bEVs. Therefore, bEVs are being explored as innovative and promising platforms for developing immunotherapeutic strategies targeting diverse pathological states. To establish a foundational understanding of bEVs, we first summarize their biogenesis, classification, structures, and biomolecular constituents. This review then discusses techniques for bEV production and modification and explores the immunological characteristics and effects of engineered bEVs, along with their biomedical applications. Special attention is devoted to advanced engineering approaches and to outlining the challenges and emerging avenues in the development of engineered bEVs. This review aims to systematically construct an evidence-based and comprehensive framework that promotes translational optimization and clinical implementation of engineered bEVs, thereby maximizing their application potential in the biomedical field.
Additives are widely employed to regulate the morphology, size, and agglomeration degree of crystalline materials during crystallization to enhance their functional, physical, and powder properties. However, the existing methods for screening and validating target additives require a large quantity of materials and involve tedious molecular simulation/crystallization experiments, making them time-consuming, resource-intensive, and reliant on the operator’s experience level. To overcome these challenges, we propose a computer vision-assisted high-throughput additive screening system (CV-HTPASS), which comprises a high-throughput additive screening device, in situ imaging equipment, and an artificial intelligence (AI)-assisted image-analysis algorithm. Using the CV-HTPASS, we performed high-throughput screening experiments on additives to regulate the succinic acid crystal properties, generating thousands of crystal images with diverse crystal morphologies. To extract valuable crystal information from the massive data and improve the analysis accuracy and efficiency, the AI-based image-analysis algorithm was implemented innovatively for the segmentation, classification, and data mining of crystals with four morphologies to further screen the target additive. Subsequently, scale-up crystallization experiments conducted under optimized conditions demonstrated that succinic acid products exhibited a preferred cubic morphology, reduced agglomeration degree, narrowed crystal size distribution, and improved powder properties. The proposed CV-HTPASS offers a highly efficient approach for scale-up experiments. Further, it provides a platform for the screening of additives and the optimization of the powder properties of crystal products in industrial-scale crystallization processes.
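To illustrate the classification step, the following is a minimal sketch, not the CV-HTPASS pipeline, of a four-class crystal-morphology classifier; the network architecture, input size, and class names are illustrative assumptions.

```python
# Minimal sketch of a four-class crystal-morphology classifier of the kind an
# AI-assisted image-analysis step might use; all design choices are illustrative.
import torch
import torch.nn as nn

CLASSES = ["cubic", "needle", "plate", "agglomerate"]   # hypothetical labels

class MorphologyNet(nn.Module):
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):   # x: (batch, 1, H, W) grayscale micrographs
        return self.head(self.features(x))

model = MorphologyNet()
dummy = torch.randn(4, 1, 128, 128)       # stand-in in situ crystal images
print(model(dummy).argmax(dim=1))         # predicted morphology indices
```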
Tunnels are a crucial component of urban infrastructure, continuously exposed to various hazards, threats, and stressors. Events such as earthquakes, fires, and floods, along with aging and construction-related disturbances, pose significant challenges to tunnel resilience. Reliable fragility, restoration, and traffic reinstatement models are essential for assessing and quantifying resilience, as they allow infrastructure operators to prioritize maintenance and adapt to evolving threats in complex transportation systems. Although the vulnerability and fragility of tunnels have been widely researched over the last decade, studies focusing on tunnel restoration to quantify resilience remain scarce. This gap prevents operators from implementing proactive and reactive adaptation measures to ensure seamless tunnel functionality. To address this issue, this study presents a novel, fit-for-purpose, damage-level-dependent probabilistic approach for quantifying tunnel recovery. It introduces the first realistic, practice-led restoration models that enable resilience quantification in tunnels. To develop these models, a global expert survey was conducted to establish reinstatement (traffic capacity) and restoration (structural capacity) models tailored to tunnel resilience assessments. A detailed questionnaire was designed to gather expert input on required restoration tasks, their duration, sequencing, and cost. The survey focused primarily on damage induced by seismic events, incorporating idle times and traffic capacity gains over time. The results were then used to generate deterministic and probabilistic reinstatement and restoration models. The deterministic models are intended for practical applications, while the probabilistic models account for epistemic uncertainties and are presented in a reproducible format for further development across different hazards and applications. A case study is included to demonstrate the resilience assessment of a typical tunnel using the newly developed restoration models. The findings will help infrastructure operators and city planners to accurately assess tunnel resilience, enabling informed investment decisions.
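The flavor of such probabilistic restoration models can be conveyed with a small Monte Carlo sketch; the task list, idle time, and triangular duration parameters below are illustrative stand-ins, not the survey results, and tasks are assumed strictly sequential.

```python
# Minimal sketch: Monte Carlo sampling of sequential restoration tasks (with a
# mobilization/idle time) to produce fractiles of time to full restoration for
# one damage level. All task names and durations are hypothetical.
import random

def tri(rng, low, mode, high):
    # random.triangular takes (low, high, mode); reorder for readability
    return rng.triangular(low, high, mode)

TASKS = [  # (task, (min, most-likely, max) duration in days)
    ("inspection",      (2, 5, 10)),
    ("debris removal",  (5, 10, 20)),
    ("lining repair",   (20, 45, 90)),
    ("systems restore", (5, 15, 30)),
]
IDLE = (3, 10, 30)   # idle time before works start

def sample_completion_time(rng):
    t = tri(rng, *IDLE)
    for _, dur in TASKS:
        t += tri(rng, *dur)          # tasks assumed strictly sequential
    return t

rng = random.Random(42)
samples = sorted(sample_completion_time(rng) for _ in range(10_000))
for p in (0.16, 0.50, 0.84):         # fractiles of the restoration time
    print(f"{int(p * 100)}th percentile: {samples[int(p * len(samples))]:.0f} days")
```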
The photochemical conversion of plastic waste into valuable resources under ambient conditions is challenging. Achieving efficient photocatalytic conversion necessitates intimate contact between the photocatalyst and plastic substrate, as water molecules are readily oxidized by photogenerated holes, potentially bypassing the plastic as the electron donor. This study demonstrated a novel strategy for depositing polystyrene (PS) waste onto a photoanode by leveraging its solubility in specific organic solvents, including acetone and chloroform, thus enhancing the interface contact. We used an anodization technique to fabricate a skeleton-like porous tungsten oxide (WO3) structure, which exhibited higher durability against detachment from a conductive substrate than the WO3 photoanode fabricated using the doctor blade method. Upon illumination, the photogenerated holes were transferred from WO3 to PS, promoting the oxidative degradation of plastic waste under ambient conditions. Consequently, the oxidative degradation of PS on the anode side generated carbon dioxide, while the cathodic process produced hydrogen gas through water reduction. Our findings pave the way for sunlight-driven plastic waste treatment technologies that concurrently generate valuable fuels or chemicals and offer the dual benefits of cost savings and environmental protection.
As advancements in the Internet of Things (IoT) and unmanned technologies continue to progress, the development of unmanned system of systems (USS) has reached unprecedented levels. While prior research has predominantly examined temporal variations in USS resilience, spatial changes remain underexplored. However, USS may involve kinetic engagements and frequent spatial changes during mission execution, affecting signal interference in data-layer communications. Although time-dependent factors primarily govern the mission effectiveness of the USS, spatial factors influence the transmission stability of the data layer. Consequently, assessing spatiotemporal variations in USS performance is critical. To address these challenges, this study introduces a spatiotemporal resilience assessment framework, which evaluates USS resilience across both temporal and spatial dimensions. Furthermore, we propose a spatiotemporal resilience optimization scheme that enhances system adaptability throughout the mission lifecycle, with a particular emphasis on prevention and recovery strategies. Finally, we validate the proposed concepts and methods with a case study featuring a regular hexagonal deployment of the USS. The results show that spatiotemporal resilience better reflects the spatial change characteristics of the USS, and the proposed optimization strategy improves the prevention spatiotemporal resilience, recovery spatiotemporal resilience, and entire-process spatiotemporal resilience of the USS by 0.22%, 7.8%, and 11.3%, respectively.
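For concreteness, a minimal sketch of the usual performance-integral resilience index is given below; the piecewise performance trace (a drop at disruption followed by linear recovery) is illustrative and does not reproduce the paper's spatiotemporal formulation, which additionally accounts for spatial variation.

```python
# Minimal sketch: resilience as the normalized area under a performance curve
# P(t) over the mission window; the toy trace models disruption and recovery.
def resilience(performance, dt=1.0, p_nominal=1.0):
    """Normalized area under the performance curve P(t)."""
    area = sum(performance) * dt
    return area / (p_nominal * len(performance) * dt)

trace = [1.0] * 20                                    # nominal operation
trace += [0.4 + 0.6 * k / 30 for k in range(30)]      # disruption, linear recovery
trace += [1.0] * 10                                   # restored operation
print(f"entire-process resilience index: {resilience(trace):.3f}")
```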
An effective energy management strategy (EMS) is essential to optimize the energy efficiency of electric vehicles (EVs). With the advent of advanced machine learning techniques, the focus on developing sophisticated EMS for EVs is increasing. Here, we introduce LearningEMS: a unified framework and open-source benchmark designed to facilitate rapid development and assessment of EMS. LearningEMS is distinguished by its ability to support a variety of EV configurations, including hybrid EVs, fuel cell EVs, and plug-in EVs, offering a general platform for the development of EMS. The framework enables detailed comparisons of several EMS algorithms, encompassing imitation learning, deep reinforcement learning (RL), offline RL, model predictive control, and dynamic programming. We rigorously evaluated these algorithms across multiple perspectives: energy efficiency, consistency, adaptability, and practicability. Furthermore, we discuss state, reward, and action settings for RL in EV energy management, introduce a policy extraction and reconstruction method for learning-based EMS deployment, and conduct hardware-in-the-loop experiments. In summary, we offer a unified and comprehensive framework that comes with three distinct EV platforms, an EMS policy data set covering over 10 000 km, ten state-of-the-art algorithms, and over 160 benchmark tasks, along with three learning libraries. Its flexible design allows easy expansion for additional tasks and applications. The open-source algorithms, models, data sets, and deployment processes foster additional research and innovation in EV and broader engineering domains.
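To ground the discussion of state, reward, and action settings for RL-based EMS, here is a minimal gym-style environment skeleton; it is not the LearningEMS API, and the vehicle dynamics, fuel model, and reward coefficients are illustrative assumptions for a hybrid-EV case.

```python
# Minimal sketch of a gym-style hybrid-EV energy-management environment:
# state = (battery SOC, power demand), action = engine power split,
# reward = negative fuel use plus an SOC-holding penalty. All models are toys.
import random

class ToyHEVEnv:
    def __init__(self, cycle_len=100, seed=0):
        self.rng = random.Random(seed)
        self.cycle_len = cycle_len

    def reset(self):
        self.t, self.soc = 0, 0.6
        self.demand = self._draw_demand()
        return (self.soc, self.demand)

    def _draw_demand(self):
        return 10 + 20 * self.rng.random()        # toy power demand (kW)

    def step(self, engine_power_kw):
        battery_power = self.demand - engine_power_kw
        self.soc -= 0.001 * battery_power         # toy battery dynamics
        fuel = 0.05 * max(engine_power_kw, 0.0)   # toy fuel-rate model
        reward = -fuel - 10.0 * (self.soc - 0.6) ** 2   # fuel + SOC-hold penalty
        self.t += 1
        self.demand = self._draw_demand()
        done = self.t >= self.cycle_len
        return (self.soc, self.demand), reward, done, {}

env = ToyHEVEnv()
obs, done = env.reset(), False
while not done:
    obs, r, done, _ = env.step(engine_power_kw=15.0)   # constant-split baseline
```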