This study explores the transformative potential of artificial intelligence (AI) in addressing the challenges posed by terahertz ultra-massive multiple-input multiple-output (UM-MIMO) systems. It begins by outlining the characteristics of terahertz UM-MIMO systems and identifies three primary challenges for transceiver design: computational complexity, modeling difficulty, and measurement limitations. The study posits that AI provides a promising solution to these challenges. Three systematic research roadmaps are proposed for developing AI algorithms tailored to terahertz UM-MIMO systems. The first roadmap, model-driven deep learning (DL), emphasizes the importance of leveraging available domain knowledge and advocates the adoption of AI only to enhance bottleneck modules within an established signal processing or optimization framework. Four essential steps are discussed: algorithmic frameworks, basis algorithms, loss function design, and neural architecture design. The second roadmap presents channel state information (CSI) foundation models, aimed at unifying the design of different transceiver modules by focusing on their shared foundation, that is, the wireless channel. The training of a single compact foundation model is proposed to estimate the score function of wireless channels, which serves as a versatile prior for designing a wide variety of transceiver modules. Four essential steps are outlined: general frameworks, conditioning, site-specific adaptation, and the joint design of CSI foundation models and model-driven DL. The third roadmap aims to explore potential directions for applying pretrained large language models (LLMs) to terahertz UM-MIMO systems. Several application scenarios are envisioned, including LLM-based estimation, optimization, search, network management, and protocol understanding. Finally, the study highlights open problems and future research directions.
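The idea of using a channel score function as a versatile prior can be illustrated with a deliberately simple toy. Everything below (the 1-D Gaussian channel, the variances, the Langevin step size) is an illustrative assumption, not the paper's setup: in the roadmap, a trained neural network would play the role of `prior_score`, and the same plug-in structure would serve estimation, detection, and other transceiver modules.

```python
# Toy sketch (not the paper's method): a channel score function
# grad_h log p(h) used as a plug-and-play prior for estimation.
# The prior here is a 1-D Gaussian N(0, s_h2) so the score is known
# in closed form; a trained network would replace it in practice.
import random, math

random.seed(0)
s_h2, s_n2 = 1.0, 0.1          # prior and noise variances (assumed)

def prior_score(h):
    return -h / s_h2            # score of N(0, s_h2)

def posterior_sample(y, steps=2000, eps=1e-3):
    """Langevin dynamics on log p(h|y) = log p(y|h) + log p(h)."""
    h = 0.0
    for _ in range(steps):
        grad = (y - h) / s_n2 + prior_score(h)
        h += eps * grad + math.sqrt(2 * eps) * random.gauss(0, 1)
    return h

y = 0.8                         # noisy observation y = h + n
samples = [posterior_sample(y) for _ in range(200)]
est = sum(samples) / len(samples)
mmse = s_h2 / (s_h2 + s_n2) * y # analytic MMSE for this toy model
print(est, mmse)                # the two should be close
```

Because the toy prior is Gaussian, the Langevin average can be checked against the closed-form MMSE estimate; with a learned score for realistic channels, no such closed form exists, which is exactly why the sampled posterior is useful.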
Increasing demands for massive data transmission pose significant challenges to communication systems. Compared with traditional communication systems that focus on the accurate reconstruction of bit sequences, semantic communications (SemComs), which aim to deliver information connotation, are regarded as a key technology for sixth-generation (6G) mobile networks. Most current SemComs utilize an end-to-end (E2E) trained neural network (NN) for semantic extraction and interpretation, which lacks interpretability for further optimization. Moreover, NN-based SemComs assume that the application and physical layers of the protocol stack can be jointly trained, which is incompatible with current digital communication systems. To overcome those drawbacks, we propose a SemCom system that employs explicit semantic bases (Sebs) as the basic units to represent semantic connotations. First, a mathematical model of Sebs is proposed to build an explicit knowledge base (KB). Then, the Seb-based SemCom architecture is proposed, including both a communication mode and a KB update mode to enable the evolution of communication systems. Sem-codec and channel codec modules are designed specifically, with the assistance of an explicit KB for the efficient and robust transmission of semantics. Moreover, unequal error protection (UEP) is strategically implemented, considering communication intent and the importance of Sebs, thereby ensuring the reliability of critical semantics. In addition, a Seb-based SemCom protocol stack that is compatible with the fifth-generation (5G) protocol stack is proposed. To assess the effectiveness and compatibility of the proposed Seb-based SemComs, a case study focusing on an image-transmission task is conducted. 
The simulations show that our Seb-based SemComs outperform state-of-the-art works in learned perceptual image patch similarity (LPIPS) by over 20% under varying communication intents and exhibit robustness under fluctuating channel conditions, highlighting the advantages of the interpretability and flexibility afforded by explicit Sebs.
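The unequal-error-protection idea can be sketched numerically. The mapping from Seb importance to code rate, the rate bounds, and the budget below are all assumptions chosen for illustration, not the paper's actual scheme; the point is only that more important Sebs receive lower code rates (stronger protection) under a fixed average-rate budget.

```python
# Hedged sketch (the importance-to-rate rule and all numbers are
# assumptions, not the paper's UEP scheme): map Seb importance
# weights to per-Seb code rates under an average-rate budget.
def allocate_rates(importance, r_min=0.3, r_max=0.9, budget=0.6):
    """Importance weights in [0, 1] -> code rates whose mean is
    `budget`, clipped to [r_min, r_max]; lower rate = stronger FEC."""
    raw = [r_max - (r_max - r_min) * w for w in importance]
    shift = budget - sum(raw) / len(raw)      # meet the budget on average
    return [min(r_max, max(r_min, r + shift)) for r in raw]

weights = [0.9, 0.5, 0.1]     # Seb importance derived from intent
rates = allocate_rates(weights)
print(rates)                  # most important Seb gets the lowest rate
```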
Semantic communication (SemCom) has emerged as a transformative paradigm for future wireless networks, aiming to improve communication efficiency by transmitting only the semantic meaning (or its encoded version) of the source data rather than the complete set of bits (symbols). However, traditional deep-learning-based SemCom systems present challenges such as limited generalization, low robustness, and inadequate reasoning capabilities, primarily due to the inherently discriminative nature of deep neural networks. To address these limitations, generative artificial intelligence (GAI) is seen as a promising solution, offering notable advantages in learning complex data distributions, transforming data between high- and low-dimensional spaces, and generating high-quality content.
This paper explores the applications of GAI in SemCom and presents a comprehensive study. It begins by introducing three widely used SemCom systems enabled by classical GAI models: variational autoencoders, generative adversarial networks, and diffusion models. For each system, the fundamental concept of the GAI model, the corresponding SemCom architecture, and a literature review of recent developments are provided. Subsequently, a novel generative SemCom system is proposed, incorporating cutting-edge GAI technology—large language models (LLMs). This system features LLM-based artificial intelligence (AI) agents at both the transmitter and receiver, which act as “brains” to enable advanced information understanding and content regeneration capabilities, respectively. Unlike traditional systems that focus on bitstream recovery, this design allows the receiver to directly generate the desired content from the coded semantic information sent by the transmitter. As a result, the communication paradigm shifts from “information recovery” to “information regeneration,” marking a new era in generative SemCom. A case study on point-to-point video retrieval is presented to demonstrate the effectiveness of the proposed system, showing a 99.98% reduction in communication overhead and a 53% improvement in average retrieval accuracy compared to traditional communication systems. Furthermore, four typical application scenarios for generative SemCom are described, followed by a discussion of three open issues for future research. In summary, this paper provides a comprehensive set of guidelines for applying GAI in SemCom, laying the groundwork for the efficient deployment of generative SemCom in future wireless networks.
Mega-constellation networks have recently gained significant research attention because of their potential for providing ubiquitous and high-capacity connectivity in future sixth-generation (6G) wireless communication systems. However, the high dynamics of network topology and large scale of a mega-constellation pose new challenges to constellation simulation and performance evaluation. To address these issues, we introduce UltraStar, a high-fidelity and high-efficiency computer simulator to support the development of 6G wireless communication systems with low-Earth-orbit mega-constellation satellites. The simulator facilitates the design and performance analysis of various algorithms and protocols for network operation and deployment. We propose a systematic, scalable, and comprehensive simulation architecture for the high-fidelity modeling of network configurations and for performing high-efficiency simulations of network operations and management capabilities, while providing users with intuitive visualizations. We capture heterogeneous topology characteristics by establishing an environment update algorithm that incorporates real ephemeris data for satellite orbit prediction, sun outages, and link handovers. For a realistic simulation of software and hardware configurations, we develop a Network Simulator 3 based network model to support networking protocol extensions. We propose a message passing interface-based parallel and distributed approach with multiple cores or machines to achieve high simulation efficiency in large and complex network scenarios. Experimental results demonstrate that the high fidelity and efficiency of UltraStar can help pave the way for 6G integrated space-ground networks.
In the upcoming sixth-generation (6G) era, supporting field robots for unmanned operations has emerged as an important application direction. To provide connectivity in remote areas, the space-air-ground integrated network (SAGIN) will play a crucial role in extending coverage. Through SAGIN connections, the sensors, edge platforms, and actuators form sensing-communication-computing-control (SC3) loops that can automatically execute complex tasks without human intervention. Similar to the reflex arc, the SC3 loop is an integrated structure that cannot be deconstructed. This necessitates a systematic approach that takes the SC3 loop rather than the communication link as the basic unit of SAGINs. Given the resource limitations in remote areas, we propose a radio-map-based task-oriented framework that uses environmental and task-related information to enable task-matched service provision. We detail how the network collects and uses this information and present task-oriented scheduling schemes. In the case study, we use a control task as an example and validate the superiority of the task-oriented closed-loop optimization scheme over traditional communication schemes. Finally, we discuss open challenges and possible solutions for developing nervous-system-like SAGINs.
The deep integration of mobile networks with artificial intelligence (AI) has emerged as a pivotal driving force for the sixth-generation (6G) mobile network. AI-native 6G represents a paradigm shift for mobile networks, as it not only embeds AI into network components to enhance network intelligence and automation but also transforms 6G into a foundational infrastructure for enabling pervasive AI applications and services. This paper proposes a novel 6G AI-native architecture. The challenges and requirements for the AI-native 6G mobile network are first analyzed, followed by the development of a task-driven approach for architecture design based on insights from system theory. Then, a 6G AI-native architecture is proposed, featuring the integration of distributed AI data and computing components with layered centralized collaborative control and flexible on-demand deployment. Key components and procedures for the 6G AI-native architecture are also discussed in detail. Finally, standardization practices for the convergence of mobile networks and AI in fifth-generation (5G) networks are analyzed, and an outlook on the standardization of AI-native design in 6G is given. This paper aims to provide not only theoretical insights into AI-native architecture design methodology but also a comprehensive 6G AI-native architecture that lays a foundation for the transition from mobile communications toward mobile information services in the 6G era.
While the complexity of fifth-generation wireless networks is being widely commented upon, there is great anticipation for the arrival of the sixth generation (6G), with its enriched capabilities and features. It can easily be imagined that, without proper design, the enrichment of 6G will further increase system complexity. To address this issue, we propose the Agentic-AI Core (A-Core), an artificial intelligence (AI)-empowered, mission-oriented core network architecture for next-generation mobile telecommunications. In A-Core, network capabilities can be added and updated on the fly and further programmed into missions for enabling and offering diverse services to customers. These missions are created and executed by autonomous network agents according to the customer’s intent, which may be expressed in natural language. The agents resolve intents from customers into workflows of network capabilities by leveraging a large-scale network AI model and follow the workflows to execute the mission. As an open, agile system architecture, A-Core holds promise for accelerating innovation and greatly reducing standard release times. The advantages of A-Core are demonstrated through two use cases.
Programmable metasurfaces have garnered significant attention due to their exceptional ability to manipulate electromagnetic (EM) waves in real time, propelling the emergence of reconfigurable intelligent surfaces (RISs) as a transformative advancement in wireless communication for controlling signal propagation and coverage. However, conventional RISs often suffer from a limited operational range and spectral interference, hindering their practical deployment in wireless relay and communication systems. To overcome this limitation, we propose an amplifying and filtering RIS (AF-RIS) to enhance the in-band signal energy and filter the out-of-band signal of the incident EM waves, thereby achieving RIS array miniaturization and improved anti-interference capability. Furthermore, each AF-RIS element features 2-bit phase control, significantly improving the array’s beamforming performance. A meticulously designed 4 × 8 AF-RIS array is presented by integrating the power dividing and combining networks, which substantially reduces the number of amplifiers and filters, drastically decreasing the hardware costs and power consumption. The experimental results demonstrate the powerful capabilities of the AF-RIS in beam-steering, frequency selectivity, and signal amplification. Thus, the proposed AF-RIS offers significant potential for critical wireless relay applications by improving frequency selectivity, expanding signal coverage, and minimizing hardware size.
Cooperative integrated sensing and communication (ISAC), an advanced version of ISAC, is becoming an inevitable paradigm in sixth-generation mobile information networks. Based on the foundation of large-scale deployed mobile networks, cooperative ISAC holds promise to realize ubiquitous sensing, thus becoming a significant step in promoting the transformation from connected things to connected intelligence. In this paper, we depict a sweeping panorama of cooperative ISAC, including the concept, key technologies, a performance evaluation framework, and field trials. We start by introducing the application scenarios of cooperative ISAC, which are the motivation for its commercialization. Next, from the perspective of technical development, we trace the evolution of cooperative ISAC, noting that cooperation within sensing and communication is an objective trend. We reveal the four core features of cooperative ISAC—denoted herein as network-enabled, integration, cooperation, and everything—and provide a general system model. Regarding key technologies, we introduce our contributions to antenna array design, cooperative clustering, synchronization, and data fusion, as well as interference management and networking. We also propose an evaluation framework and define several key performance indicators for cooperative ISAC. Through system-level simulations and field trials, we show the practical application feasibility of cooperative ISAC. Finally, we provide guidance on future research directions in cooperative ISAC.
A new, compact, and dual-band dual-polarized duplex (D3) phased array architecture is proposed in this study. In contrast to studies reported previously, this design integrates four independent beamforming systems within a single printed circuit board (PCB), enabling the proposed 1 × 4 phased array to simultaneously transmit or receive vertically and horizontally polarized signals at 28 and 38 GHz, thereby supporting concurrent, dual-band, and dual-polarized four-beam operations. In addition, the exceptional frequency selectivity of the phased array facilitates frequency-division duplex operations. By adopting a brick-type architecture, the proposed phased array achieves two-dimensional scalability, which allows it to serve either as a standalone small-scale phased array or as a sub-block for larger-scale arrays. A novel, dual-polarized end-fire magnetoelectric dipole antenna was developed as the radiating element for the phased array. This antenna exhibits an impedance bandwidth of return loss below −10 dB across the frequency range of 24.8-40.3 GHz (47.6%), which represents one of the broadest operating bands reported for PCB-based, co-apertured, and dual-polarized end-fire antennas. Experimental validation of the fabricated phased array demonstrated that the two orthogonal polarizations could achieve beam-scanning ranges exceeding 90° and 60° at 28 and 38 GHz, respectively. The measured effective isotropic radiated power values exhibited distinct frequency selectivities between the two bands. To the best of our knowledge, this is the first demonstration of a D3 phased array that presents a promising solution for beyond fifth-generation (B5G) and sixth-generation (6G) millimeter-wave multi-standard systems.
With the rapid growth of video traffic and the evolution of video formats, traditional video communication systems are encountering many challenges, such as limited data compression capacity, high energy consumption, and a narrow range of services. These challenges stem from the constraints of current systems, which rely heavily on discriminative methods for visual content reconstruction and achieve communication gains only in the information and physical domains. To address these issues, this paper introduces generative video communication, a novel paradigm that leverages generative artificial intelligence technologies to enhance video content expression. The core objective is to improve the expressive capabilities of video communication by enabling new gains in the cognitive domain (i.e., content dimension) while complementing existing frameworks. This paper presents key technical pathways for the proposed paradigm, including elastic encoding, collaborative transmission, and trustworthy evaluation, and explores its potential applications in task-oriented and immersive communication. Through this generative approach, we aim to overcome the limitations of traditional video communication systems, offering more efficient, adaptable, and immersive video services.
In this paper, we investigate the problem of maximizing the lifetime of robot swarms in wireless networks utilizing a multi-user edge computing system. Robots offload their computational tasks to an edge server, and our objective is to efficiently exploit the correlation between distributed data sources to extend the operational lifetime of the swarm. The optimization problem is approached by selecting appropriate subsets of robots to transmit their sensed data to the edge server. Information theory principles are used to justify the grouping of robots in the swarm network, with data correlation among distributed robot subsets modeled as an undirected graph. We introduce a periodic subset selection problem, along with related and more relaxed formulations such as a graph partitioning problem and a subgraph-level vertex selection problem, to address the swarm lifetime maximization challenge. For additive white Gaussian noise channels, we analyze the theoretical upper bound of the swarm lifetime and propose several algorithms—including the least-degree iterative partitioning algorithm and final vertex search algorithm—to approach this bound. Additionally, we consider the impact of channel diversity on subset selection in flat-fading channels and adapt the algorithm to account for variations in the base station’s channel estimation capabilities. Comprehensive simulation experiments are conducted to evaluate the effectiveness of the proposed methods. Results show that the algorithms achieve a swarm lifetime up to 650% longer than that of benchmark approaches.
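The grouping step can be sketched in a few lines. The routine below is an illustrative simplification, not the paper's least-degree iterative partitioning algorithm: it greedily peels the least-degree vertex of the correlation graph and groups it with its still-unassigned neighbors, so that correlated robots share a subset and can take turns transmitting.

```python
# Illustrative sketch only (the paper's least-degree iterative
# partitioning is more involved): greedy partitioning of an
# undirected correlation graph by repeatedly selecting the
# least-degree vertex in the remaining subgraph.
def partition_by_least_degree(adj):
    """adj: dict vertex -> set of correlated vertices."""
    remaining = set(adj)
    parts = []
    while remaining:
        # degree counted within the remaining subgraph
        v = min(remaining, key=lambda u: len(adj[u] & remaining))
        group = {v} | (adj[v] & remaining)   # v plus its free neighbors
        parts.append(group)
        remaining -= group
    return parts

# 5 robots; edges mark correlated data sources (toy example)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4}, 4: {3}}
print(partition_by_least_degree(adj))
```

On this toy graph the two correlated cliques {0, 1, 2} and {3, 4} are recovered as subsets, each of which can be represented by a single transmitting robot per period.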
Channels are one of the five critical components of a communication system, and their ergodic capacity is based on all realizations of a statistical channel model. This statistical paradigm has successfully guided the design of mobile communication systems from first generation (1G) to fifth generation (5G). However, this approach relies on offline channel measurements in specific environments, and thus, the system passively adapts to new environments, resulting in deviation from the optimal performance. As sixth generation (6G) expands into ubiquitous environments and pursues higher capacity, numerous sensing and artificial intelligence (AI)-based methods have emerged to combat random channel fading. However, there remains an urgent need for a proactive and online system design paradigm. From a system perspective, we propose an environment intelligence communication (EIC) based on wireless environmental information theory (WEIT) for 6G. The proposed EIC architecture operates in three steps. First, wireless environmental information (WEI) is acquired using sensing techniques. Then, leveraging WEI and channel data, AI techniques are employed to predict channel fading, thereby mitigating channel uncertainty. Finally, the communication system autonomously determines the optimal air-interface transmission strategy based on real-time channel predictions, enabling intelligent interaction with the physical environment. To make this attractive paradigm shift from theory to practice, we establish WEIT for the first time by answering three key problems: How should WEI be defined? Can it be quantified? Does it hold the same properties as statistical communication information? Subsequently, EIC aided by WEI (EIC-WEI) is validated across multiple air-interface tasks, including channel state information prediction, beam prediction, and radio resource management. 
Simulation results demonstrate that the proposed EIC-WEI significantly outperforms the statistical paradigm in both overhead reduction and performance optimization. Finally, several open problems and challenges, including its accuracy, complexity, and generalization, are discussed. This work explores a novel and promising way of integrating communication, sensing, and AI capabilities in 6G.
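The contrast between the two paradigms can be made concrete with a small numerical sketch. The model below is entirely assumed for illustration (a single synthetic WEI feature linearly related to channel gain): a WEI-aided predictor fitted online is compared against a purely statistical predictor that only knows the long-term mean.

```python
# Toy numerical sketch (assumed model, not the paper's): a single
# wireless environmental information (WEI) feature predicts channel
# gain online, versus a statistical baseline using only the mean.
import random

random.seed(1)
# synthetic data: gain depends linearly on the WEI feature plus noise
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2.0 - 0.15 * x + random.gauss(0, 0.05) for x in xs]

# least-squares fit of gain on the WEI feature
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
bias = my - slope * mx

def mse(pred):
    return sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys)

mse_wei = mse([bias + slope * x for x in xs])   # WEI-aided predictor
mse_stat = mse([my] * len(ys))                  # statistical baseline
print(mse_wei, mse_stat)                        # WEI-aided is far lower
```

The gap between the two errors is the value of conditioning on the environment rather than averaging over it, which is the intuition the EIC-WEI results quantify at system scale.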
Intelligent machine tools operating in continuous machining environments are commonly influenced by the coupled effects of multi-component degradation and updates in machining tasks. These factors result in the generation of vast multi-source sensor data streams and numerous computational tasks with interdependent data relationships. The stringent real-time constraints and intricate dependency structures present considerable challenges to traditional single-mode computational frameworks. Furthermore, there is a growing demand for computational offloading solutions in intelligent machine tools that extend beyond merely optimizing latency. These solutions must also address energy management for sustainable manufacturing and ensure security to protect sensitive industrial data. This paper introduces an adaptive hybrid edge-cloud collaborative offloading mechanism that combines single-edge-cloud collaboration with multi-edge-cloud collaboration. This mechanism is capable of dynamically switching between collaborative modes based on the status of computational nodes, task characteristics, dependency complexity, and resource availability, ultimately facilitating low-latency, energy-efficient, and secure task processing. A novel hybrid hyper-heuristic algorithm has been developed to address large-scale task allocation challenges in heterogeneous edge-cloud environments, enabling the flexible allocation of computational resources and performance optimization. Extensive experiments indicate that the proposed approach achieves average enhancements of 27.36% in task processing time and 7.89% in energy efficiency when compared to state-of-the-art techniques, all while maintaining superior security performance. 
Validation through case studies on a digital twin gantry five-axis machining center illustrates that the mechanism effectively coordinates task execution across multi-source concurrent data processing, complex dependency task collaboration, high-computational machine learning workloads, and continuous batch task deployment scenarios, achieving a 37.03% reduction in latency and a 25.93% improvement in energy use relative to previous-generation collaboration methods. These results provide both theoretical and technical backing for sustainable and secure computational offloading in intelligent machine tools, thereby contributing to the evolution of next-generation smart manufacturing systems.
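The hyper-heuristic structure can be sketched minimally. The paper's hybrid algorithm is far richer; the toy below only shows the defining pattern: a high-level selector keeps a score per low-level heuristic and favors whichever has recently improved the task-allocation cost (here, a simple load-imbalance penalty) the most. All names, costs, and parameters are assumptions.

```python
# Minimal hyper-heuristic sketch (illustrative assumptions only):
# a selector scores two low-level heuristics by recent improvement
# to a toy task-allocation cost and picks between them.
import random

random.seed(2)
tasks, nodes = list(range(12)), [0, 1, 2]          # toy task/node sets
cost_of = lambda a: sum(sum(1 for t in a if a[t] == n) ** 2
                        for n in nodes)            # load-imbalance penalty

def h_random(a):                                   # low-level heuristic 1
    a = dict(a); a[random.choice(tasks)] = random.choice(nodes); return a

def h_rebalance(a):                                # low-level heuristic 2
    a = dict(a)
    loads = {n: sum(1 for t in a if a[t] == n) for n in nodes}
    busy = max(loads, key=loads.get); idle = min(loads, key=loads.get)
    for t in tasks:
        if a[t] == busy:
            a[t] = idle; break                     # shift one task
    return a

heuristics, scores = [h_random, h_rebalance], [1.0, 1.0]
assign = {t: 0 for t in tasks}                     # all tasks on node 0
for _ in range(100):
    i = max(range(2), key=lambda j: scores[j] + random.random())
    cand = heuristics[i](assign)
    gain = cost_of(assign) - cost_of(cand)
    scores[i] = 0.9 * scores[i] + 0.1 * max(gain, 0)  # credit assignment
    if gain >= 0:                                  # accept non-worsening moves
        assign = cand
print(cost_of(assign))   # balanced allocation approaches 3 * (12/3)**2 = 48
```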
Ulcerative colitis (UC) is a chronic, non-specific inflammatory disorder of the intestines whose etiology is influenced by various factors. Intestinal barrier impairment due to disturbances in the intestinal microenvironment is a key feature of UC. Current therapeutic strategies are constrained in their capacity to fully restore the intestinal barrier and achieve comprehensive resolution of inflammation in a coordinated manner. In this study, we constructed a pterostilbene (PSB)-loaded prebiotic microcapsule (PSB@MC) using a microfluidic electrospray method and characterized it using various means. Its safety, biodistribution, protective, and therapeutic effects on colitis were evaluated in various animal models. The potential mechanisms by which PSB@MC exerts its therapeutic effects were subsequently explored. The results indicated that PSB@MC exhibited favorable biocompatibility and facilitated targeted delivery of PSB to the colon. Moreover, the wrinkled morphology of PSB@MC contributed to prolonged drug retention in the colon. Oral PSB@MC administration restored intestinal microenvironment homeostasis by scavenging reactive oxygen species (ROS), decreasing pro-inflammatory cytokines, modulating gut microbiota and metabolism, and providing protective and therapeutic benefits against dextran sulfate sodium-induced colitis. Additionally, our research demonstrated that PSB@MC could activate the aryl hydrocarbon receptor/interleukin-22 (AHR/IL-22) pathway to enhance the integrity of the intestinal barrier. These results suggest that PSB@MC could be a new, safe, and efficient UC therapy option.
To address increasing concerns regarding environmental air quality, it is highly desirable to develop low-cost and high-efficiency air-sterilization technology. Herein, as a proof-of-concept, a template-directed growth strategy is designed to fabricate a 3D hierarchical superstructure of well-aligned bimetallic metal-organic framework (MOF) arrays. Taking advantage of the designed electrode material (0.3Co-MOF/Cu@Cu), which provides a greater number of catalytically active sites, better conductivity, and water stability in comparison with pure copper mesh, the proposed strategy exhibits high electrocatalytic efficiency for air sterilization. Under an external electric field, the designed electrode can electroporate bacteria, accelerate the electrocatalytic reduction of oxygen adsorbed by oxygen vacancies, and dynamically generate more exogenous reactive oxygen species (ROS), which will increase the negative ion concentration in the air and thereby increase the comfort level for people in the room. Moreover, the free electrons and exogenous ROS on the surface of the material will disturb the physiological activities inside bacteria, resulting in the production of endogenous ROS inside the bacteria and bacterial death. At an airflow of 1.5 m·s−1 and an applied voltage of 2 V (equivalent to a treatment time of 0.0026 s), the sterilization rate is as high as 99.51%, demonstrating the great potential of the proposed strategy for practical application.
In response to the critical national demand for upgrading automotive gasoline quality, the concept of dual reaction zones was developed to intensify both olefin generation and conversion. The successful large-scale implementation of this process has yielded substantial economic benefits and spurred the invention and systematic study of the diameter-transformed fluidized bed (DTFB) reactor, leading to a suite of new catalytic processes. This study begins with the conceptual origins of the DTFB reactor. By analyzing unimolecular and bimolecular mechanisms in hydrocarbon catalysis, the key conditions necessary for maximizing target products are identified. Furthermore, it elucidates the scientific and technological challenges in applying diameter variation to partition the reaction section, highlighting that the primary challenge lies in achieving precise coupling between flow and reaction multimodalities, which necessitates a generalized drag model for accurate prediction of flow regime transitions. Since flow structure is influenced by both macroscopic parameters and local dynamics, a two-way coupled energy minimization multi-scale (EMMS) drag model and a corresponding multi-scale computational fluid dynamics (CFD) approach have been proposed, laying a theoretical foundation for quantitative design of diameter-transformed sections. The subsequent development of ancillary technologies has provided the necessary engineering safeguards for flexible control of temperature, density, and gas-solid contact time in each zone, ultimately enabling the industrialization, large-scale operation, and long-term stability of DTFB-based catalytic technology. Finally, the study outlines several typical processes and their application performance, and prospects future work.
Ammonia combustion does not produce carbon dioxide and has been recognized as a promising approach to achieving carbon neutrality in the transportation sector. Due to ammonia's slow flame propagation speed and high ignition energy requirements, reactive fuel pilot ignition is essential for ammonia combustion in compression ignition engines. The reduction of pilot fuel drives CO2 mitigation by curtailing carbon input, but it demands advanced combustion modulation techniques to sustain engine efficiency. Designed pilot fuel stratification enables an activated in-cylinder environment, overcoming the difficulty in ammonia ignition and combustion and allowing a minimal pilot fuel amount to trigger premixed ammonia combustion. The minimum pilot fuel permits 99.1% ammonia energy substitution, accounting for only 1.3% of the CO2 emissions from diesel combustion at the same load condition. Optimized intake organization coupled with improved reactivity stratification also achieves over 46% brake thermal efficiency and reduces unburned ammonia by over 80% compared to baseline operation.
Automatic identification of microseismic (MS) signals is crucial for early disaster warning in deep underground engineering. However, three major challenges remain for practical deployment, namely limited resources, severe noise interference, and data scarcity. To address these issues, this study proposes the lightweight and robust entropy-regularized unsupervised domain adaptation framework (LRE-UDAF) for cross-domain MS signal classification. The framework comprises a lightweight and robust feature extractor and an unsupervised domain adaptation (UDA) module utilizing a bi-classifier disparity metric and entropy regularization. The feature extractor derives high-level representations from the preprocessed signals, which are subsequently fed into two classifiers to predict class probability. Through three-stage adversarial learning, the feature extractor and classifiers progressively align the distributions of the source and target domains, facilitating knowledge transfer from the labeled source to the unlabeled target domain. Source-domain experiments reveal that the feature extractor achieves high effectiveness, with a classification accuracy of up to 97.7%. Moreover, LRE-UDAF outperforms prevalent industry networks in terms of its lightweight design and robustness. Cross-domain experiments indicate that the proposed UDA method effectively mitigates domain shift with minimal unlabeled signals. Ablation and comparative experiments further validate the design effectiveness of the feature extractor and UDA modules. This framework presents an efficient solution for resource-constrained, noise-prone, and data-scarce environments in deep underground engineering, offering significant promise for practical implementations in early disaster warning.
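The two loss terms at the core of the UDA module can be written out in a few lines. This is a simplified, self-contained illustration (toy probabilities, no networks, no adversarial training loop): bi-classifier disparity measures how much the two classifiers disagree on target-domain samples, and entropy regularization rewards confident target predictions.

```python
# Loss-term sketch (simplified; LRE-UDAF trains these adversarially
# in three stages): bi-classifier disparity + entropy regularization
# on target-domain class-probability predictions.
import math

def disparity(p1, p2):
    """Mean L1 distance between two classifiers' softmax outputs."""
    return sum(sum(abs(a - b) for a, b in zip(q1, q2)) / len(q1)
               for q1, q2 in zip(p1, p2)) / len(p1)

def entropy(p):
    """Mean Shannon entropy of predicted class probabilities."""
    return sum(-sum(q * math.log(q + 1e-12) for q in row)
               for row in p) / len(p)

# toy target-domain predictions from the two classifiers
p1 = [[0.9, 0.1], [0.6, 0.4]]
p2 = [[0.8, 0.2], [0.3, 0.7]]
d = disparity(p1, p2)   # the adversarial stages alternately max/min this
h = entropy(p1)         # regularizer: lower means more confident
print(d, h)
```

In the adversarial scheme, the classifiers are trained to maximize the disparity on target samples while the feature extractor is trained to minimize it, with the entropy term discouraging ambiguous target predictions throughout.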
Marine seismic exploration is traditionally conducted using towed streamers to investigate the geological structure of continental shelves and identify mineral deposits. Conventional streamers typically use piezoelectric hydrophones or fiber-optic interferometric hydrophones, which are complex, costly, and challenging to manufacture. In this study, we introduce a fiber-optic marine towed streamer seismic acquisition system based on distributed acoustic sensing technology. The system eliminates the need for optical components within the streamer, simplifying both the system architecture and the manufacturing process. Its effectiveness was validated through a sea trial conducted in the slope zone of a basin, with water depths ranging from 500 to 2000 m. Notably, this study represents the first successful application of distributed fiber-optic towed streamers for marine seismic exploration, enabling the effective detection of complex sedimentary structures in the surveyed area. The results underscore the significant potential of distributed fiber-optic towed streamers for seismic exploration, paving the way for advancements in marine seismic technologies.
Historical legacy effects and the mechanisms underlying primary producer community succession are not well understood. In this study, environmental DNA (eDNA) sequencing technology and chronological sequence analysis of sediments were utilized to examine long-term changes in cyanobacterial and aquatic plant communities. The results indicate that the nutritional status and productivity of the aquatic ecosystems have been relatively high since 2010, which could reflect a period of eutrophication driven by high long-term rates of organic matter deposition (33.22–42.08 g·kg⁻¹). The temporal and spatial characteristics of community structure were related to environmental filtering based on trophic status between 1849 and 2020. Turnover in the primary producer community was confirmed through change-point model analyses, with regime shifts toward new ecological states. On the basis of ecological data and geochronological techniques, it was determined that habitat quality at the local scale may affect ecological niche shifts between cyanobacterial and aquatic plant communities. These observations suggest how primary producers respond to rapid urbanization, serving as an invaluable guide for protecting freshwater biodiversity.
Soil could represent a notable carbon sink for achieving global carbon neutrality. However, how the land surface soil organic carbon (SOC) stock, which is more sensitive to climate change than other carbon stocks, will change naturally under global warming remains unknown. In this work, global land surface SOC trends from 1981 to 2019 were explored, and their driving factors were identified. A random forest model (a type of machine learning method) was used to predict future global surface SOC trends, integrated with climate scenarios from the Coupled Model Intercomparison Project Phase 6 (CMIP6) models. The results revealed that the global surface SOC content will increase; temperature and precipitation are the main climate drivers at the global scale, while vegetation cover is a crucial local factor influencing the increase in SOC. However, under the 1.5 °C global warming scenario, the land SOC sink will increase by at most 13.0 petagrams of carbon (PgC) compared with that under the SSP2-4.5 scenario, which accounts for only 19% of the total carbon emission capacity at the current 1.1 to 1.5 °C global warming level. Moreover, this value is far from the Paris Agreement target of four per one thousand for the annual increase in the soil carbon stock in the top 40 cm of soil over the next 20 years (2.72 PgC·a⁻¹). This illustrates that overreliance on natural carbon sinks is a high-risk strategy. These findings highlight the urgency of implementing mitigation and removal strategies to reduce greenhouse gas emissions.
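The random forest approach described above can be sketched in a few lines, assuming scikit-learn is available. The predictor variables, the synthetic SOC relationship, and the scenario values below are purely illustrative stand-ins, not the study's dataset or CMIP6 forcings:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500

# Hypothetical drivers: temperature (degC), precipitation (mm), vegetation cover (0-1).
X = np.column_stack([
    rng.uniform(-5, 30, n),
    rng.uniform(100, 2000, n),
    rng.uniform(0, 1, n),
])
# Synthetic SOC response with noise (an assumed relationship for illustration only).
y = 50.0 - 0.8 * X[:, 0] + 0.01 * X[:, 1] + 20.0 * X[:, 2] + rng.normal(0, 2, n)

# Fit a random forest and project SOC under one hypothetical climate scenario.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
scenario = np.array([[20.0, 1200.0, 0.6]])   # warmer, wetter, moderately vegetated
pred = model.predict(scenario)
```

In the study's setting, the scenario rows would be replaced by gridded CMIP6 climate projections, and feature importances from the fitted forest would support the driver attribution reported in the abstract.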
Increasing greenhouse gas (GHG) emissions, such as methane (CH4), nitrous oxide (N2O), and carbon dioxide (CO2), from agricultural practices and land use have heightened concerns about global warming. Accurate quantification of GHGs using gas sensors is essential for effective management and sustainable agricultural practices. The objective of this study was to analytically compare the performance of various sensing materials for CH4-, N2O-, and CO2-based sensors in terms of sensitivity, response ratio, response time, and recovery time, in order to establish efficient detection levels for GHG emissions. A literature review of 95 studies showed that palladium-tin dioxide (Pd-SnO2) nanoparticles, indium oxide (In2O3) nanowires, and gold-lanthanum oxide-doped tin dioxide (Au-La2O3/SnO2) nanofibers outperformed other sensing materials in CH4-, N2O-, and CO2-based sensors, respectively. The reviewed studies revealed that nanoporous structures, nanowires, and nanofibers exhibited faster response and recovery than conventional materials owing to their large specific surface area (SSA). The ternary hybrid structure of sensing materials was more effective for CO2 detection than the binary hybrid structure, unlike in CH4- and N2O-based sensors. Finally, constructive suggestions for improving the performance of GHG sensors are discussed in the conclusion based on the current research status and challenges.
The real-time monitoring of hydrogen peroxide (H2O2) is important for understanding the working mechanisms of signal molecules, breeding for stress tolerance, and diagnosing plant health. However, realizing real-time monitoring of dynamic H2O2 levels in plants remains a challenge. Here, we report an implantable and self-powered sensing system for the continuous monitoring of H2O2 levels in plants. A photovoltaic (PV) module is integrated into the sensing system to harvest sunlight or artificial light in the planting environment, continuously powering an implantable microsensor. The transmission process of the H2O2 signal was monitored and analyzed in vivo, and the time and concentration specificity of the H2O2 signal for abiotic stress were resolved. This implantable system provides a promising analytical tool for key signal molecules in plants and might be extended to the real-time monitoring of signaling molecules in other crops.
Rheumatoid arthritis (RA) remains a therapeutic challenge because of the suboptimal efficacy and significant adverse effects of current treatments. Obakulactone (OL), a natural tetracyclic triterpenoid isolated from Phellodendri cortex, has emerged as a promising candidate for RA intervention. However, its underlying mechanism remains poorly understood. In this study, we investigated the therapeutic effects of OL and its molecular mechanisms in RA using a multifaceted approach. In a complete Freund's adjuvant (CFA)-induced RA rat model, OL significantly alleviated joint swelling, restored the expression of CD3+ T cells and CD68+ macrophages in joints, and shifted the polarization state of macrophages from proinflammatory M1 (CD86)-dominant to anti-inflammatory M2 (CD206)-dominant. In addition, OL alleviated pathological changes in lymphoid organs (thymus and spleen), effectively inhibited the differentiation of CD4+ T cells into T helper 17 (Th17) cells, and normalized serum levels of inflammatory cytokines (e.g., interleukin (IL)-6 and tumor necrosis factor-α (TNF-α)) and RA diagnostic markers (e.g., C-reactive protein (CRP) and rheumatoid factor (RF)). Multiomics profiling revealed that OL corrected the dysregulated biosynthesis and metabolism of unsaturated fatty acids (e.g., arachidonic acid and linolenic acid) in RA rats, with acyl coenzyme A (CoA) thioesterase 1 (ACOT1) identified as a critical regulator. In vitro, OL significantly inhibited cell proliferation and inflammatory cytokine secretion and promoted apoptosis in RA synovial fibroblasts (SFs); it also inhibited M1 polarization and promoted M2 polarization of RAW264.7 macrophages. Mechanistically, cellular thermal shift assays (CETSA), microscale thermophoresis (MST), surface plasmon resonance (SPR), and short hairpin RNA (shRNA) experiments identified ACOT1 as the direct target of OL.
OL enhanced ACOT1 ubiquitination-mediated proteasomal degradation, thereby reducing downstream stearoyl-CoA desaturase-1 expression and inhibiting the Janus kinase (JAK)-signal transducer and activator of transcription (STAT) and phosphoinositide 3-kinase (PI3K)-protein kinase B (AKT) signaling pathways, thus suppressing inflammation and fibrosis in SFs. This study establishes OL as a potential RA therapeutic agent and highlights ACOT1 as a novel target for RA intervention, offering insights into fatty acid metabolism reprogramming as a therapeutic strategy.