The Agentic-AI Core: An AI-Empowered, Mission-Oriented Core Network for Next-Generation Mobile Telecommunications

Xu Li , Weisen Shi , Hang Zhang , Chenghui Peng , Shaoyun Wu , Wen Tong

Engineering ›› 2026, Vol. 56 ›› Issue (1): 104-119.


Abstract

While the complexity of fifth-generation wireless networks is being widely commented upon, there is great anticipation for the arrival of the sixth generation (6G), with its enriched capabilities and features. It can easily be imagined that, without proper design, the enrichment of 6G will further increase system complexity. To address this issue, we propose the Agentic-AI Core (A-Core), an artificial intelligence (AI)-empowered, mission-oriented core network architecture for next-generation mobile telecommunications. In A-Core, network capabilities can be added and updated on the fly and further programmed into missions for enabling and offering diverse services to customers. These missions are created and executed by autonomous network agents according to the customer’s intent, which may be expressed in natural language. The agents resolve intents from customers into workflows of network capabilities by leveraging a large-scale network AI model and follow the workflows to execute the mission. As an open, agile system architecture, A-Core holds promise for accelerating innovation and greatly reducing standard release times. The advantages of A-Core are demonstrated through two use cases.


Keywords

Sixth generation / Core network / Generative artificial intelligence / Artificial intelligence agent

Cite this article

Xu Li, Weisen Shi, Hang Zhang, Chenghui Peng, Shaoyun Wu, Wen Tong. The Agentic-AI Core: An AI-Empowered, Mission-Oriented Core Network for Next-Generation Mobile Telecommunications. Engineering, 2026, 56(1): 104-119 DOI:10.1016/j.eng.2025.06.027


1. Introduction

Since their advent in the 1980s, mobile telecommunications have evolved to the fifth generation (5G). Along the journey from the first generation (1G) to 5G, we have witnessed technology shifts from analog to digital transmission, from circuit-switching to packet-switching, and paradigm shifts from voice service to mobile broadband (MBB), machine-type communication (MTC), and ultra-reliable low-latency communication (URLLC). This evolution has brought about ever-increasing system capabilities, which in turn have enabled various usage scenarios for mobile telecommunications, transforming a range of industries including entertainment, manufacturing, healthcare, and transportation. The sixth generation (6G) is the next step of this evolution and is anticipated to arrive around 2030 with even further enriched capabilities and features.

1.1. Integration of artificial intelligence and communication

Artificial intelligence (AI) has become an essential tool for solving challenging problems in business analytics and decision-making. It is anticipated that AI applications will touch every possible sector of the economy and affect all aspects of society [1]. 5G introduced a network data analytics function (NWDAF) [2], which collects data from the core network and provides analytics, possibly based on AI, to support network automation. Various typical AI mechanisms, such as federated learning, have been defined in 5G for multiple NWDAFs to train AI models. While this application of AI in mobile telecommunications is still in its infancy, 6G will showcase a significant leap toward native AI that is trained, inferenced, and managed in the 6G networks.

The International Telecommunication Union (ITU) has published recommendations [3] for the framework and overall objectives of “Future technology trends of terrestrial International Mobile Telecommunications systems towards 2030 and beyond” (IMT-2030, 6G), wherein “AI and communication” is identified as one of the new usage scenarios. This usage scenario will support distributed computing and AI applications—such as in automated driving, healthcare, computation offloading, and digital twins—with new key performance indicators (KPIs) on data rates, latency, and reliability. It will also require new applicable AI-related capabilities to be included into 6G. These new capabilities may include, for instance, distributed data processing and distributed learning, computing, and model execution and inference, which can be used to optimize and automate network management and operation.

1.2. Demand for system agility and autonomy

As mobile telecommunications continue to evolve, a growing number of features have been added to the system through an increasingly complex system architecture via lengthy standardization processes. For example, the 5G system architecture includes over 30 major core network functions (NFs), as a consequence of function disaggregation and migration toward cloud-native technologies, whereas the fourth generation (4G) has fewer than ten major core network elements. The enrichment of features enables operators to optimize the system and meet diverse service requirements. However, the associated complexity and overhead slow system deployment and increase capital expenditure (CAPEX) and operating expenditure (OPEX).

If 6G were to be designed in a similar manner, it is anticipated that even more complex system architecture will be introduced to provide the new capabilities identified by the ITU [3]. In fact, 6G is expected to operate and evolve on its own with minimal human intervention. To this end, an agile, autonomous system architecture is a must. The Next-Generation Mobile Networks (NGMN) Alliance is working on defining an AI-driven high-level autonomous system framework [4]. However, efforts toward system agility still appear to be lacking. With an agile system architecture, system capabilities or features can be added and updated dynamically, with minimal standardization overhead. This will accelerate innovation and greatly reduce the standard release time.

1.3. Our contribution

Since native AI capabilities and system agility and autonomy are desirable for 6G, as analyzed above, it is reasonable to design a system architecture to meet these goals by leveraging and accommodating AI technologies. In this paper, we propose the Agentic-AI Core (A-Core), an AI-empowered, mission-oriented core network architecture for 6G and beyond. In A-Core, a number of network capabilities (NCs) are provided through a mission plane. The mission plane includes NFs that implement the NCs. An NC refers to a set of functionalities in support of certain applications, such as an internal application relating to network control and management, or an external application.

A-Core offers various services in the form of missions and meets diverse service requirements through mission execution. It dynamically creates missions by orchestrating related NCs in the mission plane and executes the missions using respective mission networks, which include supporting NFs and necessary resources. A mission is created according to an intent that describes the goal of the mission (including service requirements). The intent is received from a customer and is resolved into a workflow that describes a networking procedure of involved NCs for achieving the goal. When the mission is executed, the NCs are holistically provisioned to the customer with respect to the workflow. The customer may be an operator and may use the mission to manage and operate the system. Or, the customer may be a third party and may use the mission to provide application services to its own clients.

In the mission plane, NCs and the associated NFs can be added and upgraded on the fly. Once an NC is added or updated, the change is immediately accommodated in subsequent mission creation and execution, in order to enable and offer up-to-date services to customers. Mission creation and execution are performed by autonomous network agents without human intervention. These network agents have different types of expertise and may leverage a large-scale network AI model, generative pretrained transformer for network traffic (NetGPT), to make decisions in their subject domains. They form multi-agent systems and work in collaboration via an agent-based interface. NetGPT can be based on a large foundation model (LFM) and supports processing information expressed in natural language, such as analyzing intent and resolving intent into workflow. Thanks to its openness, agility, and autonomy, A-Core evolves continuously on its own, with significantly reduced network-management (NM) complexity and standardization overhead. Two use cases are provided to demonstrate these advantages.

The remainder of this paper is organized as follows: In Section 2, we review prior research related to agentic AI and its applications. In Section 3, we provide important definitions and terminologies. In Section 4, we present the proposed A-Core architecture, and its advantages are demonstrated through two use cases in Section 5. Finally, we discuss open issues in A-Core research in Section 6 and conclude the paper in Section 7.

2. Related works

In this section, we first review emerging studies and applications related to LFMs and LFM-based AI agents. Next, we summarize cutting-edge research progress on leveraging LFM and AI agents to revolutionize 6G standards and network management.

2.1. Large foundation models

The LFM concept represents a category of large-scale (e.g., billions of trainable parameters), pre-trained AI models that understand the semantic information of inference inputs and accordingly generate related inference results [5]. For example, large language models (LLMs) such as the generative pre-trained transformer (GPT) series [6] are a type of LFM that can understand and generate text in natural language. Existing LFMs may further possess multiple modalities beyond text generation, such as the processing of images [7], speech, and video streams [8]. The transformer architecture is the foundation for most LFMs, allowing them to capture long-range dependencies in both input and generated data [9]. However, training a transformer-based LFM from scratch has proven to be extremely costly in terms of computational resources and electrical power [6]. Therefore, most state-of-the-art LFM research focuses on adaptations of LFMs, such as exploring methods to guide or adapt a pre-trained LFM to generate accurate and comprehensive content for a specific task. Typical LFM adaptation methods include the following:

Prompt engineering. Prompt engineering refers to crafting or rephrasing the inferencing input contents (i.e., prompts) to the LFM in order to influence the LFM to generate the desired outputs. Typical prompt engineering techniques include prompt templates [10], one/few-shot learning [10], and chain-of-thought (CoT) [11].
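To make the prompt-template and few-shot ideas concrete, the following is a minimal sketch of how a prompt might be assembled before being sent to an LFM. The template text, example requests, and intent labels are purely illustrative, not taken from the paper.

```python
# Sketch: assembling a few-shot prompt from a template.
# Template wording and examples are hypothetical, for illustration only.

PROMPT_TEMPLATE = (
    "You are a network assistant. Classify the intent of the request.\n"
    "{examples}\n"
    "Request: {request}\n"
    "Intent:"
)

FEW_SHOT_EXAMPLES = [
    ("Set up a low-latency session for factory robots", "URLLC connectivity"),
    ("Collect traffic statistics for the last hour", "network analytics"),
]

def build_prompt(request: str) -> str:
    """Fill the template with few-shot examples and the live request."""
    shots = "\n".join(
        f"Request: {q}\nIntent: {a}" for q, a in FEW_SHOT_EXAMPLES
    )
    return PROMPT_TEMPLATE.format(examples=shots, request=request)

prompt = build_prompt("Train a model on sensor data")
```

A chain-of-thought variant would extend the template with worked reasoning steps in each example, rather than bare request/intent pairs.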

Fine-tuning. Fine-tuning is a type of transfer learning technique that adjusts pre-trained weights and bias in an LFM based on domain-/task-specific fine-tuning data (usually in small quantities). A fine-tuned LFM can generate the desired contents specialized in the domain/task without losing its pre-trained general knowledge. Facilitated by the low-rank adaptation (LoRA) technique [12], the computational and power resource requirements to fine-tune an LFM can be affordable, even for personal computer users.
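The parameter savings behind LoRA can be sketched in a few lines: instead of updating a full weight matrix, two small low-rank matrices are trained and added on top of the frozen weights. The dimensions and scaling follow the LoRA formulation; the concrete sizes below are illustrative.

```python
import numpy as np

# Sketch of the LoRA idea: the adapted weight is W + (alpha / r) * B @ A,
# where only A (r x d_in) and B (d_out x r) are trainable and W is frozen.
# Sizes here are illustrative.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, init 0

def adapted_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass with the low-rank update applied on top of frozen W."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted model equals the frozen model.
assert np.allclose(adapted_forward(x), W @ x)

# Trainable parameters: r * (d_in + d_out) instead of d_in * d_out.
full, lora = d_in * d_out, r * (d_in + d_out)
```

Here training 512 parameters stands in for training 4096, and the ratio improves further at realistic layer widths, which is why fine-tuning becomes affordable on modest hardware.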

Retrieval-augmented generation (RAG). Unlike fine-tuning, which embeds external knowledge into the LFM itself, the RAG method concatenates retrieved external knowledge into the prompts for LFM inferencing, without modifying any pre-trained weights/biases in the LFM. A typical RAG method is the three-phase approach [13]: ① In the retrieve phase, the RAG system searches a vector dataset in which formatted knowledge samples (e.g., context chunks) are pre-stored and finds the knowledge sample(s) semantically related to the user query; ② in the augment phase, the RAG system combines the retrieved knowledge samples and the user query to form a prompt to the LFM; and ③ in the generate phase, the LFM generates contents according to the input prompt.
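The three phases above can be sketched with a toy retriever. This is a deliberately minimal illustration: it uses a bag-of-words "embedding" and cosine similarity, whereas a real RAG system would use a learned embedding model, a vector database, and an LFM for the generate phase. The knowledge chunks are invented for the example.

```python
import numpy as np

# Toy sketch of the retrieve and augment phases of RAG.
# Knowledge chunks and the embedding scheme are illustrative only.

KNOWLEDGE = [
    "NWDAF collects data from the core network and provides analytics.",
    "A mission workflow is a networking procedure of computing blocks.",
    "LoRA fine-tunes a model with low-rank weight updates.",
]

def embed(text: str, vocab: list[str]) -> np.ndarray:
    """Bag-of-words vector over a shared vocabulary (stand-in embedding)."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Phase 1: find the knowledge chunk(s) closest to the query."""
    vocab = sorted({w for t in KNOWLEDGE + [query] for w in t.lower().split()})
    q = embed(query, vocab)
    def score(t: str) -> float:
        v = embed(t, vocab)
        return float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9)
    return sorted(KNOWLEDGE, key=score, reverse=True)[:k]

def augment(query: str, chunks: list[str]) -> str:
    """Phase 2: concatenate retrieved knowledge with the user query."""
    return "Context:\n" + "\n".join(chunks) + f"\nQuestion: {query}"

# Phase 3 (generate) would pass this prompt to the LFM; it is stubbed here.
prompt = augment("What does NWDAF do?", retrieve("What does NWDAF do?"))
```

The key design point is that the LFM's weights are never touched; knowledge is refreshed simply by updating the retrieval store.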

2.2. AI agent

An (LFM-based) AI agent can be regarded as a powerful entity that can automatically interact with its surrounding environment. The core of an AI agent is an LFM embedded in or connected to it. A typical AI agent includes the following essential capabilities:

Planning. The planning capability allows the AI agent to decompose a complex task or user intent into sub-tasks that can be individually solved by the agent, such as via CoT or tree-of-thoughts methods [14]. The planning capability of the AI agent relies on the LFM’s semantic understanding and reasoning capabilities.

Memory. The memory stores ① knowledge/data pre-installed in the agent, ② knowledge/data obtained from previously solved tasks, and ③ information perceived from the environment. Knowledge and information stored in the memory can be retrieved by the agent to facilitate planning and action-taking (e.g., through an RAG method).

Tool-using. Tools refer to various external modules/entities that can provide useful information to the AI agent for resolving tasks. For example, a search engine such as Google can be a tool for an agent to retrieve necessary knowledge/data from the Internet, and a sandbox/digital twin can be a tool for the agent to test/verify generated actions/contents.

Action. Actions refer to the outcomes/behaviors generated by the agent to interact with the environment. Detailed actions to be executed by the agent are determined by the planning capability. Environmental feedback on a specific action can be sensed/perceived by the agent to facilitate future planning and action-taking.
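The four capabilities above can be wired into a single loop, sketched below. The "LFM planner" is stubbed as a fixed rule, and the tool names, plans, and results are invented; a real agent would call an LFM for planning and genuine external services as tools.

```python
# Minimal sketch of planning, memory, tool-using, and action in one loop.
# The planner rule, tool names, and results are hypothetical.

def stub_lfm_plan(task: str) -> list[str]:
    """Planning: decompose the task into sub-tasks (stands in for an LFM)."""
    if "report" in task:
        return ["search:latency stats", "summarize:latency stats"]
    return [f"search:{task}"]

TOOLS = {  # Tool-using: external modules the agent may invoke.
    "search": lambda q: f"found data for '{q}'",
    "summarize": lambda q: f"summary of '{q}'",
}

def run_agent(task: str) -> list[str]:
    memory: list[str] = []             # Memory: results perceived so far
    for step in stub_lfm_plan(task):   # Planning output drives the loop
        tool, _, arg = step.partition(":")
        result = TOOLS[tool](arg)      # Action: invoke a tool
        memory.append(result)          # store environment feedback
    return memory

trace = run_agent("report")
```

In a full agent, the memory would also feed back into subsequent planning calls (e.g., via RAG over past results), closing the perceive-plan-act cycle.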

In many cases, it is beneficial to deploy multiple AI agents in an environment. These simultaneously operating AI agents form a multi-AI-agent system (MAAS) that can achieve more complex tasks.

To complete a complex task, different agents in an MAAS can perform distinct roles, according to each agent’s unique capabilities. To assign a role, a predefined agent profile that describes the characteristics, capabilities, behaviors, and constraints of the role should be “installed” (e.g., fine-tuned in the LFM or stored as a prompt template) in the agent [15]. In addition, an effective inter-agent communication approach is necessary to coordinate work and prevent mutual interference. Unlike a conventional distributed system, in which specific formats or protocols are required for inter-agent communication, in an MAAS the contents passed between AI agents can be in any form (e.g., text in natural language or code segments), as long as the semantic information is mutually understandable by the AI agents.
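The role-profile and free-form messaging ideas can be sketched as follows. The role names, profile texts, and message are invented for illustration; the point is that the message body is an arbitrary natural-language string rather than a fixed protocol message.

```python
# Sketch: role assignment via agent profiles, with inter-agent messages as
# plain natural-language strings. Roles and texts are illustrative.

PROFILES = {
    "planner": "You decompose intents into workflows.",
    "executor": "You carry out workflow steps and report status.",
}

class Agent:
    def __init__(self, role: str):
        self.role = role
        self.profile = PROFILES[role]   # "installed" role description
        self.inbox: list[str] = []

    def send(self, other: "Agent", content: str) -> None:
        # Content is free-form text; only mutual understanding matters.
        other.inbox.append(f"[{self.role}] {content}")

planner, executor = Agent("planner"), Agent("executor")
planner.send(executor, "Please execute step 1: acquire sensor data.")
```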

2.3. LFMs and AI agents for future networks

As a pioneering domain for the integration of LFM and AI agent techniques, the research area of next-generation networks (i.e., 6G) has seen many advancements in the integration of AI with 6G communication networks [3, [16], [17], [18], [19]]. Among these, using LFM/agent techniques to enhance or revolutionize existing telecommunication networks is an emerging research area in both academia and industry. LFM and AI agent techniques can bring a new level of sophistication to telecommunication-network tasks due to their multi-modality and adaptability. Representative works on LFMs and AI agents for telecommunication networks are summarized in Table 1 [[20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30]], including the differences in innovation between A-Core and these works. In general, the existing works mainly focus on two topics:

2.3.1. Establishing telecommunication-domain LFMs

These works leverage sophisticated LFM adaptation methods, such as fine-tuning and RAG, to adapt general LFMs into domain LFMs specialized in telecommunication knowledge. Existing standards documents (e.g., 3rd Generation Partnership Project (3GPP) technical specifications) are usually selected as the data sources. The main capability of these adapted domain LFMs is providing comprehensive and accurate answers to telecommunication-network-specific questions, which differs from A-Core’s ability to generate network procedures and/or manage network operations to support services requested by customers.

2.3.2. Transforming network management/orchestration

Leveraging LFMs’/agents’ semantic-based reasoning and planning capabilities, the works listed in Table 1 [[20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30]] involve LFMs/agents in the NM/orchestration workflow to increase flexibility and reduce the complexities of predefined protocols. A detailed comparison between each of these works and A-Core is provided in Table 1 [[20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30]].

3. Definitions and terminologies

In this paper, a service refers to a network service offered by the core network of next-generation mobile telecommunications, in the form of a mission, to support external applications or internal applications. While the nature of an external application is not limited, an internal application is one that relates to network management and control.

3.1. Mission

A mission provides a way to achieve a designated goal, referred to as the mission goal, which includes ① providing connectivity and/or ② providing certain data processing. It should be noted that data storage is considered a type of data processing here. When the mission goal includes providing data processing, the mission goal is associated with specific computational problem(s), and the provided data processing refers to solving the specific computational problem(s). In this case, the mission includes one or multiple computing blocks (CBs) and a workflow (i.e., networking procedure) among the CBs for solving the specific computational problem(s). The workflow is referred to as the mission workflow.

A CB in the mission is associated with a purpose, referred to as the block purpose, which corresponds to a computational step toward the mission goal. The CB is supported by an NC in the form of a task or another mission. In other words, the block purpose of the CB can be realized by the NC or by the other mission. Accordingly, the CB is referred to as a task CB or a sub-mission CB.

Fig. 1 illustrates a mission workflow for a mission involving three CBs—CB1, CB2, and CB3—with the goal of supporting an AI application. The respective block purposes of the three CBs are data acquisition, model training, and model inferencing. CB1 may be supported by the sensing capability of the core network, while CB2 and CB3 may be supported by the AI capabilities of the core network.
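The Fig. 1 example can be expressed as a small data structure: a mission holds its goal, its CBs (each with a block purpose and a supporting NC or sub-mission), and the workflow edges among them. The field names below are illustrative, not a normative data model from the paper.

```python
from dataclasses import dataclass, field

# Sketch of the Fig. 1 mission as data. Field names are illustrative.

@dataclass
class ComputingBlock:
    name: str
    purpose: str        # block purpose, e.g., "data acquisition"
    supported_by: str   # an NC (task CB) or another mission (sub-mission CB)

@dataclass
class Mission:
    goal: str
    blocks: list[ComputingBlock]
    workflow: list[tuple[str, str]] = field(default_factory=list)  # edges

mission = Mission(
    goal="support an AI application",
    blocks=[
        ComputingBlock("CB1", "data acquisition", "sensing capability"),
        ComputingBlock("CB2", "model training", "AI capability"),
        ComputingBlock("CB3", "model inferencing", "AI capability"),
    ],
    workflow=[("CB1", "CB2"), ("CB2", "CB3")],
)
```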

The mission can be executed to fulfil the mission goal of supporting an application. This can be an external application (e.g., autonomous driving, factory automation, and digital twinning) or an internal application (e.g., session management, mobility management, and policy management).

3.2. Mission network

A mission network is a logical network that provides specific capabilities and characteristics in networking and data processing for a mission. A CB of a mission corresponds to a subnet, referred to as a CB subnet, of the mission network. In the following text, “mission” and “mission network” are used interchangeably for ease of presentation, as are “CB” and “CB subnet.”

When a mission network is instantiated, an instance of the mission network is created. This mission network instance is a deployed mission network. For each CB of the mission, the mission network instance includes an instance of the CB. If the CB is a task CB, the CB instance includes a set of NFs that are associated with the NC supporting the CB. The NFs provide data-processing functionalities for realizing the block purpose of the CB. If the CB is a sub-mission CB, the CB instance is an instance of a mission supporting the CB. For ease of presentation, “mission instance” and “mission network instance” will be used interchangeably in the following text.

Because a mission is a logical concept, it must be instantiated/implemented as an instance that can be executed to support an application. The application may be hosted in a data network (DN); in this case, the application provides an application service to its clients (e.g., user equipment (UE)) by making use of the mission execution. Alternatively, the application may be hosted in the mission; in this case, the network provides an application service to authorized UEs in a native way via the mission execution.

Fig. 2 illustrates a mission, a mission instance, and a mission execution in support of an application. An authorized UE can participate in the mission execution and access the application by using the mission instance. The UE connects to the mission instance via the radio access network (RAN), which is omitted in the figure. The DN is optional when the application is hosted in the mission. When the application is hosted in the DN, the mission instance may optionally provide connectivity between the UE and the DN, as indicated by the dashed line between the UE and the DN.

4. The Agentic-AI Core

In this section, we present A-Core for next-generation mobile telecommunications, such as 6G and beyond. As illustrated in Fig. 3, A-Core includes two planes: a control and management plane (CMP) and a mission plane. Each of the two planes includes a number of logical entities. Here, we describe functionalities of these entities.

A-Core allows NCs to be added and updated dynamically in the mission plane and supports the dynamic programming of the NCs into missions in order to enable and offer new services to customers. We will elaborate on how the CMP entities interwork to support the creation and instantiation of a mission, as well as to execute and participate in a mission.

4.1. System architecture

The CMP includes entities that are responsible for managing missions, mission instances, and mission execution. The CMP entities are connected and communicate through an agent-based interface, as indicated by the thick line in Fig. 3. The interface provides mechanisms such as application programming interface (API) calls and message queues to support synchronous and asynchronous communications. An application function (AF) can interact with the CMP to provide intent or to request the instantiation or execution of a mission, as described later in this section. The functionality and purpose of the AF are not limited. For example, an NF, a server in the DN, or even the UE can act as the AF.

The mission plane includes the RAN and the mission instances. The latter are composed of NFs selected from the NF pool; these NFs use resources allocated from the resource pool to perform data processing and communication to support the mission execution. The UE can interact with the CMP and can also connect to the mission plane to participate in mission executions and access applications. As described before, the applications may be hosted in a DN or in a mission.

4.1.1. Public memory

The public memory maintains user subscription data, policy data, network data (e.g., statistical network conditions, and NC availability and performance), mission templates, and mission instance profiles.

A mission is described using a mission template, which specifies the mission goal and the mission workflow for achieving the mission goal. Mission templates stored in the public memory are related to ongoing missions, historical missions that may be reused, and preconfigured sample missions.

A mission instance is described using a mission instance profile, which identifies the respective mission template and composite CB instances. The mission instance profile identifies the latter by specifying the profiles of the CB instances. Mission instance profiles stored in the public memory correspond to ongoing missions.

The public memory also maintains CB instance profiles. For a CB instance corresponding to a sub-mission CB, the CB instance profile refers to a mission instance profile. For a CB instance corresponding to a task CB, the CB instance profile specifies the supporting NC, composite NFs, and respective configurations (e.g., configurations of the data-processing logic of the NFs).
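The referencing structure among these public-memory records can be sketched as follows: a mission instance profile points at its template and its CB instance profiles, and a sub-mission CB instance profile points back at another mission instance profile. Plain dictionaries are used, and all identifiers and keys are illustrative.

```python
# Sketch of public-memory records and their cross-references.
# Identifiers and keys are hypothetical, for illustration only.

mission_template = {
    "id": "tmpl-ai-app",
    "goal": "support an AI application",
    "workflow": ["CB1", "CB2", "CB3"],
}

cb_instance_profiles = {
    # Task CB instances name the supporting NC, NFs, and configurations.
    "cbi-1": {"kind": "task", "nc": "sensing", "nfs": ["NF-a"], "config": {}},
    "cbi-2": {"kind": "task", "nc": "ai-training", "nfs": ["NF-b", "NF-c"],
              "config": {}},
    # A sub-mission CB instance refers to another mission instance profile.
    "cbi-3": {"kind": "sub-mission", "mission_instance": "mi-42"},
}

mission_instance_profile = {
    "id": "mi-7",
    "template": "tmpl-ai-app",          # reference to the mission template
    "cb_instances": ["cbi-1", "cbi-2", "cbi-3"],
}
```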

4.1.2. Toolbox

The toolbox maintains the profiles of the available NFs and NCs. These profiles can be dynamically added, updated, and removed so as to support an agile system architecture.

An NF profile describes, for example, an NF’s name, functionalities, interfaces (e.g., in terms of service APIs), the NCs implemented by the NF, and so forth. An NC profile describes an NC’s name, purpose, the NFs implementing the NC, pre-conditions, post-conditions, and so forth. In an NC profile, the pre-conditions identify system state(s) that must be achieved before the NC can be executed. The post-conditions identify system state(s) that can be achieved after the NC is executed.

4.1.3. NF pool

The NF pool includes the NFs whose profiles are maintained by the toolbox. These NFs implement a variety of NCs, as described in their profiles. In Fig. 3, NFs implementing the same NC (e.g., one or multiple functionalities of the NC) are circled by dashed lines. It is possible for an NF to implement more than one NC.

4.1.4. Resource pool

The resource pool includes physical resources such as computing and communication resources. These diverse resources are jointly managed by the resource-management agent (RMA) and are used by NFs to support mission execution, including related data processing and communication.

4.1.5. NetGPT

The NetGPT is a large-scale network AI model that has networking knowledge and provides generative AI functionalities to facilitate network agents in making autonomous decisions. The NetGPT can be based on an LFM (e.g., Mistral series [31], LLaMA series [32], and Pangu [33]) and be a fine-tuned version of the LFM.

The NetGPT parses received information (e.g., intent and block purpose) and generates reasonings or suggestions that are provided to the network agents. For instance, given an intent or block purpose, the NetGPT may provide reasonings for intent resolution or CB instantiation. The NetGPT may provide these reasonings with respect to other information, such as information maintained in the public memory and toolbox.

4.1.6. Sandbox

The sandbox provides a virtual environment for evaluating the performance of a mission instance—that is, whether and how well the mission can meet the mission goals when executed. The sandbox also has the capability to verify whether the mission workflow generated by the mission-planning agent (MPA) and/or the NCs added/updated by the NC providers are valid. Existing verification tools at the semantic or contextual level, such as the social, legal, ethical, empathetic, and cultural (SLEEC) rules [34], can be pre-installed in the sandbox to perform this verification. The goal of verifying the generated mission workflows and added/updated NCs is to remove potential redundancy and conflicts in them. More specifically,

•A valid mission workflow should not contain any redundant or conflicting NCs. In a mission workflow, a redundant NC is one whose post-conditions have all been met or achieved (in terms of semantics or context) by other NC(s) executed before it in the mission workflow, while a conflicting NC is one whose pre-condition(s) are mutually exclusive (in terms of semantics or context) with any post-condition(s) of other NC(s) executed before it in the mission workflow.

•In the toolbox, a valid NC should not be redundant—that is, should not have the same pre-conditions and post-conditions as any existing NCs. Moreover, any NC in the toolbox should not be self-conflicting—that is, should not have any two mutually exclusive pre-conditions or post-conditions (in terms of semantics or context).

4.1.7. Network agents

There are six network agents: an MPA, a mission-instantiation agent (MIA), a mission-execution agent (MEA), a connection-management agent (CMA), a computing-block agent (CBA), and an RMA. The network agents have different types of expertise and, possibly by leveraging the NetGPT, make autonomous decisions in their subject domains. As described above, each of these network agents is a logical entity and may have multiple instances.

4.1.7.1. Mission-planning agent

The MPA receives an intent from the AF and resolves the intent to create a mission. The intent can be expressed in natural language. The MPA may use the NetGPT to perform the intent resolution. During the intent resolution, a mission goal is identified for the mission, and a mission workflow is determined for achieving the mission goal. The mission workflow is determined based on the available missions and NCs.

4.1.7.2. Mission-instantiation agent

The MIA is responsible for instantiating a mission after the mission is created; in other words, it instantiates all the CBs in the mission workflow. When instantiating a CB, the MIA identifies an existing CB instance profile or generates a new CB instance profile for the CB according to the block purpose of the CB. The block purpose can be expressed in natural language, and the MIA may use the NetGPT to analyze the block purpose and perform the CB instantiation.

4.1.7.3. Mission-execution agent

The MEA executes a mission over an instance of the mission, in support of an application. During the mission execution, the MEA coordinates (e.g., starts, pauses, resumes, and stops) the executions of the composite CBs of the mission via the CMA, according to the mission workflow, the status of CB execution, and other factors such as the network conditions and performance feedback. The MEA may consult the NetGPT and determine how to coordinate the CB executions.
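The MEA's coordination role can be sketched as a driver that walks the mission workflow and sends start/stop triggers to per-CB agents. The state names and the sequential execution policy are illustrative simplifications; a real MEA would also pause/resume CBs based on network conditions and performance feedback.

```python
# Sketch of an MEA coordinating CB executions in workflow order.
# States, triggers, and the sequential policy are illustrative.

class CBA:
    """Stand-in computing-block agent that reacts to MEA triggers."""
    def __init__(self, name: str):
        self.name, self.status = name, "idle"

    def trigger(self, command: str) -> None:
        self.status = {"start": "running", "pause": "paused",
                       "resume": "running", "stop": "done"}[command]

def execute_mission(workflow: list[str]) -> list[str]:
    """Run CBs sequentially: start each, record status, then finish it."""
    log: list[str] = []
    for name in workflow:
        cba = CBA(name)
        cba.trigger("start")
        log.append(f"{name}:{cba.status}")
        cba.trigger("stop")            # stands in for "execution finished"
        log.append(f"{name}:{cba.status}")
    return log

log = execute_mission(["CB1", "CB2"])
```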

When the UE wants to access an application supported by the mission execution, the UE as a mission participant sends a request to the MEA. Accordingly, the MEA identifies CB(s) at which the UE can participate in the mission execution and notifies the respective CBA(s) about the arrival of the UE. The CBA(s) will manage the UE’s participation in the execution of the CB(s) during the mission execution.

4.1.7.4. Connection-management agent

When a mission is executed over a mission instance to support an application, the CB instances in the mission instance communicate with each other, with mission participants, and/or with a DN that hosts the application, through mission execution paths, as indicated by the curved lines in Fig. 2. The CMA manages (e.g., selects and reselects) the mission execution paths to optimize communication efficiency with respect to a number of factors, such as mobility and loading. The CMA may consult the NetGPT to make path-management decisions.

A mission participant accesses the application by participating in the mission execution at one or multiple CBs. For each of these CBs, the CMA connects the mission participant to a CBA that manages the CB execution so the CBA can manage how the mission participant participates in the CB execution.

4.1.7.5. Computing-block agent

During a mission execution, the CBA manages the execution of a CB of the mission. If the CB is a sub-mission CB, the CBA refers to an instance of the MEA that manages the execution of the respective mission. According to the trigger received from the MEA, the CBA starts, pauses, resumes, or stops the CB execution.

The CBA selects and configures the serving NFs in the CB instance to execute the CB and to optimize the performance of the CB execution. For instance, the data management and AI performance management illustrated in Fig. 1 are performed by respective CBAs. The CBA may consult the NetGPT to make the selection and configuration decisions. The CBA monitors and notifies the MEA about the status of the CB execution.

Upon a mission participant’s arrival, as notified by the MEA, the CBA determines and enforces how the mission participant should participate in the CB execution. For example, the CBA notifies the mission participant to join in or to refrain from the CB execution and correspondingly configures the related serving NFs to accept or reject data from the mission participant.

4.1.7.6. Resource-management agent

The RMA manages the resources in the resource pool jointly for a mission instance to support the mission execution and optimize the execution performance. For instance, it allocates compute resources to an NF and networking resources (e.g., bandwidth and queue) along a communication path. It may perform resource management according to a request from the CMA and the MEA during mission instantiation and mission execution, as described in Sections 4.2 and 4.3.1, respectively. The RMA can consult with the NetGPT to make resource-management decisions.
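A minimal sketch of such joint resource management might look as follows; the pool contents, site names, and the all-or-nothing policy for path allocation are assumptions made for illustration:

```python
# Illustrative resource pool managed by an RMA-like entity: compute units per
# site for NFs, and bandwidth per link for mission execution paths.
class ResourcePool:
    def __init__(self, compute, bandwidth):
        self.compute = compute        # free compute units per site
        self.bandwidth = bandwidth    # free bandwidth per (src, dst) link
        self.grants = []              # record of successful allocations

    def allocate_nf(self, nf, site, units):
        """Allocate compute resources to an NF at a site."""
        if self.compute.get(site, 0) < units:
            return False
        self.compute[site] -= units
        self.grants.append(("compute", nf, site, units))
        return True

    def allocate_path(self, path, bw):
        """Reserve bandwidth on every link of a path, all-or-nothing."""
        links = list(zip(path, path[1:]))
        if any(self.bandwidth.get(l, 0) < bw for l in links):
            return False
        for l in links:
            self.bandwidth[l] -= bw
        self.grants.append(("bandwidth", tuple(path), bw))
        return True
```

The all-or-nothing check before any deduction keeps the pool consistent when a path cannot be fully provisioned.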

4.2. Mission-planning phase

Mission creation and mission instantiation take place in the mission-planning phase in a sequential manner. A mission can be instantiated after being created. Interactions among CM entities in the mission planning phase are illustrated in Fig. 4 and elaborated below. The mission creation and instantiation are collaboratively managed by the MPA, MIA, CMA, and RMA, as described below. These network agents form a hierarchical multi-agent system for mission-execution management, where the MPA is at the top of the hierarchy.

4.2.1. Mission creation

The AF can request to create a mission for an application by sending an intent to the MPA. If the AF request is authorized, the MPA can use the NetGPT to resolve the intent and create the mission accordingly. During the intent resolution, the mission goal is identified; if the mission goal is valid, a mission workflow is generated to achieve the mission goal. The mission workflow specifies a networking procedure of CB(s), each corresponding to an NC or another mission, generated based on the available missions and NCs. Templates of the available missions are maintained in the public memory, while profiles of the available NCs are in the toolbox.

The mission workflow may indicate interdependency among the CBs, which constrains the ordering or sequence of the executions of the CBs. A CB that is dependent on (or follows) another CB should be executed after the other CB has been executed. CB interdependency is directional and transitive. If a CB1 is dependent on a CB2, the converse is not necessarily true unless stated otherwise. A CB1 is considered dependent on a CB3 if it depends on (or follows) a CB2 that is itself dependent on the CB3. The mission workflow does not have circular CB interdependency.
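The interdependency rules above (directional, transitive, acyclic) can be captured in a few lines; the CB names are illustrative:

```python
# CB interdependency: deps[cb] lists the CBs that cb directly depends on.
def transitive_deps(deps, cb):
    """All CBs that must complete before `cb`, following transitivity."""
    out, stack = set(), list(deps.get(cb, []))
    while stack:
        d = stack.pop()
        if d not in out:
            out.add(d)
            stack.extend(deps.get(d, []))
    return out

def is_acyclic(deps):
    """A workflow is valid only if no CB transitively depends on itself."""
    return all(cb not in transitive_deps(deps, cb) for cb in deps)

WORKFLOW = {"CB1": ["CB2"], "CB2": ["CB3"], "CB3": []}
```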

The MPA validates the mission workflow first by validating each of the CB(s) using the MIA. When validating a CB, the MPA provides the block purpose of the CB to an MIA. The MIA uses the NetGPT to analyze the block purpose and attempts to determine a CB instance profile for the CB accordingly. The CB is valid if a CB instance profile can be determined, and invalid otherwise. The mission workflow is valid if and only if all the CB(s) are valid. The MPA may validate the CBs using the same MIA instance or different MIA instances.

The MPA may need to repeat the above steps—that is, workflow generation and validation—multiple times, in order to obtain a valid mission workflow. Afterward, the MPA generates a mission template for the mission, which describes the mission goal and the mission workflow. The MPA stores the mission template in the public memory. The mission is then considered to be created. The AF can later request the MPA to delete the mission at any time, which will cause the mission template to be removed from the public memory.

4.2.2. Mission instantiation

A mission is executed to support an application over an instance of the mission, which includes the necessary NFs and associated resources for the mission execution. The MPA uses the MIA to create the mission instance (i.e., instantiate the mission) based on the mission template. The mission instance may be created upon an internal trigger of the MPA or according to an authorized request, such as from the MEA or the AF. The MPA then generates a mission instance profile for the mission instance and stores the mission instance profile in the public memory.

When instantiating the mission, for each CB in the mission workflow, the MPA interacts with an instance of the MIA, such as an instance associated with the NC corresponding to the CB, to instantiate the CB. The MPA provides the block purpose of the CB to the MIA, and the MIA uses the NetGPT to analyze the block purpose and resolve the block purpose into a CB instance profile.

The CB instance profile describes the block purpose of the CB and specifies the composite NFs of the CB instance. The NFs are selected by the MIA from the NF pool, according to their profiles maintained in the toolbox. For each selected NF, the MIA uses the RMA to allocate or reserve resources from the resource pool to support the mission execution. The CB instance profile can be an existing one identified from the public memory or a newly generated one. In the former case, the CB instance profile refers to a mission instance profile when the CB is a sub-mission CB. In the latter case, the MIA stores the CB instance profile in the public memory.

Before finalizing the mission instance and storing the mission instance profile, the MPA may use the sandbox to evaluate whether it can deliver a satisfactory end-to-end performance, for example, as required in the mission goal. If performance improvement is needed, the MPA will provide performance feedback to the MIA, and the MIA will accordingly optimize individual CB instances (e.g., by reselecting the serving NFs, changing the NF configuration, and/or adjusting the resource allocation or reservation).

4.3. Mission-operation phase

The mission-operation phase includes intertwined mission execution and mission participation, where the latter commences during the former, and the former can be triggered by the latter. Interactions among CM entities in the mission operation phase are illustrated in Fig. 4 and elaborated below. The mission execution is collaboratively managed by the MEA, the CMA, the CBA, and the RMA, as described below. These network agents form a hierarchical multi-agent system for the mission-execution management, where the MEA is at the top of the hierarchy.

4.3.1. Mission execution

After an instance of a mission is created, the mission can be executed over the mission instance to support an application. The mission instance profile is maintained in the public memory. During the course of mission execution, the individual CBs of the mission are executed using resources (e.g., NFs) in the respective CB instances. As a CB is being executed, the corresponding CB instance receives, processes, and transmits data related to the application. The data may be received from or transmitted to another CB instance, a mission participant, or a DN that hosts the application.

There are two modes of mission execution: the data-driven mode and the control-driven mode. These two modes can be engaged in combination to execute a mission; for example, the data-driven mode may be used for one part of a mission and the control-driven mode for another part. In the data-driven mode, executions of the CBs are driven by data reception. For example, after an NF receives data from another NF during the execution of a CB, the NF processes the data and—according to the result of the data processing—starts the execution of another CB (e.g., by transmitting the processed data or new data to an NF, a mission participant, or the DN). To enable the data-driven mode, the MEA provides the NFs in the mission instance with rules that describe when and how to start the execution of a CB. The rules are derived from the mission workflow. Unlike the control-driven mode elaborated below, executions of the CBs do not require coordination by the MEA.
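The data-driven mode can be sketched as follows, where the rule keys, CB names, and the data-processing stand-in are assumptions; the point is that each NF forwards execution to the next CB purely from pre-installed rules, without consulting the MEA:

```python
# Data-driven mode sketch: the MEA derives rules from the mission workflow and
# installs them ahead of time; execution then cascades on data reception.
RULES = {  # (current CB, processing result) -> next CB to start
    ("CB_ingest", "ok"): "CB_process",
    ("CB_process", "ok"): "CB_deliver",
}

def on_data(cb, data, trace):
    """Handle data arrival at a CB and cascade to the next CB per the rules."""
    trace.append(cb)
    result = "ok" if data else "drop"   # stand-in for real data processing
    nxt = RULES.get((cb, result))
    if nxt:
        on_data(nxt, data, trace)       # start the next CB immediately
    return trace
```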

In the control-driven mode, the MEA obtains the mission template from the public memory or from the MPA and coordinates (e.g., starts, pauses, resumes, and stops) executions of the CBs with respect to the mission workflow, as described in the mission template. If a CB is dependent on another CB in the mission workflow, the MEA triggers the execution of the CB only after receiving information about the execution completion of the other CB. On the other hand, if there is no interdependency between the two CBs, the MEA can trigger their execution at the same time.
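The coordination logic of the control-driven mode amounts to a topological schedule: trigger a CB only once all CBs it depends on have completed, and trigger independent CBs in the same round. A minimal Kahn-style sketch, with illustrative CB names:

```python
def execution_rounds(deps):
    """deps[cb] = set of CBs that must finish before cb starts.
    Returns rounds of CBs the MEA can trigger concurrently."""
    pending = {cb: set(d) for cb, d in deps.items()}
    rounds = []
    while pending:
        ready = sorted(cb for cb, d in pending.items() if not d)
        if not ready:
            raise ValueError("circular CB interdependency")
        rounds.append(ready)                 # these CBs are triggered together
        for cb in ready:
            del pending[cb]
        for d in pending.values():
            d.difference_update(ready)       # completion notifications arrive
    return rounds
```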

For each CB of the mission, the MEA identifies a CBA—for example, according to the respective CB instance profile identified in the mission instance profile—and coordinates the execution of the CB via the CBA. It should be noted that, if the CB is a sub-mission CB, the CBA refers to an instance of the MEA that manages the execution of the respective mission.

The CBA obtains the CB instance profile from the public memory or from the MEA, selects NFs from the resources specified in the CB instance profile, and configures the selected NFs (referred to as “serving NFs”) to execute the CB. The CBA may (re)select the serving NFs according to the arrival of mission participants, as notified by the MEA (described later in Section 4.3.2). When the CB is being executed, the serving NFs receive, process, and transmit data. During the course of the CB execution, the CBA monitors the performance of the CB execution and may adjust the configuration of the serving NFs (e.g., the parameters of data processing logic in these NFs) or interact with the RMA to schedule resources for the serving NFs in order to optimize the execution performance at runtime.

The CBA notifies the MEA about the progress (e.g., completed or not) and status (e.g., performance) of the CB execution. According to these notifications, the MEA coordinates the execution of other CBs that have interdependency with the CB. According to real-time feedback from the customer, the MEA may use the RMA to adjust the overall resource allocation and scheduling across all the CBs of the mission, so as to optimize the end-to-end performance of the mission execution.

4.3.2. Mission participation

As described above, a mission supports an application through a mission execution, and the mission is executed over a mission instance using associated NFs and resources. In order to access the application, the UE requests the MEA to establish a mission session through the mission instance and uses the mission session to participate in the mission execution. If the application is hosted in a DN, the UE also indicates to the MEA that the mission session is targeting the DN. If the mission execution has not yet started, the MEA may start it upon the UE’s request.

When establishing the mission session, the MEA identifies the CBs of the mission at which the UE can participate in the mission execution and the respective CBAs that manage the CBs’ execution during the mission execution. The MEA can identify the CBs and the CBAs according to the mission template and the mission instance profile. The MEA notifies the CBAs about the arrival of the mission participant (i.e., the UE) and connects the UE to the CBAs so the CBAs can manage the UE’s participation in the CB executions. The MEA further uses the CMA to (re)select the mission execution paths so data related to the application can be transported among and processed by the CB instances, the UE, and the target DN (if any). The MEA or CMA interacts with the RMA to allocate necessary communication resources along the mission execution paths.

5. Advantages of A-Core with use cases

This section describes the advantages of A-Core through two use cases: ① an NM use case showing the autonomous programming of NCs and mission workflows, and ② a vertical application case showing the dynamic reuse/updating of NCs during mission execution. The advantages of A-Core can be summarized as follows:

Autonomous programming of a network procedure to support any services. In an existing core network, any network workflows or procedures (e.g., protocol data unit (PDU) session establishment, modification, and release) executed by NFs must be manually predefined according to relevant standards (e.g., 3GPP documents). However, in the A-Core network, the MPA can automatically program/generate undefined network procedures—that is, new missions—according to customer intents. This autonomous network procedure programming capability is enabled by the NetGPT’s ability to resolve customer intents and the pre-stored NCs in the toolbox, which can be retrieved by the MPA to compose any new missions. Details of how the multiple agents in A-Core work together to resolve a customer’s intent into a mission workflow are described in the use case in Section 5.1.

Allowing third-party-provided NCs. In an existing core network, any operations (e.g., the Nsmf_PDUSession_CreateSMContext operation used in establishing a PDU session) should be standardized before they can be implemented. In contrast, in the A-Core network, third-party-provided NCs can be dynamically registered to A-Core and can be retrieved/reused by the MPA to compose new missions. The use case in Section 5.1 provides details on how third-party NCs are registered to A-Core.

Supporting the dynamic reuse of NCs. Any predefined network procedures or workflows in an existing core network must be executed under specific network conditions/configurations. To cover as many potential scenarios or cases as possible, various network procedures must be defined for different network conditions/configurations, even if they serve the same purpose or provide the same service. However, in A-Core, the mission workflow for a service or purpose can be reused in any scenario with different network conditions/configurations. When executing a mission, the CMP agents (e.g., the MEA or MPA) in A-Core can dynamically add, update, or delete NCs in the mission workflow to adapt to current network conditions/configurations. Details of how NCs are dynamically updated as a mission is executed are described in the use case in Section 5.2.

5.1. Automated programming of NCs for network management

In existing telecommunication-network standards (e.g., 3GPP documents), massive complex network management procedures are defined. These procedures have no flexibility that can allow them to be adapted to different user intents and network conditions [16]. In contrast, A-Core can automatically program/fabricate registered NM NCs to form customized NM procedures (in the form of NM missions) according to the operator’s intent and current network conditions, thereby resolving the issues of complexity and a lack of flexibility. In this use case, we show how A-Core can automatically generate and execute an NM mission to establish a connection between an idle device and a DN. Fig. 5 shows the detailed workflow of this use case.

5.1.1. NC registration

As the precondition for A-Core-enabled automated network management, the necessary NM NCs must be registered in the toolbox for A-Core to retrieve and call. Each NM NC may contain a series of computing or signaling steps that correspond to a specific NM behavior or action. For example, a device-paging NC corresponds to a paging procedure between an idle device and the RAN; a session-establishment NC corresponds to a procedure to create a PDU session; and a radio-bearer (RB)-establishment NC corresponds to a procedure to establish an RB for a paged device in the RAN. The NM NC can be provided and registered by the operator or by external providers (e.g., a third-party AF) through the NC registration procedure shown in Fig. 6.

Details of the NC registration procedure are explained as follows:

Step 1. The NC provider (either the A-Core operator or a third-party provider) sends a request to the NetGPT to register new NC(s) or update existing NC(s).

Step 2. After receiving the NC registration/update request, the NetGPT notifies the controller of the toolbox to be ready to receive NC profiles from the NC provider; this NC update notification may include the interface/address information of the NC provider. Upon receiving the NC update notification, the toolbox controller may respond to the NetGPT with the interface information required for receiving the NC profiles and the format requirements for an acceptable NC profile (e.g., the profile must be in JavaScript object notation (JSON) format).

Step 3. According to the received interface and the NC profile format information, the NetGPT notifies the NC provider to transmit the NC profiles to the toolbox through the required interface, following the required format.

Step 4. The NC provider transmits the newly added/updated NC(s) to the toolbox according to the interface and format requirements received in step 3 in Section 5.1.1.

Step 5. The added/updated NC(s) are verified:

Step 5.1. After receiving all added/updated NC(s) from the NC provider, the toolbox controller sends profiles of these NC(s) to the sandbox to check for potential self-conflicts and redundancy among them and all pre-stored NCs in the toolbox.

Step 5.2. The sandbox verifies the received NC(s) according to the pre-/post-conditions in their profiles. Since all pre-stored NCs in the toolbox should have been verified by the sandbox before being stored, the sandbox is assumed to maintain the profiles of the pre-stored NCs in the toolbox.

Step 5.3. The sandbox outputs the verification result to the toolbox controller.

Step 6. If any redundancies/conflicts are found in the received verification result, the toolbox controller sends a request to the NC provider to modify the NC(s). After modification (step 7 in Section 5.1.1), the NC provider can re-execute step 4 in Section 5.1.1 to send the modified NC profile(s) for verification. Steps 4-7 in Section 5.1.1 may be repeated multiple times until all added/updated NC(s) pass the verification.

Step 7. In this step, the NC provider modifies the NC(s), as described in step 6 in Section 5.1.1.

Step 8. The toolbox stores the profile(s) of the newly added/updated NC(s) that pass the verification.

Step 9. The toolbox notifies the NetGPT about the completion of the NC update after all newly added/updated NC(s) are stored.

Step 10. The NetGPT notifies the NC provider about the completion of the NC registration after receiving the NC update completion notification from the toolbox.
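Steps 4–8 above form a verify-and-revise loop, which can be condensed as follows. The profile fields and the redundancy/conflict heuristic are assumptions for illustration only, standing in for the sandbox’s real pre-/post-condition checks:

```python
# Sketch of the NC registration loop: the toolbox stores an NC profile only
# after the sandbox finds no redundancy or conflict with pre-stored NCs;
# otherwise the provider revises and resubmits.
def sandbox_verify(profile, stored):
    if any(p["name"] == profile["name"] for p in stored):
        return "redundant"
    if any(p["post"] == profile["pre"] and p["pre"] == profile["post"]
           for p in stored):
        return "conflict"   # crude stand-in for a pre-/post-condition clash
    return "ok"

def register_nc(profile, stored, revise, max_rounds=3):
    for _ in range(max_rounds):          # steps 4-7 may repeat
        verdict = sandbox_verify(profile, stored)
        if verdict == "ok":
            stored.append(profile)       # step 8: toolbox stores the profile
            return True
        profile = revise(profile, verdict)   # steps 6-7: provider modifies
    return False
```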

5.1.2. Intent resolution by the MPA

Once the customer (i.e., the operator or the third-party AF) has an intent or request for network management, it sends the intent to A-Core for resolution. Upon receiving a customer intent, multiple agents in A-Core perform the following intent-resolution procedure to resolve it into a mission workflow. Fig. 7 shows a detailed intent-resolution procedure.

Step 1. The customer sends an intent-resolution request to the MPA. The request includes the intent described in natural language, such as, “Establish a connection between an idle device and a DN.”

Step 2. The intent is decoupled by the MPA through the following sub-procedure:

Step 2.1. According to the predefined/maintained prompt templates, the MPA concatenates the customer intent with predefined prompt words/contexts to form intent-decoupling prompts. In some cases, the MPA may select different prompt templates for different types of customers or intents. The customer type information can be retrieved by the MPA from the user profile stored in the public memory.

Step 2.2. The MPA sends the generated intent-decoupling prompts to the NetGPT for processing.

Step 2.3. The NetGPT searches the public memory for one or multiple mission templates of sample/historical missions whose associated intents partially or fully match the customer’s intent. In this way, the customer intent can be decoupled into one or multiple sub-intents (i.e., the intermediate steps/goals to be achieved/met before completely solving the problem indicated by the intent) with specific interdependencies. Each sub-intent decoupled from the customer intent should correspond to one mission workflow of a mission pre-stored in the public memory. For example, the NetGPT may decouple the intent “Establish a connection between an idle device and a DN” into two sub-intents: ① “page an idle device to enroll it into the RAN,” which corresponds to a mission workflow containing a device-paging NC and an RB-establishment NC; and ② “establish a PDU session between an enrolled device and a DN,” which corresponds to a mission workflow containing a session-establishment NC.

Step 2.4. The NetGPT sends the intent-decoupling result to the MPA, where the intent-decoupling result includes all sub-intents with their dependencies and their associated mission template information.

Step 2.5. (Optional) In some cases, the MPA may send the intent-decoupling result to the customer to confirm whether the identified sub-intents are acceptable. If the customer accepts the sub-intents, the intent-decoupling procedure is completed; if the customer has further requests, steps 2.1-2.4 in Section 5.1.2 may be repeated to modify the intent-decoupling result until the customer accepts it.

Step 3. The sub-intent resolution sub-procedure. Each sub-intent decoupled in the previous step is further resolved by the MPA into a sub-mission workflow composed of one or multiple NCs with specific execution dependencies. It should be noted that, although the sub-intent is associated with a pre-stored mission workflow composed of specific NC(s), the MPA still needs to go through this sub-intent resolution procedure to retrieve detailed NC(s) from the toolbox that are suitable for the current customer and network conditions.

Step 3.1. The MPA sends the sub-intent information to the NetGPT to request the retrieval of candidate NC(s).

Step 3.2. The NetGPT searches both the toolbox, for NC(s) that are semantically similar/related to the sub-intent, and the public memory, for customer profile data, policy data, network status data, and so forth. Then, the NetGPT jointly analyzes the retrieved information to find candidate NC(s) to compose the sub-mission workflow.

Consider the example described in step 2 in Section 5.1.2. For sub-intent 1, the MPA discovers that the target device is not in an idle state; therefore, sub-mission workflow 1 will only contain an RB-establishment NC, which is different from the pre-stored mission workflow. For sub-intent 2, the MPA discovers that the target device has a registered “session level protection” service, which introduces a session-protection NC into sub-mission workflow 2.

Step 3.3. The NetGPT sends the selected candidate NC(s) to the MPA.

Step 3.4. The MPA chains the candidate NC(s) to form a sub-mission workflow for the sub-intent. The interdependencies of the NC(s) in the sub-mission workflow are determined by their pre-condition and post-condition features. In some cases, where the pre-conditions and post-conditions of two NCs are not well defined/aligned, the MPA may send a dependency-resolution request to the NetGPT to ask whether the two NCs can be chained or not. For example, if the pre-condition of the session-protection NC is “session_established,” while the post-condition of the session-establishment NC is “a session is available,” the MPA may need a resolution from the NetGPT such that the pre-condition of the session-protection NC matches the post-condition of the session-establishment NC, allowing them to be chained in sub-mission workflow 2.

Step 4. The mission workflow is generated:

Step 4.1. Given all the sub-mission workflow(s) generated in step 3 in Section 5.1.2, the MPA chains/concatenates all sub-mission workflows to form a complete mission workflow. The interdependencies of the sub-mission workflows are determined by the interdependencies of their corresponding sub-intents.

Considering the example described in the previous steps, the generated NM mission workflow includes the following NCs: an RB-establishment NC, a session-establishment NC, and a session-protection NC. These three involved NCs will be executed sequentially.

Step 4.2. The MPA may run specific algorithms (e.g., a shortest-path search) to remove any potential cyclic dependency (i.e., a case in which the post-condition of NC “A” is the pre-condition of NC “B,” while NC “B” should be executed before NC “A” in the mission workflow) from the generated mission workflow.

Step 5. The mission workflow is validated:

Step 5.1. The MPA sends a mission workflow verification request to the sandbox to verify the generated mission workflow. In the request, the mission workflow can be described as a set of NC(s) with their interdependencies.

Step 5.2. The sandbox verifies the received mission workflow to check for potential redundant and/or conflicting NCs. If any redundant or conflicting NCs are found, the MPA may re-execute steps 3 and 4 in Section 5.1.2 to retrieve alternative candidate NC(s).

Step 5.3. The sandbox sends the mission workflow verification result to the MPA.

Step 6. For a generated mission workflow that passes the verification, the MPA associates the mission goal (i.e., the customer intent) with the mission workflow to form a mission template corresponding to the generated mission workflow. After that, the MPA sends the mission template to the public memory to store it for future reuse.

Step 7. After sending the generated mission template for storage, the MPA sends a notification to the customer regarding the completion of the user intent resolution.
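The core of the procedure above, chaining candidate NC(s) by matching pre-conditions to post-conditions (step 3.4), can be sketched as follows. The NC names and condition strings are illustrative, and a plain string match stands in for the NetGPT’s dependency resolution of misaligned conditions:

```python
# Candidate NCs retrieved for the sub-intents in the running example.
CANDIDATES = [
    {"name": "rb_establishment",      "pre": "device_connected",    "post": "rb_ready"},
    {"name": "session_establishment", "pre": "rb_ready",            "post": "session_established"},
    {"name": "session_protection",    "pre": "session_established", "post": "session_protected"},
]

def chain(candidates, start_condition):
    """Order NCs so each NC's pre-condition is met by its predecessor."""
    workflow, state = [], start_condition
    remaining = list(candidates)
    while remaining:
        nxt = next((nc for nc in remaining if nc["pre"] == state), None)
        if nxt is None:
            # A real MPA would send a dependency-resolution request to NetGPT.
            raise ValueError(f"no NC with pre-condition {state!r}")
        workflow.append(nxt["name"])
        state = nxt["post"]
        remaining.remove(nxt)
    return workflow
```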

5.1.3. NM mission instantiation by the MIA

The MPA sends the generated NM mission template to the MIA to instantiate it—that is, to select and configure NF(s) for executing each NC involved in the NM mission template.

5.1.4. NM mission execution control by the MEA

Once the NM mission instance has been determined as described in Section 5.1.3, the MIA informs the MEA (via the CMA) to trigger and control the execution of the NM mission instance, which fulfills the intent of the operator/AF. After completing the NM mission execution, the MEA (as well as the MPA and MIA in some cases) stores the information on the NM mission in the historical mission dataset in the public memory. The information to be saved in the public memory includes the generated NM mission template, the NM mission instance, and the NM mission execution performance (e.g., execution time and resource consumption).

5.2. Reuse of intelligence for real-time map service

In the second use case, an autonomous driving service provider (ADSP) wants to offer a real-time map service (RMS) to its client autonomous vehicles (CAVs) in a city or on a highway segment. While driving, a CAV continually receives from the 6G system real-time map segments representing the current objects (e.g., vehicles/pedestrians) near the CAV. This RMS can provide holistic information on the CAV’s surrounding environment, which cannot be sensed by the CAV itself due to blockages and/or a limited sensing range. Each of the real-time map segments is generated by an AI model through an inferencing process. The inferencing input includes CAV sensing data uploaded by CAVs and (optionally) network sensing data, such as vehicle and/or pedestrian appearance, as sensed by integrated sensing and communication (ISAC) entities. Detailed A-Core workflows to enable this RMS are shown in Fig. 8.

5.2.1. ADSP intent resolution

The ADSP sends an intent to A-Core to request the RMS for its CAV clients. In the intent, the ADSP includes ① an untrained AI model to be trained and then used for inferencing to enable the RMS; ② training and inferencing data requirements (e.g., the need for ISAC sensing data if the inferencing performance is lower than a given threshold); and ③ training and inferencing performance requirements (e.g., minimal training accuracy and maximal inferencing latency). Upon receiving the intent, the MPA resolves it into an RMS mission template that contains the following NCs with their interdependencies:

•A data-management (DAM) NC, which provides various network data for other NCs to use; in this use case, the data provided by this NC include historical UE/vehicle sensed data (stored in the network) and the historical network sensed data for training the AI model to generate a real-time map.

•An AI-training NC, which conducts the AI model training according to the input training data and outputs a pre-trained AI model; in this use case, the training data are provided by the DAM NC.

•An AI-inferencing NC, which executes the inferencing of a pre-trained AI model according to the input inference data and outputs the inferencing results; in this use case, the pre-trained AI model is provided by the AI-training NC, while the inference data are CAV sensing data from the clients of the ADSP.
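For illustration, the resulting RMS mission template might be encoded as follows; the field names and requirement values are assumptions made for this sketch, not the A-Core profile format:

```python
# Illustrative encoding of the RMS mission template: three NCs with a simple
# dependency chain (data management -> AI training -> AI inferencing).
RMS_MISSION = {
    "goal": "real-time map service for CAVs",
    "workflow": [
        {"cb": "data_management", "depends_on": []},
        {"cb": "ai_training",     "depends_on": ["data_management"]},
        {"cb": "ai_inferencing",  "depends_on": ["ai_training"]},
    ],
    "requirements": {"min_training_accuracy": 0.95,    # assumed values
                     "max_inferencing_latency_ms": 50},
}

def execution_order(mission):
    """Derive a valid CB execution order from the declared dependencies."""
    done, order = set(), []
    blocks = {b["cb"]: set(b["depends_on"]) for b in mission["workflow"]}
    while blocks:
        ready = sorted(cb for cb, d in blocks.items() if d <= done)
        if not ready:
            raise ValueError("circular dependency in mission workflow")
        order.extend(ready)
        done.update(ready)
        for cb in ready:
            del blocks[cb]
    return order
```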

5.2.2. RMS mission instantiation and execution

The MPA sends the generated RMS mission template to the MIA for instantiation. Unlike the NM mission instantiation, in this case, the NC(s) to be executed are configured to the selected CBA (via the CMA) rather than to the NFs. More specifically, the untrained AI model provided by the ADSP in the intent is configured to the CBA of the AI-training NC. After instantiation, the MIA informs the MEA that the RMS mission instance is ready to be executed. During the RMS mission execution, the following steps are carried out:

•The MEA first coordinates the CBA of the DAM NC and the CBA of the AI-training NC to train the ADSP-provided AI model.

•After training completion, the CBA of the AI-training NC transmits the trained AI model to the CBA of the AI-inferencing NC.

•The CBA of the AI-inferencing NC deploys the trained model on specific NFs/resources, then notifies the MEA that the AI-inferencing NC is ready for use.

•After receiving the notification from the CBA regarding the AI-inferencing NC, the MEA informs the ADSP that the RMS is ready for use.

•When using the real-time navigation-map service, interested CAVs subscribe or periodically request real-time map segments from the AI-inferencing NC (via the 6G system). Based on the collected real-time sensing data from interested CAVs, the CBA of the AI-inferencing NC conducts model inferencing with the managed resources to generate real-time map segments, which it then continuously sends to interested CAVs with guaranteed latency.

5.2.3. Adding an NC to improve the mission-execution performance

During the RMS mission execution, the MEA continually monitors the execution performance (e.g., by monitoring the execution status updates from the CBAs of the involved NCs). Once the AI-inferencing performance falls below a predefined threshold, the MEA sends a new NC request to the MPA to add an NC that provides additional inferencing inputs. According to the request and the ADSP’s intent, the MPA searches the toolbox and selects an ISAC NC, which provides sensing data generated by ISAC entities, as the additional NC. The MPA then re-executes the intent-resolution sub-procedure described in Section 5.2.1 to add the ISAC NC to the RMS mission template and further notifies the MIA to update the RMS mission instance by configuring a CBA of the ISAC NC and linking it to the original RMS mission instance. After the RMS mission instance update is completed, the MEA triggers the CBA of the ISAC NC to provide network sensing data to the AI-inferencing NC, thereby involving the ISAC NC in the RMS mission execution.
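The monitoring trigger described above can be sketched as follows; the threshold, accuracy samples, and NC names are invented for illustration:

```python
# Sketch of the MEA's performance monitor: when inferencing accuracy drops
# below a threshold, an additional input NC (here, an ISAC NC) is appended to
# the workflow, standing in for the MPA/MIA update of the mission instance.
THRESHOLD = 0.9  # assumed performance threshold

def monitor(samples, workflow):
    """Return True if the ISAC NC was added in response to low accuracy."""
    added = False
    for accuracy in samples:
        if accuracy < THRESHOLD and "isac_sensing" not in workflow:
            workflow.append("isac_sensing")   # MPA adds the NC, MIA links it
            added = True
    return added
```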

5.2.4. Updating an NC according to a network status change

When the computing or communication resources of the CB running an NC (e.g., the AI-inferencing NC) experience a temporary shortage as the RMS mission is being executed, the CBA of the AI-inferencing NC informs the MEA about the resource degradation. The MEA then sends an NC replacement request to the MIA (via the CMA) to replace the corresponding CB configured with the NC. After receiving the request, the MIA re-selects another CB with sufficient resources and then configures the CBA of the new CB for instantiating the AI-inferencing NC. After completing the update of the RMS mission instance, the MIA sends the information on the new CBA of the AI-inferencing NC to the MEA for controlling the mission execution.

6. Open issues in A-Core research

Since A-Core is enabled by emerging LFM and AI-agent techniques, which themselves have many unexplored and unsolved issues, A-Core research remains at an early stage. In particular, the following open issues should be addressed to fully exploit A-Core's capabilities.

6.1. Cancellation of faults/errors caused by LFM hallucinations

As an essential infrastructure in any country or region, the core network has strict constraints on, or even zero tolerance for, operation faults and errors. However, the LFM's inferencing capability, which is the foundation enabling A-Core's capabilities, suffers from an unpreventable hallucination effect that may introduce unexpected faults/errors into A-Core's operation. For example, NetGPT may compose a mission workflow with NC(s) fabricated by the LFM's hallucination. Since those NC(s) are not registered in the toolbox, implementation errors will occur when the MIA attempts to instantiate them.

The hallucination issue is neither fully explored nor resolved in current LFM research [35]. Nevertheless, two promising approaches can reduce the fault/error probabilities in A-Core: ① forcing the AI agent to infer decisions step by step until it reaches a final result, instead of letting it infer the final result immediately; and ② applying a verification mechanism in the intent-resolution and NC-registration procedures to validate the outputs of A-Core's AI agents.
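Approach ② can be sketched as a registry check that runs before the MIA instantiates a workflow: any NC name the LFM proposed that is not registered in the toolbox is flagged as hallucinated and rejected up front, rather than surfacing later as an implementation error. The NC names and the `validate_workflow` helper are illustrative assumptions.

```python
# Illustrative set of NCs actually registered in the toolbox.
REGISTERED_NCS = {"ai-inferencing", "isac", "qos-control"}


def validate_workflow(workflow, registry):
    """Return the list of unregistered (i.e., likely hallucinated) NCs.

    An empty list means every NC in the LFM-proposed workflow is registered
    and the workflow is safe to hand to the MIA for instantiation.
    """
    return [nc for nc in workflow if nc not in registry]


# LFM-proposed workflow: the last NC is fabricated by hallucination.
proposed = ["isac", "ai-inferencing", "holographic-cache"]
hallucinated = validate_workflow(proposed, REGISTERED_NCS)

if hallucinated:
    # Reject the workflow (or re-prompt the LFM) instead of instantiating it.
    pass
```

This kind of check is cheap relative to LFM inferencing, so it can be applied to every intent-resolution output without a meaningful latency penalty.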

6.2. A-Core agent data processing with strict latency constraints

Since each AI agent involved in A-Core is enabled by an LFM (either locally deployed or remotely accessed), the computation or data-processing latency of an agent can depend heavily on its LFM's inferencing latency, which may vary significantly across the devices implementing the LFMs. For example, according to our experiments, the average inferencing time of a NetGPT enabled by the Mistral-7B-v0.1 LFM [31] is less than 10 s on an RTX 4080 Super graphics processing unit (GPU) but more than 30 s on an RTX 3080 GPU. Moreover, to complete the same task, the most advanced LFM (e.g., DeepSeek-R1 with 671 billion parameters [36]) may require only one round of inferencing, whereas a quantized LFM of smaller size (e.g., DeepSeek-R1-Distill-Qwen-1.5B [36]) may need multiple rounds of inferencing and interaction with the customer, making the inferencing latency unpredictable. Therefore, due to the highly varied and unpredictable inferencing times of LFMs, it is difficult for A-Core, at its current stage, to generate or adapt missions for services with strict delay requirements, such as URLLC.
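The scaling effect described above can be made concrete with a back-of-the-envelope feasibility check: total agent latency grows with both per-round inferencing time and the number of rounds, so a smaller multi-round model can miss a deadline that a larger one-shot model meets. The timing figures below are assumptions for illustration, not measurements.

```python
def mission_latency(per_round_s, rounds):
    """Total LFM-driven latency: per-round inferencing time times round count."""
    return per_round_s * rounds


def feasible_for(deadline_s, per_round_s, rounds):
    """Check whether mission generation can finish within a service deadline."""
    return mission_latency(per_round_s, rounds) <= deadline_s


# Assumed scenario: an advanced LFM resolves an intent in one slow round,
# while a smaller quantized LFM needs several faster rounds of interaction.
large_model_latency = mission_latency(10.0, 1)   # one-shot resolution
small_model_latency = mission_latency(4.0, 5)    # multi-round interaction

ok_large = feasible_for(15.0, 10.0, 1)   # meets a 15 s deadline
ok_small = feasible_for(15.0, 4.0, 5)    # misses the same deadline
```

The round count for the smaller model is itself unpredictable in practice, which is precisely why bounding A-Core's latency for URLLC-class services remains open.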

One feasible way to minimize the data-processing latency in A-Core is to implement the most advanced LFMs on devices with sufficient computing resources. Research trials on embedding LFMs in personal or mobile devices can help resolve this issue by relaxing the computing-resource requirements.

6.3. Quantitative benchmarks to evaluate A-Core’s performance

Most of the advantages of A-Core lie in realizing new features or functionalities that existing core networks do not support, such as autonomous programming, the reuse of mission workflows, and the admission of third-party-provided NCs. It can be difficult to compare the advantages of A-Core with those of existing core networks in a quantified way because ① the existing core network may have no data or performance metrics that can serve as appropriate benchmarks to measure the additional value brought by A-Core; and ② the performance of most of the new functionalities provided by A-Core is affected by the LFM's inferencing performance, which depends heavily on how advanced the LFM and the computing device implementing it are.

To address this issue, more research with original benchmarks is called for in the areas of both A-Core and LFM inferencing, which can encourage the future emergence of widely accepted performance benchmarks for A-Core.

7. Conclusions

In this paper, we proposed A-Core, a mission-oriented core network architecture empowered by AI for next-generation mobile telecommunications such as 6G and beyond. A-Core allows NCs to be dynamically added and upgraded and orchestrates available NCs as needed to offer services to customers. A service can provide both connectivity and data processing and is offered in the form of a mission. The mission (i.e., service) is created and executed by multiple network agents autonomously, according to the customer’s intent and with assistance from a large-scale network AI model, NetGPT. During the mission execution, the NCs, together with diverse network resources, are holistically provisioned to ensure service-level performance.

A-Core enables an open, agile system that self-organizes and operates autonomously to offer the best possible extensible services. As further demonstrated by two use cases, a system upgrade simply involves adding or upgrading individual NCs, while system operation procedures can be generated or updated as missions with little or no standardization overhead. While holding the promise of revolutionizing mobile telecommunications, A-Core brings about unique challenges. It is both crucial and challenging to ensure interoperability among NCs without standardization. It is also a challenge to enable the NetGPT to take into account new or upgraded NCs on the fly, without retraining, when providing reasoning or suggestions to network agents. We leave these open issues for future research.

CRediT authorship contribution statement

Xu Li: Writing - review & editing, Writing - original draft, Methodology, Formal analysis, Conceptualization. Weisen Shi: Writing - review & editing, Writing - original draft, Software, Investigation. Hang Zhang: Writing - review & editing, Writing - original draft, Supervision. Chenghui Peng: Writing - review & editing, Conceptualization. Shaoyun Wu: Writing - review & editing, Supervision. Wen Tong: Writing - review & editing, Supervision, Project administration.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

The authors would like to thank Dr. Xueli An for her valuable comments on the writing of this paper.

References

[1]

R. Sagramsingh. AI: a global survey. IEEE-USA, Washington, DC (2019).

[2]

TS 23.288: Architecture enhancements for 5G system (5GS) to support network data analytics services. 3GPP standard. Valbonne: 3rd Generation Partnership Project (3GPP); 2022.

[3]

M.2160: Framework and overall objectives of the future development of IMT for 2030 and beyond. Geneva: Radio Communication Division of the International Telecommunication Union; 2023.

[4]

Thalanany S, Zhang Y, Deng L, Verspecht T, Maglione R, Volk A, et al. Automation and autonomous system architecture framework—phase 2. Next Generation Mobile Networks Alliance; 2024.

[5]

Bommasani R, Hudson DA, Adeli E, Altman R, Arora S, von Arx S, et al. On the opportunities and risks of foundation models. 2021. arXiv:2108.07258.

[6]

OpenAI; Achiam J, Adler S, Agarwal S, Ahmad L, Akkaya I, et al. GPT-4 technical report. 2023. arXiv:2303.08774.

[7]

Rombach R, Blattmann A, Lorenz D, Esser P, Ommer B. High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022 Jun 19-24; New Orleans, LA, USA. New York City: IEEE; 2022. p. 10684-95.

[8]

Sun K, Huang K, Liu X, Wu Y, Xu Z, Li Z, et al. T2V-CompBench: a comprehensive benchmark for compositional text-to-video generation. 2024. arXiv:2407.14505.

[9]

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, et al. Attention is all you need. In: I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan (Eds.), Advances in neural information processing systems 30. Curran Associates, Inc., Red Hook (2017).

[10]

Chen B, Zhang Z, Langrené N, Zhu S. Unleashing the potential of prompt engineering in large language models: a comprehensive review. 2023. arXiv:2310.14735.

[11]

G. Feng, B. Zhang, Y. Gu, H. Ye, D. He, L. Wang. Advances in neural information processing systems 36, Curran Associates, Inc., Red Hook (2024).

[12]

Hu E, Shen Y, Wallis P, Zhu ZA, Li Y, Wang S, et al. LoRA: low-rank adaptation of large language models. 2021. arXiv:2106.09685.

[13]

Fan W, Ding Y, Ning L, Wang S, Li H, Yin D, et al. A survey on RAG meeting LLMs: towards retrieval-augmented large language models. In: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining; 2024 Aug 25-29; Barcelona, Spain. New York City: Association for Computing Machinery; 2024. p. 6491-501.

[14]

S. Yao, D. Yu, J. Zhao, I. Shafran, T. Griffiths, Y. Cao, et al. Tree of thoughts: deliberate problem solving with large language models. In: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in neural information processing systems 36. Curran Associates, Inc., Red Hook (2024).

[15]

Guo T, Chen X, Wang Y, Chang R, Pei S, Chawla NV, et al. Large language model based multi-agents: a survey of progress and challenges. 2024. arXiv:2402.01680.

[16]

Q. Cui, X. You, N. Wei, G. Nan, X. Zhang, J. Zhang, et al. Overview of AI and communication for 6G network: fundamentals, challenges, and future research opportunities. Sci China Inf Sci, 68 (7) (2025), Article 171301.

[17]

Z. Qin, L. Liang, Z. Wang, S. Jin, X. Tao, W. Tong, et al. AI empowered wireless communications: from bits to semantics. Proc IEEE, 112 (7) (2024), pp. 621-652.

[18]

Shen XS, Huang X, Xue J, Zhou C, Shi X, Zhuang W. Revolutionizing QoE-driven network management with digital agents in 6G. IEEE Commun Mag. In press.

[19]

Z. Yang, M. Chen, K.K. Wong, H.V. Poor, S. Cui. Federated learning for 6G: applications, challenges, and opportunities. Engineering, 8 (2022), pp. 33-41.

[20]

Bornea AL, Ayed F, De Domenico A, Piovesan N, Maatouk A. Telco-RAG: navigating the challenges of retrieval-augmented language models for telecommunications. 2024. arXiv:2404.15939.

[21]

Zou H, Zhao Q, Tian Y, Bariah L, Bader F, Lestable T, et al. TelecomGPT: a framework to build telecom-specific large language models. 2024. arXiv:2407.09424.

[22]

Erak O, Alabbasi N, Alhussein O, Lotfi I, Hussein A, Muhaidat S, et al. Leveraging fine-tuned retrieval-augmented generation with long-context support for 3GPP standards. 2024. arXiv:2408.11775.

[23]

Maatouk A, Ampudia KC, Ying R, Tassiulas L. Tele-LLMs: a series of specialized large language models for telecommunications. 2024. arXiv:2409.05314.

[24]

Karapantelakis A, Thakur M, Nikou A, Moradi F, Olrog C, Gaim F. Using large language models to understand telecom standards. In: Proceedings of the IEEE International Conference on Machine Learning for Communication and Networking; 2024 May 5-8; Stockholm, Sweden. New York City: IEEE; 2024. p. 440-6.

[25]

Lin X, Kundu L, Dick C, Galdon MAC, Vamaraju J, Dutta S, et al. A primer on generative AI for telecom: from theory to practice. 2024. arXiv:2408.09031.

[26]

Y. Chen, R. Li, Z. Zhao, C. Peng, J. Wu, E. Hossain, et al. NetGPT: an AI-native network architecture for provisioning beyond personalized generative services. IEEE Netw, 38 (6) (2024), pp. 404-413.

[27]

A. Mekrache, A. Ksentini, C. Verikoukis. Intent-based management of next-generation networks: an LLM-centric approach. IEEE Netw, 38 (5) (2024), pp. 29-36.

[28]

Y. Huang, H. Du, X. Zhang, D. Niyato, J. Kang, Z. Xiong, et al. Large language models for networking: applications, enabling techniques, and challenges. IEEE Netw, 39 (1) (2025), pp. 235-242.

[29]

Manias DM, Chouman A, Shami A. Semantic routing for enhanced performance of LLM-assisted intent-based 5G core network management and orchestration. 2024. arXiv:2404.15869.

[30]

Forbes R, Strassner J, Zeng Y. Transformers and large language models as used in ETSI ISG experiential networked intelligence. In: Proceedings of the IEEE International Conference on Communications Workshops; 2024 Jun 9-13; Denver, CO, USA. New York City: IEEE; 2024. p. 1250-5.

[31]

Jiang AQ, Sablayrolles A, Mensch A, Bamford C, Chaplot DS, de las Casas D, et al. Mistral 7B. 2023. arXiv:2310.06825.

[32]

Grattafiori A, Dubey A, Jauhri A, Pandey A, Kadian A, Al-Dahle A, et al. The LLaMA 3 herd of models. 2024. arXiv:2407.21783.

[33]

Ren X, Zhou P, Meng X, Huang X, Wang Y, Wang W, et al. PanGu-Σ: towards trillion parameter language model with sparse heterogeneous computing. 2023. arXiv:2303.10845.

[34]

Troquard N, De Sanctis M, Inverardi P, Pelliccione P, Scoccia GL. Social, legal, ethical, empathetic, and cultural rules: compilation and reasoning. In: Proceedings of the 38th AAAI Conference on Artificial Intelligence; 2024 Feb 20-27; Vancouver, BC, Canada. Washington, DC: AAAI Press; 2024. p. 22385-92.

[35]

L. Huang, W. Yu, W. Ma, W. Zhong, Z. Feng, H. Wang, et al. A survey on hallucination in large language models: principles, taxonomy, challenges, and open questions. ACM Trans Inf Syst, 43 (2) (2025), pp. 1-55.

[36]

Guo D, Yang D, Zhang H, Song J, Zhang R, Xu R, et al. DeepSeek-R1: incentivizing reasoning capability in LLMs via reinforcement learning. 2025. arXiv:2501.12948.
