
1 Introduction

Cyberspace is the fifth sovereign domain after land, sea, air, and space, and bears on national security, social stability, and information sovereignty. As the primary carrier of cyberspace, the Internet has widely penetrated crucial aspects of human society, including politics, economics, national defense, diplomacy, education, and healthcare, and has become an essential infrastructure for human activities. The Internet adopts the transmission control protocol/Internet protocol (TCP/IP) stack, which was originally designed to support end-to-end communication; this design underlies its continuous evolution over nearly half a century. To date, the belief in “Everything over IP” not only remains intact but also exerts a strong inertial force, and people continue to place ever greater expectations on the TCP/IP-based Internet architecture. At present, however, the communication and transmission functions of the Internet are gradually receding in importance, while the content services it carries have received unprecedented attention. As a result, the address-driven Internet architecture no longer matches the requirements of content-centric mainstream applications.

At present, the Internet has become an open-air dump of content big data. Content big data is increasingly emerging in cyberspace from multiple heterogeneous sources and is huge, complex, and diverse. This intractable and fragmented content has made the Internet increasingly chaotic and disordered and has brought considerable challenges to its sustainable development. The research community is therefore deeply concerned about its future development. For example, Vint Cerf, the father of the Internet, advocated the concept of inclusive development that “The Internet is for everyone” [1]; the Internet Society supported Internet access as a fundamental human right; and Tim Berners-Lee, the father of the World Wide Web (WWW), emphasized that the Internet should adhere to the welfare purpose of providing accessible content services for all humanity. Recently, the Chinese government has attached great importance to network security and the sustainable development of the Internet. To create an appropriate network ecology, protect the legitimate rights and interests of the people, safeguard national security, and construct a sound cyberspace, the Cyberspace Administration of China issued the Regulations on the Ecological Governance of Network Information Content in 2020 [2].

The ongoing occurrence of diverse cyberattacks on the Internet directly endangers social stability and national security. The Internet architecture did not pay sufficient attention to security at the beginning of its design, which has intensified various security concerns. Recently, academia has begun to actively explore new architectures that are more secure and trustworthy. These include mimetic defense network architecture based on biological mimicry [3], network security architecture based on biological immune mechanisms [4], the active immune computing mode of trusted network architecture [5], address-driven Internet security architecture [6], blockchain-based Internet endogenous security architecture [7], and multilateral co-governance multi-identity future network architecture [8]. In particular, novel content-driven network architectures [9], which adopt content-centric principles, have also received increasing attention. Representative schemes include TRIAD, DONA, CBCB, PSIRP, NetInf, CCN, SOFIA, NDN [10], and XIA.

Because the Internet essentially adopts an address-driven architecture, its challenges mainly come from the mismatch with cyberspace’s new content-centric application paradigm. To this end, inspired by the symbiosis phenomenon among different species in nature, we propose a dual-meta network architecture driven by content and address. Its initial principles and implementation mechanisms regarding intelligent content governance are presented in [11]. This paper details the dual-meta network architecture from a top-level perspective and systematically presents its innovative principles and latest advancements. The proposed dual-meta network architecture offers a suitable and feasible solution to the Internet development dilemma by integrating the address-driven Internet (the primary structure) with a content-driven broadcast-storage network (the secondary structure).


2 Requirements for dual-meta network architecture

After clarifying the current situation of the Internet architecture and analyzing the Internet’s development requirements in present scenarios, we believe that it is critical to explore a new network architecture. The development of the Internet has significantly promoted the progress of human civilization; however, it has also caused several crises and challenges, which became apparent toward the end of the 20th century. Since the early 1990s, with the lifting of the ban on commercial Internet applications and the emergence and rapid development of the WWW, the primary application paradigm of the Internet has shifted from end-to-end data transmission to content transmission. With the rise of user-generated content (UGC) such as forums, Weibo, WeChat, social networks, short videos, and multimedia, and with the advent of big data, Internet users have gradually turned from content consumers into producers. In addition, the inherent flaws of the Internet architecture in content governance have turned the Internet into a chaotic space for the wanton spread of rumors and false information, significantly weakening its trustworthiness as a spiritual home of humankind.

A conspicuous fact is that the Internet architecture adopts the address-driven TCP/IP protocol stack, whereas the Internet’s primary role has shifted from address-centric end-to-end data communication to the provision of content-centric sharing services to numerous users. Because end-to-end communication is a crucial design principle of the Internet, the current architecture is address-driven and targets the reachability of data transmission. However, as the mainstream application paradigm of the Internet has profoundly transformed toward content sharing, the big data trend of Internet content has intensified. A scale-free user population now transmits enormous volumes of content through multi-hop forwarding over network resources and mechanisms (network protocols, etc.) that are scale-limited. This mismatch will undoubtedly exacerbate the explosive growth of Internet traffic.

The academic community has noticed the mismatch between the address-driven traditional Internet architecture and content-centric mainstream applications. Hence, new content-driven network architectures such as PSIRP and NDN have received extensive attention in recent years, reflecting a significant shift in the focus of network architectures from address-oriented to content-oriented [7]. However, most of these architectures pursue a revolutionary, clean-slate route of reconstruction, which tends to swing from one pole (address-driven) to the other (content-driven). The current Internet is a complex ecosystem that includes numerous operators and users with highly complex structures and varied appeals, so any modification to its architecture affects the entire body.

In summary, the correct way to study a new network architecture is to regard the content-centric architecture as a development goal for the current Internet, not as a reconstruction scheme. Therefore, the relationship between the new architecture (content-driven) and the existing architecture (address-driven) must be negotiated. The address-driven architecture remains the most suitable for end-to-end interactive applications, and its advantages will be difficult to replace in the foreseeable future. Consequently, the predominant position of the existing address-driven architecture cannot be changed within a short period. Thus, to overcome the current development dilemma encountered by the Internet, an efficient and feasible route is to abandon the shackles of an exclusively monistic design and adopt a dual-meta architecture of “primary structure + secondary structure.” Specifically, we propose to retain the address-driven architecture as the primary structure and to develop the content-driven broadcast-storage network as the secondary structure. The proposed architecture can cope with the challenges brought about by mainstream content-centric applications by forming a new type of dual-meta Internet architecture driven by content and address. Next, we further analyze the address-driven features of the current Internet and the challenges it encounters.


3 Address-driven features of current Internet and its challenges

Although it is difficult to define the Internet, its features are highly distinguishable. It embraces the TCP/IP protocol stack and packet-switching technology to overlay various infrastructures and has become an extensive global network. At its core, the IP protocol lies at the waist of the hourglass-shaped TCP/IP protocol stack, enabling the construction of a global packet network that is independent of any specific underlay technology. It can thus not only hide diverse heterogeneous carrier networks but also bring more openness, inclusiveness, and scalability to the Internet. Reachability in end-to-end data transport required the Internet architecture to embrace address-driven principles from the beginning of its design, and this intrinsic prioritization has shaped the Internet into a single address-driven network.


3.1 Address-driven architecture of the current Internet

The Internet namespace mainly comprises the application layer’s uniform resource locator (URL), the network layer’s IP address, and the link layer’s media access control (MAC) address. In terms of scope, URLs and IP addresses are global, whereas MAC addresses are local and depend on the actual transmission medium. The IP address, a global identifier in the form of a binary string, uniformly identifies any entity in the network and thus acts as a bridge between the upper and lower layers of the TCP/IP protocol stack. Using the destination IP address, the IP protocol forwards each packet hop by hop from the source to the destination. Both source and destination addresses are included in the header of each packet, so a network element can swiftly look up its forwarding table and forward the packet to the next node according to the destination address, while the source IP address allows the response packet to be traced back to the initial node. This address-based packet-forwarding method enables flexible hop-by-hop forwarding and efficient multiplexing of the transmission channel.
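The destination-address lookup described above can be sketched as a longest-prefix-match search over a forwarding table. The table entries and router names below are illustrative assumptions, not a real router configuration:

```python
import ipaddress

# Hypothetical forwarding table: prefix -> next hop (names are illustrative).
FORWARDING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "router-A",
    ipaddress.ip_network("10.1.0.0/16"): "router-B",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
}

def next_hop(dst: str) -> str:
    """Longest-prefix match on the packet's destination address."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FORWARDING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # most specific prefix wins
    return FORWARDING_TABLE[best]
```

For example, a packet for 10.1.2.3 matches all three prefixes, but the /16 entry is the most specific, so it is forwarded to "router-B"; the source address plays no role in the lookup, which is the traceability weakness discussed later.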

Furthermore, the URL was invented by Tim Berners-Lee and adopted by the World Wide Web Consortium as a standard method for describing the location of, and accessing, resources available on the Internet. All resources on the Internet are organized according to URLs, and the domain name system (DNS) is responsible for resolving the mapping between URLs and IP addresses. The concise and easy-to-use URL has dramatically promoted the shift of mainstream Internet applications from address-centric end-to-end communication to content-centric sharing. However, as a locator, the URL retains address-oriented features and only describes the location of content resources on the Internet. This restricted identification capability leaves the URL incapable of describing the rich semantics of content resources, exacerbating the heterogeneity, fragmentation, and disorder of the content big data currently emerging on the Internet.

Address-centric principles run through the entire development process of the current Internet architecture. Owing to its IP address-centric manner, the connectionless IP protocol can support all types of applications upwards and multiplex various network resources downwards, thereby promoting the rapid development of Internet applications. When the URL was introduced, the WWW was in its infancy; although requirements for content sharing were still indefinite, the URL could not bypass address-centric principles. Consequently, as content traffic has grown, content resources have fallen out of order and become difficult to find and manage.


3.2 Challenges introduced by the content sharing application paradigm

The primary application paradigm of the Internet, which aimed at end-to-end data transmission and took the address-driven architecture as its implementation standard, has been replaced by a complex application paradigm characterized by content sharing. This transition has prompted the Internet to evolve from a random network that obeys a Poisson distribution into a scale-free network with a power-law distribution. Moreover, the content-centric feature of the emerging application paradigm confronts the Internet with structural problems such as nonlinear congestion, redundant content transmission, and disordered content. According to prediction reports from Cisco Systems (Cisco) and the International Data Corporation (IDC) [12,13], more than 90% of global IP traffic is related to content-sharing applications, and by 2025 the total amount of global data is expected to reach 175 ZB. Although the analysis results may vary with different statistical methods, the indisputable truth is that the total amount of global data is growing exponentially, at a rate matching or even exceeding Moore’s Law.

The shift of the mainstream application paradigm to content sharing is becoming increasingly incompatible with the address-driven, hop-by-hop Internet infrastructure. First, the client/server (C/S) or browser/server (B/S) mechanism based on passive pull often fails to ensure the punctuality of content transmission and is easily affected by network communication performance. From request to response, a delay in content access is inevitable because of the unstable delay of the backbone network. Although content can be actively pushed to the user side through terminal caching, popular information is prone to reaching critical mass within a short period, thereby aggravating the transmission delay. Moreover, the current Internet mainly adopts C/S- or B/S-based content access mechanisms, which usually involve two interaction processes, request and response, so their reliability depends on the availability of the Internet infrastructure. Consequently, if any network element on the path is unavailable, timely content access becomes impossible.

Owing to the openness and convenience of the Internet, the professional barriers to generating content are disappearing. Internet users and various UGC platforms continuously produce all kinds of content, filling cyberspace with heterogeneous, fragmented, unstructured or semi-structured content big data from innumerable users. On the Internet, content generators develop and share content in virtual cyberspace, whereas users access content, provide feedback, and submit personalized requirements to content generators. To a certain extent, content generation has become user-dominated; thus, catering to users is a profit-seeking option for content generators. However, the emergence of content big data makes it effortless for the Internet to fall into the dilemma of “publishing followed by governance”, which results in “easy publishing and difficult governance.” Further, because content generators are individuals, small groups, and other UGC organizations, content values are easily driven by personal benefit and dominated by small circles. This has led to chaotic phenomena such as fraud, rumors, and one-sided information on the Internet, and the resulting chaotic cyberspace has introduced additional challenges in guiding society’s value orientation and governing the content ecology.


3.3 Research on content-driven network

To support the current content-sharing application paradigm, academia and industry have proposed various content-driven architecture solutions, typified by content delivery networks (CDNs) and information-centric networking (ICN) [14]. The dirty-slate CDN is a logical network overlaid on the TCP/IP architecture, which usually employs the Internet infrastructure to actively push content to the edge and thereby optimize the performance of content-sharing applications. In contrast, the clean-slate ICN considers content the first element of the network architecture and attempts to replace address-centric routing with content-centric routing, fundamentally changing the TCP/IP protocol stack in which IP packet switching lies at the core of the current Internet architecture. ICN is thus a new type of network architecture that constructs a future content-driven network according to a revolutionary concept.

Although the CDN architecture can actively push content resources to the edge closer to users and enhance application performance through edge caching and computing, it is still an overlay network constructed on top of the TCP/IP architecture. With the blossoming of upper-layer content-sharing services, the load on the slim waist of the address-driven architecture increases rapidly. Even though a CDN can decrease content-access delay and redundant traffic transmission, it can hardly remedy the security flaws of the Internet architecture. Moreover, if the Internet infrastructure becomes unavailable owing to the destruction of DNS servers, backbone routing, etc., the CDN will be incapable of providing content delivery services. In addition, the relationships among stakeholders, such as content providers, CDN operators, network operators, and network users, have grown increasingly complex.

Compared with constructing a content-driven logical network over the existing Internet, ICN adopts the clean-slate principle, which reveals that the address-driven architecture does not match the content-sharing application paradigm in terms of principles and technologies. We believe that the perspective of ICN may be more valuable than its specific solutions, such as TRIAD, DONA, CBCB, PSIRP, NetInf, CCN, SOFIA, NDN, and XIA. However, ICN typically employs a reconstruction strategy; in its construction stage, the low return on investment for Internet service providers inevitably slows its development. In addition, even with the incremental deployment strategy that NDN adopts, it remains challenging to integrate content-centric networking mechanisms, such as the forwarding and routing of Interest and Data packets, with the TCP/IP protocol stack. Moreover, NDN is currently in the embryonic stage of technological innovation; according to the Hype Cycle for Enterprise Networking and Communication Technology published by Gartner [15], it will take at least ten years to reach the plateau of productivity. Scott Shenker, a professor at the University of California, Berkeley, agrees that the deployment of NDN will take a long time [16], even though he believes that NDN is a candidate for future Internet architecture.


4 Principles of dual-meta network architecture driven by content and address

With the rapid expansion of the Internet, new technologies and applications are constantly emerging. Simultaneously, the single address-driven architecture encounters various intricate challenges, such as inefficient routing of content big data, lack of semantic identification, and cybersecurity risks. To resolve these challenges at low cost, and inspired by the symbiosis between different species in nature, we propose a dual-meta network architecture driven by content and address. The proposed architecture is characterized by security, trustworthiness, ubiquity, guaranteed sovereignty, and governability. The dual-meta network adopts the uniform content label (UCL), a Chinese National Standard, to combine the address-driven primary structure with the content-driven secondary structure, providing a new and feasible solution to the development dilemma of the single address-driven architecture.


4.1 Inspiration from the interspecies symbiosis phenomenon

In nature, interspecies symbiosis refers to the phenomenon in which two organisms live together to the benefit of one or both while remaining able to live independently. For example, there is a symbiotic relationship between anemones and hermit crabs. Anemones usually settle on hermit crab shells and use their cnidocytes to ward off predators, thereby indirectly protecting the hermit crabs; in turn, hermit crabs carry the anemones around, allowing them to fetch more food. Furthermore, anemones and hermit crabs can also live separately.

This natural symbiosis phenomenon is enlightening for the design of future network architectures. On the one hand, it is impractical to adopt a reconstruction strategy within a short period; on the other hand, using the patch strategy to modify the Internet architecture is unsatisfactory. The current Internet has evolved into a complex ecosystem with multiple operators and users whose requirements differ from one another, and the concrete technical inertia of address-driven principles continues to shape the development of Internet architecture, technology, and applications.

Moreover, the address-driven architecture still has irreplaceable advantages for end-to-end interactive applications. Nevertheless, an in-depth analysis of the challenges encountered by the Internet shows that, as content-centric applications have become the mainstream paradigm, the Internet exhibits scale-free characteristics in multiple dimensions, such as scale, traffic, number of users, and data volume, and the single address-driven architecture does not match the mainstream content-sharing application paradigm. Thus, inspired by the interspecies symbiosis phenomenon, it is possible to design a content-driven secondary structure adapted to content-sharing applications while maintaining the dominance of the address-driven Internet, thereby forming a symbiotic relationship.

Accordingly, employing the above principles, we propose a dual-meta network architecture driven by content and address. In the proposed architecture, the address-driven Internet is the primary structure, whereas the content-driven broadcast-storage network, which follows the “radiation-replication” model, is the secondary structure.


4.2 Rich semantic content-driven metadata UCL

Naming and identification are key technologies for ensuring information uniqueness, assurance, security, traceability, etc. In the dual-meta network architecture, the primary structure continues to use address-centric names, such as IP addresses, MAC addresses, and URLs. In contrast, the secondary structure adopts the UCL, content metadata with rich semantics, a uniform format, security, and reliability, which constitutes the content-centric identification space. The secondary structure thus obeys content-driven principles matched to the sharing and governance of content big data, covering resource aggregation, dissemination, governance, trusted authentication, and personalized services [17]. The format specification of the UCL is illustrated in Fig. 1.


Fig. 1. Format specification of UCL.

UCL, as content-driven metadata, has the semantics and management capability that the URL lacks. A UCL package adopts a two-segment format of “code + properties”. Using the various UCL fields shown in Fig. 2, a UCL package can standardize the feature information of content resources (including code, semantics, and governance information) in a multidimensional manner. In addition, UCL provides the dual-meta network architecture with supporting capabilities such as highly efficient aggregation and ubiquitous distribution for different services, personalized recommendation, semantic analysis and knowledge extraction, certification and registration for evidence-chain management, and legal governance, traceability, and accountability. For the UCL-based intelligent content governance mechanism, please refer to [11].


Fig. 2. Two-segment format of UCL package.
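The two-segment “code + properties” layout can be sketched as follows. This is a minimal illustration only: the field names and the choice of SHA-256 are our assumptions, not the normative layout defined by the Chinese National Standard.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class UCLPackage:
    """Illustrative sketch of the two-segment "code + properties" UCL format.
    Field names are hypothetical; the standard defines the actual fields."""
    # Code segment: compact identification fields.
    content_hash: str
    category: str
    # Properties segment: extensible semantic and governance metadata.
    properties: dict = field(default_factory=dict)

    @classmethod
    def index(cls, content: bytes, category: str, **props) -> "UCLPackage":
        # Digest the content so the label is bound to the exact bytes it describes.
        return cls(hashlib.sha256(content).hexdigest(), category, props)

# Index a piece of content with semantic and governance properties.
ucl = UCLPackage.index(b"breaking news ...", "news",
                       title="Example", publisher="CP-1", security_level=2)
```

The split mirrors the figure: the fixed code segment supports fast lookup and routing, while the open-ended properties segment carries the rich semantics that a URL cannot express.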


5 Reference model of dual-meta network


5.1 Reference model of dual-meta network architecture

The reference model of the dual-meta network architecture can bypass the problems of reconstructing, or planning evolution routes toward, the future Internet. As shown in Fig. 3, the architecture maintains the position of the existing Internet architecture and adds a secondary structure that matches the content-centric application paradigm. This method constructs a new dual-meta network architecture driven by content and address while avoiding the problems associated with building a new architecture from scratch. From a vertical perspective, the dual-meta network architecture comprises three layers: content publication, content routing, and content transmission. The function of the content publication layer is to index content using the UCL and to broadcast both the content and its corresponding UCL. This layer interacts with the content routing layer mainly via broadcast or multicast; the available broadcast technologies include, but are not limited to, terrestrial, satellite, and high-altitude platforms such as balloons and airships. The content routing layer accesses content based on the UCL; its functions include caching, request, forwarding, and response, which are implemented by network or terminal equipment with broadcast reception and caching capabilities. The content transmission layer is responsible for content transmission between communication entities, and the transmission method can be ICN, TCP/IP, or any forthcoming method.


Fig. 3. Reference model of dual-meta network architecture.
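The content routing layer's cache/request/forward/response behavior can be sketched as follows, assuming content is keyed by its UCL identifier. The class and method names are illustrative, not part of any specified interface:

```python
class BroadcastStorageNode:
    """Sketch of a broadcast-storage element in the content routing layer."""

    def __init__(self, upstream=None):
        self.cache = {}           # UCL identifier -> content, filled by broadcast reception
        self.upstream = upstream  # next node toward the source server, if any

    def receive_broadcast(self, ucl, content):
        # Caching: store content actively pushed by the content publication layer.
        self.cache[ucl] = content

    def request(self, ucl):
        # Response: a local cache hit is answered immediately.
        if ucl in self.cache:
            return self.cache[ucl]
        # Forwarding: on a miss, pass the UCL-indexed request upstream.
        if self.upstream is not None:
            content = self.upstream.request(ucl)
            if content is not None:
                self.cache[ucl] = content  # cache the response on the way back
            return content
        return None  # content unavailable anywhere on the path
```

A broadcast that reaches only the source node is still reachable from an edge node via request forwarding, after which the edge node serves later requests from its own cache.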

In the dual-meta network architecture, the address-driven Internet remains dominant and can evolve further. Hence, the architecture not only takes full advantage of the current Internet’s irreplaceable benefits for end-to-end interactive applications but also maximizes compatibility with the existing ecosystem. After supplementing the content-driven secondary structure, the architecture can utilize broadcasting technologies for content transmission in a scale-free manner, simultaneously broadcasting the content and its UCL through the radiative transmission mechanism. In this case, there is no restriction on the number of receiving users, and the vision of unlimited content replication can be realized. Moreover, through ubiquitous caching at user terminals, the transmission and browsing processes of content can be decoupled in the spatial dimension. In addition, uniform content-driven metadata converts content big data into semantic-rich UCL identifiers, which in turn help identify content authenticity and resolve content security problems. The features of the primary and secondary structures in the dual-meta network architecture are presented in Table 1.


Table 1. Features of the primary and secondary structures.

Feature              Primary structure            Secondary structure
                     (Address-driven Internet)    (Content-driven broadcast-storage network)
Transmission model   Bidirectional                Unidirectional/radiative
Delivery method      One-to-one                   One-to-many
Fetch method         Passive pull                 Active push
Security mechanism   Address-centric              Content-centric
                     (host traceability)          (content identification)


5.2 Advantages of the proposed network for content-sharing applications

A reference implementation of a content-sharing application based on the dual-meta network is shown in Fig. 4. The process includes the following steps: (1) Content producers (CPs) upload content through the secondary structure, which further enhances content standardization and trustworthiness. (2) The government or a third party uses the UCL to check, certify, register, analyze, and index the content; broadcasting then directly pushes the content and its corresponding UCL to network elements or terminals. (3) If the full text of the content is not present at the terminal, the end user sends a content request indexed by the UCL, and the agent entities in the broadcast-storage terminal, the broadcast-storage element, or the source server on the Internet either respond to the request or continue to forward it. In this manner, CPs, users, and other direct stakeholders of content sharing can accomplish content-centric peer-to-peer transactions and establish a ubiquitous, secure, trustworthy, and intermediary-free content ecosystem.


Fig. 4. Reference implementation of a content-sharing application based on the dual-meta network.

In the single address-driven architecture, an IP address is a virtual network address that can easily be modified or spoofed. In the IP protocol, only the destination address is used to forward each packet; the source address is not verified during forwarding, making source traceability and accountability challenging. This flaw of the IP protocol may further exacerbate the uncontrollability of the Internet, turning cyberspace into a breeding ground for various online frauds. Fortunately, the dual-meta network adopts rich semantic content-driven metadata, a security energy-level model, data encryption, digital hash digests, and other security technologies. By using BeiDou, GPS, and other satellite navigation resources to bind the key evidence elements of the content lifecycle, the proposed network provides a new endogenous security mechanism that is trustworthy and traceable throughout all processes and can be utilized by content-sharing applications.
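The idea of binding key evidence elements to a content digest can be sketched as follows. This is a simplified illustration: the field set, the use of SHA-256, and the sample values are assumptions, and a real deployment would additionally sign the record with the publisher's key.

```python
import hashlib
import json

def bind_evidence(content: bytes, publisher: str, timestamp: str, location: str) -> dict:
    """Bind publisher identity and trusted time/position (e.g. from BeiDou/GPS)
    to a content digest.  Illustrative sketch, not a normative format."""
    record = {
        "digest": hashlib.sha256(content).hexdigest(),
        "publisher": publisher,
        "timestamp": timestamp,
        "location": location,
    }
    # Digest the whole record so tampering with any evidence field is detectable.
    record["evidence_id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

def verify(content: bytes, record: dict) -> bool:
    """Check that the content bytes still match the bound digest."""
    return hashlib.sha256(content).hexdigest() == record["digest"]

# Bind an article to its publisher, publication time, and position.
evidence = bind_evidence(b"article body", "CP-1",
                         "2024-01-01T00:00:00Z", "32.06N,118.79E")
```

Because the digest is computed over the content itself, any modification after publication fails verification, which is the basis of the traceable evidence chain described above.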


5.3 Comparison of the proposed architecture with the single address-driven architecture

Various new network architectures have recently been proposed, most of which have not yet been practically applied or deployed. One critical reason is that these architectures cannot optimize stakeholders’ benefits from content-sharing applications. For a competitive and feasible network architecture, stakeholder benefits must be emphasized at the design stage, and economic benefits should be considered indispensable for evaluating new network architectures. This principle was adopted by the ChoiceNet project [18] proposed by the US National Science Foundation, which introduces an economic layer to examine how to improve stakeholders’ benefits in the network ecology. Based on this idea, we propose a comprehensive, multidimensional, multi-index evaluation model for content sharing. As shown in Fig. 5, the detailed evaluation indicators include request-forwarding hops (RFH), which evaluate the content-discovery capability of the backbone network; content request load (CRL), which evaluates the content-request capability of the edge network; content transaction benefit (CTB), which evaluates the content-profitability capability of stakeholders; and round-trip latency (RTL), which evaluates the content-access capability of the end user.

《Fig. 5》

Fig. 5. Comprehensive evaluation model for content sharing.

Compared with the single address-driven Internet, the dual-meta network architecture can bypass the layered barriers of the TCP/IP protocol stack, decrease the number of forwarding hops in content requests, and improve the content-discovery capability of the backbone network. It also promotes the evolution of the Internet toward the future integration of computing, communication, and storage. Regarding the request-load capacity of the edge network, most content requests can be hit directly in the ubiquitous edge cache, thereby effectively offloading request loads that would otherwise have to be transmitted through cellular wireless channels; the cellular channels are occupied only when local content requests miss. The reserved channel volume can then increase the request-load capacity for popular content.
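The hit-locally/miss-remotely behavior above can be captured in a toy model. The cache policy, identifier scheme, and counters here are illustrative assumptions (the paper specifies no eviction or placement strategy); the sketch only demonstrates how local hits translate into offloaded cellular load.

```python
class EdgeCache:
    """Toy edge cache: local hits bypass the cellular channel; only misses use it."""

    def __init__(self):
        self.store = {}
        self.cellular_fetches = 0
        self.requests = 0

    def request(self, ucl_id: str, origin: dict) -> str:
        self.requests += 1
        if ucl_id in self.store:      # local hit: no cellular transmission needed
            return self.store[ucl_id]
        self.cellular_fetches += 1    # miss: occupy the cellular channel once
        self.store[ucl_id] = origin[ucl_id]
        return self.store[ucl_id]

    def offload_ratio(self) -> float:
        """Fraction of requests served without touching the cellular channel."""
        return 1 - self.cellular_fetches / self.requests
```

For popular content the same item is requested many times, so the offload ratio approaches 1 and the reserved cellular capacity grows accordingly.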

Regarding stakeholder benefits, the intermediary-free content ecosystem drives content transactions directly between users and CPs, removing redundant stakeholders and significantly reducing the loss of stakeholders’ direct benefits. It also enables accurate content recommendation, pushing content to users according to their personalized interests. This increases user attachment to the content services provided by CPs, stimulates CPs to continuously innovate in content type, structure, and quality, and effectively evokes healthy competition among CPs. In terms of content access by end users, active push based on broadcasting can reduce the transmission distance between content and users and save channel resources, such as the bandwidth of the backbone network. Furthermore, as content is injected from the real space into the virtual space, the content-driven secondary structure can employ UCL to achieve the goal of “governance before publishing”, thus ensuring the security and trustworthiness of the content. As a result, a healthy, open, and orderly cyberspace content ecology can be formed, which indirectly enhances the public’s ability to judge the truth and distinguish appropriate from inappropriate content in cyberspace. Using the above evaluation model, a comparative analysis of the content-sharing capability of the single address-driven and dual-meta network architectures is shown in Table 2.
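The “governance before publishing” step can be sketched as an admission gate at the injection point from real space into virtual space. The paper does not enumerate the required UCL fields or the review policy, so the field set, the digest check, and the keyword-based placeholder policy below are assumptions for illustration only.

```python
import hashlib


def admit_for_publication(content: str, ucl: dict) -> bool:
    """'Governance before publishing': admit content only if it carries
    complete, consistent UCL metadata and passes a placeholder content check."""
    required = {"title", "publisher", "category", "digest"}  # assumed field set
    if not required.issubset(ucl):
        return False  # incomplete metadata: reject before publication
    if ucl["digest"] != hashlib.sha256(content.encode()).hexdigest():
        return False  # digest mismatch: possible tampering in transit
    return content_policy_check(content)


def content_policy_check(content: str) -> bool:
    # Stand-in for a real governance policy (e.g., classifier or review pipeline).
    banned = {"fraud", "spam"}
    return not any(word in content.lower() for word in banned)
```

Because the gate runs before injection rather than after dissemination, non-compliant or tampered content never enters the secondary structure at all.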

《Table 2》

Table 2. Comparative analysis of content-sharing capability

Content-discovery capability of the backbone network
  Address-driven architecture: Restricted by the layered TCP/IP protocol stack; the content transmission path is inconsistent with the discovery path; the semantic expressiveness of the URL is weak.
  Dual-meta network architecture: Breaks through the restriction of the layered model through broadcasting; ubiquitous caching speeds up content discovery; rich semantic UCL enables accurate content discovery.

Content-request capability of the edge network
  Address-driven architecture: Numerous duplicated requests emerge in the process of content sharing; the cellular channel capacity imposes restrictions.
  Dual-meta network architecture: The secondary structure effectively offloads multi-hop requests; bypasses the limitation of cellular channel capacity.

Content-profitability capability of stakeholders
  Address-driven architecture: Redundant stakeholders exist, and benefits are readily reduced; users lack attachment to CP content services; it is difficult to develop healthy competition among multiple CPs.
  Dual-meta network architecture: Intermediary-free transactions reduce the loss in stakeholder benefits; increases user attachment to CP content services; encourages multiple CPs to develop a healthy competitive ecology.

Content-access capability of end users
  Address-driven architecture: Restricted by the transmission delay in the backbone network; redundant unicast packets waste network resources; many simultaneous users further deteriorate network performance.
  Dual-meta network architecture: Shortens the transmission distance between content and users; broadcast and push save channel resources; bypasses contention among simultaneous users to improve performance.

 

《6 Conclusion》

6 Conclusion

Owing to the mismatch between the address-driven architecture and the content-centric application paradigm, this paper first analyzed the address-driven features and challenges of the current Internet and discussed the flaws of content-driven networks. We then proposed the design principles, the content-driven metadata UCL, and the reference model of the dual-meta network architecture driven by both content and address, and compared it with the single address-driven architecture. Overall, the dual-meta network architecture can provide a practical and feasible implementation blueprint for resolving the current Internet development dilemmas.

In China, the core Internet technologies are perceived as an Achilles’ heel. Therefore, China firmly advocates the development of dual-driven networks, in which the content-driven secondary structure assists the address-driven primary structure of the current Internet. This will create a symbiotic, divisible, and combinable dual-structure cyberspace and help reduce dependency on the single address-driven Internet. The adoption of satellite resources and the original UCL is expected to effectively protect China’s cyberspace independence, equality, self-defense, and jurisdiction. In peacetime, the secondary structure can cover the entire territory and deliver content to edge users. Furthermore, the secondary structure will serve as a strategic backup for ensuring national information sovereignty if the primary structure is damaged or unavailable, for example, during natural disasters, wars, or DNS outages.

《Compliance with ethics guidelines》

Compliance with ethics guidelines

The authors declare that they have no conflict of interest or financial conflicts to disclose.