Hypergraph Computation

Yue Gao , Shuyi Ji , Xiangmin Han , Qionghai Dai

Engineering ›› 2024, Vol. 40 ›› Issue (9): 202-215. DOI: 10.1016/j.eng.2024.04.017

Review

Abstract

Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. The graph structure is a typical tool used to formulate such correlations; however, it is incapable of modeling high-order correlations among different objects in systems and thus cannot fully convey the intricate correlations among objects. Confronted with these two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, revealing that hypergraph computation can reduce the data requirement by 80% while achieving comparable performance or improve the performance by 52% given the same data, compared with a traditional data-based method. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical usage of hypergraph computation.

Graphical abstract

Keywords

High-order correlation / Hypergraph structure modeling / Hypergraph semantic computing / Efficient hypergraph computing / Hypergraph computation framework

Highlight

• Hypergraph computation is a methodology for modeling complex high-order correlations.

• Hypergraph structure modeling, semantic computing, and efficient computing are introduced.

• The applications of hypergraph computation in diverse domains are summarized.

• An open-source library DHG for learning hypergraph neural networks is introduced.

Cite this article

Yue Gao, Shuyi Ji, Xiangmin Han, Qionghai Dai. Hypergraph Computation. Engineering, 2024, 40(9): 202-215 DOI:10.1016/j.eng.2024.04.017

1. Introduction

In numerous domains in the real world, big data analysis has emerged as a mainstream trend in research and application. However, it is important to note that certain specific scenarios still present challenges related to insufficient valid data and complex correlations within and among data. These challenges are due in part to the difficulty of data acquisition, the need for privacy protection, and the inherent complexity of data structures. Therefore, it is crucial to gain a deep understanding of the intrinsic characteristics of data, comprehend the complex correlations encapsulated in data, and employ new computational paradigms to address the issues of data scarcity and complex correlations.

One example of such complex correlations is the process of thinking and remembering in the human brain. When thinking about a problem, the brain will activate various correlations among different details, creating multidimensional connections among diverse objects. We sometimes struggle to recall something, but we can trigger our memory by retrieving related information, such as time, place, person, color, or sound. These pieces of information are linked by common attributes—for example, they may have happened at the same location or at the same time—which can help us access the forgotten memory.

This example shows how seemingly unrelated objects can actually be linked by underlying inherent characteristics. Such connections are complex and implicit, and the literature has demonstrated them to be ubiquitous and essential in the real world. For example, Milo et al. [1] uncovered the significance of high-order interactions among species for stable ecosystem diversity, and Benson et al. [2] pointed out that high-order connection patterns can better represent the intrinsic characteristics and tendencies of data (e.g., networks in biochemistry and neurobiology).

In essence, such complex high-order correlations can be divided into two groups at different scales: intra-subject high-order correlations (Fig. 1 (upper right)) and inter-subject high-order correlations (Fig. 1 (lower right)). We now provide an overview of these two types as follows:

(1) Inter-subject high-order correlations. This type of relationship constructs high-order correlations among multiple subjects. For example, as shown in Fig. 2(a), each person in a social media network can be regarded as a node, and the complex interactions among people indicate their interests, public sentiments, opinions, and so forth. As shown in Fig. 2(b), taking each individual in medical diagnosis as an entity, the correlations among individuals help determine whether they are sick or normal.

(2) Intra-subject high-order correlations. This type of relationship captures an individual subject through the interactions among its internal components. For example, in scene understanding, image patches are correlated to form high-order image feature representations. As shown in Fig. 3, the complex high-order correlations within pathological images can reflect information such as the tumor microenvironment and biomarkers. Here, both the intrinsic features of internal components and the interactions among them are required to obtain an accurate representation of the subject.

The inter-subject high-order correlations and intra-subject high-order correlations bridge the gap between the observed data and the target task. In practice, data collected by various sensors or platforms represent the objects to be analyzed (observed data), and practical problems, such as identifying a cat or dog in an image, aligning two groups of point clouds, or diagnosing the health status of an individual patient, are the target tasks. In this case, the main challenge lies in how to effectively utilize these multi-modal and multi-type representations and explore the complex correlations behind the data to achieve accurate modeling and analysis of objects.

In the past few decades, researchers have put a great deal of effort into correlation modeling with tools such as the graph structure, the random field, and the Markov chain. Among these, the graph structure is the most popular. A graph structure models the association of multiple objects by establishing pairwise relationships between pairs of objects. It has been widely applied in many traditional artificial intelligence methods, such as the well-known graph-cut algorithm, which supports data clustering, segmentation, and so forth through optimal graph partitioning. Taking databases as an example, several graph databases, including NebulaGraph and Neo4j, have emerged in recent years to better handle the correlations among data. These databases leverage the graph structure for efficient indexing and querying of content. In recent years, with the rapid advancement of deep learning, graph neural networks (GNNs) have also received much attention. Recent advances in graph-based methods, ranging from graph attention models to graph contrastive learning, have enabled novel applications and enhanced performance in image processing and data mining.

In practical applications, the suitability of graph structures for representing object relations is a crucial problem. Obviously, graph structures can explicitly capture the second-order relations of objects, but they are unable to represent higher-order correlations directly. As shown in Fig. 4, a graph structure is greatly limited when modeling high-order complex correlations: The graph can naturally describe the pairwise correlations but fails to model the higher-order correlations directly. Under such circumstances, modeling high-order, complex relations among objects has become a vital challenge.

Hypergraph-based methods have been developed for years to address this challenge, and remarkable results have been achieved with their use in various fields, such as traditional hypergraph learning. This paper introduces the notion of hypergraph computation, which aims to leverage high-order correlation to enhance the data and achieve collaborative computation using data and high-order correlations, thereby enabling accurate analysis and understanding of objects. Taking three-dimensional (3D) object recognition as an example, hypergraph computation can reduce the data requirement by 80% while ensuring comparable performance or reduce the error rate by 52% given the same data.

In this paper, we first justify the necessity of using hypergraphs to represent higher-order relations in Section 2. In Section 3, we describe hypergraph computation from three perspectives: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. Section 4 illustrates how to apply hypergraph computation in practice by exemplifying specific application tasks for intelligent medicine and computer vision. Finally, we introduce an open-source tool for hypergraph computation in Section 5 and discuss future research areas in Section 6.

2. Why do we need hypergraph modeling?

Fig. 5 illustrates the differences between conventional computation paradigms and hypergraph-based computation paradigms. In the conventional computation paradigm, the mapping function f(X) from the data to the target is learned directly, which is highly dependent on sufficient training data. Confronted with the challenges of data scarcity and complex correlations, hypergraph computation models high-order correlations R among data, knowledge, and rules through hyperedges and leverages the high-order correlations R to enhance the data X. Additionally, hypergraph computation achieves collaborative computation f(X,R) with data X and high-order correlations R, thereby offering greater modeling flexibility.

A comparison between the graph structure and the hypergraph structure is given in Fig. 6. It can be intuitively seen that the hypergraph structure has significant advantages for representing higher-order correlations. In a hypergraph, each hyperedge connects a set of vertices that share similar characteristics.

In contrast, an edge in a graph can only link two vertices and thus is limited in complex correlation modeling, such as multi-modal data modeling. With scalable hyperedges, the multi-modality/multi-type data can be directly embedded in a hypergraph. Under certain restricted conditions, graph structures can approximately model multi-order correlations by extending second-order correlations to represent higher-order correlations. For example, in a label propagation task, when the weight of each vertex is hyperedge-independent, label propagation on a hypergraph can be replaced by that on a weighted graph. However, when the vertex weight is hyperedge-dependent, label propagation on a hypergraph cannot be reduced to label propagation on a weighted graph. A simple proof is provided below.

Lemma 1. Given an irreducible Markov chain M with a finite state space, denoted as S, the transition probability for $x, y \in S$ is denoted as $p_{x,y}$. M is reversible if and only if a weighted, undirected graph G exists with a vertex set S, such that a random walk on G is equivalent to M [3].

Theorem 1. Assume a hypergraph $\mathscr{G}_{\text {ind }}=\left\{\mathscr{V}, \mathscr{E}, \boldsymbol{W}_{\text {ind }}\right\}$, where $\mathscr{V}$ and $\mathscr{E}$ are the sets of vertices and hyperedges, respectively, and $\boldsymbol{W}_{\text {ind }}$ denotes the edge-independent weights. Then, a weighted, undirected graph G exists such that a random walk on G is equivalent to a random walk on $\mathscr{G}_{\text {ind }}$.

The proof of Theorem 1 is as follows:

(1) Given any finite sequence of vertices $v_1, v_2, \cdots, v_n \in S$, a Markov chain is reversible if and only if its transition probabilities satisfy $p_{v_1, v_2} \times p_{v_2, v_3} \times \cdots \times p_{v_n, v_1}=p_{v_1, v_n} \times p_{v_n, v_{n-1}} \times \cdots \times p_{v_2, v_1}$, according to Kolmogorov’s criterion [3].

(2) A random walk on the hypergraph $\mathscr{G}_{\text {ind }}$ is equivalent to a random walk on a reversible Markov chain.

(3) According to Lemma 1, a random walk on a reversible Markov chain is equivalent to a random walk on a weighted, undirected graph G (as shown in Fig. 7(a)). The computation process is given by:

$p_{v_{1}, v_{2}} \times p_{v_{2}, v_{3}} \times p_{v_{3}, v_{1}}=0.389 \times 0.111 \times 0.333=0.0143$
$p_{v_{1}, v_{3}} \times p_{v_{3}, v_{2}} \times p_{v_{2}, v_{1}}=0.111 \times 0.333 \times 0.389=0.0143$

Theorem 2. Assume a hypergraph $\mathscr{G}_{\text {dep }}=\left\{\mathscr{V}, \mathscr{E}, \boldsymbol{W}_{\mathrm{dep}}, \gamma\right\}$ with edge-dependent weights $\boldsymbol{W}_{\mathrm{dep}}$, where $\gamma$ is the hyper-parameter indicating the edge-dependent weights. Under this condition, no weighted, undirected graph G can be found such that a random walk on G is equivalent to a random walk on $\mathscr{G}_{\text {dep }}$.

Fig. 7(b) gives the proof of Theorem 2, in which a random walk on $\mathscr{G}_{\text {dep }}$ is not equivalent to a random walk on a reversible Markov chain. The computation process is given by:

$p_{v_{1}, v_{2}} \times p_{v_{2}, v_{3}} \times p_{v_{3}, v_{1}}=0.422 \times 0.022 \times 0.400=0.0037$
$p_{v_{1}, v_{3}} \times p_{v_{3}, v_{2}} \times p_{v_{2}, v_{1}}=0.022 \times 0.533 \times 0.444=0.0052$

Thus, according to the third step of Theorem 1’s proof, Theorem 2 holds.
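
As a concrete illustration of the two computations above, the following Python sketch checks Kolmogorov’s criterion for both cases in Fig. 7. The off-diagonal transition probabilities are the values quoted above; the diagonal (self-transition) entries are filled in only so that each row sums to one and are therefore assumptions of this sketch, not values from Fig. 7.

import numpy as np

def cycle_products(P, cycle):
    # Product of transition probabilities along a cycle and along its reverse
    n = len(cycle)
    fwd = np.prod([P[cycle[i], cycle[(i + 1) % n]] for i in range(n)])
    bwd = np.prod([P[cycle[(i + 1) % n], cycle[i]] for i in range(n)])
    return fwd, bwd

# Edge-independent case (Theorem 1); diagonal entries are assumed fillers
P_ind = np.array([[0.500, 0.389, 0.111],
                  [0.389, 0.500, 0.111],
                  [0.333, 0.333, 0.334]])
print(cycle_products(P_ind, [0, 1, 2]))   # (~0.0144, ~0.0144): equal, hence reversible

# Edge-dependent case (Theorem 2); diagonal entries are assumed fillers
P_dep = np.array([[0.556, 0.422, 0.022],
                  [0.444, 0.534, 0.022],
                  [0.400, 0.533, 0.067]])
print(cycle_products(P_dep, [0, 1, 2]))   # (~0.0037, ~0.0052): unequal, hence not reversible

Equal forward and backward cycle products indicate a reversible chain (and hence, by Lemma 1, an equivalent weighted graph); unequal products rule this out.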

In practical applications, objects (i.e., vertices of the hypergraph) often play different roles in various group-level correlations. For example, an individual (represented as a vertex) may have varying levels of expertise in soccer, guitar, and computer science. In such cases, the role played by the object (vertex) should be different across different domains (hyperedges). Consequently, in real-world data, vertex weights are typically edge-dependent. As previously stated, under these conditions, label propagation on a hypergraph is not equivalent to label propagation on a graph structure generated by hypergraph expansion. This highlights the necessity of hypergraph modeling.

3. Hypergraph computation

3.1. Overview

In this paper, we introduce the concept of hypergraph computation, specifically tailored to address the issue of modeling multidimensional, complex, and high-order correlations. Hypergraph computation aims to accurately model and represent high-order complex correlations among and within objects, and then support various downstream applications such as object recognition. The hypergraph computation paradigm comprises three key components: hypergraph structure modeling, hypergraph semantic computing, and efficient hypergraph computing. The architecture of hypergraph computation is depicted in Fig. 8.

Given multidimensional or multi-modal data, the essence of hypergraph structure modeling lies in establishing high-order complex correlations among and within objects to achieve task-oriented structure modeling. In practical scenarios, the information correlations are often more intricate and always change over time. Under such circumstances, a fixed hypergraph structure is inadequate in capturing such dynamic complexities comprehensively. Therefore, it is necessary to investigate the evolutionary mechanisms of information correlations.

Moreover, observations derived from multidimensional data slices (e.g., spatio-temporal slices) often suffer from incompleteness, underdetermined information, and data noise, limiting the performance of association structure modeling. Therefore, it is crucial to explore the intrinsic methodologies of complex relationship evolution in such cases. Once high-order complex correlations among the studied objects (i.e., the hypergraph structure) have been established, accurately representing these objects with the structure becomes a key step in performing downstream tasks such as semantic cognition. To address this issue, hypergraph neural networks, which aim to achieve structural semantic representation learning through devised hypergraph convolution operators, have been developed.

When adopted in practical applications, structure-based approaches (e.g., graph-based methods and hypergraph-based methods) always present challenges in high computational complexity due to the ubiquity of large-scale data. In this case, efficient hypergraph computing mainly focuses on high-order correlation modeling toward large-scale data and computing acceleration.

In the following, we will elaborate on these three key components.

3.2. Hypergraph structure modeling

The initial step of hypergraph computation is to build a hypergraph structure of the studied object. The objective of hypergraph structure modeling is to forge high-order correlations within single-modal data, between different single-modal datasets, and across multi-modal data sourced from varied origins. The modeling falls into two principal categories: knowledge- and rule-driven modeling and data-driven modeling. The former models the high-order correlations from knowledge and rules via network-based and attribute-based methods, while the latter employs distance-based and representation-based methods to model the high-order correlations from the data. To this end, it is essential to establish a conceptual framework of the hypergraph structure, wherein each node, denoted as v, corresponds to an object to be studied, and each hyperedge, denoted as e, captures correlations among multiple nodes.

For example, in the task of visual object classification, correlations among images must be established first, with each image being represented by a node and with hyperedges indicating connections among these images; in the task of medical computer-aided diagnosis, connections among individuals are constructed, with each person modeled as a node and hyperedges representing connections among these individuals based on their medical records; in the task of survival prediction with histopathological images, connections among patches within a single histopathological image should be constructed first, in which each patch is represented by a node, and hyperedges indicate the interconnections among these patches.

In specific applications, data, knowledge, and rules may appear in various forms, such as multi-modal data from different sensors or attribute data associated with time, location, climate, and textual descriptions. Accordingly, different hypergraph structure modeling approaches are required to suit the nature of the data at hand. Table 1 [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34] revisits the literature with respect to these hypergraph modeling methods.

Knowledge and rule-based hypergraph modeling constructs hypergraphs for studied objects by directly leveraging available structural information. This process typically involves two types of hyperedges: attribute-based and network-based hyperedges. For example, given certain attributes of an object, such as geographic location, time, color, and shape, objects with similar or identical attributes can be connected by corresponding hyperedges based on these attributes, as shown in Fig. 9. In this case, objects connected by the same hyperedge share similar or identical attributes, and the hyperedge itself exhibits corresponding properties. Moreover, when studied objects have initial network structures, such as spatial location or topological connection information, network-based hyperedges can be established. These hyperedges preserve first-order correlations based on these existing network structures while extending to k-hop connections, as shown in Fig. 9.
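
As a minimal sketch of attribute-based hyperedge construction, the snippet below groups objects that share an attribute value into one hyperedge per value. The attribute names and toy records are hypothetical and only illustrate the grouping mechanism.

from collections import defaultdict

records = [
    {"id": 0, "location": "Beijing", "color": "red"},
    {"id": 1, "location": "Beijing", "color": "blue"},
    {"id": 2, "location": "Paris",   "color": "blue"},
    {"id": 3, "location": "Paris",   "color": "red"},
]

def attribute_hyperedges(records, attribute):
    # Group object ids by their value of the given attribute
    groups = defaultdict(list)
    for r in records:
        groups[r[attribute]].append(r["id"])
    # Keep only groups with at least two vertices as hyperedges
    return [tuple(v) for v in groups.values() if len(v) > 1]

e_list = attribute_hyperedges(records, "location") + attribute_hyperedges(records, "color")
print(e_list)  # four hyperedges, one per shared attribute value

Network-based hyperedges follow the same pattern, except that each group is taken from the existing first-order or k-hop neighborhood of a vertex rather than from a shared attribute value.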

In contrast to knowledge and rule-driven hypergraph modeling, data-driven hypergraph modeling generates hyperedges that do not explicitly exist through a predefined modeling mechanism. This type of hypergraph structure encompasses two categories of hyperedges: distance-based and representation-based. In the case of distance-based hyperedge construction, given observations of an object, the distances between each object and other objects can be calculated in the feature space using certain distance metrics. Objects that exhibit close proximity in the feature space are subsequently connected through hyperedges. Practical implementations of this approach entail employing methods such as k-nearest neighbor (KNN), epsilon-ball, and clustering algorithms to determine the hyperedges. In representation-based hyperedge construction methods, data can be analyzed through linear representation in the feature space. Representation coefficients are employed as a characterization of correlations to form hyperedges. Here, each established hyperedge connects objects that can be linearly represented with each other in the feature space.
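
The distance-based construction can be sketched as follows: each vertex, together with its k nearest neighbors in the feature space, forms one hyperedge. The function and variable names are illustrative; for large datasets, approximate nearest-neighbor search or clustering would replace the brute-force distance computation used here.

import numpy as np

def build_knn_hyperedges(features: np.ndarray, k: int = 3):
    # Pairwise Euclidean distances between all objects
    diff = features[:, None, :] - features[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    hyperedges = []
    for v in range(features.shape[0]):
        # The k nearest neighbors of v (excluding v itself), plus v
        neighbours = np.argsort(dist[v])[1:k + 1]
        hyperedges.append(tuple(sorted([v, *neighbours])))
    return hyperedges

features = np.random.rand(8, 16)              # 8 objects with 16-dim features
print(build_knn_hyperedges(features, k=3))    # one hyperedge per vertex, each with 4 vertices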

The hyperedges established via the aforementioned methods can be effectively integrated to constitute a unified hypergraph structure, facilitating object relation modeling and enabling the representation of complex higher-order correlations. Sometimes, a hypergraph contains nodes of different types. In this case, heterogeneous hypergraph modeling is introduced, which treats different entities as heterogeneous nodes and generates hyperedges based on strategies that are similar yet customized for different scenarios and tasks. For example, in a bibliographic network containing literature, authors, and journals, hyperedges might represent the collaborative publication process, connecting a publication to its authors and the publishing journal (network-based hyperedges), or they might link publications that share a common field, such as computer science, based on their attributes (attribute-based hyperedges).

In real-world applications, data often originates from heterogeneous sources. As the richness of data sources grows, the potential to uncover intricate patterns and valuable information from the data also increases [35]. Integrating hypergraphs derived from such varied sources into a unified structure is beneficial for capturing comprehensive high-order correlations across disparate datasets. This integration can be achieved by strategically concatenating their hyperedges. Nonetheless, the integration process requires careful alignment of nodes and hyperedges from different hypergraphs, considering both their semantics and attributes, especially when the hypergraphs exhibit significantly different centrality distributions. To address these challenges, potential solutions include normalization techniques and weighted hypergraph fusion methods.

Nonetheless, regardless of how advanced and type-rich the sensors being used are, the obtained data are inevitably sampled within limited spatial and temporal domains, leaving certain multidimensional correlations of data beyond the reach of sensors. Consequently, the information about the studied objects appears to be underdetermined, and their features may suffer from imprecision, potentially leading to a hypergraph structure that inadequately aligns with specific tasks. As such, there arises a need for task-oriented hypergraph structure optimization to enhance the adaptability and relevance of the hypergraph modeling approach. To this end, Bayesian hypergraph modeling (BHGM) is introduced, which automatically generates appropriate hypergraph structures tailored to the specific requirements of various tasks, models, and datasets. BHGM regards hypergraph structure modeling as a black-box issue and sequentially deduces and approaches the desired structure.

It should be pointed out that any hypergraph structure can be regarded as a point in the set of all hypergraph structures. In particular, we endow the set with some reasonable mathematical structures (e.g., linear structures and metric structures) to form a hypergraph space. Then, we can observe the structure of a hypergraph in the hypergraph space by its association with other structures. That is, we can obtain a new hypergraph structure by combining or fine-tuning close structures. This makes it possible to explore the evolution of the hypergraph structure. To address this task, we should first find a suitable metric, which satisfies the three axioms that a metric requires, to measure the similarity between two hypergraphs. We can also study the hypergraph space from the perspective of the metric space. Then, the stability of the structure evolution is translated to the completeness of the metric space and the convergence of the sequence. The fixed-point method, given by the contraction mapping principle in metric space, provides a fine mathematical tool for dynamically optimizing the hypergraph structure from an analytic viewpoint. That is, it offers an analytic framework for convergence analysis of the structure evolution. Alternatively, we can construct a linear representation framework of the hypergraph structure after reasonably defining the linear operation in the set of all hypergraph structures. In other words, a new hypergraph structure is formed by linearly combining some basic hypergraphs. By studying the completeness of the linear space, we realize the representation of any hypergraph structure. Furthermore, by optimizing the combination coefficients, we obtain the best structure of the hypergraph. This also provides a fine mathematical framework for studying the structure evolution of hypergraphs from an algebraic perspective.

However, despite the superiority of hypergraph modeling for capturing complex, high-order correlations, especially for limited data, there are specific scenarios where hypergraph modeling may not be the most appropriate approach, such as the real-time computation of large-scale scenes. Hypergraphs are intrinsically designed to encapsulate complex correlations within data through hyperedges that connect multiple nodes, far exceeding the pairwise connections found in traditional graph models. This capability, while powerful for modeling complex interactions, results in an increase in computational overhead. In scenarios requiring real-time analysis or the processing of large-scale data in dynamic environments, the computational demands of hypergraph models can exceed the capabilities of current computing architectures, making hypergraph models less suitable for applications where immediate data processing and decision-making are crucial. This drives us to further explore efficient hypergraph computing and inference, broadening the scope of hypergraphs’ real-world applications.

3.3. Hypergraph semantic computing

Once the high-order complex correlations among the studied objects have been established, it is crucial to perform hypergraph semantic computing for accurate representation of these objects. Hypergraph neural networks—an emerging technique that synergizes the powerful modeling capability of hypergraphs with the representation capability of neural networks—are developed to learn the structural semantic representation through specific devised hypergraph convolution operators. Existing hypergraph neural networks can be broadly categorized into two streams: spectral-based methods and spatial-based methods, as shown in Table 2 [17], [18], [19], [26], [36], [37], [38], [39], [40], [41], [42], [43], [44], [45].

Spectral-based methods follow a two-step process. First, signals are transformed from the spatial domain to the spectral domain via the Fourier transform, wherein the convolution operator is defined. Then, after processing, the signals are transformed back to the spatial domain through the inverse Fourier transform. One representative of spectral-based methods is HGNN [18], which extends conventional graph Laplacian and graph Fourier transforms to hypergraph Laplacian and hypergraph Fourier transforms, thereby deducing spectral-domain hypergraph convolution. Another example, hypergraph convolution and hypergraph attention (Hyper-Atten) [17], utilizes a spectral-based convolution operator similar to that of the HGNN but additionally incorporates an attention module to adaptively learn the importance of vertices within a hyperedge. In addition, Yadati et al. [36] proposed HyperGCN, a spectral-based hypergraph neural network derived from hypergraph spectral theory. HyperGCN initially extends the hypergraph to a graph using specific rules, and then performs standard graph convolutional network (GCN) operations on the derived graph. Converting hypergraphs into graphs enables the use of well-explored graph embedding techniques and frameworks. However, this transformation may lead to information loss, as the complex, high-order correlations inherent in the data are compressed into simple, pairwise correlations.
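
For reference, the layer-wise spectral-domain hypergraph convolution of HGNN [18] can be written as follows, where $\boldsymbol{H}$ is the incidence matrix, $\boldsymbol{W}$ the diagonal hyperedge weight matrix, $\boldsymbol{D}_{v}$ and $\boldsymbol{D}_{e}$ the vertex and hyperedge degree matrices, $\boldsymbol{\Theta}^{(l)}$ the learnable parameters of layer $l$, and $\sigma$ a nonlinear activation:

$\boldsymbol{X}^{(l+1)}=\sigma\left(\boldsymbol{D}_{v}^{-1 / 2} \boldsymbol{H} \boldsymbol{W} \boldsymbol{D}_{e}^{-1} \boldsymbol{H}^{\top} \boldsymbol{D}_{v}^{-1 / 2} \boldsymbol{X}^{(l)} \boldsymbol{\Theta}^{(l)}\right)$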

Spatial-based methods distinguish themselves from spectral-based methods by obtaining new representations of central vertices through the aggregation of neighboring vertices’ representations. A representative example of the spatial-based approach is HGNN+ [4], which introduces the concept of a hyperpath to facilitate high-order message passing in hypergraphs. Unlike paths in simple graphs, which are defined as sequences of vertices, hyperpaths are sequences of both vertices and hyperedges. HGNN+ employs a two-stage convolution paradigm where messages are first passed from central vertices to connected hyperedges and then from hyperedges to connected vertices (Fig. 10). HGNN+ is flexible and scalable, since convolution and aggregation operations can be freely defined at each stage and extended to directed hypergraphs. In contrast, the extensibility of the HGNN is limited due to its reliance on spectral theory and the need for a symmetric matrix as the Fourier basis for Laplacian transforms.
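
A minimal dense-matrix sketch of this two-stage paradigm is given below, assuming mean aggregation at both stages; HGNN+ itself allows arbitrary aggregation and weighting functions at each stage, so this illustrates the message flow rather than the full model.

import torch

def two_stage_conv(X, H, theta):
    # X:     (N, C)  vertex features
    # H:     (N, M)  incidence matrix, H[v, e] = 1 if vertex v lies in hyperedge e
    # theta: (C, C') learnable projection
    d_e = H.sum(dim=0).clamp(min=1)            # hyperedge degrees
    d_v = H.sum(dim=1).clamp(min=1)            # vertex degrees
    msg_e = (H.t() @ X) / d_e.unsqueeze(1)     # stage 1: vertices -> hyperedges
    msg_v = (H @ msg_e) / d_v.unsqueeze(1)     # stage 2: hyperedges -> vertices
    return torch.relu(msg_v @ theta)

N, M, C = 6, 4, 8
X = torch.randn(N, C)
H = (torch.rand(N, M) > 0.5).float()
theta = torch.randn(C, 16)
print(two_stage_conv(X, H, theta).shape)       # torch.Size([6, 16])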

We further undertake a comparative analysis of hypergraph neural networks and GNNs from both the spectral and the spatial perspectives, represented by the HGNN and HGNN+, respectively. From the spectral perspective, it has been demonstrated that a GNN can be regarded as a special case of HGNN, assuming that each hyperedge connects only two vertices and all hyperedges share the same weight; this leads to a simple hypergraph known as a two-uniform hypergraph. Then, this two-uniform hypergraph can be equivalently represented as a simple graph. Under such circumstances, the spectral-based hypergraph convolution takes the same form as the graph convolution in a GCN [46]. In essence, the HGNN can effectively address pairwise correlations in simple graphs while also capturing and modeling the high-order correlations inherent in hypergraphs. From the spatial perspective, a rooted subtree can be used to compare HGNN+ and a GNN by using a two-uniform hypergraph and its equivalent simple graph as an example. It can be found that a GNN considers neighboring vertex features only during one-stage convolution, while HGNN+ employs a hierarchical structure that enables multiple message interaction processes. This distinctive characteristic of HGNN+ allows for more comprehensive modeling and distillation of high-order correlations through its two-stage spatial-domain convolution.

Research is still needed on whether hypergraph neural networks can avoid performance instability and degradation with an increase in hypergraph convolution layers, which is a common issue for hypergraph neural networks and GNNs. To deal with this issue, we should explore new stable computing methods from a mathematical perspective. Hypergraph dynamic systems [38] are a possible solution requiring further investigation.

In the case of heterogeneous hypergraphs, hypergraph neural networks designed for homogeneous hypergraphs cannot be directly applied. The main challenge here lies in how to effectively encode heterogeneous nodes and relations into the latent space. To this end, Li et al. [45] proposed the hypergraph transformer neural network (HGTN), which introduces an attention mechanism to adaptively adjust the weights of heterogeneous hypergraphs and encode implicit semantic information.

3.4. Efficient hypergraph computing

The rapid growth of the Internet has led to an exponential increase in the volume of data. In practical applications, hypergraphs constructed from this data may contain millions or even billions of vertices. For example, social networks on social media platforms may have millions of vertices, while user-item networks on e-commerce platforms may have billions. Under such circumstances, there is an urgent need for efficient hypergraph modeling methods that can handle large-scale data within acceptable time and space complexity.

In general, efficient hypergraph modeling can be achieved in four ways: hypergraph factorization, hypergraph sampling, hierarchical hypergraph modeling, and distributed hypergraph modeling. Factorization-based methods are one approach for achieving efficient hypergraph modeling. In hypergraph modeling, a two-dimensional (2D) incidence matrix is usually used to represent the correlations between vertices and hyperedges. However, the space complexity of the incidence matrix, O(NM), grows with the product of the number of vertices ($|\mathscr{V}|=N$) and the number of hyperedges ($|\mathscr{E}|=M$). In this case, the traditional transductive hypergraph computing method is limited in handling such large-scale data. This issue can be solved by factorization-based hypergraph modeling. The core of factorization-based hypergraph modeling is a factorization embedding module, which encodes the correlations between vertices and hyperedges into different low-dimensional latent spaces—that is, the “belonging” embedding space and the “containing” embedding space—through two sub-incidence matrices. In this way, the model can fit the original large hypergraph structure as closely as possible. The embedding dimension after factorization is much lower than the dimension of the vertices or hyperedges; thus, factorization-based methods can handle large-scale data.
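
A minimal sketch of this factorization idea follows: instead of storing the full N × M incidence matrix, a low-dimensional “belonging” embedding per vertex and a “containing” embedding per hyperedge are learned so that their inner products reconstruct the observed incidences. The toy dimensions, sampling scheme, and loss below are assumptions made for illustration, not the formulation of any specific published method.

import torch
import torch.nn.functional as F

N, M, d = 2000, 1500, 32                         # vertices, hyperedges, embedding dimension
H = (torch.rand(N, M) > 0.995).float()           # toy sparse incidence matrix

P = torch.randn(N, d, requires_grad=True)        # vertex "belonging" embeddings
Q = torch.randn(M, d, requires_grad=True)        # hyperedge "containing" embeddings
opt = torch.optim.Adam([P, Q], lr=1e-2)

for step in range(200):
    v = torch.randint(0, N, (4096,))             # mini-batch of (vertex, hyperedge) pairs,
    e = torch.randint(0, M, (4096,))             # so the full N x M matrix is never multiplied out
    logits = (P[v] * Q[e]).sum(dim=1)
    loss = F.binary_cross_entropy_with_logits(logits, H[v, e])
    opt.zero_grad()
    loss.backward()
    opt.step()

# P and Q store N*d + M*d numbers in place of the N*M incidence entries.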

Sampling-based hypergraph modeling is another method for dealing with large-scale data. Traditional transductive hypergraph computation methods typically require full-batch training. That is to say, given large-scale data, a big hypergraph must be constructed based on the data and incorporated into each round of training. As the volume of data increases, so does the number of hyperedges in the constructed hypergraph, which may lead to hardware limitations due to space requirements. To address this issue, the original large-scale hypergraph can be decomposed into multiple smaller sub-hypergraphs through hypergraph sampling. This can be achieved by pre-sampling multiple layers of neighbors for each target vertex or by sampling sub-hypergraphs into mini-batches before each hypergraph convolution layer begins. Hypergraph sampling enables efficient mini-batch computing for large-scale data and naturally supports the inductive hypergraph computation paradigm.
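
The sampling idea can be sketched as follows: for each seed vertex in a mini-batch, a few incident hyperedges are sampled, and a few member vertices are then sampled from each of those hyperedges, yielding a small sub-hypergraph per batch. All names and fan-out values are illustrative assumptions.

import random

def sample_sub_hypergraph(v2e, e2v, seeds, n_edges=2, n_verts=4):
    sub_edges = {}
    for v in seeds:
        for e in random.sample(v2e[v], min(n_edges, len(v2e[v]))):
            members = e2v[e]
            kept = random.sample(members, min(n_verts, len(members)))
            sub_edges[e] = set(kept) | {v}      # always keep the seed vertex
    return sub_edges

# Toy incidence: hyperedge -> member vertices, and its inverse vertex -> incident hyperedges
e2v = {0: [0, 1, 2, 3], 1: [2, 3, 4], 2: [4, 5, 6, 7]}
v2e = {v: [e for e, vs in e2v.items() if v in vs] for v in range(8)}
print(sample_sub_hypergraph(v2e, e2v, seeds=[0, 5]))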

In cases where data is assigned hierarchical labels, hierarchical hypergraph modeling can be employed to handle large-scale data. The data is initially divided into multiple subsets, and a sub-hypergraph is constructed for each subset using methods such as KNN. Hypergraph learning is then performed on all sub-hypergraphs to obtain local high-order representations in latent space. Multiple aggregation operators are subsequently applied for hierarchical label learning, and global representations are acquired by fusing the local high-order representations.

In practical applications, the limited computing resources of a single device often present a bottleneck when training a hypergraph neural network on large-scale data. Using parallel execution strategies to conduct distributed hypergraph learning is an effective solution. In general, the data is first distributed into several partitions; next, the hypergraph neural network model is executed for optimization. Communication between remote neighbors of vertices and synchronization is required during this procedure, with gradients being aggregated to update model parameters.

Beyond the aforementioned methods, hypergraph structure compression and knowledge distillation are two highly promising methods to improve the computing efficiency of hypergraphs. By employing hypergraph isomorphism [47], we can analyze structural equivalence to diminish the dimensionality of the solver space. In terms of hypergraph knowledge distillation, a topology-aware distillation approach [48] can be utilized to inject a complex high-order relationship into inference-efficient shallow network models, thereby accelerating the computation.

4. Applications of hypergraph computation

Recently, the amount of data in intelligent medicine and computer vision has surged, and such datasets involve multi-level, multidimensional, multi-modal, and heterogeneous data. Hypergraph computation has been widely applied in many tasks due to its powerful capability to model complex high-order correlations. Here, we summarize its respective applications in intelligent medicine and computer vision.

4.1. Hypergraphs in intelligent medicine

As shown in Fig. 11, intelligent medicine data show the characteristics of complex high-order correlations, multi-modal heterogeneity, and multidimensionality, such as multi-gene interactions and population-level correlations. Hypergraph computing is suitable for most tasks in intelligent medicine, based on its modeling capability for complex high-order correlations.

4.1.1. Protein, drug, and disease hypergraphs

Hypergraph computation can be utilized to simulate high-order interactions among proteins [49], with the proteins being represented as nodes and protein-protein interactions (PPIs) as hyperedges. PPIs facilitate the prediction of protein structures and attributes. Furthermore, they can be harnessed to engineer novel proteins with specific functions, such as enzymes [50]. Similarly, drugs can be represented as nodes, and the synergistic and antagonistic interactions of multiple drugs can be represented as hyperedges [51], [52]. Hypergraphs can effectively capture complex drug interactions, reducing high experimental costs and addressing the combinatorial explosion issue in drug discovery. Furthermore, proteins and drugs can be utilized to model heterogeneous hypergraph networks, which can simulate complex high-order correlations among drugs, drug-target proteins, and PPIs. Such hypergraph-based approaches [49], [50], [51], [52] can assist researchers in identifying potential drug targets and expediting the discovery of new drugs.

4.1.2. Cell and tissue hypergraphs

Hypergraph computation is well-suited for capturing the intricate biological structures in medical images and data. It can not only learn complex structural correlations within tissues and interactions between multiple cells [53], [54], [55], thereby revealing their intricate biological functions, but also enhance the accuracy of disease diagnosis. For example, gigapixel-level pathology images contain high-resolution digital information about cellular morphology and the tumor microenvironment. Individual cells or tissue regions can be treated as nodes, while spatial and topological correlations, or the feature distances between these nodes, can be defined as hyperedges. Thus, the complex, high-order correlations between cellular morphology and inter-cellular interactions can be effectively learned through hypergraph computation, which is of significant importance for cancer grading and staging, and for predicting patient survival [56], [57].

4.1.3. Organ hypergraphs

Hypergraph computation serves as an effective tool for modeling the intricate interactions between different regions or tissues within an organ [58], [59], [60], [61], thereby modeling complex functional connectivity and interactions. For example, the brain consists of a myriad of neurons and functionally distinct brain regions [62]. Hypergraph computation can accurately capture these complex correlations among neurons and brain areas and successfully identify activated brain regions and corresponding functional brain graphs. Moreover, hypergraph computation can be applied to compare brain network structures between different individuals, enabling accurate discrimination between normal and abnormal brain connections.

4.1.4. Biological system hypergraphs

In clinical applications, physicians employ the human phenotype ontology (HPO) to describe diseases. Thus, a specific disease can be represented as a collection of phenotypes. Hypergraphs can model these interactions of phenotypes by regarding HPO as nodes and establishing hyperedges with their associations [63], [64]. Doing so has aided in revealing shared phenotype characteristics among various diseases, thereby enabling the inference of shared pathogenic mechanisms or genetic mutations. Furthermore, hypergraph analysis can play a pivotal role in uncovering novel biomarkers, thus advancing our understanding of disease mechanisms. Hypergraph analysis has been shown to be valuable in elucidating phenotype correlations among different diseases, shedding light on both shared characteristics and differences. This, in turn, facilitates the identification of potential disease subtypes and intricate disease correlations. In addition, diseases and drugs can be modeled using heterogeneous hypergraphs, which can learn correlation information between drugs and diseases. For example, such hypergraphs can reveal correlations between drug interactions and specific diseases, thereby assisting in the prediction of potential drug side-effects.

4.1.5. Population-level hypergraphs

Hypergraph computation can analyze clinical data from various patients, as well as diverse cross-modal and multi-view information such as individual lifestyle habits, genetic traits, environmental factors, education, and healthcare [65], [66]. This enables the identification of health trends and disease prevalence within the entire population [67], [68], thereby facilitating more precise public health decision-making and resource allocation. Furthermore, by integrating complex correlation information from multiple data sources, hypergraph computation can offer a more comprehensive analysis of health and disease.

By integrating data from both the population and individual levels, an inter-intra hypergraph can be obtained that integrates inter-object and intra-object correlations into a unified framework. This cross-dimensional hypergraph computing approach offers a more comprehensive and enriched knowledge for downstream clinical tasks, such as disease diagnosis and survival prediction.

4.2. Hypergraphs in computer vision

Computer vision serves as the gateway to artificial intelligence and stands as a pivotal element within contemporary intelligent systems. As shown in Fig. 12, hypergraphs have been shown to be a highly flexible computational framework that can be employed to analyze complex, high-order correlations in various computer vision tasks, such as action recognition, depth estimation, image registration, 3D recognition and retrieval, scene understanding, and object detection.

4.2.1. Action recognition with hypergraphs

In the task of action recognition, hypergraphs provide an advanced framework for encoding complex spatial-temporal correlations among body postures, action sequences, and subject-object interactions [69], [70], [71]. Utilizing higher-order structures to encapsulate these intricate correlations allows hypergraphs to outperform traditional methodologies in terms of both predictive accuracy and computational efficiency. Unlike traditional methods that may only leverage low-level features or confined local data, hypergraphs offer a comprehensive understanding of actions or action sequences. These characteristics make hypergraphs indispensable in diverse applications, such as security surveillance, behavioral analytics, and human-computer interaction.

4.2.2. Depth estimation with hypergraphs

Hypergraph-based depth-estimation methods effectively address the limitations of existing methods, which often fail to exploit global contextual information [72]—that is, capturing the intricate correlations between pixels and their broader spatial context. Thus, the use of hypergraph-based depth-estimation methods leads to improved quality of depth maps. For example, in autonomous vehicles, accurate depth information is critical for decision-making and path planning. Conventional depth-estimation methods cannot deal well with complex traffic scenarios and multiple participants. In contrast, by integrating information from different sensors and perspectives, hypergraphs provide a more comprehensive and accurate depth estimation, thus ensuring a higher level of safety. In augmented reality and virtual reality, depth information is crucial for delivering a more realistic and immersive experience to the users.

4.2.3. Registration with hypergraphs

Hypergraph registration methods efficiently handle higher-order correlations and complex spatial transformations. Unlike conventional methods, which can only handle pixel-to-pixel or region-to-region correlations, hypergraphs capture multi-level correlations for more accurate registration [73], [74], [75]. In the case of point cloud data registration, hypergraph algorithms analyze higher-order geometric and topological correlations between point clouds, effectively enhancing performance in applications such as autonomous driving, robotic navigation, and 3D modeling. Furthermore, hypergraphs excel in handling images and point cloud data with extensive occlusions or noise. Traditional registration methods usually require additional data preprocessing steps in such complicated scenarios, whereas hypergraphs can implement end-to-end model training, greatly improving the accuracy and efficiency of registration.

4.2.4. 3D recognition and retrieval with hypergraphs

For 3D object recognition and retrieval tasks, hypergraph computation can analyze higher-order geometric and topological correlations within 3D models, providing fast and precise model matching [27], [76], [77]. For example, in industrial manufacturing, hypergraphs can be employed to accurately match and identify machine parts or components, thereby enhancing production efficiency and quality. In video game development and digital entertainment, hypergraphs facilitate the recognition and rendering of complex 3D environments, offering a more immersive user experience. Hypergraphs can also be applied in cultural heritage preservation and archaeological research, assisting experts in quickly recognizing and classifying artifacts and relics through efficient analysis of 3D data.

4.2.5. Scene understanding with hypergraphs

Conventional scene understanding approaches often focus on local features and 2D image data and ignore higher-order correlations and inter-dependencies in complex scenes. In contrast, hypergraphs delve into these higher-order correlations, providing a comprehensive understanding of the entire scene. For example, in intelligent transportation and urban planning, hypergraphs can analyze the higher-order correlations between various traffic participants (e.g., cars, pedestrians, and bicycles) and their surrounding environment (i.e., traffic signals, signs, and road layouts) [78], [79], [80]. This helps in effectively predicting and explaining traffic flow, allowing city managers to allocate resources more precisely and improve public safety. Hypergraphs can also be applied in emergency management for natural disasters and environmental monitoring [81]. For example, during emergencies such as floods or forest fires, hypergraphs can swiftly analyze influencing factors, including weather conditions, terrain, and vegetation distribution, and simulate possible disaster spread paths and impact zones, thus aiding decision-makers in implementing more effective strategies.

4.2.6. Object detection with hypergraphs

In object detection scenarios involving complex backgrounds, multiple object categories, and occlusions, hypergraphs outperform existing methods by analyzing complex correlations between objects and their surrounding environment [82]. More specifically, hypergraphs can enhance detection accuracy by examining multiple aspects of an object’s relationship with its surrounding context, such as relative positioning, movement patterns, colors, and textures. For example, in autonomous driving applications, hypergraphs can simultaneously analyze multiple moving and static targets (e.g., other vehicles, pedestrians, and traffic signs), thus predicting their behaviors and potential risks to ensure safer driving.

Table 3 [26], [49], [50], [51], [52], [53], [54], [55], [56], [57], [58], [59], [60], [61], [62], [63], [64], [65], [66], [67], [68], [69], [70], [71], [72], [73], [74], [75], [76], [77], [78], [81], [82], [83], [84], [85], [86], [87], [88], [89], [90], [91], [92], [93], [94], [95], [96], [97], [98], [99] summarizes the applications of hypergraphs, which are categorized into three categories of application: intelligent medicine, computer vision, and others.

4.3. Experiments and discussion

To evaluate the effectiveness of hypergraph computation, we take 3D object recognition as an example, where the 3D object dataset ModelNet40 is used as the evaluation benchmark. A typical hypergraph computation method, HGNN [18], and a data-based method, the group-view convolutional neural network (GVCNN) [100], are utilized for comparison. Here, the training data ratios are set as 6%, 8%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, and 100% of the full training set, and the performance of HGNN under each training ratio is reported. The experiments and comparisons are demonstrated in Fig. 13, from which the following observations can be gleaned. First, given the full training set, the hypergraph computation method HGNN achieves an error of 3.30%, which is 52.00% lower than that of the data-based method GVCNN. With a decrease in the training data, the performance of HGNN becomes slightly worse. To achieve a performance similar to that of GVCNN trained on the full dataset, HGNN only needs about 20% of the training data, which reduces the data requirement by 80%. The above comparison justifies the superiority of hypergraph computation in terms of data requirement reduction and performance improvement.

5. Hypergraph computation tool

DeepHypergraph (DHG) [101] is the first open-source deep learning library built upon PyTorch for supporting hypergraph computation [4]. Fig. 14 and Fig. 15 show the architecture and the functional library of DHG, respectively.

As shown in Fig. 14, DHG supports pair-wise message passing on a graph structure and beyond-pair-wise message passing on a hypergraph structure. In addition, DHG offers functions to convert graphs into hypergraphs and vice versa. Promoting graphs to hypergraphs may exploit potential high-order connections and enhance model performance. DHG also integrates commonly used Laplacian matrices and message-passing functions into the graph/hypergraph structure. Once a structure is built with DHG, these functions become readily available for use in model development.

Fig. 15 shows the primary function library of DHG, which is built upon PyTorch and allows the integration of any PyTorch-based models. DHG offers a range of Laplacian matrices and message-passing functions to assist in building spectral- or spatial-based models, and it provides a powerful visualization tool that makes it easy to inspect the structure of graphs and hypergraphs. Notably, DHG includes random graph/hypergraph generators, a variety of cutting-edge convolutional layers and models for graphs/hypergraphs, numerous public datasets, and diverse evaluation metrics. DHG integrates the Optuna library to facilitate structure and model tuning based on the automated machine learning (Auto-ML) method [102], automating the search for optimal configurations when constructing graph/hypergraph structures and finding the best hyper-parameters for model training.
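
As a minimal usage sketch (assuming the dhg.Hypergraph structure and dhg.models.HGNN model exposed in the library’s documentation at the time of writing; the exact signatures should be checked against the current DHG documentation), a hypergraph can be built and fed to a model as follows:

import torch
import dhg
from dhg.models import HGNN

X = torch.rand(6, 16)                                    # 6 vertices with 16-dim features
hg = dhg.Hypergraph(6, [(0, 1, 2), (2, 3), (3, 4, 5)])   # 3 hyperedges over 6 vertices
model = HGNN(16, 32, 4)                                  # in_channels, hid_channels, num_classes
logits = model(X, hg)                                    # per-vertex class scores, shape (6, 4)
print(logits.shape)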

6. Conclusions

This paper presented the novel methodology of hypergraph computation to model high-order correlations inside systems and to conduct collaborative computing on data and high-order correlations. We first justified the necessity of using hypergraphs to represent higher-order relations and provided detailed proofs. Then, we introduced hypergraph structure modeling, hypergraph semantic computing, and efficient hypergraph computing. By undertaking a comparative analysis of hypergraph neural networks and GNNs, we showed that hypergraph neural networks can effectively deal with pairwise correlations in simple graphs while also capturing and modeling the high-order correlations inherent in hypergraphs. After that, we summarized the application of hypergraph computation in fields such as intelligent medicine and computer vision, showing that hypergraphs can achieve inter-intra high-order correlation modeling for various tasks. Experiments on 3D object recognition—a typical application of hypergraph computation—revealed that hypergraph computation can reduce the data requirement by 80% to achieve comparable performance or improve the performance by 52% given the same data, compared with a traditional data-based method (GVCNN). Finally, we introduced the open-source framework DHG, which is a deep learning library built upon PyTorch for hypergraph computation. DHG is a general framework that supports both low-order and high-order relationship modeling.

Several areas of hypergraph computation remain to be explored. In recent years, large language models (LLMs) have shown superiority in understanding and generating human-like text, attracting significant interest from the research community. In this context, an attractive direction for future research is the integration of hypergraph computation with LLMs. While LLMs are good at processing textual data, they often have difficulty capturing the complex, high-order structural information characteristic of hypergraph data. Combining hypergraph computation with LLMs could enhance LLMs’ understanding of complex structural information, bridging the gap between textual understanding and high-order correlation modeling. Another interesting topic is the large hypergraph model. As data becomes more accessible, the opportunity to discover complex patterns and correlations within it grows correspondingly, highlighting the necessity of developing large models for hypergraphs. Such models would help leverage the rich relational information encoded in hypergraphs, pushing the boundaries of what can be achieved in different tasks. The main challenge, however, lies in addressing the computational complexity and storage demands of hypergraph models toward large-scale data. Future efforts should concentrate on optimizing computational efficiency and storage for hypergraph computation.

Compliance with ethics guidelines

Yue Gao, Shuyi Ji, Xiangmin Han, and Qionghai Dai declare that they have no conflict of interest or financial conflicts to disclose.

References

[1]

Milo R, Shen-Orr S, Itzkovitz S, Kashtan N, Chklovskii D, Alon U. Network motifs: simple building blocks of complex networks. Science 2002; 298 (5594):824-7.

[2]

Benson AR, Abebe R, Schaub MT, Jadbabaie A, Kleinberg J. Simplicial closure and higher-order link prediction. Proc Natl Acad Sci USA 2018; 115(48):E11221-30.

[3]

Kelly FP. Reversibility and stochastic networks. Cambridge: Cambridge University Press; 2011.

[4]

Gao Y, Feng Y, Ji S, Ji R. HGNN+: general hypergraph neural networks. IEEE Trans Pattern Anal Mach Intell 2022; 45(3):3181-99.

[5]

Huang J, Liu X, Song Y. Hyper-path-based representation learning for hypernetworks. In: Proceedings of the 28th International Conference on Information & Knowledge Management; 2019 Nov 3-7; Beijing, China. New York City: Association for Computing Machinery; 2019. p. 449-58.

[6]

Kim J, Oh S, Hong S. Transformers generalize deepsets and can be extended to graphs & hypergraphs. In: Ranzato M, Beygelzimer A, Dauphin Y, Liang PS, Vaughan JW, editors. Advances in neural information processing systems. New York City: NeurIPS; 2021.

[7]

Vijaikumar M, Hada D, Shevade S. HyperTeNet: hypergraph and transformer-based neural network for personalized list continuation. In: Proceedings of the 2021 IEEE International Conference on Data Mining; 2021 Dec 7-10; Auckland, New Zealand. New York City: IEEE; 2021. p. 1210-5.

[8]

Li Y, Chen H, Sun X, Sun Z, Li L, Cui L, et al. Hyperbolic hypergraphs for sequential recommendation. In: Proceedings of the 30th ACM International Conference on Information & Knowledge Management; 2021 Nov 1-5; Gold Coast, QLD, Australia. New York City: The Association for Computing Machinery; 2021. p. 988-97.

[9]

Yu J, Yin H, Li J, Wang Q, Hung NQV, Zhang X, et al. Self-supervised multichannel hypergraph convolutional network for social recommendation. In: Proceedings of the Web Conference 2021; 2021 Apr 19-23; Ljubljana, Slovenia. New York City: The Association for Computing Machinery; 2021. p. 413-24.

[10]

Zhang J, Gao M, Yu J, Guo L, Li J, Yin H. Double-scale self-supervised hypergraph learning for group recommendation. In: Proceedings of the 30th ACM International Conference on Information & Knowledge Management; 2021 Nov 1-5; Gold Coast, QLD, Australia. New York City: The Association for Computing Machinery; 2021. p. 2557-67.

[11]

Huang S, Elhoseiny M, Elgammal AM, Yang D. Learning hypergraph-regularized attribute predictors. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2015 Jun 7-12; Boston, MA, USA. New York City: IEEE; 2015. p. 409-17.

[12]

Nagy L, Ruppert T, Löcklin A, Abonyi J. Hypergraph-based analysis and design of intelligent collaborative manufacturing space. J Manuf Syst 2022; 65:88-103.

[13]

Fu Y, Hospedales TM, Xiang T, Gong S. Transductive multi-view zero-shot learning. IEEE Trans Pattern Anal Mach Intell 2015; 37(11):2332-45.

[14]

Lin M, Li W, Lu S. Balanced influence maximization in attributed social network based on sampling. In: Proceedings of the 13th International Conference on Web Search and Data Mining; 2020 Feb 3-7; Houston, TX, USA. New York City: The Association for Computing Machinery; 2020. p. 375-83.

[15]

Sun X, Yin H, Liu B, Chen H, Cao J, Shao Y, et al. Heterogeneous hypergraph embedding for graph classification. In: Proceedings of the 14th ACM International Conference on Web Search and Data Mining; 2021 Mar 8-12; online. New York City: The Association for Computing Machinery; 2021. p. 725-33.

[16]

Fang Y, Zheng Y. Metric learning based on attribute hypergraph. In: Proceedings of the 2017 IEEE International Conference on Image Processing; 2017 Sep 17-20; Beijing, China. New York City: IEEE; 2017. p. 3440-4.

[17]

Bai S, Zhang F, Torr PH. Hypergraph convolution and hypergraph attention. Pattern Recognit 2021; 110:107637.

[18]

Feng Y, You H, Zhang Z, Ji R, Gao Y. Hypergraph neural networks. Proc Conf AAAI Artif Intell 2019; 33(1):3558-65.

[19]

Ji S, Feng Y, Ji R, Zhao X, Tang W, Gao Y. Dual channel hypergraph collaborative filtering. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining; 2020 Jul 6-10; online. New York City: The Association for Computing Machinery; 2020. p. 2020-9.

[20]

Huang J, Chen C, Ye F, Hu W, Zheng Z. Nonuniform hyper-network embedding with dual mechanism. ACM Trans Inf Syst 2020; 38(3):1-18.

[21]

Zu C, Gao Y, Munsell B, Kim M, Peng Z, Zhu Y, et al. Identifying high order brain connectome biomarkers via learning on hypergraph. In: Proceedings of the 7th International Workshop on Machine Learning in Medical Imaging (MICCAI 2016); 2016 Oct 17; Athens, Greece. Berlin: Springer; 2016. p. 1-9.

[22]

Klamt S, Haus UU, Theis F. Hypergraphs and cellular networks. PLOS Comput Biol 2009; 5(5):e1000385.

[23]

Fang Q, Sang J, Xu C, Rui Y. Topic-sensitive influencer mining in interest-based social media networks via hypergraph learning. IEEE Trans Multimed 2014; 16(3):796-812.

[24]

Agarwal S, Sawhney R, Thakkar M, Nakov P, Han J, Derr T. Think: temporal hypergraph hyperbolic network. In: Proceedings of the 2022 IEEE International Conference on Data Mining; 2022 Nov 28-Dec 1; Orlando, FL, USA. New York City: IEEE; 2022. p. 849-54.

[25]

Li H, Wang J, Du X, Hu Z, Yang S. KBHN: a knowledge-aware bi-hypergraph network based on visual-knowledge features fusion for teaching image annotation. Inf Process Manage 2023; 60(1):103106.

[26]

Bai J, Gong B, Zhao Y, Lei F, Yan C, Gao Y. Multi-scale representation learning on hypergraph for 3D shape retrieval and recognition. IEEE Trans Image Process 2021; 30:5327-38.

[27]

Trung HT, Van Vinh T, Tam NT, Jo J, Yin H, Hung NQV. Learning holistic interactions in LBSNs with high-order, dynamic, and multi-role contexts. IEEE Trans Knowl Data Eng 2022; 35(5):5002-16.

[28]

Yang D, Qu B, Yang J, Cudre-Mauroux P. Revisiting user mobility and social relationships in LBSNs: a hypergraph embedding approach. In: Proceedings of the World Wide Web Conference; 2019 May 13-17; San Francisco, CA, USA. New York City: The Association for Computing Machinery; 2019. p. 2147-57.

[29]

Yang D, Qu B, Yang J, Cudre-Mauroux P. LBSN2Vec++: heterogeneous hypergraph embedding for location-based social networks. IEEE Trans Knowl Data Eng 2020; 34(4):1843-55.

[30]

Zhu Y, Guan Z, Tan S, Liu H, Cai D, He X. Heterogeneous hypergraph embedding for document recommendation. Neurocomputing 2016; 216:150-62.

[31]

Liu Q, Sun Y, Wang C, Liu T, Tao D. Elastic net hypergraph learning for image clustering and semi-supervised classification. IEEE Trans Image Process 2017; 26(1):452-63.

[32]

Jin T, Yu Z, Gao Y, Gao S, Sun X, Li C. Robust ℓ2-hypergraph and its applications. Inf Sci 2019; 501:708-23.

[33]

Li Y, Zhang S, Cheng D, He W, Wen G, Xie Q. Spectral clustering based on hypergraph and self-representation. Multimedia Tools Appl 2017; 76(16):17559-76.

[34]

He W, Cheng X, Hu R, Zhu Y, Wen G. Feature self-representation based hypergraph unsupervised feature selection via low-rank representation. Neurocomputing 2017; 253:127-34.

[35]

Xi H. Data-driven optimization technologies for MaaS. In: Zhang H, Song X, Shibasaki R, editors. Big data and mobility as a service. Amsterdam: Elsevier; 2022.

[36]

Yadati N, Nimishakavi M, Yadav P, Nitin V, Louis A, Talukdar P, et al. A new method for training graph convolutional networks on hypergraphs. In: Proceedings of the 2019 Advances in Neural Information Processing Systems; 2019 Dec 8-14; Vancouver, BC, Canada. New York City: NeurIPS; 2019. p. 1-12.

[37]

Zhang J, Li F, Xiao X, Xu T, Rong Y, Huang J, et al. Hypergraph convolutional networks via equivalency between hypergraphs and undirected graphs. In: Proceedings of the 2022 ICML Workshop; 2022 Jul 17-23; Baltimore, MD, USA. San Diego: The International Conference on Machine Learning; 2022. p. 1-36.

[38]

Yan J, Feng Y, Ying S, Gao Y. Hypergraph dynamic system. In: Proceedings of the 2024 International Conference on Learning Representations; 2024 May 7- 11; Vienna, Austria; 2024.

[39]

Zhang R, Zou Y, Ma J. Hyper-SAGNN: a self-attention based graph neural network for hypergraphs. In: Proceedings of the 2020 International Conference on Learning Representations; 2020 Apr 26-May 1; online. Trier: the dblp computer science bibliography; 2020.

[40]

Jiang J, Wei Y, Feng Y, Cao J, Gao Y. Dynamic hypergraph neural networks. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence; 2019 Aug 10-16; Macao, China. San Francisco: International Joint Conferences on Artificial Intelligence; 2019. p. 2635-41.

[41]

Dong Y, Sawin W, Bengio Y. HNHN: hypergraph networks with hyperedge neurons. In: Proceedings of the Graph Representation Learning and Beyond Workshop at ICML 2020; 2020 Jul 13-18; online. San Diego: The International Conference on Machine Learning; 2020. p. 1-11.

[42]

Huang J, Yang J. UniGNN: a unified framework for graph and hypergraph neural networks. In: Proceedings of the 30th International Joint Conference on Artificial Intelligence; 2021 Aug 19-27; online. San Francisco: International Joint Conferences on Artificial Intelligence; 2021. p. 2563-9.

[43]

Chien E, Pan C, Peng J, Milenkovic O. You are AllSet: a multiset function framework for hypergraph neural networks. In: Proceedings of the 10th International Conference on Learning Representations; 2022 Apr 25-29; online. Trier: the dblp computer science bibliography; 2022.

[44]

Wang P, Yang S, Liu Y, Wang Z, Li P. Equivariant hypergraph diffusion neural operators. In: Proceedings of the 11th International Conference on Learning Representations; 2023 May 1-5; Kigali, Rwanda. Trier: the dblp computer science bibliography; 2023.

[45]

Li M, Zhang Y, Li X, Zhang Y, Yin B. Hypergraph transformer neural networks. ACM Trans Knowl Discov Data 2023; 17(5):1-22.

[46]

Kipf TN, Welling M. Semi-supervised classification with graph convolutional networks. In: Proceedings of the 5th International Conference on Learning Representations; 2017 Apr 24-26; Toulon, France. Trier: the dblp computer science bibliography; 2017. p. 1-14.

[47]

Feng Y, Han J, Ying S, Gao Y. Hypergraph isomorphism computation. IEEE Trans Pattern Anal Mach Intell. 2023; 46(5):3880-96.

[48]

Feng Y, Luo Y, Ying S, Gao Y. LightHGNN: distilling hypergraph neural networks into MLPs for 100x faster inference. In: Proceedings of the 12th International Conference on Learning Representations; 2024 May 7-11; Vienna, Austria; 2024.

[49]

Jiang Y, Wang R, Feng J, Jin J, Liang S, Li Z, et al. Explainable deep hypergraph learning modeling the peptide secondary structure prediction. Adv Sci 2023; 10(11):2206151.

[50]

Dotson A, Chen C, Lindsly S, Cicalo A, Dilworth S, Ryan C, et al. Deciphering multi-way interactions in the human genome. Nat Commun 2022; 13:5498.

[51]

Saifuddin KM, Bumgardner B, Tanvir F, Akbas E. HyGNN: drug-drug interaction prediction via hypergraph neural network. In: Proceedings of the 2023 IEEE 39th International Conference on Data Engineering; 2023 Apr 3-7; Anaheim, CA, USA. New York City: IEEE; 2023. p. 1503-16.

[52]

Nguyen DA, Nguyen CH, Mamitsuka H. Central-smoothing hypergraph neural networks for predicting drug-drug interactions. IEEE Trans Neural Netw Learn Syst. In press.

[53]

Vinas R, Joshi CK, Georgiev D, Lin P, Dumitrascu B, Gamazon ER, et al. Hypergraph factorization for multi-tissue gene expression imputation. Nat Mach Intell 2023; 5:739-53.

[54]

Bakht B, Javed S, AlMarzouqi H, Khandoker A, Werghi N. Colorectal cancer tissue classification using semi-supervised hypergraph convolutional network. In: Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging; 2021 Apr 13-16; Nice, France. New York City: IEEE; 2021. p. 1306-9.

[55]

Zhang R, Zhou T, Ma J. Multiscale and integrative single-cell Hi-C analysis with Higashi. Nat Biotechnol 2022; 40(2):254-61.

[56]

Di D, Zhang J, Lei F, Tian Q, Gao Y. Big-hypergraph factorization neural network for survival prediction from whole slide image. IEEE Trans Image Process 2022; 31:1149-60.

[57]

Di D, Zou C, Feng Y, Zhou H, Ji R, Dai Q, et al. Generating hypergraph-based high-order representations of whole-slide histopathological images for survival prediction. IEEE Trans Pattern Anal Mach Intell 2022; 45(5):5800-15.

[58]

Ji J, Ren Y, Lei M. FC-HAT: hypergraph attention network for functional brain network classification. Inf Sci 2022; 608:1301-16.

[59]

Xiao L, Wang J, Kassani PH, Zhang Y, Bai Y, Stephen JM, et al. Multi-hypergraph learning-based brain functional connectivity analysis in fMRI data. IEEE Trans Med Imaging 2019; 39(5):1746-58.

[60]

Song X, Wu K, Chai L. Brain network analysis of schizophrenia patients based on hypergraph signal processing. IEEE Trans Image Process 2023; 32:30.

[61]

Pan J, Lei B, Shen Y, Liu Y, Feng Z, Wang S. Characterization multi-modal connectivity of brain network by hypergraph GAN for Alzheimer’s disease analysis. In: Proceedings of the 4th Chinese Conference on Pattern Recognition and Computer Vision; 2021 Dec 19-21; Zhuhai, China. Berlin: Springer; 2021. p. 467-78.

[62]

Fan J, Fang L, Wu J, Guo Y, Dai Q. From brain science to artificial intelligence. Engineering 2020; 6(3):248-52.

[63]

Wang M, Shao W, Huang S, Zhang D. Hypergraph-regularized multi-modal learning by graph diffusion for imaging genetics based Alzheimer’s disease diagnosis. Med Image Anal 2023; 89:102883.

[64]

Jiao CN, Gao YL, Yu N, Liu JX, Qi LY. Hyper-graph regularized constrained NMF for selecting differentially expressed genes and tumor classification. IEEE J Biomed Health Inform 2020; 24(10):3002-11.

[65]

Sun L, Rao Y, Zhang X, Lan Y, Yu S. MS-HGAT: memory-enhanced sequential hypergraph attention network for information diffusion prediction. Proc Conf AAAI Artif Intell 2022; 36(4):4156-64.

[66]

Xia L, Huang C, Xu Y, Zhao J, Yin D, Huang J. Hypergraph contrastive collaborative filtering. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval; 2022 Jul 11-15; Madrid, Spain. New York City: Association for Computing Machinery; 2022. p. 70-9.

[67]

Rafferty J, Lee A, Bennett E, Lyons J, Jalali-Najafabadi F, Dhafari TB, et al. Analysis of disease clusters and patient outcomes in people with multiple long term conditions using hypergraphs. Int J Popul Data Sci 2022; 7(3).

[68]

Cai D, Sun C, Song M, Zhang B, Hong S, Li H. Hypergraph contrastive learning for electronic health records. In: Proceedings of the 2022 SIAM International Conference on Data Mining; 2022 Apr 28-30; Alexandria, VA, USA. Philadelphia: Society for Industrial and Applied Mathematics; 2022. p. 127-35.

[69]

Zhu Y, Huang G, Xu X, Ji Y, Shen F. Selective hypergraph convolutional networks for skeleton-based action recognition. In: Proceedings of the 2022 International Conference on Multimedia Retrieval; 2022 Jun 27-30; Newark, NJ, USA. New York City: Association for Computing Machinery; 2022. p. 518-26.

[70]

Lu Z, Peng Y, Ip HH. Spectral learning of latent semantics for action recognition. In: Proceedings of the 2011 International Conference on Computer Vision; 2011 Nov 6-13; Barcelona, Spain. New York City: IEEE; 2011. p. 1503-10.

[71]

Hu X, Wei D, Wang Z, Shen J, Ren H. Hypergraph video pedestrian reidentification based on posture structure relationship and action constraints. Pattern Recognit 2021; 111:107688.

[72]

Lv X, Wang X, Wang Q, Yu J. 4D light field segmentation from light field super-pixel hypergraph representation. IEEE Trans Vis Comput Graph 2020; 27(9):3597-610.

[73]

Zhang X, Ma R, Zou C, Zhang M, Zhao X, Gao Y. View-aware geometry-structure joint learning for single-view 3D shape reconstruction. IEEE Trans Pattern Anal Mach Intell 2021; 44(10):6546-61.

[74]

Wang Y, Yan C, Feng Y, Du S, Dai Q, Gao Y. STORM: structure-based overlap matching for partial point cloud registration. IEEE Trans Pattern Anal Mach Intell 2022; 45(1):1135-49.

[75]

Yao R, Du S, Cui W, Ye A, Wen F, Zhang H, et al. Hunter: exploring high-order consistency for point cloud registration with severe outliers. IEEE Trans Pattern Anal Mach Intell 2023; 45(12):14760-76.

[76]

Gao Y, Wang M, Tao D, Ji R, Dai Q. 3D object retrieval and recognition with hypergraph analysis. IEEE Trans Image Process 2012; 21(9):4290-303.

[77]

Hao X, Li J, Guo Y, Jiang T, Yu M. Hypergraph neural network for skeleton-based action recognition. IEEE Trans Image Process 2021; 30:2263-75.

[78]

Harrod S. Modeling network transition constraints with hypergraphs. Transport Sci 2011; 45(1):81-97.

[79]

Luo X, Peng J, Liang J. Directed hypergraph attention network for traffic forecasting. IET Intell Transp Syst 2022; 16(1):85-98.

[80]

Wang J, Zhang Y, Wei Y, Hu Y, Piao X, Yin B. Metro passenger flow prediction via dynamic hypergraph convolution networks. IEEE Trans Intell Transp Syst 2021; 22(12):7891-903.

[81]

Rajesh K, Ramaswamy V, Kannan K, Arunkumar N. Satellite cloud image classification for cyclone prediction using dichotomous logistic regression based fuzzy hypergraph model. Future Gener Comput Syst 2019; 98:688-96.

[82]

Liu ZY, Liu JW. Hypergraph attentional convolutional neural network for salient object detection. Vis Comput 2023; 39(7):2881-907.

[83]

Pearcy N, Crofts JJ, Chuzhanova N. Hypergraph models of metabolism. Int J Biol Vet Agric Food Eng 2014; 8(8):752-6.

[84]

Kajino H. Molecular hypergraph grammar with its application to molecular optimization. In: Proceedings of the ICML 2019: 36th International Conference on Machine Learning; 2019 Jun 10-15; Long Beach, CA, USA. Trier: the dblp computer science bibliography; 2019. p. 3183-91.

[85]

Liu M, Zhang J, Yap PT, Shen D. View-aligned hypergraph learning for Alzheimer’s disease diagnosis with incomplete multi-modality data. Med Image Anal 2017; 36:123-34.

[86]

Shao W, Peng Y, Zu C, Wang M, Zhang D, Initiative ADN, et al. Hypergraph based multi-task feature selection for multi-modal classification of Alzheimer’s disease. Comput Med Imaging Graph 2020; 80:101663.

[87]

Rajesh K, Sarala D, Venkataraman V, Kannan K. Hypergraph-based algorithm for segmentation of weather satellite imagery. Indian J Sci Technol 2016; 9(36):36.

[88]

Li X, Li Y, Shen C, Dick A, Van Den Hengel A. Contextual hypergraph modeling for salient object detection. In: Proceedings of the 2013 IEEE International Conference on Computer Vision; 2013 Dec 1-8; Sydney, NSW, Australia. New York City: IEEE; 2013. p. 3328-35.

[89]

Liang Z, Chi Z, Fu H, Feng D. Salient object detection using content-sensitive hypergraph representation and partitioning. Pattern Recognit 2012; 45(11):3886-901.

[90]

Sawhney R, Agarwal S, Wadhwa A, Shah RR. Spatiotemporal hypergraph convolution network for stock movement forecasting. In: Proceedings of the 2020 IEEE International Conference on Data Mining; 2020 Nov 17-20; Sorrento, Italy. New York City: IEEE; 2020. p. 482-91.

[91]

Sawhney R, Agarwal S, Wadhwa A, Derr T, Shah RR. Stock selection via spatiotemporal hypergraph attention network: a learning to rank approach. Proc Conf AAAI Artif Intell 2021; 35(1):497-504.

[92]

Ma X, Zhao T, Guo Q, Li X, Zhang C. Fuzzy hypergraph network for recommending top-k profitable stocks. Inf Sci 2022; 613:239-55.

[93]

Li X, Cui C, Cao D, Du J, Zhang C. Hypergraph-based reinforcement learning for stock portfolio selection. In: Proceedings of the 2022 IEEE International Conference on Acoustics, Speech and Signal Processing; 2022 May 22-27; online. New York City: IEEE; 2022. p. 4028-32.

[94]

He Y, Tai W, Zhou F, Yang Y. Exploring hypergraph of earnings call for risk prediction. Proc Conf AAAI Artif Intell 2023; 37(13):16226-7.

[95]

Konstantinova EV, Skorobogatov VA. Application of hypergraph theory in chemistry. Discrete Math 2001; 235(1-3):365-83.

[96]

Jost J, Mulas R. Hypergraph Laplace operators for chemical reaction networks. Adv Math 2019; 351:870-96.

[97]

Yadati N, Nitin V, Nimishakavi M, Yadav P, Louis A, Talukdar P. NHP: neural hypergraph link prediction. In: Proceedings of the 29th ACM International Conference on Information & Knowledge Management; 2020 Oct 19-23; online. New York City: Association for Computing Machinery; 2020. p. 1705-14.

[98]

Xia L, Zheng P, Huang X, Liu C. A novel hypergraph convolution network-based approach for predicting the material removal rate in chemical mechanical planarization. J Intell Manuf 2022; 33(8):2295-306.

[99]

Wu T, Ling Q. Self-supervised heterogeneous hypergraph network for knowledge tracing. Inf Sci 2023; 624:200-16.

[100]

Feng Y, Zhang Z, Zhao X, Ji R, Gao Y. GVCNN: group-view convolutional neural networks for 3D shape recognition. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2018 Jun 18-23; Salt Lake City, UT, USA. New York City: IEEE; 2018.

[101]

iMoonLab. DeepHypergraph (DHG) [Internet]. San Francisco: GitHub, Inc; c2024 [cited 2024 Apr 12]. Available from: https://github.com/iMoonLab/DeepHypergraph.

[102]

Hutter F, Kotthoff L, Vanschoren J. Automated machine learning: methods, systems, challenges. Cham: Springer Nature; 2019.
