An Adaptive Hybrid Edge-Cloud Collaborative Offloading Method for Large-Scale Computational Tasks of Intelligent Machine Tool: Low-Latency, Energy-Efficient, and Secure
Zhiwen Lin, Kaien Wei, Yiqiao Wang, Chuanhai Chen, Jinyan Guo, Qiang Cheng, Zhifeng Liu
a Key Laboratory of CNC Equipment Reliability, Ministry of Education, Jilin University, Changchun 130025, China
b Jilin Provincial Key Laboratory of Advanced Manufacturing and Intelligent Technology for High-End CNC Equipment, Jilin University, Changchun 130025, China
c School of Mechanical and Aerospace Engineering, Jilin University, Changchun 130025, China
d Institute of Advanced Manufacturing and Intelligent Technology, Beijing University of Technology, Beijing 100124, China
e Beijing Key Laboratory of Design and Intelligent Machining Technology for High Precision Machine Tools, Beijing University of Technology, Beijing 100124, China
Intelligent machine tools operating in continuous machining environments are commonly influenced by the coupled effects of multi-component degradation and updates in machining tasks. These factors result in the generation of vast multi-source sensor data streams and numerous computational tasks with interdependent data relationships. The stringent real-time constraints and intricate dependency structures present considerable challenges to traditional single-mode computational frameworks. Furthermore, there is a growing demand for computational offloading solutions in intelligent machine tools that extend beyond merely optimizing latency. These solutions must also address energy management for sustainable manufacturing and ensure security to protect sensitive industrial data. This paper introduces an adaptive hybrid edge-cloud collaborative offloading mechanism that combines single-edge-cloud collaboration with multi-edge-cloud collaboration. This mechanism is capable of dynamically switching between collaborative modes based on the status of computational nodes, task characteristics, dependency complexity, and resource availability, ultimately facilitating low-latency, energy-efficient, and secure task processing. A novel hybrid hyper-heuristic algorithm has been developed to address large-scale task allocation challenges in heterogeneous edge-cloud environments, enabling the flexible allocation of computational resources and performance optimization. Extensive experiments indicate that the proposed approach achieves average enhancements of 27.36% in task processing time and 7.89% in energy efficiency when compared to state-of-the-art techniques, all while maintaining superior security performance. Validation through case studies on a digital twin gantry five-axis machining center illustrates that the mechanism effectively coordinates task execution across multi-source concurrent data processing, complex dependency task collaboration, high-computational machine learning workloads, and continuous batch task deployment scenarios, achieving a 37.03% reduction in latency and a 25.93% optimization in energy use relative to previous generation collaboration methods. These results provide both theoretical and technical backing for sustainable and secure computational offloading in intelligent machine tools, thereby contributing to the evolution of next-generation smart manufacturing systems.
1. Introduction
Driven by the pursuit of future-oriented smart manufacturing, traditional machining systems are being replaced by intelligent machine tools that incorporate advanced perception, real-time data analytics, and adaptive decision-making [1]. These intelligent tools generate vast amounts of real-time data that require rapid processing for tasks such as condition monitoring, fault prediction, and process optimization [2]. However, as task complexity increases and surpasses the processing capacity of local devices, it is vital to offload certain computational tasks by dynamically transferring portions of the workload from the machine tool to more powerful external computing platforms [3]. This offloading enhances overall efficiency and enables more intelligent production. Cloud computing offers significant computational power, scalable storage, and resource-sharing capabilities well-suited for smart manufacturing [4]. However, dependence solely on cloud resources could lead to challenges, including increased latency and limited bandwidth, particularly due to the real-time and sensitive nature of industrial data. Edge computing, by processing data closer to its source, can effectively minimize latency and enhance data privacy [5]. As a result, the integration of edge and cloud computing has emerged as a promising paradigm for intelligent machine tool offloading, facilitating flexible resource allocation and improved responsiveness.
In practice, task offloading for intelligent machine tools encounters several challenges, including the diverse and dynamically evolving nature of production tasks, high-frequency and large-volume data streams, and the coordination between local and heterogeneous computing resources. These tasks are mainly characterized by stringent real-time requirements, strong inter-task dependencies, and sensitivity to data privacy and energy consumption. As a result, several new challenges have arisen: ① how to flexibly coordinate local, edge, and cloud computing resources under dynamic network and resource conditions to achieve efficient task allocation; ② how to balance multiple objectives, such as latency, energy consumption, and security amid rising task complexity and data volume; and ③ how to design adaptive offloading and scheduling mechanisms that account for complex task dependencies and varying priorities.
1.2. Task offloading methods in the smart manufacturing environment
Research on intelligent machine tool offloading has evolved from single computational models to edge-cloud collaborative architectures. First, architecture-driven investigations emphasize task distribution between edge and cloud to enhance resource utilization and service responsiveness. For instance, Wang et al. [3] established a cyber-physical framework that utilizes cloud-edge collaboration to offload digital twin modeling and data processing tasks effectively. Moreover, process-oriented approaches harness offloading to optimize manufacturing efficiency and quality. Zhang et al. [6] presented a STEP-NC edge-cloud collaboration system designed to facilitate dynamic toolpath generation for process optimization, whereas Wang et al. [7] integrated edge-cloud distributed models for real-time surface quality monitoring based on tool wear data. In addition, several studies emphasize the integration of production scheduling with computational offloading. For instance, Yang et al. [8] proposed a coupling optimization model that jointly evaluates production efficiency and offloading delay within cloud-edge-terminal architectures. Overall, although these studies address significant components of computing offloading, the computational tasks in intelligent machine tools are marked by substantial and sequential source data, intricate inter-task dependencies, and a variety of task types. Current approaches still encounter challenges in adapting to dynamic network conditions and heterogeneous computing resources.
1.3. Multi-objective optimization for task offloading
Task offloading often necessitates a careful balance of multiple, sometimes conflicting, objectives owing to the complexity and diversity of computational tasks, alongside the dynamic nature of available resources. As a result, recent research on task offloading optimization has predominantly focused on priorities like latency, computational cost, and load balancing. For instance, Shu et al. [9] and Liu et al. [10] highlighted the importance of minimizing latency in edge-cloud collaborative environments. Meanwhile, Ma et al. [11] and Tuo et al. [12] explored the joint optimization of latency and execution costs, while Zeng et al. [13] and Zhang et al. [14] addressed multi-objective offloading by considering latency, resource balancing, and system load. Nonetheless, in the context of intelligent machine tools, the energy consumption associated with processing data from high-frequency sensors has emerged as a significant concern, second only to the energy usage of the production equipment itself [15]. Furthermore, as manufacturing data becomes increasingly sensitive, privacy and security issues have prompted research into personalized and privacy-aware offloading strategies [16]. However, challenges such as the risk of overloading edge nodes or the potential for privacy breaches when offloading sensitive data to the cloud underscore the necessity for multi-objective optimization frameworks specifically designed to meet the unique demands of intelligent machine tool applications. These frameworks should effectively balance efficiency, energy consumption, and security within dynamic and heterogeneous manufacturing environments.
1.4. Optimization algorithms for task offloading
Task offloading for intelligent machine tools typically involves navigating complex, high-dimensional, nonlinear, and multi-objective optimization challenges. To address these intricacies, a range of heuristic and metaheuristic algorithms has been introduced, including hybrid methodologies that integrate particle swarm optimization (PSO) with genetic algorithms (GAs) [17], as well as sophisticated evolutionary techniques such as the non-dominated sorting genetic algorithm II (NSGA-II) [18] and decomposition-based evolutionary strategies [19]. In addition, deep learning-enhanced approaches, like deep reinforcement learning, have shown promise for adaptive and scalable optimization in dynamic scenarios [13]. However, despite these advancements, challenges persist in achieving rapid convergence while maintaining solution diversity, particularly in the highly dynamic and resource-constrained environments typical of intelligent machine tool operations. Therefore, there remains a pressing need for the development of more efficient and robust optimization algorithms tailored specifically to the unique task offloading requirements of intelligent machine tool systems.
1.5. Unique characteristics of intelligent machine tool tasks
The computational tasks associated with intelligent machine tools possess distinctive characteristics that significantly impact offloading strategies. These tasks involve handling large volumes of high-frequency sensor data that necessitate real-time processing [20,21]. The computational tasks in intelligent machine tools exhibit complex dependencies and diverse topologies, as illustrated in Fig. 1, and can be categorized into three types: sequential tasks, parallel tasks, and hybrid tasks. Sequential subtasks depend strictly on preceding results, parallel subtasks operate independently, while hybrid tasks entail intricate interdependencies. The intricacy of these task topologies renders traditional independent task offloading strategies insufficient. Moreover, tasks differ in their periodicity and priority, calling for offloading mechanisms capable of dynamically responding to changes, as well as adaptively refining their strategies based on real-time feedback and evolving system conditions. These unique complexities necessitate the development of adaptive optimization models and algorithms to ensure efficient collaborative processing.
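To make the three topology classes concrete, the following minimal sketch encodes a hybrid task as a directed acyclic graph using NetworkX, which also underpins the simulation platform described in Section 4.2.1; the subtask names are hypothetical placeholders, not identifiers from the paper:

```python
import networkx as nx

# Illustrative hybrid-topology task: a serial preprocessing step feeds
# two parallel feature branches that converge in a fusion subtask.
dag = nx.DiGraph()
dag.add_edges_from([
    ("ct_j1_preprocess", "ct_j2_feature_A"),  # serial dependency
    ("ct_j1_preprocess", "ct_j3_feature_B"),  # parallel branch
    ("ct_j2_feature_A", "ct_j4_fusion"),      # hybrid: both branches must
    ("ct_j3_feature_B", "ct_j4_fusion"),      # complete before fusion
])

assert nx.is_directed_acyclic_graph(dag)
# Any feasible offloading schedule must respect this topological order.
print(list(nx.topological_sort(dag)))
```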
1.6. Contributions
This paper aims to address key challenges in the computational offloading of intelligent machine tools, presenting the following contributions:
(1) Adaptive offloading mechanism: We propose a real-time adaptive offloading mechanism that adjusts offloading strategies according to task complexity, data volume, inter-task dependencies, and available edge-cloud resources, thereby enabling intelligent machine tools to flexibly and efficiently utilize heterogeneous computational resources under varying production and network conditions.
(2) Multi-objective optimization model: We introduce a tailored multi-objective optimization model for intelligent machine tools that simultaneously minimizes latency, energy consumption, and security risks. This model is capable of addressing the need for balanced performance in efficiency, sustainability, and data protection during large-scale, complex task offloading.
(3) Hybrid hyper-heuristic parallel evolutionary algorithm: We develop a hybrid hyper-heuristic parallel evolutionary algorithm that enables the efficient decision-making of edge-cloud collaborative offloading schemes for intelligent machine tools. This algorithm is mainly designed to handle the high dimensionality, complex inter-task dependencies, and dynamic multi-objective requirements of real-world manufacturing scenarios, ensuring the generated offloading strategies are both robust and adaptable to changing computational environments.
2. Framework design and mathematical modeling
2.1. Framework design of intelligent machine tool computational task offloading
According to the task topology and data flow, an edge-cloud collaborative task offloading architecture is proposed, consisting of two layers: the edge computing layer and the cloud computing layer, as illustrated in Fig. 2.
(1) Phase I: edge-based lightweight processing. This phase focuses on lightweight tasks such as sensor data preprocessing and feature extraction. Its primary objective is to minimize data transmission volumes and reduce the computational load on the cloud. The edge computing layer functions in two modes: the single-edge node processing mode and the distributed multi-edge node collaborative processing mode. In the single-edge node processing mode, all subtasks are carried out sequentially on a single edge node, making this mode ideal for serial tasks characterized by strong data dependencies or limited computational requirements. Conversely, the multi-edge node collaborative processing mode enables the distribution of subtasks across multiple edge nodes for parallel execution. The results from these tasks are then consolidated by a primary edge node before being transmitted to the cloud, which makes this mode more suited for parallel or hybrid tasks that involve substantial data volumes or high computational complexity.
(2) Phase II: cloud-based deep processing. This phase addresses computationally intensive tasks that demand significant resources. The cloud layer encompasses high-complexity computational models, including resource allocation, resource management, data fusion, and neural network inference. These models facilitate in-depth analysis and exploration based on the preprocessing results from the edge nodes, thereby enabling global optimization and intelligent prediction capabilities.
Through the collaboration between the edge and cloud computing layers, two primary offloading modes for computational tasks emerge: the single-edge cloud collaborative mode and the distributed multi-edge cloud collaborative mode. This architecture not only allows for dynamic switching between various offloading modes in response to real-time task and resource fluctuations but also enables adaptive selection based on continuous evaluation of system status and task characteristics.
2.2. Modeling of adaptive hybrid edge-cloud collaborative mechanism (AH-ECO)
All mathematical symbols and their meanings utilized in the theoretical modeling of AH-ECO are detailed in Nomenclature.
The fundamental principle of the proposed mechanism lies in its ability to flexibly select the optimal edge-cloud collaboration mode in real-time, guided by the current scale of sensing data, task complexity, and available computational resources. By continuously monitoring these variables, the system can automatically modify its offloading strategy to maintain high efficiency and resource utilization under varying operational conditions. To formalize this adaptive decision process, a collaboration evaluation function (${{S}_{\text{collab}}}$) is defined, which integrates these factors to guide the dynamic selection of the offloading mode:
where the first indicator evaluates the scale of sensing data, the second assesses the computational workload relative to edge computing capability, and the third evaluates the impact of data transmission on the network bandwidth. The factors ${{\theta }_{\text{sensor}}}$, ${{\theta }_{\text{load}}}$, and ${{\theta }_{\text{data}}}$ represent the decision thresholds for sensor fusion, computational load, and data transmission, respectively. When ${{S}_{\text{collab}}}<1$, the single-edge-cloud collaboration mode is adopted; otherwise, the distributed multi-edge-cloud collaboration mode is selected. ${{N}_{\text{sensor}}}$ is the number of sensors, $\text{ctn}(\text{C}{{\text{T}}_{j}})$ is the number of computational instructions for computing task $\text{C}{{\text{T}}_{j}}$, $\text{Com}{{\text{p}}_{\text{e}{{\text{s}}_{i}}}}$ is the computing capability of edge node $\text{e}{{\text{s}}_{i}}$, BW is the network bandwidth, and $D(\text{C}{{\text{T}}_{j}})$ is the data volume of computing task $\text{C}{{\text{T}}_{j}}$.
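The displayed form of ${{S}_{\text{collab}}}$ is not reproduced in this excerpt; the Python sketch below therefore assumes, purely for illustration, that the three threshold-normalized indicators are aggregated by a maximum, and shows how the mode switch follows from the ${{S}_{\text{collab}}}<1$ rule stated above:

```python
def collaboration_score(n_sensor, ctn_ct, comp_es, d_ct, bw,
                        theta_sensor, theta_load, theta_data):
    """Aggregate the three indicators described in the text.

    The max-aggregation and the exact normalizations are assumptions;
    the paper's displayed equation is not reproduced in this excerpt.
    """
    sensing_scale = n_sensor / theta_sensor          # sensing-data scale
    load_pressure = ctn_ct / (theta_load * comp_es)  # workload vs. edge capability
    bw_pressure = d_ct / (theta_data * bw)           # transmission vs. bandwidth
    return max(sensing_scale, load_pressure, bw_pressure)


def select_mode(s_collab):
    # Stated rule: S_collab < 1 -> single-edge-cloud collaboration;
    # otherwise the distributed multi-edge-cloud mode is used.
    return "single-edge-cloud" if s_collab < 1 else "multi-edge-cloud"
```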
2.2.1. Modeling of the single-edge-cloud collaboration
In this mode, the computation of the task $\text{C}{{\text{T}}_{j}}$ is divided into two phases: ① single-edge node processing and ② cloud processing. The total latency of $\text{C}{{\text{T}}_{j}}$ can be expressed as:
The total latency of the edge computation stage is determined by all subtasks ($\text{c}{{\text{t}}_{j,i}}$) and their data dependencies, and can be represented by:
The execution latency of the subtasks ($\text{c}{{\text{t}}_{j,i}}$) at the edge node is the sum of four factors, including the terminal data transmission latency, the queuing latency at the edge node, the edge computation latency, and the maximum completion time of all its predecessor tasks. This ensures that execution begins only after all predecessor tasks are completed and their data has been transmitted:
where $T_{\text{que}}^{\text{e}{{\text{s}}_{q}}}$ is the current task queue time for edge node $\text{e}{{\text{s}}_{q}}$ and ${{T}_{\text{comm}}}\left( \text{c}{{\text{t}}_{j,k}}\to \text{c}{{\text{t}}_{j,i}} \right)$ is the data transmission latency from predecessor task $\text{c}{{\text{t}}_{j,k}}$ to current task $\text{c}{{\text{t}}_{j,i}}$.
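A minimal sketch of this latency composition is given below; it follows the four-factor decomposition stated above, with the exact displayed equation omitted from this excerpt, so the parameter names are illustrative:

```python
def edge_subtask_latency(d_ct, bw_u_to_es, t_queue, ctn_ct, comp_es,
                         pred_finish_plus_comm):
    """Four-factor latency of subtask ct_{j,i} on an edge node.

    pred_finish_plus_comm: iterable of (completion time + transmission
    latency) over all predecessor subtasks ct_{j,k}; empty for tasks
    without predecessors. Parameter names are illustrative.
    """
    t_ready = max(pred_finish_plus_comm, default=0.0)  # wait for slowest predecessor
    t_upload = d_ct / bw_u_to_es                       # terminal -> edge transmission
    t_compute = ctn_ct / comp_es                       # edge computation
    return t_ready + t_upload + t_queue + t_compute
```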
The precise computation of each latency component is evaluated in the following form:
Once $\text{C}{{\text{T}}_{j}}$ is completed at the edge node, the generated $\text{CT}_{j}^{\text{edge}}$ is uploaded to the cloud for the second phase of processing, with the specific latency calculations as follows:
The total energy consumption of $\text{C}{{\text{T}}_{j}}$ comprises the energy consumed by edge computing, cloud computing, and data transmission, as follows:
In addition, the data transmission energy consumption is composed of three segments: from the terminal to the edge, from the edge to the cloud, and from the cloud to the edge, which can be provided in the following form:
The total security risk of the task $\text{C}{{\text{T}}_{j}}$ considers two aspects: the failure risk of overloaded edge nodes and the data leakage risk during cloud transmission. The corresponding mathematical expression is stated as follows:
where the first term evaluates the operational reliability of edge resources, accounting for both the ratio of node load to processing capacity and the node's inherent mean time between failures (MTTF); $w$ denotes the sensitivity coefficient governing the impact of the load ratio on failure risk. The second term assesses the risk of data leakage during cloud transmission, taking into account the data volume, cloud access frequency, cloud security level, and data dispersion.
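The concrete functional forms of the two risk terms are not reproduced in this excerpt; the sketch below is an illustrative reading of the description above (monotone in the load ratio with sensitivity $w$, inversely related to MTTF, and so on), not the authors' exact model:

```python
def task_security_risk(load, capacity, mttf, w,
                       d_cloud, access_freq, sec_level, dispersion):
    """Two-term risk model for task CT_j (illustrative forms).

    Term 1: edge failure risk, increasing in the load ratio with
    sensitivity w and decreasing in the node's MTTF.
    Term 2: cloud leakage risk, increasing in uploaded data volume and
    access frequency, decreasing in cloud security level and dispersion.
    """
    failure_risk = (load / capacity) ** w / mttf
    leakage_risk = (d_cloud * access_freq) / (sec_level * dispersion)
    return failure_risk + leakage_risk
```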
2.2.2. Modeling of the distributed multi-edge-cloud collaboration
For this mode, the computation process of the task $\text{C}{{\text{T}}_{j}}$ is divided into three stages: ① task distribution; ② result merging; and ③ cloud processing. As a result, the total latency of $\text{C}{{\text{T}}_{j}}$ can be stated by:
where $\text{e}{{\text{s}}_{k}}$ denotes the edge node executing subtask $\text{c}{{\text{t}}_{j,i}}$, and $\text{e}{{\text{s}}_{N}}$ denotes the last edge node.
The latency ${{T}_{\text{edge}}}\left( \text{c}{{\text{t}}_{j,i}} \right)$ for subtask $\text{c}{{\text{t}}_{j,i}}$ on the edge node can be evaluated based on Eq. (5). After all edge nodes finish processing, the intermediate computation results ($\text{CT}_{j}^{\text{edge},k}$) are transmitted to the main edge node ($\text{e}{{\text{s}}_{m}}$) for merging:
where $T_{\text{comm}}^{\text{e}{{\text{s}}_{k}}\to \text{e}{{\text{s}}_{m}}}\left( \cdot \right)$ denotes the delay of transmitting results from edge node $\text{e}{{\text{s}}_{k}}$ to the main edge node $\text{e}{{\text{s}}_{m}}$, and $T_{\text{merge}}^{\text{e}{{\text{s}}_{m}}}\left( \cdot \right)$ denotes the delay of merging task results at the main edge node $\text{e}{{\text{s}}_{m}}$.
The merged results $\text{CT}_{j}^{\text{edge}}$ are uploaded to the cloud for the third phase of processing, with the latency calculated using Eq. (7). The total energy consumption of $\text{C}{{\text{T}}_{j}}$ consists of the edge computing energy consumption, distributed edge merging energy consumption, cloud computing energy consumption, and data transmission energy consumption:
where $E_{\text{comm}}^{\text{e}{{\text{s}}_{k}}\to \text{e}{{\text{s}}_{m}}}\left( \cdot \right)$ denotes the energy consumed by transmission from edge node $\text{e}{{\text{s}}_{k}}$ to the main edge node $\text{e}{{\text{s}}_{m}}$, and $E_{\text{merge}}^{\text{e}{{\text{s}}_{m}}}\left( \cdot \right)$ denotes the energy consumed by merging task results at the main edge node $\text{e}{{\text{s}}_{m}}$.
The total security evaluation value of the task $\text{C}{{\text{T}}_{j}}$ is the combination of the failure risks of all assigned edge nodes and the data leakage risk during cloud transmission:
2.2.3. Description of the multi-objective optimization problem
The final optimization goal (O) of the edge-cloud computing system is to minimize the execution latency, energy consumption, and security risks of the digital twin machine tool task set CT. This can be mathematically stated by:
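The displayed definition of $O$ is omitted from this excerpt; as one plausible reading, the sketch below scalarizes the three objectives with illustrative weights, which is consistent with the single fitness curves reported in Section 4.2:

```python
def overall_objective(latencies, energies, risks,
                      w_t=1.0, w_e=1.0, w_r=1.0):
    """Scalarized objective O over the task set CT (illustrative).

    latencies, energies, risks: per-task values T(CT_j), E(CT_j),
    Risk(CT_j); w_t, w_e, w_r are illustrative trade-off weights.
    """
    return (w_t * sum(latencies)
            + w_e * sum(energies)
            + w_r * sum(risks))
```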
3. Hybrid hyper-heuristic operator parallel evolution algorithm
The edge-cloud collaborative task offloading optimization problem is characterized by the uncertainty of task properties, the heterogeneity of resource environments, and the diversity of optimization objectives, resulting in a highly complex, multi-dimensional problem. To effectively address these issues, a hybrid hyper-heuristic operator parallel evolution (HHOPE) algorithm is introduced; its foundational framework and design process are illustrated in Fig. 3.
Distinct from traditional single-operator [11] or static hybrid metaheuristics [22], the HHOPE integrates multiple evolutionary operators within a hyper-heuristic framework, facilitating adaptive operator selection and parallel optimization. This algorithm features a multi-feature fusion-based task pre-assignment mechanism to enhance initialization and employs a game-theoretic cross-learning strategy among operators to improve both convergence and solution diversity in high-dimensional, multi-objective contexts. Specifically, genetic algorithms, particle swarm optimization, and the sparrow search algorithm are selected as core operators based on their complementary strengths: genetic algorithms excel in global exploration and discrete task assignment, particle swarm optimization is adept at rapid convergence in continuous, large-scale search spaces, and the sparrow search algorithm exhibits strong adaptability in dynamic environments. This combination proves particularly effective for the heterogeneous, dynamic, and mixed-discrete-continuous nature of intelligent machine tool offloading problems. The HHOPE algorithm operates in three stages: task pre-assignment, hyper-heuristic operator library design, and multi-group parallel evolutionary optimization.
3.2. Task offloading pre-assignment mechanism based on the multi-feature fusion
To accurately describe the characteristics of collaborative computation tasks $\text{ct}_{j,i}^{\text{collab}}$, the task feature vector $\mathscr{P}\left(\mathrm{ct}_{j, i}^{\text {collab }}\right)$ is defined as:
where $\text{Load}\left( \cdot \right)$ denotes the normalized representation of the current load of edge nodes, and $\text{B}{{\text{W}}_{\text{comm}}}$ is the communication bandwidth.
The components of the feature vector correspond to the ratio of task computational complexity to node computing capacity, the ratio of task data volume to node bandwidth, and a normalized representation of the node's current load. To ensure an effective allocation of collaborative tasks, the goal is to minimize both computation and transmission latency while balancing the load across edge computing nodes. In addition, a decision variable ${{x}_{j,i,k}}$ is defined, where ${{x}_{j,i,k}}=1$ indicates that $\text{ct}_{j,i}^{\text{collab}}$ is assigned to the edge node $\text{e}{{\text{s}}_{k}}$. The optimization objective function is given by:
The K-means clustering algorithm is then employed to allocate tasks with similar multi-source features to similar computing environments. To this end, N cluster centers (${{C}_{k}}$) are first randomly initialized, with each center representing the attributes of an edge computing node. For each collaborative task $\text{ct}_{j,i}^{\text{collab}}$, the Euclidean distance between its feature vector $\mathscr{P}\left(\mathrm{ct}_{j, i}^{\text {collab }}\right)$ and each cluster center ${{C}_{k}}$ is computed as:
$d_{j, i, k}=\left\|\mathscr{P}\left(\mathrm{ct}_{j, i}^{\text {collab }}\right)-C_{k}\right\| $
The task $\text{ct}_{j,i}^{\text{collab}}$ is then assigned to the edge computing node $\text{e}{{\text{s}}_{k}}$ corresponding to the nearest cluster center. After completing the task allocation, the load $\text{Loa}{{\text{d}}_{\text{e}{{\text{s}}_{k}}}}$ of each edge node is utilized to update the corresponding cluster center ${{C}_{k}}$.
These steps (distance calculation, task allocation, and cluster center update) are repeated until the variation in the cluster centers falls below a specified threshold or the maximum number of iterations is reached. The pseudocode for the multi-feature-fusion-based task offloading pre-allocation mechanism is presented in Algorithm 1.
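A simplified re-implementation of this pre-assignment loop is sketched below; it mirrors the structure of Algorithm 1 (distance calculation, nearest-center assignment, center update, convergence check) but uses the mean of assigned feature vectors as a proxy for the paper's load-based center update:

```python
import numpy as np

def preassign_tasks(features, n_nodes, max_iter=50, tol=1e-4, seed=0):
    """K-means pre-assignment of collaborative subtasks to edge nodes.

    features: (n_tasks, 3) array of the feature vectors P(ct^collab);
    returns an array mapping each task index to an edge-node index.
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), n_nodes, replace=False)]
    for _ in range(max_iter):
        # d_{j,i,k}: Euclidean distance to every cluster center C_k
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)              # nearest-center assignment
        new_centers = centers.copy()
        for k in range(n_nodes):
            members = features[assign == k]
            if len(members):
                # Proxy for the paper's load-based update of C_k
                new_centers[k] = members.mean(axis=0)
        if np.linalg.norm(new_centers - centers) < tol:  # convergence check
            break
        centers = new_centers
    return assign
```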
3.3. Design of the hyper-heuristic operator library
3.3.1. Design of the genetic optimization operator
To address the declining population diversity and premature convergence frequently seen in the later stages of the standard genetic algorithm (GA), a diversity enhancement strategy is incorporated into this operator, referred to as the diversity-enhanced GA (DEGA). The mutation operation utilizes a dynamic mutation probability mechanism, applying a high mutation probability during the early stages of evolution to boost exploration, which is then gradually decreased in the subsequent stages to maintain population stability. For this purpose, the mutation probability (${{P}_{\text{mut}}}$) is defined as:
where $P_{\text{mut}}^{\text{max}}$ and $P_{\text{mut}}^{\text{min}}$ represent the maximum and minimum mutation probabilities, respectively, and ${{\text{iter}}_{\text{max}}}$ and $t$ denote the maximum number of iterations and the current iteration number, respectively. To further prevent premature convergence, the genetic operator introduces an elite retention mechanism and dynamically adjusts the retention rate of elite individuals based on population diversity monitoring. The diversity metric $\text{DM}\left( X \right)$ of the population is evaluated as:
$\text{DM}\left( X \right)=\frac{1}{\text{PS}}\sum\limits_{i=1}^{\text{PS}}\sum\limits_{j=1}^{\text{PS}}d\left( {{X}_{i}},{{X}_{j}} \right)$
where $d\left( {{X}_{i}},{{X}_{j}} \right)$ denotes the Hamming distance between individuals ${{X}_{i}}$ and ${{X}_{j}}$, and PS represents the population size. If $\text{DM}\left( X \right)$ falls below a predefined threshold, the retention of elite individuals decreases while the mutation probability increases to enhance population diversity. The optimized genetic algorithm generates an elite solution in each generation and maintains high-fitness individuals in the population, ensuring high-quality solutions for the subsequent optimization stages.
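A compact sketch of the two DEGA ingredients follows; the linear decay of ${{P}_{\text{mut}}}$ is an assumption (the displayed schedule is omitted from this excerpt), while $\text{DM}(X)$ follows the equation above:

```python
import numpy as np

def mutation_probability(t, iter_max, p_max=0.3, p_min=0.01):
    # Assumed linear decay: high P_mut early (exploration),
    # low P_mut late (stability). p_max and p_min are illustrative.
    return p_max - (p_max - p_min) * t / iter_max

def diversity_metric(pop):
    """DM(X): aggregated pairwise Hamming distance over the population.

    pop: (PS, n) 0/1 array of encoded individuals; follows DM(X) above.
    """
    ps = len(pop)
    total = sum(np.sum(pop[i] != pop[j]) for i in range(ps) for j in range(ps))
    return total / ps
```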
3.3.2. Design of the particle swarm optimization operator
The standard PSO algorithm struggles with effectively balancing exploration and exploitation, which can result in convergence to local optima. To overcome this limitation, a dynamic learning factor mechanism has been introduced, named DLFPSO. This operator enhances the algorithm's global search capability while also improving local convergence by employing an adaptive dynamic adjustment strategy for the inertia weight:
where $\omega$, ${{\omega }_{\text{max}}}$, and ${{\omega }_{\text{min}}}$ denote the inertia weight, the maximum inertia weight, and the minimum inertia weight, respectively, and non denotes the nonlinear modulation index.
The learning factors ${{c}_{1}}$ and ${{c}_{2}}$ control the particle's reliance on the individual best position (${{p}_{\text{best}}}$) and the global best position (${{g}_{\text{best}}}$), respectively. Dynamic learning factors are introduced to improve the particle's ability to explore and exploit:
where ${{c}_{1\_\text{min}}}$ and ${{c}_{2\_\text{min}}}$ denote the minimum values of the learning factors ${{c}_{1}}$ and ${{c}_{2}}$, respectively, and ${{c}_{1\_\text{max}}}$ and ${{c}_{2\_\text{max}}}$ denote the corresponding maximum values.
At the beginning, smaller ${{c}_{1}}$ and larger ${{c}_{2}}$ foster global exploration, while in the later stages, larger ${{c}_{1}}$ and smaller ${{c}_{2}}$ improve local search refinement. To handle the discreteness of the offloading problem, a Sigmoid function is introduced after the position update for nonlinear mapping:
where $S\left( x_{i,j}^{t+1} \right)\in \left[ 0,1 \right]$ denotes the probability value of the particle's updated position. The binary values $x_{i,j}^{t+1}$ are commonly determined by comparing $S\left( x_{i,j}^{t+1} \right)$ with a random value $r\in \left[ 0,1 \right]$. This hybrid optimization strategy retains the global search capability of the PSO in continuous space while ensuring the feasibility of solutions through discretized mapping.
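The discretization step can be sketched directly from the description above:

```python
import numpy as np

def discretize_positions(x, rng=None):
    """Map continuous DLFPSO positions to binary offloading decisions.

    Each coordinate becomes 1 with probability S(x) = 1 / (1 + e^{-x}),
    by comparison against a uniform random value r in [0, 1].
    """
    if rng is None:
        rng = np.random.default_rng()
    s = 1.0 / (1.0 + np.exp(-x))   # S(x) in [0, 1]
    r = rng.random(x.shape)        # r ~ U[0, 1]
    return (r < s).astype(int)     # binary decision variables
```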
3.3.3. Design of the sparrow search optimization operator
To overcome the limitations imposed by fixed individual roles in the standard sparrow search algorithm (SSA), this operator is optimized through dynamic role allocation and adaptive step-size adjustment, and is referred to as the role-adaptive sparrow search algorithm (RASSA). The explorers account for a proportion ${{p}_{\text{f}}}$ of the population and are responsible for global search to uncover potential high-quality solutions. Their position update rule is given by:
where K denotes the step size factor, $X_{\text{best}}^{t}$ represents the best position in the current population, p is an alarm value, and ∂ is a random direction indicator. The explorers perform broad random searches when resources are abundant and converge near the best solution under resource scarcity. The followers account for a proportion ${{p}_{\text{j}}}$ of the population and are mainly responsible for local exploitation, either following explorers or moving away from poor solutions. Their position update rule is expressed by:
where $X_{\text{worst}}^{t}$ represents the worst solution in the current population, ${{N}_{\text{pop}}}$ denotes the total population size, and η is the step control parameter. The sentinels account for a proportion ${{p}_{\text{w}}}$ of the population and are responsible for leap searches that expand the search range. Their position update rule is stated as:
where $X_{\text{random}}^{t}$ denotes a randomly generated candidate solution, and δ represents the leap factor. In general, random perturbations are capable of enhancing the global search capability. It should be emphasized that the proportions of sparrow roles are dynamically updated over iterations: the explorer proportion (${{p}_{\text{f}}}$) decreases, while the sentinel proportion (${{p}_{\text{w}}}$) increases, mathematically given by:
where ${{p}_{\text{f}0}}$ and ${{p}_{\text{w}0}}$ represent the initial role proportions. This dynamic distribution ensures global exploration is prioritized in early iterations and local exploitation in later ones. Further, a step size strategy based on fitness value is designed such that the step size (${{L}_{i}}$) can be expressed as:
where ${{L}_{\max }}$ is the initial step size, ${{f}_{i}}$ is the fitness value, and ${{f}_{\text{best}}}$ and ${{f}_{\text{worst}}}$ are the best and worst fitness values in the current population, respectively. This strategy combines dual information from individual quality and iteration progress: high-quality individuals take smaller step sizes for fine-grained exploitation, whereas low-quality individuals take larger step sizes to enhance global exploration.
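The sketch below illustrates the two RASSA adaptations; the linear role schedules are assumptions (the displayed update rules are omitted from this excerpt), while the step-size rule follows the fitness-proportional description above:

```python
def role_proportions(t, iter_max, p_f0=0.6, p_w0=0.1):
    """Dynamic role split (assumed linear schedules).

    Explorers (p_f) shrink and sentinels (p_w) grow over iterations;
    followers (p_j) take the remainder, as described in the text.
    p_f0 and p_w0 are illustrative initial proportions.
    """
    frac = t / iter_max
    p_f = p_f0 * (1.0 - 0.5 * frac)
    p_w = p_w0 * (1.0 + frac)
    p_j = 1.0 - p_f - p_w
    return p_f, p_j, p_w

def step_size(f_i, f_best, f_worst, l_max, eps=1e-12):
    # Fitness-proportional step: near-best individuals take small steps
    # (fine-grained exploitation); poor ones take large steps.
    return l_max * (f_i - f_best) / (f_worst - f_best + eps)
```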
3.4. Multi-group parallel evolutionary optimization
To maximize the optimization capabilities of each operator, a novel game-theoretic framework for information sharing and cross-learning is proposed, integrated with dimensionality reduction techniques for the search space. Each operator dynamically adapts its strategies through interactions with the others, aiming to maximize its gains throughout the optimization process. Let the operator set be $\text{Opera}=\left\{ \text{DEGA},\text{DLFPSO},\text{RASSA} \right\}$. The state of each operator $\text{Oper}{{\text{a}}_{i}}$ is defined by both the global best solution $X_{\text{best}}^{\text{Oper}{{\text{a}}_{i}}}$ and the population's fitness distribution ${{F}_{X}}$. To this end, the profit function ${{U}_{\text{p}}}\left( \text{Oper}{{\text{a}}_{i}} \right)$ is defined in the following form:
where ${{s}_{i,j}}$ indicates the information-sharing intensity between operators $\text{Oper}{{\text{a}}_{i}}$ and $\text{Oper}{{\text{a}}_{j}}$: if $\text{Oper}{{\text{a}}_{i}}$ and $\text{Oper}{{\text{a}}_{j}}$ choose to share information in the current iteration, ${{s}_{i,j}}=1$; otherwise, ${{s}_{i,j}}=0$. $C\left( {{A}_{i}} \right)$ represents the computational cost of the operator, including its computation time and resource consumption. ${{w}_{1}}$, ${{w}_{2}}$, and ${{w}_{3}}$ denote the weights for solution-quality improvement, information-sharing benefit, and computational cost, respectively. During operator interactions, if $\text{Oper}{{\text{a}}_{i}}$ and $\text{Oper}{{\text{a}}_{j}}$ satisfy the criteria for information sharing and cross-learning, a crossover strategy based on fitness differences is employed to create new individuals. The $\text{ }\!\!\Delta\!\!\text{ }F\left( X_{\text{best}}^{\text{Oper}{{\text{a}}_{i}}} \right)$ represents the improvement of the operator's best solution in the current iteration, measuring its independent optimization capability:
where $X_{\text{best}}^{\text{Oper}{{\text{a}}_{i}},\text{prev}}$ denotes the previous best solution of operator $\text{Oper}{{\text{a}}_{i}}$.
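Since the displayed form of ${{U}_{\text{p}}}$ is omitted from this excerpt, the sketch below assumes a weighted combination of the three components named above; the default weights mirror the ablation setting in Section 4.2.2:

```python
def operator_profit(delta_f, share_flags, share_benefits, cost,
                    w1=1.0, w2=0.5, w3=0.3):
    """Illustrative profit U_p(Opera_i) for operator selection.

    delta_f: best-solution improvement of the operator this iteration;
    share_flags: s_{i,j} in {0, 1} for each partner operator j;
    share_benefits: corresponding information-sharing gains;
    cost: computational cost C of the operator.
    """
    sharing = sum(s * b for s, b in zip(share_flags, share_benefits))
    return w1 * delta_f + w2 * sharing - w3 * cost
```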
To avoid exponential growth in the dimensionality of the search space, intelligent dimensionality reduction techniques are developed to reduce the search space and enhance the efficiency of finding the global optimal solution. Assuming each computation task $\text{C}{{\text{T}}_{j}}$ in the system has characteristics including computational complexity $\text{ctn}\left( \text{C}{{\text{T}}_{j}} \right)$, data volume $D\left( \text{C}{{\text{T}}_{j}} \right)$, real-time constraint ${{T}_{\text{max}}}\left( \text{C}{{\text{T}}_{j}} \right)$, and energy constraint ${{E}_{\text{max}}}\left( \text{C}{{\text{T}}_{j}} \right)$, the task feature matrix can be defined as:
By computing the covariance matrix and extracting its eigenvectors, a reduced task feature matrix (H) is generated. The principal components with a cumulative contribution rate exceeding 90% are retained to ensure that critical task features are preserved. Reinforcement learning agents then explore and refine the search space, identifying the optimal subspace ${{X}_{\text{search}}}\subseteq {{X}_{\text{global}}}$ to enhance efficiency. Moreover, to counteract potential information loss introduced by dimensionality reduction, the search space radius $\text{R}{{\text{a}}_{\text{search}}}$ is dynamically adjusted during optimization. The adjustment rule can be stated by:
where $\text{Gradient}\left( F\left( X \right) \right)$ denotes the fitness gradient. Generally, a larger gradient increases the search radius for better global exploration, whereas a smaller gradient reduces it for more precise local searches. The multi-group parallel evolutionary process of HHOPE has been presented in Appendix A.
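The sketch below illustrates the two mechanisms: the principal-component reduction follows the 90% cumulative-contribution rule stated above, while the radius-adjustment form is an assumption consistent with the gradient behavior described:

```python
import numpy as np

def reduce_task_features(f_matrix, contribution=0.90):
    """Keep principal components whose cumulative contribution > 90%.

    f_matrix: (n_tasks, 4) matrix of ctn, D, T_max, E_max per task;
    returns the reduced feature matrix H.
    """
    centered = f_matrix - f_matrix.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = eigvals.argsort()[::-1]                  # descending variance
    ratios = eigvals[order] / eigvals.sum()
    k = int(np.searchsorted(np.cumsum(ratios), contribution)) + 1
    return centered @ eigvecs[:, order[:k]]

def adjust_search_radius(radius, grad_norm, grad_ref, gain=0.1):
    # Assumed rule: a larger fitness gradient enlarges Ra_search for
    # global exploration; a smaller one shrinks it for local search.
    return radius * (1.0 + gain * (grad_norm - grad_ref) / (grad_ref + 1e-12))
```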
4. Experiments and case study
4.1. Numerical experiments and performance evaluation of the HHOPE
To validate the performance of the HHOPE algorithm, we utilized various groups of UF, ZDT, and DTLZ test functions. In these experiments, HHOPE was benchmarked against five classical algorithms, which include:
(1) Direction guided evolutionary algorithm (DGEA): This algorithm employs direction vectors to guide population search by examining the geometric characteristics of the Pareto front.
(2) Hyper-dominance-based evolutionary algorithm (HEA): A multi-objective optimization algorithm that prioritizes individuals based on hyper-dominance relationships, thereby enhancing the diversity and uniformity of the solution set.
(3) Improved decomposition-based evolutionary algorithm (IDBEA): This approach decomposes multi-objective problems into several single-objective sub-problems, resolving them collaboratively to optimize the overall objective.
(4) Multi-objective particle swarm optimization (MOPSO): A multi-objective strategy rooted in particle swarm optimization, leveraging swarm intelligence mechanisms to explore and exploit the Pareto front.
(5) Regularity model-based multi-objective estimation of distribution algorithm (RMMEDA): This algorithm constructs the statistical models to estimate the distribution of the Pareto front and generates new solutions based on these models.
Each algorithm was appropriately configured with a population size of 200 and a maximum of 1000 iterations. To minimize randomness, each test function was executed 30 times. The inverted generational distance (IGD) and standard deviation (STD) metrics were employed to evaluate the convergence performance of HHOPE and the benchmark algorithms across each test function. The results of these experiments are presented in Table 1.
In Table 1, the symbols “+,” “−,” and “=” used in the results indicate whether the HHOPE outperformed, underperformed, or performed comparably to the benchmark algorithms on the corresponding test functions. The statistical results reveal that the HHOPE substantially surpassed the benchmark algorithms in 10 out of 12 test functions, particularly excelling in both convex and non-convex scenarios (e.g., UF1, ZDT2), complex Pareto fronts (e.g., UF4, ZDT6), and high-dimensional or constrained problems (e.g., ZDT4, DTLZ2). In cases involving non-uniform or discrete distributions (e.g., UF5, ZDT3), HHOPE effectively balanced global exploration and local exploitation through multi-operator collaboration and intelligent search strategies, resulting in more uniform Pareto solution sets. Furthermore, the HHOPE exhibited the lowest standard deviation across multiple experiments, demonstrating its remarkable result stability.
To analyze the distribution of high-quality solutions generated by the HHOPE across various test functions, boxplots were employed (Fig. 4). The HHOPE's IGD values demonstrated lower means and smaller box heights compared to those of other algorithms, highlighting its superior stability and consistent solution quality.
4.2. Simulation experiments and results analysis
4.2.1. Simulation experiment design
To assess the effectiveness of the HHOPE in addressing multi-objective offloading challenges associated with large-scale machine tool computational tasks, experiments were appropriately designed focusing on two key aspects: the configuration of the experimental environment and the setting of scenario parameters.
(1) Experimental environment setup: A custom simulation platform for task offloading was developed utilizing Python. The experiments were carried out on a system equipped with an Intel Core i7-12700K processor (Intel Corporation, USA), 16 GB of RAM (Kingston, USA), and a 1 TB NVMe SSD (Western Digital, USA). Python 3.9 served as the runtime environment, incorporating libraries such as SciPy, DEAP, SimPy, and NetworkX for task scheduling simulation, algorithm performance evaluation, and results visualization.
(2) Scenario parameter configuration: To ensure reproducibility and fairness, Table 2 outlines the parameters pertaining to computational resources and task characteristics. The computational resource parameters were derived from the performance of real industrial devices, whereas the task characteristics were designed to mirror real-world scenarios. The edge-exclusive subtask data sizes ranged from 200 KB to 3 MB, aligning with state data collected by sensors in digital twin tasks. The number of subtasks was established between 3 and 10, reflecting the customary multi-level data processing workflows. Fig. 5 presents the differentiated attribute configuration information for all computing tasks as an intelligent machine tool completes a single machining task, encompassing the number of subtasks, task scale, number of computing instructions, task dependencies, and computing node distribution for each task. Furthermore, a simulation system for large-scale edge-cloud collaborative offloading of intelligent machine tools was developed, as detailed in Appendix A.
4.2.2. Ablation experiments
To further verify the influence of various modules and parameter configurations within the edge-cloud collaborative mechanism on algorithm performance, two distinct types of ablation experiments were conducted: one ablating the computational task pre-allocation module and the other experimenting with various configurations of core control parameters.
(1) Ablation of computational task pre-allocation module. Fig. 6 illustrates the fitness value curves plotted against iteration numbers for the full HHOPE algorithm alongside its version lacking the task pre-allocation module. The complete HHOPE algorithm demonstrates a significant advantage over the ablated version in terms of initial fitness values. This enhancement enables HHOPE to efficiently navigate toward the optimal solution set while circumventing local optima. Moreover, the complete HHOPE algorithm achieves a fitness value standard deviation of 11.73%, markedly superior to the 22.69% observed in the ablated version. In summary, the inclusion of the computational task pre-allocation module elevates the quality of the initial population, minimizes the time expended on suboptimal solutions during the early stages of the algorithm, and enhances both optimization efficiency and stability of results.
(2) Ablation of various control parameters in the HHOPE. The core control parameters of the HHOPE algorithm include fitness improvement rate weights (${{w}_{1}}$, ${{w}_{2}}$, and ${{w}_{3}}$, derived from Eq. (34)) and the iteration interval for inter-group information exchange Fex. The ranges for these parameters are: ${{w}_{1}}$∈[0.5,1.5], ${{w}_{2}}$∈[0.3,0.7], ${{w}_{3}}$ fixed at 0.3, and Fex∈[10,20]. Fig. 7 illustrates the effects of five distinct parameter configurations on the algorithm's fitness and convergence performance. The red curve (${{w}_{1}}$=1.5) shows that a larger w1 emphasizes fitness improvement, resulting in enhanced optimization efficiency during the initial 50 iterations. However, if w1 is excessively large, it diminishes information sharing, causing slower convergence in subsequent iterations. The balanced values of ${{w}_{1}}$, such as 0.5 or 1, provide a more stable reduction in fitness and yield better final fitness results. The variations in ${{w}_{2}}$ significantly influence the fitness convergence process. A larger ${{w}_{2}}$ (i.e., ${{w}_{2}}$=0.7) improves information sharing between operators, expediting fitness reduction during the early iterations, but it may compromise individual diversity and lead to convergence at local optima later on. The Fex determines the frequency of information exchange between the two groups of optimization operators. A lower Fex (i.e., Fex=10) enhances global search capabilities but may also introduce greater fluctuations within the population.
The results reveal that incorporating the computational task pre-allocation module markedly enhances both population quality and algorithm stability. Furthermore, careful configuration (${{w}_{1}}$=1, ${{w}_{2}}$=0.5, ${{w}_{3}}$=0.3, and Fex=10) achieves a good balance between fitness improvement rate and final convergence precision.
4.2.3. Comparative experiments
To comprehensively evaluate the performance of the proposed HHOPE algorithm for edge-cloud collaborative task offloading, three advanced algorithms were selected for comparison:
(1) Adaptive Q-learning-based hyper-heuristic task offloading (AQL-HHTO) [23]: a hyper-heuristic approach based on Q-learning to dynamically learn task scheduling strategies, achieving high efficiency and adaptability in task allocation.
(2) Intelligent differential evolutionary task offloading (IDE-TO) [24]: a differential evolution-based optimization algorithm that combines global search with local optimization. It offers rapid convergence and effectively balances exploration and exploitation in multi-objective optimization scenarios.
(3) Pareto-based non-dominated sorting genetic algorithm II for task scheduling (PNSGA2-TO) [18]: a multi-objective optimization algorithm that integrates fast non-dominated sorting with crowding distance, enabling the generation of diverse solutions for multi-objective task scheduling problems.
Fig. 8 compares the fitness convergence results of the HHOPE against these established algorithms. Thanks to its task pre-allocation module, the HHOPE begins with a significantly lower initial fitness value, thereby minimizing unnecessary searches for poor solutions. Within the first 50 iterations, HHOPE's fitness value rapidly declined to 0.33874, achieving convergence speed improvements of 39.6%, 36.5%, and 19.2% relative to AQL-HHTO, IDE-TO, and PNSGA2-TO, respectively. By the end of 300 iterations, HHOPE's final fitness value exhibited enhancements of 64.26%, 51.32%, and 41.22% over AQL-HHTO, IDE-TO, and PNSGA2-TO, respectively. Furthermore, the HHOPE demonstrated superior stability during the middle and latter iterations, with minimal fluctuations in fitness values, highlighting its effective balance between global search and local optimization.
To validate the task offloading performance of the adaptive hybrid edge-cloud offloading (AH-ECO) mechanism, which integrates single-edge-cloud collaboration with multi-edge-cloud collaboration, three typical edge-cloud collaborative offloading strategies were selected as comparative benchmarks:
(1) Round-robin edge-cloud offloading (RR-ECO) [25]: The tasks are allocated to edge nodes and cloud nodes in a predetermined sequential order to achieve load balancing through round-robin scheduling.
(2) Greedy edge-cloud offloading (G-ECO) [26]: The tasks are offloaded to locations with the minimum estimated latency. Serially dependent tasks wait for their predecessor tasks to complete before making greedy decisions, whereas hybrid dependent tasks undergo stepwise greedy offloading according to topological order.
(3) Minimum-queue-length based edge-cloud offloading (MQL-ECO) [27]: The tasks are essentially assigned to locations with the currently shortest queue length. As workloads change, tasks naturally flow toward idle nodes. The scheduler ensures dependency order compliance for the dependent subtasks.
To thoroughly evaluate the optimization capabilities of the AH-ECO under varying task scales and dependency complexities, six distinct task sets were designed, encompassing a range of practical scenarios from medium to ultra-large scale, as well as from basic pipeline-type to intricate dependency-type configurations. The detailed configurations of each group are provided in Table 3.
The experimental results are displayed in Fig. 9. Taking the ultra-large-scale complex dependency task set as a reference, AH-ECO demonstrated significant improvements over the baseline methods RR-ECO, G-ECO, and MQL-ECO. Specifically, AH-ECO reduced total task completion latency by 29.72%, 26.73%, and 25.62%, respectively; it also lowered execution energy consumption by 7.6%, 12.26%, and 3.81%, respectively, while achieving a notable 10.63% reduction in security risk metrics. This performance advantage can be mainly attributed to AH-ECO's ability to dynamically assess the resource status of each node and the task dependency structures, allowing for adaptive selection of collaborative modes for various task chain types. Such a mechanism effectively mitigates local congestion and resource waste typical of traditional round-robin and greedy strategies, particularly in complex dependency scenarios, thus significantly decreasing latency and energy consumption overhead caused by frequent inter-node communications.
As the scale of tasks and complexity of dependencies increase, the performance benefits of AH-ECO become even more pronounced, as illustrated in Figs. 9(g) and (h). When the task scale transitions from medium to ultra-large, the comprehensive objective value of AH-ECO rises by only 57.86%, whereas the alternative task collaboration strategies reveal clear performance bottlenecks and divergence. Additionally, as dependency complexity shifts from basic pipeline-type to complex hybrid-type, the performance degradation of AH-ECO is considerably less than that of its counterparts.
In conclusion, AH-ECO delivers lower latency and reduced energy consumption while significantly mitigating operational risks. This advantage is most pronounced in intricate, heterogeneous, and large-scale edge-cloud environments with strong task interdependencies.
4.3. Case study of digital twin machine tool
To assess the effectiveness of AH-ECO in practical applications, this study utilized the digital twin system of a gantry five-axis machining center (DT-GFMC) from Beijing Beiyi Machine Tool Co., Ltd. (China) as the experimental platform. The experiment targeted two primary functions of the DT-GFMC during machining tasks, health status assessment and machining quality optimization, and investigated the collaborative processing efficiency of all related computational tasks executed between the edge and the cloud. These functions correspond to multiple task chains comprising components such as machining parameter processing, sensing signal pre-processing, feature analysis, machine tool rigidity evaluation, machine tool status analysis, machining quality prediction, and machining parameter decision-making. The coupling of these task chains forms a typical topological structure combining serial, parallel, and hybrid task types, as illustrated in Fig. 10. Accordingly, the experimental protocol divided the tasks of the DT-GFMC into four stages: data preprocessing, status analysis, health assessment, and decision optimization, with each stage characterized by an incrementally higher computational load and task complexity.
During the testing process, all tasks were initiated according to the actual production rhythm, ensuring that they accurately reflected the concurrent load characteristics inherent in machine tool operation. Fig. 11 presents a comparative analysis of offloading performance metrics between AH-ECO and MQL-ECO across the four task stages of the DT-GFMC. The results indicate that the AH-ECO mechanism offers distinct advantages in several respects:
(1) In the data preprocessing stage, tasks primarily involve processing various sensor signals, process data, and CNC system data, all executed at the edge side. While AH-ECO and MQL-ECO exhibit comparable offloading performance for these tasks, AH-ECO gains an edge through real-time awareness of edge node loads and effective allocation of parallel processing tasks. This enables AH-ECO to complete the preprocessing and feature analysis of 20 or more multi-source signals generated concurrently by the DT-GFMC within each cycle, all within a 10 s timeframe.
(2) In the status analysis stage, driven by the increased scale of tasks and the complexity of dependencies, the AH-ECO performs a thorough analysis of task dependency structures and prioritizes the allocation of closely coupled subtasks to the same node or to low-latency links. This strategy effectively minimizes latency and energy consumption associated with inter-node communications. Empirical test results reveal that the AH-ECO could efficiently coordinate edge and cloud computing resources to complete multiple parallel status analysis tasks—including spindle rigidity assessment, monitoring of moving component temperature rise, and evaluation of tool wear in the DT-GFMC—within 20 s. This results in a remarkable 53.02% reduction in latency and a 28.49% optimization in energy consumption compared to the MQL-ECO-based approach.
(3) The health assessment stage presents an even greater task scale, with certain computational subtasks managed by advanced machine learning models, leading to a substantial increase in computational instruction volumes and resource demands. The AH-ECO-based methodology adeptly identifies high-load nodes and strategically allocates high-computational-instruction tasks to nodes rich in resources. Despite the concurrent execution of multiple complex models, the AH-ECO successfully maintains latency within 50 s while achieving a 29.97% reduction in energy consumption and a 22.94% decrease in security risk.
(4) In the decision optimization stage, where complex process reasoning and optimization decision tasks abound, the AH-ECO-based approach flexibly aligns edge and cloud capabilities with the heterogeneous characteristics of process chains and business priorities, achieving multi-task parallelism and scalable resource utilization. When the task scale doubles, the AH-ECO-based methodology experiences a latency increase of only 24.37%, in stark contrast to MQL-ECO's 38.69% rise. The testing demonstrates that, during both batch and continuous machining tasks, the AH-ECO facilitates a complete closed-loop machining optimization process for the DT-GFMC—from data processing and machine tool status analysis to quality reasoning and parameter decision—within approximately 70 s.
Overall, the AH-ECO-based approach effectively addresses prevalent challenges in the practical application of digital twin machine tools, including resource congestion, task dependency bottlenecks, and security risks. This is achieved through mechanisms such as dynamic resource perception, adaptive task scheduling, optimization of dependency structures, and multi-objective global balancing. These mechanisms keep operational performance stable while the digital twin machine tool simultaneously executes functions including status analysis, health assessment, fault diagnosis, machining optimization, energy consumption optimization, and reliability evaluation, ultimately enhancing the data processing efficiency, computational energy efficiency, and operational security of the system.
5. Concluding remarks and future works
To address the challenges posed by large-scale and complex computational tasks with interdependent data relationships in intelligent machine tools, this research introduces an innovative adaptive hybrid edge-cloud collaborative offloading (AH-ECO) method. The method continuously monitors and analyzes task characteristics, resource availability, and environmental conditions, enabling adaptive switching between single-edge-cloud and multi-edge-cloud collaboration modes. This ensures synchronized optimization of computational efficiency, energy consumption, and security for concurrent and complex computational tasks. Compared to state-of-the-art approaches, the AH-ECO mechanism achieves a 27.35% reduction in task processing time and a 7.89% improvement in energy efficiency. The validation through case studies on intelligent machine tools further confirms the mechanism's effectiveness in real manufacturing scenarios that involve multi-source concurrent data processing, complex dependency task collaboration, high-computational machine learning workloads, and continuous batch task deployment, resulting in a 53.02% reduction in latency and a 29.97% optimization in energy, along with significant enhancements in security metrics.
Despite the progress made in task offloading for intelligent machine tools, the authors suggest that future research should focus on the following aspects: ① offloading strategies for emerging multi-modal tasks (e.g., image, sound, and vibration analysis) as machine perception capabilities continue to evolve; and ② online decision-making methodologies based on deep reinforcement learning to improve real-time performance, moving beyond the existing offline optimization techniques.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
This research was funded by the National Natural Science Foundation of China (U23B20104), the Innovation Consortium Project of Machine Tools and Moulds in Dongguan (20251201500012), the Jilin Province Science and Technology Development Plan (YDZJ202401314ZYTS), and the Integrated Project of the National Natural Science Foundation of China (U24B6007).
Nomenclature
U Set of terminal sensor nodes, $U=\left\{ {{u}_{1}},{{u}_{2}},...,{{u}_{k}} \right\}$
ES Set of edge nodes, $\text{ES}=\left\{ \text{e}{{\text{s}}_{1}},\text{e}{{\text{s}}_{2}},...,\text{e}{{\text{s}}_{N}} \right\}$
CS Cloud server (a single server)
$\text{C}{{\text{T}}_{j}}=\left\{ \text{c}{{\text{t}}_{j,1}},\text{c}{{\text{t}}_{j,2}},...,\text{c}{{\text{t}}_{j,{{n}_{j}}}} \right\}$ Computational task $C{{T}_{j}}$, which can be decomposed into a set of ${{n}_{j}}$ subtasks
$\text{c}{{\text{t}}_{j,i}}$ The ith sub-computational task of computational task $C{{T}_{j}}$
$\mathscr{P}\left(\mathrm{ct}_{j, i}\right)$ Attribute set of sub-computational task $\text{c}{{\text{t}}_{j,i}}$, including data size $D\left( \text{c}{{\text{t}}_{j,i}} \right)$, required computational instructions $\text{ctn}\left( \text{c}{{\text{t}}_{j,i}} \right)$, and task type $\text{Type}\left( \text{c}{{\text{t}}_{j,i}} \right)$
${{T}_{\max }}\left( \text{C}{{\text{T}}_{j}} \right)$ Maximum tolerable delay for computational task $C{{T}_{j}}$
${{E}_{\max }}\left( \text{C}{{\text{T}}_{j}} \right)$ Maximum tolerable energy consumption for computational task $C{{T}_{j}}$
$\text{BW}_{\text{comm}}^{\text{u}\to \text{es}}$ Communication bandwidth for uploading data from terminal nodes to edge nodes
$\text{BW}_{\text{comm}}^{\text{es}\to \text{es}}$ Communication bandwidth for data interaction between edge nodes
$\text{BW}_{\text{comm}}^{\text{es}\to \text{cs}}$ Communication bandwidth for uploading data from edge nodes to cloud server
$\text{BW}_{\text{comm}}^{\text{cs}\to \text{es}}$ Communication bandwidth for distributing data from cloud server to edge nodes
$\text{Com}{{\text{p}}_{\text{es}}}$ Computational task processing rate of edge nodes
$\text{Com}{{\text{p}}_{\text{cs}}}$ Computational task processing rate of cloud server
loc Task execution location index, $\text{loc}\in \left\{ \text{e}{{\text{s}}_{i}},\text{cs} \right\}$
$\mathrm{CT}_{j}^{\text {edge }}$ Result of computational task $C{{T}_{j}}$ after edge computing phase processing
$\text{CT}_{j}^{\text{cloud}}$ Result of computational task $C{{T}_{j}}$ after cloud computing phase processing
$\text{CT}_{j}^{\text{final}}$ Result of computational task $C{{T}_{j}}$ after final computing phase processing
$\mathbb{R}\left( \text{c}{{\text{t}}_{j,i}} \right)$ Task dependency indicator variable, determining whether sub-computational task $\text{c}{{\text{t}}_{j,i}}$ has predecessor data dependency tasks
$\text{Dep}\left( \text{c}{{\text{t}}_{j,i}} \right)$ Set of predecessor data dependency tasks for sub-computational task $\text{c}{{\text{t}}_{j,i}}$
$T_{\text {comm }}^{u_{p} \rightarrow \operatorname{es}_{q}}\left(\mathrm{ct}_{j, i}\right) $ Delay for data of sub-computational task $\text{c}{{\text{t}}_{j,i}}$ uploading from terminal sensor node ${{u}_{p}}$ to edge node $\text{e}{{\text{s}}_{q}}$
$T_{\mathrm{comm}}^{\mathrm{es}_{q} \rightarrow \mathrm{cs}}\left(\mathrm{ct}_{j, i}\right) $ Delay for data of sub-computational task $\text{c}{{\text{t}}_{j,i}}$ uploading from edge node $\text{e}{{\text{s}}_{q}}$ to cloud server cs
$T_{\mathrm{comm}}^{\mathrm{cs} \rightarrow \mathrm{es}_{q}}\left(\mathrm{ct}_{j, i}\right) $ Delay for data of sub-computational task $\text{c}{{\text{t}}_{j,i}}$ distributing from cloud server cs to edge node $\text{e}{{\text{s}}_{q}}$
$T_{\mathrm{que}}^{\mathrm{loc}}$ Current task queue time at execution location loc
$T_{\operatorname{comp}}^{\mathrm{es}_{q}}\left(\mathrm{ct}_{j, i}\right) $ Computation delay of sub-computational task $\text{c}{{\text{t}}_{j,i}}$ on edge node $\text{e}{{\text{s}}_{q}}$
$T_{\text {comp }}^{\mathrm{cs}}\left(\mathrm{ct}_{j, i}\right) $ Computation delay of sub-computational task $\text{c}{{\text{t}}_{j,i}}$ on cloud server cs
$T_{\text {edge }}\left(\mathrm{ct}_{j, i}\right) $ Total delay of sub-computational task $\text{c}{{\text{t}}_{j,i}}$ in edge computing phase
$T_{\text {cloud }}\left(\mathrm{ct}_{j, i}\right) $ Total delay of sub-computational task $\text{c}{{\text{t}}_{j,i}}$ in cloud computing phase
${{T}_{\text{total}}}\left( \text{C}{{\text{T}}_{j}} \right)$ Total delay for executing computational task $C{{T}_{j}}$
$\text { Power }_{\text {comm }}^{u} $ Transmission power of terminal sensor node u for data transmission
$\text{Power}_{\text{comm}}^{\text{es}}$ Transmission power of edge node es for data transmission
$\text{Power}_{\text{comm}}^{\text{cs}}$ Transmission power of cloud server cs for data transmission
$\text{Power}_{\text{comp}}^{\text{es}}$ Computation power of edge node es
$\text { Power }_{\text {comp }}^{\text {cs }}$ Computation power of cloud server cs
$E_{\text {comm }}^{u \rightarrow \mathrm{es}}\left(\mathrm{ct}_{j, i}\right) $ Energy consumption for data of $\text{c}{{\text{t}}_{j,i}}$ uploading from terminal node to edge node
$E_{\text{comm}}^{\text{es}\to \text{cs}}\left( \text{c}{{\text{t}}_{j,i}} \right)$ Energy consumption for data of $\text{c}{{\text{t}}_{j,i}}$ uploading from edge node to cloud server
$E_{\text{comm}}^{\text{cs}\to \text{es}}\left( \text{c}{{\text{t}}_{j,i}} \right)$ Energy consumption for result data of $\text{c}{{\text{t}}_{j,i}}$ distributing from cloud server to edge node
${{E}_{\text{edge}}}\left( \text{c}{{\text{t}}_{j,i}} \right)$ Total energy consumption of $\text{c}{{\text{t}}_{j,i}}$ in edge computing phase
${{E}_{\text{cloud}}}\left( \text{c}{{\text{t}}_{j,i}} \right)$ Total energy consumption of $\text{c}{{\text{t}}_{j,i}}$ in cloud computing phase
${{E}_{\text{total}}}\left( \text{C}{{\text{T}}_{j}} \right)$ Total energy consumption for executing computational task $C{{T}_{j}}$
$\lambda_{\mathrm{es}} $ Basic failure rate of edge node es (annual average failure rate in device's initial state)
$\text{Loa}{{\text{d}}_{\text{e}{{\text{s}}_{q}}}}$ Current load of edge node $\text{e}{{\text{s}}_{q}}$ (total instruction count of allocated computational tasks)
$\text{Load}_{\text{e}{{\text{s}}_{q}}}^{\text{max}}$ Maximum load capacity of edge node $\text{e}{{\text{s}}_{q}}$
$\mathrm{MTTF}_{\mathrm{es}}$ Mean time to failure of edge nodes (provided by the device manufacturer)
${{A}_{\text{cs}}}$ Number of cloud data access times
$\text{Se}{{\text{c}}_{\text{cs}}}$ Security protection level of cloud (cloud security measures, higher value indicates lower data leakage risk)
${{N}_{\text{cs}}}$ Number of cloud storage instances; a larger number of instances disperses the data more widely and thus reduces leakage risk
$\text{Risk}_{\text{fail}}^{\text{e}{{\text{s}}_{q}}}$ Risk of edge node $\text{e}{{\text{s}}_{q}}$ experiencing downtime
$\text{LP}_{\text{leak}}^{\text{cs}}$ Probability of data leakage after offloading task to cloud
${{\beta }_{\text{edge}}}/{{\beta }_{\text{cloud}}}$ Task security optimization weights (used to control the impact of edge load risk and cloud data leakage risk, respectively)
$\text{Ris}{{\text{k}}_{\text{security}}}\left( \text{C}{{\text{T}}_{j}} \right)$ Comprehensive security risk of computational task $C{{T}_{j}}$
$\alpha / \beta / \gamma$ Weight coefficients for multi-objective optimization, used to adjust the relative importance of optimization objectives
$\mathscr{Y}_{j, i, q}$ Edge node assignment binary variable
$\mathscr{Z}_{j, i, q}$ Cloud offloading binary variable
$\text{B}{{\text{W}}_{\text{es}}}$ Bandwidth of edge node
${{S}_{\text{collab}}}\left( \text{C}{{\text{T}}_{j}} \right)$ Collaboration evaluation function for task $C{{T}_{j}}$
$\text{CT}_{j}^{\text{edge}}$ Computational tasks processed by the edge computing layer
${{E}_{\text{comm}}}\left( \text{C}{{\text{T}}_{j}} \right)$ The communication energy consumption of task $C{{T}_{j}}$
$T_{\text {merge }}\left(\mathrm{CT}_{j}\right) $ The time required to merge all subtasks within task $C{{T}_{j}}$
$E_{\text {merge }}\left(\mathrm{CT}_{j}\right) $ The merged energy consumption of all subtasks within task $C{{T}_{j}}$
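For reference, the symbols above can be read together as a single weighted objective. The following reconstruction is a hedged sketch consistent with these definitions; the delay and energy normalization by the tolerable maxima and the exact composition of the risk term are our assumptions, not necessarily the paper's exact formulation.

% Hedged reconstruction (assumption): weighted multi-objective over the
% nomenclature symbols, with alpha/beta/gamma as the optimization weights.
\[
\min \; J\left( \text{C}{{\text{T}}_{j}} \right)
 = \alpha \,\frac{{{T}_{\text{total}}}\left( \text{C}{{\text{T}}_{j}} \right)}{{{T}_{\max }}\left( \text{C}{{\text{T}}_{j}} \right)}
 + \beta \,\frac{{{E}_{\text{total}}}\left( \text{C}{{\text{T}}_{j}} \right)}{{{E}_{\max }}\left( \text{C}{{\text{T}}_{j}} \right)}
 + \gamma \,\text{Ris}{{\text{k}}_{\text{security}}}\left( \text{C}{{\text{T}}_{j}} \right),
\]
subject to ${{T}_{\text{total}}}\left( \text{C}{{\text{T}}_{j}} \right)\le {{T}_{\max }}\left( \text{C}{{\text{T}}_{j}} \right)$ and ${{E}_{\text{total}}}\left( \text{C}{{\text{T}}_{j}} \right)\le {{E}_{\max }}\left( \text{C}{{\text{T}}_{j}} \right)$, with the security term composed (again, as an assumption) from the edge-failure and cloud-leakage risks weighted by ${{\beta }_{\text{edge}}}$ and ${{\beta }_{\text{cloud}}}$:
\[
\text{Ris}{{\text{k}}_{\text{security}}}\left( \text{C}{{\text{T}}_{j}} \right)
 = {{\beta }_{\text{edge}}}\sum\limits_{q}{\text{Risk}_{\text{fail}}^{\text{e}{{\text{s}}_{q}}}}
 + {{\beta }_{\text{cloud}}}\,\text{LP}_{\text{leak}}^{\text{cs}} .
\]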
References
[1] J.W. Leng, Y.W. Zhong, Z.S. Lin, K.L. Xu, D. Mourtzis, X.L. Zhou, et al. Towards resilience in Industry 5.0: a decentralized autonomous manufacturing paradigm. J Manuf Syst, 71 (2023), pp. 95-114.
[2] X. Tong, Q. Liu, S.W. Pi, Y. Xiao. Real-time machining data application and service based on IMT digital twin. J Intell Manuf, 31 (5) (2020), pp. 1113-1132.
J.W. Leng, Z.Y. Chen, W.N. Sha, S.D. Ye, Q. Liu, X. Chen. Cloud-edge orchestration-based bi-level autonomous process control for mass individualization of rapid printed circuit boards prototyping services. J Manuf Syst, 63 (2022), pp. 143-161.
[5] J. Zhang, C.Y. Deng, P. Zheng, X. Xu, Z.T. Ma. Development of an edge computing-based cyber-physical machine tool. Robot Comput-Integr Manuf, 67 (2021), Article 102042.
[6] K.Y. Zhang, W.L. Xiao, X.M. Fan, G. Zhao. CAM as a service with dynamic toolpath generation ability for process optimization in STEP-NC compliant CNC machining. J Manuf Syst, 80 (2025), pp. 294-308.
[7] R.Q. Wang, Q.H. Song, Y.Z. Peng, Z.Q. Liu, H.F. Ma, Z.J. Liu, et al. Milling surface roughness monitoring using real-time tool wear data. Int J Mech Sci, 285 (2025), Article 109821.
[8] B. Yang, Z. Pang, S.L. Wang, F. Mo, Y.F. Gao. A coupling optimization method of production scheduling and computation offloading for intelligent workshops with cloud-edge-terminal architecture. J Manuf Syst, 65 (2022), pp. 421-438.
[9] W.N. Shu, S.L. Nie, W.C. Jian, X. Ge. An improved and efficient computational offloading method based on ADMM strategy in cloud-edge collaborative computing environment for resilient Industry 5.0. IEEE Trans Consum Electron, 70 (1) (2024), pp. 1392-1402.
[10] L.N. Liu, B. Sun, Y. Wu, D.H.K. Tsang. Latency optimization for computation offloading with hybrid NOMA-OMA transmission. IEEE Internet Things J, 8 (8) (2021), pp. 6677-6691.
[11] S.Y. Ma, S.D. Song, L.Y. Yang, J.M. Zhao, F. Yang, L.B. Zhai. Dependent tasks offloading based on particle swarm optimization algorithm in multi-access edge computing. Appl Soft Comput, 112 (2021), Article 107790.
C. Zeng, X.W. Wang, R.F. Zeng, Y. Li, J.Z. Shi, M. Huang. Joint optimization of multi-dimensional resource allocation and task offloading for QoE enhancement in Cloud-Edge-End collaboration. Future Gener Comput Syst, 155 (2024), pp. 121-131.
[14] J.H. Zhang, J.C. Wang, Z.Y. Yuan, W.Q. Zhang, L.M. Liu. Offloading demand prediction-driven latency-aware resource reservation in edge networks. IEEE Internet Things J, 10 (15) (2023), pp. 13826-13836.
[15] T.X. Ji, C.Q. Luo, L.X. Yu, Q.L. Wang, S.H. Chen, A. Thapa, et al. Energy-efficient computation offloading in mobile edge computing systems with uncertainties. IEEE Trans Wirel Commun, 21 (8) (2022), pp. 5717-5729.
[16] D.W. Wei, N. Xi, X.D. Ma, M. Shojafar, S. Kumari, J.F. Ma. Personalized privacy-aware task offloading for edge-cloud-assisted industrial Internet of Things in automated manufacturing. IEEE Trans Ind Inform, 18 (11) (2022), pp. 7935-7945.
[17] Y.P. Wang, P. Zhang, B. Wang, Z.F. Zhang, Y.L. Xu, B. Lv. A hybrid PSO and GA algorithm with rescheduling for task offloading in device-edge-cloud collaborative computing. Cluster Comput, 28 (2) (2025), p. 101.
[18] I. Mokni, S. Yassa. A multi-objective approach for optimizing IoT applications offloading in fog-cloud environments with NSGA-II. J Supercomput, 80 (19) (2024), pp. 27034-27072.
[19] Z.Y. Chai, Y.J. Zhao, Y.L. Li. Multitask computation offloading based on evolutionary multiobjective optimization in industrial Internet of Things. IEEE Internet Things J, 11 (9) (2024), pp. 15894-15908.
[20] Y.P. Liu, Y. Altintas. In-process identification of machine tool dynamics. CIRP J Manuf Sci Technol, 32 (2021), pp. 322-337.
[21] J.W. Leng, W.N. Sha, Z.S. Lin, J.B. Jing, Q. Liu, X. Chen. Blockchained smart contract pyramid-driven multi-agent autonomous process control for resilient individualised manufacturing towards Industry 5.0. Int J Prod Res, 61 (13) (2023), pp. 4302-4321.
[22] A. Ben Sada, A. Khelloufi, A. Naouri, H.S. Ning, S. Dhelim. Hybrid metaheuristics for selective inference task offloading under time and energy constraints for real-time IoT sensing systems. Cluster Comput, 27 (9) (2024), pp. 12965-12981.
[23] S. Yeganeh, A. Babazadeh Sangar, S. Azizi. A novel Q-learning-based hybrid algorithm for the optimal offloading and scheduling in mobile edge computing environments. J Netw Comput Appl, 214 (2023), Article 103617.
[24] Y.J. Laili, X.H. Wang, L. Zhang, L. Ren. DSAC-configured differential evolution for cloud-edge-device collaborative task scheduling. IEEE Trans Ind Inform, 20 (2) (2024), pp. 1753-1763.
[25] K. Aljobory, M.A. Yazici. Edge server selection with round-robin-based task processing in multiserver mobile edge computing. Sensors, 25 (11) (2025), p. 3443.
[26] X.M. Li, J.F. Wan, H.N. Dai, M. Imran, M. Xia, A. Celesti. A hybrid computing solution and resource scheduling strategy for edge computing in smart manufacturing. IEEE Trans Ind Inform, 15 (7) (2019), pp. 4225-4234.
[27] R.X. Li, C.S. Lim, M.E. Rana, X.C. Zhou. A trade-off task-offloading scheme in multi-user multi-task mobile edge computing. IEEE Access, 10 (2022), pp. 129884-129898.