Resource Type

Journal Article 387

Year

2023 39

2022 34

2021 27

2020 30

2019 23

2018 29

2017 25

2016 13

2015 6

2014 11

2013 6

2012 16

2011 10

2010 7

2009 10

2008 8

2007 16

2006 9

2005 12

2004 20

Keywords

Artificial intelligence 5

Computer vision 5

Blockchain 4

Cloud computing 4

Cloud storage 4

Machine learning 4

computer 4

Attribute-based encryption 3

Deep learning 3

computer simulation 3

3D printing 2

Access control 2

CFD 2

Computational methods 2

Data privacy 2

Fog computing 2

Heterogeneous computing 2

High-performance computing 2

IHNI-1 reactor 2


Software Architecture of Fogcloud Computing for Big Data in Ubiquitous Cyberspace

Jia Yan, Fang Binxing, Wang Xiang, Wang Yongheng, An Jingbin, Li Aiping, Zhou Bin

Strategic Study of CAE 2019, Volume 21, Issue 6,   Pages 114-119 doi: 10.15302/J-SSCAE-2019.10.001

Abstract:

Cyberspace has expanded from the traditional Internet to a ubiquitous cyberspace that interconnects humans, machines, things, services, and applications. The computing paradigm is also shifting from centralized computing in the cloud to combined computing across the front end, middle layer, and cloud. Traditional computing paradigms such as cloud computing and edge computing can therefore no longer satisfy the evolving computing needs of big data in ubiquitous cyberspace. This paper presents a computing architecture named Fogcloud Computing for big data in ubiquitous cyberspace. Collaborative computing by multiple knowledge actors in the fog, middle layer, and cloud is realized based on a collaborative computing language and models, thereby providing a solution for big data computing in ubiquitous cyberspace.

Keywords: fogcloud computing     ubiquitous cyberspace     big data     Internet of Things     cloud computing    

Edge Computing Technology: Development and Countermeasures

Hong Xuehai and Wang Yang

Strategic Study of CAE 2018, Volume 20, Issue 2,   Pages 20-26 doi: 10.15302/J-SSCAE-2018.02.004

Abstract:

Edge computing is an emerging technology that reduces transmission delay and bandwidth consumption by placing computing, storage, bandwidth, applications, and other resources at the edge of the network. Moreover, application developers and content providers can provide perceptible services based on real-time network information. Mobile terminals, Internet of Things devices, and other equipment provide the necessary front-end support for computation-sensitive applications, such as image recognition and network games, sharing the cloud workload with the processing capability of edge computing. This paper discusses the concept of edge computing, key problems that require solutions, main advances in edge computing, the influence of edge computing developments, and the opportunities and development countermeasures of edge computing.

Keywords: cloud computing     edge computing     fog computing     mobile edge computing     internet of things     front-end intelligence    

MEACC: an energy-efficient framework for smart devices using cloud computing systems Research Articles

Khalid Alsubhi, Zuhaib Imtiaz, Ayesha Raana, M. Usman Ashraf, Babur Hayat, usman.ashraf@skt.umt.edu.pk

Frontiers of Information Technology & Electronic Engineering 2020, Volume 21, Issue 6,   Pages 809-962 doi: 10.1631/FITEE.1900198

Abstract: Rapidly increasing capacities, decreasing costs, and improvements in computational power, storage, and communication technologies have led to the development of many applications that carry increasingly large amounts of traffic on the global networking infrastructure. Smart devices drive emerging technologies and play a vital role in their rapid evolution. They have become a primary 24/7 need in today’s information technology world and support a wide range of processing-intensive applications. Extensive use of many applications on smart devices increases the complexity of mobile software applications and consumes resources at a massive level, including smart-device battery power, processor, and RAM, hindering their normal operation. Appropriate resource utilization and energy efficiency are fundamental considerations for smart devices because their limited resources make it more difficult for users to complete their tasks. In this study we propose mobile energy augmentation using cloud computing (MEACC), a new framework to address the challenges of massive and inefficient resource utilization in smart devices. MEACC efficiently filters the applications to be executed on a smart device or offloaded to the cloud. Moreover, MEACC efficiently calculates the total execution cost on both the mobile and cloud sides, including the communication cost for any application to be offloaded. In addition, resources are monitored before making the decision to offload the application. MEACC is a promising model for load balancing and energy reduction in emerging mobile cloud computing environments.

Keywords: Offloading     Smart devices     Cloud computing     Mobile computing     Energy consumption
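The offload decision at the heart of such frameworks reduces to an energy-cost comparison between local execution and transmission to the cloud. The sketch below is illustrative only; the function and parameter names are assumptions, not MEACC's actual interface:

```python
def should_offload(cycles, data_bytes, f_local, bandwidth,
                   p_compute, p_transmit):
    """Offload when transmitting the task's input costs the device less
    energy than computing it locally. All parameters are illustrative:
    cycles (task CPU cycles), f_local (device cycles/s), bandwidth
    (bytes/s), p_compute / p_transmit (device power draw, watts)."""
    local_energy = (cycles / f_local) * p_compute        # joules to run locally
    comm_energy = (data_bytes / bandwidth) * p_transmit  # joules to upload
    return comm_energy < local_energy

# A compute-heavy task with a small input favours offloading...
print(should_offload(5e9, 1e5, 1e9, 1e6, 2.0, 1.0))   # True
# ...while a data-heavy, light task is better run on the device.
print(should_offload(1e8, 1e8, 1e9, 1e6, 2.0, 1.0))   # False
```

A real framework would also weigh latency and monitor current resource availability, as the abstract notes.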

Towards adaptable and tunable cloud-based map-matching strategy for GPS trajectories Article

Aftab Ahmed CHANDIO, Nikos TZIRITAS, Fan ZHANG, Ling YIN, Cheng-Zhong XU

Frontiers of Information Technology & Electronic Engineering 2016, Volume 17, Issue 12,   Pages 1305-1319 doi: 10.1631/FITEE.1600027

Abstract: Smart cities have given a significant impetus to managing traffic and using transport networks in an intelligent way. For this reason, intelligent transportation systems (ITSs) and location-based services (LBSs) have become an interesting research area in recent years. Due to the rapid increase of data volume within the transportation domain, the cloud environment is of paramount importance for storing, accessing, handling, and processing such huge amounts of data. A large part of the data within the transportation domain is produced in the form of Global Positioning System (GPS) data. Such data are usually infrequent and noisy, and achieving the quality of real-time transport applications based on GPS is a difficult task. The map-matching process, which is responsible for the accurate alignment of observed GPS positions onto a road network, plays a pivotal role in many ITS applications. Regarding accuracy, the performance of a map-matching strategy is based on the shortest path between two consecutive observed GPS positions. On the other hand, processing shortest path queries (SPQs) incurs high computational cost. Current map-matching techniques use a fixed number of parameters, i.e., the number of candidate points (NCP) and error circle radius (ECR), which may lead to uncertainty when identifying road segments and to either low-accuracy results or a large number of SPQs. Moreover, due to sampling error, GPS data with a high sampling period (i.e., less than 10 s) typically contain extraneous data, which also incur an extra number of SPQs. Due to the high computational cost incurred by SPQs, current map-matching strategies are not suitable for real-time processing. In this paper, we propose real-time map-matching (RT-MM), a fully adaptive cloud-based map-matching strategy that addresses the key challenge of SPQs in map matching for real-time GPS trajectories. The evaluation of our approach against state-of-the-art approaches is performed through simulations based on both synthetic and real-world datasets.

Keywords: Map-matching     GPS trajectories     Tuning-based     Cloud computing     Bulk synchronous parallel    
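The candidate-selection step that the NCP and ECR parameters control can be sketched as follows. The geometry is deliberately simplified: real map matching projects GPS positions onto road segments rather than comparing against isolated points.

```python
import math

def candidates(gps, road_points, ecr, ncp):
    """Select up to `ncp` candidate road points lying within the error
    circle radius `ecr` of an observed GPS position, nearest first.
    NCP and ECR follow the abstract; the point-based geometry is an
    illustrative simplification."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Keep only points inside the error circle, ordered by distance.
    inside = sorted((p for p in road_points if dist(gps, p) <= ecr),
                    key=lambda p: dist(gps, p))
    return inside[:ncp]

roads = [(0.0, 0.1), (0.0, 0.5), (1.0, 1.0), (0.2, 0.0)]
print(candidates((0.0, 0.0), roads, ecr=0.3, ncp=2))
# [(0.0, 0.1), (0.2, 0.0)]
```

Widening the ECR or raising the NCP admits more candidates, and each extra candidate pair triggers additional shortest-path queries, which is exactly the cost the paper's adaptive strategy tries to control.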

Flexible Resource Scheduling for Software-Defined Cloud Manufacturing with Edge Computing Article

Chen Yang, Fangyin Liao, Shulin Lan, Lihui Wang, Weiming Shen, George Q. Huang

Engineering 2023, Volume 22, Issue 3,   Pages 60-70 doi: 10.1016/j.eng.2021.08.022

Abstract:

This research focuses on the realization of rapid reconfiguration in a cloud manufacturing environment to enable flexible resource scheduling, fulfill the resource potential and respond to various changes. Therefore, this paper first proposes a new cloud and software-defined networking (SDN)-based manufacturing model named software-defined cloud manufacturing (SDCM), which transfers the control logic from automation hard resources to the software. This shift is of significance because the software can function as the “brain” of the manufacturing system and can be easily changed or updated to support fast system reconfiguration, operation, and evolution. Subsequently, edge computing is introduced to complement the cloud with computation and storage capabilities near the end things. Another key issue is to manage the critical network congestion caused by the transmission of a large amount of Internet of Things (IoT) data with different quality of service (QoS) values such as latency. Based on the virtualization and flexible networking ability of the SDCM, we formalize the time-sensitive data traffic control problem of a set of complex manufacturing tasks, considering subtask allocation and data routing path selection. To solve this optimization problem, an approach integrating the genetic algorithm (GA), Dijkstra’s shortest path algorithm, and a queuing algorithm is proposed. Results of experiments show that the proposed method can efficiently prevent network congestion and reduce the total communication latency in the SDCM.

Keywords: Cloud manufacturing     Edge computing     Software-defined networks     Industrial internet of things     Industry 4.0    
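The data-routing-path-selection step in the proposed approach relies on Dijkstra's shortest-path algorithm. A minimal sketch over a toy latency graph (the graph, node names, and weights below are assumptions, not from the paper):

```python
import heapq

def dijkstra(graph, src, dst):
    """Lowest-latency path in a graph given as
    {node: [(neighbor, latency), ...]}. The toy network below stands in
    for an SDN-controlled edge-cloud topology."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Walk predecessors back from the destination to recover the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

net = {"edge": [("sw1", 1.0), ("sw2", 4.0)],
       "sw1": [("cloud", 2.0)], "sw2": [("cloud", 1.0)]}
print(dijkstra(net, "edge", "cloud"))  # (['edge', 'sw1', 'cloud'], 3.0)
```

In the paper this routing step is combined with a genetic algorithm for subtask allocation and a queuing model for congestion, which the sketch omits.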

Calculating Method for Area Source Model of Particulates

Gu Qing, Yang Xinxing, Li Yunsheng

Strategic Study of CAE 2005, Volume 7, Issue 1,   Pages 41-44

Abstract:

The area source model of particulates was studied in depth, and the locations of the initial precipitation of particulate area sources were determined. Precipitation of particulates inside the area source is not considered: the particulate concentration inside the area source block equals the source intensity of pollutants per unit area, multiplied by the ground-surface reflection coefficient and divided by the ground-surface wind speed.

Particulate precipitation outside the area source is considered. Using the virtual point source back-set method, and referring to the tilted-plume model with partial reflection for a point source, the complete area source model of particulates is given. The edge concentration of the area source is processed by partial linear interpolation to avoid discontinuities in the calculated results. The center of the area source should be shifted horizontally by a small distance in the X and Y directions, eliminating the possibility that the calculation point coincides with the center of the area source.

Keywords: atmospheric environment     particulate     area source model     calculating method    
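The interior-concentration rule stated in the abstract reduces to a one-line formula; a sketch with assumed symbol names and units:

```python
def interior_concentration(q_area, alpha, u):
    """Concentration inside the area source block: per-unit-area source
    intensity q_area, ground-surface reflection coefficient alpha, and
    ground-surface wind speed u (symbol names are assumptions)."""
    return q_area * alpha / u

# e.g. 2.0 mg/(m^2*s) emitted, reflection coefficient 0.8, wind 4 m/s:
print(interior_concentration(2.0, 0.8, 4.0))  # 0.4 mg/m^3
```

Outside the source block, the abstract's virtual-point-source treatment applies instead; this formula covers only the interior case.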

Generation of noise-like pulses and soliton rains in a graphene mode-locked erbium-doped fiber ring laser Research

Weiyong Yang, Wei Liu, Xingshen Wei, Zixin Guo, Kangle Yang, Hao Huang, Longyun Qi, yangkangle@sgepri.sgcc.com.cn

Frontiers of Information Technology & Electronic Engineering 2021, Volume 22, Issue 3,   Pages 287-436 doi: 10.1631/FITEE.1900636

Abstract: The ubiquitous power Internet of Things (IoT) is a smart service system oriented to all aspects of the power system, and has the characteristics of universal interconnection, human-computer interaction, comprehensive state perception, efficient information processing, and other convenient and flexible applications. It has become a hot topic in the field of IoT. We summarize existing research work on the ubiquitous power IoT and on edge computing frameworks. Because it is difficult to meet the ubiquitous power IoT's requirements for edge computing in terms of real time, security, reliability, and business function adaptation using general-purpose framework software, we propose a trusted edge computing framework, named “EdgeKeeper,” adapted to the ubiquitous power IoT. Several key technologies, such as security and trust, quality-of-service guarantee, application management, and cloud-edge collaboration, are needed to meet the requirements of the framework. Experiments comprehensively evaluate EdgeKeeper in terms of function, performance, and security. Comparison results show that EdgeKeeper is the most suitable framework for the electricity IoT. Finally, future directions for research are proposed.

Keywords: Internet of Things     Ubiquitous power Internet of Things     Edge computing     Trusted computing     Cybersecurity

Technology trends in large-scale high-efficiency network computing Review

Jinshu SU, Baokang ZHAO, Yi DAI, Jijun CAO, Ziling WEI, Na ZHAO, Congxi SONG, Yujing LIU, Yusheng XIA

Frontiers of Information Technology & Electronic Engineering 2022, Volume 23, Issue 12,   Pages 1733-1746 doi: 10.1631/FITEE.2200217

Abstract: Network technology is the basis for large-scale high-efficiency network computing, such as supercomputing, cloud computing, big data processing, and artificial intelligence computing. The network technologies of network computing systems in different fields not only learn from each other but also feature targeted design and optimization. Considered comprehensively, three development trends, i.e., integration, differentiation, and optimization, are summarized in this paper for network technologies in different fields. Integration reflects that there are no clear boundaries for network technologies in different fields; differentiation reflects that there are unique solutions in different application fields or innovative solutions under new application requirements; and optimization reflects that there are optimizations for specific scenarios. This paper can help academic researchers consider what should be done in the future and industry personnel consider how to build efficient practical network systems.

Keywords: Supercomputing     Cloud computing     Network technology     Development trends    

Integrated and Intelligent Manufacturing: Perspectives and Enablers

Yubao Chen

Engineering 2017, Volume 3, Issue 5,   Pages 588-595 doi: 10.1016/J.ENG.2017.04.009

Abstract:

With ever-increasing market competition and advances in technology, more and more countries are making advanced manufacturing technology a top priority for economic growth. Germany announced the Industry 4.0 strategy in 2013. The US government launched the Advanced Manufacturing Partnership (AMP) in 2011 and the National Network for Manufacturing Innovation (NNMI) in 2014. Most recently, the Manufacturing USA initiative was officially rolled out to further “leverage existing resources… to nurture manufacturing innovation and accelerate commercialization” by fostering close collaboration between industry, academia, and government partners. In 2015, the Chinese government officially published a 10-year plan and roadmap toward manufacturing: Made in China 2025. In all these national initiatives, the core technology development and implementation lies in the area of advanced manufacturing systems. A new manufacturing paradigm is emerging, characterized by two unique features: integrated manufacturing and intelligent manufacturing. This trend is in line with the progress of the industrial revolutions, in which ever-higher efficiency in production systems has been continuously pursued. To this end, 10 major technologies can be identified for the new manufacturing paradigm. This paper describes the rationales and needs for integrated and intelligent manufacturing (i2M) systems. Related technologies from different fields are also described. In particular, key technological enablers, such as the Internet of Things and Services (IoTS), cyber-physical systems (CPSs), and cloud computing, are discussed. Challenges are addressed with applications based on commercially available platforms such as General Electric (GE)’s Predix and PTC’s ThingWorx.

Keywords: Integrated manufacturing     Intelligent manufacturing     Cloud computing     Cyber-physical system     Internet of Things     Industrial Internet     Predictive analytics     Manufacturing platform    

A Review of Recent Advances and Application for Spiking Neural Networks

Liu Hao, Chai Hongfeng, Sun Quan, Yun Xin, Li Xin

Strategic Study of CAE 2023, Volume 25, Issue 6,   Pages 61-79 doi: 10.15302/J-SSCAE-2023.06.011

Abstract:

Spiking neural network (SNN) is a new generation of artificial neural network. It is more biologically plausible and has attracted wide attention from scholars owing to its unique information coding schemes, rich spatiotemporal dynamics, and low-power event-driven operating mode. In recent years, SNN has been explored and applied in many fields such as medical health, industrial detection, and intelligent driving. First, the basic elements and learning algorithms of SNN are introduced, including classical spiking neuron models, spike-timing dependent plasticity (STDP), and common information coding methods. The advantages and disadvantages of the learning algorithms are also analyzed. Then, the mainstream software simulators and neuromorphic hardware of SNN are summarized. Subsequently, the research progress and application scenarios of SNN in computer vision, natural language processing, and reasoning and decision-making are introduced. In particular, SNN has shown strong potential in tasks such as object detection, action recognition, semantic cognition, and speech recognition, significantly improving computational performance. Future research and application of SNN should focus on strengthening key core technologies, promoting the application of technological achievements, and continuously optimizing the industrial ecology, so as to catch up with the international state of the art. Moreover, continuous research and breakthroughs in brain-inspired systems and control theories will promote the establishment of large-scale SNN models and are expected to broaden the application prospects of artificial intelligence.

Keywords: spiking neural network     brain-inspired computing     learning algorithm     neuromorphic chip     application scenario    
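The STDP rule mentioned in the abstract is commonly written as an exponentially decaying weight update driven by the gap between pre- and postsynaptic spike times. A pair-based sketch with illustrative constants (the actual time constants and amplitudes vary across models):

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change: potentiate when the presynaptic
    spike precedes the postsynaptic spike, depress otherwise; the
    magnitude decays exponentially with the spike-time gap."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)   # long-term potentiation
    return -a_minus * math.exp(dt / tau)      # long-term depression

print(stdp_dw(10.0, 15.0) > 0)  # True: pre fired first -> strengthen
print(stdp_dw(15.0, 10.0) < 0)  # True: post fired first -> weaken
```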

Driftor: mitigating cloud-based side-channel attacks by switching and migrating multi-executor virtual machines Regular Papers

Chao YANG, Yun-fei GUO, Hong-chao HU, Ya-wen WANG, Qing TONG, Ling-shu LI

Frontiers of Information Technology & Electronic Engineering 2019, Volume 20, Issue 5,   Pages 731-748 doi: 10.1631/FITEE.1800526

Abstract:

Co-residency of different tenants’ virtual machines (VMs) in the cloud provides a good chance for side-channel attacks, which results in information leakage. However, most current defenses suffer from generality or compatibility problems and thus fail at immediate real-world deployment. VM migration, an inherent mechanism of cloud systems, offers a promising countermeasure, as it limits co-residency by moving VMs between servers. Therefore, we first set up a unified practical adversary model, where the attacker focuses on effective side channels. Then we propose Driftor, a new cloud system that contains VMs of a multi-executor structure where only one executor is active to provide service through a proxy, thus reducing possible information leakage. The active state is periodically switched between executors to simulate the defensive effect of VM migration. To enhance the defense, real VM migration is enabled at the same time. Instead of solving the migration satisfiability problem with intractable CIRCUIT-SAT, a greedy-like heuristic algorithm is proposed to search for a viable solution by gradually expanding an initial has-to-migrate set of VMs. Experimental results show that Driftor can not only defend against practical fast side-channel attacks, but also bring about reasonable impacts on real-world cloud applications.

Keywords: Cloud computing     Side-channel attack     Information leakage     Multi-executor structure     Virtual machine switch     Virtual machine migration    
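The greedy placement underlying such a heuristic can be sketched as follows. This is a simplified stand-in for Driftor's algorithm, which additionally grows the has-to-migrate set when no feasible placement exists; all names and capacities are illustrative:

```python
def greedy_migrate(vms, capacity):
    """Greedily place each migrating VM (name, load) on the server with
    the most spare capacity. `capacity` maps server name -> remaining
    capacity; returns a placement or None when infeasible."""
    placement = {}
    for name, load in sorted(vms, key=lambda v: -v[1]):  # largest VMs first
        target = max(capacity, key=capacity.get)         # most spare room
        if capacity[target] < load:
            return None  # infeasible under current capacities
        capacity[target] -= load
        placement[name] = target
    return placement

print(greedy_migrate([("vm1", 3), ("vm2", 2)], {"s1": 4, "s2": 3}))
# {'vm1': 's1', 'vm2': 's2'}
```

A security-aware version would also forbid placing a VM next to its suspected co-resident attacker, which is the constraint that makes the full problem hard.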

High End Computing in China and the Exploration of “Sunway” Computers

Chen Zuoning

Strategic Study of CAE 2004, Volume 6, Issue 9,   Pages 23-28

Abstract:

At the beginning of the 21st century, driven by application requirements, the research and development of HEC is experiencing a new upsurge both in China and abroad. This paper analyses the development status and trends of international HEC; describes the overall development and application status of HEC in China; introduces the major technical features and application achievements of the Sunway series of high-performance computers; discusses the problems faced by today's HEC technologies; and finally makes some suggestions on how to advance the HEC industry in China.

Keywords: high-end computing     grid computing     high performance service     high productivity computing     total cost of ownership     Sunway series computers    

Cost-effective resource segmentation in hierarchical mobile edge clouds Special Feature on Future Network-Research Article

Ming-shuang JIN, Shuai GAO, Hong-bin LUO, Hong-ke ZHANG

Frontiers of Information Technology & Electronic Engineering 2019, Volume 20, Issue 9,   Pages 1209-1220 doi: 10.1631/FITEE.1800203

Abstract: The fifth-generation (5G) network cloudification enables third parties to deploy their applications (e.g., edge caching and edge computing) at the network edge. Many previous works have focused on specific service strategies (e.g., cache placement strategy and vCPU provision strategy) for edge applications from the perspective of a certain third party by maximizing its benefit. However, there is no literature that focuses on how to efficiently allocate resources from the perspective of a mobile network operator, taking the different deployment requirements of all third parties into consideration. In this paper, we address the problem by formulating an optimization problem, which minimizes the total deployment cost of all third parties. To capture the deployment requirements of the third parties, the applications that they want to deploy are classified into two types, namely, computation-intensive ones and storage-intensive ones, whose requirements are considered as input parameters or constraints in the optimization. Due to the NP-hardness and non-convexity of the formulated problem, we have designed an elitist genetic algorithm that converges to the global optimum to solve it. Extensive simulations have been conducted to illustrate the feasibility and effectiveness of the proposed algorithm.

Keywords: Edge clouds     Edge computing     Edge caching     Resource segmentation     Virtual machine (VM) allocation    
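An elitist genetic algorithm of the kind described keeps the best individuals across generations, so the incumbent solution never regresses. A toy sketch (the binary encoding, operators, and cost function are illustrative, not the paper's exact formulation):

```python
import random

def elitist_ga(cost, n_genes, pop_size=30, gens=60, elite=2, seed=1):
    """Minimal elitist GA over binary genomes: the best `elite`
    individuals survive each generation unchanged, so the best cost
    found so far can only improve."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                      # ascending: best first
        nxt = pop[:elite]                       # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # select fit parents
            cut = rng.randrange(1, n_genes)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                     # point mutation
                i = rng.randrange(n_genes)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)

# Toy deployment cost: cheapest when every application sits at the edge
# (all genes 1), so the cost is the number of zeros.
best = elitist_ga(lambda g: g.count(0), n_genes=12)
print(best.count(0))
```

The real objective in the paper encodes the deployment cost of computation- and storage-intensive applications under capacity constraints; only the elitist skeleton carries over.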

Resource scheduling techniques in cloud from a view of coordination: a holistic survey Review

Yuzhao WANG, Junqing YU, Zhibin YU, yuzhao_w@hust.edu.cn, yjqing@hust.edu.cn, zb.yu@siat.ac.cn

Frontiers of Information Technology & Electronic Engineering 2023, Volume 24, Issue 1,   Pages 1-40 doi: 10.1631/FITEE.2100298

Abstract: Nowadays, the management of resource contention in shared clouds remains an open problem. The evolution and deployment of new application paradigms (e.g., deep learning training and microservices) and custom hardware (e.g., graphics processing units (GPUs) and tensor processing units (TPUs)) have posed new challenges in resource management system design. Current solutions tend to trade cluster efficiency for guaranteed application performance, e.g., through resource over-allocation, leaving many resources underutilized. Overcoming this dilemma is not easy, because different components across the software stack are involved. Nevertheless, massive efforts have been devoted to seeking effective performance isolation and highly efficient resource scheduling. The goal of this paper is to systematically cover the related aspects, to deliver the techniques from the coordination perspective, and to identify the trends they indicate. Briefly, four topics are involved. First, isolation mechanisms deployed at different levels (micro-architecture, system, and virtualization levels) are reviewed, including GPU multitasking methods. Second, resource scheduling techniques within an individual machine and at the cluster level are investigated, respectively. In particular, GPU scheduling for deep learning applications is described in detail. Third, adaptive resource management, including the latest microservice-related research, is thoroughly explored. Finally, future research directions are discussed in the light of advanced work. We hope that this review will help researchers establish a global view of the landscape of resource management techniques in shared clouds, and see technology trends more clearly.

Keywords: Coordination     Co-location     Heterogeneous computing     Microservice     Resource scheduling techniques    

Research of trust valuation based on cloud model

Lu Feng, Wu Huizhong

Strategic Study of CAE 2008, Volume 10, Issue 10,   Pages 84-90

Abstract:

At present, the descriptions of trust given by existing trust valuation models lack comprehensiveness. To solve this problem, this paper discusses the co-existence and integration of the fuzziness and randomness of trust relations, analyzes how cloud models describe uncertain concepts and the algorithms by which cloud models transform between qualitative concepts and their quantitative expressions, and presents the trust cloud, a trust evaluation model based on cloud theory. The model provides algorithms for transferring and combining trust information described by digital characteristics. While describing an accurate trust expectation, it portrays the uncertainty of trust through entropy and hyper-entropy. Compared with traditional trust valuation models, the trust values obtained in this model contain more semantic information, making the valuation results more suitable as evidence for decision-making on trust.

Keywords: trust valuation     trust model     cloud model     cloud generator    
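The forward normal cloud generator that underlies such trust clouds produces drops from the three digital characteristics Ex (expectation), En (entropy), and He (hyper-entropy). The sketch below is a common formulation of the generator, assumed here rather than taken from the paper:

```python
import math
import random

def normal_cloud(ex, en, he, n, seed=0):
    """Forward normal cloud generator: returns n (drop, certainty-degree)
    pairs from the digital characteristics Ex, En, He."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_i = rng.gauss(en, he)                 # entropy perturbed by He
        x = rng.gauss(ex, abs(en_i))             # a cloud drop around Ex
        mu = math.exp(-(x - ex) ** 2 / (2 * en_i ** 2))  # certainty degree
        drops.append((x, mu))
    return drops

# A trust cloud with expectation 0.8: drops concentrate near 0.8 and
# their certainty degrees lie in (0, 1].
drops = normal_cloud(ex=0.8, en=0.05, he=0.01, n=1000)
print(len(drops))  # 1000
```

A larger He spreads the entropy samples and thickens the cloud, which is how the model expresses the extra uncertainty the abstract attributes to hyper-entropy.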
