Resource Type

Journal Article (4222)

Year

2024 (17), 2023 (344), 2022 (312), 2021 (329), 2020 (254), 2019 (263), 2018 (224), 2017 (261), 2016 (165), 2015 (135), 2014 (115), 2013 (80), 2012 (99), 2011 (97), 2010 (112), 2009 (111), 2008 (134), 2007 (154), 2006 (156), 2005 (156)


Keywords

Machine learning (35), sustainable development (35), Deep learning (28), Artificial intelligence (27), Additive manufacturing (23), COVID-19 (21), development strategy (16), innovation (15), neural network (14), development (13), engineering (13), management (12), numerical simulation (11), Reinforcement learning (10), China (9), industrialization (9), technology (9), Big data (8), DX pile (8)


MDLB: a metadata dynamic load balancing mechanism based on reinforcement learning Research Articles

Zhao-qi Wu, Jin Wei, Fan Zhang, Wei Guo, Guang-wei Xie,17034203@qq.com

Frontiers of Information Technology & Electronic Engineering 2020, Volume 21, Issue 7,   Pages 963-1118 doi: 10.1631/FITEE.1900121

Abstract: With the growing amount of information and data, object-based storage systems have been widely used in many applications, including the Google File System, Amazon S3, Hadoop Distributed File System, and Ceph, in which load balancing of metadata plays an important role in improving the input/output performance of the entire system. Unbalanced load on the metadata server leads to a serious bottleneck problem for system performance. However, most existing load balancing strategies, which are based on subtree segmentation or hashing, lack good dynamics and adaptability. In this study, we propose a metadata dynamic load balancing (MDLB) mechanism based on reinforcement learning (RL). The Q-learning algorithm is adopted, and our RL-based strategy consists of three modules, i.e., the policy selection network, load balancing network, and parameter update network. Experimental results show that the proposed MDLB algorithm can adjust the load dynamically according to the performance of the servers, and that it has good adaptability in the case of a sudden change in data volume.

Keywords: Object-based storage system     Metadata     Dynamic load balancing     Reinforcement learning     Q-learning
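
The mechanism above frames metadata load balancing as a reinforcement learning problem solved with Q-learning. As a rough illustration of that framing only (not the paper's three-network design), the following minimal tabular Q-learning sketch in Python learns to move load away from the hottest of a few hypothetical metadata servers; the state encoding, reward, and server names are all assumptions.

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
    servers = ["mds0", "mds1", "mds2"]           # hypothetical metadata servers
    Q = defaultdict(float)                       # Q[(state, action)] table

    def reward(loads):
        # More balanced load (smaller spread) earns a higher reward.
        return -(max(loads.values()) - min(loads.values()))

    def step(loads, target):
        # Move one unit of metadata load from the hottest server to the chosen target.
        hottest = max(loads, key=loads.get)
        if target != hottest and loads[hottest] > 0:
            loads[hottest] -= 1
            loads[target] += 1
        return loads

    loads = {"mds0": 10, "mds1": 3, "mds2": 1}   # synthetic initial load
    for _ in range(500):
        state = max(loads, key=loads.get)        # state: which server is currently hottest
        if random.random() < EPSILON:            # epsilon-greedy exploration
            action = random.choice(servers)
        else:
            action = max(servers, key=lambda a: Q[(state, a)])
        loads = step(loads, action)
        r = reward(loads)
        nxt = max(loads, key=loads.get)
        Q[(state, action)] += ALPHA * (r + GAMMA * max(Q[(nxt, a)] for a in servers) - Q[(state, action)])

    print(loads)   # the learned policy drives the load toward a balanced split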

A novel non-volatile memory storage system for I/O-intensive applications

Wen-bing HAN, Xiao-gang CHEN, Shun-fen LI, Ge-zi LI, Zhi-tang SONG, Da-gang LI, Shi-yan CHEN

Frontiers of Information Technology & Electronic Engineering 2018, Volume 19, Issue 10,   Pages 1291-1302 doi: 10.1631/FITEE.1700061

Abstract:

The emerging memory technologies, such as phase change memory (PCM), provide chances for high-performance storage of I/O-intensive applications. However, traditional software stack and hardware architecture need to be optimized to enhance I/O efficiency. In addition, narrowing the distance between computation and storage reduces the number of I/O requests and has become a popular research direction. This paper presents a novel PCM-based storage system. It consists of the in-storage processing enabled file system (ISPFS) and the configurable parallel computation fabric in storage, which is called an in-storage processing (ISP) engine. On one hand, ISPFS takes full advantage of non-volatile memory (NVM)’s characteristics, and reduces software overhead and data copies to provide low-latency high-performance random access. On the other hand, ISPFS passes ISP instructions through a command file and invokes the ISP engine to deal with I/O-intensive tasks. Extensive experiments are performed on the prototype system. The results indicate that ISPFS achieves 2 to 10 times throughput compared to EXT4. Our ISP solution also reduces the number of I/O requests by 97% and is 19 times more efficient than software implementation for I/O-intensive applications.

Keywords: In-storage processing     File system     Non-volatile memory (NVM)     Storage system     I/O-intensive applications    
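
The key idea above is that ISPFS forwards work to the in-storage processing engine by writing ISP instructions into a command file instead of moving data to the host. The toy sketch below imitates that command-file pattern on an ordinary file system; the file names, command format, and "engine" are invented for illustration and are not ISPFS's actual interface.

    import json, os, tempfile

    def host_submit(cmd_path, operation, target):
        # The host writes an ISP instruction into a well-known command file.
        with open(cmd_path, "w") as f:
            json.dump({"op": operation, "target": target}, f)

    def engine_poll(cmd_path):
        # A real ISP engine would execute this inside the storage device,
        # next to the data, instead of shipping the data to the host.
        with open(cmd_path) as f:
            cmd = json.load(f)
        if cmd["op"] == "count_lines":
            with open(cmd["target"]) as data:
                return sum(1 for _ in data)
        raise ValueError("unknown ISP command")

    with tempfile.TemporaryDirectory() as d:
        data_path = os.path.join(d, "data.txt")
        with open(data_path, "w") as f:
            f.write("a\nb\nc\n")
        cmd_path = os.path.join(d, "ispfs.cmd")
        host_submit(cmd_path, "count_lines", data_path)
        print(engine_poll(cmd_path))   # -> 3, computed near the data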

A reliable power management scheme for consistent hashing based distributed key value storage systems Article

Nan-nan ZHAO,Ji-guang WAN,Jun WANG,Chang-sheng XIE

Frontiers of Information Technology & Electronic Engineering 2016, Volume 17, Issue 10,   Pages 994-1007 doi: 10.1631/FITEE.1601162

Abstract: Distributed key value storage systems are among the most important types of distributed storage systems currently deployed in data centers. Nowadays, enterprise data centers are facing growing pressure in reducing their power consumption. In this paper, we propose GreenCHT, a reliable power management scheme for consistent hashing based distributed key value storage systems. It consists of a multi-tier replication scheme, a reliable distributed log store, and a predictive power mode scheduler (PMS). Instead of randomly placing replicas of each object on a number of nodes in the consistent hash ring, we arrange the replicas of objects on nonoverlapping tiers of nodes in the ring. This allows the system to fall in various power modes by powering down subsets of servers while not violating data availability. The predictive PMS predicts workloads and adapts to load fluctuation. It cooperates with the multi-tier replication strategy to provide power proportionality for the system. To ensure that the reliability of the system is maintained when replicas are powered down, we distribute the writes to standby replicas to active servers, which ensures failure tolerance of the system. GreenCHT is implemented based on Sheepdog, a distributed key value storage system that uses consistent hashing as an underlying distributed hash table. By replaying 12 typical real workload traces collected from Microsoft, the evaluation results show that GreenCHT can provide significant power savings while maintaining a desired performance. We observe that GreenCHT can reduce power consumption by up to 35%–61%.

Keywords: Consistent hash table (CHT)     Replication     Power management     Key value storage system     Reliability    
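
GreenCHT's placement rule, as described above, puts the replicas of an object on non-overlapping tiers of the consistent hash ring so that whole tiers can be powered down without losing the last copy. A minimal sketch of that placement, assuming a three-tier ring with made-up node names and MD5 as the hash function:

    import hashlib

    # Three non-overlapping tiers, one replica per tier (node names are invented).
    NODES_PER_TIER = [["n0", "n1", "n2"], ["n3", "n4", "n5"], ["n6", "n7", "n8"]]

    def ring_position(name):
        return int(hashlib.md5(name.encode()).hexdigest(), 16)

    def place_replicas(obj_key):
        pos = ring_position(obj_key)
        replicas = []
        for tier in NODES_PER_TIER:
            ranked = sorted(tier, key=ring_position)
            # Clockwise successor of the object's position, restricted to this tier;
            # powering down tier 2 (or tiers 1 and 2) still leaves a live copy in tier 0.
            succ = next((n for n in ranked if ring_position(n) >= pos), ranked[0])
            replicas.append(succ)
        return replicas

    print(place_replicas("object-42"))   # exactly one replica per tier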

ShortTail: taming tail latency for erasure-code-based in-memory systems Research Article

Yun TENG, Zhiyue LI, Jing HUANG, Guangyan ZHANG

Frontiers of Information Technology & Electronic Engineering 2022, Volume 23, Issue 11,   Pages 1646-1657 doi: 10.1631/FITEE.2100566

Abstract:

In-memory storage systems with erasure coding (EC) enabled are widely used to achieve high performance and data availability. However, as the scale of clusters grows, the server-level fail-slow problem is becoming increasingly frequent, which can create long tail latency. The influence of long tail latency is further amplified in EC-based systems due to the synchronous nature of multiple EC sub-operations. In this paper, we propose an EC-enabled in-memory storage system called ShortTail, which can achieve consistent performance and low latency for both reads and writes. First, ShortTail uses a lightweight request monitor to track the performance of each memory node and identify any fail-slow node. Second, ShortTail selectively performs degraded reads and redirected writes to avoid accessing fail-slow nodes. Finally, ShortTail adopts an adaptive write strategy to reduce the write amplification of small writes. We implement ShortTail on top of Memcached and compare it with two baseline systems. The experimental results show that ShortTail can reduce the P99 tail latency by up to 63.77%; it also brings significant improvements in the median latency and average latency.

Keywords: Erasure code     In-memory system     Node fail-slow     Small write     Tail latency    
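
The abstract describes a monitor-then-avoid loop: track per-node performance, flag fail-slow nodes, and serve degraded reads (or redirect writes) around them. A small illustrative sketch, with the EWMA weighting, slowness threshold, and node names chosen arbitrarily rather than taken from ShortTail:

    ALPHA = 0.2          # EWMA weight for new latency samples (assumed)
    SLOW_FACTOR = 3.0    # a node this many times slower than the median is treated as fail-slow

    class NodeMonitor:
        def __init__(self, nodes):
            self.lat = {n: 1.0 for n in nodes}   # EWMA latency per node (ms)

        def record(self, node, latency_ms):
            self.lat[node] = (1 - ALPHA) * self.lat[node] + ALPHA * latency_ms

        def fail_slow(self, node):
            median = sorted(self.lat.values())[len(self.lat) // 2]
            return self.lat[node] > SLOW_FACTOR * median

    def read_block(monitor, primary, stripe_nodes):
        # Normal read from the primary copy; degraded read (decode from the rest of
        # the stripe) when the primary looks fail-slow.
        if monitor.fail_slow(primary):
            return ("degraded-read", [n for n in stripe_nodes if n != primary])
        return ("direct-read", [primary])

    mon = NodeMonitor(["m0", "m1", "m2", "m3"])
    for sample in (1.0, 1.2, 0.9):
        mon.record("m0", sample)
    for sample in (9.0, 11.0, 10.5):
        mon.record("m3", sample)                 # m3 is misbehaving
    print(read_block(mon, "m3", ["m0", "m1", "m2", "m3"]))   # the read is served around m3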

TextGen: a realistic text data content generation method for modern storage system benchmarks Article

Long-xiang WANG,Xiao-she DONG,Xing-jun ZHANG,Yin-feng WANG,Tao JU,Guo-fu FENG

Frontiers of Information Technology & Electronic Engineering 2016, Volume 17, Issue 10,   Pages 982-993 doi: 10.1631/FITEE.1500332

Abstract: Modern storage systems incorporate data compressors to improve their performance and capacity. As a result, data content can significantly influence the result of a storage system benchmark. Because real-world proprietary datasets are too large to be copied onto a test storage system, and most data cannot be shared due to privacy issues, a benchmark needs to generate data synthetically. To ensure that the result is accurate, it is necessary to generate data content based on the characterization of real-world data properties that influence the storage system performance during the execution of a benchmark. The existing approach, called SDGen, cannot guarantee that the benchmark result is accurate in storage systems that have built-in word-based compressors. The reason is that SDGen characterizes the properties that influence compression performance only at the byte level, and no properties are characterized at the word level. To address this problem, we present TextGen, a realistic text data content generation method for modern storage system benchmarks. TextGen builds the word corpus by segmenting real-world text datasets, and creates a word-frequency distribution by counting each word in the corpus. To improve data generation performance, the word-frequency distribution is fitted to a lognormal distribution by maximum likelihood estimation. The Monte Carlo approach is used to generate synthetic data. The running time of TextGen generation depends only on the expected data size, which means that the time complexity of TextGen is O(n). To evaluate TextGen, four real-world datasets were used to perform an experiment. The experimental results show that, compared with SDGen, the compression performance and compression ratio of the datasets generated by TextGen deviate less from real-world datasets when end-tagged dense code, a representative of word-based compressors, is evaluated.

Keywords: Benchmark     Storage system     Word-based compression    
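
The pipeline above (segment a corpus, count word frequencies, fit a lognormal by maximum likelihood, then sample words Monte Carlo style) can be summarized in a few lines. The sketch below is only a simplified rendering of that pipeline with a toy corpus, not TextGen itself:

    import math, random
    from collections import Counter

    corpus_text = "the quick brown fox jumps over the lazy dog the fox"   # stand-in corpus
    counts = Counter(corpus_text.split())
    vocab = list(counts)

    # Maximum likelihood fit of a lognormal to the observed word frequencies:
    # mu = mean(ln f), sigma = std(ln f).
    logs = [math.log(c) for c in counts.values()]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs)) or 1e-6

    def generate(n_words):
        # Monte Carlo step: draw a synthetic frequency per vocabulary word from the
        # fitted lognormal, then emit words in proportion to those frequencies.
        weights = [random.lognormvariate(mu, sigma) for _ in vocab]
        return " ".join(random.choices(vocab, weights=weights, k=n_words))

    print(generate(20))   # synthetic text whose word statistics follow the fitted model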

A reliable and energy-efficient storage system with erasure coding cache Article

Ji-guang WAN, Da-ping LI, Xiao-yang QU, Chao YIN, Jun WANG, Chang-sheng XIE

Frontiers of Information Technology & Electronic Engineering 2017, Volume 18, Issue 9,   Pages 1370-1384 doi: 10.1631/FITEE.1600972

Abstract: In modern energy-saving replication storage systems, a primary group of disks is always powered up to serve incoming requests while other disks are often spun down to save energy during slack periods. However, since new writes cannot be immediately synchronized into all disks, system reliability is degraded. In this paper, we develop a high-reliability and energy-efficient replication storage system, named RERAID, based on RAID10. RERAID employs part of the free space in the primary disk group and uses erasure coding to construct a code cache at the front end to absorb new writes. Since the code cache supports failure recovery of two or more disks by using erasure coding, RERAID guarantees a reliability comparable with that of the RAID10 storage system. In addition, we develop an algorithm, called erasure coding write (ECW), to buffer many small random writes into a few large writes, which are then written to the code cache in a parallel fashion sequentially to improve the write performance. Experimental results show that RERAID significantly improves write performance and saves more energy than existing solutions.

Keywords: Reliability     Energy-efficient     Storage system     Erasure coding     Cache management    
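
The ECW algorithm described above batches many small random writes into a few large writes that are protected by erasure coding in the code cache. A toy sketch of that buffering idea, using XOR parity and an arbitrary stripe geometry:

    STRIPE_BLOCKS = 4     # small writes buffered per large write (arbitrary)
    BLOCK_SIZE = 16       # bytes per buffered block (arbitrary)

    def xor_parity(blocks):
        parity = bytearray(BLOCK_SIZE)
        for block in blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte
        return bytes(parity)

    class CodeCache:
        def __init__(self):
            self.pending = []     # staged small writes
            self.stripes = []     # flushed large writes: (data blocks, parity)

        def write(self, data: bytes):
            self.pending.append(data.ljust(BLOCK_SIZE, b"\0")[:BLOCK_SIZE])
            if len(self.pending) == STRIPE_BLOCKS:
                blocks, self.pending = self.pending, []
                # One large, parity-protected write replaces many scattered small ones.
                self.stripes.append((blocks, xor_parity(blocks)))

    cache = CodeCache()
    for i in range(8):
        cache.write(f"small write {i}".encode())
    print(len(cache.stripes), "large stripe writes instead of 8 scattered small writes")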

Engineering DNA Materials for Sustainable Data Storage Using a DNA Movable-Type System Article

Zi-Yi Gong, Li-Fu Song, Guang-Sheng Pei, Yu-Fei Dong, Bing-Zhi Li, Ying-Jin Yuan

Engineering 2023, Volume 29, Issue 10,   Pages 130-136 doi: 10.1016/j.eng.2022.05.023

Abstract:

DNA molecules are green materials with great potential for high-density and long-term data storage. However, the current data-writing process of DNA data storage via DNA synthesis suffers from high costs and the production of hazards, limiting its practical applications. Here, we developed a DNA movable-type storage system that can utilize DNA fragments pre-produced by cell factories for data writing. In this system, these pre-generated DNA fragments, referred to herein as "DNA movable types," are used as basic writing units in a repetitive way. The process of data writing is achieved by the rapid assembly of these DNA movable types, thereby avoiding the costly and environmentally hazardous process of de novo DNA synthesis. With this system, we successfully encoded 24 bytes of digital information in DNA and read it back accurately by means of high-throughput sequencing and decoding, thereby demonstrating the feasibility of this system. Through its repetitive usage and biological assembly of DNA movable-type fragments, this system exhibits excellent potential for writing cost reduction, opening up a novel route toward an economical and sustainable digital data-storage technology.

Keywords: Synthetic biology     DNA data storage     DNA movable-type storage system     Economical DNA data storage
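
The central idea above is that writing data becomes the assembly of pre-produced "movable type" fragments rather than de novo synthesis. The toy sketch below maps each byte value to one fragment of an invented 256-entry library of 4-nt sequences and "writes" by selecting fragments; the library size and sequences are illustrative assumptions, not the paper's encoding.

    import itertools

    BASES = "ACGT"
    # 4^4 = 256 distinct 4-nt sequences, one per byte value; a stand-in movable-type library.
    LIBRARY = {i: "".join(c) for i, c in enumerate(itertools.product(BASES, repeat=4))}
    REVERSE = {seq: byte for byte, seq in LIBRARY.items()}

    def write(data: bytes):
        # Data writing = ordered selection of pre-produced fragments (no new synthesis).
        return [LIBRARY[b] for b in data]

    def read(fragments):
        # Decoding = look each sequenced fragment up in the library.
        return bytes(REVERSE[f] for f in fragments)

    payload = b"DNA data"
    assembly = write(payload)
    assert read(assembly) == payload
    print(len(assembly), "library fragments reused to store", len(payload), "bytes")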

ONFS: a hierarchical hybrid file system based on memory, SSD, and HDD for high performance computers Article

Xin LIU, Yu-tong LU, Jie YU, Peng-fei WANG, Jie-ting WU, Ying LU

Frontiers of Information Technology & Electronic Engineering 2017, Volume 18, Issue 12,   Pages 1940-1971 doi: 10.1631/FITEE.1700626

Abstract: With supercomputers developing towards exascale, the number of compute cores increases dramatically, making more complex and larger-scale applications possible. The input/output (I/O) requirements of large-scale applications, workflow applications, and their checkpointing include substantial bandwidth and an extremely low latency, posing a serious challenge to high performance computing (HPC) storage systems. Current hard disk drive (HDD) based underlying storage systems are becoming more and more incompetent to meet the requirements of next-generation exascale supercomputers. To rise to the challenge, we propose a hierarchical hybrid storage system, on-line and near-line file system (ONFS). It leverages dynamic random access memory (DRAM) and solid state drive (SSD) in compute nodes, and HDD in storage servers to build a three-level storage system in a unified namespace. It supports portable operating system interface (POSIX) semantics, and provides high bandwidth, low latency, and huge storage capacity. In this paper, we present the technical details on distributed metadata management, the strategy of memory borrow and return, data consistency, parallel access control, and mechanisms guiding downward and upward migration in ONFS. We implement an ONFS prototype on the TH-1A supercomputer, and conduct experiments to test its I/O performance and scalability. The results show that the bandwidths of single-thread and multi-thread ‘read’/‘write’ are 6-fold and 5-fold better than HDD-based Lustre, respectively. The I/O bandwidth of data-intensive applications in ONFS can be 6.35 times that in Lustre.

Keywords: High performance computing     Hierarchical hybrid storage system     Distributed metadata management     Data migration    
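
ONFS, as summarized above, keeps hot data in DRAM and SSD and migrates data downward and upward across the three tiers. A much simplified sketch of heat-guided placement, with toy capacities and an access-count heat metric standing in for ONFS's actual migration mechanisms:

    TIERS = ["DRAM", "SSD", "HDD"]
    CAPACITY = {"DRAM": 2, "SSD": 4, "HDD": 1000}   # files per tier; toy numbers

    class Onfs:
        def __init__(self):
            self.tier = {}    # file name -> tier it currently lives on
            self.heat = {}    # file name -> access count (a crude heat metric)

        def access(self, name):
            self.heat[name] = self.heat.get(name, 0) + 1
            self.tier.setdefault(name, "HDD")       # new files start on the capacity tier
            self._migrate()

        def _migrate(self):
            # Hotter files migrate upward into faster tiers; colder ones fall downward.
            ranked = sorted(self.heat, key=self.heat.get, reverse=True)
            i = 0
            for t in TIERS:
                cap = CAPACITY[t]
                while i < len(ranked) and cap > 0:
                    self.tier[ranked[i]] = t
                    i += 1
                    cap -= 1

    fs = Onfs()
    for name, hits in [("ckpt.dat", 9), ("log.txt", 5), ("archive.tar", 1)]:
        for _ in range(hits):
            fs.access(name)
    print(fs.tier)   # hotter files end up on faster tiers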

SA-RSR: a read-optimal data recovery strategy for XOR-coded distributed storage systems Research Articles

Xingjun ZHANG, Ningjing LIANG, Yunfei LIU, Changjiang ZHANG, Yang LI

Frontiers of Information Technology & Electronic Engineering 2022, Volume 23, Issue 6,   Pages 858-875 doi: 10.1631/FITEE.2100242

Abstract:

To ensure the reliability and availability of data, redundancy strategies are always required for distributed storage systems. Erasure coding, one of the representative redundancy strategies, has the advantage of low storage overhead, which facilitates its employment in distributed storage systems. Among the various erasure coding schemes, XOR-based erasure codes are becoming popular due to their high computing speed. When a single-node failure occurs in such coding schemes, a process called data recovery takes place to retrieve the failed node's lost data from surviving nodes. However, data transmission during the recovery process usually requires a considerable amount of time. Current research has focused mainly on reducing the amount of data needed for recovery to reduce the time required for data transmission, but it has encountered problems such as significant complexity and local optima. In this paper, we propose a random search recovery algorithm, named SA-RSR, to speed up single-node failure recovery for XOR-based erasure codes. SA-RSR uses a simulated annealing technique to search for an optimal recovery solution that reads and transmits a minimum amount of data. In addition, this search process can be done in polynomial time. We evaluate SA-RSR with a variety of XOR-based erasure codes in simulations and in a real storage system, Ceph. Experimental results in Ceph show that SA-RSR reduces the amount of data required for recovery by up to 30.0% and improves the performance of data recovery by up to 20.36% compared to the conventional recovery method.

Keywords: Distributed storage system     Data reliability and availability     XOR-based erasure codes     Single-node failure     Data recovery    
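
SA-RSR, as described above, searches with simulated annealing for the combination of repair equations that minimizes the data read during single-node recovery. The sketch below shows that search idea on an invented set of XOR repair options; the equations, cooling schedule, and iteration count are assumptions, not the paper's parameters:

    import math, random

    # Invented repair equations: each lost block can be rebuilt from any one of
    # several sets of surviving blocks (this is not a real XOR code layout).
    repair_options = {
        "L0": [{"A", "B", "P0"}, {"C", "D", "P1", "P2"}],
        "L1": [{"B", "C", "P1"}, {"A", "D", "P0", "P2"}],
        "L2": [{"A", "C", "P2"}, {"B", "D", "P0", "P1"}],
    }

    def cost(choice):
        # Number of distinct surviving blocks that must be read for this choice.
        read = set()
        for lost, idx in choice.items():
            read |= repair_options[lost][idx]
        return len(read)

    def anneal(iters=2000, t0=2.0, cooling=0.995):
        choice = {lost: 0 for lost in repair_options}
        best, best_cost, t = dict(choice), cost(choice), t0
        for _ in range(iters):
            lost = random.choice(list(repair_options))
            neighbor = dict(choice)
            neighbor[lost] = random.randrange(len(repair_options[lost]))
            delta = cost(neighbor) - cost(choice)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                choice = neighbor          # accept better (and occasionally worse) moves
            if cost(choice) < best_cost:
                best, best_cost = dict(choice), cost(choice)
            t *= cooling
        return best, best_cost

    print(anneal())   # chosen equation per lost block and the total blocks to read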

SoftSSD: enabling rapid flash firmware prototyping for solid-state drives Research Article

Jin XUE, Renhai CHEN, Tianyu WANG, Zili SHAO,jinxue@cse.cuhk.edu.hk,renhai.chen@tju.edu.cn,tywang@cse.cuhk.edu.hk,shao@cse.cuhk.edu.hk

Frontiers of Information Technology & Electronic Engineering 2023, Volume 24, Issue 5,   Pages 659-674 doi: 10.1631/FITEE.2200456

Abstract: Recently, solid-state drives (SSDs) have been used in a wide range of emerging data processing systems. Essentially, an SSD is a complex embedded system that involves both hardware and software design. For the latter, firmware modules such as the flash translation layer (FTL) orchestrate internal operations and flash management, and are crucial to the overall input/output performance of an SSD. Despite the rapid development of new SSD features in the market, the research of flash firmware has been mostly based on simulations due to the lack of a realistic and extensible SSD development platform. In this paper, we propose SoftSSD, a software-oriented SSD development platform for rapid flash firmware prototyping. The core of SoftSSD is a novel framework with an event-driven programming model. With the programming model, new FTL algorithms can be implemented and integrated into a full-featured flash firmware in a straightforward way. The resulting flash firmware can be deployed and evaluated on a hardware development board, which can be connected to a host system via peripheral component interconnect express and serve as a normal non-volatile memory express SSD. Different from existing hardware-oriented development platforms, SoftSSD implements the majority of SSD components (e.g., host interface controller) in software, so that data flows and internal states that were once confined in the hardware can now be examined with a software debugger, providing the observability and extensibility that are critical to the rapid prototyping and research of flash firmware. We describe the programming model and hardware design of SoftSSD. We also perform experiments with real application workloads on a prototype board to demonstrate the performance and usefulness of SoftSSD, and release the open-source code of SoftSSD for public access.

Keywords: Solid-state drives     Storage system     Software hardware co-design    
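
The platform above centers on an event-driven programming model for flash firmware, in which FTL logic reacts to host and flash events. Below is a minimal sketch of such an event loop with a toy page-mapping handler; the event names, handler registration, and dispatcher are assumptions rather than SoftSSD's actual API.

    from collections import deque

    handlers = {}
    events = deque()

    def on(event_type):
        # Register a handler for one event type (a stand-in for the real framework).
        def register(fn):
            handlers[event_type] = fn
            return fn
        return register

    mapping = {}          # logical page -> physical page (toy page-level FTL state)
    next_free = [0]

    @on("host_write")
    def handle_write(ev):
        lpn = ev["lpn"]
        mapping[lpn] = next_free[0]      # out-of-place update: always program a new page
        next_free[0] += 1
        events.append({"type": "flash_program_done", "lpn": lpn, "ppn": mapping[lpn]})

    @on("flash_program_done")
    def handle_done(ev):
        print("LPN", ev["lpn"], "now mapped to PPN", ev["ppn"])

    events.append({"type": "host_write", "lpn": 7})
    events.append({"type": "host_write", "lpn": 7})   # rewriting the page remaps it
    while events:
        ev = events.popleft()
        handlers[ev["type"]](ev)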

A disk failure prediction model for multiple issues Research Article

Yunchuan GUAN, Yu LIU, Ke ZHOU, Qiang LI, Tuanjie WANG, Hui LI,hustgyc@hust.edu.cn,liu_yu@hust.edu.cn

Frontiers of Information Technology & Electronic Engineering 2023, Volume 24, Issue 7,   Pages 964-979 doi: 10.1631/FITEE.2200488

Abstract: Disk failure prediction methods have been useful in handling a single issue, e.g., heterogeneous disks, model aging, and minority samples. However, because these issues often exist simultaneously, prediction models that can handle only one will result in prediction bias in reality. Existing methods simply fuse various models, lacking discussion of training data preparation and learning patterns when facing multiple issues, although the solutions to different issues often conflict with each other. As a result, we first explore the training data preparation for multiple issues via a data partitioning pattern, i.e., our proposed multi-property data partitioning (MDP). Then, we consider learning with the partitioned data for multiple issues as learning multiple tasks, and introduce the model-agnostic meta-learning (MAML) framework to achieve the learning. Based on these improvements, we propose a novel model named MDP-MAML. MDP addresses the challenges of uneven partitioning and difficulty in partitioning by time, and MAML addresses the challenge of learning with multiple domains and minor samples for multiple issues. In addition, MDP-MAML can assimilate emerging issues for learning and prediction. On the datasets reported by two real-world data centers, compared to state-of-the-art methods, MDP-MAML can improve the area under the curve (AUC) and false detection rate (FDR) from 0.85 to 0.89 and from 0.85 to 0.91, respectively, while reducing the false alarm rate (FAR) from 4.88% to 2.85%.

Keywords: Storage system reliability     Disk failure prediction     Self-monitoring analysis and reporting technology (SMART)     Machine learning    
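
The core preparation step above is multi-property data partitioning: SMART samples are split by several properties, and each partition is then treated as a learning task for MAML. The sketch below covers only that grouping step, with invented sample records and property names; the actual MDP rules and the meta-training loop are in the paper and are not reproduced here.

    from collections import defaultdict

    # Invented SMART-style sample records; real inputs would come from data-center telemetry.
    samples = [
        {"model": "ST4000", "month": "2023-01", "smart": [5, 0, 197], "failed": 0},
        {"model": "ST4000", "month": "2023-02", "smart": [9, 1, 201], "failed": 1},
        {"model": "WD8000", "month": "2023-01", "smart": [2, 0, 150], "failed": 0},
    ]

    def partition(samples, properties=("model", "month")):
        # Group samples by the chosen properties; each group becomes one learning task.
        tasks = defaultdict(list)
        for s in samples:
            key = tuple(s[p] for p in properties)
            tasks[key].append(s)
        return tasks

    for key, data in partition(samples).items():
        # In MDP-MAML each such task would supply support/query splits for meta-training.
        print(key, len(data), "samples")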

Generic user revocation systems for attribute-based encryption in cloud storage

Genlang CHEN, Zhiqian XU, Hai JIANG, Kuan-ching LI

Frontiers of Information Technology & Electronic Engineering 2018, Volume 19, Issue 11,   Pages 1362-1384 doi: 10.1631/FITEE.1800405

Abstract:

Cloud-based storage is a service model for businesses and individual users that involves paid or free storage resources. This service model enables on-demand storage capacity and management to users anywhere via the Internet. Because most cloud storage is provided by third-party service providers, the trust required for the cloud storage providers and the shared multi-tenant environment present special challenges for data protection and access control. Attribute-based encryption (ABE) not only protects data secrecy, but also has ciphertexts or decryption keys associated with fine-grained access policies that are automatically enforced during the decryption process. This enforcement puts data access under control at each data item level. However, ABE schemes have practical limitations on dynamic user revocation. In this paper, we propose two generic user revocation systems for ABE with user privacy protection, user revocation via ciphertext re-encryption (UR-CRE) and user revocation via cloud storage providers (UR-CSP), which work with any type of ABE scheme to dynamically revoke users.

Keywords: Attribute-based encryption     Generic user revocation     User privacy     Cloud storage     Access control    

Storage wall for exascale supercomputing Article

Wei HU,Guang-ming LIU,Qiong LI,Yan-huang JIANG,Gui-lin CAI

Frontiers of Information Technology & Electronic Engineering 2016, Volume 17, Issue 11,   Pages 1154-1175 doi: 10.1631/FITEE.1601336

Abstract: The mismatch between compute performance and I/O performance has long been a stumbling block as supercomputers evolve from petaflops to exaflops. Currently, many parallel applications are I/O intensive, and their overall running times are typically limited by I/O performance. To quantify the I/O performance bottleneck and highlight the significance of achieving scalable performance in peta/exascale supercomputing, in this paper, we introduce for the first time a formal definition of the ‘storage wall’ from the perspective of parallel application scalability. We quantify the effects of the storage bottleneck by providing a storage-bounded speedup, defining the storage wall quantitatively, presenting existence theorems for the storage wall, and classifying the system architectures depending on I/O performance variation. We analyze and extrapolate the existence of the storage wall by experiments on Tianhe-1A and case studies on Jaguar. These results provide insights on how to alleviate the storage wall bottleneck in system design and achieve hardware/software optimizations in peta/exascale supercomputing.

Keywords: Storage-bounded speedup     Storage wall     High performance computing     Exascale computing    
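
The storage wall is defined above from the scalability perspective: once I/O time stops shrinking with the node count, the storage-bounded speedup saturates. The Amdahl-style timing model and the numbers below are illustrative assumptions, not the paper's formal definition, but they show the saturation behaviour the abstract quantifies.

    # Assumed timing model: compute time scales with node count N, I/O time does not.
    T_COMPUTE = 95.0    # compute seconds on one node (scales as T_COMPUTE / N)
    T_IO = 5.0          # I/O seconds, assumed constant regardless of N

    def storage_bounded_speedup(n):
        return (T_COMPUTE + T_IO) / (T_COMPUTE / n + T_IO)

    for n in (1, 16, 256, 4096):
        print(n, round(storage_bounded_speedup(n), 1))
    # The speedup approaches (95 + 5) / 5 = 20 no matter how many nodes are added;
    # that saturation point is the wall-like behaviour described above.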

Research and modeling analysis on adaptive nested fine-grained software rejuvenation of object-oriented software system

Wang Zhan,Zhao Yanli,Liu Fengyu,Zhang Hong

Strategic Study of CAE 2008, Volume 10, Issue 7,   Pages 158-164

Abstract:

In order to enlarge the field of application, reduce the cost of software rejuvenation, and improve software availability and reliability, an adaptive software rejuvenation scheme with fine rejuvenation granularity is proposed for object-oriented software systems. Based on the characteristics of the components and the coupling relations between the components of the system, this paper determines the finest rejuvenation granularity, analyzes the degree of coupling, and presents a method to calculate the restart dependence between components and its degree. From this, the restart reachable set of each component is determined and the restart sets of the components at every granularity are obtained; finally, the adaptive nested software rejuvenation policy is formulated and modeled. This supports intelligent software rejuvenation at fine rejuvenation granularity in object-oriented software systems.

Keywords: software rejuvenation     restart dependence     restart reachable set     rejuvenation granularity     SHLPN    
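
A central step above is deriving each component's restart reachable set from the restart dependences between components. A small sketch of that closure computation over an invented dependence graph (the SHLPN modeling and the granularity analysis from the paper are not reproduced):

    restart_deps = {                    # edge a -> b: restarting a forces restarting b
        "OrderService": ["Cache"],
        "Cache": ["SessionStore"],
        "SessionStore": [],
        "Billing": ["Cache"],
    }

    def restart_reachable(component):
        # Transitive closure of restart dependences starting from one component.
        seen, stack = set(), [component]
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.add(c)
                stack.extend(restart_deps.get(c, []))
        return seen

    print(restart_reachable("OrderService"))   # components that must be rejuvenated together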

& Research Article

Yaofeng TU, Rong XIAO, Yinjun HAN, Zhenghua CHEN, Hao JIN, Xuecheng QI, Xinyuan SUN,tu.yaofeng@zte.com.cn,xiao.rong1@zte.com.cn,han.yinjun@zte.com.cn,chen.zhenghua@zte.com.cn,jin.hao1@zte.com.cn,qi.xuecheng@zte.com.cn,sun.xinyuan@zte.com.cn

Frontiers of Information Technology & Electronic Engineering 2023, Volume 24, Issue 5,   Pages 716-730 doi: 10.1631/FITEE.2200466

Abstract: In distributed storage systems, replication and erasure coding (EC) are common methods for data redundancy. Compared with replication, EC has better storage efficiency, but suffers higher overhead in update. Moreover, consistency and reliability problems caused by concurrent updates bring new challenges to applications of EC. Many works focus on optimizing the EC solution, including algorithm optimization, novel data update method, and so on, but lack the solutions for consistency and reliability problems. In this paper, we introduce a storage system that decouples data updating and EC encoding, namely, decoupled data updating and coding (DDUC), and propose a data placement policy that combines replication and parity blocks. For the (k, m) EC system, the data are placed as groups of m+1 replicas, and redundant data blocks of the same stripe are placed in the parity nodes, so that the parity nodes can autonomously perform local EC encoding. Based on the above policy, a two-phase data update method is implemented in which data are updated in replica mode in phase 1, and the EC encoding is done independently by parity nodes in phase 2. This solves the problem of data reliability degradation caused by concurrent updates while ensuring high concurrency performance. It also uses persistent memory (PMem) hardware features of byte addressing and eight-byte atomic write to implement a lightweight logging mechanism that improves performance while ensuring data consistency. Experimental results show that the concurrent access performance of the proposed storage system is 1.70–3.73 times that of the state-of-the-art storage system Ceph, and the latency is only 3.4%–5.9% that of Ceph.

Keywords: Concurrent update     High reliability     Erasure code     Consistency     Distributed storage system    
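
The two-phase update described above first applies updates in replica mode and later lets the parity nodes re-encode locally. A toy sketch of that flow with a single one-byte block per node and XOR-only parity; the node layout and log format are simplifying assumptions:

    data_nodes = {0: b"\x01", 1: b"\x02", 2: b"\x03"}   # one byte per data node, for brevity
    parity_log = []                                      # updates staged on the parity node
    parity_block = bytes([data_nodes[0][0] ^ data_nodes[1][0] ^ data_nodes[2][0]])

    def update_phase1(node_id, new_value):
        # Phase 1: the write completes in replica mode; the parity node only logs a copy.
        data_nodes[node_id] = new_value
        parity_log.append((node_id, new_value))

    def update_phase2():
        # Phase 2: the parity node re-encodes locally from its logged replicas,
        # with no extra round trips to the data nodes.
        global parity_block, parity_log
        parity_block = bytes([data_nodes[0][0] ^ data_nodes[1][0] ^ data_nodes[2][0]])
        parity_log = []

    update_phase1(1, b"\x07")     # fast, replica-style update
    update_phase2()               # deferred, local EC encoding
    print(parity_block)           # parity is consistent with the updated data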
