Resource Type

Journal Article 1777

Conference Information 64

Conference Videos 63

Conference Topics 1

Year

2024 6

2023 170

2022 181

2021 198

2020 157

2019 155

2018 99

2017 111

2016 81

2015 37

2014 30

2013 53

2012 46

2011 53

2010 39

2009 35

2008 58

2007 68

2006 44

2005 58

Keywords

Machine learning 42

Deep learning 34

Three Gorges Project 33

Artificial intelligence 21

neural network 16

Reinforcement learning 14

concrete 10

Three Gorges Project ship lift 9

numerical simulation 9

Neural network 8

ship lift 8

COVID-19 7

Software-defined networking (SDN) 7

mathematical model 7

TGP 6

simulation 6

cyberspace 5

cyberspace security 5

genetic algorithm 5

Deep 3D reconstruction: methods, data, and challenges Review Article

Caixia Liu, Dehui Kong, Shaofan Wang, Zhiyong Wang, Jinghua Li, Baocai Yin, lcxxib@emails.bjut.edu.cn, wangshaofan@bjut.edu.cn

Frontiers of Information Technology & Electronic Engineering 2021, Volume 22, Issue 5,   Pages 615-766 doi: 10.1631/FITEE.2000068

Abstract: Three-dimensional (3D) reconstruction of shapes is an important research topic in the fields of computer vision, computer graphics, pattern recognition, and virtual reality. Existing 3D reconstruction methods usually suffer from two bottlenecks: (1) they involve multiple manually designed stages, which can lead to cumulative errors and can hardly learn semantic features of 3D shapes automatically; (2) they depend heavily on the content and quality of images, as well as on precisely calibrated cameras. As a result, it is difficult to improve the reconstruction accuracy of those methods. 3D reconstruction methods based on deep learning overcome both of these bottlenecks by automatically learning semantic features of 3D shapes from low-quality images using deep networks. However, while these methods have various architectures, in-depth analysis and comparison of them are unavailable so far. We present a comprehensive survey of 3D reconstruction methods based on deep learning. First, based on different deep learning model architectures, we divide deep-learning-based 3D reconstruction methods into four types: recurrent neural network based, deep autoencoder based, generative adversarial network based, and convolutional neural network based methods, and analyze the corresponding methodologies carefully. Second, we investigate in detail four representative databases that are commonly used by the above methods. Third, we give a comprehensive comparison of deep-learning-based 3D reconstruction methods, covering the results of different methods on the same database, the results of each method on different databases, and the robustness of each method with respect to the number of views. Finally, we discuss future directions for 3D reconstruction methods based on deep learning.
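
As a rough illustration of one of the four families above (deep autoencoder based reconstruction), the sketch below maps a single image to a voxel occupancy grid; the backbone, layer sizes, and the 32^3 grid are assumptions for illustration, not any surveyed method's architecture.

```python
# Hedged sketch: encoder-decoder mapping an RGB image to voxel occupancies.
import torch
import torch.nn as nn

class Image2Voxel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(32 * 16 * 16, 256),
        )
        self.decoder = nn.Sequential(nn.Linear(256, 32 ** 3), nn.Sigmoid())

    def forward(self, img):                          # img: (batch, 3, 64, 64)
        occupancy = self.decoder(self.encoder(img))  # per-voxel occupancy probability
        return occupancy.view(-1, 32, 32, 32)

print(Image2Voxel()(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 32, 32, 32])
```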

Keywords: Deep learning models     3D reconstruction     Recurrent neural network     Deep autoencoder     Generative adversarial network     Convolutional neural network    

Adversarial Attacks and Defenses in Deep Learning Feature Article

Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu

Engineering 2020, Volume 6, Issue 3,   Pages 346-360 doi: 10.1016/j.eng.2019.12.012

Abstract:

With the rapid developments of artificial intelligence (AI) and deep learning (DL) techniques, it is critical
to ensure the security and robustness of the deployed algorithms. Recently, the security vulnerability of
DL algorithms to adversarial samples has been widely recognized. The fabricated samples can lead to various
misbehaviors of the DL models while being perceived as benign by humans. Successful implementations
of adversarial attacks in real physical-world scenarios further demonstrate their practicality.
Hence, adversarial attack and defense techniques have attracted increasing attention from both machine
learning and security communities and have become a hot research topic in recent years. In this paper,
we first introduce the theoretical foundations, algorithms, and applications of adversarial attack techniques.
We then describe a few research efforts on the defense techniques, which cover the broad frontier
in the field. Several open problems and challenges are subsequently discussed, which we hope will provoke
further research efforts in this critical area.
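
The fast gradient sign method (FGSM) is one of the simplest attacks in the family surveyed above; the following is a minimal PyTorch sketch, with `model`, `x`, and `y` as placeholder names rather than code from the paper.

```python
# Minimal FGSM sketch: perturb the input so the loss increases, within an L-inf ball.
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the sign of the input gradient, then clamp to valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```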

Keywords: Machine learning     Deep neural network     Adversarial example     Adversarial attack     Adversarial defense    

Diffractive Deep Neural Networks at Visible Wavelengths Article

Hang Chen, Jianan Feng, Minwei Jiang, Yiqun Wang, Jie Lin, Jiubin Tan, Peng Jin

Engineering 2021, Volume 7, Issue 10,   Pages 1485-1493 doi: 10.1016/j.eng.2020.07.032

Abstract:

Optical deep learning based on diffractive optical elements offers unique advantages for parallel processing, computational speed, and power efficiency. One landmark method is the diffractive deep neural network (D2NN) based on three-dimensional printing technology operated in the terahertz spectral range. Since the terahertz bandwidth involves limited interparticle coupling and material losses, this paper
extends D2NN to visible wavelengths. A general theory including a revised formula is proposed to solve any contradictions between wavelength, neuron size, and fabrication limitations. A novel visible light D2NN classifier is used to recognize unchanged targets (handwritten digits ranging from 0 to 9) and targets that have been changed (i.e., targets that have been covered or altered) at a visible wavelength of 632.8 nm. The obtained experimental classification accuracy (84%) and numerical classification accuracy (91.57%) quantify the match between the theoretical design and fabricated system performance. The presented framework can be used to apply a D2NN to various practical applications and design other new applications.
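
To make the layer-by-layer modulation idea concrete, the sketch below applies a phase-only "neuron" mask to an optical field and propagates it with a far-field FFT; the Fraunhofer propagator and grid size are simplifying assumptions, not the authors' model.

```python
# Much-simplified single diffractive layer followed by far-field propagation.
import numpy as np

def diffractive_layer(field: np.ndarray, phase: np.ndarray) -> np.ndarray:
    """Each 'neuron' applies a learned phase delay; propagation here is a far-field FFT."""
    modulated = field * np.exp(1j * phase)          # phase-only modulation at the layer
    return np.fft.fftshift(np.fft.fft2(modulated))  # far-field diffraction pattern

field = np.ones((64, 64), dtype=complex)            # plane-wave illumination
phase = np.random.default_rng(0).uniform(0, 2 * np.pi, size=(64, 64))
intensity = np.abs(diffractive_layer(field, phase)) ** 2
print(intensity.shape)  # detector-plane intensity, (64, 64)
```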

Keywords: Optical computation     Optical neural networks     Deep learning     Optical machine learning     Diffractive deep neural networks    

Brain Encoding and Decoding in fMRI with Bidirectional Deep Generative Models Review

Changde Du, Jinpeng Li, Lijie Huang, Huiguang He

Engineering 2019, Volume 5, Issue 5,   Pages 948-953 doi: 10.1016/j.eng.2019.03.010

Abstract:

Brain encoding and decoding via functional magnetic resonance imaging (fMRI) are two important aspects of visual perception neuroscience. Although previous researchers have made significant advances in brain encoding and decoding models, existing methods still require improvement using advanced machine learning techniques. For example, traditional methods usually build the encoding and decoding models separately, and are prone to overfitting on a small dataset. In fact, effectively unifying the encoding and decoding procedures may allow for more accurate predictions. In this paper, we first review the existing encoding and decoding methods and discuss
the potential advantages of a "bidirectional" modeling strategy. Next, we show that there are correspondences between deep neural networks and human visual streams in terms of the architecture and computational rules. Furthermore, deep generative models (e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs)) have produced promising results in studies on brain encoding and decoding. Finally, we propose that the dual learning method, which was originally designed for machine translation tasks, could help to improve the performance of encoding and decoding models by leveraging large-scale unpaired data.
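
Of the bidirectional deep generative models mentioned above, the VAE is the simplest to sketch; the toy encoder/decoder below only illustrates the reparameterization step, with all dimensions chosen arbitrarily.

```python
# Toy VAE sketch showing the reparameterization trick; dimensions are illustrative only.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, in_dim=100, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * z_dim)   # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, in_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

recon, mu, logvar = TinyVAE()(torch.randn(4, 100))
print(recon.shape)  # torch.Size([4, 100])
```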

Keywords: Brain encoding and decoding     Functional magnetic resonance imaging     Deep neural networks     Deep generative models     Dual learning    

Recent advances in efficient computation of deep convolutional neural networks Review

Jian CHENG, Pei-song WANG, Gang LI, Qing-hao HU, Han-qing LU

Frontiers of Information Technology & Electronic Engineering 2018, Volume 19, Issue 1,   Pages 64-77 doi: 10.1631/FITEE.1700789

Abstract: Deep neural networks have evolved remarkably over the past few years and they are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks continue to increase. This poses a significant challenge to the deployment of such networks, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implementation of deep neural networks, a batch of accelerators based on a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC) have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both algorithm and hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher–student networks, compact network design, and hardware accelerators. Finally, we introduce and discuss a few possible future directions.
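
As a concrete example of one surveyed technique (network pruning), the sketch below zeroes out small-magnitude weights; choosing the threshold from the weight magnitudes at a fixed sparsity level is an illustrative choice, not any specific method from the survey.

```python
# Magnitude-based weight pruning sketch.
import torch

def prune_by_magnitude(weight: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out roughly the `sparsity` fraction of smallest-magnitude weights."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).to(weight.dtype)
    return weight * mask

w = torch.randn(64, 64)
print(float((prune_by_magnitude(w) == 0).float().mean()))  # roughly 0.5
```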

Keywords: Deep neural networks     Acceleration     Compression     Hardware accelerator    

A Survey of Accelerator Architectures for Deep Neural Networks Review

Yiran Chen, Yuan Xie, Linghao Song, Fan Chen, Tianqi Tang

Engineering 2020, Volume 6, Issue 3,   Pages 264-274 doi: 10.1016/j.eng.2020.01.007

Abstract:

Recently, due to the availability of big data and the rapid growth of computing power, artificial intelligence (AI) has regained tremendous attention and investment. Machine learning (ML) approaches have been successfully applied to solve many problems in academia and in industry. Although the explosion of big data applications is driving the development of ML, it also imposes severe challenges of data processing speed and scalability on conventional computer systems. Computing platforms that are dedicatedly designed for AI applications have been considered, ranging from a complement to von Neumann platforms to a “must-have” and standalone technical solution. These platforms, which belong to a larger category named “domain-specific computing,” focus on specific customization for AI. In this article, we focus on summarizing the recent advances in accelerator designs for deep neural networks (DNNs)—that is, DNN accelerators. We discuss various architectures that support DNN executions in terms of computing units, dataflow optimization, targeted network topologies, architectures on emerging technologies, and accelerators for emerging applications. We also provide our visions on the future trend of AI chip designs.

Keywords: Deep neural network     Domain-specific architecture     Accelerator    

Estimating Rainfall Intensity Using an Image-Based Deep Learning Model Article

Hang Yin, Feifei Zheng, Huan-Feng Duan, Dragan Savic, Zoran Kapelan

Engineering 2023, Volume 21, Issue 2,   Pages 162-174 doi: 10.1016/j.eng.2021.11.021

Abstract:

Urban flooding is a major issue worldwide, causing huge economic losses and serious threats to public safety. One promising way to mitigate its impacts is to develop a real-time flood risk management system; however, building such a system is often challenging due to the lack of high spatiotemporal rainfall data. While some approaches (i.e., ground rainfall stations or radar and satellite techniques) are available to measure and/or predict rainfall intensity, it is difficult to obtain accurate rainfall data with a desirable spatiotemporal resolution using these methods. This paper proposes an image-based deep learning model to estimate urban rainfall intensity with high spatial and temporal resolution. More specifically, a convolutional neural network (CNN) model called the image-based rainfall CNN (irCNN) model is developed using rainfall images collected from existing dense sensors (i.e., smart phones or transportation cameras) and their corresponding measured rainfall intensity values. The trained irCNN model is subsequently employed to efficiently estimate rainfall intensity based on the sensors' rainfall images. Synthetic rainfall data and real rainfall images are respectively utilized to explore the irCNN's accuracy in theoretically and practically simulating rainfall intensity. The results show that the irCNN model provides rainfall estimates with a mean absolute percentage error ranging between 13.5% and 21.9%, which exceeds the performance of other state-of-the-art modeling techniques in the literature. More importantly, the main feature of the proposed irCNN is its low cost in efficiently acquiring high spatiotemporal urban rainfall data. The irCNN model provides a promising alternative for estimating urban rainfall intensity, which can greatly facilitate the development of urban flood risk management in a real-time manner.
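
The irCNN itself is not published as code here; the sketch below shows the general shape of an image-to-intensity CNN regressor and the mean absolute percentage error metric quoted above, with all layer sizes assumed for illustration.

```python
# Illustrative image-to-rainfall-intensity CNN regressor plus the MAPE metric.
import torch
import torch.nn as nn

class RainfallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(32, 1)  # single output: rainfall intensity

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1))

def mape(pred, target):
    """Mean absolute percentage error, the accuracy measure quoted above."""
    return (100.0 * (pred - target).abs() / target.abs().clamp(min=1e-6)).mean()

pred = RainfallCNN()(torch.rand(4, 3, 64, 64))
print(mape(pred, torch.rand(4, 1)))
```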

Keywords: Urban flooding     Rainfall images     Deep learning model     Convolutional neural networks (CNNs)     Rainfall intensity    

DAN: a deep association neural network approach for personalization recommendation Research Articles

Xu-na Wang, Qing-mei Tan, Xuna@nuaa.edu.cn, tanchina@nuaa.edu.cn

Frontiers of Information Technology & Electronic Engineering 2020, Volume 21, Issue 7,   Pages 963-980 doi: 10.1631/FITEE.1900236

Abstract: The collaborative filtering technology used in traditional recommendation systems has a problem of data sparsity. The traditional matrix decomposition algorithm simply decomposes users and items into a linear model of potential factors. These limitations have led to the low accuracy of traditional recommendation algorithms, and thus to the emergence of recommendation systems based on deep learning. At present, such systems mostly use deep neural networks to model some of the auxiliary information, and in the process of modeling, multiple mapping paths are adopted to map the original input data to the potential vector space. However, these deep algorithms ignore the combined effects of different categories of data, which can have a potential impact on the effectiveness of the recommendation. Aimed at this problem, in this paper we propose a feedforward deep neural network method, called the deep association neural network (DAN), which is based on the joint action of multiple categories of information, for implicit feedback recommendation. Specifically, the underlying input of the model includes not only users and items, but also more auxiliary information. In addition, the impact of the joint action of different types of information on the recommendation is considered. Experiments on an open dataset show the significant improvements made by our proposed method over the other methods. Empirical evidence shows that deep, joint neural networks can provide better recommendation performance.
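
A hedged sketch of a feedforward scoring network for implicit-feedback recommendation, loosely in the spirit of DAN; using only user and item IDs (no auxiliary information) and the embedding sizes below are simplifications, not the authors' model.

```python
# Feedforward implicit-feedback recommender sketch: embed user and item, score with an MLP.
import torch
import torch.nn as nn

class FeedForwardRecommender(nn.Module):
    def __init__(self, n_users=1000, n_items=2000, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, user_ids, item_ids):
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=1)
        return torch.sigmoid(self.mlp(x)).squeeze(1)  # predicted interaction probability

scores = FeedForwardRecommender()(torch.tensor([0, 1]), torch.tensor([5, 7]))
print(scores.shape)  # torch.Size([2])
```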

Keywords: Neural network     Deep learning     Deep association neural network (DAN)     Recommendation    

SmartPaint: a co-creative drawing system based on generative adversarial networks Special Feature on Intelligent Design

Lingyun SUN, Pei CHEN, Wei XIANG, Peng CHEN, Wei-yue GAO, Ke-jun ZHANG

Frontiers of Information Technology & Electronic Engineering 2019, Volume 20, Issue 12,   Pages 1644-1656 doi: 10.1631/FITEE.1900386

Abstract: Artificial intelligence (AI) has played a significant role in imitating and producing large-scale designs such as e-commerce banners. However, it is less successful at creative and collaborative design outputs. Most humans express their ideas as rough sketches, and lack the professional skills to complete pleasing paintings. Existing AI approaches have failed to convert varied user sketches into artistically beautiful paintings while preserving their semantic concepts. To bridge this gap, we have developed SmartPaint, a co-creative drawing system based on generative adversarial networks (GANs), enabling a machine and a human being to collaborate in cartoon landscape painting. SmartPaint trains a GAN using triples of cartoon images, their corresponding semantic label maps, and edge detection maps. The machine can then simultaneously understand the cartoon style and semantics, along with the spatial relationships among the objects in the landscape images. The trained system receives a sketch as a semantic label map input, and automatically synthesizes its edge map for stable handling of varied sketches. It then outputs a creative and fine painting with the appropriate style corresponding to the human’s sketch. Experiments confirmed that the proposed SmartPaint system successfully generates high-quality cartoon paintings.

Keywords: Co-creative drawing     Deep learning     Image generation    

Representation learning via a semi-supervised stacked distance autoencoder for image classification Research Articles

Liang Hou, Xiao-yi Luo, Zi-yang Wang, Jun Liang, jliang@zju.edu.cn

Frontiers of Information Technology & Electronic Engineering 2020, Volume 21, Issue 7,   Pages 963-1118 doi: 10.1631/FITEE.1900116

Abstract: Image classification is an important application of deep learning. In a typical classification task, the classification accuracy is strongly related to the features that are extracted via deep learning methods. An autoencoder is a special type of neural network, often used for dimensionality reduction and feature extraction. The proposed method is based on the traditional autoencoder, incorporating the “distance” information between samples from different categories. The model is called a semi-supervised distance autoencoder. Each layer is first pre-trained in an unsupervised manner. In the subsequent supervised training, the optimized parameters are set as the initial values. To obtain more suitable features, we use a stacked model to replace the basic structure with a single hidden layer. A series of experiments are carried out to test the performance of different models on several datasets, including the MNIST dataset, street view house numbers (SVHN) dataset, German traffic sign recognition benchmark (GTSRB), and CIFAR-10 dataset. The proposed semi-supervised distance autoencoder method is compared with the traditional autoencoder, sparse autoencoder, and supervised autoencoder. Experimental results verify the effectiveness of the proposed model.
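
A sketch of an autoencoder whose loss adds a between-class "distance" term, to illustrate the idea described above; the margin-based separation penalty and the single-hidden-layer structure are assumptions, not the paper's exact formulation.

```python
# Autoencoder with an added term that pushes apart latent codes of different classes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistanceAutoencoder(nn.Module):
    def __init__(self, in_dim=784, hid_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def loss_fn(x, x_rec, z, labels, margin=1.0):
    recon = F.mse_loss(x_rec, x)                     # unsupervised reconstruction term
    dist = torch.cdist(z, z)                         # pairwise distances in latent space
    diff_class = (labels[:, None] != labels[None, :]).float()
    # Penalize different-class pairs whose latent codes are closer than the margin.
    separation = (F.relu(margin - dist) * diff_class).sum() / diff_class.sum().clamp(min=1)
    return recon + separation

x = torch.rand(16, 784); labels = torch.randint(0, 10, (16,))
x_rec, z = DistanceAutoencoder()(x)
print(loss_fn(x, x_rec, z, labels))
```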

Keywords: Autoencoder     Image classification     Semi-supervised learning     Neural network    

Penetration Depth of Projectiles Into Concrete Using Artificial Neural Network

Li Jianguang,Li Yongchi,Wang Yulan

Strategic Study of CAE 2007, Volume 9, Issue 8,   Pages 77-81

Abstract:

In this article, a nonlinear mapping relation between 13 input variables (lp, σyt/σyp, etc.) and the output penetration depth is established by dimensional analysis and artificial neural network theory for the problem of penetration of projectiles into concrete. Moreover, a satisfactory prediction of penetration depth is obtained from an RBF neural network using a group of input sets and corresponding output sets taken from M. J. Forrestal's published work.
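
A minimal sketch of a Gaussian RBF-network regressor of the kind described above; the centers, widths, weights, and random 13-dimensional input are placeholders, not Forrestal's data or the trained model.

```python
# Gaussian RBF network: output = sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2)).
import numpy as np

def rbf_predict(x, centers, widths, weights):
    d2 = np.sum((centers - x) ** 2, axis=1)      # squared distance to each hidden unit
    phi = np.exp(-d2 / (2.0 * widths ** 2))      # Gaussian activations
    return float(phi @ weights)                  # linear output layer

rng = np.random.default_rng(0)
centers = rng.normal(size=(20, 13))   # 20 hidden RBF units, 13 input variables
widths = np.ones(20)
weights = rng.normal(size=20)
print(rbf_predict(rng.normal(size=13), centers, widths, weights))
```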

Keywords: neural networks     dimensional analysis     penetration depth of projectiles into concrete     nonlinear mapping relation     RBF neural networks    

Associative affinity network learning for multi-object tracking Research Articles

Liang Ma, Qiaoyong Zhong, Yingying Zhang, Di Xie, Shiliang Pu, maliang6@hikvision.com, zhongqiaoyong@hikvision.com, zhangyingying7@hikvision.com, xiedi@hikvision.com, pushiliang.hri@hikvision.com

Frontiers of Information Technology & Electronic Engineering 2021, Volume 22, Issue 9,   Pages 1194-1206 doi: 10.1631/FITEE.2000272

Abstract: We propose a joint feature and metric learning architecture, called the associative affinity network (AAN), as an affinity model for multi-object tracking (MOT) in videos. The AAN learns the associative affinity between tracks and detections across frames in an end-to-end manner. Considering flawed detections, the AAN jointly learns bounding box regression, classification, and affinity regression via the proposed multi-task loss. Contrary to networks that are trained with ranking loss, we directly train a binary classifier to learn the associative affinity of each track-detection pair and use a matching cardinality loss to capture information among candidate pairs. The AAN learns a discriminative affinity model for data association to tackle MOT, and can also perform single-object tracking. Based on the AAN, we propose a simple multi-object tracker that achieves competitive performance on the public MOT16 and MOT17 test datasets.
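
Downstream of an affinity model such as the AAN, tracks and detections still have to be matched frame by frame; the sketch below uses SciPy's Hungarian solver on an affinity matrix as a stand-in for the paper's association step, with the threshold chosen arbitrarily.

```python
# Bipartite data association on a track-detection affinity matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(affinity: np.ndarray, min_affinity: float = 0.5):
    """Return (track_idx, det_idx) pairs with affinity above a threshold."""
    rows, cols = linear_sum_assignment(-affinity)  # maximize total affinity
    return [(r, c) for r, c in zip(rows, cols) if affinity[r, c] >= min_affinity]

affinity = np.array([[0.9, 0.2], [0.1, 0.8], [0.4, 0.3]])  # 3 tracks, 2 detections
print(associate(affinity))  # [(0, 0), (1, 1)]
```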

Keywords: Multi-object tracking     Deep neural network     Similarity learning    

Marine target detection based on Marine-Faster R-CNN for navigation radar plane position indicator images Research Article

Xiaolong CHEN, Xiaoqian MU, Jian GUAN, Ningbo LIU, Wei ZHOU, cxlcxl1209@163.com, guanjian_68@163.com

Frontiers of Information Technology & Electronic Engineering 2022, Volume 23, Issue 4,   Pages 630-643 doi: 10.1631/FITEE.2000611

Abstract: As a classic deep learning target detection algorithm, Faster R-CNN (region convolutional neural network) has been widely used in high-resolution synthetic aperture radar (SAR) and inverse SAR (ISAR) image detection. However, for the more common low-resolution radar plane position indicator (PPI) images, it is difficult to achieve good performance. In this paper, taking navigation radar PPI images as an example, a method based on the Marine-Faster R-CNN algorithm is proposed for the case of complex backgrounds (e.g., sea clutter) and complex target characteristics. The method performs feature extraction and target recognition on PPI images generated from radar echoes with a convolutional neural network (CNN). First, to improve the accuracy of detecting marine targets and reduce the false alarm rate, Faster R-CNN was optimized into the Marine-Faster R-CNN in five respects: a new backbone network, anchor size, dense target detection, data sample balance, and scale normalization. Then, a JRC (Japan Radio Co., Ltd.) navigation radar was used to collect echo data under different conditions to build a marine target dataset. Finally, comparisons with the classic Faster R-CNN method and the constant false alarm rate (CFAR) algorithm proved that the proposed method is more accurate and robust, has stronger generalization ability, and can be applied to the detection of marine targets for navigation radar. Its performance was tested with datasets from different observation conditions (sea states, radar parameters, and different targets).
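
The classical baseline mentioned above, cell-averaging CFAR, is easy to sketch for a one-dimensional echo profile; the window sizes and scaling factor below are illustrative, not the settings used in the paper.

```python
# Cell-averaging CFAR sketch: flag cells whose power exceeds a scaled local noise estimate.
import numpy as np

def ca_cfar(power: np.ndarray, guard=2, train=8, scale=15.0) -> np.ndarray:
    detections = np.zeros_like(power, dtype=bool)
    half = guard + train
    for i in range(half, len(power) - half):
        # Training cells on both sides of the cell under test, excluding guard cells.
        window = np.r_[power[i - half:i - guard], power[i + guard + 1:i + half + 1]]
        detections[i] = power[i] > scale * window.mean()
    return detections

echo = np.abs(np.random.default_rng(0).normal(size=200)) ** 2
echo[100] += 30.0                        # inject a strong target return
print(np.flatnonzero(ca_cfar(echo)))     # detected cell indices; includes the target at 100
```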

Keywords: Marine target detection     Navigation radar     Plane position indicator (PPI) images     Convolutional neural network (CNN)     Faster R-CNN (region convolutional neural network) method    

Flexibility Prediction of Aggregated Electric Vehicles and Domestic Hot Water Systems in Smart Grids Article

Junjie Hu, Huayanran Zhou, Yihong Zhou, Haijing Zhang, Lars Nordström, Guangya Yang

Engineering 2021, Volume 7, Issue 8,   Pages 1101-1114 doi: 10.1016/j.eng.2021.06.008

Abstract:

With the growth of intermittent renewable energy generation in power grids, there is an increasing demand for controllable resources to be deployed to guarantee power quality and frequency stability. The flexibility of demand response (DR) resources has become a valuable solution to this problem. However, the prediction of the flexibility of DR resources has not been adequately investigated in existing research. This study applied the temporal convolution network (TCN)-combined transformer, a deep learning technique, to predict the aggregated flexibility of two types of DR resources, that is, electric vehicles (EVs) and domestic hot water systems (DHWSs). The prediction uses historical power consumption data of these DR resources and DR signals (DSs). The prediction can generate the size and maintenance time of the aggregated flexibility. The accuracy of the flexibility prediction results was verified through simulations of case studies. The simulation results show that the size of the flexibility changes under different maintenance times. The proposed DR resource flexibility prediction method demonstrates its application in unlocking demand-side flexibility to provide reserves to grids.
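
The temporal convolution half of the model is the easier part to sketch; the causal, dilated residual block below follows the standard TCN recipe, with channel counts and sequence length assumed for illustration rather than taken from the paper.

```python
# Causal dilated convolution block with a residual connection, in the standard TCN style.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    def __init__(self, channels=16, kernel_size=3, dilation=2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-pad so outputs see only the past
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        out = self.conv(F.pad(x, (self.pad, 0)))
        return torch.relu(out) + x               # residual connection

x = torch.randn(8, 16, 96)                       # e.g., 96 time steps of consumption data
print(CausalConvBlock()(x).shape)                # torch.Size([8, 16, 96])
```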

Keywords: Load flexibility     Electric vehicles     Domestic hot water system     Temporal convolution network-combined transformer     Deep learning    

Two-level hierarchical feature learning for image classification Article

Guang-hui SONG,Xiao-gang JIN,Gen-lang CHEN,Yan NIE

Frontiers of Information Technology & Electronic Engineering 2016, Volume 17, Issue 9,   Pages 897-906 doi: 10.1631/FITEE.1500346

Abstract: In some image classification tasks, similarities among different categories vary, and samples are often misclassified into highly similar categories. To distinguish highly similar categories, more specific features are required so that the classifier can improve the classification performance. In this paper, we propose a novel two-level hierarchical feature learning framework based on the deep convolutional neural network (CNN), which is simple and effective. First, the deep feature extractors of different levels are trained using the transfer learning method that fine-tunes the pre-trained deep CNN model toward the new target dataset. Second, the general feature extracted from all the categories and the specific feature extracted from highly similar categories are fused into a feature vector. Then the final feature representation is fed into a linear classifier. Finally, experiments using the Caltech-256, Oxford Flower-102, and Tasmania Coral Point Count (CPC) datasets demonstrate that the expression ability of the deep features resulting from two-level hierarchical feature learning is powerful. Our proposed method effectively increases the classification accuracy in comparison with flat multiple classification methods.
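
The fusion step described above can be sketched as concatenating the general and specific feature vectors before a linear classifier; the feature dimensions and class count below are placeholders, not the paper's configuration.

```python
# Concatenate a general feature and a specific feature, then apply a linear classifier.
import torch
import torch.nn as nn

general_dim, specific_dim, num_classes = 512, 512, 102

fusion_classifier = nn.Linear(general_dim + specific_dim, num_classes)

def classify(general_feat: torch.Tensor, specific_feat: torch.Tensor) -> torch.Tensor:
    fused = torch.cat([general_feat, specific_feat], dim=1)  # fuse the two feature levels
    return fusion_classifier(fused)

logits = classify(torch.randn(4, general_dim), torch.randn(4, specific_dim))
print(logits.shape)  # torch.Size([4, 102])
```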

Keywords: Transfer learning     Feature learning     Deep convolutional neural network     Hierarchical classification     Spectral clustering    
