Resource Type

Journal Article 834

Conference Videos 28

Conference Information 26

Conference Topics 1

Year

2024 4

2023 108

2022 103

2021 104

2020 77

2019 82

2018 56

2017 61

2016 32

2015 13

2014 22

2013 5

2012 22

2011 6

2010 9

2009 12

2008 19

2007 37

2006 14

2005 14


Keywords

Machine learning 42

Deep learning 34

Artificial intelligence 17

Reinforcement learning 14

Neural network 5

Active learning 4

Additive manufacturing 4

Big data 3

membrane separation 3

spacecraft 3

3D printing 2

5G 2

Accelerator 2

Adaptive dynamic programming 2

Attention mechanism 2

Autonomous driving 2

Bayesian optimization 2

China 2

Controller placement 2



Latent source-specific generative factor learning for monaural speech separation using weighted-factor autoencoder

Jing-jing Chen, Qi-rong Mao, You-cai Qin, Shuang-qing Qian, Zhi-shen Zheng,2221808071@stmail.ujs.edu.cn,mao_qr@ujs.edu.cn,2211908026@stmail.ujs.edu.cn,2211908025@stmail.ujs.edu.cn,3160602062@stmail.ujs.edu.cn

Frontiers of Information Technology & Electronic Engineering 2020, Volume 21, Issue 11,   Pages 1535-1670 doi: 10.1631/FITEE.2000019

Abstract: Much recent progress in monaural speech separation (MSS) has been achieved through a series of architectures based on autoencoders, which use an encoder to condense the input signal into compressed features and then feed these features into a decoder to construct a specific audio source of interest. However, these approaches can neither learn generative factors of the original input for MSS nor construct each audio source in mixed speech. In this study, we propose a novel weighted-factor autoencoder (WFAE) model for MSS, which introduces a regularization loss in the objective function to isolate one source without containing other sources. By incorporating a latent attention mechanism and a supervised source constructor in the separation layer, WFAE can learn source-specific generative factors and a set of discriminative features for each source, leading to MSS performance improvement. Experiments on benchmark datasets show that our approach outperforms existing methods. In terms of three important metrics, WFAE has great success on a relatively challenging MSS case, i.e., speaker-independent MSS.

Keywords: Speech separation     Generative factor     Autoencoder     Deep learning
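The WFAE model itself is defined in the paper; below is a minimal, hypothetical PyTorch sketch of the general idea summarized above: an autoencoder whose latent code is split into source-specific factors, trained with a per-source reconstruction loss plus a regularization term that discourages one source's estimate from also matching the other source. The dimensions and the hinge-style regularizer are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class TwoSourceAE(nn.Module):
    """Toy two-source separation autoencoder (illustrative, not the paper's WFAE)."""
    def __init__(self, dim=257, hidden=256, factor=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * factor))
        self.decoder = nn.Sequential(nn.Linear(factor, hidden), nn.ReLU(),
                                     nn.Linear(hidden, dim))

    def forward(self, mixture):
        z = self.encoder(mixture)            # latent code of the mixed frame
        z1, z2 = z.chunk(2, dim=-1)          # split into source-specific factors
        return self.decoder(z1), self.decoder(z2)

def separation_loss(est1, est2, ref1, ref2, alpha=0.1, margin=1.0):
    mse = nn.functional.mse_loss
    recon = mse(est1, ref1) + mse(est2, ref2)        # reconstruct each target source
    # Regularization: each estimate should stay at least `margin` away (in MSE)
    # from the *other* source, i.e., it should not contain the other source.
    leak = torch.relu(margin - mse(est1, ref2)) + torch.relu(margin - mse(est2, ref1))
    return recon + alpha * leak
```

A training step would feed magnitude-spectrogram frames of the mixture and of the two reference sources.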

Representation learning via a semi-supervised stacked distance autoencoder for image classification Research Articles

Liang Hou, Xiao-yi Luo, Zi-yang Wang, Jun Liang,jliang@zju.edu.cn

Frontiers of Information Technology & Electronic Engineering 2020, Volume 21, Issue 7,   Pages 963-1118 doi: 10.1631/FITEE.1900116

Abstract: Image classification is an important application of deep learning. In a typical classification task, the classification accuracy is strongly related to the features that are extracted via deep learning methods. An autoencoder is a special type of neural network, often used for dimensionality reduction and feature extraction. The proposed method is based on the traditional autoencoder, incorporating the "distance" information between samples from different categories. The model is called a semi-supervised distance autoencoder. Each layer is first pre-trained in an unsupervised manner. In the subsequent supervised training, the optimized parameters are set as the initial values. To obtain more suitable features, we use a stacked model to replace the basic structure with a single hidden layer. A series of experiments are carried out to test the performance of different models on several datasets, including the MNIST dataset, street view house numbers (SVHN) dataset, German traffic sign recognition benchmark (GTSRB), and CIFAR-10 dataset. The proposed semi-supervised distance autoencoder method is compared with the traditional autoencoder, sparse autoencoder, and supervised autoencoder. Experimental results verify the effectiveness of the proposed model.

Keywords: Autoencoder     Image classification     Semi-supervised learning     Neural network
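As a rough illustration of the "distance" idea described above (not the paper's exact stacked model), a single autoencoder layer can be trained with a reconstruction loss plus a hinge term that pushes hidden codes of differently labeled samples apart; only labeled samples contribute to that term, which is where the semi-supervision enters. The layer sizes and the margin below are assumptions.

```python
import torch
import torch.nn as nn

class DistanceAE(nn.Module):
    """One autoencoder layer; several of these would be stacked for the stacked variant."""
    def __init__(self, dim=784, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        h = self.enc(x)
        return h, self.dec(h)

def distance_ae_loss(model, x, y, beta=0.01, margin=5.0):
    """x: (N, dim) batch; y: (N,) integer labels; batch assumed to span >= 2 classes."""
    h, x_hat = model(x)
    recon = nn.functional.mse_loss(x_hat, x)             # unsupervised reconstruction
    cross = y.unsqueeze(0) != y.unsqueeze(1)              # mask of different-class pairs
    dist = torch.cdist(h, h)                              # pairwise code distances
    push = torch.relu(margin - dist)[cross].mean()        # penalize codes that are too close
    return recon + beta * push
```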

Deep 3D reconstruction: methods, data, and challenges Review Article

Caixia Liu, Dehui Kong, Shaofan Wang, Zhiyong Wang, Jinghua Li, Baocai Yin,lcxxib@emails.bjut.edu.cn,wangshaofan@bjut.edu.cn

Frontiers of Information Technology & Electronic Engineering 2021, Volume 22, Issue 5,   Pages 615-766 doi: 10.1631/FITEE.2000068

Abstract: Three-dimensional (3D) reconstruction of shapes is an important research topic in the fields of computer vision, computer graphics, pattern recognition, and virtual reality. Existing 3D reconstruction methods usually suffer from two bottlenecks: (1) they involve multiple manually designed states which can lead to cumulative errors, but can hardly learn semantic features of 3D shapes automatically; (2) they depend heavily on the content and quality of images, as well as precisely calibrated cameras. As a result, it is difficult to improve the reconstruction accuracy of those methods. 3D reconstruction methods based on deep learning overcome both of these bottlenecks by automatically learning semantic features of 3D shapes from low-quality images using deep networks. However, while these methods have various architectures, in-depth analyses and comparisons of them are unavailable so far. We present a comprehensive survey of 3D reconstruction methods based on deep learning. First, based on different deep learning model architectures, we divide 3D reconstruction methods based on deep learning into four types: recurrent neural network based, deep autoencoder based, generative adversarial network based, and convolutional neural network based methods, and analyze the corresponding methodologies carefully. Second, we investigate four representative databases that are commonly used by the above methods in detail. Third, we give a comprehensive comparison of 3D reconstruction methods based on deep learning, which consists of the results of different methods with respect to the same database, the results of each method with respect to different databases, and the robustness of each method with respect to the number of views. Finally, we discuss future development of 3D reconstruction methods based on deep learning.

Keywords: Deep learning model     3D reconstruction     Recurrent neural network     Deep autoencoder     Generative adversarial network     Convolutional neural network

Brain Encoding and Decoding in fMRI with Bidirectional Deep Generative Models Review

Changde Du, Jinpeng Li, Lijie Huang, Huiguang He

Engineering 2019, Volume 5, Issue 5,   Pages 948-953 doi: 10.1016/j.eng.2019.03.010

Abstract:

Brain encoding and decoding via functional magnetic resonance imaging (fMRI) are two important aspects of visual perception neuroscience. Although previous researchers have made significant advances in brain encoding and decoding models, existing methods still require improvement using advanced machine learning techniques. For example, traditional methods usually build the encoding and decoding models separately, and are prone to overfitting on a small dataset. In fact, effectively unifying the encoding and decoding procedures may allow for more accurate predictions. In this paper, we first review the existing encoding and decoding methods and discuss the potential advantages of a "bidirectional" modeling strategy. Next, we show that there are correspondences between deep neural networks and human visual streams in terms of the architecture and computational rules. Furthermore, deep generative models (e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs)) have produced promising results in studies on brain encoding and decoding. Finally, we propose that the dual learning method, which was originally designed for machine translation tasks, could help to improve the performance of encoding and decoding models by leveraging large-scale unpaired data.

Keywords: Brain encoding and decoding     Functional magnetic resonance imaging     Deep neural networks     Deep generative models     Dual learning    
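For readers unfamiliar with the generative models named in this review, a minimal VAE sketch is given below (illustrative only; it is not one of the surveyed encoding/decoding models). It shows the reparameterized encoder/decoder pair and the negative ELBO that such bidirectional generative models optimize.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * latent)    # outputs mean and log-variance
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)       # reparameterization trick
        x_hat = self.dec(z)
        recon = nn.functional.mse_loss(x_hat, x, reduction="sum")     # reconstruction term
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
        return recon + kl                                             # negative ELBO
```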

Battle damage assessment based on an improved Kullback-Leibler divergence sparse autoencoder Article

Zong-feng QI, Qiao-qiao LIU, Jun WANG, Jian-xun LI

Frontiers of Information Technology & Electronic Engineering 2017, Volume 18, Issue 12,   Pages 1991-2000 doi: 10.1631/FITEE.1601395

Abstract: The number of nodes in the hidden layer of a deep learning network is quite difficult to determine with traditional methods. To solve this problem, an improved Kullback-Leibler divergence sparse autoencoder (KL-SAE) is proposed in this paper, which can be applied to battle damage assessment (BDA). This method can automatically select the hidden-layer features that contribute most to data reconstruction and abandon those that contribute least. Therefore, the structure of the network can be modified. In addition, the method can automatically select hidden-layer features without loss of prediction accuracy and can increase the computation speed. Experiments on University of California-Irvine (UCI) data sets and BDA for battle damage data demonstrate that the method outperforms other reference data-driven methods. The following results can be found from this paper. First, the improved KL-SAE regression network can guarantee the prediction accuracy and increase the speed of training and prediction. Second, the proposed network can automatically select effective hidden-layer features and modify the network structure by optimizing the number of hidden-layer nodes.

Keywords: Battle damage assessment     Improved Kullback-Leibler divergence sparse autoencoder     Structural optimization     Feature selection    
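The standard KL-divergence sparsity penalty that KL-SAE builds on is easy to state in code (the paper's improvement to it is not reproduced here): the mean activation of each hidden unit is pulled toward a small target sparsity, and units whose mean activation stays negligible are natural candidates for removal, which is roughly the automatic feature selection described above.

```python
import torch

def kl_sparsity_penalty(hidden, rho=0.05, eps=1e-8):
    """hidden: (batch, n_hidden) activations in (0, 1), e.g. sigmoid outputs."""
    rho_hat = hidden.mean(dim=0).clamp(eps, 1 - eps)     # mean activation per hidden unit
    kl = rho * torch.log(rho / rho_hat) \
         + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))
    return kl.sum()

# Total loss of a sparse autoencoder: reconstruction_loss + beta * kl_sparsity_penalty(hidden)
```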

Efficient normalization for quantitative evaluation of the driving behavior using a gated auto-encoder Research Articles

Xin HE, Zhe ZHANG, Li XU, Jiapei YU,xinhe_ee@zju.edu.cn,xupower@zju.edu.cn

Frontiers of Information Technology & Electronic Engineering 2022, Volume 23, Issue 3,   Pages 452-462 doi: 10.1631/FITEE.2000667

Abstract: Normalization of the driving behavior is important for a fair evaluation of the driving style. The longitudinal control of a vehicle is investigated in this study. The task can be considered as mapping of the driving behavior in a different environment to the uniform condition. Unlike the model-based approach in previous work, where a driver model is necessarily employed to conduct the driving cycle test, the approach we propose directly normalizes the driving behavior using an auto-encoder (AE) when following a standard speed profile. To ensure a positive correlation between the vehicle speed and the driving behavior, a gate constraint is imposed between the encoder and decoder to form a gated AE (gAE). This approach is model-free and efficient. The proposed approach is tested for consistency with the model-based approach and for its applications to quantitative evaluation of the driving behavior and fuel consumption analysis. Simulations are conducted to verify the effectiveness of the proposed scheme.

Keywords: Driving behavior     Normalization     Gated auto-encoder     Quantitative evaluation    

Toward Human-in-the-loop AI: Enhancing Deep Reinforcement Learning Via Real-time Human Guidance for Autonomous Driving Article

Jingda Wu, Zhiyu Huang, Zhongxu Hu, Chen Lv

Engineering 2023, Volume 21, Issue 2,   Pages 75-91 doi: 10.1016/j.eng.2022.05.017

Abstract:

Due to its limited intelligence and abilities, machine learning is currently unable to handle various situations and thus cannot completely replace humans in real-world applications. Because humans exhibit robustness and adaptability in complex scenarios, it is crucial to introduce humans into the training loop of artificial intelligence (AI), leveraging human intelligence to further advance machine learning algorithms. In this study, a real-time human-guidance-based (Hug)-deep reinforcement learning (DRL) method is developed for policy training in an end-to-end autonomous driving case. With our newly designed mechanism for control transfer between humans and automation, humans are able to intervene and correct the agent's unreasonable actions in real time when necessary during the model training process. Based on this human-in-the-loop guidance mechanism, an improved actor-critic architecture with modified policy and value networks is developed. The fast convergence of the proposed Hug-DRL allows real-time human guidance actions to be fused into the agent's training loop, further improving the efficiency and performance of DRL. The developed method is validated by human-in-the-loop experiments with 40 subjects and compared with other state-of-the-art learning approaches. The results suggest that the proposed method can effectively enhance the training efficiency and performance of the DRL algorithm under human guidance without imposing specific requirements on participants' expertise or experience.

Keywords: Human-in-the-loop AI     Deep reinforcement learning     Human guidance     Autonomous driving    
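A hypothetical sketch of the control-transfer idea described above: during a rollout, a human action (when present) overrides the agent's action, and the transition is tagged so the learner can treat human-guided samples differently. The `env`, `agent`, `human_interface`, and `buffer` objects are assumed interfaces, not the paper's implementation; the Hug-DRL actor-critic updates are not shown.

```python
def rollout_step(env, state, agent, human_interface, buffer):
    """One environment step with optional real-time human override."""
    agent_action = agent.act(state)
    human_action = human_interface.poll()                # None when the human does not intervene
    action = human_action if human_action is not None else agent_action
    next_state, reward, done, _ = env.step(action)       # gym-style step
    buffer.append({
        "state": state, "action": action, "reward": reward,
        "next_state": next_state, "done": done,
        "human_guided": human_action is not None,        # lets the learner weight these samples
    })
    return next_state, done
```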

Automatic traceability link recovery via active learning Research Articles

Tian-bao Du, Guo-hua Shen, Zhi-qiu Huang, Yao-shen Yu, De-xiang Wu,tbdu_312@outlook.com,ghshen@nuaa.edu.cn,zqhuang@nuaa.edu.cn

Frontiers of Information Technology & Electronic Engineering 2020, Volume 21, Issue 8,   Pages 1217-1225 doi: 10.1631/FITEE.1900222

Abstract: Traceability link recovery (TLR) is an important and costly software task that requires humans to establish relationships between source and target artifact sets within the same project. Previous research has proposed establishing traceability links with machine learning approaches. However, current machine learning approaches cannot be well applied to projects without traceability information (links), because training an effective predictive model requires humans to label too many traceability links. To save manpower, we propose a new TLR approach based on active learning (AL), which is called the AL-based approach. We evaluate the AL-based approach on seven commonly used traceability datasets and compare it with an information retrieval based approach and a state-of-the-art machine learning approach. The results indicate that the AL-based approach outperforms the other two approaches in terms of F-score.

Keywords: Automatic     Traceability link recovery     Manpower     Active learning    
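A generic pool-based active-learning loop with uncertainty sampling is sketched below to make the idea concrete; it is not the paper's AL-based TLR pipeline. `X_pool` is assumed to hold feature vectors of candidate source–target pairs, and `oracle(i)` stands for a human labelling whether pair `i` is a true link.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning(X_pool, oracle, n_init=20, n_rounds=10, batch=10, seed=0):
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    y = {i: oracle(i) for i in labeled}              # initial labels (assumed to cover both classes)
    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        clf.fit(X_pool[labeled], [y[i] for i in labeled])
        prob = clf.predict_proba(X_pool)[:, 1]
        uncertainty = -np.abs(prob - 0.5)            # probability closest to 0.5 = most uncertain
        picked = [i for i in np.argsort(uncertainty)[::-1] if i not in y][:batch]
        for i in picked:                             # ask the human only about these pairs
            y[i] = oracle(i)
        labeled.extend(picked)
    return clf
```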

A Geometric Understanding of Deep Learning Article

Na Lei, Dongsheng An, Yang Guo, Kehua Su, Shixia Liu, Zhongxuan Luo, Shing-Tung Yau, Xianfeng Gu

Engineering 2020, Volume 6, Issue 3,   Pages 361-374 doi: 10.1016/j.eng.2019.09.010

Abstract:

This work introduces an optimal transportation (OT) view of generative adversarial networks (GANs). Natural datasets have intrinsic patterns, which can be summarized as the manifold distribution principle: the distribution of a class of data is close to a low-dimensional manifold. GANs mainly accomplish two tasks: manifold learning and probability distribution transformation. The latter can be carried out using the classical OT method. From the OT perspective, the generator computes the OT map, while the discriminator computes the Wasserstein distance between the generated data distribution and the real data distribution; both can be reduced to a convex geometric optimization process. Furthermore, OT theory discovers the intrinsic collaborative—instead of competitive—relation between the generator and the discriminator, and the fundamental reason for mode collapse. We also propose a novel generative model, which uses an autoencoder (AE) for manifold learning and OT map for probability distribution transformation. This AE–OT model improves the theoretical rigor and transparency, as well as the computational stability and efficiency; in particular, it eliminates the mode collapse. The experimental results validate our hypothesis, and demonstrate the advantages of our proposed model.

Keywords: Generative     Adversarial     Deep learning     Optimal transportation     Mode collapse    
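As a tiny, self-contained illustration of the OT viewpoint (not the AE–OT model): for two equal-size point clouds with uniform weights and squared-Euclidean cost, the optimal transport plan is a permutation and can be computed exactly as an assignment problem.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def ot_map(generated, real):
    """Return the permutation sending each generated sample to a real sample,
    together with the squared 2-Wasserstein distance between the empirical measures."""
    cost = cdist(generated, real, metric="sqeuclidean")   # pairwise transport costs
    rows, cols = linear_sum_assignment(cost)              # exact OT for uniform weights
    w2_squared = cost[rows, cols].mean()
    return cols, w2_squared

# Example: two 2D point clouds offset by 3 units.
# cols, w2sq = ot_map(np.random.randn(100, 2), np.random.randn(100, 2) + 3.0)
```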

SmartPaint: a co-creative drawing system based on generative adversarial networks Special Feature on Intelligent Design

Lingyun SUN, Pei CHEN, Wei XIANG, Peng CHEN, Wei-yue GAO, Ke-jun ZHANG

Frontiers of Information Technology & Electronic Engineering 2019, Volume 20, Issue 12,   Pages 1644-1656 doi: 10.1631/FITEE.1900386

Abstract: Artificial intelligence (AI) has played a significant role in imitating and producing large-scale designs such as e-commerce banners. However, it is less successful at creative and collaborative design outputs. Most humans express their ideas as rough sketches, and lack the professional skills to complete pleasing paintings. Existing AI approaches have failed to convert varied user sketches into artistically beautiful paintings while preserving their semantic concepts. To bridge this gap, we have developed SmartPaint, a co-creative drawing system based on generative adversarial networks (GANs), enabling a machine and a human being to collaborate in cartoon landscape painting. SmartPaint trains a GAN using triples of cartoon images, their corresponding semantic label maps, and edge detection maps. The machine can then simultaneously understand the cartoon style and semantics, along with the spatial relationships among the objects in the landscape images. The trained system receives a sketch as a semantic label map input, and automatically synthesizes its edge map for stable handling of varied sketches. It then outputs a creative and fine painting with the appropriate style corresponding to the human’s sketch. Experiments confirmed that the proposed SmartPaint system successfully generates high-quality cartoon paintings.

Keywords: Co-creative drawing     Deep learning     Image generation    

Deep learning compact binary codes for fingerprint indexing None

Chao-chao BAI, Wei-qiang WANG, Tong ZHAO, Ru-xin WANG, Ming-qiang LI

Frontiers of Information Technology & Electronic Engineering 2018, Volume 19, Issue 9,   Pages 1112-1123 doi: 10.1631/FITEE.1700420

Abstract:

With the rapid growth in fingerprint databases, it has become necessary to develop excellent fingerprint indexing to achieve efficiency and accuracy. Fingerprint indexing has been widely studied with real-valued features, but few studies focus on binary feature representation, which is more suitable to identify fingerprints efficiently in large-scale fingerprint databases. In this study, we propose a deep compact binary minutia cylinder code (DCBMCC) as an effective and discriminative feature representation for fingerprint indexing. Specifically, the minutia cylinder code (MCC), as the state-of-the-art fingerprint representation, is analyzed and its shortcomings are revealed. Accordingly, we propose a novel fingerprint indexing method based on deep neural networks to learn DCBMCC. Our novel network restricts the penultimate layer to directly output binary codes. Moreover, we incorporate independence, balance, quantization-loss-minimum, and similarity-preservation properties in this learning process. Eventually, a multi-index hashing (MIH) based fingerprint indexing scheme further speeds up the exact search in the Hamming space by building multiple hash tables on binary code substrings. Furthermore, numerous experiments on public databases show that the proposed approach is an outstanding fingerprint indexing method since it has an extremely small error rate with a very low penetration rate.

Keywords: Fingerprint indexing     Minutia cylinder code     Deep neural network     Multi-index hashing    
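The multi-index hashing step can be sketched independently of the learned codes (this is the generic MIH idea, with details assumed rather than taken from the paper): each binary code is split into m disjoint substrings and indexed in m hash tables; by the pigeonhole principle, any code within Hamming radius r < m of the query matches it exactly on at least one substring, so only those colliding candidates need a full Hamming-distance check. Larger radii require probing near-neighbor substrings as well, which is omitted here.

```python
from collections import defaultdict

def build_mih(codes, m=4):
    """codes: equal-length bit strings (length divisible by m), e.g. '01101001...'."""
    k = len(codes[0]) // m
    tables = [defaultdict(list) for _ in range(m)]
    for idx, code in enumerate(codes):
        for t in range(m):
            tables[t][code[t * k:(t + 1) * k]].append(idx)   # index each substring
    return tables, k

def query_mih(query, codes, tables, k, radius):
    candidates = set()
    for t, table in enumerate(tables):                        # collect substring collisions
        candidates.update(table.get(query[t * k:(t + 1) * k], []))
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    return [i for i in candidates if hamming(query, codes[i]) <= radius]
```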

Attention-based efficient robot grasp detection network Research Article

Xiaofei QIN, Wenkai HU, Chen XIAO, Changxiang HE, Songwen PEI, Xuedian ZHANG,xiaofei.qin@usst.edu.cn,obmmd_zxd@163.com

Frontiers of Information Technology & Electronic Engineering 2023, Volume 24, Issue 10,   Pages 1430-1444 doi: 10.1631/FITEE.2200502

Abstract: To balance the inference speed and detection accuracy of a grasp detection algorithm, which are both important for robot grasping tasks, we propose an encoder–decoder structured pixel-level grasp detection network named the attention-based efficient grasp detection network (AE-GDN). Three spatial attention modules are introduced in the encoder stages to enhance the detailed information, and three channel attention modules are introduced in the decoder stages to extract more semantic information. Several lightweight and efficient DenseBlocks are used to connect the encoder and decoder paths to improve the feature modeling capability of AE-GDN. A high intersection over union (IoU) value between the predicted grasp rectangle and the ground truth does not necessarily mean a high-quality grasp configuration, but might cause a collision. This is because traditional IoU loss calculation methods treat the center part of the predicted rectangle as having the same importance as the area around the grippers. We design a new IoU loss calculation method based on an hourglass box matching mechanism, which creates good correspondence between high IoUs and high-quality grasp configurations. AE-GDN achieves accuracies of 98.9% and 96.6% on the Cornell and Jacquard datasets, respectively. The inference speed reaches 43.5 frames per second with only about 1.2×10 parameters. The proposed AE-GDN has also been deployed on a practical robotic arm grasping system and performs grasping well. Codes are available at https://github.com/robvincen/robot_gradet.

Keywords: Robot grasp detection     Attention mechanism     Encoder–decoder     Neural network
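For reference, the conventional axis-aligned IoU that the abstract contrasts with is computed as below; the hourglass box matching loss proposed in the paper reweights this overlap and is not reproduced here, and real grasp rectangles are usually oriented, so treat this as a simplified baseline.

```python
def iou(box_a, box_b):
    """Axis-aligned IoU; boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)          # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)
```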

Embedding expert demonstrations into clustering buffer for effective deep reinforcement learning Research Article

Shihmin WANG, Binqi ZHAO, Zhengfeng ZHANG, Junping ZHANG, Jian PU

Frontiers of Information Technology & Electronic Engineering 2023, Volume 24, Issue 11,   Pages 1541-1556 doi: 10.1631/FITEE.2300084

Abstract: As one of the most fundamental topics in reinforcement learning (RL), sample efficiency is essential to the deployment of deep RL algorithms. Unlike most existing exploration methods that sample an action from different types of posterior distributions, we focus on the sampling process in the policy and propose an efficient selective sampling approach to improve sample efficiency by modeling the internal hierarchy of the environment. Specifically, we first employ clustering methods in the policy to generate an action candidate set. Then we introduce a clustering buffer for modeling the internal hierarchy, which consists of on-policy data, off-policy data, and expert data, to evaluate actions from the clusters in the action candidate set in the exploration stage. In this way, our approach is able to take advantage of the supervision information in the expert demonstration data. Experiments on six different continuous locomotion environments demonstrate superior performance and faster convergence of selective sampling. In particular, on the LGSVL task, our method can reduce the number of convergence steps by 46.7% and the convergence time by 28.5%. Furthermore, our code is open-source for reproducibility. The code is available at https://github.com/Shihwin/SelectiveSampling.

Keywords: Reinforcement learning     Sample efficiency     Sampling process     Clustering methods     Autonomous driving    

Visual knowledge guided intelligent generation of Chinese seal carving Research Article

Kejun ZHANG, Rui ZHANG, Yehang YIN, Yifei LI, Wenqi WU, Lingyun SUN, Fei WU, Huanghuang DENG, Yunhe PAN

Frontiers of Information Technology & Electronic Engineering 2022, Volume 23, Issue 10,   Pages 1479-1493 doi: 10.1631/FITEE.2100094

Abstract:

We digitally reproduce the process of resource collaboration, design creation, and visual presentation of the Chinese seal carving art. We develop an intelligent art-generation system (Zhejiang University Intelligent System, http://www.next.zju.edu.cn/seal/; the website of the search and layout system is http://www.next.zju.edu.cn/seal/search_app/) to deal with the difficulty of seal carving generation using a visual knowledge guided approach. The knowledge base in this study is the Qiushi Database, which consists of open datasets of images of seal characters and seal stamps. We propose a seal character generation method based on visual knowledge, guided by the database and expertise. Furthermore, to create the layout of the seal, we propose a deformation algorithm to adjust the seal characters and calculate layout parameters from the database and knowledge to achieve an intelligent structure. Experimental results show that this method and system can effectively deal with the difficulties in the generation of seal carving. Our work provides theoretical and applied references for the rebirth and innovation of the seal carving art.

Keywords: Seal-carving     Intelligent generation     Deep learning     Parametric modeling     Computational art    

A subband excitation substitute based scheme for narrowband speech watermarking Article

Wei LIU, Ai-qun HU

Frontiers of Information Technology & Electronic Engineering 2017, Volume 18, Issue 5,   Pages 627-643 doi: 10.1631/FITEE.1601503

Abstract: We propose a new narrowband speech watermarking scheme by replacing part of the speech with a scaled and spectrally shaped hidden signal. Theoretically, it is proved that if a small amount of host speech is modified, then not only an ideal channel model for hidden communication can be established, but also high imperceptibility and good intelligibility can be achieved. Furthermore, a practical system implementation is proposed. At the embedder, the power normalization criterion is first imposed on a passband watermark signal by forcing its power level to be the same as the original passband excitation of the cover speech, and a synthesis filter is then used to spectrally shape the scaled watermark signal. At the extractor, a bandpass filter is first used to get rid of the out-of-band signal, and an analysis filter is then employed to compensate for the distortion introduced by the synthesis filter. Experimental results show that the data rate is as high as 400 bits/s with better bandwidth efficiency, and good imperceptibility is achieved. Moreover, this method is robust against various attacks existing in real applications.

Keywords: Analysis filter     Linear prediction     Narrowband speech watermarking     Passband excitation replacement     Power normalization     Spectral envelope shaping     Synthesis filter    
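The power-normalization criterion described above has a simple core, sketched here with the framing, synthesis/analysis filters, and spectral shaping omitted (only the scaling factor is shown; everything else is assumed context):

```python
import numpy as np

def power_normalize(watermark, passband_excitation, eps=1e-12):
    """Scale the passband watermark so its power matches the excitation it replaces."""
    p_wm = np.mean(watermark ** 2)
    p_ex = np.mean(passband_excitation ** 2)
    return watermark * np.sqrt(p_ex / (p_wm + eps))
```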
