Resource Type: Journal Article (671)

Year: 2023 (35), 2022 (25), 2021 (44), 2020 (26), 2019 (33), 2018 (42), 2017 (44), 2016 (26), 2015 (32), 2014 (34), 2013 (24), 2012 (26), 2011 (26), 2010 (36), 2009 (20), 2008 (21), 2007 (30), 2006 (22), 2005 (28), 2004 (25)

Keywords: risk analysis (12), ANSYS (4), Machine learning (4), analysis (4), correlation analysis (4), ecological civilization (4), finite element analysis (4), numerical simulation (4), reliability (4), numerical analysis (3), 2035 (2), Active learning (2), Adverse geology (2), Artificial intelligence (2), BNLAS (2), Big data analytics (2), COVID-19 (2), DX pile (2), Deep learning (2)


Vector soliton and noise-like pulse generation using a Ti3C2 MXene material in a fiber laser Research

Dongsu Jeong, Dohyun Kim, Yoonho Seo (jdsvs2979@korea.ac.kr, davydo@korea.ac.kr, yoonhoseo@korea.ac.kr)

Frontiers of Information Technology & Electronic Engineering 2021, Volume 22, Issue 3,   Pages 287-436 doi: 10.1631/FITEE.1900649

Abstract: With technological advancements, weapon system development has become increasingly complex and costly. Using modeling and simulation (M&S) technology in the conceptual design stage is effective in reducing the development time and cost of weapons. One way to reduce the complexity and trial-and-error associated with weapon development using M&S technology is to develop combat scenarios to review the functions assigned to new weapons. Although M&S technology is applicable, it is difficult to analyze how effectively the weapons are functioning, because of the dynamic features inherent in modeling, which considers interrelations among different weapon entities. To support review of weapon functions including these characteristics, this study develops a method to model the interactions between weapons in the combat scenario. This method includes the following three steps: (1) construct virtual models by converting the weapons and the weapon functions into their corresponding components; (2) generate the combat process from the metamodel, which is derived from the interrelations among weapons under consideration using reasoning rules; (3) develop a process-based model that describes weapon functions by combining the combat process with the virtual models. Then, a process-based modeling (PBM) system based on this method is implemented. The case study executed on this system shows that it is useful in deriving process-based models from various combat scenarios, analyzing weapon functions using the derived models, and reducing weapon development issues in the conceptual design stage.

Keywords: Weapon system     Process-based modeling (PBM)     Combat scenario     Interaction analysis     Metamodel     Petri net    
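The abstract above describes combat processes as interactions among weapon entities in a Petri-net style. As a rough structural illustration only (not the PBM system of the paper), the sketch below fires transitions of a minimal place/transition net; all place and transition names are hypothetical.

```python
# Minimal place/transition (Petri) net sketch; illustrative only,
# not the PBM system described in the abstract.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Hypothetical combat-process fragment: a sensor detects a target,
# then a weapon engages it.
net = PetriNet({"target_present": 1, "sensor_ready": 1, "weapon_ready": 1})
net.add_transition("detect", {"target_present": 1, "sensor_ready": 1},
                   {"target_tracked": 1, "sensor_ready": 1})
net.add_transition("engage", {"target_tracked": 1, "weapon_ready": 1},
                   {"target_engaged": 1})
net.fire("detect")
net.fire("engage")
print(net.marking)
```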

Interactive visual labelling versus active learning: an experimental comparison Research

Mohammad CHEGINI, Jürgen BERNARD, Jian CUI, Fatemeh CHEGINI, Alexei SOURIN, Keith ANDREWS, Tobias SCHRECK

Frontiers of Information Technology & Electronic Engineering 2020, Volume 21, Issue 4,   Pages 524-535 doi: 10.1631/FITEE.1900549

Abstract: Methods from supervised machine learning allow the classification of new data automatically and are tremendously helpful for data analysis. The quality of supervised machine learning depends not only on the type of algorithm used, but also on the quality of the labelled dataset used to train the classifier. Labelling instances in a training dataset is often done manually relying on selections and annotations by expert analysts, and is often a tedious and time-consuming process. Active learning algorithms can automatically determine a subset of data instances for which labels would provide useful input to the learning process. Interactive visual labelling techniques are a promising alternative, providing effective visual overviews from which an analyst can simultaneously explore data records and select items to label. By putting the analyst in the loop, higher accuracy can be achieved in the resulting classifier. While initial results of interactive visual labelling techniques are promising in the sense that user labelling can improve supervised learning, many aspects of these techniques are still largely unexplored. This paper presents a study conducted using the mVis tool to compare three interactive visualisations, similarity map, scatterplot matrix (SPLOM), and parallel coordinates, with each other and with active learning for the purpose of labelling a multivariate dataset. The results show that all three interactive visual labelling techniques surpass active learning algorithms in terms of classifier accuracy, and that users subjectively prefer the similarity map over SPLOM and parallel coordinates for labelling. Users also employ different labelling strategies depending on the visualisation used.

Keywords: Interactive visual labelling     Active learning     Visual analytics    
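As a point of reference for the active-learning baseline compared in the study above, the following sketch shows a generic uncertainty-sampling loop on synthetic data. It is a minimal illustration under assumed settings, not the specific algorithm used in the paper or in the mVis tool.

```python
# Minimal uncertainty-sampling active-learning loop (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)
labelled = list(rng.choice(len(X), size=10, replace=False))   # initial seed labels
pool = [i for i in range(len(X)) if i not in labelled]

clf = LogisticRegression(max_iter=1000)
for _ in range(20):                       # 20 labelling rounds
    clf.fit(X[labelled], y[labelled])
    proba = clf.predict_proba(X[pool])
    # Least-confidence criterion: query the item the model is least sure about.
    query = pool[int(np.argmin(proba.max(axis=1)))]
    labelled.append(query)                # the "oracle" provides the true label
    pool.remove(query)

print("final training-set size:", len(labelled))
print("accuracy on full data:", clf.score(X, y))
```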

Group discussion based on the model of interactive genetic algorithms

Song Dongming, Zhu Yaoqin, Wu Huizhong

Strategic Study of CAE 2009, Volume 11, Issue 11,   Pages 64-69

Abstract:

Discussion of complex decision-making problems in the hall for workshop of metasynthetic engineering (HWME) requires that the experts' qualitative opinions eventually converge. To investigate how to converge these opinions, an approach to expert group discussion is proposed on the basis of the model of interactive genetic algorithms, in which expert group thought and computer technology are tightly integrated and mutual recognition of the community opinion is reached. Practice shows that the optimal solution of a complicated decision-making problem is obtained easily, and that the approach is effective and in accord with the practical discussion process.

Keywords: hall for workshop of metasynthetic engineering     complex decision-making problem     interactive genetic algorithm     performance target demonstration    
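To make the interactive-GA idea concrete, here is a minimal sketch in which each candidate opinion (encoded as a bit string) is evolved under a fitness score that would normally come from the expert group. The expert evaluation is stubbed out with a simple function, and the encoding and operators are assumptions for illustration, not the paper's formulation.

```python
# Interactive genetic algorithm sketch: fitness comes from (simulated) expert scores.
import random

random.seed(1)
GENES, POP, GENERATIONS = 16, 12, 10

def expert_score(candidate):
    # Stand-in for interactive evaluation by the expert group.
    # Here we pretend the hypothetical "ideal opinion" is all ones.
    return sum(candidate)

def crossover(a, b):
    cut = random.randint(1, GENES - 1)
    return a[:cut] + b[cut:]

def mutate(c, rate=0.05):
    return [g ^ 1 if random.random() < rate else g for g in c]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for gen in range(GENERATIONS):
    ranked = sorted(population, key=expert_score, reverse=True)
    parents = ranked[:POP // 2]                    # keep the better half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=expert_score)
print("best candidate:", best, "score:", expert_score(best))
```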

Human hip joint center analysis for biomechanical design of a hip joint exoskeleton Article

Wei YANG, Can-jun YANG, Ting XU

Frontiers of Information Technology & Electronic Engineering 2016, Volume 17, Issue 8,   Pages 792-802 doi: 10.1631/FITEE.1500286

Abstract: We propose a new method for the customized design of hip exoskeletons based on the optimization of the human-machine physical interface to improve user comfort. The approach is based on mechanisms designed to follow the natural trajectories of the human hip as the flexion angle varies during motion. The motions of the hip joint center with variation of the flexion angle were measured and the resulting trajectory was modeled. An exoskeleton mechanism capable of following the hip center's movement was designed to cover the full motion ranges of flexion and abduction angles, and was adopted in a lower extremity assistive exoskeleton. The resulting design can reduce human-machine interaction forces by 24.1% and 76.0% during hip flexion and abduction, respectively, leading to a more ergonomic and comfortable-to-wear exoskeleton system. The human-exoskeleton model was analyzed to further validate the decrease of the hip joint internal force during hip joint flexion or abduction by applying the resulting design.

Keywords: Hip joint exoskeleton     Hip joint center     Compatible joint     Human-machine interaction force    
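The design above follows a measured hip-center trajectory expressed as a function of the flexion angle. A minimal curve-fitting sketch of that idea is shown below with invented measurement values; it is illustrative only and does not reproduce the paper's measurements or mechanism design.

```python
# Fit a hip-joint-center trajectory as a function of flexion angle (illustrative).
import numpy as np

# Hypothetical measurements: flexion angle (deg) vs. displacement of the
# hip joint center (mm); values are made up for the example.
flexion_deg = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)
hjc_disp_mm = np.array([0.0, 1.2, 2.8, 4.1, 5.0, 5.4, 5.6])

# A quadratic fit is a simple model of the measured trajectory.
coeffs = np.polyfit(flexion_deg, hjc_disp_mm, deg=2)
model = np.poly1d(coeffs)

for angle in (20.0, 50.0, 80.0):
    print(f"flexion {angle:5.1f} deg -> predicted HJC displacement {model(angle):.2f} mm")
```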

Visual interactive image clustering: a target-independent approach for configuration optimization in machine vision measurement Research Article

Lvhan PAN, Guodao SUN, Baofeng CHANG, Wang XIA, Qi JIANG, Jingwei TANG, Ronghua LIANG

Frontiers of Information Technology & Electronic Engineering 2023, Volume 24, Issue 3,   Pages 355-372 doi: 10.1631/FITEE.2200547

Abstract: Machine vision measurement (MVM) is an essential approach that measures the area or length of a target efficiently and non-destructively for product quality control. The result of MVM is determined by its configuration, especially the lighting scheme in image acquisition and the algorithmic parameters in image processing. In a traditional workflow, engineers constantly adjust and verify the configuration for an acceptable result, which is time-consuming and significantly depends on expertise. To address these challenges, we propose a target-independent approach, visual interactive image clustering, which facilitates configuration optimization by grouping images into different clusters to suggest lighting schemes with common parameters. Our approach has four steps: data preparation, data sampling, data processing, and visual analysis with our visualization system. During preparation, engineers design several candidate lighting schemes to acquire images and develop an algorithm to process images. Our approach samples engineer-defined parameters for each image and obtains results by executing the algorithm. The core of data processing is the explainable measurement of the relationships among images using the algorithmic parameters. Based on the image relationships, we develop VMExplorer, a visual analytics system that assists engineers in grouping images into clusters and exploring parameters. Finally, engineers can determine an appropriate lighting scheme with robust parameter combinations. To demonstrate the effectiveness and usability of our approach, we conduct a case study with engineers and obtain feedback from expert interviews.

Keywords: Machine vision measurement     Lighting scheme design     Parameter optimization     Visual interactive image clustering    
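The core step of grouping images that behave similarly under the sampled algorithmic parameters can be pictured with ordinary k-means over per-image parameter vectors. The sketch below uses synthetic data and stands in for, but is not, the VMExplorer processing pipeline.

```python
# Cluster images by their per-image parameter/result vectors (illustrative).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical data: 60 images x 4 sampled algorithmic parameters
# (e.g., threshold, blur radius, exposure, gain).
features = np.vstack([
    rng.normal(loc=[0.3, 2.0, 10.0, 1.0], scale=0.1, size=(30, 4)),   # lighting scheme A
    rng.normal(loc=[0.7, 5.0, 25.0, 2.5], scale=0.1, size=(30, 4)),   # lighting scheme B
])

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```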

Interactive image segmentation with a regression based ensemble learning paradigm Article

Jin ZHANG, Zhao-hui TANG, Wei-hua GUI, Qing CHEN, Jin-ping LIU

Frontiers of Information Technology & Electronic Engineering 2017, Volume 18, Issue 7,   Pages 1002-1020 doi: 10.1631/FITEE.1601401

Abstract: To achieve fine segmentation of complex natural images, people often resort to an interactive segmentation paradigm, since fully automatic methods often fail to obtain a result consistent with the ground truth. However, when the foreground and background share some similar areas in color, the fine segmentation result of conventional interactive methods usually relies on the increase of manual labels. This paper presents a novel interactive image segmentation method via a regression-based ensemble model with semi-supervised learning. The task is formulated as a non-linear problem integrating two complementary spline regressors and strengthening the robustness of each regressor via semi-supervised learning. First, two spline regressors with a complementary nature are constructed based on multivariate adaptive regression splines (MARS) and smooth thin plate spline regression (TPSR). Then, a regressor boosting method based on a clustering hypothesis and semi-supervised learning is proposed to assist the training of MARS and TPSR by using the region segmentation information contained in unlabeled pixels. Next, a support vector regression (SVR) based decision fusion model is adopted to integrate the results of MARS and TPSR. Finally, GraphCut is introduced and combined with the SVR ensemble results to achieve image segmentation. Extensive experimental results on the benchmark datasets BSDS500 and Pascal VOC have demonstrated the effectiveness of our method, and comparison with experimental results has validated that the proposed method is comparable with the state-of-the-art methods for interactive natural image segmentation.

Keywords: Interactive image segmentation     Multivariate adaptive regression splines (MARS)     Ensemble learning     Thin-plate spline regression (TPSR)     Semi-supervised learning     Support vector regression (SVR)    
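The SVR decision-fusion step can be sketched generically: below, two simple regressors (Ridge and k-nearest-neighbour regression, used here purely as stand-ins for MARS and TPSR) are fused by an SVR trained on their outputs. This is a structural illustration under assumed data, not the paper's feature design or its semi-supervised boosting.

```python
# SVR-based fusion of two complementary regressors (stand-ins for MARS/TPSR).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300)

train, test = slice(0, 200), slice(200, 300)
reg_a = Ridge().fit(X[train], y[train])                              # stand-in regressor 1
reg_b = KNeighborsRegressor(n_neighbors=5).fit(X[train], y[train])   # stand-in regressor 2

# Fuse the two predictions with an SVR decision model.
fuse_train = np.column_stack([reg_a.predict(X[train]), reg_b.predict(X[train])])
fusion = SVR(kernel="rbf").fit(fuse_train, y[train])

fuse_test = np.column_stack([reg_a.predict(X[test]), reg_b.predict(X[test])])
print("fused R^2 on held-out data:", round(fusion.score(fuse_test, y[test]), 3))
```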

Physical human-robot interaction estimation based control scheme for a hydraulically actuated exoskeleton designed for power amplification

Yi LONG, Zhi-jiang DU, Wei-dong WANG, Long HE, Xi-wang MAO, Wei DONG

Frontiers of Information Technology & Electronic Engineering 2018, Volume 19, Issue 9,   Pages 1076-1085 doi: 10.1631/FITEE.1601667

Abstract:

We propose a lower extremity exoskeleton for power amplification that perceives intended human motion via human-exoskeleton interaction signals measured by biomedical or mechanical sensors, and estimates human gait trajectories to implement corresponding actions quickly and accurately. In this study, torque sensors mounted on the exoskeleton links are proposed for obtaining physical human-robot interaction (pHRI) torque information directly. A Kalman smoother is adopted for eliminating noise and smoothing the signal data. Simultaneously, the mapping from the pHRI torque to the human gait trajectory is defined. The mapping is derived from the real-time state of the robotic exoskeleton during movement. The walking phase is identified by the threshold approach using ground reaction force. Based on phase identification, the human gait can be estimated by applying the proposed algorithm, and then the gait is regarded as the reference input for the controller. A proportional-integral-derivative control strategy is constructed to drive the robotic exoskeleton to follow the human gait trajectory. Experiments were performed on a human subject who walked on the floor at a natural speed wearing the robotic exoskeleton. Experimental results show the effectiveness of the proposed strategy.

Keywords: Exoskeleton     Physical human-robot interaction     Torque sensor     Human gait     Kalman smoother    
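A minimal sketch of the control side described above, a PID loop driving a joint toward an estimated gait reference, is given below with a toy first-order joint model. The gains, dynamics, and reference signal are all invented for illustration and do not reflect the exoskeleton or controller in the paper.

```python
# PID tracking of a reference gait trajectory with a toy joint model (illustrative).
import math

KP, KI, KD = 8.0, 2.0, 0.5          # hypothetical gains
DT, STEPS = 0.01, 400

angle, rate = 0.0, 0.0              # joint state
integral, prev_err = 0.0, 0.0

for k in range(STEPS):
    t = k * DT
    reference = 0.4 * math.sin(2.0 * math.pi * 0.5 * t)   # estimated gait trajectory (rad)
    err = reference - angle
    integral += err * DT
    derivative = (err - prev_err) / DT
    torque = KP * err + KI * integral + KD * derivative
    prev_err = err

    # Toy first-order joint dynamics: torque -> angular acceleration with damping.
    rate += (torque - 1.5 * rate) * DT
    angle += rate * DT

print(f"final tracking error: {abs(reference - angle):.4f} rad")
```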

Building trust networks in the absence of trust relations Article

Xin WANG, Ying WANG, Jian-hua GUO

Frontiers of Information Technology & Electronic Engineering 2017, Volume 18, Issue 10,   Pages 1591-1600 doi: 10.1631/FITEE.1601341

Abstract: User-specified trust relations are often very sparse and dynamic, making them difficult to accurately predict from online social media. In addition, trust relations are usually unavailable for most social media platforms. These issues pose a great challenge for predicting trust relations and further building trust networks. In this study, we investigate whether we can predict trust relations via a sparse learning model, and propose to build a trust network without trust relations using only pervasively available interaction data and the homophily effect in an online world. In particular, we analyze the reliability of predicting trust relations by interaction behaviors, and provide a principled way to mathematically incorporate interaction behaviors and the homophily effect in a novel framework, bTrust. Results of experiments on real-world datasets from Epinions and Ciao demonstrated the effectiveness of the proposed framework. Further experiments were conducted to understand the importance of interaction behaviors and the homophily effect in building trust networks.

Keywords: Trust network     Sparse learning     Homophily effect     Interaction behaviors    
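To illustrate the general idea of predicting trust links from interaction behaviors with a sparse model, here is a minimal L1-regularized logistic-regression sketch on synthetic pairwise features. It is not the bTrust formulation, which additionally incorporates the homophily effect; the feature names and data are assumptions.

```python
# Predicting trust links from interaction features with a sparse L1 model (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pairs = 1000
# Hypothetical per-pair features: #comments exchanged, #co-rated items,
# rating similarity, etc.
X = rng.normal(size=(n_pairs, 6))
true_w = np.array([1.5, 0.0, 2.0, 0.0, 0.0, -1.0])        # only a few features matter
y = (X @ true_w + rng.normal(scale=0.5, size=n_pairs) > 0).astype(int)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print("learned (sparse) weights:", np.round(model.coef_[0], 2))
print("training accuracy:", round(model.score(X, y), 3))
```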

Development and Application of Simulation Technology

Wang Zicai

Strategic Study of CAE 2003, Volume 5, Issue 2,   Pages 40-44

Abstract:

This paper discusses the development of simulation technology in terms of its initial development, maturation, and further advancement. It then introduces the application of simulation technology in various fields of the national economy. Finally, it analyzes the current level and status of simulation technology at home and abroad, and presents its future trend in the new century.

Keywords: simulation technology     system simulation     hardware in loop simulation     distributed interactive simulation    

Autonomous flying blimp interaction with human in an indoor space

Ning-shi YAO, Qiu-yang TAO, Wei-yu LIU, Zhen LIU, Ye TIAN, Pei-yu WANG, Timothy LI, Fumin ZHANG

Frontiers of Information Technology & Electronic Engineering 2019, Volume 20, Issue 1,   Pages 45-59 doi: 10.1631/FITEE.1800587

Abstract:

We present the Georgia Tech Miniature Autonomous Blimp (GT-MAB), which is designed to support human-robot interaction experiments in an indoor space for up to two hours. GT-MAB is safe while flying in close proximity to humans. It is able to detect the face of a human subject, follow the human, and recognize hand gestures. GT-MAB employs a deep neural network based on the single shot multibox detector to jointly detect a human user’s face and hands in a real-time video stream collected by the onboard camera. A human-robot interaction procedure is designed and tested with various human users. The learning algorithms recognize two hand waving gestures. The human user does not need to wear any additional tracking device when interacting with the flying blimp. Vision-based feedback controllers are designed to control the blimp to follow the human and fly in one of two distinguishable patterns in response to each of the two hand gestures. The blimp communicates its intentions to the human user by displaying visual symbols. The collected experimental data show that the visual feedback from the blimp in reaction to the human user significantly improves the interactive experience between blimp and human. The demonstrated success of this procedure indicates that GT-MAB could serve as a flying robot that is able to collect human data safely in an indoor environment.

Keywords: Robotic blimp     Human-robot interaction     Deep learning     Face detection     Gesture recognition    
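The vision-based following behavior described above can be pictured as a simple proportional controller that keeps the detected face centered in the camera image and at a desired size. The toy sketch below fakes the detector output; gains, interface, and thresholds are hypothetical and are not GT-MAB's actual controller.

```python
# Proportional visual servoing on a detected face bounding box (illustrative).
IMG_W, IMG_H = 640, 480
K_YAW, K_FWD = 0.002, 0.004           # hypothetical gains
TARGET_FACE_W = 80                    # desired face width in pixels (sets following distance)

def follow_command(face_box):
    """face_box = (x, y, w, h) in pixels, e.g. from an SSD-style detector."""
    x, y, w, h = face_box
    face_cx = x + w / 2.0
    yaw_rate = K_YAW * (IMG_W / 2.0 - face_cx)   # turn to center the face
    forward = K_FWD * (TARGET_FACE_W - w)        # move to keep the face at the target size
    return yaw_rate, forward

# Fake detection: face slightly right of center and farther away than desired.
print(follow_command((400, 180, 60, 60)))
```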

A virtual 3D interactive painting method for Chinese calligraphy and painting based on real-time force feedback technology Article

Chao GUO, Zeng-xuan HOU, You-zhi SHI, Jun XU, Dan-dan YU

Frontiers of Information Technology & Electronic Engineering 2017, Volume 18, Issue 11,   Pages 1843-1853 doi: 10.1631/FITEE.1601283

Abstract: A novel 3D interactive painting method for Chinese calligraphy and painting based on force feedback technology is proposed. The relationship between the force exerted on the brush and the resulting brush deformation is analyzed, and a spring-mass model is used to build a model of the 3D Chinese brush. The 2D brush footprint between the brush and the plane of the paper or object is calculated according to the deformation of the 3D brush when force is exerted on it. Then the 3D brush footprint is obtained by projecting the 2D brush footprint onto the surface of the 3D object in real time, and a complete 3D brushstroke is obtained by superimposing 3D brush footprints along the painting direction. The proposed method has been successfully applied in a virtual 3D interactive drawing system based on force feedback technology. In this system, users can paint 3D brushstrokes in real time with a Phantom Desktop haptic device, which can effectively serve as a virtual reality interface to the simulated painting environment for users.

Keywords: 3D brush model     3D brushstroke     3D interactive painting     Real-time force feedback technology    
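The brush model above treats the tip as a spring-mass system deformed by the applied force. A toy one-dimensional spring-mass-damper update is sketched below; the parameters are invented, and the real model is three-dimensional with many coupled nodes.

```python
# One-dimensional spring-mass-damper brush-tip deflection (toy illustration).
MASS, STIFFNESS, DAMPING = 0.01, 50.0, 0.4   # hypothetical brush-tip parameters
DT = 0.001

def simulate_deflection(applied_force, steps=2000):
    x, v = 0.0, 0.0                           # tip deflection (m) and velocity
    for _ in range(steps):
        spring = -STIFFNESS * x
        damper = -DAMPING * v
        a = (applied_force + spring + damper) / MASS
        v += a * DT
        x += v * DT
    return x

for force in (0.1, 0.3, 0.6):                 # N; pressing harder widens the footprint
    print(f"force {force:.1f} N -> steady deflection {simulate_deflection(force)*1000:.1f} mm")
```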

Interactive medical image segmentation with self-adaptive confidence calibration

Chuyun SHEN, Wenhao LI, Qisen XU, Bin HU, Bo JIN, Haibin CAI, Fengping ZHU, Yuxin LI, Xiangfeng WANG

Frontiers of Information Technology & Electronic Engineering 2023, Volume 24, Issue 9,   Pages 1332-1348 doi: 10.1631/FITEE.2200299

Abstract: Interactive medical image segmentation based on human-in-the-loop machine learning is a novel paradigm that draws on human expert knowledge to assist medical image segmentation. However, existing methods often fall into what we call interactive misunderstanding, the essence of which is the dilemma in trading off short- and long-term interaction information. To better use the interaction information at various timescales, we propose an interactive segmentation framework, called interactive MEdical image segmentation with self-adaptive Confidence CAlibration (MECCA), which combines action-based confidence learning and multi-agent reinforcement learning. A novel confidence network is learned by predicting the alignment level of the action with short-term interaction information. A confidence-based reward-shaping mechanism is then proposed to explicitly incorporate confidence in the policy gradient calculation, thus directly correcting the model’s interactive misunderstanding. MECCA also enables user-friendly interactions by reducing the interaction intensity and difficulty via label generation and interaction guidance, respectively. Numerical experiments on different segmentation tasks show that MECCA can significantly improve short- and long-term interaction information utilization efficiency with remarkably fewer labeled samples. The demo video is available at https://bit.ly/mecca-demo-video.

Keywords: Medical image segmentation     Interactive segmentation     Multi-agent reinforcement learning     Confidence learning     Semi-supervised learning    
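The confidence-based reward shaping described above can be pictured as weighting each action's policy-gradient contribution by a learned confidence score. The toy REINFORCE-style update below on a two-armed bandit shows only that structure; the confidence function is a stand-in, and this is not the MECCA algorithm.

```python
# Confidence-weighted REINFORCE update on a toy 2-armed bandit (structural sketch only).
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)                 # softmax policy parameters
ARM_MEAN_REWARD = np.array([0.2, 0.8])
LR = 0.1

def confidence(action, reward):
    # Stand-in for a learned confidence network: trust the update more
    # when the observed reward agrees with the chosen action.
    return 0.9 if reward > 0.5 else 0.3

for step in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()
    action = rng.choice(2, p=probs)
    reward = float(rng.random() < ARM_MEAN_REWARD[action])
    grad = -probs
    grad[action] += 1.0              # d log pi(a) / d logits for a softmax policy
    logits += LR * confidence(action, reward) * reward * grad

print("final action probabilities:", np.round(np.exp(logits) / np.exp(logits).sum(), 3))
```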

Attention shifting during child–robot interaction: a preliminary clinical study for children with autism spectrum disorder Special Feature on Intelligent Robots

Guo-bin WAN, Fu-hao DENG, Zi-jian JIANG, Sheng-zhao LIN, Cheng-lian ZHAO, Bo-xun LIU, Gong CHEN, Shen-hong CHEN, Xiao-hong CAI, Hao-bo WANG, Li-ping LI, Ting YAN, Jia-ming ZHANG

Frontiers of Information Technology & Electronic Engineering 2019, Volume 20, Issue 3,   Pages 374-387 doi: 10.1631/FITEE.1800555

Abstract:

There is an increasing need to introduce socially interactive robots as a means of assistance in autism spectrum disorder (ASD) treatment and rehabilitation, to improve the effectiveness of rehabilitation training and the diversification of treatment, and to alleviate the shortage of medical personnel in mainland China and other places in the world. In this preliminary clinical study, three socially interactive robots with different appearances and functionalities were tested in therapy-like settings in four rehabilitation facilities/institutions in Shenzhen, China. Seventy-four participants, including 52 children with ASD, whose interactions with the robots were recorded by three different cameras, each received a single-session three-robot intervention. Data were collected not only from the recorded videos but also from questionnaires completed mostly by the participants' parents. Several insights were obtained from the preliminary results; these can contribute to research on physical robot design and the evaluation of robots in therapy-like settings. First, in physical robot design, preferential focus should be placed on aspects of appearance and functionality. Second, attention analysis using algorithms such as estimation of a child's gaze direction and head posture in the video clips can be adopted to quantitatively measure the children's prosocial behaviors and actions (e.g., attention shifting from one particular robot to other robots). Third, observing and calculating how frequently and for how long children explore or play with the robots in the video clips can be used to qualitatively analyze such behaviors and actions. Limitations of the present study are also presented.

Keywords: Human–robot interaction     Robot-enhanced therapy     Socially interactive robots     Robot-mediated intervention    
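The quantitative attention analysis mentioned above (counting shifts of a child's gaze from one robot to another) reduces, once per-frame gaze targets are estimated, to counting target changes in a label sequence. A minimal sketch with invented labels follows; it is not the study's analysis pipeline.

```python
# Count attention shifts between robots from a per-frame gaze-target sequence (illustrative).
def count_attention_shifts(gaze_targets):
    """gaze_targets: per-frame labels such as 'robot_A', 'robot_B', or 'elsewhere'."""
    shifts = 0
    previous = None
    for target in gaze_targets:
        if target.startswith("robot") and previous is not None and target != previous:
            shifts += 1
        if target.startswith("robot"):
            previous = target
    return shifts

frames = ["elsewhere", "robot_A", "robot_A", "elsewhere", "robot_B",
          "robot_B", "robot_A", "elsewhere", "robot_C"]
print("attention shifts between robots:", count_attention_shifts(frames))
```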

DIP-MOEA: a double-grid interactive preference based multi-objective evolutionary algorithm for formalizing preferences of decision makers Research Article

Luda ZHAO, Bin WANG, Xiaoping JIANG, Yicheng LU, Yihua HU

Frontiers of Information Technology & Electronic Engineering 2022, Volume 23, Issue 11,   Pages 1714-1732 doi: 10.1631/FITEE.2100508

Abstract:

The final solution set given by almost all existing preference-based multi-objective evolutionary algorithms (MOEAs) lies a certain distance away from the decision makers' preference information region. Therefore, we propose a multi-objective optimization algorithm, referred to as the double-grid interactive preference based MOEA (DIP-MOEA), which explicitly takes the preferences of decision makers (DMs) into account. First, according to the optimization objectives of practical multi-objective optimization problems and the preferences of DMs, the membership functions are mapped to generate a decision preference grid and a preference error grid. Then, we put forward two population dominance modes, preference-degree dominance and preference-error dominance, and use this scheme to update the population in these two grids. Finally, the populations in the two grids are combined with the DMs' information, and the preference multi-objective optimization interaction is performed. To verify the performance of DIP-MOEA, we test it on two kinds of problems, i.e., the basic DTLZ series functions and multi-objective knapsack problems, and compare it with several popular preference-based MOEAs. Experimental results show that DIP-MOEA expresses the preference information of DMs well, provides a solution set that meets the DMs' preferences, delivers results quickly, and has better performance in the distribution of the Pareto front solution set.

Keywords: Multi-objective evolutionary algorithm (MOEA)     Formalizing preference of decision makers     Population renewal strategy     Preference interaction    
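The preference-aware comparison used to update the population can be illustrated with a simple rule: prefer Pareto-dominating solutions, and break ties by distance to the decision maker's reference point. The sketch below shows only that comparator under a minimization assumption; it is not DIP-MOEA's double-grid mechanism.

```python
# Preference-aware comparison of two candidate solutions (illustrative, minimization assumed).
import math

def pareto_dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def preferred(a, b, reference_point):
    """Return the preferred of two objective vectors a and b."""
    if pareto_dominates(a, b):
        return a
    if pareto_dominates(b, a):
        return b
    # Tie-break by closeness to the decision maker's preference (reference point).
    dist = lambda p: math.dist(p, reference_point)
    return a if dist(a) <= dist(b) else b

ref = (0.2, 0.3)                      # hypothetical center of the DM's preference region
print(preferred((0.4, 0.1), (0.1, 0.5), ref))
```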

Real-time Gaze Tracking System Based on Dual Illuminators

Huang Ying, Wang Zhiliang, Qi Ying

Strategic Study of CAE 2008, Volume 10, Issue 2,   Pages 86-90

Abstract:

This paper presents a fast, low-cost, non-contact gaze tracking system. The system is based on dual illuminators and a CCD camera. The camera captured the image of two eyes, and the synthesized information of both eyes was used to detect the gaze direction. A fast pupil detection algorithm was proposed to speed up the image processing, and a simple model was used to recognize the gaze direction. Experimental results show that the system is applicable over a gaze angle range of -20° to +20° horizontally and -16° to +16° vertically. A human-computer interaction application proves that the system can estimate the user's gaze direction with small delay and relatively high stability.

Keywords: human-computer interaction     gaze tracking     Purkinje image    
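Gaze direction in systems of this kind is commonly recovered by mapping pupil-glint offsets (from the illuminator reflections) to gaze angles through a calibration fit. The least-squares sketch below uses invented calibration data and a simple linear model; it illustrates the general idea only and is not the paper's dual-illuminator method.

```python
# Linear calibration from pupil-glint offsets to gaze angles (illustrative).
import numpy as np

# Hypothetical calibration data: pupil-glint offset (px) -> known gaze angle (deg).
offsets = np.array([[-12, -8], [-6, -4], [0, 0], [6, 4], [12, 8]], dtype=float)
angles = np.array([[-20, -16], [-10, -8], [0, 0], [10, 8], [20, 16]], dtype=float)

# Fit gaze = offsets @ A + b by linear least squares.
design = np.hstack([offsets, np.ones((len(offsets), 1))])
coeffs, *_ = np.linalg.lstsq(design, angles, rcond=None)

new_offset = np.array([3.0, 2.0, 1.0])       # [dx, dy, bias term]
print("predicted gaze angles (deg):", new_offset @ coeffs)
```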
