facilitates the generation of virtual environments for various scenarios involving cities. It requires expertise and careful consideration, and therefore consumes massive amounts of time and computational resources. Nevertheless, related tasks sometimes result in dissatisfaction or even failure. These challenges have received significant attention from researchers in the area of . Meanwhile, the burgeoning development of artificial intelligence motivates people to exploit , and hence improve the conventional solutions. In this paper, we present a review of approaches to in using in the literature published between 2010 and 2019. This serves as an overview of the current state of research on from a perspective.

Tian Feng, Feiyi Fan, et al.
treatment is a highly challenging effort to reduce mortality in hospital intensive care units, since the treatment response may vary for each patient. Tailored s are desired to assist doctors in making decisions efficiently and accurately. In this work, we apply a self-supervised method based on reinforcement learning (RL) for on individuals. An uncertainty evaluation method is proposed to separate patient samples into two domains according to their responses to treatments and the state value of the chosen policy. Samples from the two domains are then reconstructed with an auxiliary transfer learning task. A distillation method of privileged learning is tied to a variational auto-encoder framework for the transfer learning task between the low- and high-quality domains. Combined with the self-supervised approach for better state and action representations, we propose a deep RL method called high-risk uncertainty (HRU) control, which provides flexibility in the trade-off between the effectiveness and accuracy of ambiguous samples and reduces the expected mortality. Experiments on the large-scale, publicly available real-world dataset MIMIC-III demonstrate that our model reduces the estimated mortality rate by up to 2.3% in total, and that the estimated mortality rate in the majority of cases is reduced to 9.5%.
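The abstract describes separating patient samples into two domains by an uncertainty measure over state values. The sketch below is not the authors' HRU implementation; it only illustrates one common way to realize such a split, using the disagreement (variance) of an ensemble of value estimators. All names, the random linear scorers, and the threshold are illustrative assumptions.

```python
# Hedged sketch (not the paper's HRU method): route samples into low- and
# high-uncertainty domains by the variance of an ensemble of value estimates.
import numpy as np

rng = np.random.default_rng(0)

def ensemble_values(states, n_models=5):
    """Stand-in for an ensemble of learned state-value functions.
    Here each 'model' is a random linear scorer, purely for illustration."""
    weights = rng.normal(size=(n_models, states.shape[1]))
    return states @ weights.T            # shape: (n_samples, n_models)

def split_by_uncertainty(states, threshold=1.0):
    """Assign each sample to the high-uncertainty domain when the
    ensemble's value estimates disagree beyond the threshold."""
    values = ensemble_values(states)
    uncertainty = values.var(axis=1)     # disagreement per sample
    return uncertainty > threshold       # True -> high-uncertainty domain

states = rng.normal(size=(100, 8))       # toy patient-state features
high = split_by_uncertainty(states)
print(int(high.sum()), "samples routed to the high-uncertainty domain")
```

In a full pipeline, the two resulting subsets would feed the separate reconstruction and transfer-learning stages the abstract mentions.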

Sihan Zhu, Jian Pu, et al.
Healthcare and telemedicine industries rely on technology connected to the Internet. Digital health data are especially prone to cyber attacks because of the treasure trove of personal data they contain. This necessitates the protection of digital medical images and their secure transmission. In this paper, an encryption technique based on mutated with Lorenz and Lü attractors is employed to generate highly pseudo-random key streams. The proposed chaos-based cryptic system operates in the integer wavelet transform (IWT) domain and uses a bio-inspired , unit to enhance the confusion and diffusion phases on the approximation coefficients. Finally, an XOR operation is performed with a quantised chaotic set from the developed combined attractors. The algorithm attains an average entropy of 7.9973, near-zero correlation, an NPCR of 99.642%, a UACI of 33.438%, and a keyspace of 10. Further, the experimental analyses and the NIST statistical test suite demonstrate that the proposed technique has the potency to withstand statistical, differential, and brute force attacks.
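The final XOR step described above can be illustrated in miniature. This is a minimal sketch, not the paper's full cipher: a logistic map stands in for the Lorenz/Lü attractors, and the initial condition `x0` and parameter `r` are illustrative key material.

```python
# Hedged sketch (not the paper's cipher): XOR diffusion of pixel bytes with
# a keystream quantised from a chaotic map. The logistic map replaces the
# Lorenz/Lü attractors purely for brevity.
import numpy as np

def chaotic_keystream(n, x0=0.713, r=3.9999):
    """Iterate the logistic map x <- r*x*(1-x) and quantise each state
    in (0, 1) to a byte in [0, 255]."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8)

def xor_cipher(pixels, x0=0.713):
    """Encrypt (or decrypt -- XOR is its own inverse) a flat byte array."""
    ks = chaotic_keystream(pixels.size, x0=x0)
    return pixels ^ ks

image = np.arange(256, dtype=np.uint8)            # toy 'image' bytes
cipher = xor_cipher(image)
assert np.array_equal(xor_cipher(cipher), image)  # round trip restores data
```

Because XOR is self-inverse, the same keystream (hence the same chaotic key parameters) decrypts the ciphertext, which is why key sensitivity of the chaotic map matters for security.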

is an important and costly task that creates trace links from requirements to different software artifacts. These trace links can help engineers reduce the time and complexity of software maintenance. The information retrieval (IR) technique has been widely used in ; it uses the textual similarity between software artifacts to create links. However, if two artifacts share no words, or only a few, the performance of the IR method can be very poor. Some methods have been developed to enhance IR by considering the relations between , but they have been limited to code rather than to other types of . To overcome this limitation, we propose an automatic method that combines the IR method with the between . Specifically, we leverage between rather than just text matching from requirements to . Moreover, the method is not limited to the type of when considering the relations between . We conduct experiments on five public datasets, taking into account trace links between requirements and different types of software artifacts. Results show that under the same recall, the precisions on the five datasets improve by 40%, 8%, 20%, 4%, and 6%, respectively, compared with the baseline method. The precision on the five datasets improves by an average of 15.6%, showing that our method outperforms the baseline method under the same conditions.
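The IR baseline described above ranks candidate trace links by textual similarity. A minimal sketch of that step, assuming TF-IDF weighting with cosine similarity (the documents, file names, and tokeniser below are illustrative, not from the paper):

```python
# Hedged sketch of an IR traceability baseline: score candidate trace links
# by TF-IDF cosine similarity between a requirement and code artifacts.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF weight maps for a list of pre-tokenised documents."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse weight maps."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

requirement = "export report as pdf".split()
artifacts = {
    "ReportExporter.java": "render report write pdf file".split(),
    "LoginService.java":   "check user password session".split(),
}
vecs = tfidf_vectors([requirement] + list(artifacts.values()))
scores = {name: cosine(vecs[0], v) for name, v in zip(artifacts, vecs[1:])}
print(max(scores, key=scores.get))   # highest-similarity candidate link
```

The failure mode the abstract points out is visible here: an artifact that shares no vocabulary with the requirement scores zero, no matter how relevant it is, which is what motivates supplementing IR with relations between artifacts.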

Haijuan Wang, Guohua Shen, et al.
