Dec 2019, Volume 5 Issue 6
    

  • Topic Insights
    R.N. Lumley
  • Editorial
    Feng Qian
  • News & Highlights
    Chris Palmer
  • News & Highlights
    Mitch Leslie
  • News & Highlights
    Sean O’Neill
  • News & Highlights
    Jane Palmer
  • News & Highlights
    Peter Weiss
  • Research
  • RESEARCH ARTICLE
    Shuai Mao, Bing Wang, Bing Tang, Qian Feng

    Smart manufacturing is critical in improving the quality of the process industry. In smart manufacturing, there is a trend to incorporate different kinds of new-generation information technologies into process-safety analysis. At present, green manufacturing faces major obstacles in safety management due to the use of large amounts of hazardous chemicals, the spatial inhomogeneity of chemical industrial processes, and increasingly stringent safety and environmental regulations. Emerging information technologies such as artificial intelligence (AI) are quite promising as a means of overcoming these difficulties. Based on state-of-the-art AI methods and the complex safety relations in the process industry, we identify and discuss several technical challenges associated with process safety: ① knowledge acquisition with scarce labels for process safety; ② knowledge-based reasoning for process safety; ③ accurate fusion of heterogeneous data from various sources; and ④ effective learning for dynamic risk assessment and aided decision-making. Current and future work is also discussed in this context.
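
    A minimal sketch of challenge ① (knowledge acquisition with scarce labels): self-training a classifier on largely unlabeled process data. The features, synthetic data, and confidence threshold below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of semi-supervised learning for process-safety data
# with scarce labels (challenge 1). Features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # e.g., temperature, pressure, flow, level
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)     # 1 = hazardous state (synthetic)

y = np.full(500, -1)                             # -1 marks unlabeled samples
labeled = rng.choice(500, size=20, replace=False)  # only 20 expert-labeled points
y[labeled] = y_true[labeled]

# Self-training: confident predictions on unlabeled data become pseudo-labels.
model = SelfTrainingClassifier(LogisticRegression(), threshold=0.9)
model.fit(X, y)
print("training-set accuracy:", (model.predict(X) == y_true).mean())
```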

  • RESEARCH ARTICLE
    James Litster, Ian David L. Bogle

    We outline the smart manufacturing challenges for formulated products, which are typically multicomponent, structured, and multiphase. These challenges predominate in the food, pharmaceuticals, agricultural and specialty chemicals, energy storage and energetic materials, and consumer goods industries, and are driven by fast-changing customer demand and, in some cases, a tight regulatory framework. This paper discusses progress in smart manufacturing—namely, digitalization and the use of large datasets with predictive models and solution-finding algorithms—in these industries. While some progress has been achieved, there is a strong need to demonstrate model-based tools on realistic problems in order to establish their benefits and highlight any systemic weaknesses.

  • RESEARCH ARTICLE
    Chao Shang, Fengqi You

    Safe, efficient, and sustainable operations and control are primary objectives in industrial manufacturing processes. State-of-the-art technologies heavily rely on human intervention, thereby showing apparent limitations in practice. The burgeoning era of big data is influencing the process industries tremendously, providing unprecedented opportunities to achieve smart manufacturing. This kind of manufacturing requires machines not only to relieve humans of intensive physical work, but also to take on intellectual labor effectively and even produce innovations on their own. To attain this goal, data analytics and machine learning are indispensable. In this paper, we review recent advances in data analytics and machine learning applied to the monitoring, control, and optimization of industrial processes, paying particular attention to the interpretability and functionality of machine learning models. By analyzing the gap between practical requirements and the current research status, we identify promising directions for future research.
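
    As a concrete taste of the classical, interpretable end of this toolbox, the sketch below implements PCA-based process monitoring with a Hotelling's T² statistic on synthetic sensor data; the data, component count, and control limit are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of PCA-based fault detection with Hotelling's T^2.
# All data and thresholds are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 6))            # normal-operation sensor data

mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
Z = (X_train - mu) / sigma                      # standardize

U, S, Vt = np.linalg.svd(Z, full_matrices=False)
k = 3                                           # retained principal components
P = Vt[:k].T                                    # loading matrix
lam = (S[:k] ** 2) / (len(Z) - 1)               # score variances

def t2(x):
    """Hotelling's T^2 of one raw sample in the PCA subspace."""
    t = P.T @ ((x - mu) / sigma)
    return float(t @ (t / lam))

limit = np.quantile([t2(x) for x in X_train], 0.99)  # empirical 99% limit
x_new = mu + 4 * sigma                               # a strongly drifted sample
print("fault!" if t2(x_new) > limit else "normal")
```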

  • RESEARCH ARTICLE
    Teng Zhou, Zhen Song, Kai Sundmacher

    Materials development has historically been driven by human needs and desires, and this is likely to continue in the foreseeable future. The global population is expected to reach ten billion by 2050, which will promote increasingly large demands for clean and high-efficiency energy, personalized consumer products, secure food supplies, and professional healthcare. New functional materials that are made and tailored for targeted properties or behaviors will be the key to tackling this challenge. Traditionally, advanced materials are found empirically or through experimental trial-and-error approaches. As big data generated by modern experimental and computational techniques is becoming more readily available, data-driven or machine learning (ML) methods have opened new paradigms for the discovery and rational design of materials. In this review article, we provide a brief introduction to various ML methods and related software or tools. Main ideas and basic procedures for employing ML approaches in materials research are highlighted. We then summarize recent important applications of ML for the large-scale screening and optimal design of polymer and porous materials, catalytic materials, and energetic materials. Finally, concluding remarks and an outlook are provided.
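
    The large-scale screening workflow mentioned above can be illustrated in a few lines: fit a surrogate model on descriptors of known materials, then rank an unexplored candidate pool by predicted property. The descriptors, data, and model choice below are hypothetical stand-ins, not the paper's.

```python
# Illustrative materials-screening sketch: surrogate model + ranking.
# Descriptors, target property, and data are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X_known = rng.uniform(size=(200, 5))     # e.g., pore size, density, ... (assumed)
w = np.array([2.0, -1.0, 0.5, 0.0, 0.3])
y_known = X_known @ w + rng.normal(0, 0.1, 200)   # e.g., gas uptake (synthetic)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_known, y_known)

X_pool = rng.uniform(size=(10000, 5))    # unexplored candidate materials
scores = model.predict(X_pool)
top = np.argsort(scores)[::-1][:10]      # 10 most promising candidates
print("best candidate descriptors:", X_pool[top[0]])
```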

  • RESEARCH ARTICLE
    Pieter P. Plehiers, Steffen H. Symoens, Ismaël Amghizar, Guy B. Marin, Christian V. Stevens, Kevin M. Van Geem

    Chemical processes can benefit tremendously from fast and accurate effluent composition prediction for plant design, control, and optimization. The Industry 4.0 revolution claims that by introducing machine learning into these fields, substantial economic and environmental gains can be achieved. The bottleneck for high-frequency optimization and process control is often the time necessary to perform the required detailed analyses of, for example, feed and product. To resolve these issues, a framework of four deep learning artificial neural networks (DL ANNs) has been developed for the largest chemicals production process—steam cracking. The proposed methodology allows both a detailed characterization of a naphtha feedstock and a detailed composition of the steam cracker effluent to be determined, based on a limited number of commercial naphtha indices and rapidly accessible process characteristics. The detailed characterization of a naphtha is predicted from three points on the boiling curve and PIONA (paraffin, isoparaffin, olefin, naphthene, and aromatics) characterization. If unavailable, the boiling points are also estimated. Even with estimated boiling points, the developed DL ANN outperforms several established methods such as maximization of Shannon entropy and traditional ANNs. For feedstock reconstruction, a mean absolute error (MAE) of 0.3 wt% is achieved on the test set, while the MAE of the effluent prediction is 0.1 wt%. When combining all networks—using the output of the previous as input to the next—the effluent MAE increases to 0.19 wt%. In addition to the high accuracy of the networks, a major benefit is the negligible computational cost required to obtain the predictions. On a standard Intel i7 processor, predictions are made on the order of milliseconds. Commercial software such as COILSIM1D performs slightly better in terms of accuracy, but the required central processing unit time per reaction is on the order of seconds. This tremendous speed-up and minimal accuracy loss make the presented framework highly suitable for the continuous monitoring of difficult-to-access process parameters and for the envisioned high-frequency real-time optimization (RTO) strategies and process control. Nevertheless, the lack of a fundamental basis implies that fundamental understanding is almost completely lost, which is not always well-accepted by the engineering community. In addition, the performance of the developed networks drops significantly for naphthas that are highly dissimilar to those in the training set.
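
    A hedged sketch of the chaining idea, in which the output of one network feeds the next. The paper's framework uses four deep networks trained on real naphtha data; the two small networks and synthetic data below are stand-ins.

```python
# Chained-network sketch: indices -> detailed feed -> effluent yields.
# Shapes and data are synthetic; not the paper's trained networks.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
indices = rng.uniform(size=(2000, 8))                     # 5 PIONA wt% + 3 boiling points
detailed = np.tanh(indices @ rng.normal(size=(8, 30)))    # detailed feed composition
effluent = np.tanh(detailed @ rng.normal(size=(30, 20)))  # effluent yields

net_feed = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(indices, detailed)
net_eff = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(detailed, effluent)

# Chained prediction: the first network's output is the second's input,
# mirroring the paper's observation that chaining raises the effluent MAE.
pred = net_eff.predict(net_feed.predict(indices[:5]))
print(pred.shape)   # (5, 20)
```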

  • RESEARCH ARTICLE
    Weimin Zhong, Chaoyuan Li, Xin Peng, Feng Wan, Xufeng An, Zhou Tian

    Setting up a knowledge base is a helpful way to optimize the operation of the polyethylene process by improving the performance and the reuse efficiency of information and knowledge—two critical elements in polyethylene smart manufacturing. In this paper, we propose an overall structure for a knowledge base based on practical customer demand and the mechanism of the polyethylene process. First, an ontology of the polyethylene process, constructed using the seven-step method, is introduced as a carrier for knowledge representation and sharing. Next, a prediction method for the molecular weight distribution (MWD) is presented, based on a back propagation (BP) neural network model, by analyzing the relationships between the operating conditions and the parameters of the MWD. Based on this network, a differential evolution algorithm is introduced to optimize the operating conditions so that the MWD approaches a desired target. Finally, utilizing a MySQL database and the Java programming language, a knowledge base system for the operation optimization of the polyethylene process based on a browser/server framework is realized.
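
    The optimization step can be sketched as follows: a neural-network surrogate maps operating conditions to MWD parameters, and differential evolution searches for the conditions that best match a target MWD. The surrogate, bounds, and target values below are illustrative assumptions.

```python
# Surrogate + differential evolution sketch for operating-condition tuning.
# The surrogate relation and the target MWD parameters are synthetic.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
conds = rng.uniform(size=(500, 3))               # e.g., T, pressure, H2/C2 ratio (assumed)
mwd_params = np.column_stack([conds.sum(1), conds[:, 0] * conds[:, 1]])  # synthetic

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000).fit(conds, mwd_params)
target = np.array([1.6, 0.25])                   # desired MWD parameters (assumed)

def loss(x):
    """Squared distance between predicted and target MWD parameters."""
    return float(np.sum((surrogate.predict(x.reshape(1, -1))[0] - target) ** 2))

result = differential_evolution(loss, bounds=[(0, 1)] * 3, seed=0, maxiter=50)
print("optimal operating conditions:", result.x)
```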

  • RESEARCH ARTICLE
    Arun Pankajakshan, Conor Waldron, Marco Quaglio, Asterios Gavriilidis, Federico Galvanin

    Recent advances in automation and digitization enable the close integration of physical devices with their virtual counterparts, facilitating the automatic, real-time modeling and optimization of a multitude of processes. The rich and continuously updated data environment provided by such systems makes it possible for decisions to be made over time to drive the process toward optimal targets. In many manufacturing processes, achieving an overall optimal process requires the simultaneous assessment of multiple objective functions related to process performance and cost. In this work, a multiobjective optimal experimental design framework is proposed to enhance the efficiency of online model-identification platforms. The proposed framework permits flexibility in the choice of trade-off experimental design solutions, which are calculated online—that is, during the execution of experiments. The application of this framework to improve the online identification of kinetic models in flow reactors is illustrated using a case study in which a kinetic model is identified for the esterification of benzoic acid with ethanol in a microreactor.
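
    A toy version of the trade-off logic: score each candidate experiment by an information criterion (log-determinant of the Fisher information) and by a cost proxy, scalarized with a weight that could be retuned online. The rate model, candidate grid, and cost below are hypothetical, not the paper's design formulation.

```python
# Hypothetical sketch of multiobjective experiment selection:
# information content vs. cost, combined by a tunable weight.
import numpy as np

def fisher_information(x, theta):
    # Sensitivity of a hypothetical rate model r = k * Ca * Cb with
    # theta = (k,) and x = (Ca, Cb); dr/dk = Ca * Cb (independent of k here).
    J = np.array([[x[0] * x[1]]])
    return J.T @ J

theta = np.array([0.7])                          # current parameter estimate
candidates = [np.array([ca, cb]) for ca in (0.1, 0.5, 1.0) for cb in (0.1, 0.5, 1.0)]
cost = {tuple(x): x.sum() for x in candidates}   # reagent-cost proxy (assumed)

w = 0.8   # weight on information vs. cost, retunable during the campaign
def score(x):
    info = np.log(np.linalg.det(fisher_information(x, theta)) + 1e-12)
    return w * info - (1 - w) * cost[tuple(x)]

best = max(candidates, key=score)
print("next experiment (Ca, Cb):", best)
```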

  • RESEARCH ARTICLE
    Weichao Yue, Weihua Gui, Xiaofang Chen, Zhaohui Zeng, Yongfang Xie

    In the aluminum reduction process, aluminum fluoride (AlF3) is added to lower the liquidus temperature of the electrolyte and increase the electrolytic efficiency. Making the decision on the amount of AlF3 addition (referred to in this work as MDAAA) is a complex, knowledge-based task that must take a variety of interrelated factors into consideration; in practice, this decision-making step is performed manually. Due to technician subjectivity and the complexity of the aluminum reduction cell, it is difficult to guarantee the accuracy of MDAAA based on knowledge-driven or data-driven methods alone, and existing strategies for MDAAA have difficulty covering these complex causalities. In this work, a data and knowledge collaboration strategy for MDAAA based on augmented fuzzy cognitive maps (FCMs) is proposed. In the proposed strategy, fuzzy rules are extracted by extended fuzzy k-means (EFKM) and fuzzy decision trees, and are used to amend the initial structure provided by experts. The state transition algorithm (STA) is introduced to detect weight matrices that lead the FCMs to desired steady states. The study then experimentally compares the proposed strategy with existing approaches. The comparison shows that the STA-based FCMs converge to a stable region faster with the proposed strategy than with the differential Hebbian learning (DHL), particle swarm optimization (PSO), or genetic algorithm (GA) strategies, and that the accuracy of MDAAA based on the proposed method is better than that of the other methods. Accordingly, this paper provides a feasible and effective strategy for MDAAA.
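
    For readers unfamiliar with FCM inference, the sketch below iterates concept activations through a weight matrix and a sigmoid until a steady state is reached; in the paper, such weight matrices are tuned by the STA, whereas the four concepts and weights here are made up.

```python
# Minimal fuzzy cognitive map (FCM) inference sketch.
# The 4-concept map and its causal weights are illustrative only.
import numpy as np

W = np.array([[ 0.0,  0.6, -0.3,  0.0],     # W[j, i]: influence of concept j
              [ 0.4,  0.0,  0.5, -0.2],     # on concept i (made-up values)
              [ 0.0, -0.4,  0.0,  0.7],
              [ 0.3,  0.0,  0.2,  0.0]])

def sigmoid(z, lam=2.0):
    return 1.0 / (1.0 + np.exp(-lam * z))

x = np.array([0.5, 0.2, 0.8, 0.4])           # initial concept activations
for _ in range(100):
    x_new = sigmoid(W.T @ x)                  # standard FCM update rule
    if np.max(np.abs(x_new - x)) < 1e-6:      # converged to a steady state
        break
    x = x_new
print("steady-state activations:", np.round(x, 3))
```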

  • RESEARCH ARTICLE
    Songsong Liu, Lazaros G. Papageorgiou

    This work addresses the multiscale optimization of the purification processes of antibody fragments. Chromatography decisions in the manufacturing processes are optimized, including the number of chromatography columns and their sizes, the number of cycles per batch, and the operational flow velocities. Data-driven models of chromatography throughput are developed considering loaded mass, flow velocity, and column bed height as the inputs, using manufacturing-scale simulated datasets based on microscale experimental data. The piecewise linear regression modeling method is adopted because of its simplicity and its better prediction accuracy in comparison with other methods. Two alternative mixed-integer nonlinear programming (MINLP) models are proposed to minimize the total cost of goods per gram of the antibody purification process, incorporating the data-driven models. These MINLP models are then reformulated as mixed-integer linear programming (MILP) models using linearization techniques and multiparametric disaggregation. Two industrially relevant cases with different chromatography column size alternatives are investigated to demonstrate the applicability of the proposed models.
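
    A minimal sketch of the data-driven throughput modeling step: a one-dimensional piecewise linear regression fit by scanning candidate breakpoints. The paper's models take three inputs (loaded mass, flow velocity, bed height); the single-input synthetic data below are only illustrative.

```python
# Piecewise linear regression sketch: two line segments, one breakpoint,
# fit by exhaustive breakpoint scan. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
mass = np.sort(rng.uniform(0, 10, 200))                 # loaded mass (synthetic)
thru = np.where(mass < 6, 2 * mass, 12 + 0.3 * (mass - 6)) + rng.normal(0, 0.2, 200)

def sse(x, y):
    """Sum of squared errors of the best-fit line through (x, y)."""
    A = np.column_stack([x, np.ones_like(x)])
    _, res, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(res[0]) if res.size else 0.0

# Scan candidate breakpoints; keep the one with the lowest total error.
err, bp = min((sse(mass[mass < b], thru[mass < b])
               + sse(mass[mass >= b], thru[mass >= b]), b)
              for b in np.linspace(2, 9, 50))
print("estimated breakpoint:", round(bp, 2))            # close to the true 6
```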

  • RESEARCH ARTICLE
    Yozo Fujino, Dionysius M. Siringoringo, Yoshiki Ikeda, Tomonori Nagayama, Tsukasa Mizutani

    This paper provides a review of the development of structural monitoring in Japan, with an emphasis on the type, strategy, and utilization of monitoring systems. The review focuses on bridge and building structures using vibration-based techniques. Structural monitoring systems in Japan historically started with the objective of evaluating structural responses against extreme events. As structural monitoring developed, monitoring systems and collected data were used to verify design assumptions, update specifications, and facilitate the efficacy of vibration control systems. Strategies and case studies are described on monitoring for the design verification of long-span bridges and tall buildings, the performance of seismic isolation systems in buildings and bridges, the verification of structural retrofit, the verification of structural control systems (passive, semiactive, and active), structural assessment, and damage detection. More recently, the application of monitoring systems has been extended to facilitate efficient operation and effective maintenance through the rationalization of risk and asset management using monitoring data. This paper also summarizes the lessons learned and feedback obtained from case studies on the structural monitoring of bridges and buildings in Japan.

  • RESEARCH ARTICLE
    Xuhong Zhou, Xigang Zhang

    In the history of bridge engineering, demand has always been the primary driving force for development. Driven by the huge demand for construction since China's reform and opening-up, Chinese bridge engineering has leapt forward both quantitatively and qualitatively in three major stages, completing the transition from “follower” to “competitor,” and finally to “leader.” A new future is emerging for Chinese bridge engineering. As an important part of China's transportation infrastructure, the bridge engineering industry faces the challenge, in this new era, of how to support the construction of new forms of transportation. This paper provides a summary of the status of bridge technology in China, based on a basic analysis of stock demand, incremental demand, and management demand. It is our belief that the Chinese bridge engineering industry must fulfill three outstanding requirements: construction efficiency, management effectiveness, and long-term service. Intelligent technology based on information technology provides a new opportunity for innovation in bridge engineering, and the development path of bridge engineering needs to change accordingly. This paper puts forward the idea of developing a third-generation bridge project characterized by intelligence, and discusses this project's implications, development focus, and plan. In this way, this work provides a direction for improving the core competitiveness of China's bridge engineering industry.

  • RESEARCH ARTICLE
    Tamon Ueda

    Structural intervention involves the restoration and/or upgrading of the mechanical performance of structures. In addition to concrete and steel, which are the typical materials in concrete structures, various fiber-reinforced polymers (FRPs), cementitious materials with fibers, polymers, and adhesives are often applied for structural intervention. In order to predict structural performance, it is necessary to develop a generic method that is applicable not only to steel, but also to these other materials. Such a generic model could provide information on the mechanical properties required to improve structural performance. External bonding, which is a typical scheme for structural intervention, is not applied to new structures; it is necessary to clarify material properties and structural details in order to achieve better bonding strength at the interface between the substrate concrete and an externally bonded material. This paper presents the mechanical properties of substrate concrete and relevant intervention materials for the following purposes: ① to achieve better shear strength and ultimate deformation of a member after structural intervention; and ② to achieve better debonding strength for external bonding. The paper concludes that some of the mechanical properties and structural details of intervention materials that are necessary to improve the mechanical performance of structures undergoing intervention are new and differ from those of structures without intervention. For example, high strength and stiffness are important properties for materials in structures without structural intervention, whereas high fracturing strain and low stiffness are important properties for structural intervention materials.

  • RESEARCH ARTICLE
    Roozbeh Rezakhani, Mohammed Alnaggar, Gianluca Cusatis

    The alkali–silica reaction (ASR) is one of the major long-term deterioration mechanisms in concrete structures subjected to high humidity levels, such as bridges and dams. ASR is a chemical reaction between the silica inside the aggregate pieces and the alkali ions from the cement paste. This reaction produces ASR gel, which imbibes additional water, leading to gel swelling. Damage and cracking are subsequently generated in the concrete, resulting in degradation of its mechanical properties. In this study, ASR damage in concrete is considered within the lattice discrete particle model (LDPM), a mesoscale mechanical model that simulates concrete at the scale of the coarse aggregate pieces. The authors have previously modeled ASR successfully within the LDPM framework, and have calibrated and validated the resulting model, termed ASR-LDPM, against several experimental data sets. In the present work, a recently developed multiscale homogenization framework is employed to simulate the macroscale effects of ASR, while ASR-LDPM serves as the mesoscale model. First, the homogenized behavior of the representative volume element (RVE) of concrete simulated by ASR-LDPM is studied under both tension and compression, and the degradation of effective mechanical properties due to ASR over time is investigated. Next, the developed homogenization framework is utilized to reproduce experimental data reported on the free volumetric expansion of concrete prisms. Finally, the strength degradation of prisms in compression and of four-point bending beams is evaluated by both the mesoscale model and the proposed multiscale approach in order to analyze the accuracy and computational efficiency of the latter. In all the numerical analyses, different RVE sizes with different inner particle realizations are considered in order to explore their effects on the homogenized response.
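
    For orientation, the quantity a homogenization step of this kind passes up to the macroscale is the volume average over the RVE; the first-order relations below are the standard textbook form, stated here as background rather than quoted from the paper.

```latex
% First-order homogenization: macroscopic stress and strain are volume
% averages of their mesoscale counterparts over the RVE domain V
% (standard definitions, assumed here as background).
\bar{\sigma}_{ij} = \frac{1}{|V|} \int_{V} \sigma_{ij} \,\mathrm{d}V ,
\qquad
\bar{\varepsilon}_{ij} = \frac{1}{|V|} \int_{V} \varepsilon_{ij} \,\mathrm{d}V
```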

  • RESEARCH ARTICLE
    Kenta Nakai

    In this commentary, I explain my perspective on the relationship between artificial intelligence (AI)/data science and biomedicine from a long-range retrospective view. The development of modern biomedicine has always been accelerated by the repeated emergence of new technologies. Since all life systems are basically governed by the information in their own DNA, information science has special importance for the study of biomedicine. Unlike in physics, no (or very few) governing laws have been found in biology; thus, in biology, the “data-to-knowledge” approach is important. AI has historically been applied to biomedicine, and the recent news that an AI-based approach achieved the best performance in an international competition of protein structure prediction may be regarded as another landmark in the field. Similar approaches could contribute to solving problems in genome sequence interpretation, such as identifying cancer-driving mutations in the genomes of patients. Recently, the explosive development of next-generation sequencing (NGS) has been producing massive amounts of data, and this trend will accelerate. NGS is not only used for “reading” DNA sequences, but also for obtaining various types of information at the single-cell level. These data can be regarded as being analogous to the grid data points in climate simulations. Both data science and AI will become essential for the integrative interpretation/simulation of these data, and will take a leading role in future precision medicine.

  • RESEARCH ARTICLE
    Amelia Yilin Lee, Jia An, Chee Kai Chua, Yi Zhang

    The rapid development of additive manufacturing and advances in shape memory materials have fueled the progress of four-dimensional (4D) printing. With increasing improvements in design, reversible 4D printing—or two-way 4D printing—has been proven to be feasible. This technology can fully eliminate the need for human intervention, as the programming is completely driven by external stimuli, which allows 4D-printed parts to be actuated over multiple cycles. This study proposes a new reversible 4D printing actuation method: the swelling of an elastomer together with heat is used in the programming stage, and heat alone is used in the recovery stage. The main focus of this study is on the self-actuated programming step. To attain control over the bending, a simple predictive model has been developed to study the degree of curvature. The effects of the process parameters, namely temperature and elastomer thickness, have also been studied in order to gain a better understanding of how well the model predicts the curvature. This understanding of the curvature provides a great degree of control over the reversible 4D-printed structure.
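
    The abstract does not state the model's form; as an assumed stand-in for "a simple predictive model" of bilayer bending, the sketch below uses the classical Timoshenko bimetal-strip formula, with the swelling mismatch strain playing the role of thermal mismatch. All material values are illustrative.

```python
# Classical Timoshenko bilayer-strip curvature estimate, offered only as
# an assumed stand-in for the paper's predictive model. Values are made up.
def bilayer_curvature(eps, t1, t2, E1, E2):
    """Curvature (1/m) of a two-layer strip with mismatch strain eps.
    t1, t2: layer thicknesses (m); E1, E2: layer elastic moduli (Pa)."""
    m, n = t1 / t2, E1 / E2
    h = t1 + t2
    return (6 * eps * (1 + m) ** 2) / (
        h * (3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1 / (m * n)))
    )

# Example: swollen elastomer layer on a shape-memory-polymer layer.
kappa = bilayer_curvature(eps=0.05, t1=1e-3, t2=1e-3, E1=2e6, E2=1e9)
print("predicted curvature: %.4f 1/m" % kappa)
```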

  • RESEARCH ARTICLE
    Hongzhou Hu, Zhihua Zhong

    To study the durability of a passenger car, this work investigates numerical simulation techniques based on an explicit–implicit approach in which substructure techniques are used to reduce the simulation time, allowing full vehicle dynamic analyses to be performed on a timescale that is difficult or impossible to achieve with conventional finite element methods (FEM). The model used here includes all necessary nonlinearities in order to maintain accuracy, and all key components of the car structure are modeled with deformable materials. Tire–road interactions are modeled in the explicit package with contact-impact interfaces with arbitrary frictional and geometric properties. Key parameters of the responses of the car driven on six different kinds of test road surfaces are examined and compared with experimental values. It can be concluded that the explicit–implicit co-simulation techniques used here are efficient and accurate enough for engineering purposes. This paper also discusses the limitations of the proposed method and outlines possible improvements for future work.

  • RESEARCH ARTICLE
    Fenghua Li, Hui Li, Ben Niu, Jinjun Chen

    With the rapid development of information technology and the continuous evolution of personalized services, huge amounts of data are accumulated by large Internet companies in the process of serving users. Moreover, dynamic data interactions increase the intentional/unintentional persistence of private information in different information systems. However, problems such as the cask principle of preserving private information among different information systems and the difficulty of tracing the source of privacy violations are becoming increasingly serious. Therefore, existing privacy-preserving schemes cannot provide systematic privacy preservation. In this paper, we examine the links of the information life-cycle, such as information collection, storage, processing, distribution, and destruction. We then propose a theory of privacy computing and a key technology system that includes a privacy computing framework, a formal definition of privacy computing, four principles that should be followed in privacy computing, algorithm design criteria, evaluation of the privacy-preserving effect, and a privacy computing language. Finally, we employ four application scenarios to describe the universal application of privacy computing, and discuss the prospect of future research trends. This work is expected to guide theoretical research on user privacy preservation within open environments.