Semantic Networks for Engineering Design: State of the Art and Future Directions

2021 ◽  
pp. 1-45
Author(s):  
Ji Han ◽  
Serhad Sarica ◽  
Feng Shi ◽  
Jianxi Luo

Abstract In the past two decades, there has been increasing use of semantic networks in engineering design to support various activities, such as knowledge extraction, prior-art search, and idea generation and evaluation. Leveraging large-scale pre-trained graph knowledge databases to support engineering-design-related natural language processing (NLP) tasks has attracted growing interest in the engineering design research community. This paper therefore surveys the state of the art in semantic networks for engineering design and proposes future research directions for building and utilizing large-scale semantic networks as knowledge bases to support engineering design research and practice. The survey shows that WordNet, ConceptNet, and other semantic networks that contain common-sense knowledge or are trained on non-engineering data sources are primarily used by engineering design researchers to develop methods and tools. Meanwhile, there are emerging efforts to construct engineering- and technical-contextualized semantic network databases, such as B-Link and TechNet, by retrieving data from technical data sources and employing unsupervised machine learning approaches. On this basis, we recommend six strategic future research directions to advance the development and use of large-scale semantic networks for artificial intelligence applications in engineering design.
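At their core, semantic network databases such as ConceptNet or TechNet expose term-to-term associations with relatedness weights. A minimal sketch of how such a network could be queried for design-related terms is shown below; the graph, terms, and weights are illustrative toy data, not drawn from any real database.

```python
# Toy sketch of querying a semantic network for related engineering terms.
# The edges and weights below are invented for illustration only.
from collections import defaultdict

# Each edge carries a relatedness weight in [0, 1].
EDGES = [
    ("gear", "transmission", 0.9),
    ("gear", "bearing", 0.7),
    ("bearing", "lubrication", 0.8),
    ("transmission", "clutch", 0.85),
]

graph = defaultdict(list)
for a, b, w in EDGES:
    graph[a].append((b, w))
    graph[b].append((a, w))  # treat associations as undirected

def related_terms(term, top_k=3):
    """Return the top-k most related terms, sorted by descending weight."""
    return sorted(graph[term], key=lambda t: -t[1])[:top_k]

print(related_terms("gear"))  # → [('transmission', 0.9), ('bearing', 0.7)]
```

Real systems differ mainly in scale (millions of terms mined from patents or publications) and in how the weights are learned, but the retrieval interface is essentially this kind of weighted-neighbor lookup.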

2021 ◽  
Vol 1 ◽  
pp. 2621-2630
Author(s):  
Ji Han ◽  
Serhad Sarica ◽  
Feng Shi ◽  
Jianxi Luo

Abstract There has been growing use of semantic networks in the past decade, such as leveraging large-scale pre-trained graph knowledge databases for various natural language processing (NLP) tasks in engineering design research. This paper therefore surveys the research that has employed semantic networks in the engineering design research community. The survey reveals that engineering design researchers have primarily relied on WordNet, ConceptNet, and other common-sense semantic network databases trained on non-engineering data sources to develop methods or tools for engineering design. Meanwhile, there are emerging efforts to mine large-scale technical publication and patent databases to construct engineering-contextualized semantic network databases, e.g., B-Link and TechNet, to support NLP in engineering design. On this basis, we recommend future research directions for the construction and application of engineering-related semantic networks in engineering design research and practice.


2017 ◽  
Vol 139 (11) ◽  
Author(s):  
Feng Shi ◽  
Liuqing Chen ◽  
Ji Han ◽  
Peter Childs

With the advent of the big-data era, the massive information stored in electronic and digital forms on the internet has become a valuable resource for knowledge discovery in engineering design. Traditional document retrieval methods based on document indexing focus on retrieving individual documents related to a query but are incapable of discovering the various associations between individual knowledge concepts. Ontology-based technologies, which can extract the inherent relationships between concepts using advanced text-mining tools, can be applied to improve design information retrieval in large-scale unstructured textual data environments. However, few publicly available ontology databases take a design and engineering perspective when establishing the relations between knowledge concepts. This paper develops a "WordNet" focused on design and engineering associations by integrating text-mining approaches to construct an ontology network through unsupervised learning. Probability and velocity network analyses with different statistical behaviors are then applied to evaluate the degree of correlation between concepts for design information retrieval. The validation results show that probability and velocity analysis on the constructed ontology network can help recognize highly related, complex design and engineering associations between elements. Finally, an engineering design case study demonstrates the use of the constructed semantic network for design relation retrieval in a real-world project.


2016 ◽  
Vol 24 (1) ◽  
pp. 66-91 ◽  
Author(s):  
Rita Orji ◽  
Karyn Moffatt

The evolving field of persuasive and behavior-change technology is increasingly targeted at influencing behavior in the area of health and wellness. This paper provides an empirical review of 16 years (85 papers) of literature on persuasive technology for health and wellness to: (1) answer important questions regarding the effectiveness of persuasive technology for health and wellness, (2) summarize and highlight trends in the technology design, research methods, motivational strategies, theories, and health behaviors targeted by research to date, (3) uncover pitfalls of existing persuasive technological interventions for health and wellness, and (4) suggest directions for future research.


Author(s):  
William C. Regli

Abstract This paper describes our initial efforts to deploy a digital library to support engineering design and manufacturing. This experimental testbed, The Engineering Design Repository, is an effort to collect and archive public-domain engineering data for use by researchers and engineering professionals. CAD knowledge bases are vital to engineers, who search through vast amounts of corporate legacy data and navigate online catalogs to retrieve precisely the right components for assembly into new products. This research begins to address the critical need for improved computational methods for reasoning about complex geometric and engineering information. In particular, we focus on the archival and reuse of design and manufacturing data for mechatronic systems. This paper presents a description of the research problem and an overview of the initial architecture of the testbed.


2023 ◽  
Vol 55 (1) ◽  
pp. 1-39
Author(s):  
Thanh Tuan Nguyen ◽  
Thanh Phuong Nguyen

Representing dynamic textures (DTs) plays an important role in many real-world applications in the computer vision community. Due to the turbulent and non-directional motions of DTs, along with the negative impacts of factors such as environmental changes, noise, and illumination, efficiently analyzing DTs has raised considerable challenges for state-of-the-art approaches. Over the past 20 years, many different techniques have been introduced to handle these well-known issues and enhance performance. These methods have made valuable contributions, but the problems have been dealt with only incompletely, particularly in recognizing DTs on large-scale datasets. In this article, we present a comprehensive taxonomy of DT representation in order to give a thorough overview of the existing methods along with overall evaluations of their performance. Accordingly, we arrange the methods into six canonical categories, each presented through its principal methodological stream and related variants. The effectiveness of the state-of-the-art methods is then investigated and thoroughly discussed with respect to quantitative and qualitative evaluations of DT classification on benchmark datasets. Finally, we point out several potential applications and the remaining challenges that should be addressed in future directions. In comparison with the two existing shallow DT surveys (the first is out of date, having been published in 2005, while the newer one, published in 2016, provides only an inadequate overview), we believe our comprehensive taxonomy not only gives target readers a better view of DT representation but also stimulates future research activities.


Author(s):  
Pattabiraman V. ◽  
Parvathi R.

Natural data arising directly from various sources, such as text, images, video, audio, and sensor data, comes with an inherent property of having very large numbers of dimensions or features. While these features add richness and perspective to the data, the sparsity associated with them adds computational complexity to learning and makes the data difficult to visualize and interpret, thus requiring large-scale computational power to extract insights. This is famously called the "curse of dimensionality." This chapter discusses the methods by which the curse of dimensionality is addressed using conventional methods and analyzes their performance on complex datasets. It also discusses the advantages of nonlinear methods over linear methods, and of neural networks, which can be a better approach compared to other nonlinear methods. Finally, it discusses future research areas, such as the application of deep learning techniques as a cure for this curse.
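The most common conventional linear cure the chapter alludes to is principal component analysis: projecting high-dimensional data onto its top principal components. A minimal sketch via the singular value decomposition follows; the data here is random toy data purely for illustration.

```python
# Toy sketch of linear dimensionality reduction (PCA) via SVD.
# X is random illustrative data: 200 samples with 50 features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))

def pca_reduce(X, k):
    """Project X onto its first k principal components."""
    Xc = X - X.mean(axis=0)                       # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # scores in reduced space

Z = pca_reduce(X, 5)
print(Z.shape)  # (200, 5)
```

The reduced representation keeps the directions of greatest variance, which is exactly what nonlinear methods (manifold learning, autoencoders) generalize when the data does not lie near a linear subspace.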


Author(s):  
Helena Hashemi Farzaneh ◽  
Lorenz Neuner

Abstract Much of the work in design research focuses on the development of methods and tools to support engineering designers. Many of these tools are nowadays implemented in software. Due to the strongly growing use of computers and smart devices over the last two decades, users' expectations have increased dramatically. In particular, users expect good usability, for example, little effort required to learn the software. Therefore, the usability evaluation of design software tools is crucial: a software tool with poor usability will not be used in industrial practice. Recommendations for the usability evaluation of software often stem from the field of Human-Computer Interaction. The aim of this paper is to tailor these general approaches to the specific needs of engineering design. In addition, we propose a method to analyse the results of the evaluation and derive suggestions for improving the design software tool. We apply the usability evaluation method to a use case, the KoMBi software tool for bio-inspired design. The case study provides additional insights with regard to problem, cause, and improvement categories.


2017 ◽  
Vol 11 (3) ◽  
pp. 10 ◽  
Author(s):  
Kirsti Klette ◽  
Marte Blikstad-Balas ◽  
Astrid Roe

Abstract Educational research into instructional quality would benefit from macro- and meso-level instructional data – such as achievement data or large-scale student surveys – in relation to data from the micro level – such as detailed analyses of classroom practices. Several scholars have specifically asked for studies that correlate achievement data with records of learning processes and teaching strategies, and ongoing projects attempting to do so have shown promising results. Linking different data sources on instructional quality is quite demanding because it requires a concerted effort by researchers from different fields of expertise and different traditions. A main ambition of our ongoing research project is precisely to advance such integration. As the title of the project reveals, we are dedicated to Linking Instruction and Student Achievement (LISA). In this article, we start by providing a theoretical background and the status of knowledge related to instructional quality. We go on to argue that video data has shown particular promise in studies aiming to obtain systematic data from a range of classrooms in order to compare classroom practices. We then present the three components of the LISA project's design – student perception surveys, systematic classroom observation, and achievement gains on national tests – and the value of combining these three data sources. Finally, we outline some of our findings thus far and point to future research possibilities.
Key words: instructional quality; classroom practices; video studies; mathematics; language arts


2021 ◽  
Vol 9 ◽  
pp. 1061-1080
Author(s):  
Prakhar Ganesh ◽  
Yao Chen ◽  
Xin Lou ◽  
Mohammad Ali Khan ◽  
Yin Yang ◽  
...  

Abstract Pre-trained Transformer-based models have achieved state-of-the-art performance on various Natural Language Processing (NLP) tasks. However, these models often have billions of parameters and are thus too resource-hungry and computation-intensive to suit low-capability devices or applications with strict latency requirements. One potential remedy is model compression, which has attracted considerable research attention. Here, we summarize the research in compressing Transformers, focusing on the especially popular BERT model. In particular, we survey the state of the art in compression for BERT, clarify the current best practices for compressing large-scale Transformer models, and provide insights into the workings of various methods. Our categorization and analysis also shed light on promising future research directions for achieving lightweight, accurate, and generic NLP models.
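One of the compression families typically covered in such surveys is magnitude-based weight pruning. A minimal numpy sketch is given below; the weight matrix is random toy data standing in for a trained projection matrix, and real pruning is usually followed by fine-tuning to recover accuracy.

```python
# Toy sketch of magnitude-based weight pruning for model compression.
# W is random illustrative data, shaped like one 768x768 BERT projection.
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(768, 768))

def prune_by_magnitude(W, sparsity):
    """Zero out the smallest-magnitude weights to reach the target sparsity."""
    k = int(W.size * sparsity)                    # number of weights to drop
    threshold = np.sort(np.abs(W).ravel())[k]     # k-th smallest magnitude
    return np.where(np.abs(W) >= threshold, W, 0.0)

W_pruned = prune_by_magnitude(W, 0.9)
print(np.mean(W_pruned == 0.0))  # fraction of zeroed weights, ~0.9
```

The surviving 10% of weights can then be stored in a sparse format; quantization and knowledge distillation, the other major families, trade accuracy for size along different axes.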


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1722
Author(s):  
Ivan Kovačević ◽  
Stjepan Groš ◽  
Karlo Slovenec

Intrusion Detection Systems (IDSs) automatically analyze event logs and network traffic in order to detect malicious activity and policy violations. Because IDSs produce a large number of false positives and false negatives, and because the technical nature of their alerts requires a lot of manual analysis, researchers have proposed approaches that automate the analysis of alerts to detect large-scale attacks and predict an attacker's next steps. Unfortunately, many such approaches use unique datasets and success metrics, making comparison difficult. This survey provides an overview of the state of the art in detecting and projecting cyberattack scenarios, with a focus on evaluation and the corresponding metrics. Representative papers were collected using Google Scholar and Scopus searches. Mutually comparable success metrics are calculated and several comparison tables are provided. Our results show that commonly used metrics are saturated on popular datasets and cannot assess the practical usability of the approaches. In addition, approaches with knowledge bases require constant maintenance, while data mining and ML approaches depend on the quality of available datasets, which, at the time of writing, are not representative enough to provide general knowledge regarding attack scenarios; more emphasis therefore needs to be placed on researching the behavior of attackers.
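The "mutually comparable success metrics" such surveys recompute are typically precision and recall over correlated alerts. A minimal sketch is shown below; the counts are hypothetical alert statistics, not results from any evaluated system.

```python
# Toy sketch of standard IDS evaluation metrics recomputed from raw counts.
def precision_recall(tp, fp, fn):
    """Precision and recall from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp)   # fraction of raised alerts that were real
    recall = tp / (tp + fn)      # fraction of real attack steps detected
    return precision, recall

# Hypothetical example: 80 correctly correlated attack steps,
# 20 false alarms, 20 missed steps.
p, r = precision_recall(tp=80, fp=20, fn=20)
print(p, r)  # 0.8 0.8
```

Because both metrics saturate near 1.0 on popular benchmark datasets, they say little about practical usability, which is the survey's central criticism.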

