Use of Data-Driven Simulation Modeling and Visual Computing Methods for Workplace Evaluation

2020, Vol 10 (20), pp. 7037
Author(s): Robert Ojstersek, Borut Buchmeister, Natasa Vujica Herzog

In the era of Industry 4.0, the dynamic adaptation of companies to global market demands plays a key role in ensuring financial and time-related sustainability. The financial accessibility, broad user-friendliness, and credible results of visual computing methods and data-driven simulation modeling enable a high degree of usability in small, medium, and large enterprises. This paper presents an innovative method for modeling and simulating manufacturing workplaces based on visual data captured with a spherical camera. The presented approach uses simulation scenarios to investigate the optimization of manual and collaborative workplaces. We evaluated and compared three simulated scenarios, the results of which highlight the potential for improvement in manufacturing productivity and cost. In addition, ergonomic analyses of a manual assembly workplace were performed using established evaluation metrics. The results show that a three-dimensional model of a workplace can be created from spherical-camera footage, and that this model not only describes the workplace dimensionally but also incorporates technological and other production parameters obtained through the analysis of manufacturing system videos. The appropriateness of introducing collaborative workstations is further confirmed by two ergonomic analyses, the Ovako working posture analysis system (OWAS) and rapid upper limb assessment (RULA), which demonstrate the sustainable limits of manual assembly workplaces.
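As a rough illustration of the scenario-based comparison described above, the following sketch contrasts a manual and a collaborative workstation over one shift using the simpy discrete-event library. The two-station setup and the cycle times are invented for illustration; in the paper, such parameters are derived from spherical-camera video analysis instead.

```python
import simpy

def workstation(env, name, cycle_time_s, finished):
    """Assemble one unit per cycle and count finished units."""
    while True:
        yield env.timeout(cycle_time_s)
        finished[name] += 1

# hypothetical cycle times (seconds), not taken from the paper
finished = {"manual": 0, "collaborative": 0}
env = simpy.Environment()
env.process(workstation(env, "manual", 45.0, finished))
env.process(workstation(env, "collaborative", 30.0, finished))
env.run(until=8 * 3600)  # one 8-hour shift

print(finished)  # {'manual': 640, 'collaborative': 960}
```

Scenario variants (e.g. different task allocations between worker and robot) can then be compared on throughput and, with cost rates attached, on manufacturing cost.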

2020
Author(s): Julia Hegy, Noemi Anja Brog, Thomas Berger, Hansjoerg Znoj

BACKGROUND Accidents and the resulting injuries are one of the world’s biggest health care issues, often causing long-term effects on psychological and physical health. With regard to psychological consequences, accidents can cause a wide range of burdens, including adjustment problems. Although adjustment problems are among the most frequent mental health problems, few specific interventions are available. The newly developed program SelFIT aims to remedy this situation by offering a low-threshold, web-based self-help intervention for psychological distress after an accident. OBJECTIVE The overall aim is to evaluate the efficacy and cost-effectiveness of the SelFIT program plus care as usual (CAU) compared to care as usual alone. Furthermore, the program’s user-friendliness, acceptance and adherence are assessed. We expect that the use of SelFIT is associated with a greater reduction in psychological distress, greater improvement in mental and physical well-being, and greater cost-effectiveness compared to CAU alone. METHODS Adults (n=240) showing adjustment problems due to an accident they experienced between 2 weeks and 2 years before entering the study will be randomized. Participants in the intervention group receive direct access to SelFIT; the control group receives access to the program after 12 weeks. There are 6 measurement points for both groups (baseline and after 4, 8, 12, 24 and 36 weeks). The main outcome is a reduction in the anxiety, depression and stress symptoms that indicate adjustment problems. Secondary outcomes include well-being, optimism, embitterment, self-esteem, self-efficacy, emotion regulation, pain, costs of health care consumption and productivity loss, as well as the program’s adherence, acceptance and user-friendliness. RESULTS Recruitment started in December 2019 and is ongoing. CONCLUSIONS To the best of our knowledge, this is the first study examining a web-based self-help program designed to treat adjustment problems resulting from an accident. If effective, the program could complement the still limited offer of secondary and tertiary psychological prevention after an accident. CLINICALTRIAL ClinicalTrials.gov NCT03785912; https://clinicaltrials.gov/ct2/show/NCT03785912?cond=NCT03785912&draw=2&rank=1


2021, pp. 204141962199349
Author(s): Jordan J Pannell, George Panoutsos, Sam B Cooke, Dan J Pope, Sam E Rigby

Accurate quantification of the blast load arising from detonation of a high explosive has applications in transport security, infrastructure assessment and defence. In order to design efficient and safe protective systems in such aggressive environments, it is of critical importance to understand the magnitude and distribution of loading on a structural component located close to an explosive charge. In particular, peak specific impulse is the primary parameter that governs structural deformation under short-duration loading. Within this so-called extreme near-field region, existing semi-empirical methods are known to be inaccurate, and high-fidelity numerical schemes are generally hampered by a lack of available experimental validation data. As such, the blast protection community is not currently equipped with a satisfactory fast-running tool for load prediction in the near-field. In this article, a validated computational model is used to develop a suite of numerical near-field blast load distributions, which are shown to follow a similar normalised shape. This forms the basis of the data-driven predictive model developed herein: a Gaussian function is fit to the normalised loading distributions, and a power law is used to calculate the magnitude of the curve according to established scaling laws. The predictive method is rigorously assessed against the existing numerical dataset, and is validated against new test models and available experimental data. High levels of agreement are demonstrated throughout, with typical variations of <5% between experiment/model and prediction. The new approach presented in this article allows the analyst to rapidly compute the distribution of specific impulse across the loaded face of a wide range of target sizes and near-field scaled distances and provides a benchmark for data-driven modelling approaches to capture blast loading phenomena in more complex scenarios.
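A minimal sketch of the fitting procedure named above (Gaussian shape plus power-law magnitude), with synthetic stand-in data; the variable names, parameter values and the exact functional forms beyond what the abstract states are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(theta, a, b):
    """Assumed normalised specific impulse vs angle of incidence theta (rad)."""
    return a * np.exp(-theta**2 / (2 * b**2))

# stand-in for a normalised near-field loading distribution extracted from a
# validated numerical model (synthetic data for illustration only)
theta = np.linspace(0.0, np.pi / 2, 50)
i_norm = np.exp(-theta**2 / (2 * 0.55**2)) \
         + 0.01 * np.random.default_rng(0).normal(size=50)

(a_fit, b_fit), _ = curve_fit(gaussian, theta, i_norm, p0=[1.0, 0.5])

def peak_specific_impulse(Z, k, alpha):
    """Assumed power law in scaled distance Z = R / W**(1/3)."""
    return k * Z**(-alpha)  # k, alpha would be fitted to the numerical dataset
```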


2021, Vol 11 (1)
Author(s): Martin Gajdošík, Karl Landheer, Kelley M. Swanberg, Christoph Juchem

In vivo magnetic resonance spectroscopy (MRS) is a powerful tool for biomedical research and clinical diagnostics, allowing for non-invasive measurement and analysis of small molecules from living tissues. However, currently available MRS processing and analytical software tools are limited in their potential for in-depth quality management, access to details of the processing stream, and user friendliness. Moreover, available MRS software focuses on selected aspects of MRS such as simulation, signal processing or analysis, necessitating the use of multiple packages and interfacing among them for biomedical applications. The freeware INSPECTOR comprises enhanced MRS data processing, simulation and analytical capabilities in a one-stop-shop solution for a wide range of biomedical research and diagnostic applications. Extensive data handling, quality management and visualization options are built in, enabling the assessment of every step of the processing chain with maximum transparency. The parameters of the processing can be flexibly chosen and tailored for the specific research problem, and extended confidence information is provided with the analysis. The INSPECTOR software stands out in its user-friendly workflow and potential for automation. In addition to convenience, the functionalities of INSPECTOR ensure rigorous and consistent data processing throughout multi-experiment and multi-center studies.
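To make the idea of a fully inspectable processing chain concrete, here is a generic sketch (explicitly not INSPECTOR's API) of a basic FID-to-spectrum pipeline that retains every intermediate result so each step can be quality-checked; the line-broadening and zero-filling parameters are arbitrary.

```python
import numpy as np

def process_fid(fid, dwell_time_s, lb_hz=2.0, zero_fill_factor=2):
    """Minimal MRS processing chain that keeps every intermediate step."""
    steps = {"raw": fid}
    t = np.arange(fid.size) * dwell_time_s
    steps["apodized"] = fid * np.exp(-np.pi * lb_hz * t)  # exponential line broadening
    pad = (zero_fill_factor - 1) * fid.size
    steps["zero_filled"] = np.pad(steps["apodized"], (0, pad))
    steps["spectrum"] = np.fft.fftshift(np.fft.fft(steps["zero_filled"]))
    return steps  # each entry can be plotted and inspected for quality

# toy complex FID: a single decaying resonance at 150 Hz
t = np.arange(2048) * 4e-4
fid = np.exp(2j * np.pi * 150.0 * t) * np.exp(-t / 0.08)
steps = process_fid(fid, dwell_time_s=4e-4)
```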


2020, Vol 17 (6), pp. 692-725
Author(s): Peter Krüger Andersen

The revised Markets in Financial Instruments Directive and Regulation (the MiFID II regime; see Directive 2014/65/EU (MiFID II) and Regulation (EU) 600/2014 (MiFIR)) is one of the most comprehensive reforms of market structure and investor protection regimes the world has yet seen. The MiFID II regime will affect the European – and likely the global – market structure for years to come. Based on relevant perspectives from the revised best execution regime under MiFID II, this article suggests that it is time to reduce complexity. It is argued that unless a sufficient degree of horizontal and vertical integration of the best execution regulation takes place, the policy objectives cannot be reached. Further, it is argued that the significant data exercise that comes with the new rules only serves end-investors if a sufficient level of data consistency can be achieved. From this outset, the article emphasises the increased importance of data in today’s EU financial regulation. The article includes relevant comparisons to the equivalent US rules on best execution.


2014, Vol 657, pp. 392-396
Author(s): Adela Ursanu Dragoş, Sergiu Stanciu, Nicanor Cimpoeşu, Mihai Dumitru, Ciprian Paraschiv

Entire or partial loss of function in the shoulder, elbow or wrist is an increasingly common ailment connected to a wide range of injuries and other conditions, including sports and occupational injuries, spinal cord injuries and strokes. Treatment of these problems generally relies on physiotherapy procedures. A growing number of metallic materials are continuously being developed to meet the requirements of different engineering applications, including the biomedical field. Several constructive models that can incorporate intelligent materials are analyzed to establish the advantages of using shape memory elements in mechanical implementations. The shape memory effect, superelasticity and damping capacity are unique characteristics of metallic alloys that demand careful consideration in both design and manufacturing processes. Current rehabilitation systems can be improved by using smart elements in motorized equipment such as robotic systems. Shape memory alloys, especially NiTi (nitinol), represent a very good alternative for actuation in equipment with moving parts owing to their very good actuation properties, low mass, small size, safety and user-friendliness. In this article, the actuation and force characteristics were analyzed to investigate the relationship between the bending angle and the actual actuation value.
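As a hedged sketch of how a bending angle can follow from wire actuation, the snippet below combines the standard Liang–Rogers cosine law for the martensite fraction during heating with a simple pulley kinematic; the transformation temperatures, recovery strain and geometry are invented, and the experimental relationship investigated in the article may differ.

```python
import numpy as np

def martensite_fraction(T_c, A_s=68.0, A_f=78.0):
    """Liang-Rogers cosine law on heating: fraction falls 1 -> 0 over A_s..A_f."""
    T_c = np.clip(T_c, A_s, A_f)
    return 0.5 * (np.cos(np.pi * (T_c - A_s) / (A_f - A_s)) + 1.0)

def bending_angle_deg(T_c, wire_len_m=0.15, recovery_strain=0.04, pulley_r_m=0.01):
    """Joint angle produced by SMA wire contraction routed over a pulley."""
    contraction = wire_len_m * recovery_strain * (1.0 - martensite_fraction(T_c))
    return np.degrees(contraction / pulley_r_m)

print(bending_angle_deg(np.array([60.0, 73.0, 85.0])))  # approx. [0, 17, 34] deg
```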


2021, Vol 143 (3)
Author(s): Suhui Li, Huaxin Zhu, Min Zhu, Gang Zhao, Xiaofeng Wei

Conventional physics-based or experiment-based approaches for gas turbine combustion tuning are time consuming and cost intensive. Recent advances in data analytics provide an alternative. In this paper, we present a cross-disciplinary study on the combustion tuning of an F-class gas turbine that combines machine learning with physical understanding. An artificial-neural-network-based (ANN) model is developed to predict the combustion performance (outputs), including NOx emissions, combustion dynamics, combustor vibrational acceleration, and turbine exhaust temperature. The inputs of the ANN model are identified by analyzing the key operating variables that affect combustion performance, such as the pilot and premixed fuel flows and the inlet guide vane angle. The ANN model is trained on field data from an F-class gas turbine power plant. The trained model describes the combustion performance with acceptable accuracy over a wide range of operating conditions. In combination with a genetic algorithm, the model is applied to optimize the combustion performance of the gas turbine. Results demonstrate that the data-driven method offers a promising alternative for combustion tuning at low cost and with fast turnaround.
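A minimal sketch of the ANN-plus-genetic-algorithm workflow, assuming three normalised inputs and two outputs and using synthetic stand-in data (no real engine data); the network size, the weighted-sum fitness and the GA operators are illustrative choices, not those of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# synthetic stand-in for field data: inputs are pilot fuel flow, premixed fuel
# flow and inlet guide vane angle (normalised); outputs are NOx and a
# combustion-dynamics amplitude (arbitrary units)
X = rng.uniform(0, 1, size=(500, 3))
nox = 5 + 20 * X[:, 0] - 8 * X[:, 1] + 3 * X[:, 2]
dyn = 1 + 4 * (X[:, 1] - 0.5) ** 2 + 0.5 * X[:, 0]
Y = np.column_stack([nox, dyn]) + rng.normal(0, 0.1, size=(500, 2))

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, Y)

def fitness(pop):
    """Lower predicted NOx and dynamics are better (assumed weighting)."""
    pred = model.predict(pop)
    return -(pred[:, 0] + 10.0 * pred[:, 1])

# simple genetic algorithm over the normalised operating envelope
pop = rng.uniform(0, 1, size=(40, 3))
for _ in range(50):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-20:]]                         # selection
    pairs = rng.integers(0, 20, size=(40, 2))
    pop = 0.5 * (parents[pairs[:, 0]] + parents[pairs[:, 1]])  # crossover
    pop = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, 1)  # mutation

best_operating_point = pop[np.argmax(fitness(pop))]
```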


2021
Author(s): Elton Figueiredo de Souza Soares, Renan Souza, Raphael Melo Thiago, Marcelo de Oliveira Costa Machado, Leonardo Guerreiro Azevedo

In our data-driven society, there are hundreds of data systems on the market, with a wide range of configuration parameters, making it very hard for enterprises and users to choose the most suitable one. There is a lack of representative empirical evidence to help users make an informed decision. Using benchmark results is a widely adopted practice, but just as there are many data systems, there are many benchmarks. This ongoing work presents the architecture and methods of a system that supports the recommendation of the most suitable data system for an application. We also illustrate how the recommendation would work in a fictitious scenario.
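A toy sketch of benchmark-driven recommendation in the spirit described above; the systems, metrics, weights and normalisation are invented for illustration and differ from the paper's architecture and fictitious scenario.

```python
# hypothetical benchmark results for two candidate data systems
benchmarks = {
    "system_a": {"throughput_ops": 120_000, "p99_latency_ms": 9.0, "cost_index": 0.7},
    "system_b": {"throughput_ops": 80_000, "p99_latency_ms": 3.5, "cost_index": 1.0},
}
# signed weights encoding the application's requirements
# (higher throughput is good; latency and cost are penalised)
weights = {"throughput_ops": 0.3, "p99_latency_ms": -0.5, "cost_index": -0.2}

def score(metrics):
    """Normalise each metric across systems, then apply the signed weights."""
    return sum(
        w * metrics[m] / max(b[m] for b in benchmarks.values())
        for m, w in weights.items()
    )

recommended = max(benchmarks, key=lambda name: score(benchmarks[name]))
print(recommended)  # 'system_b' under these invented weights
```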


2021, Vol 17 (2), pp. e1008635
Author(s): Gerrit Ansmann, Tobias Bollenbach

Many ecological studies employ general models that can feature an arbitrary number of populations. A critical requirement imposed on such models is clone consistency: If the individuals from two populations are indistinguishable, joining these populations into one shall not affect the outcome of the model. Otherwise a model produces different outcomes for the same scenario. Using functional analysis, we comprehensively characterize all clone-consistent models: We prove that they are necessarily composed from basic building blocks, namely linear combinations of parameters and abundances. These strong constraints enable a straightforward validation of model consistency. Although clone consistency can always be achieved with sufficient assumptions, we argue that it is important to explicitly name and consider the assumptions made: They may not be justified or limit the applicability of models and the generality of the results obtained with them. Moreover, our insights facilitate building new clone-consistent models, which we illustrate for a data-driven model of microbial communities. Finally, our insights point to new relevant forms of general models for theoretical ecology. Our framework thus provides a systematic way of comprehending ecological models, which can guide a wide range of studies.
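In assumed notation (the paper's formal definitions may differ), the clone-consistency requirement can be sketched as follows for population dynamics models:

```latex
% Dynamics of n populations with abundances x_i and parameters p_i:
\[
  \dot{x}_i = f_i^{(n)}(x_1,\dots,x_n;\ p_1,\dots,p_n), \qquad i = 1,\dots,n.
\]
% If populations j and k are clones (p_j = p_k), joining them into one
% population with abundance x_j + x_k must not affect any outcome:
\[
  f_i^{(n)}(\dots,x_j,\dots,x_k,\dots) = f_i^{(n-1)}(\dots,x_j+x_k,\dots)
  \quad \text{for } i \neq j,k,
\]
\[
  f_j^{(n)} + f_k^{(n)} = f_{j \cup k}^{(n-1)}(\dots,x_j+x_k,\dots).
\]
```

The paper's result is that the functions satisfying such constraints are necessarily built from linear combinations of parameters and abundances.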


Author(s): Awder Mohammed Ahmed, Adnan Mohsin Abdulazeez

Multi-label classification addresses problems in which more than one class label is assigned to each instance. Owing to digital technologies, many real-world multi-label classification tasks are high-dimensional, which degrades the performance of traditional multi-label classifiers. Feature selection is a common and successful approach to tackling this problem: it reduces dimensionality by retaining relevant features and eliminating redundant ones. Several feature selection methods have been successfully applied in multi-label learning. Most of them are wrapper methods that employ a multi-label classifier in their search: they run a classifier at each step, which incurs a high computational cost, and thus they suffer from scalability issues. Filter methods address this issue by evaluating feature subsets with information-theoretic criteria instead of running classifiers. Most existing studies and review papers deal with feature selection for single-label data, whereas multi-label classification has recently found a wide range of real-world applications such as image classification, emotion analysis, text mining, and bioinformatics. Moreover, researchers have recently focused on applying swarm intelligence methods to selecting prominent features of multi-label data. To the best of our knowledge, there is no review paper that surveys swarm-intelligence-based methods for multi-label feature selection. Thus, in this paper, we provide a comprehensive review of the different swarm intelligence and evolutionary computing methods for feature selection presented for multi-label classification tasks. To this end, we investigated most of the well-known and state-of-the-art methods and categorized them from different perspectives. We then summarize the main characteristics of the existing multi-label feature selection techniques and compare them analytically. We also introduce benchmarks, evaluation measures, and standard datasets to facilitate research in this field. Moreover, we performed experiments to compare existing works, and at the end of this survey, we present some challenges, issues, and open problems of this field to be considered by researchers in the future.
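To illustrate the family of methods surveyed, here is a minimal binary particle swarm optimisation (PSO) wrapper for multi-label feature selection on synthetic data; the classifier, fitness weighting and PSO hyperparameters are arbitrary choices for the sketch, not a recommendation from the survey.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import hamming_loss

rng = np.random.default_rng(0)
X, Y = make_multilabel_classification(n_samples=300, n_features=40, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

def fitness(mask):
    """Multi-label accuracy on the selected features, lightly penalised
    by the number of features kept."""
    if mask.sum() == 0:
        return 0.0
    cols = mask.astype(bool)
    clf = MultiOutputClassifier(LogisticRegression(max_iter=500)).fit(X_tr[:, cols], Y_tr)
    return (1 - hamming_loss(Y_te, clf.predict(X_te[:, cols]))) - 0.002 * mask.sum()

n_particles, n_feat = 12, X.shape[1]
pos = rng.integers(0, 2, size=(n_particles, n_feat)).astype(float)
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(15):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random(pos.shape) < 1 / (1 + np.exp(-vel))).astype(float)  # sigmoid transfer
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print(int(gbest.sum()), "features selected, fitness", round(pbest_f.max(), 3))
```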


2021
Author(s): Ol'ga Babina

In the monograph, the region is presented as a complex, multilevel socio-economic system consisting of many heterogeneous, interacting economic entities of different levels (economic agents and markets, management, resources and economic processes), which jointly organize the reproduction processes embedded, within the local territory, in the economic space of the national economy. The role of rational management of the socio-economic development of a region is currently increasing. Under such conditions, it is advisable to use strategic planning, which in turn is increasingly carried out with the help of simulation models. A simulation model in regional strategic planning allows government agencies to forecast their activities in the presence of various controllable and uncontrollable factors of the external and internal environment. This study supplements the list of strategic planning principles with principles oriented toward regional strategic planning processes that use simulation modeling, and proposes a methodology for organizing strategic planning processes at the meso-level using simulation modeling technology. The monograph is intended for a wide range of readers interested in the problems of regional strategic planning.
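As a toy illustration of simulation-based scenario analysis in regional planning (the monograph's models are far richer), the following sketch evolves gross regional product under a controllable investment rate and an uncontrollable external shock; all coefficients are invented.

```python
def simulate_grp(invest_rate, external_shock=0.0, years=10, grp=100.0):
    """Discrete-time toy model: GRP grows with investment, dented by shocks."""
    path = []
    for _ in range(years):
        grp *= 1.0 + 0.6 * invest_rate + external_shock  # assumed coefficients
        path.append(round(grp, 1))
    return path

baseline = simulate_grp(invest_rate=0.03)
stressed = simulate_grp(invest_rate=0.03, external_shock=-0.01)
boosted = simulate_grp(invest_rate=0.05, external_shock=-0.01)  # policy response
```

Comparing such paths is the essence of what a planning agency can do with a simulation model: test controllable levers against uncontrollable conditions before committing to a strategy.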

