KNOWLEDGE BASED EMPLOYMENT PROCESS – DATA DRIVEN RECRUITMENT

Author(s):  
Jelena Vemić Đurković ◽  
Ivica Nikolić ◽  
Slavica Siljanoska ◽  
...  

The purpose of this paper is to highlight the main benefits and challenges of using a data-driven recruitment system in enterprises. The trend of increasing digital presence in all fields requires new knowledge and skills from employees, and the sustainable development of an enterprise is increasingly based on human capital and investment in it. Under these business conditions there is, on the one hand, growing pressure to attract and hire the highest-quality employees more efficiently, which implies large investments in recruitment processes, and, on the other hand, pressure to justify those investments. A high-quality data-driven recruitment system provides a way to measure the contribution of the recruiting process to business success, to adequately manage existing recruitment programs, and to justify investments in their further development. A special part of this paper is dedicated to the trends and challenges of using data-driven recruitment in the context of the global COVID-19 pandemic.

2019 ◽  
Author(s):  
Guillaume A Rousselet ◽  
Cyril R Pernet ◽  
Rand R. Wilcox

The bootstrap is a versatile technique that relies on data-driven simulations to make statistical inferences. When combined with robust estimators, the bootstrap can afford much more powerful and flexible inferences than is possible with standard approaches such as t-tests on means. In this R tutorial, we use detailed illustrations of bootstrap simulations to give readers an intuition of what the bootstrap does and how it can be applied to solve many practical problems, such as building confidence intervals for many aspects of the data. In particular, we illustrate how to build confidence intervals for measures of location, including measures of central tendency, in the one-sample case, for two independent and two dependent groups. We also demonstrate how to compare correlation coefficients using the bootstrap and to perform simulations to determine if the bootstrap is fit for purpose for a particular application. The tutorial also addresses two widespread misconceptions about the bootstrap: that it makes no assumptions about the data, and that it leads to robust inferences on its own. The tutorial focuses on detailed graphical descriptions, with data and code available online to reproduce the figures and analyses in the article (https://osf.io/8b4t5/).
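The tutorial's code is in R (available at the OSF link above); as a language-neutral illustration of the core idea, the sketch below implements a percentile bootstrap confidence interval for a 20% trimmed mean in Python. The data, sample size, number of bootstrap samples, and estimator are illustrative choices, not taken from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def percentile_bootstrap_ci(x, estimator, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for a one-sample estimator."""
    x = np.asarray(x)
    # Resample with replacement and apply the estimator to each bootstrap sample.
    boot = np.array([estimator(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return estimator(x), (lo, hi)

# Example: 95% CI for a 20% trimmed mean, a robust measure of central tendency.
sample = rng.lognormal(mean=0.0, sigma=1.0, size=60)   # skewed data
trimmed_mean = lambda a: stats.trim_mean(a, proportiontocut=0.2)
estimate, (ci_low, ci_high) = percentile_bootstrap_ci(sample, trimmed_mean)
print(f"20% trimmed mean = {estimate:.3f}, 95% bootstrap CI = [{ci_low:.3f}, {ci_high:.3f}]")
```

The same function works for any one-sample estimator (median, other trimmed means, and so on), which is what makes the combination of the bootstrap with robust estimators so flexible.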


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0253940
Author(s):  
Jesús-Adrián Alvarez ◽  
Francisco Villavicencio ◽  
Cosmo Strozza ◽  
Carlo Giovanni Camarda

Empirical research on human mortality and extreme longevity suggests that the risk of death among the oldest-old ceases to increase and levels off at age 110. The universality of this finding remains in dispute for two main reasons: (i) high uncertainty around statistical estimates generated from scarce data, and (ii) the lack of country-specific comparisons. In this article, we estimate age patterns of mortality above age 105 using data from the International Database on Longevity, an exceptionally large and recently updated database comprising more than 13,000 validated records of long-lived individuals from eight populations. We show that, in all of them, similar mortality trajectories arise, suggesting that the risk of dying levels off after age 105. As more high-quality data become available, there is more evidence in support of a levelling-off of the risk of dying as a regularity of longevous populations.
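For intuition only (this is not the authors' estimation procedure, and all numbers below are simulated), the sketch shows what a mortality plateau implies: if the hazard is constant above age 105, occurrence-exposure death rates computed for each single year of age should hover around the same value rather than keep rising.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a constant hazard (plateau) above age 105 means
# remaining lifespans are exponentially distributed. A hazard of 0.6/year
# corresponds to an annual death probability of about 0.45.
hazard = 0.6
ages_at_death = 105 + rng.exponential(scale=1 / hazard, size=5000)

# Occurrence-exposure death rates by single year of age.
for a in range(105, 111):
    alive = ages_at_death >= a                                   # survivors to exact age a
    deaths = np.sum(alive & (ages_at_death < a + 1))             # deaths in [a, a+1)
    exposure = np.sum(np.clip(ages_at_death[alive] - a, 0, 1))   # person-years lived in [a, a+1)
    print(f"ages {a}-{a + 1}: deaths = {deaths}, rate = {deaths / exposure:.3f}")
# The estimated rates fluctuate around 0.6 at every age: a plateau, not an increase.
```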


2012 ◽  
Vol 2012 ◽  
pp. 1-8 ◽  
Author(s):  
Janet E. Squires ◽  
Alison M. Hutchinson ◽  
Anne-Marie Bostrom ◽  
Kelly Deis ◽  
Peter G. Norton ◽  
...  

Researchers strive to optimize data quality in order to ensure that study findings are valid and reliable. In this paper, we describe a data quality control program designed to maximize quality of survey data collected using computer-assisted personal interviews. The quality control program comprised three phases: (1) software development, (2) an interviewer quality control protocol, and (3) a data cleaning and processing protocol. To illustrate the value of the program, we assess its use in the Translating Research in Elder Care Study. We utilize data collected annually for two years from computer-assisted personal interviews with 3004 healthcare aides. Data quality was assessed using both survey and process data. Missing data and data errors were minimal. Mean and median values and standard deviations were within acceptable limits. Process data indicated that in only 3.4% and 4.0% of cases was the interviewer unable to conduct interviews in accordance with the details of the program. Interviewers’ perceptions of interview quality also significantly improved between Years 1 and 2. While this data quality control program was demanding in terms of time and resources, we found that the benefits clearly outweighed the effort required to achieve high-quality data.
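The data cleaning and processing phase of such a program lends itself to simple automated checks. The sketch below uses pandas; the column names, allowed ranges, and thresholds are hypothetical and not taken from the study, but illustrate how missing-data rates, out-of-range values, and summary statistics can be screened against pre-specified limits.

```python
import numpy as np
import pandas as pd

# Hypothetical survey extract: item scores expected on a 1-5 Likert scale.
df = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "item_1": [4, 5, np.nan, 3],
    "item_2": [2, 9, 4, 4],        # 9 is out of range
})

RANGES = {"item_1": (1, 5), "item_2": (1, 5)}   # allowed value ranges per item
MAX_MISSING = 0.05                               # maximum tolerated missing rate

report = {}
for col, (lo, hi) in RANGES.items():
    missing_rate = df[col].isna().mean()
    out_of_range = int(((df[col] < lo) | (df[col] > hi)).sum())
    report[col] = {
        "missing_rate": round(missing_rate, 3),
        "out_of_range": out_of_range,
        "mean": round(df[col].mean(), 2),
        "sd": round(df[col].std(), 2),
        "missing_ok": missing_rate <= MAX_MISSING,
    }

print(pd.DataFrame(report).T)
```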


2015 ◽  
Vol 6 (1) ◽  
Author(s):  
Jaqueline Kaleian Eserian ◽  
Márcia Lombardo

The validation of analytical methods is required to obtain high-quality data. For the pharmaceutical industry, method validation is crucial to ensure product quality with regard to both therapeutic efficacy and patient safety. The most critical step in validating a method is to establish a protocol containing well-defined procedures and criteria. A well-planned and organized protocol, such as the one proposed in this paper, results in a rapid and concise method validation procedure for quantitative high-performance liquid chromatography (HPLC) analysis.

Type: Commentary
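To illustrate the kind of acceptance criteria a validation protocol of this sort defines, the sketch below computes three common parameters from calibration and replicate data: linearity (r² of the calibration curve), precision (relative standard deviation of replicate injections), and accuracy (percent recovery). All numbers and thresholds are hypothetical, not those proposed in the paper.

```python
import numpy as np

# Hypothetical calibration data: nominal concentrations (µg/mL) vs peak areas.
conc = np.array([1, 2, 5, 10, 20, 50], dtype=float)
area = np.array([10.2, 20.5, 50.9, 101.8, 204.1, 509.6])

# Linearity: least-squares fit and coefficient of determination (r^2).
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# Precision: relative standard deviation (%) of replicate injections.
replicates = np.array([100.9, 101.4, 100.2, 101.0, 100.6, 101.1])
rsd = 100 * replicates.std(ddof=1) / replicates.mean()

# Accuracy: percent recovery of a spiked sample with known concentration.
measured, nominal = 19.6, 20.0
recovery = 100 * measured / nominal

print(f"r^2 = {r2:.4f} (criterion, e.g., >= 0.999)")
print(f"RSD = {rsd:.2f}% (criterion, e.g., <= 2%)")
print(f"Recovery = {recovery:.1f}% (criterion, e.g., 98-102%)")
```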


Author(s):  
Sethu Arun Kumar ◽  
Thirumoorthy Durai Ananda Kumar ◽  
Narasimha M Beeraka ◽  
Gurubasavaraj Veeranna Pujar ◽  
Manisha Singh ◽  
...  

Predicting novel small-molecule bioactivities for target deconvolution and hit-to-lead optimization in drug discovery research requires effective molecular representation. Previous reports have demonstrated that machine learning (ML) and deep learning (DL) have substantial implications in virtual screening, peptide synthesis, drug ADMET screening and biomarker discovery. These strategies can increase positive outcomes in the drug discovery process while keeping false-positive rates low, and can do so cost-effectively and quickly when high-quality data are acquired. This review discusses recent updates to AI tools as cheminformatics applications in medicinal chemistry for data-driven decision making in drug discovery, as well as the challenges of acquiring high-quality data in the pharmaceutical industry while improving small-molecule bioactivities and properties.
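A minimal sketch of the kind of ML pipeline surveyed here (not a method from the review itself; the molecules, activity labels, and model choice are illustrative, and RDKit plus scikit-learn are assumed to be installed): encode molecules as Morgan fingerprints and fit a random-forest regressor to predict bioactivity.

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

def morgan_fingerprint(smiles, radius=2, n_bits=2048):
    """Encode a molecule as a Morgan (ECFP-like) bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Hypothetical training set: SMILES strings with made-up pIC50 activity labels.
smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
pic50 = [4.2, 5.1, 6.3, 4.8]

X = np.array([morgan_fingerprint(s) for s in smiles])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, pic50)

# Predict the activity of a new (hypothetical) candidate molecule.
candidate = morgan_fingerprint("CC(=O)Nc1ccc(O)cc1")   # paracetamol
print(f"Predicted pIC50: {model.predict([candidate])[0]:.2f}")
```

Real pipelines of this kind differ mainly in scale (thousands of labelled compounds) and in the representation and model used, which is where data quality becomes the limiting factor the review emphasizes.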


2015 ◽  
Vol 71 (1) ◽  
pp. 46-58 ◽  
Author(s):  
Timothy R. Ramadhar ◽  
Shao-Liang Zheng ◽  
Yu-Sheng Chen ◽  
Jon Clardy

A detailed set of synthetic and crystallographic guidelines for the crystalline sponge method, based upon the analysis of expediently synthesized crystal sponges using third-generation synchrotron radiation, is reported. The procedure for the synthesis of the zinc-based metal–organic framework used in initial crystal sponge reports has been modified to yield competent crystals in 3 days instead of 2 weeks. These crystal sponges were tested on some small molecules, with two being unexpectedly difficult cases for analysis with in-house diffractometers in regard to data quality and proper space-group determination. These issues were easily resolved by the use of synchrotron radiation with data-collection times of less than an hour. One of these guests induced a single-crystal-to-single-crystal transformation to create a larger unit cell with over 500 non-H atoms in the asymmetric unit. This led to a non-trivial refinement scenario that afforded the best Flack x (absolute stereochemical determination) parameter to date for these systems. The structures did not require the use of PLATON/SQUEEZE or other solvent-masking programs, and are the highest-quality crystalline sponge systems reported to date where the results are strongly supported by the data. A set of guidelines for the entire crystallographic process was developed through these studies. In particular, the refinement guidelines include strategies to refine the host framework, locate guests and determine occupancies, discussion of the proper use of geometric and anisotropic displacement parameter restraints and constraints, and whether to perform solvent squeezing/masking. The single-crystal-to-single-crystal transformation process for the crystal sponges is also discussed. The presented general guidelines will be invaluable for researchers interested in using the crystalline sponge method at in-house diffraction or synchrotron facilities, will facilitate the collection and analysis of reliable high-quality data, and will allow construction of chemically and physically sensible models for guest structural determination.


2021 ◽  
Vol 4 ◽  
Author(s):  
Ruwan Wickramarachchi ◽  
Cory Henson ◽  
Amit Sheth

Scene understanding is a key technical challenge within the autonomous driving domain. It requires a deep semantic understanding of the entities and relations found within complex physical and social environments that is both accurate and complete. In practice, this can be accomplished by representing entities in a scene and their relations as a knowledge graph (KG). This scene knowledge graph may then be utilized for the task of entity prediction, leading to improved scene understanding. In this paper, we define and formalize this problem as Knowledge-based Entity Prediction (KEP). KEP aims to improve scene understanding by predicting potentially unrecognized entities, leveraging heterogeneous, high-level semantic knowledge of driving scenes. An innovative neuro-symbolic solution for KEP is presented, based on knowledge-infused learning, which 1) introduces a dataset-agnostic ontology to describe driving scenes, 2) uses an expressive, holistic representation of scenes with knowledge graphs, and 3) proposes an effective, non-standard mapping of the KEP problem to the problem of link prediction (LP) using knowledge-graph embeddings (KGE). Using real, complex and high-quality data from urban driving scenes, we demonstrate its effectiveness by showing that the missing entities may be predicted with high precision (0.87 Hits@1) while significantly outperforming the non-semantic/rule-based baselines.
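The mapping of KEP to link prediction can be made concrete with a toy example. The sketch below is an illustrative simplification, not the authors' implementation: the entities, triples, and hyperparameters are made up, and a bare-bones TransE training loop in NumPy stands in for a full KGE model. It embeds a tiny driving-scene KG and ranks candidate tails for an incomplete (scene, includes, ?) triple.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy driving-scene KG (hypothetical): KEP cast as link prediction (scene, includes, ?).
entities = ["scene_1", "scene_2", "Pedestrian", "TrafficLight", "Crosswalk", "Urban"]
relations = ["includes", "hasSceneType"]
triples = [
    ("scene_1", "includes", "Pedestrian"),
    ("scene_1", "includes", "Crosswalk"),
    ("scene_1", "hasSceneType", "Urban"),
    ("scene_2", "includes", "TrafficLight"),
    ("scene_2", "includes", "Crosswalk"),
    ("scene_2", "hasSceneType", "Urban"),
]
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

dim, lr, margin = 16, 0.05, 1.0
E = rng.normal(scale=0.1, size=(len(entities), dim))    # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))   # relation embeddings

def score(h, r, t):
    """TransE plausibility: higher (closer to 0) means h + r lands nearer to t."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

# Margin-based training with randomly corrupted tails (bare-bones TransE).
for epoch in range(500):
    for hs, rs, ts in triples:
        h, r, t = e_idx[hs], r_idx[rs], e_idx[ts]
        t_neg = rng.integers(len(entities))              # random negative tail
        pos_err = E[h] + R[r] - E[t]
        neg_err = E[h] + R[r] - E[t_neg]
        if margin + np.linalg.norm(pos_err) - np.linalg.norm(neg_err) > 0:
            g_pos = pos_err / (np.linalg.norm(pos_err) + 1e-9)
            g_neg = neg_err / (np.linalg.norm(neg_err) + 1e-9)
            E[h] -= lr * (g_pos - g_neg)
            R[r] -= lr * (g_pos - g_neg)
            E[t] += lr * g_pos
            E[t_neg] -= lr * g_neg

# Entity prediction for scene_2: rank every candidate tail for (scene_2, includes, ?).
h, r = e_idx["scene_2"], r_idx["includes"]
ranked = sorted(entities, key=lambda c: -score(h, r, e_idx[c]))
print("Top candidates for (scene_2, includes, ?):", ranked[:3])
```

In practice the paper works with large KGs built from real driving datasets and standard KGE tooling; the point of the toy example is only the shape of the problem: a potentially unrecognized scene entity becomes a missing tail in a triple.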


Soziale Welt ◽  
2020 ◽  
Vol 71 (1-2) ◽  
pp. 3-23 ◽  
Author(s):  
Irena Kogan ◽  
Frank Kalter

Given the recent surge in interest in refugee research, this editorial discusses whether the study of refugees’ migration and integration requires entirely new theoretical and methodological approaches. We make the case that refugee migration is a special type of migration and that refugee integration is subject to similar laws and regularities as the integration of all kinds of immigrants. Therefore, it should be studied using conventional theoretical and analytical approaches to empirical-analytical migration and integration research. Obviously, special conditions of refugee migration apply, such as specific patterns of refugees’ selectivity, health and resource endowment, settlement conditions, and reception or integration services. However, such peculiarities do not represent distinct mechanisms; they are simply background conditions for more general mechanisms. Contributions to this Special Issue, which all rely on new high-quality data from Germany, best highlight the universality of general mechanisms of immigrant integration, on the one hand, and the relevance of refugee migrants’ specific characteristics and conditions, on the other hand.

