A Cyber Physical System Crowdsourcing Inference Method Based on Tempering: An Advancement in Artificial Intelligence Algorithms

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Jia Liu ◽  
Mingchu Li ◽  
William C. Tang ◽  
Sardar M. N. Islam

Activity selection is critical for smart environments and Cyber-Physical Systems (CPSs) that provide timely, intelligent services, especially as the number of connected devices grows at an unprecedented speed. Because labels must be collected from various agents in CPSs, crowdsourcing inference algorithms are designed to help acquire accurate labels that involve high-level knowledge. However, the algorithms in the existing literature have limitations: they may incur extra budget, fail to scale appropriately, require knowledge of the prior distribution, be difficult to implement, or become trapped in poor local optima. In this paper, we provide a crowdsourcing inference method with variational tempering that recovers the ground truth, accounts for both the reliability of workers and the difficulty level of the tasks, and is guaranteed to reach a local optimum. Numerical experiments on real-world data indicate that our variational tempering inference algorithm outperforms existing state-of-the-art algorithms. This paper therefore provides a new, efficient algorithm for CPSs and machine learning and thus makes a new contribution to the literature.
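To make the general idea concrete, the following is a minimal sketch of tempered crowd-label aggregation: worker reliability and task labels are estimated alternately, and a decreasing temperature softens the vote weights early on, which is one way to avoid committing to a poor local optimum. The schedule, initialization, and binary-label setting are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def tempered_label_inference(votes, n_iters=20, t_start=4.0):
    """Infer binary ground-truth labels from crowd votes (tasks x workers,
    entries in {0, 1}) by alternating label estimation and worker
    reliability updates. A decreasing temperature flattens the vote
    weights early on; this is an illustrative sketch, not the paper's
    exact variational tempering algorithm."""
    n_tasks, n_workers = votes.shape
    reliability = np.full(n_workers, 0.7)      # initial trust in each worker
    labels = np.round(votes.mean(axis=1))      # majority vote as a start
    for it in range(n_iters):
        temp = 1.0 + (t_start - 1.0) * (1.0 - it / max(n_iters - 1, 1))
        # Log-odds weight of each worker, softened by the temperature.
        w = np.log(reliability / (1.0 - reliability)) / temp
        score = ((2 * votes - 1) * w).sum(axis=1)   # > 0 favours label 1
        labels = (score > 0).astype(float)
        # A worker's reliability is their agreement with the current labels.
        agree = (votes == labels[:, None]).mean(axis=0)
        reliability = np.clip(agree, 0.05, 0.95)
    return labels, reliability
```

With three reliable workers and one adversarial one, the reliable workers end up near the upper reliability clip and the adversarial worker near the lower one, so the inferred labels match the majority's truth.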

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Hang Yu ◽  
Yu Zhang ◽  
Pengxing Cai ◽  
Junyan Yi ◽  
Sheng Li ◽  
...  

In this study, a hybrid metaheuristic algorithm, the chaotic gradient-based optimizer (CGBO), is proposed. The gradient-based optimizer (GBO) is a novel metaheuristic inspired by Newton's method with two search strategies that ensure strong performance: the gradient search rule (GSR) and the local escaping operation (LEO). GSR uses the gradient method to enhance the exploitation ability and convergence rate, and LEO employs random operators to escape local optima. However, gradient-based metaheuristic algorithms have been shown to fall short in exploration. Chaotic local search (CLS), meanwhile, is an efficient search strategy with randomicity and ergodicity that is often used to improve global optimization algorithms. Accordingly, we incorporate CLS into GBO to strengthen its exploration ability and maintain high population diversity. In this study, CGBO is tested on 30 CEC2017 benchmark functions and a parameter optimization problem of the dendritic neuron model (DNM). Experimental results indicate that CGBO outperforms other state-of-the-art algorithms in terms of effectiveness and robustness.
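The CLS ingredient can be sketched in a few lines: a logistic map generates an ergodic sequence in (0, 1) that is mapped onto a shrinking neighbourhood of the current best point, and a move is kept only if it improves the objective. The shrink schedule, logistic-map parameter, and scalar perturbation are illustrative assumptions; the paper's CGBO embeds CLS inside the full GBO population loop.

```python
import numpy as np

def chaotic_local_search(f, x_start, bounds, n_steps=200, seed_val=0.7):
    """Refine a candidate solution with a chaotic local search (CLS).
    A logistic map (r = 4) drives chaotic perturbations within a
    shrinking radius around the incumbent; only improving moves are
    accepted. Illustrative sketch of the CLS idea."""
    lo, hi = bounds
    z = seed_val                                   # chaotic state in (0, 1)
    x_best = np.asarray(x_start, dtype=float)
    f_best = f(x_best)
    for k in range(n_steps):
        z = 4.0 * z * (1.0 - z)                    # logistic map update
        radius = (hi - lo) * (1.0 - k / n_steps)   # shrinking search radius
        cand = np.clip(x_best + radius * (z - 0.5), lo, hi)
        f_cand = f(cand)
        if f_cand < f_best:                        # greedy acceptance
            x_best, f_best = cand, f_cand
    return x_best, f_best
```

On a shifted sphere function the search quickly improves on its starting point, illustrating how the ergodic sequence probes the neighbourhood without an explicit gradient.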


Author(s):  
Travis Eiles ◽  
Patrick Pardy

Abstract This paper demonstrates a breakthrough method of visible laser probing (VLP), comprising an optimized 577 nm laser microscope, a visible-sensitive detector, and an ultimate-resolution gallium phosphide solid immersion lens, on the 10 nm node, achieving 110 nm resolution. This is 2x better than what is achieved with today's standard suite of probing systems using typical infrared (IR) wavelengths. Since VLP halves the spot diameter relative to IR methods, it is reasonable, on geometric grounds alone, to project that VLP with the 577 nm laser will meet industry needs for laser probing at both the 10 nm and 7 nm process nodes. Given its high level of optimization, including high resolution and a specialized solid immersion lens, this VLP technology is likely to be one of the last optically based fault isolation methods in successful use.
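The ~0.5x geometric claim follows directly from diffraction: the spot diameter scales as wavelength / (n · NA), so at fixed numerical aperture and immersion index the spot-size ratio is just the wavelength ratio. The IR wavelength (1064 nm), NA, and GaP refractive index (~3.3) below are illustrative assumptions, not values taken from the paper.

```python
def spot_diameter_nm(wavelength_nm, n_immersion, na):
    """Approximate diffraction-limited spot diameter using a
    Rayleigh-style 0.61 * lambda / (n * NA) estimate."""
    return 0.61 * wavelength_nm / (n_immersion * na)

# Visible probing with a GaP solid immersion lens vs. a typical IR probe.
vis = spot_diameter_nm(577.0, n_immersion=3.3, na=0.9)
ir = spot_diameter_nm(1064.0, n_immersion=3.3, na=0.9)
print(round(vis, 1), round(ir, 1), round(vis / ir, 2))  # prints 118.5 218.5 0.54
```

Under these assumed parameters the visible spot comes out near the 110 nm resolution reported, and the visible-to-IR ratio is 577/1064 ≈ 0.54, consistent with the ~0.5x reduction cited.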


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nick Le Large ◽  
Frank Bieder ◽  
Martin Lauer

Abstract For the application of an automated, driverless race car, we aim to ensure high map and localization quality for successful driving on previously unknown, narrow race tracks. To achieve this goal, it is essential to choose an algorithm that fulfills the requirements in terms of accuracy, computational resources, and run time. We propose both a filter-based and a smoothing-based Simultaneous Localization and Mapping (SLAM) algorithm and evaluate them on real-world data collected by a Formula Student Driverless race car. Accuracy is measured by comparing the SLAM-generated map to a ground-truth map acquired with high-precision Differential GPS (DGPS) measurements. The evaluation shows that both algorithms meet the required time constraints thanks to a parallelized architecture, although GraphSLAM consumes computational resources much faster than Extended Kalman Filter (EKF) SLAM. The analysis of the generated maps, however, shows that GraphSLAM outperforms EKF SLAM in terms of accuracy.
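One simple way to score a SLAM-generated landmark map against a DGPS-surveyed ground truth is a nearest-neighbour RMSE over landmark (cone) positions, sketched below. The matching rule and metric are illustrative assumptions; the paper's exact accuracy measure may differ.

```python
import numpy as np

def map_rmse(estimated, ground_truth):
    """Root-mean-square error between an estimated landmark map and a
    ground-truth map (e.g. DGPS-surveyed cone positions), matching each
    estimated landmark to its nearest ground-truth landmark. Both inputs
    are (N, 2) arrays of x/y positions in a common frame."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    # Pairwise distances (N_est x N_gt), then the nearest match per landmark.
    d = np.linalg.norm(est[:, None, :] - gt[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.sqrt(np.mean(nearest ** 2)))
```

A uniform 0.1 m offset of every estimated cone yields an RMSE of exactly 0.1 m, which makes the metric easy to sanity-check before applying it to real maps.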


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christian Crouzet ◽  
Gwangjin Jeong ◽  
Rachel H. Chae ◽  
Krystal T. LoPresti ◽  
Cody E. Dunn ◽  
...  

Abstract Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5–40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, a process prone to inter-individual variability that can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and an RGB color camera. To establish the ground truth, four users independently annotated Prussian blue-labeled CMHs. Compared to the ground truth, the deep learning and ratiometric approaches performed better than the phasor analysis approach. The deep learning approach was the most precise of the three methods; the ratiometric approach was the most versatile and maintained accuracy, albeit with less precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase processing speed while maintaining precision and accuracy.
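The ratiometric idea can be sketched as a per-pixel rule on RGB channels: Prussian blue stain is flagged where the blue channel dominates the red channel by a chosen ratio. The threshold value and channel pair here are illustrative assumptions; the paper tuned its ratiometric rule against manually annotated ground truth.

```python
import numpy as np

def prussian_blue_mask(rgb, ratio_threshold=1.4):
    """Segment Prussian-blue-stained pixels with a simple ratiometric
    rule: flag pixels whose blue channel exceeds the red channel by the
    given ratio. `rgb` is an (H, W, 3) uint8 image; threshold and
    channel choice are illustrative, not the paper's tuned values."""
    img = rgb.astype(float) + 1e-6          # avoid division by zero
    ratio = img[..., 2] / img[..., 0]       # blue / red
    return ratio > ratio_threshold

def cmh_area_fraction(rgb):
    """Fraction of the image flagged as stained - a proxy for CMH burden."""
    return float(prussian_blue_mask(rgb).mean())
```

On a synthetic image that is half blue stain and half pink tissue, the mask selects exactly the stained half, giving an area fraction of 0.5.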


2015 ◽  
Vol 114 (6) ◽  
pp. 3351-3358 ◽  
Author(s):  
Stefania de Vito ◽  
Marine Lunven ◽  
Clémence Bourlon ◽  
Christophe Duret ◽  
Patrick Cavanagh ◽  
...  

When we look at bars flashed against a moving background, we see them displaced in the direction of the upcoming motion (flash-grab illusion). It is still debated whether these motion-induced position shifts are low-level, reflexive consequences of stimulus motion or high-level compensation engaged only when the stimulus is tracked with attention. To investigate whether attention is a causal factor for this striking illusory position shift, we evaluated the flash-grab illusion in six patients with damaged attentional networks in the right hemisphere and signs of left visual neglect and six age-matched controls. With stimuli in the top, right, and bottom visual fields, neglect patients experienced the same amount of illusion as controls. However, patients showed no significant shift when the test was presented in their left hemifield, despite having equally precise judgments. Thus, paradoxically, neglect patients perceived the position of the flash more veridically in their neglected hemifield. These results suggest that impaired attentional processes can reduce the interaction between a moving background and a superimposed stationary flash, and indicate that attention is a critical factor in generating the illusory motion-induced shifts of location.


2010 ◽  
Vol 19 (01) ◽  
pp. 65-99 ◽  
Author(s):  
MARC POULY

Computing inference from a given knowledge base is one of the key competences of computer science. Numerous formalisms and specialized inference routines have therefore been introduced and implemented for this task; typical examples are Bayesian networks, constraint systems, and different kinds of logic. It is known today that these formalisms can be unified under a common algebraic roof called the valuation algebra. Based on this system, generic inference algorithms for the processing of arbitrary valuation algebras can be defined. Researchers benefit from this high level of abstraction to address open problems independently of the underlying formalism. It is therefore all the more astonishing that this theory has not found its way into concrete software projects: all modern programming languages provide generic sorting procedures, for example, but generic inference algorithms are still mythical creatures. NENOK breaks new ground and offers an extensive library of generic inference tools based on the valuation algebra framework. All methods are implemented as distributed algorithms that process local and remote knowledge bases in a transparent manner. Besides its main purpose as a software library, NENOK also provides a sophisticated graphical user interface to inspect the inference process and the graphical structures involved. This can be used for educational purposes, but also as a fast prototyping architecture for inference formalisms.
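A valuation algebra boils down to two operations that every formalism must supply: combination (joining knowledge) and marginalization (focusing on a subset of variables). The sketch below instantiates them for discrete probability potentials with binary variables; the class and variable names are hypothetical and NENOK itself is a Java library, so this is only an illustration of the abstraction, not its API.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Potential:
    """One concrete valuation: a discrete probability potential.
    `domain` is a tuple of variable names; `table` maps value tuples to
    non-negative numbers. Variables are binary here for brevity."""
    domain: tuple
    table: dict

    def combine(self, other):
        """Combination: pointwise product over the union of domains."""
        dom = self.domain + tuple(v for v in other.domain
                                  if v not in self.domain)
        tab = {}
        for vals in product((0, 1), repeat=len(dom)):
            assign = dict(zip(dom, vals))
            tab[vals] = (self.table[tuple(assign[v] for v in self.domain)]
                         * other.table[tuple(assign[v] for v in other.domain)])
        return Potential(dom, tab)

    def marginalize(self, keep):
        """Marginalization: sum out every variable not in `keep`."""
        keep = tuple(v for v in self.domain if v in keep)
        tab = {}
        for vals, p in self.table.items():
            key = tuple(vals[self.domain.index(v)] for v in keep)
            tab[key] = tab.get(key, 0.0) + p
        return Potential(keep, tab)
```

A generic inference algorithm never looks inside `table`; it only calls `combine` and `marginalize`, which is exactly why the same code can process Bayesian networks, constraint systems, or logics.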


PEDIATRICS ◽  
1977 ◽  
Vol 60 (3) ◽  
pp. 312-312
Author(s):  
P. H. Rhodes

Public image contributes to the value judgments made about medicine. Formerly this image was one of a devoted, caring, self-sacrificing, somewhat unworldly group of people, dedicated to their work for the suffering and diseased. But doctors are not separate from society, and they are affected by its values. These have been adopted by the profession, so that it is coming to be seen as no worse and no better than any other group of comparable education and training. Its status has diminished, and this has called into question its high level of compensation. Status cannot be maintained when its base has been eroded.


Author(s):  
Bo Wang ◽  
Xiaoting Yu ◽  
Chengeng Huang ◽  
Qinghong Sheng ◽  
Yuanyuan Wang ◽  
...  

The excellent feature extraction ability of deep convolutional neural networks (DCNNs) has been demonstrated in many image processing tasks, by which image classification can achieve high accuracy with only raw input images. However, the specific image features that influence the classification results are not readily determinable and what lies behind the predictions is unclear. This study proposes a method combining the Sobel and Canny operators and an Inception module for ship classification. The Sobel and Canny operators obtain enhanced edge features from the input images. A convolutional layer is replaced with the Inception module, which can automatically select the proper convolution kernel for ship objects in different image regions. The principle is that the high-level features abstracted by the DCNN, and the features obtained by multi-convolution concatenation of the Inception module must ultimately derive from the edge information of the preprocessing input images. This indicates that the classification results are based on the input edge features, which indirectly interpret the classification results to some extent. Experimental results show that the combination of the edge features and the Inception module improves DCNN ship classification performance. The original model with the raw dataset has an average accuracy of 88.72%, while when using enhanced edge features as input, it achieves the best performance of 90.54% among all models. The model that replaces the fifth convolutional layer with the Inception module has the best performance of 89.50%. It performs close to VGG-16 on the raw dataset and is significantly better than other deep neural networks. The results validate the functionality and feasibility of the idea posited.
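The edge-feature preprocessing step can be illustrated with a plain-NumPy Sobel operator: horizontal and vertical gradient kernels are correlated with the grayscale image and combined into an edge magnitude map, which is the kind of enhanced input the paper feeds to the DCNN. This is a minimal sketch; the paper also applies the Canny operator and works on ship imagery rather than toy arrays.

```python
import numpy as np

def sobel_edges(gray):
    """Sobel edge magnitude for a 2-D grayscale image, computed as a
    zero-padded 3x3 correlation with the horizontal and vertical Sobel
    kernels, combined via the Euclidean norm."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    g = np.pad(gray.astype(float), 1)       # zero padding keeps the shape
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            patch = g[i:i + h, j:j + w]     # shifted view for this tap
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)                 # gradient magnitude
```

On a vertical step edge the magnitude peaks along the transition and is zero in flat regions, which is precisely the contour information the classifier is meant to exploit.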


2021 ◽  
Vol 39 (4) ◽  
pp. 1-33
Author(s):  
Fulvio Corno ◽  
Luigi De Russis ◽  
Alberto Monge Roffarello

In the Internet of Things era, users are willing to personalize the joint behavior of their connected entities, i.e., smart devices and online services, by means of trigger-action rules such as "IF the entrance Nest security camera detects a movement, THEN blink the Philips Hue lamp in the kitchen." Unfortunately, the spread of newly supported technologies makes the number of possible combinations between triggers and actions grow continuously, motivating the need to assist users in discovering new rules and functionality, e.g., through recommendation techniques. To this end, we present a semantic Conversational Search and Recommendation (CSR) system able to suggest pertinent IF-THEN rules that can be easily deployed in different contexts, starting from an abstract user need. Through a conversational agent, the user communicates her current personalization intention by specifying a set of functionality at a high level, e.g., decreasing the temperature of a room when she leaves it. Stemming from this input, the system implements a semantic recommendation process that takes into account (a) the current user intention, (b) the connected entities owned by the user, and (c) the user's long-term preferences revealed by her profile. If not satisfied with the suggestions, the user can converse with the system to provide further feedback, i.e., a short-term preference, allowing the system to provide refined recommendations that better align with the original intention. We evaluate the system by running different offline experiments with simulated users and real-world data. First, we test the recommendation process in different configurations, and we show that recommendation accuracy and similarity with target items increase as the interaction between the algorithm and the user proceeds. Then, we compare the system with other similar baseline recommender systems. Results are promising and demonstrate its effectiveness in recommending IF-THEN rules that satisfy the user's current personalization intention.
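The core of such a recommender can be sketched as content-based ranking: candidate IF-THEN rules are filtered to the entities the user owns and scored by semantic-tag overlap (Jaccard similarity) with the abstract intention. The device names, tags, and scoring rule are hypothetical; the paper's system is conversational and uses a richer semantic model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    trigger: str          # e.g. "nest_camera.motion_detected" (hypothetical)
    action: str           # e.g. "hue_lamp.blink" (hypothetical)
    tags: frozenset       # semantic descriptors of what the rule does

def recommend(rules, intention_tags, owned_devices, top_k=3):
    """Rank candidate IF-THEN rules against an abstract user intention:
    keep only rules whose trigger and action devices the user owns, then
    sort by Jaccard overlap between rule tags and intention tags."""
    def device(part):
        return part.split(".")[0]

    candidates = [r for r in rules
                  if device(r.trigger) in owned_devices
                  and device(r.action) in owned_devices]

    def score(r):
        union = len(r.tags | intention_tags)
        return len(r.tags & intention_tags) / union if union else 0.0

    return sorted(candidates, key=score, reverse=True)[:top_k]
```

Feedback from the conversation could then be folded in by adjusting the intention tag set before re-ranking, mirroring the short-term preference refinement the abstract describes.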

