Self-Supervised Variational Auto-Encoders

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 747
Author(s):  
Ioannis Gatopoulos ◽  
Jakub M. Tomczak

Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoder (selfVAE), which utilizes deterministic and discrete transformations of data. This class of models allows both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where the transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and we show its benefits compared to the VAE. The flexibility of selfVAE in data reconstruction finds a particularly interesting use case in data compression tasks, where we can trade off memory for better data quality and vice versa. We present the performance of our approach on three benchmark image datasets (CIFAR-10, Imagenette64, and CelebA).
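The abstract names downscaling as one of the deterministic, self-supervised transformations used as an auxiliary latent variable. As a minimal illustration (not the paper's implementation), the sketch below shows average-pool downscaling of an image in NumPy, the kind of deterministic transform the model first generates and then conditions the full-resolution image on:

```python
import numpy as np

def downscale(x: np.ndarray, factor: int = 2) -> np.ndarray:
    """Average-pool an HxWxC image by an integer factor.

    In selfVAE, a deterministic transformation like this plays the role
    of a self-supervised latent variable: the model encodes/generates
    the downscaled image first, then conditions the full-resolution
    image on it.
    """
    h, w, c = x.shape
    assert h % factor == 0 and w % factor == 0
    x = x.reshape(h // factor, factor, w // factor, factor, c)
    return x.mean(axis=(1, 3))

# A 32x32 RGB image (CIFAR-10 resolution) downscaled to 16x16.
img = np.arange(32 * 32 * 3, dtype=np.float64).reshape(32, 32, 3)
small = downscale(img, factor=2)
```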

2021 ◽  
Vol 118 (15) ◽  
pp. e2101344118
Author(s):  
Qiao Liu ◽  
Jiaze Xu ◽  
Rui Jiang ◽  
Wing Hung Wong

Density estimation is one of the fundamental problems in both statistics and machine learning. In this study, we propose Roundtrip, a computational framework for general-purpose density estimation based on deep generative neural networks. Roundtrip retains the generative power of deep generative models, such as generative adversarial networks (GANs), while also providing estimates of density values, thus supporting both data generation and density estimation. Unlike previous neural density estimators that put stringent conditions on the transformation from the latent space to the data space, Roundtrip enables the use of much more general mappings, where the target density is modeled by learning a manifold induced from a base density (e.g., a Gaussian distribution). Roundtrip provides a statistical framework for GAN models in which an explicit evaluation of density values is feasible. In numerical experiments, Roundtrip exceeds state-of-the-art performance in a diverse range of density estimation tasks.
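Roundtrip itself handles general, non-invertible mappings via importance sampling around the learned manifold; the sketch below only illustrates the underlying change-of-variables idea in 1D with an invertible linear map from a Gaussian base density (the forward map g and its inverse h are illustrative assumptions, not the paper's learned networks):

```python
import math

def gaussian_pdf(z: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Density of N(mu, sigma^2) at z."""
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Forward map g(z) = 2z + 1 pushes the base density N(0, 1) to N(1, 4);
# the backward ("roundtrip") map h(x) = (x - 1) / 2 recovers z.
def density(x: float) -> float:
    z = (x - 1.0) / 2.0  # backward mapping h(x)
    jac = 0.5            # |dh/dx|, the change-of-variables correction
    return gaussian_pdf(z) * jac
```

For this invertible toy case the estimate matches the closed-form N(1, 2²) density exactly; Roundtrip's contribution is making this workable when no closed-form inverse exists.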


Author(s):  
L. Ometto ◽  
S. Challapalli ◽  
M. Polo ◽  
G. Cestari ◽  
A. Villagrossi ◽  
...  

AI and Ethics ◽  
2021 ◽  
Author(s):  
Steven Umbrello ◽  
Ibo van de Poel

Value sensitive design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML may lead to AI systems adapting in ways that ‘disembody’ the values embedded in them. To address this, we propose a threefold modified VSD approach: (1) integrating a known set of VSD principles (AI4SG) as design norms from which more specific design requirements can be derived; (2) distinguishing between values that are promoted and respected by the design to ensure outcomes that not only do no harm but also contribute to good; and (3) extending the VSD process to encompass the whole life cycle of an AI technology to monitor unintended value consequences and redesign as needed. We illustrate our VSD for AI approach with an example use case of a SARS-CoV-2 contact tracing app.


Author(s):  
Daniel Overhoff ◽  
Peter Kohlmann ◽  
Alex Frydrychowicz ◽  
Sergios Gatidis ◽  
Christian Loewe ◽  
...  

Purpose: The DRG-ÖRG IRP (Deutsche Röntgengesellschaft-Österreichische Röntgengesellschaft international radiomics platform) is a web-/cloud-based radiomics platform based on a public-private partnership. It offers the possibility of data sharing, annotation, validation, and certification in the field of artificial intelligence, radiomics analysis, and integrated diagnostics. In a first proof-of-concept study, automated myocardial segmentation and automated myocardial late gadolinium enhancement (LGE) detection using radiomic image features will be evaluated for myocarditis data sets. Materials and Methods: The DRG-ÖRG IRP can be used to create quality-assured, structured image data in combination with clinical data and subsequent integrated data analysis. It is characterized by the following performance criteria: the possibility of using multicentric networked data, automatically calculated quality parameters, processing of annotation tasks, contour recognition using conventional and artificial intelligence methods, and the possibility of targeted integration of algorithms. In a first study, a neural network pre-trained on cardiac CINE data sets was evaluated for segmentation of PSIR data sets. In a second step, radiomic features were applied for segmental detection of LGE in the same data sets, which were provided multicenter via the IRP. Results: First results show the advantages (data transparency, reliability, broad involvement of all members, continuous evolution, as well as validation and certification) of this platform-based approach. In the proof-of-concept study, the neural network achieved a Dice coefficient of 0.813 compared with the expert's segmentation of the myocardium. In segment-based myocardial LGE detection, the AUC was 0.73, and 0.79 after exclusion of segments with uncertain annotation. The evaluation and provision of the data take place at the IRP, taking into account the FAT (fairness, accountability, transparency) and FAIR (findable, accessible, interoperable, reusable) criteria. Conclusion: It could be shown that the DRG-ÖRG IRP can serve as a crystallization point for the generation of further individual and joint projects. The execution of quantitative analyses with artificial intelligence methods is greatly facilitated by the platform approach of the DRG-ÖRG IRP, since pre-trained neural networks can be integrated and scientific groups can be networked. In a first proof-of-concept study on automated segmentation of the myocardium and automated myocardial LGE detection, these advantages were successfully applied. Our study shows that with the DRG-ÖRG IRP, strategic goals can be implemented in an interdisciplinary way, that concrete proof-of-concept examples can be demonstrated, and that a large number of individual and joint projects can be realized in a participatory way involving all groups.
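The Dice coefficient reported above (0.813) measures the overlap between the network's segmentation and the expert's. A minimal sketch of its computation on binary masks (not the platform's own code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy masks: |A∩B| = 2, |A| = 3, |B| = 3 -> Dice = 4/6 ≈ 0.667
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
```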


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 198
Author(s):  
Stephen Fox

Active inference is a physics of life process theory of perception, action and learning that is applicable to natural and artificial agents. In this paper, active inference theory is related to different types of practice in social organization. Here, the term social organization is used to clarify that this paper does not encompass organization in biological systems. Rather, the paper addresses active inference in social organization that utilizes industrial engineering, quality management, and artificial intelligence alongside human intelligence. Social organization referred to in this paper can be in private companies, public institutions, other for-profit or not-for-profit organizations, and any combination of them. The relevance of active inference theory is explained in terms of variational free energy, prediction errors, generative models, and Markov blankets. Active inference theory is most relevant to the social organization of work that is highly repetitive. By contrast, there are more challenges involved in applying active inference theory for social organization of less repetitive endeavors such as one-of-a-kind projects. These challenges need to be addressed in order for active inference to provide a unifying framework for different types of social organization employing human and artificial intelligence.
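For a discrete hidden state s and a fixed observation o, the variational free energy mentioned above can be written as F = E_q[ln q(s) − ln p(o, s)] = KL(q ‖ p(s|o)) − ln p(o), so it is minimized exactly when q matches the true posterior. A minimal numerical sketch (the two-state generative model is an illustrative assumption, not from the paper):

```python
import numpy as np

def free_energy(q: np.ndarray, prior: np.ndarray, likelihood: np.ndarray) -> float:
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)].

    q          : approximate posterior over hidden states, shape (S,)
    prior      : p(s), shape (S,)
    likelihood : p(o | s) for the one observed o, shape (S,)
    """
    joint = prior * likelihood  # p(o, s)
    return float(np.sum(q * (np.log(q) - np.log(joint))))

prior = np.array([0.5, 0.5])
likelihood = np.array([0.9, 0.1])  # p(o | s) for the observed o
evidence = (prior * likelihood).sum()  # p(o)
posterior = prior * likelihood / evidence

# F equals -ln p(o) when q is the true posterior, and is larger for
# any other q; the gap is KL(q || p(s|o)).
```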


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Lukman E. Mansuri ◽  
D.A. Patel

Purpose: Heritage is the latent part of a sustainable built environment. Conservation and preservation of heritage is one of the United Nations' (UN) sustainable development goals. Many social and natural factors seriously threaten heritage structures by deteriorating and damaging the original fabric. Therefore, regular visual inspection of heritage structures is necessary for their conservation and preservation. Conventional practice relies on manual inspection, which demands considerable time and human resources. Inspection therefore calls for an innovative approach that is cheaper, faster, safer, and less prone to human error than manual inspection. This study aims to develop an automatic system of visual inspection for the built heritage. Design/methodology/approach: An artificial intelligence-based automatic defect detection system is developed using the faster R-CNN (faster region-based convolutional neural network) model of object detection. From the English and Dutch cemeteries of Surat (India), images of heritage structures were captured by digital camera to prepare the image data set, which was used for training, validation, and testing of the automatic defect detection model. During validation, the model's optimum detection accuracy was recorded as 91.58% for three types of defects: “spalling,” “exposed bricks,” and “cracks.” Findings: This study develops a model of automatic web-based visual inspection for heritage structures using the faster R-CNN and demonstrates detection of spalling, exposed bricks, and cracks in heritage structures. Comparison of the conventional (manual) and the developed automatic inspection systems reveals that the automatic system requires less time and staff. Therefore, routine inspection can be faster, cheaper, safer, and more accurate than the conventional inspection method. Practical implications: The study can improve inspection of built heritage by reducing inspection time and cost, eliminating chances of human error and accidents, and providing accurate and consistent information. It thereby contributes to the sustainability of the built heritage. Originality/value: To ensure the sustainability of built heritage, this study presents an artificial intelligence-based methodology for developing an automatic visual inspection system. An automatic web-based visual inspection system for the built heritage has not been reported in previous studies.
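Object detectors such as the faster R-CNN used in this study are commonly scored by matching predicted boxes to ground truth via intersection-over-union (IoU), typically at a 0.5 threshold. The sketch below shows that standard metric (not the study's own evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes offset by 5 in x: intersection 50, union 150 -> IoU = 1/3
```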


2021 ◽  
Vol 5 (1) ◽  
pp. 1-15
Author(s):  
Rubina Shaheen ◽  
Mir Kasi

The report presents the use of artificial intelligence in selected administrative agencies. An in-depth thematic analysis was conducted to review current trends: 12 institutions were selected, and their use of artificial intelligence across different departments was described in detail. The analysis yielded five major findings. First, government makes wide use of the artificial intelligence toolkit across federal and state administration. Almost half (45%) of the federal agencies evaluated have used AI and associated machine learning (ML) tools. AI tools are already enhancing agency operations across the full span of governance responsibilities: carrying out regulatory assignments concerning market efficiency, workplace safety, health care, and environmental protection; adjudicating government privileges and benefits ranging from intellectual property to disability; accessing, verifying, and analyzing risks to public safety and health; extracting essential information from government data streams, including consumer complaints; and communicating with citizens about their rights, welfare, asylum claims, and business ownership. The AI toolkit owned by government spans the complete scope of artificial intelligence techniques, from conventional machine learning to deep learning on natural language and image data. Despite the broad acceptance of AI, much remains to be done in this area by government. Recommendations are discussed at the end.

