Non-Invasive Ambient Intelligence in Real Life: Dealing with Noisy Patterns to Help Older People

Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3113 ◽  
Author(s):  
Miguel Ángel Antón ◽  
Joaquín Ordieres-Meré ◽  
Unai Saralegui ◽  
Shengjing Sun

This paper aims to contribute to the field of ambient intelligence from the perspective of real environments, where noise levels in datasets are significant, by showing how machine learning techniques can contribute to knowledge creation by promoting software sensors. The created knowledge can be made actionable as features that help deal with the problems of minimally labelled datasets. A case study is presented and analysed with the aim of inferring high-level rules that can help anticipate abnormal activities, and the potential benefits of integrating these technologies are discussed in this context. The contribution also analyses the use of the models for knowledge transfer when different sensors with different settings contribute to the noise levels. Finally, based on the authors’ experience, a framework for creating valuable, aggregated knowledge is proposed.
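The paper's own models are not reproduced here; as a minimal sketch of learning from a noisy, minimally labelled sensor dataset, semi-supervised label propagation (scikit-learn's LabelSpreading, an assumed stand-in rather than the authors' method) can act as a "software sensor" that infers activity labels from a handful of labelled examples:

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

# Synthetic ambient-sensor features: two well-separated activity
# clusters ("normal" vs "abnormal") with Gaussian noise.
normal = rng.normal(loc=0.0, scale=1.0, size=(100, 3))
abnormal = rng.normal(loc=4.0, scale=1.0, size=(100, 3))
X = np.vstack([normal, abnormal])
truth = np.array([0] * 100 + [1] * 100)

# Minimal labelling: only 5 examples per class are labelled; -1 marks
# the unlabelled majority of the dataset.
y = np.full(200, -1)
y[:5] = 0
y[100:105] = 1

# Propagate the few labels through the k-NN graph of the data.
model = LabelSpreading(kernel="knn", n_neighbors=10)
model.fit(X, y)

# The propagated labels play the role of a software sensor for
# abnormal activity on the unlabelled points.
pred = model.transduction_
accuracy = float((pred == truth).mean())
print(f"transductive accuracy: {accuracy:.2f}")
```

The clusters, feature dimensionality, and label budget above are illustrative assumptions, not values from the paper.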

2021 ◽  
Vol 14 (3) ◽  
pp. 1-21
Author(s):  
Roy Abitbol ◽  
Ilan Shimshoni ◽  
Jonathan Ben-Dov

The task of assembling fragments in a puzzle-like manner into a composite picture plays a significant role in the field of archaeology, as it supports researchers in their attempts to reconstruct historic artifacts. In this article, we propose a method for matching and assembling pairs of ancient papyrus fragments containing mostly unknown writings. Papyrus paper is manufactured from papyrus plants and therefore exhibits characteristic thread patterns resulting from the plant’s stems. The proposed algorithm is founded on the hypothesis that these thread patterns contain unique local attributes, such that nearby fragments show similar patterns reflecting the continuations of the threads. We posit that these patterns can be exploited using image processing and machine learning techniques to identify matching fragments. The algorithm and system we present support the quick, automated classification of matching pairs of papyrus fragments as well as the geometric alignment of the pairs against each other. The algorithm consists of a series of steps and is based on deep-learning and machine learning methods. The first step is to decompose the problem of matching fragments into the smaller problem of finding thread-continuation matches in local edge areas (squares) between pairs of fragments. This phase is solved using a convolutional neural network that ingests raw images of the edge areas and produces local matching scores. This stage yields very high recall but low precision. We therefore use these scores to decide on the matching of entire fragment pairs through an elaborate voting mechanism, enhanced with geometric alignment techniques from which we extract additional spatial information. Eventually, we feed all the data collected from these steps into a Random Forest classifier to produce a higher-order classifier capable of predicting whether a pair of fragments is a match.
Our algorithm was trained on a batch of fragments excavated from the Dead Sea caves and dated to circa the first century BCE. The algorithm shows excellent results on a validation set of similar origin and condition. We then ran the algorithm against a real-life set of fragments for which we have no prior knowledge or labeling of matches. This test batch is considered extremely challenging due to its poor condition and the small size of its fragments; numerous researchers have sought matches within it with very little success. Our algorithm’s performance on this batch was sub-optimal, returning a relatively large ratio of false positives. However, the algorithm proved quite useful by eliminating 98% of the possible matches, thus reducing the amount of work needed for manual inspection. Indeed, experts who reviewed the results identified some of the proposed matches as potentially true and referred them for further investigation.
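The final stage described above, aggregating local matching scores into pair-level votes and feeding them to a Random Forest, can be sketched as follows. The feature set, synthetic scores, and threshold are illustrative assumptions standing in for the paper's CNN outputs and voting mechanism:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def pair_features(local_scores):
    """Aggregate per-square local matching scores for one fragment pair
    into pair-level voting features: mean score, top score, and the
    fraction of squares voting "match" (score above 0.5)."""
    s = np.asarray(local_scores)
    return [s.mean(), s.max(), float((s > 0.5).mean())]

def make_pair(is_match):
    # Synthetic stand-in for CNN output: matching pairs tend to score
    # higher on their local edge squares than non-matching pairs.
    base = 0.7 if is_match else 0.3
    return rng.normal(base, 0.15, size=20).clip(0.0, 1.0)

labels = rng.integers(0, 2, size=300)
X = np.array([pair_features(make_pair(m)) for m in labels])

# Higher-order classifier over the aggregated voting features.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)
train_acc = clf.score(X, labels)
print(f"training accuracy: {train_acc:.2f}")
```

In the published system the spatial features from geometric alignment would be appended alongside these voting features before classification.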


2021 ◽  
Author(s):  
Chinh Luu ◽  
Quynh Duy Bui ◽  
Romulus Costache ◽  
Luan Thanh Nguyen ◽  
Thu Thuy Nguyen ◽  
...  

Author(s):  
Siam Islam ◽  
Popin Saha ◽  
Touhidul Chowdhury ◽  
Asif Sorowar ◽  
Raqeebir Rab

2021 ◽  
pp. 1-67
Author(s):  
Stewart Smith ◽  
Olesya Zimina ◽  
Surender Manral ◽  
Michael Nickel

Seismic fault detection using machine learning techniques, in particular the convolutional neural network (CNN), is becoming a widely accepted practice in the field of seismic interpretation. Machine learning algorithms are trained to mimic the capabilities of an experienced interpreter by recognizing patterns within seismic data and classifying them. Regardless of the method of seismic fault detection, interpretation or extraction of 3D fault representations from edge evidence or fault probability volumes is routine. Extracted fault representations are important to the understanding of the subsurface geology and are a critical input to upstream workflows including structural framework definition, static reservoir and petroleum system modeling, and well planning and de-risking activities. Efforts to automate the detection and extraction of geological features from seismic data have evolved in line with advances in computer algorithms, hardware, and machine learning techniques. We have developed an assisted fault interpretation workflow for seismic fault detection and extraction, demonstrated through a case study from the Groningen gas field of the Upper Permian, Dutch Rotliegend: a heavily faulted, subsalt gas field located onshore in the NE Netherlands. Supervised using interpreter-led labeling, we apply a 2D multi-CNN to detect faults within a 3D pre-stack depth-migrated seismic dataset. After prediction, we apply a geometric evaluation of the predicted faults, using principal component analysis (PCA) to produce geometric attribute representations (strike azimuth and planarity) of the fault prediction. The strike azimuth and planarity attributes are used to validate and automatically extract consistent 3D fault geometries, providing geological context to the interpreter and input to dependent workflows more efficiently.
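As an illustration of the PCA-based geometric evaluation, the following sketch estimates strike azimuth and planarity from a predicted fault's point cloud. The exact conventions (azimuth reference, planarity formula, coordinate axes) are assumptions for this example, not the published workflow:

```python
import numpy as np

def fault_geometry(points):
    """Estimate strike azimuth (degrees) and planarity of a fault
    point cloud via PCA (eigendecomposition of the covariance)."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    # Eigenvalues in ascending order; the smallest-eigenvalue vector
    # approximates the fault-plane normal.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centred.T))
    normal = eigvecs[:, 0]
    # Strike: the horizontal direction within the plane (normal x up),
    # reported as an azimuth from north (+y) toward east (+x), mod 180.
    strike = np.cross(normal, [0.0, 0.0, 1.0])
    azimuth = np.degrees(np.arctan2(strike[0], strike[1])) % 180.0
    # Planarity in [0, 1]: dominance of in-plane spread over
    # out-of-plane scatter, (l2 - l1) / l3 with l1 <= l2 <= l3.
    planarity = (eigvals[1] - eigvals[0]) / eigvals[2]
    return azimuth, planarity

# Synthetic fault: a near-vertical plane at x ~ 0, striking north.
rng = np.random.default_rng(2)
n = 500
pts = np.column_stack([
    rng.normal(0.0, 0.5, n),      # x (easting): small out-of-plane noise
    rng.uniform(0.0, 100.0, n),   # y (northing): extent along strike
    rng.uniform(0.0, 50.0, n),    # z: extent down the fault plane
])
az, pl = fault_geometry(pts)
print(f"strike azimuth: {az:.1f} deg, planarity: {pl:.2f}")
```

A low planarity or wildly inconsistent azimuth would flag a prediction as a candidate for rejection during automated extraction.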


Author(s):  
Rathimala Kannan ◽  
Intan Soraya Rosdi ◽  
Kannan Ramakrishna ◽  
Haziq Riza Abdul Rasid ◽  
Mohamed Haryz Izzudin Mohamed Rafy ◽  
...  

Data analytics is the essential component in deriving insights from data obtained from multiple sources. It represents the technology, methods and techniques used to obtain insights from massive datasets. As data volumes increase, companies are looking for ways to uncover relevant business insights beneath layers of data and information, to help them better understand new business ventures, opportunities, business trends and complex challenges. However, to date, while the extensive benefits of business data analytics to large organizations are widely published, micro, small, and medium-sized organizations have not fully grasped the potential benefits to be gained from data analytics using machine learning techniques. This study is guided by the research question of how data analytics using machine learning techniques can benefit small businesses. Using the case study method, this paper outlines how small businesses in two different industries, i.e., healthcare and retail, can leverage data analytics and machine learning techniques to gain competitive advantage from their data. Details of the respective benefits gained by the small business owners featured in the two case studies provide important answers to the research question.


Author(s):  
Olga Pyatetska

The article analyzes storytelling, a media instrument of modern communication widely used for commercial, advertising and corporate purposes to influence the recipient’s emotions, cognition and motivation. At the same time, storytelling based on real-life facts is one of the most effective learning techniques, one that promotes linguistic competence and enables various communication tasks to be solved. Analysis of storytelling showed that it gained particular relevance due to its principle of presenting information implicitly, unobtrusively influencing the audience and gaining its trust and loyalty, so that recipients make their own decisions and draw their own conclusions. It is established that, to reach a high level of influence on the target audience, a story must be true, emotional, relevant and new; contain an idea and a vivid character or image; have a dynamic plot, often with a surprise effect, a logical conclusion and intrigue sustained to the end; and (for electronic versions) be accompanied by quality content. Despite well-defined algorithms for story-building and typical content structures for its plot, there is a tendency to create storytelling outside the box. The main principle determining the theme, ideas and specifics of the language organization of stories is adaptation to the target audience. A separate analysis is given of direct-action storytelling, which has recently spread on social networks. Its purpose is to draw the reader’s attention to current problems and to influence the recipient’s emotions and behavior with verbal and non-verbal means. An example of such storytelling in Ukraine is the Ukraїner media project, which helped to represent the country in a new way and to realize the dreams of many ordinary citizens.
The study of different stories showed that storytelling uses such linguistic and stylistic means as emotionally coloured vocabulary (typical of the literary, mass-media and colloquial functional styles), foreign words, jargon, slang expressions, phraseologisms, metaphors, personifications, rhetorical constructions, etc. As for parts of speech, verbs are used most frequently because they intensify and dynamize the narrative.


Author(s):  
Hesham M. Al-Ammal

Detection of anomalies in a given data set is a vital step in several applications in cybersecurity, including intrusion detection, fraud detection, and social network analysis. Many of these techniques detect anomalies by examining graph-based data. Analyzing graphs makes it possible to capture relationships and communities, as well as anomalies. The advantage of using graphs is that many real-life situations can be easily modeled by a graph that captures their structure and inter-dependencies. Although anomaly detection in graphs dates back to the 1990s, recent research advances have applied machine learning methods to anomaly detection over graphs. This chapter concentrates on static graphs (both labeled and unlabeled) and summarizes some of these recent studies in machine learning for anomaly detection in graphs, covering methods such as support vector machines, neural networks, generative neural networks, and deep learning methods. The chapter reflects on the successes and challenges of using these methods in the context of graph-based anomaly detection.
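As a minimal sketch of machine-learning anomaly detection on a static, unlabeled graph, the following uses hand-crafted structural node features (degree and triangle count) with scikit-learn's IsolationForest. The graph construction, feature choice, and detector are illustrative assumptions, not a method from the chapter:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
n = 60

# Sparse symmetric random graph (Erdos-Renyi style), then wire nodes
# 0-5 into a dense clique to create a structural anomaly.
A = np.triu((rng.random((n, n)) < 0.05).astype(int), 1)
A = A + A.T
A[:6, :6] = 1 - np.eye(6, dtype=int)

def node_features(adj):
    """Per-node structural features: degree, and triangle count taken
    from diag(A^3) (closed 3-walks count each triangle twice)."""
    deg = adj.sum(axis=1)
    tri = np.diag(np.linalg.matrix_power(adj, 3)) / 2
    return np.column_stack([deg, tri])

X = node_features(A)

# Unsupervised detector over the structural features; clique members
# stand out through their high degree and triangle counts.
clf = IsolationForest(contamination=0.1, random_state=0)
flags = clf.fit_predict(X)          # -1 = anomaly, 1 = normal
anomalies = np.where(flags == -1)[0]
print("flagged nodes:", anomalies.tolist())
```

The same pipeline shape (graph → node embeddings → outlier detector) underlies many of the neural approaches the chapter surveys, with learned embeddings replacing the hand-crafted features.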

