A Deep Multi-Frame Super-Resolution Network for Dynamic Scenes

2021 ◽  
Vol 11 (7) ◽  
pp. 3285
Author(s):  
Ze Pan ◽  
Zheng Tan ◽  
Qunbo Lv

Multi-frame super-resolution techniques have flourished over the past two decades. However, little attention has been paid to combining deep learning with multi-frame super-resolution. One reason is that most deep-learning-based super-resolution methods cannot handle a variable number of input frames. Another is that accurate temporal and spatial information is hard to capture because the input images are misaligned. To solve these problems, we propose an optical-flow-based multi-frame super-resolution framework that can deal with any number of input frames. This framework makes full use of the input frames, allowing it to obtain better performance. In addition, we use a spatial subpixel alignment module for more accurate subpixel-wise spatial alignment and introduce a dual weighting module to generate weights for temporal fusion. Both modules lead to more effective and accurate temporal fusion. We compare our method with other state-of-the-art methods and conduct ablation studies. Both qualitative and quantitative analyses show that our method achieves state-of-the-art performance, demonstrating the advantage of the designed framework and the necessity of the proposed modules.
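The variable-frame fusion the abstract describes can be pictured as a per-pixel weighted average of frames that have already been warped onto the reference frame. This is a minimal numpy sketch, not the paper's learned modules; the function name and the uniform toy weights are assumptions:

```python
import numpy as np

def fuse_frames(aligned_frames, weights):
    """Per-pixel weighted temporal fusion of pre-aligned frames.

    aligned_frames: list of HxW arrays, already warped to the reference frame
                    (in the paper, via optical-flow-based subpixel alignment).
    weights: list of HxW non-negative weight maps (in the paper, produced by
             a dual weighting module); any number of frames is accepted,
             mirroring the variable-frame design.
    """
    frames = np.stack(aligned_frames)               # (N, H, W)
    w = np.stack(weights)                           # (N, H, W)
    # Normalize weights per pixel so they sum to 1 across frames.
    w = w / np.clip(w.sum(axis=0, keepdims=True), 1e-8, None)
    return (frames * w).sum(axis=0)                 # fused HxW image

# Two toy 2x2 "frames" with uniform weights: fusion reduces to the mean.
f1 = np.zeros((2, 2))
f2 = np.ones((2, 2))
fused = fuse_frames([f1, f2], [np.ones((2, 2)), np.ones((2, 2))])
```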

2020 ◽  
Vol 10 (23) ◽  
pp. 8754
Author(s):  
Wajeeha Sultan ◽  
Nadeem Anjum ◽  
Mark Stansfield ◽  
Naeem Ramzan

Salient-object detection is a fundamental and highly challenging problem in computer vision. This paper focuses on detecting salient objects, especially in low-contrast images. To this end, a hybrid deep-learning architecture is proposed in which features are extracted at both the local and the global level. These features are then integrated to extract the exact boundary of the object of interest in an image. Experiments were performed on five standard datasets, and the results were compared with state-of-the-art approaches. Both qualitative and quantitative analyses showed the robustness of the proposed architecture.


Over the past few years, deep-learning-based methods have shown encouraging results on one of the most complex tasks in computer vision and image processing: image inpainting. The difficulty of image inpainting stems from the need to fully and deeply understand the structure and texture of images in order to produce accurate and visually plausible results, especially when inpainting a relatively large region. Deep-learning methods usually employ convolutional neural networks (CNNs) whose filters treat all image pixels as valid, typically substituting the mean value for missing pixels. This results in artifacts and blurry inpainted regions that are inconsistent with the rest of the image. In this paper, a novel method is proposed for inpainting randomly shaped missing regions of variable size at arbitrary locations across the image. We employ dilated convolutions to compose multi-scale context information without any loss in resolution, and add a mask-modification step after each convolution operation. The proposed method also includes a global discriminator that considers image patches at multiple scales as well as the whole image; it is responsible for capturing the local continuity of image texture as well as the overall global image features. The performance of the proposed method is evaluated on two datasets (Places2 and Paris Street View), and a comparison with the recent state of the art is performed to demonstrate the effectiveness of our model in both qualitative and quantitative evaluations.
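The mask-modification step after each convolution can be illustrated with a simple rule: an output pixel counts as valid if at least one valid pixel falls inside its (possibly dilated) receptive field, so the valid region grows layer by layer. A minimal numpy sketch assuming that rule; the function name and the loop-based implementation are illustrative, not the paper's code:

```python
import numpy as np

def update_mask(mask, ksize=3, dilation=1):
    """One mask-update step after a (dilated) convolution.

    mask: HxW binary array, 1 = valid pixel, 0 = missing.
    An output pixel becomes valid if any valid input pixel lies inside
    its dilated receptive field; otherwise it stays missing.
    """
    h, w = mask.shape
    r = (ksize // 2) * dilation          # receptive-field radius
    padded = np.pad(mask, r)
    out = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            # Sample the dilated kernel positions around (i, j).
            window = padded[i:i + 2 * r + 1:dilation,
                            j:j + 2 * r + 1:dilation]
            out[i, j] = 1 if window.any() else 0
    return out

# A 5x5 mask with one valid pixel in the center: a single update with a
# 3x3 kernel grows the valid region into a 3x3 block around it.
m = np.zeros((5, 5), dtype=int)
m[2, 2] = 1
m1 = update_mask(m)
```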


Author(s):  
Jerrold L. Abraham

Inorganic particulate material of diverse types is present in the ambient and occupational environment, and exposure to such materials is a well-recognized cause of some lung diseases. To investigate the interaction of inhaled inorganic particulates with the lung, it is necessary to obtain quantitative information on the particulate burden of lung tissue in a wide variety of situations. The vast majority of diagnostic and experimental tissue samples (biopsies and autopsies) are fixed with formaldehyde solutions, dehydrated with organic solvents, and embedded in paraffin wax. Over the past 16 years, I have attempted to obtain maximal analytical use of such tissue with minimal preparative steps. Unique diagnostic and research data result from both qualitative and quantitative analyses of sections. Most of these data relate to inhaled inorganic particulates in lungs, but the basic methods are applicable to any tissue. The preparations are primarily designed for SEM use, but they are stable for storage and transport to other laboratories and for analysis with several other instruments (e.g., SIMS techniques).


Computers ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 37 ◽  
Author(s):  
Luca Cappelletti ◽  
Tommaso Fontana ◽  
Guido Walter Di Donato ◽  
Lorenzo Di Tucci ◽  
Elena Casiraghi ◽  
...  

Missing-data imputation has been a hot topic in the past decade, and many state-of-the-art works have proposed novel, interesting solutions applied in a variety of fields. Over the same period, the successful results achieved by deep-learning techniques have opened the way to their application to difficult problems where human skill cannot provide a reliable solution. Not surprisingly, some deep learners, mainly exploiting encoder-decoder architectures, have also been designed and applied to the task of missing-data imputation. However, most of the proposed imputation techniques have not been designed to tackle “complex data”, that is, high-dimensional data belonging to datasets with huge cardinality and describing complex problems. Specifically, they often require critical parameters to be set manually, or exploit complex architectures and/or training phases that make their computational load impracticable. In this paper, after clustering the state-of-the-art imputation techniques into three broad categories, we briefly review the most representative methods and then describe our data-imputation proposals, which exploit deep-learning techniques specifically designed to handle complex data. Comparative tests on genome sequences show that our deep-learning imputers outperform the state-of-the-art KNN-imputation method when filling gaps in human genome sequences.
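For context, the KNN-imputation baseline that the deep imputers are compared against can be sketched as follows: each missing entry is filled from the k rows nearest to the incomplete row, with distances computed on the columns both rows have observed. A minimal numpy illustration (the function name and squared-distance choice are assumptions), not the authors' implementation or scikit-learn's `KNNImputer`:

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill NaN entries using the k nearest complete-in-those-columns rows.

    X: 2D float array with np.nan marking gaps. Distances between rows are
    mean squared differences over their commonly observed columns.
    """
    X = np.asarray(X, dtype=float)
    filled = X.copy()
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        dists = []
        for j in range(X.shape[0]):
            # Skip self and rows that are also missing the needed columns.
            if j == i or np.isnan(X[j][miss]).any():
                continue
            common = ~np.isnan(X[i]) & ~np.isnan(X[j])
            if common.any():
                dists.append((np.mean((X[i][common] - X[j][common]) ** 2), j))
        neighbors = [j for _, j in sorted(dists)[:k]]
        # Impute with the neighbors' mean in the missing columns.
        filled[i, miss] = X[neighbors][:, miss].mean(axis=0)
    return filled

# Row 2 is missing its last value; its two nearest rows agree on 3.0 there.
X = np.array([[1.0, 2.0, 3.0],
              [1.1, 2.1, 3.0],
              [1.0, 2.0, np.nan]])
out = knn_impute(X, k=2)
```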


2020 ◽  
Vol 1 (3) ◽  
pp. 1017-1024 ◽  
Author(s):  
Alberto Cambrosio ◽  
Jean-Philippe Cointet ◽  
Alexandre Hannud Abdo

This article examines the thorny issue of the relationship (or lack thereof) between qualitative and quantitative approaches in Science and Technology Studies (STS). Although quantitative methods, broadly understood, played an important role in the beginnings of STS, these two approaches subsequently strongly diverged, leaving an increasing gap that only a few scholars have tried to bridge. After providing a short overview of the origins and development of quantitative analyses of textual corpora, we critically examine the state of the art in this domain. Focusing on the availability of advanced network structure analysis tools and Natural Language Processing workflows, we interrogate the fault lines between the increasing offer of computational tools in search of possible uses and the conceptual specifications of STS scholars wishing to explore the epistemic and ontological dimensions of techno-scientific activities. Finally, we point to possible ways to overcome the tension between ethnographic descriptions and quantitative methods while continuing to avoid the dichotomies (social/cognitive, organizing/experimenting) that STS has managed to discard.


Perception ◽  
1998 ◽  
Vol 27 (5) ◽  
pp. 541-552 ◽  
Author(s):  
Haruyuki Kojima ◽  
Randolph Blake

The linking of spatial information is essential for coherent space perception. A study is reported of the contribution of temporal and spatial alignment for the linkage of spatial elements in terms of depth perception. Stereo half-images were generated on the left and right halves of a large-screen video monitor and viewed through a mirror stereoscope. The half-images portrayed a black vertically oriented bar with two brackets immediately flanking this bar and placed in crossed or uncrossed disparity relative to the bar. A pair of thin white ‘bridging lines’ could appear on the black bar, always at zero disparity. Brackets and bridging lines could be flickered either in phase or out of phase. Observers judged whether the brackets appeared in front of or behind the black bar, with disparity varied. Compared to conditions when the bridging lines were absent, depth judgments were markedly biased toward “in front” when bridging lines and brackets flashed in temporal phase; this bias was much reduced when the bridging lines and brackets flashed out of phase. This biasing effect also depended on spatial offset of lines and brackets. However, perception was uninfluenced by the lateral separation between object and brackets.


PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e4082 ◽  
Author(s):  
Jun Liang ◽  
Kunyan Wei ◽  
Qun Meng ◽  
Zhenying Chen ◽  
Jiajie Zhang ◽  
...  

Background: As the world’s second-largest economy, China has launched health reforms for the second time and has invested significant funding in medical informatics (MI) since 2010; however, few studies have examined the outcomes of this ambitious effort.
Objective: This study analyzed the features of major MI meetings held in China and compared them with similar MI conferences in the United States, aiming to inform researchers about the outcomes of MI in China and the US from the professional-conference perspective and to encourage greater international cooperation for the advancement of medical informatics in China and, ultimately, the promotion of China’s health reform.
Methods: Qualitative and quantitative analyses of four MI meetings in China (CMIAAS, CHINC, CHITEC, and CPMI) and two in the US (AMIA and HIMSS) were conducted. The size, constituent parts, regional allocation of participants, topics, and fields of research for each meeting were determined and compared.
Results: From 1985 to 2016, approximately 45,000 individuals attended CMIAAS and CPMI (academic) or CHINC and CHITEC (industry), producing 5,085 documented articles. In contrast, in 2015 alone, 38,000 and 3,700 individuals attended the American HIMSS (industry) and AMIA (academic) conferences, respectively, with 1,926 papers published at the latter. Compared to HIMSS in 2015, the Chinese industry conference CHITEC ran 3 vs. 5 days, hosted 100 vs. 1,500+ vendors, and offered 10 vs. 250 sub-forums; compared to AMIA, the Chinese CMIAAS ran 2 vs. 8 days, hosted 5 vs. 65+ vendors, and offered 4 vs. 26 sub-forums. HIMSS and AMIA were more open, international, and comprehensive than the Chinese conferences.
Conclusions: The current state of MI in China can be characterized as “hot in industry application, and cold in academic research.” Considering China’s economic scale and its huge investment in MI, conference yield and attendee diversity remain low. This study demonstrates an urgent need to elevate the medical-informatics discipline in China and to expand its research fields in order to keep pace with the development of medical informatics in the US and other countries.


1980 ◽  
Vol 24 ◽  
pp. 91-97
Author(s):  
W. N. Schreiner ◽  
C. Surdukowski ◽  
R. Jenkins

During the past three years we have undertaken the development of a complete X-ray powder diffraction facility, with the goal of fully integrating experimental and analytical procedures. Such an approach potentially offers substantially improved performance over previously existing systems by virtue of its internal self-consistency, and it opens the possibility of significantly extending analytical procedures for both qualitative and quantitative analyses. Our work to date has resulted in improved performance and significant extensions in both areas, and today I will report on those advances in the area of qualitative analysis.


2017 ◽  
Vol 26 (03) ◽  
pp. 1750015 ◽  
Author(s):  
Sotiris Batsakis ◽  
Ilias Tachmazidis ◽  
Grigoris Antoniou

Representation of temporal and spatial information for the Semantic Web often involves qualitatively defined information (i.e., information described using natural-language terms such as “before” or “overlaps”), since precise dates or coordinates are not always available. This work proposes several temporal representations for time points and intervals, as well as spatial topological representations, in ontologies by means of OWL properties and reasoning rules in SWRL. All representations are fully compliant with existing Semantic Web standards and W3C recommendations. Although qualitative representations of temporal interval and point relations and of spatial topological relations exist, this is the first work to propose representations combining qualitative and quantitative information for the Semantic Web. In addition, several existing and proposed approaches are compared using different reasoners, and detailed experimental results are presented. The proposed approach is applied to topological relations (RCC5 and RCC8), supporting both qualitative and quantitative (i.e., coordinate-based) spatial relations. Experimental results illustrate that reasoning performance differs greatly between representations and reasoners. To the best of our knowledge, this is the first such experimental evaluation of both qualitative and quantitative Semantic Web temporal and spatial representations. In addition, querying performance using SPARQL is evaluated. Evaluation results demonstrate that extracting qualitative relations from quantitative representations using reasoning rules, and then querying the qualitative relations instead of directly querying the quantitative representations, increases performance at query time.
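The core idea of deriving qualitative relations from quantitative data can be illustrated outside the ontology: given numeric interval endpoints, an Allen-style qualitative term is computed once and can then be queried directly, which is why query-time performance improves. A minimal Python sketch covering a few of the thirteen Allen relations (the paper performs this derivation with SWRL rules over OWL properties; the function below is an illustration, not the paper's rule set):

```python
def interval_relation(a, b):
    """Derive a qualitative Allen-style relation from quantitative endpoints.

    a, b: (start, end) tuples with start < end. Only a subset of the
    thirteen Allen relations is distinguished here.
    """
    (as_, ae), (bs, be) = a, b
    if ae < bs:
        return "before"            # a ends strictly before b starts
    if ae == bs:
        return "meets"             # a ends exactly where b starts
    if as_ < bs < ae < be:
        return "overlaps"          # partial overlap, a starts first
    if as_ == bs and ae == be:
        return "equals"
    if bs <= as_ and ae <= be:
        return "during-or-inside"  # a contained in b
    return "other"

# Quantitative endpoints in, qualitative term out:
rel = interval_relation((1, 3), (2, 5))
```

Once such terms are materialized as properties, a query can match the term “overlaps” directly instead of comparing four coordinates at query time.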


2020 ◽  
Vol 9 (9) ◽  
pp. 538 ◽  
Author(s):  
Wenchao Li ◽  
Xin Liu ◽  
Chenggang Yan ◽  
Guiguang Ding ◽  
Yaoqi Sun ◽  
...  

The rapidly growing location-based social network (LBSN) has become a promising platform for studying users’ mobility patterns. Many online applications can be built based on such studies, among which, recommending locations is of particular interest. Previous studies have shown the importance of spatial and temporal influences on location recommendation; however, most existing approaches build a universal spatial–temporal model for all users despite the fact that users always demonstrate heterogeneous check-in behavior patterns. In order to realize truly personalized location recommendations, we propose a Gaussian process based model for each user to systematically and non-linearly combine temporal and spatial information to predict the user’s displacement from their currently checked-in location to the next one. The locations whose distances to the user’s current checked-in location are the closest to the predicted displacement are recommended. We also propose an enhancement to take into account category information of locations for semantic-aware recommendation. A unified recommendation framework called spatial–temporal–semantic (STS) is introduced to combine displacement prediction and the semantic-aware enhancement to provide final top-N recommendation. Extensive experiments over real datasets show that the proposed STS framework significantly outperforms the state-of-the-art location recommendation models in terms of precision and mean reciprocal rank (MRR).
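The recommendation step described above (recommend the locations whose distance from the current check-in best matches the predicted displacement) can be sketched as follows. The Gaussian-process prediction itself is out of scope here, so the predicted displacement is taken as a plain input, and all names are illustrative, not the STS framework's code:

```python
import numpy as np

def recommend_by_displacement(current, candidates, predicted_disp, top_n=3):
    """Rank candidates by |distance-from-current - predicted displacement|.

    current: (x, y) of the user's current check-in.
    candidates: (M, 2) array-like of candidate location coordinates.
    predicted_disp: scalar displacement (in the paper, output of a
    per-user Gaussian process combining temporal and spatial information).
    Returns indices of the top_n candidates.
    """
    candidates = np.asarray(candidates, dtype=float)
    # Distance of every candidate from the current location.
    dists = np.linalg.norm(candidates - np.asarray(current, dtype=float), axis=1)
    # Closest match to the predicted displacement ranks first.
    order = np.argsort(np.abs(dists - predicted_disp))
    return order[:top_n]

# With a predicted displacement of 5.0, the candidate 5 units away wins.
cands = [(0.0, 1.0), (0.0, 5.0), (0.0, 10.0)]
top = recommend_by_displacement((0.0, 0.0), cands, predicted_disp=5.0, top_n=2)
```

The semantic-aware enhancement in the paper would additionally filter or re-rank these candidates by category information before producing the final top-N list.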

