Comparison of different methods for average anatomical templates creation: do we really gain anything from a diffeomorphic framework?

2018 ◽  
Author(s):  
Vladimir S. Fonov ◽  
D. Louis Collins

Abstract: In the field of computational anatomy, the diffeomorphic framework is widely used to analyse human brain anatomy in both healthy and diseased populations. While useful for analysis, the framework imposes certain implementation constraints that do not necessarily improve the accuracy of inter-subject co-registration in the case of average anatomical template (AAT) construction, a common technique in large population studies. In this work, we evaluated several state-of-the-art non-diffeomorphic and diffeomorphic non-linear registration frameworks in terms of their ability to build AATs. While all methods generated well-behaved transforms, we found that the diffeomorphic framework does not automatically guarantee increased accuracy in average anatomical template construction.
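The AAT construction the abstract refers to is an iterative loop: register every subject to the current template, average the aligned images, repeat. The 1-D toy below is a sketch of that loop only; it uses integer circular shifts in place of real non-linear 3-D registration, and all names and data are illustrative, not from the paper.

```python
import numpy as np

def build_template(subjects, iters=5):
    """Toy 1-D average-template construction: at each iteration, align
    each subject to the current template by the integer shift that
    maximizes cross-correlation, then average the aligned signals.
    (Real AAT pipelines use non-linear 3-D registration instead.)"""
    template = np.mean(subjects, axis=0)
    for _ in range(iters):
        aligned = []
        for s in subjects:
            shifts = range(-len(s) // 2, len(s) // 2)
            best = max(shifts, key=lambda k: np.dot(np.roll(s, k), template))
            aligned.append(np.roll(s, best))
        template = np.mean(aligned, axis=0)
    return template

# Shifted copies of one bump: the naive mean smears them out, while the
# iterated template re-aligns them into a single sharp peak.
x = np.arange(64)
bump = np.exp(-0.5 * ((x - 32) / 3.0) ** 2)
subjects = np.stack([np.roll(bump, k) for k in (-6, -2, 0, 3, 7)])
template = build_template(subjects)
```

The sharper peak of `template` relative to the naive mean is exactly the effect a good registration framework buys in AAT building; the paper's point is that diffeomorphic registration does not automatically buy more of it.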

2021 ◽  
Vol 11 (15) ◽  
pp. 6975
Author(s):  
Tao Zhang ◽  
Lun He ◽  
Xudong Li ◽  
Guoqing Feng

Lipreading aims to recognize sentences spoken by a talking face. In recent years, lipreading methods have achieved high accuracy on large datasets and made breakthrough progress. However, lipreading is still far from solved: existing methods tend to have high error rates on in-the-wild data and suffer from vanishing gradients and slow convergence during training. To overcome these problems, we propose an efficient end-to-end sentence-level lipreading model that uses an encoder based on a 3D convolutional network, ResNet50, and a Temporal Convolutional Network (TCN), with a CTC objective function as the decoder. More importantly, the proposed architecture incorporates the TCN as a feature learner to decode features. This partly eliminates the vanishing-gradient and performance limitations of RNNs (LSTM, GRU), yielding notable performance improvements as well as faster convergence. Experiments show that training and convergence are 50% faster than the state-of-the-art method, and accuracy improves by 2.4% on the GRID dataset.
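The TCN named above is built from dilated causal convolutions, whose key property is that the output at time t never depends on future frames. A minimal NumPy sketch of one such layer (the function name and the single-channel setup are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """One dilated causal 1-D convolution, the building block of a TCN:
    y[t] = sum_j w[j] * x[t - j*dilation], so the output at time t sees
    only present and past inputs. Stacking layers with growing dilation
    (1, 2, 4, ...) widens the receptive field without recurrence."""
    T, k = len(x), len(w)
    pad = (k - 1) * dilation                  # left-pad so output keeps length T
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
                     for t in range(T)])

# With kernel [1, 1] and dilation 2, each output mixes x[t] and x[t-2].
y = causal_dilated_conv(np.array([1.0, 0.0, 0.0, 0.0]),
                        np.array([1.0, 1.0]), dilation=2)
```

Because the layer has no recurrent state, gradients flow through a fixed-depth stack of convolutions rather than through time, which is the mechanism behind the convergence advantage the abstract claims over LSTM/GRU decoders.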


Cybersecurity ◽  
2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Jingdian Ming ◽  
Yongbin Zhou ◽  
Huizhong Li ◽  
Qian Zhang

Abstract: Due to its provable security and remarkable device-independence, masking has been widely accepted as a noteworthy algorithmic-level countermeasure against side-channel attacks. However, the relatively high cost of masking severely limits its applicability. Given the complexity of masking non-linear operations, most masked AES implementations focus on the security and cost reduction of masked S-boxes. In this paper, we focus instead on linear operations, which appear to have been underestimated. Specifically, we discover security flaws and redundant processing in popular first-order masked AES linear operations, and pinpoint the underlying root causes. We then propose a provably secure and highly efficient masking scheme for AES linear operations. To show its practical implications, we replace the linear operations of state-of-the-art first-order AES masking schemes with our proposal, while keeping their original non-linear operations unchanged. We implement four newly combined masking schemes on an Intel Core i7-4790 CPU; the results show they are roughly 20% faster than the originals. We then select one masked implementation, RSMv2, owing to its popularity, and investigate its security and efficiency on an AVR ATMega163 processor and four different FPGA devices. No exploitable first-order side-channel leakages are detected. Moreover, compared with the original masked AES implementations, our combined approach is nearly 25% faster on the AVR processor and at least 70% more efficient on the four FPGA devices.
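Why linear operations are cheap to mask at all comes down to the identity L(x ⊕ m) = L(x) ⊕ L(m): a linear layer can be applied to each share independently and the masked relation is preserved. The toy first-order sketch below illustrates this with AES ShiftRows (a byte permutation, hence linear over GF(2)); it is an illustration of the general principle, not the paper's proposed scheme.

```python
import os

# AES state is 16 bytes in column-major order (index r + 4*c); ShiftRows
# rotates row r left by r positions, i.e. out[i] = in[SHIFT_ROWS[i]].
SHIFT_ROWS = [0, 5, 10, 15, 4, 9, 14, 3, 8, 13, 2, 7, 12, 1, 6, 11]

def shift_rows(state):
    return [state[i] for i in SHIFT_ROWS]

def mask(state):
    """Split the state into two Boolean shares: (state ^ m, m)."""
    m = list(os.urandom(16))
    return [s ^ r for s, r in zip(state, m)], m

def masked_shift_rows(masked, m):
    """Linear op applied share-wise: no share recombination, so no
    first-order leakage is introduced by this step."""
    return shift_rows(masked), shift_rows(m)

def unmask(masked, m):
    return [a ^ b for a, b in zip(masked, m)]
```

Non-linear layers (the S-box) break this identity, which is why they dominate masking cost and why the paper's attention to the "easy" linear part, where flaws and redundancy can still hide, is the interesting move.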


1998 ◽  
Vol 84 (1) ◽  
pp. 362-371 ◽  
Author(s):  
Roger G. Eston ◽  
Ann V. Rowlands ◽  
David K. Ingledew

Eston, Roger G., Ann V. Rowlands, and David K. Ingledew. Validity of heart rate, pedometry, and accelerometry for predicting the energy cost of children's activities. J. Appl. Physiol. 84(1): 362–371, 1998. Heart rate telemetry is frequently used to estimate daily activity in children and to validate other methods. This study compared the accuracy of heart rate monitoring, pedometry, triaxial accelerometry, and uniaxial accelerometry for estimating oxygen consumption during typical children's activities. Thirty Welsh children (mean age 9.2 ± 0.8 yr) walked (4 and 6 km/h) and ran (8 and 10 km/h) on a treadmill, played catch, played hopscotch, and sat and crayoned. Heart rate, body accelerations in three axes, pedometry counts, and oxygen uptake were measured continuously during each 4-min activity. Oxygen uptake was expressed as a ratio of body mass raised to the power of 0.75 [scaled oxygen uptake (sV̇o2)]. All measures correlated significantly (P < 0.001) with sV̇o2. A multiple-regression equation that included triaxial accelerometry counts and heart rate predicted sV̇o2 better than any measure alone (R² = 0.85, standard error of the estimate = 9.7 ml·kg^-0.75·min^-1). The best of the single measures was triaxial accelerometry (R² = 0.83, standard error of the estimate = 10.3 ml·kg^-0.75·min^-1). It is concluded that a triaxial accelerometer provides the best assessment of activity. Pedometry offers potential for large population studies.
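The multiple-regression model described above can be sketched as an ordinary least-squares fit of sV̇o2 on accelerometer counts and heart rate. The data below are synthetic stand-ins, not the study's measurements, and the function name is an assumption for illustration.

```python
import numpy as np

def fit_svo2_model(acc_counts, heart_rate, svo2):
    """OLS fit: svo2 ~ intercept + acc_counts + heart_rate.
    Returns the coefficients and the R^2 of the fit."""
    X = np.column_stack([np.ones_like(acc_counts), acc_counts, heart_rate])
    coef, *_ = np.linalg.lstsq(X, svo2, rcond=None)
    pred = X @ coef
    ss_res = np.sum((svo2 - pred) ** 2)
    ss_tot = np.sum((svo2 - svo2.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot

# Synthetic children's-activity data with a known linear relation plus noise.
rng = np.random.default_rng(0)
acc = rng.uniform(0, 1000, 200)        # triaxial accelerometer counts
hr = rng.uniform(80, 200, 200)         # heart rate, beats/min
svo2 = 5 + 0.02 * acc + 0.2 * hr + rng.normal(0, 1, 200)
coef, r2 = fit_svo2_model(acc, hr, svo2)
```

Combining the two predictors raises R² over either alone exactly when each carries independent information about energy cost, which is the pattern the study reports (R² = 0.85 combined vs. 0.83 for triaxial accelerometry by itself).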


F1000Research ◽  
2019 ◽  
Vol 8 ◽  
pp. 760 ◽  
Author(s):  
Alysson R. Muotri

Human brain organoids, generated from pluripotent stem cells, have emerged as a promising technique for modeling early stages of human neurodevelopment under controlled laboratory conditions. Although their application to disease modeling in a dish has become routine, the use of brain organoids as evolutionary tools is only now gaining momentum. Here, we review the current state of the art on the use of brain organoids from different species and the molecular and cellular insights generated from these studies. In addition, we discuss how this model might benefit human health, as well as the limitations and future perspectives of this technology.


Despite improvements in diagnosis and management, cardiovascular disease (CVD) remains the leading cause of death and hospitalization throughout the world. The expansion of digital cardiology presents outstanding opportunities for clinicians, researchers, and health care administrators to improve outcomes and the sustainability of health systems. Big electronic health data, combining electronic health records (EHRs) from diverse individuals across a wide variety of platforms, may provide real-time answers to questions and problems relating to health. Very large population studies based on EHRs are efficient and cost-effective, and offer an alternative to traditional research approaches. Indeed, digital cardiology can help researchers to diagnose and manage CVD using dedicated algorithms that allow targeted and personalized treatment.


Author(s):  
Burton H. Singer ◽  
Jürg Utzinger ◽  
Carol D. Ryff ◽  
Yulan Wang ◽  
Elaine Holmes

Author(s):  
Jeremy T. Bradley ◽  
Marcel C. Guenther ◽  
Richard A. Hayden ◽  
Anton Stefanek

This chapter discusses the latest trends and developments in performance-analysis research on large population models. In particular, it reviews GPA, a state-of-the-art multi-formalism, multi-solution (MFMS) tool that provides a framework for implementing various population-modelling formalisms and solution methods.


1990 ◽  
Vol 36 (11) ◽  
pp. 1871-1874 ◽  
Author(s):  
J S Hill ◽  
P H Pritchard

Abstract A simple procedure for phenotyping apolipoprotein (apo) E directly from plasma has been developed for use in the lipid clinic laboratory. In this new method, 10 microL of serum or plasma is pretreated with neuraminidase (EC 3.2.1.18), which removes the sialic acid residues from apo E and eliminates additional bands, thereby ensuring correct phenotype assignment. After a rapid delipidation step, the samples are focused in vertical polyacrylamide mini-slab gels and immunoblotted with a polyclonal goat anti-apo E antibody, followed by a Protein G-peroxidase conjugate. The accuracy of this method was confirmed by comparison with the established procedure of phenotyping by isoelectric focusing of delipidated very-low-density lipoprotein. In addition, sera from 203 subjects from Vancouver, selected without conscious bias, were used to determine the local distribution of the apo E alleles. We estimate that the relative frequencies of apo E alleles epsilon 2, epsilon 3, and epsilon 4 in this population are 0.086, 0.761, and 0.153, respectively. The speed and convenience of using minigels make this procedure ideal for clinical laboratory applications and large population studies.
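The reported allele frequencies follow from straightforward gene counting: each subject carries two apo E alleles, so an allele's frequency is its count divided by twice the number of subjects. The sketch below uses illustrative genotype counts, not the paper's raw Vancouver data.

```python
from collections import Counter

def allele_frequencies(genotype_counts):
    """Gene counting: genotype_counts maps (allele1, allele2) pairs to the
    number of subjects with that genotype; returns per-allele frequencies."""
    counts = Counter()
    n_subjects = 0
    for (a1, a2), n in genotype_counts.items():
        counts[a1] += n
        counts[a2] += n
        n_subjects += n
    return {a: c / (2 * n_subjects) for a, c in counts.items()}

# Hypothetical counts for 100 subjects (the study phenotyped 203).
freqs = allele_frequencies({("e3", "e3"): 50, ("e3", "e4"): 30, ("e2", "e3"): 20})
```

Applied to the study's 203 phenotyped sera, this counting procedure is what yields the reported frequencies of 0.086, 0.761, and 0.153 for ε2, ε3, and ε4.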

