Visualizing anatomically registered data with Brainrender

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Federico Claudi ◽  
Adam L Tyson ◽  
Luigi Petrucco ◽  
Troy W Margrie ◽  
Ruben Portugues ◽  
...  


The recent development of high-resolution three-dimensional (3D) digital brain atlases and high-throughput brain-wide imaging techniques has fueled the generation of large datasets that can be registered to a common reference frame. This registration facilitates integrating data from different sources and resolutions to assemble rich multidimensional datasets. Generating insights from these new types of datasets depends critically on the ability to easily visualize and explore the data in an interactive manner. This is, however, a challenging task. Currently available software is dedicated to single atlases, model species, or data types, and generating 3D renderings that merge anatomically registered data from diverse sources requires extensive development and programming skills. To address this challenge, we have developed brainrender: a generic, open-source Python package for simultaneous and interactive visualization of multidimensional datasets registered to brain atlases. Brainrender has been designed to facilitate the creation of complex custom renderings and can be used programmatically or through a graphical user interface. It can easily render different data types in the same visualization, including user-generated data, and enables seamless use of different brain atlases using the same code base. In addition, brainrender generates high-quality visualizations that can be used interactively and exported as high-resolution figures and animated videos. By facilitating the visualization of anatomically registered data, brainrender should accelerate the analysis, interpretation, and dissemination of brain-wide multidimensional data.
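The programmatic workflow the abstract describes can be sketched in a few lines. This is a minimal example assuming the brainrender 2.x API (Scene, add_brain_region, render); the atlas name and the region acronym "TH" are illustrative choices, not prescribed by the paper:

```python
from brainrender import Scene

# Build a scene on a chosen atlas, add a brain-region mesh with
# user-defined styling, then open the interactive 3D viewer.
scene = Scene(atlas_name="allen_mouse_25um")
scene.add_brain_region("TH", alpha=0.4)  # thalamus, semi-transparent
scene.render()
```

The same script structure would work with any atlas supported by the package, since the atlas is selected once at Scene construction and the rest of the code is atlas-agnostic.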


Plant Methods ◽  
2019 ◽  
Vol 15 (1) ◽  
Author(s):  
Rachele Tofanelli ◽  
Athul Vijayan ◽  
Sebastian Scholz ◽  
Kay Schneitz

Background: A salient topic in developmental biology relates to the molecular and genetic mechanisms that underlie tissue morphogenesis. Modern quantitative approaches to this central question frequently involve digital cellular models of the organ or tissue under study. The ovules of the model species Arabidopsis thaliana have long been established as a model system for the study of organogenesis in plants. While ovule development in Arabidopsis can be followed by a variety of different imaging techniques, no experimental strategy presently exists that enables an easy and straightforward investigation of the morphology of internal tissues of the ovule with cellular resolution.

Results: We developed a protocol for rapid and robust confocal microscopy of fixed Arabidopsis ovules of all stages. The method combines clearing of fixed ovules in ClearSee solution with marking the cell outline using the cell wall stain SCRI Renaissance 2200 and the nuclei with the stain TO-PRO-3 iodide. We further improved the microscopy by employing a homogeneous immersion system aimed at minimizing refractive index differences. The method allows complete inspection of the cellular architecture even deep within the ovule. Using the new protocol we were able to generate digital three-dimensional models of ovules of various stages.

Conclusions: The protocol enables the quick and reproducible imaging of fixed Arabidopsis ovules of all developmental stages. From the imaging data, three-dimensional digital ovule models with cellular resolution can be rapidly generated using image analysis software, for example MorphoGraphX. Such digital models will provide the foundation for a future quantitative analysis of ovule morphogenesis in a model species.


2020 ◽  
Author(s):  
Piotr Majka ◽  
Sylwia Bednarek ◽  
Jonathan M. Chan ◽  
Natalia Jermakow ◽  
Cirong Liu ◽  
...  

The rapid adoption of marmosets in neuroscience has created a demand for three-dimensional (3D) atlases of the brain of this species to facilitate data integration in a common reference space. We report on a new open-access template of the marmoset cortex (the Nencki–Monash, or NM template), representing a morphological average of 20 brains of young adult individuals, obtained by 3D reconstructions generated from Nissl-stained serial sections. The method used to generate the template takes into account morphological features of the individual brains, as well as the borders of clearly defined cytoarchitectural areas. This has resulted in a resource which allows direct estimates of the most likely coordinates of each cortical area, as well as quantification of the margins of error involved in assigning voxels to areas, and preserves quantitative information about the laminar structure of the cortex. We provide spatial transformations between the NM and other available marmoset brain templates, thus enabling integration with magnetic resonance imaging (MRI) and tracer-based connectivity data. The NM template combines some of the main advantages of histology-based atlases (e.g. information about the cytoarchitectural structure) with features more commonly associated with MRI-based templates (isotropic nature of the dataset, and probabilistic analyses).
The underlying workflow may be found useful in the future development of brain atlases that incorporate information about the variability of areas in species for which it may be impractical to ensure homogeneity of the sample in terms of age, sex and genetic background.

Highlights:
- A 3D template of the marmoset cortex representing the average of 20 individuals.
- The template is based on Nissl stain and preserves information about cortical layers.
- Probabilistic mapping of areas, cortical thickness, and layer intensity profiles.
- Includes spatial transformations to other marmoset brain atlases.

Abbreviations: For a list of areas and their abbreviations see Table S2.
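The spatial transformations between templates mentioned above amount to mapping coordinates from one reference space to another; in the common affine case this is a single 4×4 matrix applied in homogeneous coordinates. A minimal numpy sketch (the matrix values here are purely illustrative placeholders, not the published NM transforms):

```python
import numpy as np

# Hypothetical 4x4 affine mapping a coordinate (in mm) from one template
# space to another: a small anisotropic scaling plus a translation.
affine = np.array([
    [1.02, 0.00, 0.00, -0.35],
    [0.00, 0.98, 0.00,  0.10],
    [0.00, 0.00, 1.01,  0.22],
    [0.00, 0.00, 0.00,  1.00],
])

def transform_points(points, affine):
    """Apply a 4x4 affine to an (N, 3) array of coordinates."""
    pts = np.c_[points, np.ones(len(points))]   # homogeneous coordinates
    return (affine @ pts.T).T[:, :3]

src = np.array([[1.0, 2.0, 3.0]])
dst = transform_points(src, affine)
```

Nonlinear (deformation-field) transforms between templates follow the same pattern, with the matrix multiplication replaced by interpolation into a displacement volume.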


Author(s):  
Thomas Blanc ◽  
Mohamed El Beheiry ◽  
Jean-Baptiste Masson ◽  
Bassam Hajj

The quantity of experimentally recorded point cloud data, such as that generated in single-molecule experiments, is increasing continuously in both size and dimension. Gaining an intuitive understanding of the data and facilitating multi-dimensional data analysis remains a challenge. It is especially challenging when static distribution properties are not predictive of dynamical properties. Here, we report a new open-source software platform, Genuage, that enables the easy perception, interaction, and analysis of complex multidimensional point cloud datasets by leveraging virtual reality. We illustrate the benefits of Genuage with examples of three-dimensional static and dynamic localization microscopy datasets, as well as some synthetic datasets. Genuage has a large breadth of usage modes, due to its compatibility with arbitrary multidimensional data types extending beyond the single-molecule research community.


Author(s):  
Jerome J. Paulin

Within the past decade it has become apparent that HVEM offers the biologist a means to explore the three-dimensional structure of cells and/or organelles. Stereo-imaging of thick sections (e.g. 0.25-10 μm) not only reveals anatomical features of cellular components, but also reduces errors of interpretation associated with overlap of structures seen in thick sections. Concomitant with stereo-imaging techniques, conventional serial sectioning methods developed with thin sections have been adapted to serial thick sections (≥ 0.25 μm). Three-dimensional reconstructions of the chondriome of several species of trypanosomatid flagellates have been made from tracings of mitochondrial profiles on cellulose acetate sheets. The sheets are flooded with acetone to glue them together, and the model is sawed from the composite and redrawn. The extensive mitochondrial reticulum can be seen in consecutive thick sections (0.25 μm thick) of Crithidia fasciculata (Figs. 1-2). Profiles of the mitochondrion are distinguishable from the anterior apex of the cell (small arrow, Fig. 1) to the posterior pole (small arrow, Fig. 2).


Author(s):  
Karen F. Han

The primary focus in our laboratory is the study of higher-order chromatin structure using three-dimensional electron microscope tomography. Three-dimensional tomography involves the reconstruction of an object by combining multiple projection views of the object at different tilt angles. Image intensities are not always accurate representations of the projected object mass density, due to the effects of electron-specimen interactions and microscope lens aberrations; therefore, an understanding of the mechanism of image formation is important for interpreting the images. The image formation for thick biological specimens has been analyzed by using both energy filtering and Ewald sphere constructions. Surprisingly, there is a significant amount of coherent transfer for our thick specimens. The relative amount of coherent transfer is correlated with the relative proportion of elastically scattered electrons using electron energy loss spectroscopy and imaging techniques. Electron-specimen interactions include single and multiple, elastic and inelastic scattering. Multiple and inelastic scattering events give rise to nonlinear imaging effects which complicate the interpretation of collected images.
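The projection-and-reconstruction idea can be illustrated in miniature. Under the linear-imaging assumption (single elastic scattering, no nonlinear effects), each tilt view is a line integral of object density, and a naive unfiltered back-projection recovers a blurred version of the object. A toy sketch using scipy; the array sizes, angles, and test object are arbitrary:

```python
import numpy as np
from scipy.ndimage import rotate

# Toy tilt-series under the linear-imaging assumption: each projection
# is the sum of object density along the beam direction at a tilt angle.
obj = np.zeros((64, 64))
obj[24:40, 28:36] = 1.0  # a simple dense block near the centre

angles = list(range(0, 180, 10))
projections = [rotate(obj, a, reshape=False, order=1).sum(axis=0)
               for a in angles]

# Naive (unfiltered) back-projection: smear each projection back across
# the image at its angle and average. The result is a blurred but
# recognisable reconstruction of the block.
recon = np.zeros_like(obj)
for a, p in zip(angles, projections):
    recon += rotate(np.tile(p, (obj.shape[0], 1)), -a,
                    reshape=False, order=1)
recon /= len(angles)
```

Real tomographic pipelines apply a filter before back-projection and, as the passage notes, must account for the fact that measured intensities deviate from true line integrals when multiple or inelastic scattering is significant.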


Author(s):  
Nora Rat ◽  
Iolanda Muntean ◽  
Diana Opincariu ◽  
Liliana Gozar ◽  
Rodica Togănel ◽  
...  

Development of interventional methods has revolutionized the treatment of structural cardiac diseases. Given the complexity of structural interventions and the anatomical variability of various structural defects, novel imaging techniques have been implemented in current clinical practice for guiding the interventional procedure and for selection of the device to be used. Three-dimensional echocardiography is the most widely used imaging method that has improved the three-dimensional assessment of cardiac structures, and it has considerably reduced the cost of complications derived from malalignment of interventional devices. Assessment of cardiac structures with the use of angiography holds the advantage of providing images in real time, but it does not allow an anatomical description. Transesophageal echocardiography (TEE) and intracardiac ultrasonography play major roles in guiding atrial septal defect (ASD) or patent foramen ovale (PFO) closure and device follow-up, while TEE is the procedure of choice to assess the flow in the left atrial appendage (LAA) and the embolic risk associated with a decreased flow. On the other hand, contrast CT and MRI have high specificity for providing a detailed description of structure, but cannot assess the flow through the shunt or the valvular mobility. This review aims to present the role of modern imaging techniques in pre-procedural assessment and intraprocedural guiding of structural percutaneous interventions performed to close an ASD, a PFO, an LAA or a patent ductus arteriosus.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; however, the rapid attenuation of electromagnetic signals and the lack of light in water restrict sensing functions. This study expands the utilization of two- and three-dimensional detection technologies in underwater applications to detect abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used in this study to collect underwater point cloud data. Some pre-processing steps are proposed to remove noise and the seabed from raw data. Point clouds are then processed to obtain two data types: a 2D image and a 3D point cloud. Deep learning methods with different dimensions are used to train the models. In the two-dimensional method, the point cloud is transferred into a bird's-eye view image. The Faster R-CNN and YOLOv3 network architectures are used to detect tires. Meanwhile, in the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data. The PointNet and PointConv network architectures are then used for tire classification. The results show that both approaches provide good accuracy.
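The 2D route described above, flattening a cleaned point cloud into a bird's-eye-view image that a 2D detector such as Faster R-CNN or YOLOv3 can consume, can be sketched with numpy. The grid extent and resolution here are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Project an (N, 3) x/y/z point cloud onto an x-y occupancy grid,
# discarding depth (z): a minimal bird's-eye-view rasterization.
def bev_image(points, x_range=(0.0, 10.0), y_range=(0.0, 10.0), res=0.1):
    w = int(round((x_range[1] - x_range[0]) / res))
    h = int(round((y_range[1] - y_range[0]) / res))
    img = np.zeros((h, w), dtype=np.float32)
    xi = ((points[:, 0] - x_range[0]) / res).astype(int)
    yi = ((points[:, 1] - y_range[0]) / res).astype(int)
    keep = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    img[yi[keep], xi[keep]] = 1.0  # mark occupied cells
    return img

cloud = np.array([[1.05, 2.04, -3.0], [9.55, 9.55, -2.5]])
bev = bev_image(cloud)
```

Variants commonly store point counts or maximum height per cell instead of binary occupancy, which gives the 2D detector more information to work with.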

