Vaa3D

Vaa3D[1][2] (in Chinese ‘挖三维’, literally ‘dig into 3D’) is an open-source visualization and analysis software suite created mainly by Hanchuan Peng and his team at the Janelia Research Campus of HHMI and the Allen Institute for Brain Science. The software performs 3D, 4D, and 5D rendering and analysis of very large image data sets, especially those generated by modern microscopy methods, together with associated 3D surface objects. It has been used in several large neuroscience initiatives and in a number of applications in other domains. A Nature Methods review article[3] regarded it as one of the leading open-source software suites in the related research fields. It has also been used in several other award-winning projects, e.g. the mapping of dragonfly neurons and large-scale visualization of cellular data.

Creation

Vaa3D was created in 2007 to tackle the large-scale brain mapping project at the Janelia Farm Research Campus of the Howard Hughes Medical Institute. The initial goal was to quickly visualize any of the tens of thousands of large 3D laser scanning microscopy image stacks of fruit fly brains, each a few gigabytes in size. Low-level OpenGL-based 3D rendering was developed to make direct rendering of multi-dimensional image stacks as fast as possible. C/C++ and Qt were used for cross-platform compatibility, so the software runs on Mac, Linux, and Windows. Functions for synchronizing multiple 2D/3D/4D/5D rendered views, generating global and local 3D viewers, and the Virtual Finger technique allow Vaa3D to streamline a number of operations in complicated brain-science tasks, e.g. brain comparison and neuron reconstruction. Vaa3D also provides an extensible plugin interface that currently hosts many dozens of open-source plugins contributed by researchers worldwide.

3D Visualization of 3D, 4D, and 5D Image Data

Vaa3D is able to render 3D, 4D, and 5D data (X, Y, Z, color, time) quickly. Volume rendering typically operates at the scale of a few gigabytes per image set and can be extended to the terabyte scale. The visualization is made fast by using OpenGL directly.
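
A typical way to hold such multi-dimensional data is a single contiguous buffer indexed over all five dimensions. The sketch below is illustrative only (the class name and layout are assumptions, not Vaa3D's internal storage), but it shows why the data sizes quoted above arise: a 1024×1024×200 stack with 3 color channels and 10 time points already occupies about 6 GB at 8 bits per voxel.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustrative 5D image container (X, Y, Z, color, time) stored as one
    // contiguous buffer with X varying fastest; not Vaa3D's internal format.
    struct Image5D {
        std::size_t nx, ny, nz, nc, nt;      // dimensions
        std::vector<std::uint8_t> data;      // 8-bit voxels

        Image5D(std::size_t x, std::size_t y, std::size_t z,
                std::size_t c, std::size_t t)
            : nx(x), ny(y), nz(z), nc(c), nt(t), data(x * y * z * c * t, 0) {}

        // Map a 5D coordinate to the flat buffer index.
        std::uint8_t& at(std::size_t x, std::size_t y, std::size_t z,
                         std::size_t c, std::size_t t) {
            return data[(((t * nc + c) * nz + z) * ny + y) * nx + x];
        }
    };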

Vaa3D handles the problem of large-data visualization with several techniques. One is to combine synchronous and asynchronous data rendering: the full-resolution data are displayed only when rotation or other dynamic manipulation of the view is paused, and otherwise only a coarse-resolution image is shown.
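
A minimal sketch of this kind of level-of-detail switching is shown below; the renderer, the draw call, and the volume type are hypothetical stand-ins, not Vaa3D's actual rendering code.

    #include <iostream>
    #include <string>

    struct Volume { std::string name; };        // stand-in for an image volume
    void drawVolume(const Volume& v) {          // stand-in for the OpenGL draw path
        std::cout << "drawing " << v.name << "\n";
    }

    // Draw a coarse proxy while the user is rotating/panning; draw the
    // full-resolution volume only once the interaction has paused.
    struct Renderer {
        const Volume* coarse;                   // downsampled copy, cheap to draw
        const Volume* full;                     // full-resolution data, expensive
        bool interacting = false;

        void onInteractionStart() { interacting = true; }
        void onInteractionEnd()   { interacting = false; }
        void renderFrame() const  { drawVolume(interacting ? *coarse : *full); }
    };

    int main() {
        Volume coarse{"coarse (downsampled)"}, full{"full resolution"};
        Renderer r{&coarse, &full};
        r.onInteractionStart(); r.renderFrame();   // while rotating -> coarse
        r.onInteractionEnd();   r.renderFrame();   // at rest        -> full
    }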

An alternative method used in Vaa3D is to combine global and local 3D viewers. The global 3D viewer optionally displays only a downsampled image, while the local 3D viewer displays the full-resolution image, but only within selected local regions. Intuitive 3D navigation is achieved by determining a 3D region of interest with the Virtual Finger technique and then generating, in real time, a local 3D viewer for that region.
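
The sketch below illustrates the coordinate bookkeeping involved: a region of interest selected in the downsampled global view is scaled back to full-resolution coordinates and the corresponding subvolume is copied out for the local viewer. The names and the simple memory layout are assumptions made for illustration, not Vaa3D internals.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Vol3D {
        std::size_t nx, ny, nz;
        std::vector<std::uint8_t> v;                       // X varies fastest
        std::uint8_t at(std::size_t x, std::size_t y, std::size_t z) const {
            return v[(z * ny + y) * nx + x];
        }
    };

    // Copy the full-resolution block corresponding to an ROI given in
    // downsampled coordinates; 'scale' is the downsampling factor per axis.
    Vol3D cropLocalView(const Vol3D& full, std::size_t scale,
                        std::size_t x0, std::size_t y0, std::size_t z0,
                        std::size_t x1, std::size_t y1, std::size_t z1) {
        std::size_t X0 = x0 * scale, Y0 = y0 * scale, Z0 = z0 * scale;
        std::size_t X1 = std::min(x1 * scale, full.nx);
        std::size_t Y1 = std::min(y1 * scale, full.ny);
        std::size_t Z1 = std::min(z1 * scale, full.nz);

        Vol3D out{X1 - X0, Y1 - Y0, Z1 - Z0, {}};
        out.v.resize(out.nx * out.ny * out.nz);
        for (std::size_t z = Z0; z < Z1; ++z)
            for (std::size_t y = Y0; y < Y1; ++y)
                for (std::size_t x = X0; x < X1; ++x)
                    out.v[((z - Z0) * out.ny + (y - Y0)) * out.nx + (x - X0)] =
                        full.at(x, y, z);
        return out;
    }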

Fast 3D Human-Machine Interaction, Virtual Finger and 3D WYSIWYG

3D visualization of an image stack is essentially a passive way to observe the data. Combining it with an active way for the user to quickly indicate locations of interest greatly increases the efficiency of exploring 3D or higher-dimensional image content. Such exploration requires that the user can efficiently interact with, and quantitatively profile, image objects through the graphical user interface of a 3D image-visualization tool. The Virtual Finger, or 3D WYSIWYG ('What You See in 2D is What You Get in 3D'), technique allows 3D location information to be generated and used efficiently from a user's 2D input on an ordinary 2D display or touch device.

The Virtual Finger technique maps 2D user input made on a 2D display device, such as a computer screen, back to the 3D volumetric space of the image. Mathematically, this is often a difficult inverse problem. However, by exploiting the spatial sparseness and continuity of many 3D image data sets, the inverse problem can be solved well, as shown in a recent paper.[4]
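
A rough way to see why sparseness makes the problem tractable: along the viewing ray behind a mouse click, a sparse structure such as a neurite usually produces a single clear intensity peak, so the missing depth can be taken at that peak. The sketch below illustrates only this basic idea, with an axis-aligned view for simplicity; the published Virtual Finger algorithms[4] use more robust criteria.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Vol3D {
        std::size_t nx, ny, nz;
        std::vector<std::uint8_t> v;                      // X varies fastest
        std::uint8_t at(std::size_t x, std::size_t y, std::size_t z) const {
            return v[(z * ny + y) * nx + x];
        }
    };

    // Resolve the missing depth of a 2D click (x, y) by scanning the viewing
    // ray, here assumed to be the column of voxels along Z, and returning the
    // depth of the brightest voxel. Illustrative only, not the Vaa3D method.
    std::size_t pickDepth(const Vol3D& vol, std::size_t x, std::size_t y) {
        std::size_t bestZ = 0;
        std::uint8_t bestI = 0;
        for (std::size_t z = 0; z < vol.nz; ++z) {
            std::uint8_t i = vol.at(x, y, z);
            if (i > bestI) { bestI = i; bestZ = z; }
        }
        return bestZ;                                     // (x, y, bestZ) is the 3D point
    }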

Vaa3D's Virtual Finger technology allows instant, random-order exploration of complex 3D image content: a single click or stroke locates a 3D object, much as real fingers explore the real 3D world. It has been used to boost the performance of image data acquisition, visualization, management, annotation, and analysis, and to support real-time experiments such as microsurgery.

Rendering of Surface Objects

Vaa3D displays three major types of 3D surface objects: point clouds (e.g. markers labeling cells or other puncta), curvilinear and relational structures such as neuron reconstructions, and irregular surface meshes (e.g. brain compartments).

These 3D surface objects are often organized into "sets". Vaa3D can display multiple sets of any of these surface objects, which can also be overlaid on the image voxel data using different overlay relationships. These features are useful for colocalization, quantification, comparison, and other purposes.

Applications

The software has been used in a number of applications, including the following examples.

Neuron Reconstruction and Quantification

Vaa3D provides the Vaa3D-Neuron package to reconstruct, quantify, and compare the 3D morphology of single neurons from a number of species.

Vaa3D-Neuron allows several ways of neuron tracing, including manual, semi-automatic, and fully automatic tracing, the latter exemplified by the all-path-pruning algorithms APP and APP2 and methods for tracing ultra-large volumes.[5][6][7]
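
Reconstructions produced by such tracing are commonly stored in the plain-text SWC format, in which each line describes one node (id, type, x, y, z, radius, parent id). The sketch below, a hedged illustration rather than Vaa3D-Neuron code, reads an SWC file and computes the total cable length of a reconstruction as the sum of parent-child distances, one simple example of morphology quantification.

    #include <cmath>
    #include <fstream>
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    // Read an SWC reconstruction and report total neurite (cable) length.
    struct Node { double x, y, z; long long parent; };

    int main(int argc, char** argv) {
        if (argc < 2) { std::cerr << "usage: swclen file.swc\n"; return 1; }
        std::ifstream in(argv[1]);
        std::map<long long, Node> nodes;
        std::string line;
        while (std::getline(in, line)) {
            if (line.empty() || line[0] == '#') continue;        // skip comments
            std::istringstream ss(line);
            long long id, type, parent; double x, y, z, r;
            if (ss >> id >> type >> x >> y >> z >> r >> parent)
                nodes[id] = {x, y, z, parent};
        }
        double total = 0.0;
        for (const auto& kv : nodes) {
            const Node& n = kv.second;
            auto p = nodes.find(n.parent);                       // parent == -1 marks a root
            if (p == nodes.end()) continue;
            double dx = n.x - p->second.x, dy = n.y - p->second.y, dz = n.z - p->second.z;
            total += std::sqrt(dx * dx + dy * dy + dz * dz);
        }
        std::cout << "total neurite length: " << total << "\n";
    }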

Single Cell Analysis for C. elegans, Fruitfly, and Mouse

Vaa3D has been used to extract single cells in several studies of the nematode C. elegans,[8] the fruit fly,[9] the mouse,[10] and other species. The primary functions used were 3D image segmentation, to extract and quantify single cells' gene expression levels, and fast cell counting in specific brain areas. Vaa3D also provides methods to annotate these cells and identify their names.

Vaa3D also provides the SRS (Simultaneous Recognition and Segmentation) algorithm[11] for 3D segmentation of complicated cells that often touch each other. SRS adaptively maps a predefined "atlas" (a layout map of cells) onto an image, iterating an expectation-maximization procedure until convergence. It has been shown to reduce over-segmentation and under-segmentation errors compared with the commonly used watershed segmentation method.
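
As a highly simplified, one-dimensional illustration of the expectation-maximization idea at the core of such atlas-guided segmentation (not the SRS implementation itself), the sketch below initializes cell centers from an assumed atlas and refines them against a toy intensity profile.

    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        // Toy intensity profile along one axis, with two cell-like peaks.
        std::vector<double> signal = {0, 1, 5, 9, 5, 1, 0, 2, 6, 10, 6, 2, 0};
        // Atlas prediction: two cells roughly at positions 2 and 10.
        std::vector<double> mu = {2.0, 10.0};
        const double sigma = 1.5;                          // fixed spread for simplicity

        for (int iter = 0; iter < 20; ++iter) {
            std::vector<double> num(mu.size(), 0.0), den(mu.size(), 0.0);
            for (std::size_t x = 0; x < signal.size(); ++x) {
                // E-step: responsibility of each component for position x.
                std::vector<double> w(mu.size());
                double sum = 0.0;
                for (std::size_t k = 0; k < mu.size(); ++k) {
                    double d = (double(x) - mu[k]) / sigma;
                    w[k] = std::exp(-0.5 * d * d);
                    sum += w[k];
                }
                // M-step accumulation: intensity-weighted position updates.
                for (std::size_t k = 0; k < mu.size(); ++k) {
                    double r = (sum > 0) ? w[k] / sum : 0.0;
                    num[k] += r * signal[x] * double(x);
                    den[k] += r * signal[x];
                }
            }
            for (std::size_t k = 0; k < mu.size(); ++k)
                if (den[k] > 0) mu[k] = num[k] / den[k];   // updated cell centers
        }
        std::cout << "refined cell centers: " << mu[0] << ", " << mu[1] << "\n";
    }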

Brain Mapping and 3D Image Registration

Vaa3D has been used in several brain mapping projects, for both pattern alignment (registration) and multiplexing-based analysis.[12][13]

Extensions

Vaa3D can be extended using a plugin interface. A wizard called "Plugin Creator" is also provided to generate a basic template of a new plugin.
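
The exact base class, callbacks, and export macro come from Vaa3D's plugin interface header and are filled in automatically by the Plugin Creator wizard; the sketch below only illustrates the general Qt-style shape of such a plugin, and the names used are placeholders rather than the real Vaa3D API.

    #include <QString>
    #include <QStringList>

    // Hypothetical plugin skeleton: the plugin declares the menu entries it
    // contributes and a callback invoked when one of them is selected. The
    // interface name and method names below are placeholders for illustration.
    class ExamplePluginInterface {
    public:
        virtual ~ExamplePluginInterface() {}
        virtual QStringList menulist() const = 0;          // entries added to the host's plugin menu
        virtual void domenu(const QString& name) = 0;      // invoked when an entry is chosen
    };

    class ExamplePlugin : public ExamplePluginInterface {
    public:
        QStringList menulist() const override {
            return QStringList() << "Count bright voxels" << "About";
        }
        void domenu(const QString& name) override {
            if (name == "Count bright voxels") {
                // fetch the current image from the host application and process it
            }
        }
    };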

Plugins spanning several main categories have been released.

Vaa3D has also been extended to interoperate with ITK, MATLAB, Bio-Formats, OpenCV, and other widely used software. One extension, Vaa3D-TeraFly, visualizes terabyte-scale image data using a Google Earth-style "dive-in" view of the data.
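
The core idea behind such a dive-in viewer is a precomputed multi-resolution pyramid: each level halves the resolution of the one below, and the level to load is chosen from the current zoom so only a bounded amount of data is read at any time. The sketch below is a generic illustration of that level selection, not TeraFly's implementation.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>

    // Pick the pyramid level to display: level 0 is full resolution, level
    // numLevels-1 is the coarsest (downsampled by 2^(numLevels-1)).
    // zoom == 1 shows the whole volume; larger zoom dives toward level 0.
    std::size_t chooseLevel(double zoom, std::size_t numLevels) {
        double coarsest = double(1ull << (numLevels - 1));   // assumes numLevels >= 1
        double level = std::log2(std::max(1.0, coarsest / zoom));
        return static_cast<std::size_t>(std::clamp(level, 0.0, double(numLevels - 1)));
    }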

References

  1. Peng, H.; et al. (2010). "V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets". Nature Biotechnology. 28 (4): 348–353. doi:10.1038/nbt.1612.
  2. Peng, H.; et al. (2014). "Extensible visualization and analysis for multidimensional images using Vaa3D". Nature Protocols. 9 (1): 193–208. doi:10.1038/nprot.2014.011.
  3. Eliceiri, K; et al. (2012). "Biological imaging software tools". Nature Methods. 9 (7): 697–710. doi:10.1038/nmeth.2084.
  4. Peng, H.; et al. (2014). "Virtual finger boosts three-dimensional imaging and microsurgery as well as terabyte volume image visualization and analysis". Nature Communications. 5: 4342. doi:10.1038/ncomms5342.
  5. Peng, H.; et al. (2011). "Automatic 3D neuron tracing using all-path pruning". Bioinformatics. 27 (13): i239–i247. doi:10.1093/bioinformatics/btr237.
  6. Xiao, H.; et al. (2013). "APP2: automatic tracing of 3D neuron morphology based on hierarchical pruning of gray-weighted image distance-trees". Bioinformatics. 29 (11): 1448–1454. doi:10.1093/bioinformatics/btt170. PMID 23603332.
  7. Peng, H.; et al. (2016). "Automatic Tracing of Ultra-Volume of Neuronal Images". bioRxiv. doi:10.1101/087726.
  8. Long, F.; et al. (2009). "A 3D digital atlas of C. elegans and its application to single-cell analyses". Nature Methods. 6 (9): 667–672. doi:10.1038/nmeth.1366.
  9. Heckscher, E.; et al. (2014). "Atlas-builder software and the eNeuro atlas: resources for developmental biology and neuroscience". Development. 141: 2524–2532. doi:10.1242/dev.108720.
  10. Aponte, Y.; et al. (2011). "AGRP neurons are sufficient to orchestrate feeding behavior rapidly and without training". Nature Neuroscience. 14 (14): 351–355. doi:10.1038/nn.2739.
  11. Qu, L.; et al. (2011). "Simultaneous recognition and segmentation of cells: application in C. elegans". Bioinformatics. 27 (20): 2895–2902. doi:10.1093/bioinformatics/btr480. PMC 3187651. PMID 21849395.
  12. Qu, L.; et al. (2014). "LittleQuickWarp: an ultrafast image warping tool". Methods. 73: 38–42. doi:10.1016/j.ymeth.2014.09.002.
  13. Peng, H.; et al. (2011). "BrainAligner: 3D registration atlases of Drosophila brains". Nature Methods. 8 (6): 493–498. doi:10.1038/nmeth.1602.