Fig. 6: User interface of Dojo. The interface consists of the 2D slice view (center), the 3D volume renderer (top right corner), the toolbox (top left corner), additional textual information (bottom left corner), and an activity log including all connected users (bottom right corner).

Source publication
Article
Proofreading refers to the manual correction of automatic segmentations of image data. In connectomics, electron microscopy data is acquired at nanometer-scale resolution and results in very large image volumes of brain tissue that require fully automatic segmentation algorithms to identify cell boundaries. However, these algorithms require hundred...

Context in source publication

Context 1
... The graphical user interface of Dojo (Fig. 6) was designed with non-expert users in mind and aims to be minimalistic and clean. The 2D slice viewer uses the full window size, while controls, information, and help are moved to the corners so that they do not obscure the data visualization and the environment remains distraction-free. All textual information is kept small but still readable. ...

Similar publications

Article
Recent developments in serial-section electron microscopy allow the efficient generation of very large image data sets, but analyzing such data poses challenges for software tools. Here we introduce Volume Annotation and Segmentation Tool (VAST), a freely available utility program for generating and editing annotations and segmentations of large vol...

Citations

... For proofreading, CAVE builds on the ChunkedGraph [33,34]. Like previous systems [31,35-39], the ChunkedGraph represents cells as connected components in a supervoxel (groups of voxels) graph. It is currently the only system for neuron-based proofreading by a distributed community, but was too costly to be used on petascale datasets (~1 mm³ of brain tissue). ...
... These requirements are met by the ChunkedGraph proofreading system, whose design was described previously [33,34]. Like previous proofreading systems [31,35,36], the ChunkedGraph stores the segmentation as a graph of atomic segments, called supervoxels (Fig. 2a,b). Connected components in this graph represent neurons (Fig. 2c). ...
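
To make the supervoxel-graph representation concrete, the sketch below models neurons as connected components using a small union-find structure. The IDs, edges, and class are illustrative assumptions only; the actual ChunkedGraph is hierarchical and spatially chunked so that edits touch only local subgraphs.

```python
# A minimal sketch of representing neurons as connected components in a
# supervoxel graph, the core idea behind ChunkedGraph-style proofreading.
# Supervoxel IDs and edges below are made up, not from any real dataset.

class UnionFind:
    """Disjoint-set forest with path compression and union by size."""
    def __init__(self):
        self.parent = {}
        self.size = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        self.size.setdefault(x, 1)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

# Supervoxels are nodes; an edge means "same cell". A merge edit adds an
# edge, a split edit removes one and recomputes the affected component.
edges = [(1, 2), (2, 3), (7, 8)]
uf = UnionFind()
for a, b in edges:
    uf.union(a, b)

# Supervoxels 1, 2, 3 now resolve to one neuron; 7 and 8 to another.
print(uf.find(3) == uf.find(1))  # True
print(uf.find(7) == uf.find(1))  # False
```
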
Preprint
Advances in electron microscopy, image segmentation and computational infrastructure have given rise to large-scale and richly annotated connectomic datasets which are increasingly shared across communities. To enable collaboration, users need to be able to concurrently create new annotations and correct errors in the automated segmentation by proofreading. In large datasets, every proofreading edit relabels the cell identities of millions of voxels and thousands of annotations like synapses. For analysis, users require immediate and reproducible access to this constantly changing and expanding data landscape. Here, we present the Connectome Annotation Versioning Engine (CAVE), a computational infrastructure for immediate and reproducible connectome analysis in up to petascale datasets (~1 mm³) while proofreading and annotation are ongoing. For segmentation, CAVE provides a distributed proofreading infrastructure for continuous versioning of large reconstructions. Annotations in CAVE are defined by locations such that they can be quickly assigned to the underlying segment, which enables fast analysis queries of CAVE's data for arbitrary time points. CAVE supports schematized, extensible annotations, so that researchers can readily design novel annotation types. CAVE is already used for many connectomics datasets, including the largest datasets available to date.
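
The location-defined annotation idea can be illustrated in a few lines: an annotation stores only a coordinate, and its segment identity is resolved against whichever segmentation version a query names. The dict-of-volumes history below is a stand-in assumption for illustration, not CAVE's actual distributed storage.

```python
# A minimal sketch of location-defined annotations, assuming a dense
# segmentation array per version. A dict of timestamped volumes stands in
# for a real versioned backend.
import numpy as np

versions = {}  # timestamp -> segmentation volume (illustrative stand-in)

seg_t0 = np.zeros((4, 4, 4), dtype=np.uint64)
seg_t0[:2] = 10            # segment 10 before proofreading
versions[0] = seg_t0

seg_t1 = seg_t0.copy()
seg_t1[seg_t1 == 10] = 42  # a merge edit relabels millions of voxels
versions[1] = seg_t1

# Annotations store only locations, so they stay valid across edits:
synapse = (1, 2, 3)  # (z, y, x) voxel coordinate of an annotation

def segment_at(point, timestamp):
    """Resolve an annotation to its segment ID at a given version."""
    return versions[timestamp][point]

print(segment_at(synapse, 0))  # 10
print(segment_at(synapse, 1))  # 42: same annotation, new identity
```
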
... Traced neuron branches preserve the spatial and topological information of the OM image, making it easier to investigate the connectivity patterns and topological relations of the brain's nervous system. Typically, the reconstruction process consists of an automated stage that provides a preliminary tracing result and a manual stage in which neuron imaging experts verify and correct the tracing result [1,9,10,22]. ...
Article
Neuron tracing, also known as neuron reconstruction, is an essential step in investigating the morphology of neuronal circuits and mechanisms of the brain. Since the ultra-high throughput of optical microscopy (OM) imaging leads to images of multiple gigabytes or even terabytes, it takes tens of hours for the state-of-the-art methods to generate a neuron reconstruction from a whole mouse brain OM image. We introduce InstantTrace, a novel framework that utilizes parallel neuron tracing on GPUs, achieving a significant speed boost of more than 20× compared to state-of-the-art methods with comparable reconstruction quality on the BigNeuron dataset. Our framework utilizes two methods to achieve this performance advance. Firstly, it takes advantage of the sparse structure and tree topology of the neuron image, which serial tracing methods cannot fully exploit. Secondly, all stages of the neuron tracing pipeline, including the initial reconstruction stage that had not been parallelized in the past, are executed on the GPU using carefully designed parallel algorithms. Furthermore, to investigate the applicability and robustness of the InstantTrace framework, a test on a whole mouse brain OM image was conducted, and a preliminary neuron reconstruction of the whole brain was finished within 1 h on a single GPU, an order of magnitude faster than existing methods. Our framework has the potential to significantly improve the efficiency of the neuron tracing process, allowing neuron image experts to obtain a preliminary reconstruction result instantly before engaging in manual verification and refinement.
... (3) Optimized procedures for manual corrections, as well as optimized computational and storage efficiency to reduce user operation time and hardware requirements. Previous software programs for assisting 3D cell segmentation either could not utilize the semantic segmentation results as input, generated less accurate instance segmentation results, or lacked key functions essential for manually correcting segmentation mistakes [11,12,21-24]. ...
... In the past, other software programs have made significant contributions to advancing the field of 3D segmentation [21-24,29,30]. It is beneficial to compare the efficiency of our software with theirs. ...
Article
Recent advances in microscopy techniques, especially in electron microscopy, are transforming biomedical studies by acquiring large quantities of high-precision 3D cell image stacks. To examine cell morphology and connectivity in organs such as the brain, scientists need to conduct cell segmentation, which extracts individual cell regions of different shapes and sizes from a 3D image. This is challenging due to the indistinct images often encountered in real biomedical research: in many cases, automatic segmentation methods inevitably produce numerous mistakes in the segmentation results, even when using advanced deep learning methods. To analyze 3D cell images effectively, a semi-automated software solution is needed that combines powerful deep learning techniques with the ability to perform post-processing, generate accurate segmentations, and incorporate manual corrections. To address this gap, we developed Seg2Link, which takes deep learning predictions as input and uses watershed 2D + cross-slice linking to generate more accurate automatic segmentations than previous methods. Additionally, it provides various manual correction tools essential for correcting mistakes in 3D segmentation results. Moreover, our software has been optimized for efficiently processing large 3D images in diverse organisms. Thus, Seg2Link offers a practical solution for scientists to study cell morphology and connectivity in 3D image stacks.
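
A rough sketch of the "watershed 2D + cross-slice linking" idea follows: each slice of a network's probability map is segmented independently with a 2D watershed, and labels are then propagated across slices by overlap. The thresholds, erosion depth, and IoU cutoff are illustrative assumptions, not Seg2Link's actual parameters.

```python
# Sketch: per-slice 2D watershed followed by overlap-based linking,
# assuming a cell-probability stack `pred` from a deep network.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_slice(prob2d, cell_thresh=0.5):
    """Watershed one 2D slice of the network's probability map."""
    mask = prob2d > cell_thresh
    markers, _ = ndi.label(ndi.binary_erosion(mask, iterations=3))
    return watershed(-prob2d, markers, mask=mask)

def link_slices(prev, curr, next_id, min_iou=0.3):
    """Relabel `curr` so regions overlapping `prev` inherit its IDs."""
    out = np.zeros_like(curr)
    for lab in np.unique(curr):
        if lab == 0:
            continue
        region = curr == lab
        overlap = prev[region]
        overlap = overlap[overlap > 0]
        if overlap.size:
            cand = np.bincount(overlap).argmax()
            iou = (np.logical_and(region, prev == cand).sum()
                   / np.logical_or(region, prev == cand).sum())
            if iou >= min_iou:
                out[region] = cand
                continue
        out[region] = next_id  # no match: a new 3D object starts here
        next_id += 1
    return out, next_id

pred = np.random.rand(5, 64, 64)            # stand-in probability stack
slices = [segment_slice(p) for p in pred]
next_id = int(slices[0].max()) + 1
for i in range(1, len(slices)):
    slices[i], next_id = link_slices(slices[i - 1], slices[i], next_id)
labels3d = np.stack(slices)                 # linked 3D instance labels
```
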
... Nearly all major connectomics systems support the display of image pyramids. Neuroglancer, CATMAID (Saalfeld et al., 2009), NeuTu/DVID (Zhao et al., 2018; Katz and Plaza, 2019), Knossos (Helmstaedter et al., 2011), WebKnossos (Boergens et al., 2017), PyKnossos (Wanner et al., 2016), BigDataViewer (Pietzsch et al., 2015), Vaa3d (Peng et al., 2010), BossDB (Hider et al., 2019), Omni (Shearer, 2009), VAST (Berger et al., 2018), RECONSTRUCT (Fiala, 2005), VikingViewer (Anderson et al., 2011), Dojo (Haehn et al., 2014), Ilastik (Berg et al., 2019), IMOD (Kremer et al., 1996), TrakEM2 (Cardona et al., 2012), ITK-SNAP (Yushkevich et al., 2006), and SSECRETT/NeuroTrace (Jeong et al., 2010) all support the storage and display of image pyramids of electron micrographs. Most of these support the display of segmentation overlays as well. ...
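
As a minimal illustration of what such an image pyramid is, the sketch below builds 2× mipmap levels: mean pooling for grayscale EM data and majority-vote pooling for label overlays, so segment IDs are never blended. This is a toy construction under assumed power-of-two shapes, not the chunked, cloud-backed pyramids these tools actually store.

```python
# Sketch: image-pyramid (mipmap) construction for EM data and overlays.
import numpy as np

def downsample_image(img):
    """Average 2x2 blocks: suitable for grayscale EM intensity data."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def downsample_labels(seg):
    """Majority vote per 2x2 block, so label IDs are never blended."""
    h, w = seg.shape
    blocks = (seg.reshape(h // 2, 2, w // 2, 2)
                 .transpose(0, 2, 1, 3)
                 .reshape(h // 2, w // 2, 4))
    out = np.empty((h // 2, w // 2), dtype=seg.dtype)
    for i in range(h // 2):
        for j in range(w // 2):
            vals, counts = np.unique(blocks[i, j], return_counts=True)
            out[i, j] = vals[counts.argmax()]
    return out

em = np.random.rand(512, 512).astype(np.float32)   # stand-in micrograph
pyramid = [em]
while pyramid[-1].shape[0] > 64:                   # 512^2 ... 64^2 levels
    pyramid.append(downsample_image(pyramid[-1]))

seg = np.random.randint(0, 5, (128, 128)).astype(np.uint32)
seg_half = downsample_labels(seg)                  # overlay at half res
```
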
Article
Three-dimensional electron microscopy images of brain tissue and their dense segmentations are now petascale and growing. These volumes require the mass production of dense segmentation-derived neuron skeletons, multi-resolution meshes, image hierarchies (for both modalities) for visualization and analysis, and tools to manage the large amount of data. However, open tools for large-scale meshing, skeletonization, and data management have been missing. Igneous is a Python-based distributed computing framework that enables economical meshing, skeletonization, image hierarchy creation, and data management using cloud or cluster computing, and it has been proven to scale horizontally. We sketch Igneous's computing framework, show how to use it, and characterize its performance and data storage.
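
The horizontal scaling described here follows a familiar pattern: cut the volume into independent chunk-sized tasks, publish them to a queue, and let any number of workers drain it. The toy sketch below shows only the shape of that pattern using Python's standard library; it is not Igneous's actual API, which targets cloud task queues and cloud-backed storage.

```python
# Toy sketch of a queue-drained, horizontally scalable task pattern.
import queue
import threading

tasks = queue.Queue()

# e.g. one hypothetical downsample task per chunk of the volume
for chunk_origin in [(0, 0, 0), (512, 0, 0), (0, 512, 0)]:
    tasks.put(("downsample", chunk_origin))

def worker(worker_id):
    while True:
        try:
            kind, origin = tasks.get_nowait()
        except queue.Empty:
            return
        # a real task would read the chunk, process it, and write results
        print(f"worker {worker_id}: {kind} at {origin}")
        tasks.task_done()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
tasks.join()  # scaling out = adding workers; tasks are independent
for t in threads:
    t.join()
```
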
... As for human proofreading, there are already several publicly available medical image processing software packages built around manual annotation, such as ITK-SNAP [12], MITK [13], 3D Slicer [14], TurtleSeg [15] and Seg3D [16]. Most of them provided interaction functions including pixel painting, contour interpolation [13,15,17,18], interactive level sets [19], surface adjustment [20], and super-pixel [21] and super-voxel [22] modification. However, these tools were developed for general segmentation purposes; none of them was specifically designed for efficient correction of neural network outputs. ...
Article
Purpose: Training deep neural networks usually requires a large amount of human-annotated data. For organ segmentation from volumetric medical images, human annotation is tedious and inefficient. To save human labour and to accelerate the training process, the strategy of annotation by iterative deep learning (AID) has recently become popular in the research community. However, due to the lack of domain knowledge or efficient human-interaction tools, current AID methods still suffer from long training times and a high annotation burden.
Methods: We develop a contour-based AID algorithm which uses boundary representation instead of voxel labels to incorporate high-level organ shape knowledge. We propose a contour segmentation network with a multi-scale feature extraction backbone to improve the boundary detection accuracy. We also developed a contour-based human-intervention method to facilitate easy adjustments of organ boundaries. By combining the contour-based segmentation network and the contour-adjustment intervention method, our algorithm achieves fast few-shot learning and efficient human proofreading.
Results: For validation, two human operators independently annotated four abdominal organs in computed tomography (CT) images using our method and two comparison methods, i.e. a traditional contour-interpolation method and a state-of-the-art (SOTA) convolutional network (CNN) method based on voxel label representation. Compared to these methods, our approach considerably saved annotation time and reduced inter-rater variability. Our contour detection network also outperforms the SOTA nnU-Net in producing anatomically plausible organ shapes with only a small training set.
Conclusion: Taking advantage of the boundary shape prior and the contour representation, our method is more efficient, more accurate and less prone to inter-operator variability than the SOTA AID methods for organ segmentation from volumetric medical images. The good shape learning ability and flexible boundary adjustment function make it suitable for fast annotation of organ structures with regular shapes.
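
The contrast between boundary and voxel representations is easy to see in code: below, a hypothetical organ contour is stored as a handful of polygon control points, a human adjustment moves one point directly, and voxel labels are only rasterized from the contour afterwards. The coordinates are made up, and skimage's polygon rasterizer stands in for the paper's pipeline.

```python
# Sketch: contour (boundary) representation, rasterized to voxel labels.
import numpy as np
from skimage.draw import polygon

rows = np.array([20, 25, 60, 80, 70, 30], dtype=float)  # made-up contour
cols = np.array([30, 70, 90, 60, 25, 15], dtype=float)

# Human intervention: drag one boundary control point outward slightly.
rows[2] += 5.0

# Voxel labels are derived from the adjusted contour, not edited directly.
rr, cc = polygon(rows, cols, shape=(100, 100))
mask = np.zeros((100, 100), dtype=np.uint8)
mask[rr, cc] = 1
```
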
... UNI-EM (Urakubo et al., 2019) provides the entire segmentation workflow, including pre-processing, data preparation, training, inference, post-processing, proofreading and visualization. One of its main components is Dojo, a web-based tool for proofreading (Haehn et al., 2014). An additional 3D annotator is included in the package for correcting 3D segmentations using a surface mesh-based 3D viewer. ...
Preprint
Large-scale electron microscopy (EM) datasets generated using (semi-)automated microscopes are becoming the standard in EM. Given the vast amounts of data, manual analysis of all of it is not feasible; thus, automated analysis is crucial. The main challenges in automated analysis are the annotation needed to analyse and interpret biomedical images and the need for high throughput. Here, we review the current state of the art of automated computer techniques and the major challenges for the analysis of structures in cellular EM. The advanced computer vision, deep learning and software tools that have been developed in the last five years for automatic biomedical image analysis are discussed with respect to annotation, segmentation and scalability for EM data. Integration of automatic image acquisition and analysis will allow for high-throughput analysis of millimeter-range datasets with nanometer resolution.
... The first is a data structure called the ChunkedGraph, which is the basis for proofreading. Like previous systems [11-14], FlyWire represents neurons as connected components in a graph of supervoxels (groups of voxels). A naive implementation of this underlying data structure would scale poorly to large datasets. ...
Article
Due to advances in automated image acquisition and analysis, whole-brain connectomes with 100,000 or more neurons are on the horizon. Proofreading of whole-brain automated reconstructions will require many person-years of effort, due to the huge volumes of data involved. Here we present FlyWire, an online community for proofreading neural circuits in a Drosophila melanogaster brain and explain how its computational and social structures are organized to scale up to whole-brain connectomics. Browser-based three-dimensional interactive segmentation by collaborative editing of a spatially chunked supervoxel graph makes it possible to distribute proofreading to individuals located virtually anywhere in the world. Information in the edit history is programmatically accessible for a variety of uses such as estimating proofreading accuracy or building incentive systems. An open community accelerates proofreading by recruiting more participants and accelerates scientific discovery by requiring information sharing. We demonstrate how FlyWire enables circuit analysis by reconstructing and analyzing the connectome of mechanosensory neurons.
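
To illustrate how a programmatically accessible edit history can feed incentive systems, here is a hedged sketch that tallies merge and split edits per proofreader. The record fields and log are invented for illustration; FlyWire's real edit history is served through its own services, not this structure.

```python
# Sketch: analyzing an edit log to credit proofreaders per edit type.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Edit:
    user: str
    kind: str        # "merge" or "split"
    timestamp: float

# A made-up edit log; a real one would be queried programmatically.
log = [
    Edit("ann", "merge", 1.0),
    Edit("ann", "split", 2.0),
    Edit("ben", "merge", 3.0),
]

per_user = Counter((e.user, e.kind) for e in log)
print(per_user[("ann", "merge")])  # 1 merge credited to "ann"
```
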
... However, due to the limited availability of ground-truth data, they suffer from over- and under-segmentation. Haehn et al. [22] developed desktop applications for proofreading automatically generated segmentations. Unfortunately, these methods are inapplicable to WFM images due to differences in the visual representation of neurons, the level of detail, and image quality. ...
Article
We introduce NeuroConstruct, a novel end-to-end application for the segmentation, registration, and visualization of brain volumes imaged using wide-field microscopy. NeuroConstruct offers a Segmentation Toolbox with various annotation helper functions that help experts effectively and precisely annotate micrometer-resolution neurites. It also offers automatic neurite segmentation using convolutional neural networks (CNNs) trained on the Toolbox annotations, and soma segmentation using thresholding. To visualize neurites in a given volume, NeuroConstruct offers hybrid rendering, combining iso-surface rendering of high-confidence classified neurites with real-time rendering of the raw volume using a 2D transfer function over voxel classification score vs. voxel intensity value. For a complete reconstruction of the 3D neurites, we introduce a Registration Toolbox that provides automatic coarse-to-fine alignment of serially sectioned samples. Quantitative and qualitative analyses show that NeuroConstruct outperforms the state of the art in all design aspects. NeuroConstruct was developed as a collaboration between computer scientists and neuroscientists, with an application to the study of cholinergic neurons, which are severely affected in Alzheimer's disease.
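
The 2D transfer function mentioned here is straightforward to sketch: each voxel's opacity comes from a lookup table indexed by its classification score and raw intensity. The table shape and values below are arbitrary assumptions for illustration, not NeuroConstruct's actual function.

```python
# Sketch: a 2D transfer function mapping (score, intensity) to opacity.
import numpy as np

BINS = 64
tf2d = np.zeros((BINS, BINS), dtype=np.float32)  # opacity lookup table
# make voxels opaque only when the classification score is high,
# with opacity ramping up alongside raw intensity:
tf2d[BINS // 2:, :] = np.linspace(0.0, 1.0, BINS)[None, :]

def opacity(score, intensity):
    """score, intensity in [0, 1] -> opacity via the 2D lookup table."""
    i = np.clip((score * BINS).astype(int), 0, BINS - 1)
    j = np.clip((intensity * BINS).astype(int), 0, BINS - 1)
    return tf2d[i, j]

score_vol = np.random.rand(8, 8, 8).astype(np.float32)  # CNN scores
inten_vol = np.random.rand(8, 8, 8).astype(np.float32)  # raw volume
alpha = opacity(score_vol, inten_vol)  # per-voxel opacity for rendering
```
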
... However, no existing method is perfect; all are prone to errors, particularly when applied to real data, and therefore require manual proofreading by humans. Existing proofreading methods are primarily based on the interactive manual correction of either merge or split errors (see Figure 1) using an intuitive user interface and visualization [9,12]. Even with the support of such interactive tools, manual proofreading is a time-consuming and labor-intensive task, resulting in a bottleneck in the connectome analysis workflow. ...
Preprint
The segmentation of nanoscale electron microscopy (EM) images is crucial but challenging in connectomics. Recent advances in deep learning have demonstrated the significant potential of automatic segmentation for tera-scale EM images. However, none of the existing segmentation methods are error-free, and they require proofreading, which is typically implemented as an interactive, semi-automatic process via manual intervention. Herein, we propose a fully automatic proofreading method based on reinforcement learning. The main idea is to model the human decision process in proofreading using a reinforcement agent to achieve fully automatic proofreading. We systematically design the proposed system by combining multiple reinforcement learning agents in a hierarchical manner, where each agent focuses only on a specific task while preserving dependency between agents. Furthermore, we demonstrate that the episodic task setting of reinforcement learning can efficiently manage a combination of merge and split errors concurrently present in the input. We demonstrate the efficacy of the proposed system by comparing it with state-of-the-art proofreading methods on various testing examples.
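
As a toy illustration of the modeled decision process, the sketch below trains a single tabular agent to choose merge, split, or skip at synthetic error sites from a reward signal. The states, reward, and single-agent setup are invented stand-ins; the actual system is hierarchical and operates on learned image features.

```python
# Toy sketch: a tabular agent learning merge/split/skip decisions.
import random

ACTIONS = ["merge", "split", "skip"]
q = {}  # (state, action) -> estimated value

def choose(state, eps=0.2):
    """Epsilon-greedy action selection over the value table."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

# Synthetic episodes: the state is a coarse feature of the error site,
# and the "correct" action is a fixed (made-up) function of it.
for episode in range(2000):
    state = random.choice(["touching", "gap", "clean"])
    correct = {"touching": "merge", "gap": "split", "clean": "skip"}[state]
    action = choose(state)
    reward = 1.0 if action == correct else -1.0
    key = (state, action)
    q[key] = q.get(key, 0.0) + 0.1 * (reward - q.get(key, 0.0))

print(choose("touching", eps=0.0))  # typically "merge" after training
```
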
... For instance, unlike transmission EMs, the more commonly used scanning EMs measure the backscattering of the electron beam [42]. Still, direct volume rendering is used [29,23], often in combination with other techniques, to proofread [17,1] and analyze or explore [2,5,4,21] automatically [25,50,39] or semi-automatically [22] generated segmentations. ...
Preprint
In this paper, we present a novel data structure, called the Mixture Graph. This data structure allows us to compress, render, and query segmentation histograms. Such histograms arise when building a mipmap of a volume containing segmentation IDs. Each voxel in the histogram mipmap contains a convex combination (mixture) of segmentation IDs. Each mixture represents the distribution of IDs in the respective voxel's children. Our method factorizes these mixtures into a series of linear interpolations between exactly two segmentation IDs. The result is represented as a directed acyclic graph (DAG) whose nodes are topologically ordered. Pruning replicate nodes in the tree followed by compression allows us to store the resulting data structure efficiently. During rendering, transfer functions are propagated from sources (leafs) through the DAG to allow for efficient, pre-filtered rendering at interactive frame rates. Assembly of histogram contributions across the footprint of a given volume allows us to efficiently query partial histograms, achieving up to 178$\times$ speed-up over na$\mathrm{\"{i}}$ve parallelized range queries. Additionally, we apply the Mixture Graph to compute correctly pre-filtered volume lighting and to interactively explore segments based on shape, geometry, and orientation using multi-dimensional transfer functions.