# Empanada and MitoNet

## Overview
EMPANADA stands for EM Panoptic Any Dimension Annotation. It's a library for efficiently training and deploying deep learning models for panoptic segmentation of large 2D and 3D EM images. Using empanada and a highly heterogeneous dataset of labeled mitochondria, we trained a general model called MitoNet to automatically segment mitochondrial instances. MitoNet is currently available for use in napari via the empanada-napari plugin, though we're open to supporting its use in other software platforms.
## Resources
- empanada: Source code for the empanada library. Documentation is here.
- empanada-napari: Source code for the empanada-napari plugin. Documentation is here.
- CEM1.5M: An unlabeled dataset of 1.5 million EM images of cells. Used for self-supervised pre-training and for selecting heterogeneous image data to annotate for segmentation model training.
- CEM1.5M Pre-trained Weights: PyTorch weights for a ResNet50 model pre-trained on CEM1.5M using the SwAV algorithm.
- CEM-MitoLab: A dataset of ~22,000 images containing over 135,000 individually labeled mitochondria. This is the dataset we used to train MitoNet.
- MitoNet models: Model definition and weights as PyTorch scripted modules (includes optimized GPU and CPU versions).
- Benchmark datasets: Six benchmark volumes of instance segmented mitochondria and one benchmark dataset of 100 TEM images from diverse EM datasets.
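Since the MitoNet models are distributed as PyTorch scripted (TorchScript) modules, loading and running them follows the standard `torch.jit.load` workflow. The sketch below illustrates that workflow with a tiny stand-in network so it runs without downloading anything; `TinySegmenter` and the temporary file path are placeholders for illustration only, not part of empanada or the released MitoNet files.

```python
# Illustrative sketch, not the official empanada API: a TorchScript
# module is scripted, saved, reloaded, and run, mirroring how one
# would load a downloaded MitoNet scripted-module file.
import os
import tempfile

import torch
import torch.nn as nn


class TinySegmenter(nn.Module):
    """Placeholder network: 1 grayscale input channel -> 1 logit map."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)


# Script and save the module, the format the MitoNet releases use.
scripted = torch.jit.script(TinySegmenter())
path = os.path.join(tempfile.mkdtemp(), "model.pt")
scripted.save(path)

# In practice, `path` would point at the downloaded MitoNet file.
model = torch.jit.load(path, map_location="cpu").eval()

# Run on a single EM slice shaped (batch, channel, height, width).
with torch.no_grad():
    logits = model(torch.randn(1, 1, 256, 256))
print(tuple(logits.shape))  # (1, 1, 256, 256)
```

A 3x3 convolution with padding 1 preserves spatial dimensions, so the output logit map has the same height and width as the input slice.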
## Citing this work
If you find any of these resources useful in your work, please cite:
```bibtex
@article{Conrad2022.03.17.484806,
  author = {Conrad, Ryan and Narayan, Kedar},
  title = {Instance segmentation of mitochondria in electron microscopy images with a generalist deep learning model},
  elocation-id = {2022.03.17.484806},
  year = {2022},
  doi = {10.1101/2022.03.17.484806},
  publisher = {Cold Spring Harbor Laboratory},
  URL = {https://www.biorxiv.org/content/early/2022/03/18/2022.03.17.484806},
  eprint = {https://www.biorxiv.org/content/early/2022/03/18/2022.03.17.484806.full.pdf},
  journal = {bioRxiv}
}
```