# Dataset Card for DenSpine
## Volumetric Files
The dataset comprises dendrites from 3 brain samples: seg_den (also known as M50), mouse (M10), and human (H10).
Every species has 3 volumetric .h5 files:
- `{species}_raw.h5`: instance segmentation of entire dendrites in the volume (labelled `1-50` or `1-10`), where trunks and spines share the same label
- `{species}_spine.h5`: "binary" segmentation, where trunks are labelled `0` and spines are labelled with their `raw` dendrite label
- `{species}_seg.h5`: spine instance segmentation (labelled `51-...` or `11-...`), where every spine in the volume is labelled uniquely
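These are standard HDF5 volumes, so they can be inspected with `h5py`. The sketch below is illustrative only: the name of the dataset key inside each file is not documented on this card, so the code simply reads the first key it finds; check `f.keys()` for the actual name.

```python
# Minimal loading sketch for the volumetric files (assumes h5py is installed).
import h5py
import numpy as np

species = "seg_den"  # or "mouse", "human"
with h5py.File(f"{species}_raw.h5", "r") as f:
    key = list(f.keys())[0]  # internal dataset name is an assumption; inspect f.keys()
    raw = f[key][:]          # dense instance segmentation of whole dendrites

print(raw.shape, np.unique(raw)[:10])  # labels 1-50 for seg_den, 1-10 for mouse/human
```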
## Point Cloud Files
In addition, we provide preprocessed point clouds sampled along each dendrite's centerline skeleton for ease of use when evaluating point-cloud-based methods. Each `.npz` file can be loaded as follows:
```python
import numpy as np

# species is one of "seg_den", "mouse", "human"; idx indexes a dendrite trunk
data = np.load(f"{species}_1000000_10000/{idx}.npz", allow_pickle=True)
trunk_id, pc, trunk_pc, label = data["trunk_id"], data["pc"], data["trunk_pc"], data["label"]
```
- `trunk_id` is an integer which corresponds to the dendrite's `raw` label
- `pc` is a shape `[1000000, 3]` isotropic point cloud
- `trunk_pc` is a shape `[skeleton_length, 3]` (ordered) array, which represents the centerline of the trunk of `pc`
- `label` is a shape `[1000000]` array with values corresponding to the `seg` labels of each point in the point cloud
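Because `label` carries per-point `seg` instance labels, a point cloud can be split into individual spine clouds with a simple group-by. The sketch below is not code from this repo: the file path and index are hypothetical, and the assumption that trunk points carry a label below the first spine label is noted in the comments.

```python
import numpy as np

data = np.load("seg_den_1000000_10000/1.npz", allow_pickle=True)  # hypothetical example file
pc, label = data["pc"], data["label"]

# Spine instance labels start at 51 for seg_den (11 for mouse/human); treating
# anything below that threshold as trunk is an assumption about the labelling.
spine_start = 51
spine_ids = np.unique(label[label >= spine_start])
spines = {int(s): pc[label == s] for s in spine_ids}  # per-spine point clouds

print(f"{len(spines)} spines in this dendrite")
```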
We provide a comprehensive example of how to instantiate a PyTorch dataloader with our dataset in dataloader.py (optionally using the FFD transform with frenet=True).
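For reference, a minimal sketch of such a dataloader is shown below. It is not the repo's dataloader.py and omits the FFD/frenet transform; the class name and root directory are placeholders. Since `trunk_pc` has a variable `skeleton_length`, the default collate only works with `batch_size=1` (or a custom `collate_fn`).

```python
import glob

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader


class DenSpinePointClouds(Dataset):
    """Minimal Dataset over the preprocessed {species}_1000000_10000/*.npz files."""

    def __init__(self, root):
        self.paths = sorted(glob.glob(f"{root}/*.npz"))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        d = np.load(self.paths[i], allow_pickle=True)
        return {
            "trunk_id": int(d["trunk_id"]),  # raw dendrite label
            "pc": torch.as_tensor(np.asarray(d["pc"], dtype=np.float32)),            # [1000000, 3]
            "trunk_pc": torch.as_tensor(np.asarray(d["trunk_pc"], dtype=np.float32)),  # [skeleton_length, 3]
            "label": torch.as_tensor(np.asarray(d["label"], dtype=np.int64)),         # [1000000] seg labels
        }


# trunk_pc has variable length, so keep batch_size=1 unless a custom collate_fn is used
loader = DataLoader(DenSpinePointClouds("seg_den_1000000_10000"), batch_size=1, shuffle=True)
```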
## Training splits for seg_den
The folds used for training/evaluating the seg_den dataset, based on raw labels, are defined as follows:
```python
seg_den_folds = [
    [3, 5, 11, 12, 23, 28, 29, 32, 39, 42],
    [8, 15, 19, 27, 30, 34, 35, 36, 46, 49],
    [9, 14, 16, 17, 21, 26, 31, 33, 43, 44],
    [2, 6, 7, 13, 18, 24, 25, 38, 41, 50],
    [1, 4, 10, 20, 22, 37, 40, 45, 47, 48],
]
```
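One way to turn these folds into a split is sketched below. Holding out a single fold for evaluation and training on the remaining four is an assumed cross-validation protocol, not one stated on this card.

```python
# Form one train/test split from seg_den_folds (defined above).
held_out = 0  # index of the fold used for evaluation in this split
test_labels = set(seg_den_folds[held_out])
train_labels = sorted(l for i, fold in enumerate(seg_den_folds) if i != held_out for l in fold)
```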