
Two questions about the spleen_segmentation_3d tutorial #7959

@hz1z

Description


In your spleen_segmentation_3d tutorial:

Question 1

val_outputs = [post_pred(i) for i in decollate_batch(val_outputs)]
val_labels = [post_label(i) for i in decollate_batch(val_labels)]

Isn't this redundant? This code decollates the data along the batch dimension and then wraps the results in a list, which seems to accomplish nothing. Why not call post_pred(val_outputs) and post_label(val_labels) directly? Does the AsDiscrete transform require the data to be removed from the batch dimension in order to work?
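For context, here is a minimal sketch of what the decollate step does, assuming a plain batch-first array (this mimics the splitting behavior of monai.data.decollate_batch rather than calling it; the real function also handles dicts and metadata). The point is that it is not a no-op: each list element has the batch dimension removed, which is the shape the per-sample post-transforms operate on.

```python
import numpy as np

def decollate_sketch(batch):
    """Split a batch-first array into a list of per-sample arrays.

    Simplified stand-in for monai.data.decollate_batch: it only
    handles a plain array, not dicts or attached metadata.
    """
    return [batch[i] for i in range(batch.shape[0])]

# a fake batch of 2 one-channel predictions, shape (B, C, H, W, D)
val_outputs = np.zeros((2, 1, 4, 4, 4))

items = decollate_sketch(val_outputs)
print(len(items))        # 2 items, one per sample
print(items[0].shape)    # (1, 4, 4, 4): channel-first, no batch dim
```

So `[post_pred(i) for i in decollate_batch(val_outputs)]` applies the transform once per unbatched sample, rather than once to the whole batched tensor.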

Question 2

train_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"]),
        EnsureChannelFirstd(keys=["image", "label"]),
        ScaleIntensityRanged(
            keys=["image"],
            a_min=-57,
            a_max=164,
            b_min=0.0,
            b_max=1.0,
            clip=True,
        ),
        CropForegroundd(keys=["image", "label"], source_key="image"),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 2.0), mode=("bilinear", "nearest")),
        RandCropByPosNegLabeld(
            keys=["image", "label"],
            label_key="label",
            spatial_size=(96, 96, 96),
            pos=1,
            neg=1,
            num_samples=4,
            image_key="image",
            image_threshold=0,
        ),
    ]
)

# for inference
roi_size = (160, 160, 160)
sw_batch_size = 4

Why roi_size = (160, 160, 160)?
For training, you randomly crop the data into patches of size (96, 96, 96). Why use (160, 160, 160) for inference instead?
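To make the question concrete, here is a simplified sketch of the tiling that a sliding-window inferer performs (monai.inferers.sliding_window_inference does this internally, plus padding and blending of overlapping predictions; the overlap value and clamping below are assumptions of this sketch). Because the whole volume is covered window by window, the inference roi_size is a free choice and need not match the training crop size.

```python
import math

def sliding_window_starts(image_size, roi_size, overlap=0.25):
    """Compute window start indices along each spatial axis,
    the way a sliding-window inferer tiles a volume.

    Sketch only: when roi exceeds the axis length it clamps to 0,
    whereas the real inferer pads the input instead.
    """
    starts = []
    for dim, roi in zip(image_size, roi_size):
        step = max(1, int(roi * (1 - overlap)))
        n = max(1, math.ceil((dim - roi) / step) + 1)
        axis_starts = [min(i * step, max(0, dim - roi)) for i in range(n)]
        starts.append(sorted(set(axis_starts)))
    return starts

# a 226 x 157 x 113 volume tiled with roi_size = (160, 160, 160)
print(sliding_window_starts((226, 157, 113), (160, 160, 160)))
# → [[0, 66], [0], [0]]
```

A larger roi_size at inference means fewer windows (and more spatial context per window) at the cost of memory, which is why tutorials often pick an inference roi_size larger than the training patch.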
