Test issue in semantic segmentation using Deeplearning4j

Hello @AlexBlack, can you please look at the following issue: Test issue in semantic segmentation using DL4J · Issue #8644 · eclipse/deeplearning4j · GitHub

It is closed but not fixed yet. I uploaded all the resources that you may need. Thank you so much for the help.

What about the solution there doesn’t work for you?

Thank you so much @treo for your reply.

I already used `NormalizerMinMaxScaler` in the training process: SemanticSegmentation.java · GitHub

and it doesn’t work.

That is the point. NormalizerMinMaxScaler is your problem here.
As I’ve explained already in the original answer on Github:
You can not just create a new one and fit it on your test data.
You have to serialize it when you originally train your model, and when testing it, you have to load that serialized normalizer.

When you create a new normalizer and fit it on your test data, it will only have the statistics of that test data. So if you only have a single picture in your test folder, it will only have the statistics of that single picture. This in turn results in plain wrong normalization.

You always have to use exactly the same normalization for inference / evaluation / testing as you used for training. Otherwise you put yourself into a garbage in, garbage out situation.
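As a rough sketch of that flow (trainIter and testIter are just placeholder names for your iterators):

NormalizerMinMaxScaler scaler = new NormalizerMinMaxScaler(0, 1);
scaler.fitLabel(true);               // also normalize the label masks
scaler.fit(trainIter);               // learn the min/max statistics from the training data only
trainIter.setPreProcessor(scaler);
// ... train the model, then serialize this exact scaler to disk ...

// At test time: load that serialized scaler and attach it.
// Do NOT create a fresh one and do NOT call fit() on the test iterator.
testIter.setPreProcessor(scaler);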

ahh ok, thank you.

Now I will try the second solution that you proposed: using ImagePreProcessingScaler instead of NormalizerMinMaxScaler.

Unless your pictures have the range of 0 to 255, you will need to retrain with that solution as well. But if I remember correctly, the grayscale images in the dataset excerpt that you shared with us did indeed range from pure black to pure white, so that might actually work without retraining in your case.
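For reference, the setup for that scaler is just this (iter is a placeholder for your iterator); it assumes 8-bit input and maps raw pixel values in [0, 255] to [0, 1], no fitting on any data needed:

ImagePreProcessingScaler scaler = new ImagePreProcessingScaler(0, 1);
iter.setPreProcessor(scaler);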

No problem, I will retrain the model; the training process only takes 5 hours.

Hi @treo @AlexBlack, I serialized the NormalizerMinMaxScaler after training like this:

FileOutputStream fi = new FileOutputStream(DATA_PATH+"/NormalizerMinMaxScaler.txt");
ObjectOutputStream oi = new ObjectOutputStream(fi);
oi.writeObject(scaler);
oi.close();

and I loaded it before testing like this:

FileInputStream fi = new FileInputStream(DATA_PATH+"/NormalizerMinMaxScaler.txt");
ObjectInputStream oi = new ObjectInputStream(fi);
NormalizerMinMaxScaler scaler = (NormalizerMinMaxScaler)oi.readObject();
scaler.fitLabel(true);
scaler.fit(iter);
iter.setPreProcessor(scaler);
oi.close();

but the issue is still not fixed.

There are multiple problems here.

First, you should serialize your Normalizer using NormalizerSerializer, like this:

NormalizerSerializer.getDefault().write(scaler, DATA_PATH+"/NormalizerMinMaxScaler.bin");

And deserialize it like this:

NormalizerMinMaxScaler scaler = NormalizerSerializer.getDefault().restore(DATA_PATH+"/NormalizerMinMaxScaler.bin");

When you serialize it the way you did, you risk not being able to deserialize it with a newer version of DL4J.

For now it should still work the way you did it (so you don’t have to go out and retrain to try the next step).

The problem here is that you are still fitting on the test data. You should not do that. It should load all the necessary statistics from the serialized object.
So remove the fit... lines and it should work.
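Put together, the test-side setup then becomes roughly this (assuming the normalizer was written with NormalizerSerializer as shown above):

NormalizerMinMaxScaler scaler = NormalizerSerializer.getDefault().restore(DATA_PATH+"/NormalizerMinMaxScaler.bin");
iter.setPreProcessor(scaler);   // no fitLabel()/fit() calls here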

I deleted the fit lines from the test code:
FileInputStream fi = new FileInputStream(DATA_PATH+"/NormalizerMinMaxScaler.txt");
ObjectInputStream oi = new ObjectInputStream(fi);
NormalizerMinMaxScaler scaler = (NormalizerMinMaxScaler)oi.readObject();
iter.setPreProcessor(scaler);
oi.close();

The results improved significantly when using only one image, but the segmentation results change a little bit when adding another image. Anyway, it is better than the first solution.

Now I will retrain the model using NormalizerSerializer.

I tested ImagePreProcessingScaler for training segmentation and it doesn’t work. I think there is a problem in this class for segmentation; I found this issue: ND4J: ImagePreProcessingScaler should support segmentation · Issue #8135 · eclipse/deeplearning4j · GitHub

The referenced issue has been fixed in November 2019, and the fix was included in beta6. What exactly doesn’t work for you?

I get wrong loss values during the training process. For example, when I used Cross Entropy: Binary Classification as the loss function, I got negative values.

I see. Even though the scaler has support for transforming labels for segmentation, it can’t actually be configured to do so when used as a normalizer.

I’ve created an issue: ImagePreProcessingScaler can not be configured to preprocess labels · Issue #8731 · eclipse/deeplearning4j · GitHub

Thank you for noticing this :+1:

So can I keep using NormalizerMinMaxScaler for image normalization, or is it better to use ImagePreProcessingScaler? I don’t know if they will give the same results, as my objective is just to rescale the images and labels to between 0 and 1.

If you want to use ImagePreProcessingScaler, the easiest way to do it right now would be to subclass it and override preProcess like this:

@Override
public void preProcess(DataSet toPreProcess) {
    this.preProcess(toPreProcess.getFeatures());
    this.transformLabel(toPreProcess.getLabels());
}
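A self-contained version of that subclass could look roughly like this (the class name is just a placeholder):

import org.nd4j.linalg.dataset.api.DataSet;
import org.nd4j.linalg.dataset.api.preprocessor.ImagePreProcessingScaler;

// Like ImagePreProcessingScaler, but also scales the segmentation label masks
public class LabelScalingImageScaler extends ImagePreProcessingScaler {

    public LabelScalingImageScaler() {
        super(0, 1);    // map 0-255 pixel values to [0, 1]
    }

    @Override
    public void preProcess(DataSet toPreProcess) {
        this.preProcess(toPreProcess.getFeatures());     // scale the features
        this.transformLabel(toPreProcess.getLabels());   // scale the label masks too
    }
}

Then attach it to your iterators in place of the plain scaler, e.g. iter.setPreProcessor(new LabelScalingImageScaler());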

You can also take a look at your NormalizerMinMaxScaler: if its stats show that the range it is working on is 0 to 255, then you should get essentially the same results.
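If you want to check that, the fitted statistics can be inspected directly; I believe getMin()/getMax() on the restored scaler return the per-feature bounds it learned:

// After restoring the scaler that was fit during training:
System.out.println("feature min: " + scaler.getMin());
System.out.println("feature max: " + scaler.getMax());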

No, it’s working now, thank you so much, and thank you for the effort.
Just one last question: what is the purpose of the fit method in ImagePreProcessingScaler and NormalizerMinMaxScaler?

ImagePreProcessingScaler

fit(dataSetIterator) will do nothing. All the configuration that this normalizer needs has to be provided upon construction. Because fit is a no-op here, you don’t have to serialize and save the normalizer after training, and you don’t have to restore it for inference; you can just create a new one.
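For example, both training and inference can simply construct it on the spot (iterator names are placeholders):

// The 0-255 -> [0, 1] mapping is fully defined by the constructor; fit() would learn nothing
trainIter.setPreProcessor(new ImagePreProcessingScaler(0, 1));
testIter.setPreProcessor(new ImagePreProcessingScaler(0, 1));   // a fresh instance is fine here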

NormalizerMinMaxScaler

fit(dataSetIterator) will iterate through the provided DataSetIterator and learn the statistics of the underlying data. Those statistics will inform how it actually applies the normalization. For this reason you have to serialize and save this normalizer so you can use the same statistics for normalizing test data as you’ve used when you originally trained your model.

@treo thank you so much, i really appreciate your help.

@AbdelmajidB feel free to mark a post here as the solution, so this topic will get marked as solved :slight_smile: