Run Transfer Learning Example against an image file

Hello, can anyone help me run an image from a file through the graph produced by the MobileNetTransferLearningExample? I ran the example, modified here to use the CIFAR-100 dataset rather than CIFAR-10, got a 78% accuracy rate, and saved the SameDiff to disk using asFlatFile(). In the test code below I load the SameDiff from disk and run a jpg through it. I get a result from Predictions/Output, but I don’t know how to interpret it. How can I map this 4D array to a single label? Thank you.

Code:

public class LoadMobileNet {

public static void main(String[] args) throws Exception {

    //cifar-100
    int numClasses = 100; 

    SameDiff sd = SameDiff.fromFlatFile(new File("/tmp/frozenGraph.dat"));

    String urlString = "https://cdn.britannica.com/16/126516-050-2D2DB8AC/Triumph-Rocket-III-motorcycle-2005.jpg";
    int w = 1600;
    int h = 1131;
    URL url = new URL(urlString);
    INDArray testImage = new ImageLoader(h, w, 3).asMatrix(url.openStream());

    INDArray out = sd.batchOutput()
        .input("input", testImage)
        .output("Predictions/Output")
        .outputSingle();

    System.out.println("Result: " + out.shapeInfoToString());
}
}

Output:

Result: Rank: 4, DataType: FLOAT, Offset: 0, Order: c, Shape: [1,30,44,100], Stride: [132000,4400,100,1]
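For what it’s worth, the printed Shape and Stride are self-consistent: the shape reads as NHWC, i.e. [batch, height, width, classes], and ND4J’s c-order strides are just the running products of the trailing dimensions. A small plain-Java sketch (standalone, no ND4J dependency; class and method names are my own) that reproduces the printed strides from the shape:

```java
public class StrideDemo {

    // Compute row-major (c-order) strides for a given shape, matching the
    // Stride values that ND4J prints in shapeInfoToString().
    static long[] cStrides(long[] shape) {
        long[] strides = new long[shape.length];
        long running = 1;
        for (int i = shape.length - 1; i >= 0; i--) {
            strides[i] = running;
            running *= shape[i];
        }
        return strides;
    }

    public static void main(String[] args) {
        long[] shape = {1, 30, 44, 100}; // [batch, height, width, classes]
        System.out.println(java.util.Arrays.toString(cStrides(shape)));
        // [132000, 4400, 100, 1] -- matches the printed Stride
    }
}
```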

@MrForum The output is an image with multiple channels. I don’t get the problem.

@jijiji the original network took an input image and output the match percentage to each label by label index, so I assumed after transfer learning was applied, the output would be similar. I’m not sure what could be done with an output image when the goal is to classify the input by label, any guidance would be much appreciated. Thanks.

Result: Rank: 4, DataType: FLOAT, Offset: 0, Order: c, Shape: [1,30,44,100], Stride: [132000,4400,100,1]

is not typically a classification output. Maybe you are missing some layers here. Check the graph configuration.

@jijiji it’s the DL4J MobileNetTransferLearningExample

That’s weird. Try another backend.

That certainly shouldn’t be a backend dependent issue.

Depending on what exactly you’ve done, it may be that the output you are selecting isn’t the correct one. Can you share something that allows us to reproduce this from start to finish?

@treo You can reproduce by running MobileNetTransferLearningExample with a single line added at the end of main():

sd.asFlatFile(new File("/tmp/cifar10graph.dat"));

Then load it and run an image through it with:

public static void main(String[] args) throws Exception {

    SameDiff sd = SameDiff.fromFlatFile(new File("/tmp/cifar10graph.dat"));

    String urlString = "https://cdn.britannica.com/16/126516-050-2D2DB8AC/Triumph-Rocket-III-motorcycle-2005.jpg";
    int w = 1600;
    int h = 1131;
    URL url = new URL(urlString);
    INDArray testImage = new ImageLoader(h, w, 3).asMatrix(url.openStream());
    		
    INDArray out = sd.batchOutput()
        .input("input", testImage)
        .output("Predictions/Output")
        .outputSingle();
    
    System.out.println("Result: " + out.shapeInfoToString());
}

Yielding the same result as above, but with 10 rather than 100 as the last dimension size:

Result: Rank: 4, DataType: FLOAT, Offset: 0, Order: c, Shape: [1,30,44,10], Stride: [13200,440,10,1]

How could you use this result to classify the image against the labels?

Thanks.

That looks interesting. The default image input size for MobileNet is 244x244 if I remember correctly. You are putting in an image of 1600x1131.

What is the shape of the output when you put a 244x244 image in? I guess it is probably [1,1,1,10] (or [1,1,1,100] for cifar100).

The easiest way forward is probably to just scale your image accordingly. If you don’t want to do that, I guess you will want to add some kind of pooling layer at the end.
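If you go the scaling route, one way to do it without extra dependencies is plain java.awt, as sketched below (class and method names are my own; I’m using 224x224 here as an example since that is standard MobileNet’s input size, but the size your graph actually expects depends on how it was built; DataVec’s loaders may also be able to resize for you directly):

```java
import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.image.BufferedImage;

public class ResizeSketch {

    // Scale an arbitrary image to the fixed size the network expects
    // before handing it to ImageLoader / SameDiff.
    static BufferedImage resize(BufferedImage src, int w, int h) {
        Image scaled = src.getScaledInstance(w, h, Image.SCALE_SMOOTH);
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(scaled, 0, 0, null);
        g.dispose();
        return out;
    }

    public static void main(String[] args) {
        // Stand-in for the downloaded 1600x1131 jpg.
        BufferedImage src = new BufferedImage(1600, 1131, BufferedImage.TYPE_INT_RGB);
        BufferedImage small = resize(src, 224, 224);
        System.out.println(small.getWidth() + "x" + small.getHeight()); // 224x224
    }
}
```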

@treo for a 244x244 image the result was

Result: Rank: 4, DataType: FLOAT, Offset: 0, Order: c, Shape: [1,2,2,10], Stride: [40,20,10,1]
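To collapse a [1,2,2,10] score map like that into a single label, one option is to average the class scores over the spatial cells and then take the argmax over classes. With ND4J itself, something like out.mean(1, 2) followed by argMax should be equivalent; the plain-Java sketch below (my own class/method names, no library dependency) just shows the idea on a nested float array:

```java
public class ClassifyFromMap {

    // Average an NHWC score map over its spatial cells, then take the
    // argmax over the class dimension to get one predicted label index.
    static int predictedLabel(float[][][][] out) {
        int h = out[0].length, w = out[0][0].length, c = out[0][0][0].length;
        float[] pooled = new float[c];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                for (int k = 0; k < c; k++)
                    pooled[k] += out[0][y][x][k] / (h * w);
        int best = 0;
        for (int k = 1; k < c; k++)
            if (pooled[k] > pooled[best]) best = k;
        return best;
    }

    public static void main(String[] args) {
        float[][][][] out = new float[1][2][2][10];
        out[0][1][1][3] = 5f; // pretend class 3 dominates one cell
        System.out.println("Predicted label: " + predictedLabel(out));
        // Predicted label: 3
    }
}
```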