Why is mini batch size passed to the shape of placeholders?

Maybe a dumb question, but I thought that the dataset object contains mini batches, and batching is handled by the training function

@Cezar a dataset can be ANYTHING including the full dataset if it’s small enough. The iterator just handles batching. Dataset is what it operates on.

Yes, and in SameDiff, why do I pass the minibatch size to the shape parameter in SameDiff.placeHolder?

Can you show me a bit more of what you’re doing? Are you following an example? That might help me answer your question. Placeholders can handle variable shapes. Minibatch size is usually for normalization of the gradients during training.

In all of the SameDiff examples, I saw that the shape always starts with -1 for the minibatch/batch size. I was wondering why that is, and why I cannot just put the shape I use directly.


    // First: Let's create our placeholders. Shape: [minibatch, in/out]
    SDVariable input = sd.placeHolder("input", DataType.FLOAT, -1, nIn);
    SDVariable labels = sd.placeHolder("labels", DataType.FLOAT, -1, 1);

-1 marks that dimension as dynamic: the same graph then works with any minibatch size, and the actual size is inferred at runtime from the first array you feed in. If you hard-code a batch size instead, the placeholder will only accept arrays of exactly that size, which breaks as soon as you change the batch size or feed a final partial batch.
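Conceptually, -1 acts as a wildcard when the placeholder's declared shape is checked against the array you feed in. Here is a minimal self-contained sketch of that matching rule (this is an illustration of the idea, not SameDiff's actual implementation; the `matches` helper is hypothetical):

```java
public class ShapeMatch {
    // Returns true if the actual array shape is compatible with the
    // declared placeholder shape, where -1 matches any size in that
    // dimension (e.g. the minibatch dimension).
    static boolean matches(long[] declared, long[] actual) {
        if (declared.length != actual.length) return false;
        for (int i = 0; i < declared.length; i++) {
            if (declared[i] != -1 && declared[i] != actual[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        long[] placeholder = {-1, 3};  // shape [minibatch, nIn] with nIn = 3
        System.out.println(matches(placeholder, new long[]{32, 3})); // batch of 32: ok
        System.out.println(matches(placeholder, new long[]{1, 3}));  // batch of 1: ok
        System.out.println(matches(placeholder, new long[]{32, 4})); // wrong nIn: rejected
    }
}
```

With a declared shape of `{32, 3}` instead, only batches of exactly 32 would pass, which is why the examples use -1 for the batch dimension.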