How to use DynamicCustomOp objects for their namesake operations

Looking through the docs, I see a lot of names I recognize from NumPy as extensions/implementations of CustomOp or DynamicCustomOp.

These objects don’t have any methods that seem meant for the operations implied by the class names. In fact, I looked at the “Direct Known Subclasses” of DynamicCustomOp and I see a lot of functionality that I would love to use, but I don’t know how to invoke it.

For example BinCount(): https://deeplearning4j.org/api/latest/org/nd4j/linalg/api/ops/impl/transforms/BinCount.html

What is the methodology for putting these objects to use?

@wcneill you will generally want to access those through SameDiff if possible, but if there is a C++ op that doesn’t have Java bindings, you can access it via DynamicCustomOp.

Nd4j’s ops can be found here:

Is there a code snippet somewhere that I’m not seeing that explains how to use these ops via SameDiff or other?

I’ve been peeking at the SameDiff docs, as well as linalg.api.ops.custom, which has a lot of familiar names from NumPy, but it’s not very clear how to use these objects. For example, the constructor of the Roll object looks like the method signature for np.roll:

Roll(@NonNull INDArray input, @NonNull INDArray axes, @NonNull INDArray shifts) 

But I don’t see any methods in the Roll class that actually perform the operation. I only see opName(), tensorflowName(), and calculateOutputDataTypes().

@wcneill DynamicCustomOps are a way of accessing underlying C++ ops that may not already be mapped by us. You can see the full list of ops here:
https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/resources/nd4j-op-def.pbtxt

All you need to know is that DynamicCustomOp has addInputArgument(…), addTArgument(…) (add a float argument), and addIArgument(…) (add an integer argument), which let you pass the arguments you want to the underlying C++ runtime following the schema above. There’s a sketch of this after the roll declaration below.

Here’s the roll declaration:

opList {
  name: "roll"
  argDescriptor {
    name: "shift"
    argType: INT64
  }
  argDescriptor {
    name: "outputs"
    argType: OUTPUT_TENSOR
  }
  argDescriptor {
    name: "input"
    argType: INPUT_TENSOR
  }
  argDescriptor {
    name: "shiftsI"
    argType: INPUT_TENSOR
    argIndex: 1
  }
  argDescriptor {
    name: "dimensions"
    argType: INPUT_TENSOR
    argIndex: 2
  }
  opDeclarationType: CONFIGURABLE_OP_IMPL
}
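
For example, here’s a minimal sketch of invoking roll that way through DynamicCustomOp.builder(…), passing the shift and dimensions as the extra input tensors the schema declares (the input order follows the argIndex values above):

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.api.ops.DynamicCustomOp;
import org.nd4j.linalg.factory.Nd4j;

INDArray input  = Nd4j.createFromArray(1f, 2f, 3f, 4f, 5f, 6f);
INDArray shifts = Nd4j.createFromArray(2);   // maps to "shiftsI" (argIndex 1)
INDArray dims   = Nd4j.createFromArray(0);   // maps to "dimensions" (argIndex 2)

// The builder name must match the opList entry above
DynamicCustomOp roll = DynamicCustomOp.builder("roll")
        .addInputs(input, shifts, dims)
        .build();

INDArray rolled = Nd4j.exec(roll)[0];        // expect [5.0, 6.0, 1.0, 2.0, 3.0, 4.0]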

These signatures are generated from our code gen tool found in contrib:

I generally run this by importing it in IntelliJ. There hasn’t been a need to streamline this too much, but I would like to eventually get around to that.

This is all generated from the C++. Generally, dynamic ops allow some parameters to be passed in either via integer arguments or via ndarray arguments. This allows flexibility in situations where the dimensions of a computation are themselves a variable in the computation graph rather than static.
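
For instance, here’s the static form of the same call, a sketch that assumes the C++ roll op also accepts its shift via the "shift" INT64 argument declared in the schema instead of the "shiftsI" input tensor:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.api.ops.DynamicCustomOp;
import org.nd4j.linalg.factory.Nd4j;

INDArray input = Nd4j.createFromArray(1f, 2f, 3f, 4f, 5f, 6f);

// Shift passed as a static integer argument (iArg) rather than as an ndarray input
DynamicCustomOp roll = DynamicCustomOp.builder("roll")
        .addInputs(input)
        .addIntegerArguments(2)
        .build();

INDArray rolled = Nd4j.exec(roll)[0];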

Hopefully that clarifies the situation a bit.

If you just want to run the op directly, you can use Nd4j.exec(op). That will run the op in INDArray mode and give you the results immediately.
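
Using the Roll class from the question, a minimal sketch (relying on the Roll(INDArray, INDArray, INDArray) constructor quoted earlier):

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.api.ops.custom.Roll;
import org.nd4j.linalg.factory.Nd4j;

INDArray input  = Nd4j.createFromArray(1f, 2f, 3f, 4f, 5f, 6f);
INDArray axes   = Nd4j.createFromArray(0);   // roll along dimension 0
INDArray shifts = Nd4j.createFromArray(2);   // shift by two positions

// Nd4j.exec(...) runs the op eagerly and returns its output arrays
INDArray rolled = Nd4j.exec(new Roll(input, axes, shifts))[0];
System.out.println(rolled);                  // expect [5.0, 6.0, 1.0, 2.0, 3.0, 4.0]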

If you instead want to use the op in a SameDiff graph, you can call its outputVariable() method to get a reference to its output.

If you look into the SameDiff namespaces, that is exactly how it is done there:
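
As a sketch of that pattern (this assumes Roll exposes a matching SameDiff constructor; check the class for the exact parameter order):

import java.util.Collections;
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.api.ops.custom.Roll;
import org.nd4j.linalg.factory.Nd4j;

SameDiff sd = SameDiff.create();
SDVariable in     = sd.placeHolder("in", DataType.FLOAT, 6);
SDVariable axes   = sd.constant(Nd4j.createFromArray(0));
SDVariable shifts = sd.constant(Nd4j.createFromArray(2));

// Constructing the op against the SameDiff instance registers it in the graph;
// outputVariable() returns the symbolic handle to its result
SDVariable rolled = new Roll(sd, in, axes, shifts).outputVariable();

INDArray out = rolled.eval(
        Collections.singletonMap("in", Nd4j.createFromArray(1f, 2f, 3f, 4f, 5f, 6f)));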


Thank you, this is exactly what I need.

And thank you also to @agibsonccc; that does clarify things a lot and is going to be very useful in the future.