Parallel Inference NullPointerException

I’m using macOS, Java 13, and DL4J M2.1 with the configuration below. I’m working on a project involving value function approximation. In revenueMaximizationAdv, I have saved the feature vectors of the samples in an INDArray[], with each sample’s feature vector being an INDArray of shape (1, n_features). Each element of the INDArray[] was assigned iteratively from an attribute of the agent class; I did this because using Nd4j.zeros or Nd4j.vstack seems too memory-demanding and time-consuming on my machine. In the same class, I call a static method of class OSV that runs ParallelInference.output(INDArray... input). My ParallelInference configuration is
new ParallelInference.Builder(model).inferenceMode(InferenceMode.BATCHED).batchLimit(32).workers(4).build()
and I hope to get the output back as an INDArray[], with each element having shape (1, 1). However, the following error arises, and after searching for answers I still could not work out the cause.
Exception in thread "main" java.lang.NullPointerException
	at OSV.eval(OSV.java:122)
	at revenueMaximizationAdv.doMatching(revenueMaximizationAdv.java:143)
	at Simulator.sim(Simulator.java:414)
	at mainSim.main(mainSim.java:78)
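
To make the setup concrete, the relevant part is roughly equivalent to the sketch below (nSamples, agents, and getFeatureVector() are placeholders for my actual code):

// one (1, n_features) feature vector per sample, taken from the agent objects
INDArray[] s_a = new INDArray[nSamples];
for (int i = 0; i < nSamples; i++) {
	s_a[i] = agents.get(i).getFeatureVector(); // already shaped (1, n_features)
}
// evaluate all samples through OSV, which wraps the ParallelInference configured above
INDArray[] VF_a = OSV.eval(s_a); // the NullPointerException is thrown here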

@timmy1010697 could you provide the full stack trace please? That’s not dl4j code.

I’ve solved the previous exception: it turned out the ParallelInference was not initialized before OSV.eval() was called. However, a new error (or rather, confusion about how to use the method) has come up. As mentioned earlier, the INDArray[] s_a stores 864 INDArrays of shape (1, n_features), and it is also the input to the pi.output() method. From the Javadoc, it is supposed to return an INDArray[], and as I understand it, one of the same length and in the same order as the input array. However, when running INDArray[] VF_a = OSV.eval(s_a); (please also see the static eval method attached below), VF_a has length 1, not 864 as expected. There is very little documentation online on how to use ParallelInference.output(INDArray... input), so I hope you can advise on how to address this.

import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.parallelism.ParallelInference;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class OSV {
	private static MultiLayerNetwork model;
	private static ParallelInference pi;
	// value-function evaluation for an array of per-sample feature vectors
	public static INDArray[] eval(INDArray[] state) {
		return pi.output(state);
	}
	public void update() {
		// sample a minibatch from experience replay and build a DataSetIterator from it
		DataSetIterator iterator = ...; // construction omitted
		model.fit(iterator);
		// push the newly fitted weights to the ParallelInference workers
		pi.updateModel(model);
	}
}
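
For completeness, the fix for the earlier NullPointerException was just making sure both static fields are assigned before the first call to eval(), roughly along these lines (a simplified sketch; init() and the loading of the trained model are placeholders for my actual setup):

public static void init(MultiLayerNetwork trainedModel) {
	model = trainedModel;
	pi = new ParallelInference.Builder(model)
			.inferenceMode(InferenceMode.BATCHED)
			.batchLimit(32)
			.workers(4)
			.build();
}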

I’ve also tried commenting out the last pi line and using model.output() in OSV.eval() instead.

Are there any updates on my latest question?

@timmy1010697 apologies. There’s not much to document. It’s meant to mirror the output methods of the networks themselves (both MultiLayerNetwork and ComputationGraph).

This looks like a usage mismatch. I would recommend running without ParallelInference to see whether anything changes. ParallelInference just delegates to a thread-local copy of the network you pass in; it doesn’t do anything special.
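
For example, something along these lines (an untested sketch using your variable names, and assuming the 864 rows fit in memory when stacked into one batch) should tell you whether the network itself behaves as expected:

// stack the 864 (1, n_features) rows into a single (864, n_features) batch
INDArray batch = Nd4j.vstack(s_a);
// plain network output, no ParallelInference involved; expected shape (864, 1)
INDArray scores = model.output(batch);
// if that looks right, the single-INDArray overload of ParallelInference
// should give the same result
INDArray piScores = pi.output(batch);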