Deep learning becomes slow, and DiscreteSpace = null when started via the REST API

Training becomes slow after about 80 steps, and when I start it via the REST API, DiscreteSpace = null.

This is my QLearningConfiguration:

public static QLearningConfiguration buildConfig() {
    return QLearningConfiguration.builder()
            .seed(123L)
            .maxEpochStep(200)        // max steps per episode
            .maxStep(15000)           // total training steps
            .expRepMaxSize(150000)    // experience replay buffer size
            .batchSize(128)
            .targetDqnUpdateFreq(500) // sync the target network every 500 steps
            .updateStart(10)          // warm-up steps before learning updates start
            .rewardFactor(0.01)       // reward scaling
            .gamma(0.99)              // discount factor
            .errorClamp(1.0)          // TD-error clipping
            .minEpsilon(0.1f)         // epsilon-greedy floor
            .epsilonNbStep(10000)     // steps over which epsilon is annealed
            .doubleDQN(true)
            .build();
}
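For context, this is roughly how the configuration ends up in the learner (a simplified sketch, not my exact code; MyMdp and MyObservation are placeholders for my actual environment and observation classes):

// Simplified sketch (not my exact code). Assumed imports:
//   org.deeplearning4j.rl4j.learning.sync.qlearning.discrete.QLearningDiscreteDense
//   org.deeplearning4j.rl4j.network.configuration.DQNDenseNetworkConfiguration
//   org.deeplearning4j.rl4j.mdp.MDP
//   org.deeplearning4j.rl4j.space.DiscreteSpace
//   org.nd4j.linalg.learning.config.Adam

// Dense DQN used by the learner
DQNDenseNetworkConfiguration netConf = DQNDenseNetworkConfiguration.builder()
        .l2(0.001)
        .updater(new Adam(0.001))
        .numHiddenNodes(16)
        .numLayers(3)
        .build();

// MyMdp implements MDP<MyObservation, Integer, DiscreteSpace>;
// its getActionSpace() must return a non-null DiscreteSpace
MDP<MyObservation, Integer, DiscreteSpace> mdp = new MyMdp();
QLearningDiscreteDense<MyObservation> dql =
        new QLearningDiscreteDense<>(mdp, netConf, buildConfig());
dql.train();
mdp.close();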

This is part of my pom.xml:

	<dependency>
		<groupId>org.deeplearning4j</groupId>
		<artifactId>rl4j</artifactId>
		<version>1.0.0-M1</version>
		<type>pom</type>
	</dependency>
	<dependency>
		<groupId>org.deeplearning4j</groupId>
		<artifactId>rl4j-core</artifactId>
		<version>1.0.0-M1.1</version>
	</dependency>

	<dependency>
		<groupId>org.deeplearning4j</groupId>
		<artifactId>deeplearning4j-core</artifactId>
		<version>1.0.0-M2.1</version>
	</dependency>

	<dependency>
		<groupId>org.nd4j</groupId>
		<artifactId>nd4j-native</artifactId>
		<version>1.0.0-M2.1</version>
		<scope>test</scope>
	</dependency>

This is my CPU (screenshot):

It would be kind if someone could help me with this …

@ynot it's kind of hard to tell what you're doing. If I can't run it, I can't help you. I can't reverse-engineer your whole context from a process screenshot; I can only guess, which means my answer will probably be wrong due to missing details, and I can't read minds (e.g. I can't see your code).

Code without configuration and context doesn’t really help anyone help you.

With that in mind, please follow up with more details.

  1. Start by describing everything about what you are running. That includes the web framework you're using and how you are running the REST API.
  2. Give more details about what you think fast should be. What are we talking about here? Throughput? Records per second? Let's establish a metric to measure so that, after a bit of discussion, we can clearly say whether the support here actually made your model faster.

Let me suggest a few things:

  1. Tell me what you think fast should be. Where does it start? What does it slow down to? If you don't know how to measure performance, please use the PerformanceListener (see the sketch after this list).

  2. How are you running a benchmark? Are you just letting some training code run? I would advise giving me a clear way to reproduce your benchmark. Usually that comes down to seeing all of your code, including your data pre-processing.

  3. In order to make sure you're using the latest DL4J, please include an mvn dependency:tree dump of all your dependencies as a GitHub gist so we can verify you're on the latest versions.
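For reference, PerformanceListener logs iterations/sec, examples/sec and the score at a fixed interval. A minimal sketch (the tiny network here is just a stand-in; attach the listener to whatever network you actually train):

// org.deeplearning4j.optimize.listeners.PerformanceListener plus the usual DL4J/ND4J imports
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .updater(new Adam(0.001))
        .list()
        .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                .nIn(4).nOut(2).activation(Activation.IDENTITY).build())
        .build();
MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.init();
// log timing and score every 10 iterations
net.setListeners(new PerformanceListener(10, true));
// net.fit(yourIterator); // the listener reports while fit() runs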

First of all, thanks for the quick reply.
After a lot of experimenting with the dependencies, it works now.

I use Spring Boot and then call the URL to start it.

I just call the training code from the controller.
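The endpoint is basically just a controller method that kicks off the training, roughly like this (a simplified sketch; the class name and path are placeholders, not my real ones):

// Simplified sketch; real names differ
@RestController
public class TrainingController {

    @GetMapping("/train")
    public String train() {
        // builds the MDP, the DQN and the QLearningDiscreteDense from buildConfig(), then calls train()
        startTraining();
        return "training started";
    }

    private void startTraining() {
        // ... training code as shown above ...
    }
}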
After about 80 steps it would hang on

return new StepReply<>(
        observation,
        reward,
        isDone(),
        "training"
);

for a short time, so the next step came very late.
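That return sits at the end of my MDP's step() method, roughly like this (simplified; the helper methods are placeholders for my actual environment logic):

// Simplified sketch of the step() method; helpers are placeholders
@Override
public StepReply<MyObservation> step(Integer action) {
    applyAction(action);                               // placeholder: apply the action to the environment
    MyObservation observation = currentObservation();  // placeholder: build the next observation
    double reward = computeReward();                   // placeholder: compute the reward
    return new StepReply<>(
            observation,
            reward,
            isDone(),
            "training"
    );
}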

My solution was to change the versions of the dependencies: I changed 1.0.0-M2.1 to 1.0.0-beta7, and so on. Now it works.

I just don't understand why it was the version that caused it…

Here is my dependency tree:
[mvn tree · GitHub](https://mvn dependency:tree)

Since I have another question that doesn't fit this topic, I will open a new one and explain my approach in more detail there.