Accessing values of variables after training

I’m new to neural networks and I’m unsure if my problem requires a network or if I need to do it manually somehow. I’m not even sure I’m using the correct wording :slight_smile: … so any pointers and example snippets are appreciated!

I have y = w * x where

  • y is the result vector of shape nSamples x 1
  • w are the weights, a matrix of shape nSamples x nFeatures
  • x is the input vector of shape nFeatures x 1

Here nSamples is on the order of a few thousand and nFeatures is 10-20.

I would like to feed the network the elements w_ij and y_i, and I would expect deeplearning4j to train x_j and return these values after training. The difference from the normal use case is probably that I do not want to predict y; instead I need access to the trained “raw” x values.

Is this possible with a neural network, and could I somehow fetch x after training?
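
Roughly what I have in mind, expressed with SameDiff (just a sketch from reading the docs, so I’m not sure this is a valid way to set it up; wArray and yArray are stand-ins for my data):

import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

int nSamples = 5000, nFeatures = 15;
INDArray wArray = Nd4j.rand(nSamples, nFeatures);    // known weights w_ij (random here just to have data)
INDArray yArray = Nd4j.rand(nSamples, 1);            // known results y_i

SameDiff sd = SameDiff.create();
SDVariable w = sd.constant("w", wArray);             // fixed data, not trained
SDVariable y = sd.constant("y", yArray);             // fixed data, not trained
SDVariable x = sd.var("x", Nd4j.rand(nFeatures, 1)); // the unknowns I want trained

SDVariable pred = w.mmul(x);                         // nSamples x 1
SDVariable diff = pred.sub(y);
SDVariable mse  = diff.mul(diff).mean("mse");        // mean squared error as the loss
sd.setLossVariables("mse");
// ...train somehow, then read the raw x values back, e.g. via sd.getArrForVarName("x")?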

Or: I’ve seen Ex2_LinearRegression.java, and it looks like this could work. Unfortunately I do not understand the details regarding placeholderData, broadcast, and minibatch, nor how I would use the gradient to optimize the x values. Btw: could it be that a line is missing at line 72, à la sd.setLossVariables(mse)? At least mse is unused.

(Or do I have to reformulate my problem as x = w⁻¹ y? I’m a bit lost on how I would approach this in that case.)
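
My naive guess for that route would be ordinary least squares via the normal equations, e.g. directly with ND4J, where w and y below are the plain INDArrays; no idea if that is the intended approach:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.inverse.InvertMatrix;

// w: nSamples x nFeatures, y: nSamples x 1; overdetermined, so least squares
INDArray wtw = w.transpose().mmul(w);                      // nFeatures x nFeatures
INDArray wty = w.transpose().mmul(y);                      // nFeatures x 1
INDArray x   = InvertMatrix.invert(wtw, false).mmul(wty);  // x = (wᵀw)⁻¹ wᵀ y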

@karussell

I would like to feed the network the elements w_ij and y_i, and I would expect deeplearning4j to train x_j and return these values after training.

The way SameDiff works is you just get a map of outputs back. That will include the outputs as matrices. No deep learning framework (even the Python ones) operates on individual neurons; code was written like that a very long time ago. Everything nowadays is expressed in terms of variables that are matrices.

In general, when you ask for certain variables back from .output(…), those are what you will get in the map.
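
A rough sketch, assuming x was declared as a variable named "x" and placeholderData holds whatever placeholders the graph needs (possibly an empty map):

import java.util.Collections;
import java.util.Map;
import org.nd4j.linalg.api.ndarray.INDArray;

// empty if everything in the graph is a constant or variable; otherwise fill in the placeholder arrays
Map<String, INDArray> placeholderData = Collections.emptyMap();
Map<String, INDArray> out = sd.output(placeholderData, "x");   // ask for the variable "x" back
INDArray trainedX = out.get("x");
// or read the array behind the variable directly:
INDArray trainedX2 = sd.getArrForVarName("x");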

Btw: could it be that a line is missing at line 72, à la sd.setLossVariables(mse)? At least mse is unused.

We have a limited amount of automatic loss variable inference, but it’s still recommended to mark certain variables as loss variables explicitly.
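
E.g. something along these lines, assuming predictions and labels are SDVariables already in your graph:

import org.nd4j.autodiff.samediff.SDVariable;

SDVariable diff = predictions.sub(labels);
SDVariable mse  = diff.mul(diff).mean("mse");
// mark the loss explicitly instead of relying on automatic inference
sd.setLossVariables("mse");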

Thanks a lot for the very fast response.

And would it be possible to do the training by hand, like in Ex2_LinearRegression, but with a loop that uses the gradient to optimize the x values? I got something like this working in PyTorch:

import torch
from torch import optim

# params (the x values to optimize), objective, min_values and max_values are defined elsewhere
optimizer = optim.Rprop([params], lr=5)
for i in range(10):
    optimizer.zero_grad()
    x_clamped = torch.clamp(params, min_values, max_values)  # keep x within its bounds
    loss = objective(x_clamped)
    loss.backward()
    optimizer.step()
    print(f'Objective function result: {loss.item():,.0f}')

but it felt slow and I’d prefer to have it in Java.
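
In Java I imagine something roughly like the following manual loop, based on calculateGradients from the SameDiff javadoc; this reuses the sd / "x" / "mse" names from my sketch above, it’s only a guess, and it leaves out the Rprop updater and the clamping:

import java.util.Collections;
import java.util.Map;
import org.nd4j.linalg.api.ndarray.INDArray;

double lr = 0.01;  // made-up learning rate
for (int i = 0; i < 10; i++) {
    // gradient of the loss w.r.t. the trainable variable "x"
    Map<String, INDArray> grads = sd.calculateGradients(Collections.emptyMap(), "x");
    // plain gradient-descent step, applied in place on x's backing array
    sd.getArrForVarName("x").subi(grads.get("x").mul(lr));
    // re-evaluate the loss to monitor progress
    INDArray loss = sd.output(Collections.emptyMap(), "mse").get("mse");
    System.out.println("iteration " + i + ": mse = " + loss);
}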

@karussell sorry, not really understanding… that’s what we do with SGD and fit(…).
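
i.e. a TrainingConfig plus fit(…) gives you that loop. A rough sketch, where "w" and "y" would be placeholders fed from a DataSetIterator iter (features = w, labels = y), and the updater and epoch count are just example values:

import org.nd4j.autodiff.samediff.TrainingConfig;
import org.nd4j.linalg.learning.config.Sgd;

TrainingConfig config = TrainingConfig.builder()
        .updater(new Sgd(0.01))          // SGD does the zero_grad/backward/step dance for you
        .dataSetFeatureMapping("w")      // DataSet features -> "w" placeholder
        .dataSetLabelMapping("y")        // DataSet labels   -> "y" placeholder
        .build();
sd.setTrainingConfig(config);
sd.fit(iter, 10);                        // 10 epochs; afterwards sd.getArrForVarName("x") holds the trained x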