Reduction of accuracy

Hi all,
I have an array namely “real”. it has a vector of double numbers. I am trying to create an INDArray based on my vector.
my code is below :

double[] real_d = (double[]) ArrBD.get(0);
INDArray real = Nd4j.create(real_d);

It works, but unfortunately it converts very large numbers to infinity and very small numbers to zero.
Here you can see the double array:

[0.0, 5.911259568298103E-306, -1.8401529637727614E-221, 1.7562463582928743E-268, 2.7206514809021945E-133, 4.583815262762733E90, 2.5637698710586 …

and here the data of INDArray:

[0.0,0.0,-0.0,0.0,0.0,Infinity,Infinity,-0.0,-0.0 …

I appreciate your help in advance.

Correct me if I'm wrong, but you're losing precision? Each number is only allowed so much accuracy because of its memory size. Is this what you mean?


This is likely due to not specifying the data type. We don't implement a “double” etc. any differently than what you'd see in numpy etc.
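To illustrate the effect (a plain-Java sketch, no ND4J needed): when a double is narrowed to a float, values outside float's representable range (roughly 1.4E-45 to 3.4E38 in magnitude) underflow to zero or overflow to infinity, which matches the output you posted. The two sample values below are taken from your array:

```java
public class FloatRangeDemo {
    public static void main(String[] args) {
        // Two values from the double array in the question
        double tiny = 5.911259568298103E-306; // far below Float.MIN_VALUE (~1.4E-45)
        double huge = 4.583815262762733E90;   // far above Float.MAX_VALUE (~3.4E38)

        // Narrowing conversion double -> float, as happens when the
        // array is stored in a float-typed buffer
        float tinyF = (float) tiny; // underflows to 0.0f
        float hugeF = (float) huge; // overflows to Infinity

        System.out.println(tinyF); // 0.0
        System.out.println(hugeF); // Infinity
    }
}
```

So the values aren't corrupted by ND4J itself; any float-typed storage of those doubles would show the same zeros and infinities.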


Thank you. You are right. I now understand what is going on behind it: I read the binary file in double format, and Nd4j shows it in float format during debugging, but when I call this line

data.real.data().asDouble();

it returns the correct values.