@yjay ND4J itself is just a “numpy” (an n-dimensional array library). To use it with a specific input pipeline, you normally pair it with DataVec.
We have examples of that here for csv data:
ND4J can also read numpy arrays directly. If you want to use Python for your ETL, you can just save your arrays in numpy format and we can load them; that works as well:
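A minimal sketch of that hand-off, from the Python side only (the filename and array contents here are made up for illustration; on the JVM side, ND4J can then load the resulting `.npy` file):

```python
import numpy as np

# Hypothetical ETL result: any numpy array you built in Python.
features = np.arange(12, dtype=np.float64).reshape(3, 4)

# Save in .npy format; ND4J can read this file directly on the JVM side.
np.save("features.npy", features)

# Sanity check: the round trip preserves shape and values.
restored = np.load("features.npy")
assert restored.shape == (3, 4)
assert np.array_equal(features, restored)
```

On the ND4J side you would then load the file into an `INDArray` (via ND4J's npy-loading support, e.g. `Nd4j.createFromNpyFile`) and work with it from Java.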
Unfortunately, we don’t have as much magic around auto data type conversion and do not support objects as input types. We do allow strings though. If you can tell me more specifically what you’re trying to do I can make a better recommendation.
Basically, what I’m trying to do is build an array that can be used for data science predictions. More specifically, at the moment I’m looking to use it for k-nearest neighbour. I want to pass knn.fit a flattened array containing numpy.array values.
I also want to be able to use it later on for things like numpy.maximum and numpy.minimum.
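What the asker describes might look like the following sketch in plain numpy (the sample data is invented; the point is the `(n_samples, n_features)` shape that a `knn.fit(X, y)` style call typically expects):

```python
import numpy as np

# Hypothetical per-sample arrays (e.g. 2x2 patches) collected during ETL.
samples = [np.array([[1, 2], [3, 4]]),
           np.array([[5, 6], [7, 8]]),
           np.array([[0, 1], [1, 0]])]

# Stack into one array, then flatten each sample to a row vector:
# k-NN implementations generally expect X with shape (n_samples, n_features).
X = np.stack(samples).reshape(len(samples), -1)
assert X.shape == (3, 4)

# The same array works with numpy's element-wise extrema later on,
# e.g. capping every value at 5:
ceiling = np.minimum(X, 5)
assert ceiling.max() == 5
```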
@yjay then for columnar data, we typically use CSV record readers and DataVec. Like I mentioned, if you prefer Python and pandas, you can save your dataset as numpy arrays and we can load them directly.
For max and min you’ll want:
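For reference, the numpy calls the asker mentions behave like this; the ND4J counterparts (element-wise ops and single-array reductions, e.g. in its `Transforms` helper class) follow the same split, though treat those Java names as an assumption here rather than a verified snippet:

```python
import numpy as np

a = np.array([1.0, 7.0, 3.0])
b = np.array([4.0, 2.0, 9.0])

# Element-wise maximum/minimum across two arrays.
elem_max = np.maximum(a, b)   # -> [4., 7., 9.]
elem_min = np.minimum(a, b)   # -> [1., 2., 3.]

# Reductions over a single array.
assert a.max() == 7.0
assert b.min() == 2.0
```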