2023-06-06T16:00:00Z
@wwh131 could you give more context? Also why do you need to build from source?
Google Translate/Baidu Translate/DeepL is fine if you need to translate.
Thanks!
I am doing a software migration and need to get DL4J running on an ARM64 machine. I don’t see much installation guidance on the official website.
@wwh131 you don’t “install” dl4j. It’s a library you use in a Java project. We already have linux-arm64 code. Could you clarify why you still need to build from source? Is that not working for you?
Edit: You can see the supported platforms here: Central Repository: org/nd4j/nd4j-native/1.0.0-M2.1
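To give you a concrete idea of what “use it in a Java project” means, here is a minimal sketch of dl4j pulled in as an ordinary dependency. The class name and layer sizes below are made up purely for illustration, and it assumes the dl4j/nd4j artifacts are already on your classpath:

import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class TinyNetExample {
    public static void main(String[] args) {
        // A throwaway two-layer network, just to show dl4j is consumed
        // like any other Java library: no separate install step.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new DenseLayer.Builder().nIn(4).nOut(8)
                        .activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(8).nOut(3)
                        .activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        System.out.println(net.summary());
    }
}

You compile and run that with your build tool like any other Java code.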
Ah, I didn’t know this website existed.
@wwh131 we do have the examples and I’d be happy to help you build from source if you needed to. For arm64 we have cross compile scripts but it takes a bit of setup. Try this as is first. You’ll want to use nd4j-native and nd4j-native with the linux-arm64 classifier to make this work.
If you have any other questions about the ARM implementation please let me know.
Yes, I downloaded the source code for M2.1 and ran the buildnativeoperations.sh script in libnd4j. But I don’t have a GPU on my server. After it finished, I ran mvn clean install -DskipTests=true in the nd4j directory, as the README says, and got the error above.
@wwh131 sorry, not sure why a GPU comes into this. I guess you’re assuming it’s for the Jetson Nano? The linux-arm64 bindings I linked are for the CPU backend; they have nothing to do with CUDA. The CUDA backend also has linux-arm64 bindings for the Jetson architecture, but that’s a different artifact id. I think you’re confusing the two?
@wwh131 could you clarify what you need then? We do have Jetson Nano bindings for CUDA via the nd4j-cuda backend, and CPU bindings via nd4j-native. Both exist, and you don’t need to build either.
You should be able to either build on an ARM box or cross compile.
The cross compile flow is here:
Since you’re not being very clear: if you are looking to build for the Jetson Nano, that is here:
Again, I’m going to stress that you shouldn’t need to do this.
Haha, I don’t understand cross compilation. I don’t know which dl4j to download from the website you provided: Central Repository: org/deeplearning4j
@wwh131 I’m a bit confused… are you not familiar with Maven? That’s all over the website. I just showed you the available platforms on Maven Central. If you want to try to manually download all the jars, I am not going to support that. Please use a proper build system. If you do know Maven, then do what I suggested.
Okay, thank you very much for your reply
@wwh131 could you please tell me what you’re missing? Your reply makes no sense.
You didn’t answer my question at all. I’m trying to help you. If you don’t know Maven, it will be hard to help you. Manually downloading jars will be a bigger waste of your time than answering a few of my questions so I can figure out where your gaps are.
Yes, I really don’t know DL4J or Maven. I can only read the README and try to compile. If there are errors, I look up information about them, but of course there is a lot I can’t understand, including the code and the error messages.
This could help you figure out how to get going with Maven. But note that you can’t just copy & paste things from there; in particular, the versions are old, as the article is from 2018.
Also, please tell us what hardware you are going to be using, so we can help you more effectively select the correct dependencies.
@wwh131 thanks for confirming. Please follow Paul’s guide; it’s a great resource to start with. Then follow Quick Start - Deeplearning4j in addition to Paul’s guide.
From there, you can start from a sample project:
After you’re comfortable there, you’ll need to do what I mentioned and understand how to run math code for your particular platform. Please understand what a Maven classifier is:
Afterwards understand what a backend is:
After that your backend declaration will be:
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native</artifactId>
    <version>1.0.0-M2.1</version>
</dependency>
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native</artifactId>
    <version>1.0.0-M2.1</version>
    <classifier>linux-arm64</classifier>
</dependency>
That will allow you to run CPU code for your platform. Avoid building from source; you don’t need to do it.
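As a quick sanity check that the right backend was picked up, something like this should print the native (CPU) backend and run a small matrix multiply. This is a minimal sketch, assuming only the two dependencies above are on the classpath; the class name is made up:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class BackendCheck {
    public static void main(String[] args) {
        // Shows which nd4j backend was loaded; on linux-arm64 with the
        // dependencies above this should be the native/CPU backend.
        System.out.println("Backend: " + Nd4j.getBackend());

        // A tiny matrix multiply to confirm the native code actually runs.
        INDArray ones = Nd4j.ones(2, 2);
        System.out.println(ones.mmul(ones));
    }
}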
Yes, I have installed the IDE on the ARM64 server, but I cannot use it. I opened the deeplearning4j project, but the IDE closed it immediately. Is that because I haven’t run mvn install? I saw in the Quick Start that mvn install is needed before opening the project. I am currently running mvn install on the examples, but it is very slow and I am not sure whether it will end with an error.