Run DL4J on an ARM CPU server

How can I run DL4J on an ARM CPU server? Do I have to compile the code on the server itself?

For now, I am running DL4J in the arm64v8/openjdk:11-jdk Docker image on the ARM CPU server, and I get this error:
java.lang.UnsatisfiedLinkError: no nd4jcpu in java.library.path: [/usr/java/packages/lib, /lib, /usr/lib]
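
Even a minimal program fails at the first ND4J call, which is where the native backend gets loaded. For illustration (the class name is just a placeholder):

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ArmSmokeTest {
    public static void main(String[] args) {
        // The first ND4J call triggers loading of the nd4j-native backend;
        // this is where the UnsatisfiedLinkError above is thrown on the ARM server.
        INDArray x = Nd4j.create(2, 2);
        System.out.println(x);
    }
}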

Part of the pom.xml:

<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<mkl.version>2019.6</mkl.version>
<nd4j.backend>nd4j-native</nd4j.backend>
<javacpp-presets.version>1.5.2</javacpp-presets.version>
<dl4j.version>1.0.0-beta6</dl4j.version>

<build>
	<resources>
		<resource>
			<directory>src/main/java</directory>
		</resource>
	</resources>
	<plugins>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-compiler-plugin</artifactId>
			<version>3.5.1</version>
			<configuration>
				<source>1.8</source>
				<target>1.8</target>
			</configuration>
		</plugin>
	</plugins>
</build>

<dependencies>
	<!-- nd4j start -->
	<dependency>
		<groupId>org.bytedeco</groupId>
		<artifactId>javacpp</artifactId>
		<version>${javacpp-presets.version}</version>
	</dependency>
	<dependency>
		<groupId>org.nd4j</groupId>
		<artifactId>nd4j-api</artifactId>
		<version>${dl4j.version}</version>
	</dependency>
	<dependency>
		<groupId>org.nd4j</groupId>
		<artifactId>nd4j-jackson</artifactId>
		<version>${dl4j.version}</version>
	</dependency>
	<dependency>
		<groupId>org.nd4j</groupId>
		<artifactId>jackson</artifactId>
		<version>${dl4j.version}</version>
	</dependency>
	<dependency>
		<groupId>org.datavec</groupId>
		<artifactId>datavec-data-image</artifactId>
		<version>${dl4j.version}</version>
	</dependency>
	<dependency>
		<groupId>org.nd4j</groupId>
		<artifactId>nd4j-common</artifactId>
		<version>${dl4j.version}</version>
	</dependency>
	<dependency>
		<groupId>org.nd4j</groupId>
		<artifactId>nd4j-native-platform</artifactId>
		<version>${dl4j.version}</version>
	</dependency>
	<!-- nd4j end -->
	<!-- dl4j start -->
	<dependency>
		<groupId>org.deeplearning4j</groupId>
		<artifactId>deeplearning4j-core</artifactId>
		<version>${dl4j.version}</version>
	</dependency>
	<dependency>
		<groupId>org.deeplearning4j</groupId>
		<artifactId>deeplearning4j-nn</artifactId>
		<version>${dl4j.version}</version>
	</dependency>
	<dependency>
		<groupId>org.deeplearning4j</groupId>
		<artifactId>deeplearning4j-ui</artifactId>
		<version>${dl4j.version}</version>
	</dependency>
	<dependency>
		<groupId>org.deeplearning4j</groupId>
		<artifactId>deeplearning4j-ui-model</artifactId>
		<version>${dl4j.version}</version>
	</dependency>
	<dependency>
		<groupId>org.deeplearning4j</groupId>
		<artifactId>deeplearning4j-zoo</artifactId>
		<version>${dl4j.version}</version>
	</dependency>
	<!-- https://mvnrepository.com/artifact/org.freemarker/freemarker -->
	<dependency>
		<groupId>org.freemarker</groupId>
		<artifactId>freemarker</artifactId>
		<version>2.3.29</version>
	</dependency>
	<dependency>
		<groupId>org.nd4j</groupId>
		<artifactId>nd4j-native</artifactId>
		<version>${dl4j.version}</version>
		<classifier>windows-x86_64-avx2</classifier>
	</dependency>
	<!-- dl4j end -->
</dependencies>

Either specify nd4j-native-platform as a dependency, or use an ARM classifier on nd4j-native instead of the Windows one (see the sketch below).

It looks like you actually do have the platform artifact in your pom. But maybe your ARM server is different from what we support directly.
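
For reference, the classifier variant would amount to replacing the windows-x86_64-avx2 entry with something like the following. Note that linux-arm64 is an assumed classifier name here, and it only helps once an aarch64 build of nd4j-native is actually published for your version:

<dependency>
	<groupId>org.nd4j</groupId>
	<artifactId>nd4j-native</artifactId>
	<version>${dl4j.version}</version>
	<!-- assumed classifier; requires an aarch64 build of nd4j-native to exist -->
	<classifier>linux-arm64</classifier>
</dependency>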

Can you share some more information with us? In particular, the output of the following commands:
uname -a
cat /proc/cpuinfo

It seems that there are no ARM jars on https://oss.sonatype.org for beta6 or the snapshots.
Where can I get them?

The server information:
uname -r
4.14.0-115.el7a.0.1.aarch64
cat /proc/cpuinfo
processor : 0
BogoMIPS : 200.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop
CPU implementer : 0x48
CPU architecture: 8
CPU variant : 0x1
CPU part : 0xd01
CPU revision : 0

processor : 1

Unfortunately we don't have any released Linux aarch64 artifacts at the moment. And support for compiling it yourself isn't complete yet either; see "Jetson Nano resolves to non existing architecture (linux-aarch64)", issue #8726 on the eclipse/deeplearning4j GitHub repository.
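
That issue is about how JavaCPP maps the machine to a platform string when picking native artifacts. You can check what it resolves to on your server with a small diagnostic like this (class name is just a placeholder):

import org.bytedeco.javacpp.Loader;

public class PlatformCheck {
    public static void main(String[] args) {
        // Prints the platform string JavaCPP uses to select native binaries,
        // e.g. "linux-x86_64" on a desktop; on the ARM server it shows how
        // the architecture is being detected.
        System.out.println(Loader.getPlatform());
    }
}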

I guess you are using the ARM Servers from Huawei Cloud?

So it cannot be fixed by compiling it myself?

It's a Huawei server, but not from Huawei Cloud. Here is the system information from the server:
Handle 0x0001, DMI type 1, 27 bytes
System Information
Manufacturer: Huawei
Product Name: TaiShan 2280 V2

So it cannot be fixed by compiling it myself?

As of this moment this is true. I expect that we will either have an artifact for it in the next release or at least allow it to be compiled for that platform soon.

If you want to try compiling it yourself, track the linked issue. As soon as it is closed, you should be able to build it for that platform yourself.
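
Once an aarch64 artifact (released or self-compiled) is on the classpath, a quick sanity check is to print which backend ND4J actually picked up; a minimal sketch, with an illustrative class name:

import org.nd4j.linalg.factory.Nd4j;

public class BackendCheck {
    public static void main(String[] args) {
        // Forces backend initialization and reports which implementation was loaded.
        System.out.println(Nd4j.getBackend().getClass().getName());
        // A trivial array operation confirms the native library works end to end.
        System.out.println(Nd4j.create(2, 2));
    }
}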