Periodic boundary conditions in convolutional layers

Hi,
I need to implement a ComputationGraph with convolutional layers (actually, a residual neural network). The domain of the problem is a 2D array that is conceptually periodic; in other words, it is a square with periodic boundary conditions (PBC). I’d therefore need the convolution to reflect this periodicity, spanning the whole array and “wrapping around”, producing an output of the same height and width. I haven’t been able to find any built-in way to achieve this in DL4J; am I wrong? I know, for example, that Python’s scipy.signal.convolve2d has an optional boundary='wrap' argument. I could pad the input array with a periodic frame as wide as my kernels (a sketch of this idea is below), but that would only work for the first layer, whereas I have many of them. Is there any effective way to implement PBC? Thank you in advance.
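For concreteness, the periodic frame I have in mind could be built with ND4J slicing and concatenation, something like this (wrapPad2d is a hypothetical helper of mine, not a library method; Nd4j.concat and NDArrayIndex are from the ND4J API):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.indexing.NDArrayIndex;

public class WrapPad {

    // Hypothetical helper: pads a 2D array with p periodic rows/columns on each
    // side (p <= height, width), so that a subsequent "valid" convolution with a
    // (2p+1)x(2p+1) kernel behaves circularly on the original array.
    static INDArray wrapPad2d(INDArray x, int p) {
        long h = x.size(0), w = x.size(1);
        // Wrap rows: the last p rows go on top, the first p rows go on the bottom.
        INDArray top    = x.get(NDArrayIndex.interval(h - p, h), NDArrayIndex.all());
        INDArray bottom = x.get(NDArrayIndex.interval(0, p), NDArrayIndex.all());
        INDArray rows   = Nd4j.concat(0, top, x, bottom);
        // Wrap columns of the row-padded result in the same way.
        INDArray left  = rows.get(NDArrayIndex.all(), NDArrayIndex.interval(w - p, w));
        INDArray right = rows.get(NDArrayIndex.all(), NDArrayIndex.interval(0, p));
        return Nd4j.concat(1, left, rows, right);
    }
}
```

But again, this only fixes the first convolution; the deeper layers would still see hard edges.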

@EquanimeAugello do you have a reference paper you’re trying to implement? We have conv layers but it sounds like you want something custom.

Hello,
thank you for your reply.
I’m not following any particular paper for this feature, but I did find some instances of similar PBC requirements online that are perhaps clearer than my description (here, here).
What I’d like to implement (explained a little better) is a network whose convolutional layers work as follows, or at least a workaround with a similar effect. Say I have a 3x3 filter and a 5x5 input array. The array has periodic boundary conditions in the sense that the top row is conceptually adjacent to the bottom row, and the rightmost column to the leftmost (“pac-man effect”, i.e. a toroidal topology of the array).

Now assume the kernel starts to slide (stride=(1,1)) from the position in which its center entry (kernel[1][1]) is “on top” of array[0][0]. The kernel entries that “exit” the edges of the array should be multiplied with the wrapped-around entries: kernel[0][1] with array[4][0], kernel[0][0] with array[4][4], kernel[1][0] with array[0][4], and so on, as if the array repeated periodically in 2D space. The output of such a convolution would have the same height and width as the input (a plain-Java sketch of these semantics is below).

It would be nice, for instance, if I could add at any layer a custom padding that depends, through these periodicity conditions, on the activations of that same layer. That way, throughout the whole series of convolutions, the notion of proximity between information at opposite edges of the array would be preserved. It would be even nicer if no unnecessary memory were used and the convolution simply “wrapped around” with no need for padding.
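To make the wrap rule concrete: it is just modular arithmetic over the input indices. A minimal plain-Java sketch of the semantics I mean (single channel, stride 1; a reference implementation, not an actual DL4J layer):

```java
public class CircularConv {

    // Circular ("wrap") 2D convolution, stride 1, single channel:
    // the output has the same height and width as the input.
    static double[][] circularConv2d(double[][] in, double[][] k) {
        int h = in.length, w = in[0].length;
        int kh = k.length, kw = k[0].length;
        int ch = kh / 2, cw = kw / 2; // kernel center offsets (odd kernel sizes)
        double[][] out = new double[h][w];
        for (int i = 0; i < h; i++) {
            for (int j = 0; j < w; j++) {
                double sum = 0.0;
                for (int ki = 0; ki < kh; ki++) {
                    for (int kj = 0; kj < kw; kj++) {
                        // Out-of-range indices wrap back into the array
                        // ("pac-man effect"): e.g. kernel[0][1] centered on
                        // array[0][0] reads array[4][0] for a 5x5 input.
                        int ii = Math.floorMod(i + ki - ch, h);
                        int jj = Math.floorMod(j + kj - cw, w);
                        sum += k[ki][kj] * in[ii][jj];
                    }
                }
                out[i][j] = sum;
            }
        }
        return out;
    }
}
```

(As in most deep learning frameworks, this is written as cross-correlation rather than a flipped-kernel convolution.)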

@EquanimeAugello could you clarify whether you need a special convolution layer or something else? I feel like you’re over-specifying a particular form of convolution you expect to exist. We have an im2col-based implementation of the convolution layer (that’s what most frameworks do). I’m familiar with the signal-processing way of doing it as well. If you just need a resnet, I can link to that. If you are looking for a very specific form of convolution, we don’t have that, unfortunately. Probably 6/7 years ago the first versions of convolution in the framework were implemented like that, but not anymore.

Thank you very much for your reply.
My question was about either a special convolution option that directly reflects the PBC, or (just as welcome) a different way to implement the network with the existing options offered by the library, achieving the same goal. With that in mind, I realized that feeding the convolution an input that is a 9-fold repetition of the original array:

array array array
array array array
array array array

would give the network access to all the “periodic-edge” information, so to speak, as long as I add some padding at each layer to always preserve the activations’ height and width (see the sketch below).
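If I’m not mistaken, ND4J’s tile op could build the repeated input; roughly like this (assuming NCHW activations; the variable names are mine):

```java
import java.util.Arrays;

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class TileInput {
    public static void main(String[] args) {
        // Activations assumed in NCHW layout: [minibatch, channels, height, width]
        INDArray input = Nd4j.rand(new int[]{1, 1, 5, 5});   // the original 5x5 array
        INDArray tiled = Nd4j.tile(input, 1, 1, 3, 3);       // 9-fold repetition along H and W
        System.out.println(Arrays.toString(tiled.shape()));  // -> [1, 1, 15, 15]
    }
}
```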
I have some questions about how to actually implement this, but I think I should open a new thread for that, to keep the forum tidy.