encoding.functions

Encoding Autograd Functions

batchnormtrain

encoding.functions.batchnormtrain(input, mean, std, gamma, beta)[source]

Applies Batch Normalization over a 2d or 3d input that is seen as a mini-batch.

\[y = \frac{x - \mu[x]}{\sqrt{\mathrm{var}[x] + \epsilon}} * \gamma + \beta\]
Shape:
  • Input: \((N, C)\) or \((N, C, L)\)
  • Output: \((N, C)\) or \((N, C, L)\) (same shape as input)
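
For a quick shape check, the formula can be reproduced in pure PyTorch; this is a sketch of the math, not the library's CUDA kernel, and it assumes the std argument already folds in the \(\epsilon\) term, i.e. \(std = \sqrt{\mathrm{var}[x] + \epsilon}\):

>>> # reference computation for a (N, C) input; assumes std = sqrt(var + eps)
>>> import torch
>>> x = torch.randn(4, 3)
>>> mean, std = x.mean(0), x.var(0, unbiased=False).add(1e-5).sqrt()
>>> gamma, beta = torch.ones(3), torch.zeros(3)
>>> y = (x - mean) / std * gamma + beta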

aggregate

encoding.functions.aggregate(A, X, C)[source]

Aggregate operation: aggregates the residuals of the inputs (\(X\)) with respect to the codewords (\(C\)), weighted by the assignment weights (\(A\)).

\[e_{k} = \sum_{i=1}^{N} a_{ik} (x_i - c_k)\]
Shape:
  • Input: \(A\in\mathcal{R}^{B\times N\times K}\), \(X\in\mathcal{R}^{B\times N\times D}\), \(C\in\mathcal{R}^{K\times D}\) (where \(B\) is the batch size, \(N\) is the total number of features, \(K\) is the number of codewords, and \(D\) is the feature dimension)
  • Output: \(E\in\mathcal{R}^{B\times K\times D}\)

Examples

>>> import torch
>>> import encoding
>>> B,N,K,D = 2,3,4,5
>>> A = torch.empty(B,N,K, dtype=torch.double, device='cuda').uniform_(-0.5,0.5).requires_grad_()
>>> X = torch.empty(B,N,D, dtype=torch.double, device='cuda').uniform_(-0.5,0.5).requires_grad_()
>>> C = torch.empty(K,D, dtype=torch.double, device='cuda').uniform_(-0.5,0.5).requires_grad_()
>>> E = encoding.functions.aggregate(A, X, C)
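
For reference, the same result can be obtained from the formula above by plain broadcasting; this is a sketch for checking shapes and values, not the library's CUDA implementation:

>>> # residuals (x_i - c_k) for every (i, k) pair, weighted by a_ik and summed over i
>>> R = X.unsqueeze(2) - C.view(1, 1, K, D)   # (B, N, K, D)
>>> E_ref = (A.unsqueeze(3) * R).sum(1)       # (B, K, D)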

scaled_l2

encoding.functions.scaled_l2(X, C, S)[source]

Computes the scaled L2 distance between the input features (\(X\)) and the codewords (\(C\)), with per-codeword scaling factors (\(S\)).

\[sl_{ik} = s_k \|x_i-c_k\|^2\]
Shape:
  • Input: \(X\in\mathcal{R}^{B\times N\times D}\), \(C\in\mathcal{R}^{K\times D}\), \(S\in \mathcal{R}^K\) (where \(B\) is the batch size, \(N\) is the total number of features, \(K\) is the number of codewords, and \(D\) is the feature dimension)
  • Output: \(SL\in\mathcal{R}^{B\times N\times K}\)
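
Examples

A usage sketch mirroring the aggregate example above; the tensor setup is illustrative, and the last line recomputes the formula by broadcasting rather than calling the library's CUDA kernel:

>>> import torch
>>> import encoding
>>> B,N,K,D = 2,3,4,5
>>> X = torch.empty(B,N,D, dtype=torch.double, device='cuda').uniform_(-0.5,0.5).requires_grad_()
>>> C = torch.empty(K,D, dtype=torch.double, device='cuda').uniform_(-0.5,0.5).requires_grad_()
>>> S = torch.empty(K, dtype=torch.double, device='cuda').uniform_(-0.5,0.5).requires_grad_()
>>> SL = encoding.functions.scaled_l2(X, C, S)
>>> # reference: s_k * ||x_i - c_k||^2 via broadcasting, shape (B, N, K)
>>> SL_ref = S.view(1, 1, K) * (X.unsqueeze(2) - C.view(1, 1, K, D)).pow(2).sum(3)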

sum_square

encoding.functions.sum_square(input)[source]

Calculates the sum of elements and the sum of squares of the input, as needed for Batch Normalization statistics.
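
These two statistics are enough to recover the mean and variance in a single pass, since \(\mathrm{var}[x] = E[x^2] - E[x]^2\). A minimal pure-PyTorch sketch, assuming the reduction runs over the batch dimension of a \((N, C)\) input (an assumption based on the Batch Normalization shapes above):

>>> # per-channel sums over the batch dimension (assumed (N, C) layout)
>>> import torch
>>> x = torch.randn(4, 3)
>>> xsum, xsqusum = x.sum(0), x.pow(2).sum(0)  # each of shape (C,)
>>> mean = xsum / x.size(0)
>>> var = xsqusum / x.size(0) - mean.pow(2)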