encoding.parallel

  • PyTorch's built-in DataParallel does not support multi-GPU loss calculation, which leaves GPU memory usage badly imbalanced because the loss is computed on a single device. We address this issue here by applying DataParallel to both the model and the criterion.

Note

Deprecated: please use torch.nn.parallel.DistributedDataParallel with encoding.nn.DistSyncBatchNorm for the best performance.
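A minimal sketch of the recommended replacement, assuming a single-node launch via torchrun; the network layers are placeholders, and it assumes DistSyncBatchNorm takes num_features like nn.BatchNorm2d. The point is simply to use encoding.nn.DistSyncBatchNorm in place of nn.BatchNorm2d and wrap the model in DistributedDataParallel:

>>> import os
>>> import torch
>>> import torch.nn as nn
>>> import encoding
>>> torch.distributed.init_process_group(backend='nccl')
>>> local_rank = int(os.environ['LOCAL_RANK'])  # set by torchrun
>>> torch.cuda.set_device(local_rank)
>>> # Build the network with DistSyncBatchNorm in place of nn.BatchNorm2d
>>> model = nn.Sequential(
...     nn.Conv2d(3, 64, 3, padding=1),
...     encoding.nn.DistSyncBatchNorm(64),
...     nn.ReLU(inplace=True)).cuda()
>>> model = torch.nn.parallel.DistributedDataParallel(
...     model, device_ids=[local_rank], output_device=local_rank)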

Encoding Data Parallel

DataParallelModel

class encoding.parallel.DataParallelModel(module, device_ids=None, output_device=None, dim=0)[source]

Implements data parallelism at the module level.

This container parallelizes the application of the given module by splitting the input across the specified devices, chunking in the batch dimension. In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backward pass, gradients from each replica are summed into the original module. Note that the outputs are not gathered; please use the compatible encoding.parallel.DataParallelCriterion to compute the loss.

The batch size should be larger than the number of GPUs used, and it should be an integer multiple of the number of GPUs so that each chunk is the same size and each GPU processes the same number of samples.

Parameters
  • module – module to be parallelized

  • device_ids – CUDA devices (default: all devices)

Reference:

Hang Zhang, Kristin Dana, Jianping Shi, Zhongyue Zhang, Xiaogang Wang, Ambrish Tyagi, Amit Agrawal. “Context Encoding for Semantic Segmentation.” The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

Example:

>>> net = encoding.parallel.DataParallelModel(model, device_ids=[0, 1, 2])
>>> y = net(x)
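Because the outputs are not gathered, y here is a list of per-GPU outputs; pass it directly to encoding.parallel.DataParallelCriterion rather than gathering it yourself.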

DataParallelCriterion

class encoding.parallel.DataParallelCriterion(module, device_ids=None, output_device=None, dim=0)[source]

Calculates the loss on multiple GPUs, which balances memory usage for tasks such as semantic segmentation.

The targets are split across the specified devices by chunking in the batch dimension. Please use this together with encoding.parallel.DataParallelModel.

Reference:

Hang Zhang, Kristin Dana, Jianping Shi, Zhongyue Zhang, Xiaogang Wang, Ambrish Tyagi, Amit Agrawal. “Context Encoding for Semantic Segmentation.” The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

Example:

>>> net = encoding.parallel.DataParallelModel(model, device_ids=[0, 1, 2])
>>> criterion = encoding.parallel.DataParallelCriterion(criterion, device_ids=[0, 1, 2])
>>> y = net(x)
>>> loss = criterion(y, target)
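A minimal training-step sketch under the same assumptions: model is an existing segmentation network, loader is a hypothetical DataLoader, and nn.CrossEntropyLoss stands in for the task loss.

>>> import torch
>>> import torch.nn as nn
>>> import encoding
>>> net = encoding.parallel.DataParallelModel(model, device_ids=[0, 1, 2])
>>> criterion = encoding.parallel.DataParallelCriterion(
...     nn.CrossEntropyLoss(), device_ids=[0, 1, 2])
>>> optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
>>> for x, target in loader:
...     optimizer.zero_grad()
...     outputs = net(x)                   # list of per-GPU outputs (not gathered)
...     loss = criterion(outputs, target)  # loss computed per GPU, then reduced
...     loss.backward()
...     optimizer.step()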
forward(inputs, *targets, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

allreduce

encoding.parallel.allreduce(*inputs)[source]

Cross-GPU all-reduce autograd operation, used to calculate the mean and variance in SyncBN.
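A schematic sketch of how such an all-reduce can be used to form global batch statistics. This is not the library's internal code; it assumes allreduce sums corresponding per-device tensors and returns the reduced values, and x and N are placeholders for an NCHW input and the global per-channel element count.

>>> # Per-GPU partial sums over the batch and spatial dimensions
>>> xsum = x.sum(dim=(0, 2, 3))
>>> xsqsum = (x * x).sum(dim=(0, 2, 3))
>>> # Assumed semantics: sum each tensor across all participating GPUs
>>> xsum, xsqsum = encoding.parallel.allreduce(xsum, xsqsum)
>>> mean = xsum / N
>>> var = xsqsum / N - mean * mean  # Var[x] = E[x^2] - E[x]^2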