Select the PyTorch Docker image at the Environment step of the job creation wizard. Only the latest version of PyTorch is supported.
Or create the job from the command line by passing the image to `just create job`:

```shell
just create job single --project <project> --datasets <dataset> --docker-image pytorch-0.4.0-gpu-py36-cuda9.2
```
It is possible to take advantage of TensorBoard even in PyTorch! The tensorboardX package provides a TensorBoard API for PyTorch (or any other framework).
When using pip, include `tensorboardX` in your `requirements.txt` file.
For Anaconda, add this to the `environment.yml` file:
```yaml
name: clusterone
dependencies:
  - tensorboardX
```
Then use the following snippet in your code:
```python
... # your requirements
from tensorboardX import SummaryWriter
from clusterone import get_logs_path
import os

# Your local path to outputs; TensorBoard summaries will be saved here
# when running locally
ROOT_PATH_TO_LOCAL_LOGS = os.path.expanduser("~/Documents/pytorch-projects/examples/logs")

... # model definition

if __name__ == "__main__":
    writer = SummaryWriter(log_dir=get_logs_path(ROOT_PATH_TO_LOCAL_LOGS))
    for batch_index in range(nb_batches):
        ... # training operation
        loss = ...  # compute your loss
        # Only save loss to TensorBoard every n batches to not slow down training
        if batch_index % 100 == 0:
            writer.add_scalar('loss', loss, batch_index)
```
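The snippet above relies on `get_logs_path` to pick the right log directory whether the code runs locally or on the platform. As a rough sketch of that behavior (an illustration, not Clusterone's actual implementation; the `CLUSTERONE_LOGS_DIR` environment variable here is a made-up stand-in for however the platform exposes its managed log directory):

```python
import os

def get_logs_path_sketch(local_root):
    """Return the platform log directory when one is advertised,
    otherwise fall back to the user-supplied local path."""
    # Hypothetical: on the platform, an environment variable points
    # at the managed log directory that TensorBoard reads from.
    platform_logs = os.environ.get("CLUSTERONE_LOGS_DIR")
    if platform_logs:
        return platform_logs
    # Locally, make sure the directory exists so SummaryWriter can write to it.
    os.makedirs(local_root, exist_ok=True)
    return local_root
```

The point is that the same training script works unchanged in both environments: only the resolved log path differs.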
See the full tensorboard-pytorch API documentation for more information.