I have been using your docker container for 6 months or so now, thanks for putting it together :)
The Jupyter jobs look neat, but I assume they are billed for wall-clock time? It would be cool if they were somehow only charged for compute time, though I understand that would be difficult.
Are these instances guaranteed to be in a given region, in case I wanted to route more complex debug output / intermediate files to S3?
@Jupyter NB, we charge continuously right now. Charging for compute time only is possible, but it's an interesting engineering challenge (sandboxing, scheduling, etc.). We'll take this as a feature request! :) We're all in the Oregon data center (us-west-2) for now.
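Since everything runs in us-west-2, creating your S3 bucket in the same region keeps transfers fast and avoids cross-region data charges. A minimal sketch with the AWS CLI (the bucket name and file paths here are placeholders, not anything provided by the service):

```shell
# Create a bucket in the same region as the instances (us-west-2).
# "my-debug-output" is a hypothetical bucket name.
aws s3 mb s3://my-debug-output --region us-west-2

# Copy intermediate files / debug output into it from a job.
aws s3 cp intermediate.log s3://my-debug-output/run-01/intermediate.log --region us-west-2
```

Assumes AWS credentials are already configured (e.g. via `aws configure`) on the instance.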
Thanks - glad you found it useful! The attention and feedback I got from building dl-docker have been terrific. Definitely one of the reasons we started working on this seriously :)
You mean bash on Windows? The CPU version of dl-docker will work on Windows! Unfortunately, the GPU version does not: it requires GPU passthrough, which is currently not supported. See https://github.com/NVIDIA/nvidia-docker/issues/197
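For reference, the CPU-only image can be run on Windows with something like the following. This is a sketch based on the dl-docker README; the shared-folder path is illustrative, so adjust it to your own machine:

```shell
# Run the CPU-only dl-docker image; this works on Windows because no GPU
# passthrough is needed. Port 8888 exposes Jupyter, 6006 exposes TensorBoard.
# The host volume path is a placeholder - point it at a real local folder.
docker run -it -p 8888:8888 -p 6006:6006 \
    -v /c/Users/you/sharedfolder:/root/sharedfolder \
    floydhub/dl-docker:cpu bash
```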