1. Issue or feature description
When I submit K8s jobs, I find that even though I don't request any GPU resources in the YAML spec, all of the host's GPUs are still visible inside the pod container as long as the image is built with CUDA support. Is this because nvidia-docker2, when set as the default runtime, adds a hook at container startup that makes all GPU devices visible to the container, so GPU isolation is only applied when a GPU limit is explicitly specified in the pod spec? I think this is a bug, since users might accidentally get access to all GPUs when they shouldn't.
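For context, my understanding is that the nvidia runtime hook decides which devices to expose from the NVIDIA_VISIBLE_DEVICES environment variable, which the official CUDA base images set to `all`, and the device plugin only overrides it when a GPU limit is requested. A possible workaround (untested, just a sketch based on that assumption; the pod name and image tag are placeholders) is to override the variable in pods that should not see any GPU:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-gpu-pod                 # placeholder name, for illustration only
spec:
  containers:
  - name: app
    image: nvidia/cuda:9.0-base    # any CUDA-based image; no GPU limit requested
    command: ["sleep", "infinity"]
    env:
    # Override the value baked into the CUDA base image so the nvidia
    # runtime hook exposes no GPU devices to this container.
    - name: NVIDIA_VISIBLE_DEVICES
      value: "none"
```

If that works, it would at least confirm that the env var, not the resource request, is what currently drives device visibility.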
2. Steps to reproduce the issue
- Install nvidia-docker2 and set the default runtime to nvidia
- Deploy the device-plugin DaemonSet
- Create a YAML spec that requests no GPU limit but uses a CUDA image (see the example manifest after this list)
- Submit the YAML and run the nvidia-smi command inside the created container
- All GPUs on the host node are visible in the container, while none are expected
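A minimal manifest along these lines should reproduce it (pod name and image tag below are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-no-limit              # placeholder name
spec:
  containers:
  - name: cuda
    image: nvidia/cuda:9.0-base    # any CUDA-enabled image
    command: ["sleep", "infinity"]
    # Note: no resources section, so no nvidia.com/gpu limit is requested.
```

After applying it, `kubectl exec cuda-no-limit -- nvidia-smi` lists every GPU on the node.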
3. Information to attach (optional if deemed irrelevant)
I am using Docker CE 18.03 and nvidia-docker 2.0.3.