SmartFace GPU acceleration in Docker
Some services can benefit from GPU acceleration. It can be enabled in the docker compose file, but some prerequisites also need to be met on the host machine.
To use GPU acceleration, you will need the following on the docker host machine:
- Nvidia GPU compatible with CUDA 11.1
- Nvidia driver version >= 450.80.02
- Nvidia container toolkit https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker
To use the GPU for HW decoding and face detection, uncomment runtime: nvidia in docker-compose.yml for the camera services sf-cam-*. When using the NVIDIA docker runtime, SmartFace camera processes need GStreamer pipelines as the camera source.
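As a minimal sketch, a camera service definition with the NVIDIA runtime enabled could look like this (the service name and image tag are illustrative, not taken from the actual docker-compose.yml):

```yaml
services:
  sf-cam-1:                 # hypothetical camera service name matching sf-cam-*
    image: sf-cam:latest    # illustrative image reference
    runtime: nvidia         # run this container on the NVIDIA docker runtime
```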
GPU support for neural network processing
To enable GPU acceleration in the camera service, remote detector service, extractor service, pedestrian detector service, or liveness services, uncomment the corresponding environment variable in docker-compose.yml.
To specify which neural network runtime will be used, uncomment the environment variable
Gpu__GpuNeuralRuntime and set it to one of the supported values. Use the Tensor value only if your GPU supports the TensorRT runtime.
When using the Tensor runtime, you can uncomment the mapping
"/var/tmp/innovatrics/tensor-rt:/var/tmp/innovatrics/tensor-rt" to retain TensorRT cache files on the host when the container is recreated. This can be helpful because generating the cache files is a lengthy operation which must be performed before the first run of the neural network.
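Putting the pieces together, a service using the TensorRT runtime with the cache volume mapping could be sketched as follows (the service name and image tag are illustrative; Gpu__GpuNeuralRuntime and the tensor-rt mapping are the options described above):

```yaml
services:
  sf-extractor:                       # hypothetical extractor service name
    image: sf-extractor:latest        # illustrative image reference
    runtime: nvidia
    environment:
      # use Tensor only if the GPU supports the TensorRT runtime
      - Gpu__GpuNeuralRuntime=Tensor
    volumes:
      # retain TensorRT cache files on the host across container recreations
      - "/var/tmp/innovatrics/tensor-rt:/var/tmp/innovatrics/tensor-rt"
```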