Customizing deployment with Model Analyzer in NVIDIA Triton Server
I am following the NVIDIA Triton Server tutorial and am currently on the third step, getting to know deployments of ML models. This step involves installing the Model Analyzer module, and there is an associated command for it: