pip install perf-analyzer


    Triton Performance Analyzer

    Triton Performance Analyzer is a CLI tool that helps you optimize the inference performance of models running on Triton Inference Server by measuring changes in performance as you experiment with different optimization strategies.


    Features

    Inference Load Modes

    • Concurrency Mode simulates load by maintaining a specific concurrency of outgoing requests to the server

    • Request Rate Mode simulates load by sending consecutive requests at a specific rate to the server

    • Custom Interval Mode simulates load by sending consecutive requests at specific intervals to the server (example invocations for all three modes follow this list)
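
    Each mode maps to a perf_analyzer command-line flag. The invocations below are a sketch: the flags (--concurrency-range, --request-rate-range, --request-intervals) are perf_analyzer options, while the values and the intervals.txt file are illustrative placeholders.

    # concurrency mode: keep 1..8 requests outstanding, stepping by 2 (illustrative range)
    perf_analyzer -m simple --concurrency-range 1:8:2

    # request rate mode: sweep from 100 to 200 requests per second in steps of 50 (illustrative)
    perf_analyzer -m simple --request-rate-range 100:200:50

    # custom interval mode: read per-request intervals (in microseconds) from a file
    # (intervals.txt is a placeholder path)
    perf_analyzer -m simple --request-intervals intervals.txt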

    Performance Measurement Modes

    • Time Windows Mode measures model performance repeatedly over a specific time interval until performance has stabilized

    • Count Windows Mode measures model performance repeatedly over a specific number of requests until performance has stabilized (see the sketch after this list)
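
    Both modes are selected with the --measurement-mode flag. A sketch, with illustrative window sizes rather than tuned recommendations:

    # time windows mode (the default): repeat 5000 ms measurement windows
    # until results stabilize (window size here is illustrative)
    perf_analyzer -m simple --measurement-mode time_windows --measurement-interval 5000

    # count windows mode: repeat windows of 50 requests until stable (count is illustrative)
    perf_analyzer -m simple --measurement-mode count_windows --measurement-request-count 50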


    Quick Start

    The steps below will guide you through getting started with Perf Analyzer.

    Step 1: Start Triton Container

    export RELEASE=<yy.mm> # e.g. `export RELEASE=23.02` for the February 2023 release
    
    docker pull nvcr.io/nvidia/tritonserver:${RELEASE}-py3
    
    docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:${RELEASE}-py3
    

    Step 2: Download the `simple` Model

    # inside triton container
    git clone --depth 1 https://github.com/triton-inference-server/server
    
    mkdir model_repository ; cp -r server/docs/examples/model_repository/simple model_repository
    

    Step 3: Start Triton Server

    # inside triton container
    tritonserver --model-repository $(pwd)/model_repository &> server.log &
    
    # confirm server is ready, look for 'HTTP/1.1 200 OK'
    curl -v localhost:8000/v2/health/ready
    
    # detach (CTRL-p CTRL-q)
    

    Step 4: Start Triton SDK Container

    docker pull nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk
    
    docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk
    

    Step 5: Run Perf Analyzer

    # inside sdk container
    perf_analyzer -m simple
    

    See the full quick start guide for additional tips on how to analyze output.
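
    Once the basic run succeeds, options can be combined in a single invocation. The sketch below uses illustrative values throughout (endpoint, batch size, sweep range, percentile): it targets Triton's default gRPC endpoint, sweeps request concurrency, and stabilizes on 95th-percentile latency instead of the average.

    # inside sdk container; all values below are illustrative
    perf_analyzer -m simple -i grpc -u localhost:8001 -b 2 \
        --concurrency-range 1:4 --percentile=95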


    Contributing

    Contributions to Triton Perf Analyzer are more than welcome. To contribute, please review the contribution guidelines, then fork and create a pull request.


    Reporting problems, asking questions

    We appreciate any feedback, questions, or bug reports regarding this project. When help with code is needed, follow the process outlined in the Stack Overflow MCVE document (https://stackoverflow.com/help/mcve). Ensure posted examples are:

    • minimal - use as little code as possible that still produces the same problem

    • complete - provide all parts needed to reproduce the problem. Check whether you can strip external dependencies and still show the problem. The less time we spend reproducing a problem, the more time we have to fix it

    • verifiable - test the code you're about to provide to make sure it reproduces the problem. Remove any other issues that are not related to your request/question.