Following https://github.com/vllm-project/vllm/tree/main/benchmarks:
First, start the server, for example:
CUDA_VISIBLE_DEVICES=0 vllm serve /models/Qwen3-32B --host 0.0.0.0 --port 8123 --gpu_memory_utilization 0.9 --max_model_len 4096 --task generate --enable-lora --lora-modules lora_name=/models/adapter/Qwen3-32B/ --quantization bitsandbytes --dtype auto --kv-cache-dtype fp8 --enforce-eager
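Before benchmarking, it can help to confirm the server is actually up; a quick sanity check against the OpenAI-compatible endpoint (port 8123 as in the command above) is:
curl http://localhost:8123/v1/models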
Then run the benchmark, for example:
vllm bench serve --port 8123 --model /models/Qwen3-32B --dataset-name sharegpt --dataset-path eval.json --num-prompts 100
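By default the benchmark fires all requests as fast as possible; to approximate a steady load, a fixed request rate can be set. A sketch, assuming the --request-rate flag of vllm bench serve (requests per second):
vllm bench serve --port 8123 --model /models/Qwen3-32B --dataset-name sharegpt --dataset-path eval.json --num-prompts 100 --request-rate 4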
The eval.json used above should be prepared in the ShareGPT format (https://llamafactory.readthedocs.io/zh-cn/latest/getting_started/data_preparation.html#sharegpt).
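A minimal sketch of what eval.json might look like in that format (field names follow the ShareGPT convention; the example text is only illustrative):
[
  {
    "conversations": [
      {"from": "human", "value": "Briefly explain what vLLM is."},
      {"from": "gpt", "value": "vLLM is a high-throughput inference and serving engine for large language models."}
    ]
  }
]
Each entry should contain at least two turns: the ShareGPT loader in the vLLM benchmark takes the first message as the prompt and uses the second to determine the expected output length.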