# CacheFlow
## Build from source

```bash
pip install -r requirements.txt
pip install -e .  # This may take several minutes.
```
## Test simple server

```bash
ray start --head
python simple_server.py
```

The detailed arguments for `simple_server.py` can be found by:

```bash
python simple_server.py --help
```
## FastAPI server

To start the server:

```bash
ray start --head
python -m cacheflow.http_frontend.fastapi_frontend
```

To test the server:

```bash
python -m cacheflow.http_frontend.test_cli_client
```
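Since the test client talks to the FastAPI frontend over HTTP, you can also query the server from your own code. Below is a hypothetical sketch using `requests`; the port, endpoint path, and payload fields are assumptions, so check `cacheflow/http_frontend/fastapi_frontend.py` and `test_cli_client.py` for the actual request format:

```python
# Hypothetical client for the FastAPI frontend. The URL, port, and JSON fields
# are assumptions -- see fastapi_frontend.py / test_cli_client.py for the real API.
import requests

resp = requests.post(
    "http://localhost:10002/generate",                  # assumed endpoint and port
    json={
        "prompt": "The future of cloud computing is",   # assumed field names
        "max_num_steps": 32,
    },
)
print(resp.text)
```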
## Gradio web server

Install the following additional dependencies:

```bash
pip install gradio
```

Start the server:

```bash
python -m cacheflow.http_frontend.fastapi_frontend
# At another terminal
python -m cacheflow.http_frontend.gradio_webserver
```
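The Gradio web server is a thin UI in front of the FastAPI frontend. Below is a rough sketch of the idea, not the actual implementation (which lives in `cacheflow/http_frontend/gradio_webserver.py`); the endpoint URL and payload fields are assumptions:

```python
# Sketch of a Gradio UI that forwards prompts to the running FastAPI frontend.
# The endpoint URL and JSON fields are assumptions, not the project's real API.
import gradio as gr
import requests

def generate(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:10002/generate",             # assumed endpoint and port
        json={"prompt": prompt, "max_num_steps": 32},  # assumed field names
    )
    return resp.text

gr.Interface(fn=generate, inputs="text", outputs="text").launch()
```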
## Load LLaMA weights

Since the LLaMA weights are not fully public, we cannot download them directly from Hugging Face. You therefore need to follow the steps below to load the LLaMA weights.

1. Convert the LLaMA weights to the Hugging Face format with [the conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py) from the `transformers` repository:

    ```bash
    python src/transformers/models/llama/convert_llama_weights_to_hf.py \
        --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path/llama-7b
    ```

    Please make sure that `llama` is included in the output directory name.

2. For all the commands above, specify the model with `--model /output/path/llama-7b` to load the converted model. For example:

    ```bash
    python simple_server.py --model /output/path/llama-7b
    python -m cacheflow.http_frontend.fastapi_frontend --model /output/path/llama-7b
    ```
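As an optional sanity check (not part of CacheFlow itself), you can verify that the converted directory loads with Hugging Face `transformers` before pointing `--model` at it:

```python
# Verify the converted LLaMA checkpoint is readable by Hugging Face transformers.
# Optional check, independent of CacheFlow; loads only the config and tokenizer.
from transformers import AutoConfig, AutoTokenizer

path = "/output/path/llama-7b"
config = AutoConfig.from_pretrained(path)
tokenizer = AutoTokenizer.from_pretrained(path)
print(config.model_type)  # expected: "llama"
```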