[Doc] Update V1 user guide for embedding models (#19842)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>
commit 6f68c49220 (parent 4719460644)
Author: 22quinn
Date:   2025-06-19 02:43:27 -07:00
Committed by: GitHub


@@ -70,7 +70,7 @@ For each item, our progress towards V1 support falls into one of the following s
|-----------------------------|------------------------------------------------------------------------------------|
| **Decoder-only Models** | <nobr>🚀 Optimized</nobr> |
| **Encoder-Decoder Models** | <nobr>🟠 Delayed</nobr> |
-| **Embedding Models**       | <nobr>🚧 WIP ([PR #16188](https://github.com/vllm-project/vllm/pull/16188))</nobr> |
+| **Embedding Models**       | <nobr>🟢 Functional</nobr>                                                          |
| **Mamba Models** | <nobr>🚧 WIP ([PR #19327](https://github.com/vllm-project/vllm/pull/19327))</nobr> |
| **Multimodal Models** | <nobr>🟢 Functional</nobr> |
@@ -80,11 +80,11 @@ vLLM V1 currently excludes model architectures with the `SupportsV0Only` protoco
This corresponds to the V1 column in our [list of supported models][supported-models].
-See below for the status of models that are still not yet supported in V1.
+See below for the status of models that are not yet supported or have more features planned in V1.
#### Embedding Models
-The initial support will be provided by [PR #16188](https://github.com/vllm-project/vllm/pull/16188).
+The initial basic support is now functional.
Later, we will consider using [hidden states processor](https://github.com/vllm-project/vllm/issues/12249),
which is based on [global logits processor](https://github.com/vllm-project/vllm/pull/13360)
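
With embedding models now functional on V1, they can be exercised through vLLM's offline pooling API. The snippet below is a minimal sketch, assuming a recent vLLM release that exposes `LLM.embed()` and the `task="embed"` option; the model name is only an illustrative choice, not one prescribed by this commit.

```python
# Minimal sketch: offline embedding inference with vLLM's pooling API.
# The model name is illustrative; any supported embedding model should work.
from vllm import LLM

prompts = ["Hello, my name is", "The capital of France is"]

llm = LLM(model="intfloat/e5-mistral-7b-instruct", task="embed")
outputs = llm.embed(prompts)

for prompt, output in zip(prompts, outputs):
    embedding = output.outputs.embedding  # pooled vector as a list of floats
    print(f"{prompt!r} -> embedding with {len(embedding)} dimensions")
```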