[Chore][Doc] uses model id determined from OpenAI client (#17815)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
Author: Aaron Pham
Date: 2025-05-07 21:48:57 -04:00
Committed by: GitHub
Parent: d43f914d42
Commit: a8238bbdb0
3 changed files with 3 additions and 3 deletions


@@ -138,7 +138,7 @@ def main():
         api_key="-",
     )
-    model = "Qwen/Qwen2.5-3B-Instruct"
+    model = client.models.list().data[0].id
     print("Guided Choice Completion:")
     print(guided_choice_completion(client, model))

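The pattern applied in each file is the same: instead of hardcoding a checkpoint name, the example asks the server which model it is actually serving. A minimal sketch of that pattern, assuming a vLLM OpenAI-compatible server is already running at the default http://localhost:8000/v1 endpoint:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server does not validate the API key unless
# one is configured, so a placeholder value works here.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed default vLLM endpoint
    api_key="-",
)

# A vLLM server process serves a single model, so the first (and only)
# entry in the model list is the id the server was launched with.
model = client.models.list().data[0].id
print(f"Serving model: {model}")
```

This keeps the examples runnable against whatever model the server was started with, rather than failing when a hardcoded name does not match.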

@@ -59,7 +59,7 @@ and San Francisco?
     }]
     response = client.chat.completions.create(
-        model="meta-llama/Llama-3.1-8B-Instruct",
+        model=client.models.list().data[0].id,
         messages=messages,
         response_format={
             "type":

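The hunk above is truncated at the response_format argument. A hedged sketch of what a full structured-output request looks like with the dynamically resolved model id; the message, schema, and schema name below are illustrative, not the example file's actual values:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="-")
model = client.models.list().data[0].id  # the substitution this commit makes

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user",
               "content": "Name a city in Northern California."}],
    # Constrain the completion to a JSON object matching the schema below.
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "city_answer",  # hypothetical schema name
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
)
print(response.choices[0].message.content)
```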

@@ -4,7 +4,7 @@ An example shows how to generate structured outputs from reasoning models
 like DeepSeekR1. The thinking process will not be guided by the JSON
 schema provided by the user. Only the final output will be structured.
 To run this example, you need to start the vLLM server with the reasoning
 parser:
 ```bash