add FAQ doc under 'serving' (#5946)

This commit is contained in:
ning.zhang
2024-07-01 14:11:36 -07:00
committed by GitHub
parent 12a59959ed
commit 83bdcb6ac3
2 changed files with 13 additions and 0 deletions


@@ -84,6 +84,7 @@ Documentation
serving/usage_stats
serving/integrations
serving/tensorizer
serving/faq
.. toctree::
:maxdepth: 1


@@ -0,0 +1,12 @@
Frequently Asked Questions
==========================
Q: How can I serve multiple models on a single port using the OpenAI API?

A: Assuming you're referring to the OpenAI-compatible server, serving multiple models from a single server instance is not currently supported. Instead, you can run multiple instances of the server (each serving a different model) at the same time and put a routing layer in front that forwards each incoming request to the server hosting the requested model.
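The routing-layer idea above can be sketched as follows. This is a minimal illustration, not a vLLM feature: the model names, ports, and the `route` helper are all hypothetical, and a real deployment would use a reverse proxy or gateway in front of the servers.

```python
# Hypothetical routing layer: each model is served by its own vLLM
# OpenAI-compatible server on a separate port; the front layer picks the
# backend from the "model" field of the incoming request body.

BACKENDS = {
    "meta-llama/Meta-Llama-3-8B-Instruct": "http://localhost:8000",
    "mistralai/Mistral-7B-Instruct-v0.3": "http://localhost:8001",
}

def route(request_body: dict) -> str:
    """Return the base URL of the backend serving the requested model."""
    model = request_body.get("model")
    if model not in BACKENDS:
        raise ValueError(f"unknown model: {model}")
    return BACKENDS[model]
```

A gateway built on this mapping would forward the request to `route(body)` and stream the backend's response back to the client unchanged.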
----------------------------------------
Q: Which model should I use for offline inference embedding?

A: If you want to use an embedding model, try https://huggingface.co/intfloat/e5-mistral-7b-instruct. In contrast, models such as Llama-3-8b and Mistral-7B-Instruct-v0.3 are generation models rather than embedding models.
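As a sketch of how such embeddings are typically consumed, the snippet below compares two embedding vectors by cosine similarity. The vLLM call is shown commented out because it requires a GPU and the model weights; the prompt text and variable names are illustrative assumptions, not part of the source.

```python
# Comparing two texts via their embeddings (cosine similarity).
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Obtaining the embeddings with vLLM's offline API (requires a GPU):
# from vllm import LLM
# llm = LLM(model="intfloat/e5-mistral-7b-instruct")
# outputs = llm.encode(["query: what is vLLM?", "query: how to serve a model?"])
# emb_a = outputs[0].outputs.embedding
# emb_b = outputs[1].outputs.embedding
# score = cosine_similarity(emb_a, emb_b)
```

Generation models would return token text here rather than a pooled vector, which is why they cannot stand in for an embedding model.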