Extend Inference Runtimes
Introduction
This document guides you step by step through adding new inference runtimes for serving either Large Language Models (LLMs) or other model types such as image classification, object detection, and text classification.
Alauda AI ships with a built-in "vLLM" inference engine. With "custom inference runtimes", you can introduce additional inference engines such as Seldon MLServer, Triton Inference Server, and so on.
By introducing custom runtimes, you can expand the platform's support for a wider range of model types and GPU types, and optimize performance for specific scenarios to meet broader business needs.
In this section, we'll demonstrate extending the current AI platform with a custom Xinference serving runtime to deploy LLMs and serve an "OpenAI compatible API".
Scenarios
Consider extending your AI Platform inference service runtimes if you encounter any of the following situations:
- Support for New Model Types: Your model isn't natively supported by the current default inference runtime, vLLM.
- Compatibility with Other GPU Types: You need to perform LLM inference on hardware equipped with GPUs such as AMD or Huawei Ascend.
- Performance Optimization for Specific Scenarios: In certain inference scenarios, a new runtime (like Xinference) might offer better performance or resource utilization compared to existing runtimes.
- Custom Inference Logic: You need to introduce custom inference logic or dependent libraries that are difficult to implement within the existing default runtimes.
Prerequisites
Before you start, please ensure you meet these conditions:
- Your ACP cluster is deployed and running normally.
- Your AI Platform version is 1.3 or higher.
- You have the necessary inference runtime image(s) prepared. For example, for the Xinference runtime, images might look like `xprobe/xinference:v1.2.2` (for GPU) or `xprobe/xinference:v1.2.2-cpu` (for CPU).
- You have cluster administrator privileges (needed to create CRD instances).
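As a quick sanity check, you can verify the runtime CRD and your permissions before continuing. This is a minimal sketch that assumes the platform exposes a KServe-style `ClusterServingRuntime` CRD; the resource and group names are assumptions and may differ in your installation:

```shell
# Check that the (assumed) runtime CRD is installed in the cluster
kubectl get crd clusterservingruntimes.serving.kserve.io

# Check that your account is allowed to create instances of it
kubectl auth can-i create clusterservingruntimes.serving.kserve.io
```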
Steps
Create Inference Runtime Resources
You'll need to create the corresponding inference runtime resources based on your target hardware environment (GPU/CPU/NPU).
- Prepare the Runtime YAML Configuration:
Based on the type of runtime you want to add (e.g., Xinference) and your target hardware environment, prepare the appropriate YAML configuration file. Here are examples for the Xinference runtime across different hardware environments:
- GPU Runtime Example (a hedged sketch follows the tip below)
- Tip: Make sure to replace the `image` field value with the path to your actual prepared runtime image. You can also modify the `annotations.cpaas.io/display-name` field to customize the display name of the runtime in the AI Platform UI.
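For reference, here is a minimal sketch of what a GPU-oriented Xinference runtime definition might look like. It assumes the platform consumes KServe-style `ClusterServingRuntime` resources; the resource name, display name, model format name, startup command, port, and resource sizes are all illustrative placeholders that you must adapt to your environment:

```yaml
# Hypothetical example only -- adjust kind, image, formats, command, and
# resources to match your cluster and the runtime you actually use.
apiVersion: serving.kserve.io/v1alpha1
kind: ClusterServingRuntime
metadata:
  name: xinference-gpu
  annotations:
    cpaas.io/display-name: "Xinference GPU Runtime (CUDA)"   # display name shown in the AI Platform UI
spec:
  supportedModelFormats:
    - name: xinference          # must match the framework set in the model's metadata
      autoSelect: true
  containers:
    - name: kserve-container
      image: xprobe/xinference:v1.2.2                        # replace with your prepared image
      command: ["xinference-local"]                          # startup command may differ in your setup
      args: ["--host", "0.0.0.0", "--port", "9997"]
      ports:
        - containerPort: 9997
          protocol: TCP
      resources:
        requests:
          cpu: "2"
          memory: 8Gi
          nvidia.com/gpu: "1"                                # request one NVIDIA GPU
        limits:
          cpu: "4"
          memory: 16Gi
          nvidia.com/gpu: "1"
```

A CPU variant would follow the same structure, swapping the image for the `-cpu` tag and dropping the GPU entries from `resources`.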
- Apply the YAML File to Create the Resource:
From a terminal with cluster administrator privileges, execute the following command to apply your YAML file and create the inference runtime resource:
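For example, assuming the manifest was saved as `xinference-gpu-runtime.yaml` (a placeholder file name) and uses the resource kind and name from the sketch above:

```shell
# Create the inference runtime resource from your YAML file
kubectl apply -f xinference-gpu-runtime.yaml

# Confirm the resource was created (kind and name assumed from the sketch above)
kubectl get clusterservingruntime xinference-gpu
```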
TIP
- Important Tip: Please refer to the examples above and create/configure the runtime based on your actual environment and inference needs. These examples are for reference only. You'll need to adjust parameters like the image and the resource `limits` and `requests` to ensure the runtime is compatible with your model and hardware environment and runs efficiently.
- Note: You can only use this custom runtime on the inference service publishing page after the runtime resource has been created!
Publish Xinference Inference Service and Select the Runtime
Once the Xinference inference runtime resource is successfully created, you can select and configure it when publishing your LLM inference service on the AI Platform.
- Configure Inference Framework for the Model:
Ensure that on the model details page of the model repository you are about to publish, you have selected the appropriate framework through the File Management metadata editing function. The framework parameter value chosen here must match a value included in the `supportedModelFormats` field when you created the inference service runtime. Please ensure the model framework parameter value is listed in the `supportedModelFormats` list set in the inference runtime.
- Navigate to the Inference Service Publishing Page:
Log in to the AI Platform and navigate to the "Inference Services" or "Model Deployment" modules, then click "Publish Inference Service."
- Select the Xinference Runtime:
In the inference service creation wizard, find the "Runtime" or "Inference Framework" option. From the dropdown menu or list, select the Xinference runtime you created in Step 1 (e.g., "Xinference CPU Runtime" or "Xinference GPU Runtime (CUDA)").
- Set Environment Variables: The Xinference runtime requires specific environment variables to function correctly. On the inference service configuration page, locate the "Environment Variables" or "More Settings" section and add the following environment variable (a manifest-level sketch of the resulting configuration follows the example below):
Environment Variable Parameter Description:
- Example:
  - Variable Name: `MODEL_FAMILY`
  - Variable Value: `llama` (if you are using a Llama series model; check out the docs for more detail, or run `xinference registrations -t LLM` to list all supported model families.)
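For readers who prefer to see the end result as a manifest rather than UI fields, the following is a rough, hypothetical sketch of the KServe-style InferenceService that the platform might generate from the choices above: the `modelFormat` name matches an entry from the runtime's `supportedModelFormats`, the `runtime` field references the custom runtime created earlier, and the `MODEL_FAMILY` environment variable is set as described. The service name, storage URI, and resource sizes are illustrative only:

```yaml
# Hypothetical illustration -- the actual resource is generated by the AI Platform UI.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-llama-service               # placeholder service name
spec:
  predictor:
    model:
      modelFormat:
        name: xinference               # must appear in the runtime's supportedModelFormats
      runtime: xinference-gpu          # the custom runtime created in the previous steps
      storageUri: pvc://models/llama   # placeholder model location
      env:
        - name: MODEL_FAMILY           # required by the Xinference runtime
          value: llama
      resources:
        limits:
          nvidia.com/gpu: "1"
```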