HuggingFace Integration
Using SkyDeck.ai as the front end for your HuggingFace models.
After deploying your inference endpoint on HuggingFace, you should see the following user interface:
On this page, you will need the following information:
Endpoint URL
Model Repository
API token. You can view this by checking the "Add API token" box in the Call Examples code block.
In addition to these, you will need the context window size of your model, which can be found on the model's information page.
After collecting this information, format it into JSON as shown in the example below:
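A hypothetical example of what this JSON might look like. The key names and values below are illustrative placeholders, not a confirmed schema; use the exact keys your SkyDeck.ai integration screen specifies:

```json
{
  "endpoint_url": "https://YOUR-ENDPOINT.endpoints.huggingface.cloud",
  "model_repository": "YOUR-USERNAME/YOUR-MODEL",
  "api_token": "hf_xxxxxxxxxxxxxxxx",
  "context_window": 4096
}
```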
Next, paste this into the Credential field of your integration.
Once the credential is successfully validated, your HuggingFace model should appear in GenStudio's model list:
Scaling to 0 is a dynamic feature offered by Inference Endpoints, designed to optimize resource utilization and costs. By intelligently monitoring request patterns and reducing the number of replicas to none during idle times, it ensures that you only use resources when necessary.
However, this does introduce a cold start period when traffic resumes, and there are a few considerations to be mindful of. For an in-depth look at how this feature functions, its benefits, and potential challenges, please refer to HuggingFace's guide on Autoscaling.
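To clients, the cold start typically surfaces as a temporary HTTP 503 while replicas scale back up from zero. Below is a minimal Python sketch of a tolerant client, assuming a text-generation-inference endpoint that accepts the standard `{"inputs": ...}` payload; the function names, retry count, and backoff interval are illustrative choices, not part of any official API:

```python
import json
import time
import urllib.error
import urllib.request


def build_request(prompt: str, max_new_tokens: int = 64) -> dict:
    """Build a payload in the text-generation-inference request format."""
    return {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}


def generate(endpoint_url: str, api_token: str, prompt: str,
             max_retries: int = 5, backoff_seconds: float = 10.0) -> dict:
    """POST to the endpoint, retrying while a scaled-to-zero endpoint wakes up.

    While replicas are scaling back up from zero, the endpoint answers
    with HTTP 503, so we wait and retry instead of failing immediately.
    """
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }
    data = json.dumps(build_request(prompt)).encode("utf-8")
    for _ in range(max_retries):
        req = urllib.request.Request(endpoint_url, data=data,
                                     headers=headers, method="POST")
        try:
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())
        except urllib.error.HTTPError as err:
            if err.code == 503:  # cold start still in progress
                time.sleep(backoff_seconds)
                continue
            raise
    raise TimeoutError("endpoint did not become ready after retries")
```

If cold-start latency is unacceptable for your workload, the alternative is simply to configure a minimum replica count above zero, trading idle cost for responsiveness.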
At the moment, we only support endpoints for models with a text-generation tag that are deployed as text-generation-inference containers. We are working to expand our list of supported models.