
Hugging Face Endpoint

With Hugging Face Inference Endpoints, you can easily deploy Transformers, Diffusers, or any other model on dedicated, fully managed infrastructure. You can create an Endpoint at https://ui.endpoints.huggingface.co/new.

Create a Hugging Face Endpoint resource

There are two ways to add a Hugging Face Endpoint resource.

  1. Enter the ILLA Builder >> Click Resources tab >> Click Create New >> Choose Hugging Face Endpoint >> Configure the connection information and click Save Resource
  2. Enter the edit page >> click + New in the action list >> Choose Hugging Face Endpoint >> Configure the connection information or click + New Resource to add new connection information

Configure connection information

Properties | Required | Description
--- | --- | ---
Name | required | Define a resource name that will be used for display in ILLA.
Endpoint URL | required | Create an Endpoint at https://ui.endpoints.huggingface.co/new and get the URL.
Token | required | The organization token. You can get it at https://huggingface.co/settings/tokens.
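Under the hood, an Inference Endpoint is a plain HTTPS API that ILLA calls with the Endpoint URL and the token. As a rough sketch of what such a call looks like (the URL, token, and input below are placeholders, not values from this guide):

```python
import requests

# Placeholder values -- replace with your own Endpoint URL and organization token.
ENDPOINT_URL = "https://your-endpoint.us-east-1.aws.endpoints.huggingface.cloud"
HF_TOKEN = "hf_xxx"

# An Inference Endpoint is a plain HTTPS API: POST an "inputs" payload
# and authenticate with the token as a Bearer credential.
response = requests.post(
    ENDPOINT_URL,
    headers={
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    },
    json={"inputs": "I love this product!"},
)
print(response.json())
```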

Create Actions

Enter the edit page >> click + New in the action list >> Choose Hugging Face Endpoint >> Choose an existing resource or add a new resource

Configure actions

Properties | Required | Description
--- | --- | ---
Parameter type | required | The parameter type of your Endpoint. For example, if your Endpoint needs a text input, choose fill in “inputs” parameter with text. If your Endpoint needs a JSON input, choose fill in “inputs” parameter with JSON or key-value.
Parameter | required | Enter your parameter. Use {{ componentName.value }} to use data from components.
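As a rough illustration of what the two parameter types correspond to on the wire (the example inputs below are made up, and the exact shape depends on your model):

```python
# Illustrative payload shapes only; the actual input depends on your model.

# “inputs” parameter with text: the value is sent as a plain string.
text_payload = {"inputs": "Summarize this meeting transcript ..."}

# “inputs” parameter with JSON or key-value: the value is a JSON object,
# e.g. for models that expect structured input such as question answering.
json_payload = {
    "inputs": {
        "question": "Where does she live?",
        "context": "She lives in Berlin.",
    }
}
```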

Use case

We have deployed openai/whisper-base, which takes an audio file as input and converts it into text. It is suitable for meeting minutes, podcast transcription, etc. Next, we will introduce how to use this model to build an application in ILLA Cloud.

Step 1: Build the front-end interface with components

We have built an interface using components such as File Upload and Button, as shown below.

Step 2: Add a Hugging Face resource

Create an Endpoint to get the Endpoint URL, and get the organization API token from your profile settings. Then fill in the fields shown below to finish the resource configuration.

Step 3: Configure an Action

Select a parameter type first. Taking openai/whisper-base as an example, this model requires a binary file input, so we set the parameter type to Binary.

Set the binary data passed to the model to the file uploaded from the File Upload component, for example {{upload1.value[0]}}.
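For reference, a request with the Binary parameter type sends the raw file bytes as the request body, roughly like this sketch (the endpoint URL, token, and file name are placeholders):

```python
import requests

# Placeholder values -- replace with your own Endpoint URL, token, and audio file.
ENDPOINT_URL = "https://your-endpoint.us-east-1.aws.endpoints.huggingface.cloud"
HF_TOKEN = "hf_xxx"

# With the Binary parameter type, the request body is the raw file bytes --
# the same data that {{upload1.value[0]}} supplies from the File Upload component.
with open("meeting.flac", "rb") as f:
    audio_bytes = f.read()

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}", "Content-Type": "audio/flac"},
    data=audio_bytes,
)
# A speech-to-text endpoint typically returns the transcript in a "text" field.
print(response.json())
```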

Step 4: Connect the components and actions

Add an event handler to the Button to trigger the action run when the button is clicked. Then set the value of the Text component to {{whisper.data[0].text}} to display the conversion result in the Text component.
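Assuming the action is named whisper and its result exposes the endpoint response as a list under .data, the expression {{whisper.data[0].text}} reads the transcript roughly like this (the sample data below is illustrative only):

```python
# Illustrative only: a made-up result shape for an action named "whisper".
whisper_data = [{"text": "Welcome to the weekly planning meeting ..."}]

# {{whisper.data[0].text}} picks the "text" field of the first element.
transcript = whisper_data[0]["text"]
print(transcript)
```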

After the above steps are completed, the application is ready, and the running result is shown below.