Hugging Face API

Hugging Face is the GitHub of the machine learning community, with hundreds of thousands of pre-trained models and 10,000 datasets currently available. You can freely access models and datasets shared by other industry experts, or host and deploy your own models on Hugging Face.

Some examples of models available in the Hugging Face library include:

  1. BERT (Bidirectional Encoder Representations from Transformers): BERT is a transformer-based model developed by Google for various NLP tasks. It has achieved state-of-the-art results on language understanding tasks such as question answering.
  2. GPT (Generative Pre-trained Transformer): GPT is another transformer-based model, developed by OpenAI. It is primarily used for language generation tasks, such as text summarization and translation.
  3. RoBERTa (Robustly Optimized BERT): RoBERTa is an extension of the BERT model that was developed to improve BERT's performance on various NLP tasks.
  4. XLNet: XLNet is a transformer-based model developed by researchers at Google and Carnegie Mellon University that is designed to improve the performance of transformer models on language understanding tasks.
  5. ALBERT (A Lite BERT): ALBERT is a version of the BERT model that was developed to be more efficient and faster to train while maintaining good performance on NLP tasks.

What you can do with Hugging Face in ILLA Builder

In Hugging Face, over 130,000 machine learning models are available through the public Inference API, which you can browse and test for free at https://huggingface.co/models. In addition, if you need a solution for production scenarios, you can use Hugging Face's Inference Endpoints, which are documented at https://huggingface.co/docs/inference-endpoints/index.
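For reference, this is roughly what a raw call to the public Inference API looks like outside of ILLA. The sketch below uses Python with the requests library; the model ID, token, and prompt are placeholders, and the exact shape of the “inputs” payload depends on the model's task:

import requests

# Placeholder model ID and token; any public model from https://huggingface.co/models can be used
API_URL = "https://api-inference.huggingface.co/models/gpt2"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}

# Most text models accept an "inputs" field; its exact shape depends on the model's task
response = requests.post(API_URL, headers=headers, json={"inputs": "Hello, I am a language model"})
print(response.json())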

ILLA Builder provides dozens of commonly used front-end components, allowing you to quickly build front-end interfaces tailored to your specific needs. At the same time, ILLA offers a connection to Hugging Face, allowing you to quickly connect to the API, send requests, and receive the returned data. By connecting the API and front-end components, you can build flows where users enter content on the front end, submit it to the API, and see the generated content displayed back on the front end.

Configure Hugging Face API resource

Properties | Required | Description
Name | required | Define a resource name that will be used for display in ILLA
Token | required | The user access token or API token. You can get it at https://huggingface.co/settings/tokens.

Configure Action

Properties | Required | Description
Model ID | required | Search for models at https://huggingface.co/models
Parameter type | required | The parameter type of your endpoint. For example, if your endpoint needs a text input, choose “fill in ‘inputs’ parameter with text”; if your endpoint needs a JSON input, choose “fill in ‘inputs’ parameter with JSON or key-value”.
Parameter | required | Enter your parameter. Use {{ componentName.value }} to reference data from components.

How to use Hugging Face in ILLA Builder

Usecase 1

Step 1: Build UI with components on ILLA Builder

Here we will demonstrate how to build a simple tool that takes an original text and a question as input and outputs a text answer.

You need two text area components labeled "Enter the original text" and "Enter the question here", a button component labeled "Run", and a text area component labeled "Answer" for the output text. The following image shows an example of the layout described above.

hfapi

Step 2: Create a Hugging Face Resource and configure the Actions

Click + New in the action list and select Hugging Face Inference API.

hfapi2

Fill in the form to connect to your Hugging Face account:

Name: The name that will be displayed in ILLA

Token: Get it from your Hugging Face profile settings

hfapi3

Confirm the model information in Hugging Face before configuring the actions:

Select a model on the Hugging Face model page

Let’s use deepset/roberta-base-squad2 as an example. Open its detail page > click Deploy > click Inference API

hfapi4

The parameters after “inputs” are what you should fill in within ILLA.

hfapi5
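As a reference, the request body this question-answering model expects looks roughly like the sketch below, written here as a Python dictionary; the question and context strings are just illustrative placeholders:

# Payload expected by a question-answering model such as deepset/roberta-base-squad2
payload = {
    "inputs": {
        "question": "What is my name?",  # the question to answer
        "context": "My name is Clara and I live in Berkeley."  # the text to search for the answer
    }
}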

In ILLA Builder, we should fill in the Model ID and Parameter. Taking the above model as an example, “inputs” is a key-value pair, so we can fill it in with key-value or JSON.

hfapi6

hfapi7

We also support text input and binary input, which together cover all Hugging Face Inference API connections.
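For the binary case, a rough sketch of the underlying call: audio or image models on the Inference API are typically invoked by sending the raw file bytes as the request body. The sketch below assumes Python with the requests library; the model ID, token, and file name are placeholders:

import requests

# Placeholder speech-recognition model and token
API_URL = "https://api-inference.huggingface.co/models/facebook/wav2vec2-base-960h"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}

# For binary input, the file bytes themselves are the request body
with open("sample.flac", "rb") as f:
    response = requests.post(API_URL, headers=headers, data=f.read())
print(response.json())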

Step 3: Connect actions to components

To pass the user's front-end input to the API, you can use {{ componentName.value }} to retrieve data entered in a component. For example, if input2 is the component for entering the question and input1 is the component for entering the context, we can fill question and context in the key fields and {{ input2.value }} and {{ input1.value }} in the value fields:

{
  "question": {{input2.value}},
  "context": {{input1.value}}
}

Before displaying the Action's output data in a front-end component, we should confirm which field the output is placed in, since this differs between models. Still taking deepset/roberta-base-squad2 as an example, the results are as follows:

hfapi8
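For this question-answering model, the returned fields include answer, score, start, and end; within ILLA they are exposed under the action's data. A minimal sketch of the shape (the values are made up):

# Illustrative shape of one question-answering result
result = {
    "score": 0.97,     # model confidence
    "start": 11,       # character offset where the answer starts in the context
    "end": 16,         # character offset where the answer ends
    "answer": "Clara"  # the extracted answer text
}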

So we can get the answer with {{ textQuestion.data[0].answer }} (textQuestion is the name of the action).

hfapi9

Demo

hfapi10

hfapi11

Usecase 2

Here we will demonstrate how to use the Stable Diffusion model through the Hugging Face API in ILLA Cloud.

Step 1: Building a Front-end Interface

We construct a front-end interface by utilizing a drag-and-drop approach to place essential components such as input fields, buttons, images, and more. After adjusting the styles of the components, we obtain the following complete webpage.

hugging_layout

Step 2: Creating Resources and Actions

We establish resources and actions by using the Hugging Face Inference API to connect to the Stable Diffusion model. Two models can be used: runwayml/stable-diffusion-v1-5 and stabilityai/stable-diffusion-2-1.
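For context, the underlying Inference API call that these actions make looks roughly like the sketch below (Python with the requests library; the token, prompt, and output file name are placeholders). Text-to-image models take a text prompt as “inputs” and return raw image bytes:

import requests

API_URL = "https://api-inference.huggingface.co/models/runwayml/stable-diffusion-v1-5"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}

# The prompt goes in "inputs"; the response body is the generated image itself
response = requests.post(API_URL, headers=headers, json={"inputs": "A mecha robot in a favela in expressionist style"})
with open("generated_image.png", "wb") as f:  # actual image format depends on the model
    f.write(response.content)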

Choose the "Hugging Face Inference API" for this purpose.

action_list

Provide a name for this resource and enter your token from the Hugging Face platform.

hugging_config

In the Action configuration panel, enter the Model ID and Parameter. We will retrieve the selected model from radioGroup1, so fill in the Model ID as {{radioGroup1.value}}. For the input, since it is obtained from the input field, fill in the parameter as {{input1.value}}. The configuration should be as shown in the following image.

hugging_demo

We attempt to input "A mecha robot in a favela in expressionist style" in the input component and run the Action. The resulting execution is as follows. From the left panel, you can observe the available data that can be called, including base64binary and dataURI.

hugging_output
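If you are curious how the dataURI value relates to the raw image bytes, the conversion is roughly the sketch below (Python; the file name and MIME type are assumptions, since the actual format depends on the model's output):

import base64

# image_bytes would be the binary image returned by the model
with open("generated_image.png", "rb") as f:
    image_bytes = f.read()

base64_binary = base64.b64encode(image_bytes).decode("utf-8")
data_uri = "data:image/png;base64," + base64_binary  # usable directly as an image source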

Step 3: Displaying Data on Components

To display the image obtained from Step 2, we modify the Image source of the image component to {{generateInput.fileData.dataURI}}. This will enable us to show the generated image.

hugging_display

Step 4: Running the Action with Components

To run the action created in Step 2 when the button component is clicked, add an event handler to the button component.

hugging_event_config

Step 5: Testing

Following the previous four steps, you can utilize additional components and data sources to complete other tasks and build a more comprehensive tool. For example, you can use other models to assist in generating prompts or store prompts in localStorage or a database. Let's take a look at the complete outcome when all the steps are implemented.

hugging_test

Now you can play around with it! Try other cutting-edge models provided by Hugging Face through this API. There is more to explore, such as downloading Stable Diffusion anime models, training on an artist's style, or choosing sampling methods for more realistic images. You can do much more!