AI Agent

Customize the AI Agent to your needs, constraining the behavior of the GPT model so it generates content that matches your requirements. Integrate the AI Agent into an ILLA App to make your app more intelligent.

What is AI Agent

AI Agent is a feature built on top of powerful language models such as OpenAI's GPT-3.5 and GPT-4. It lets you edit prompts to tailor the capabilities of a large language model to your specific needs. You can save these modifications as your own AI Agent for quick and convenient access, and you can also directly use AI Agents contributed by other creators.
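As an illustration of what "saving these modifications as your own AI Agent" amounts to, the sketch below models an agent as a plain dictionary. This is a conceptual example only, not ILLA's actual storage format; the field names simply mirror the creation form described later in this page.

```python
# Conceptual sketch (not ILLA's actual storage format): the fields you
# edit and save as your own AI Agent, mirroring the creation form below.
agent = {
    "name": "Translator",
    "description": "Translates input text into a target language.",
    "mode": "chat",                # "chat" or "text generation"
    "prompt": "Translate the content to {{translate}}.",
    "variables": {"translate": "English"},
    "model": "gpt-3.5-turbo",
    "max_token": 512,
    "temperature": 0.3,
}
```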

Use case

Explore all AI Agents on illa.ai

Marketing

Blog generator

Fast try: https://illa.ai/ai-agent/ILAfx4p1C7es/detail

Email generator

Fast try: https://illa.ai/ai-agent/ILAfx4p1C7eg/detail

SEO

Fast try: https://illa.ai/ai-agent/ILAfx4p1C7ep/detail

Customer support

Fast try: https://illa.ai/ai-agent/ILAfx4p1C7eh/detail

Language

Language learning

Fast try: https://illa.ai/ai-agent/ILAfx4p1C7eD/detail

Translator

Fast try: https://illa.ai/ai-agent/ILAfx4p1C7ek/detail

Create AI Agent

| Field | Required or not | Description |
| --- | --- | --- |
| Icon | Required | You can upload an image up to 500 KB, or use AI to generate an icon after filling in the Name. The icon is displayed in your AI Agent dashboard and, if you contribute the AI Agent to the ILLA Community, it is also shown in the Community. |
| Name | Required | Name of the AI Agent. |
| Description | Required | A brief description of the AI Agent, up to 160 characters. You can also generate it after filling in the prompt. |
| Mode | Required | **Chat mode**: each request includes the current message, the previous conversation history, and the prompt, which increases token consumption. However, the output is more accurate because the model takes the conversation context into account. **Text generation mode**: only the current message and the prompt are included in the request, which lowers token consumption, but the conversation context is not taken into account. Note that the conversation history refers to the conversation visible on screen; we do not store this information. Once you refresh or close the webpage, the conversation history is not retained. |
| Prompt | Required | A prompt is a command or instruction that you provide to a language model such as GPT. It defines the role or desired output of the model. You can use prompts to instruct the model to perform specific tasks or generate specific types of content. For example, prompts like "Act as an English teacher" or "Please output the results in Markdown format" guide the model's behavior toward the desired response. |
| Variable | Optional | Variables let you dynamically fill in content within the prompt. A variable consists of a key and a value. The **key** is the variable name and cannot contain spaces; it can be changed freely in edit mode but not in run mode. The **value** is the variable's value; once you enter a variable name, the value becomes a required field. After creating a variable, reference it in the prompt with double curly braces: `{{variable_name}}`. For example, if you create a variable with the key "translate" and the value "English", the prompt can be `Translate the content to {{translate}}`. At runtime, the model actually receives the prompt as `Translate the content to English`. |
| Model | Required | We support GPT-3.5-turbo, GPT-3.5-turbo-16k, GPT-4, GPT-4-32k, LLAMA, and others. |
| Max Token | Required | Sets the maximum number of tokens allowed per API call. This helps prevent excessive token consumption in a single call, but a lower token limit may result in less complete responses. Different models have different maximum token limits; refer to each model's official documentation for the specific limit. |
| Temperature | Required | The allowed range is 0.1 to 2, but it is typically set between 0.1 and 1. Temperature controls the trade-off between accuracy and randomness in the model's output. Lower values make the output more deterministic and conservative, which may improve accuracy; higher values make the output more creative and random, introducing more variability but potentially reducing accuracy. |
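The Variable and Mode fields above can be sketched in a few lines of Python. This is an illustrative model, not ILLA's implementation: `render_prompt` shows the `{{variable_name}}` substitution described in the Variable row, and `build_messages` shows why chat mode consumes more tokens than text generation mode (it carries the on-screen history along with the prompt and current message). The function names and message shape are assumptions for the sake of the example.

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{key}} placeholder in the prompt with its value."""
    def substitute(match):
        key = match.group(1)
        if key not in variables:
            raise KeyError(f"undefined variable: {key}")
        return variables[key]
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

def build_messages(prompt, current_message, history=None, mode="chat"):
    """Assemble the request payload for one model call.

    Chat mode includes the visible conversation history (more tokens,
    more context); text generation mode sends only the prompt and the
    current message.  The history is never stored server-side.
    """
    messages = [{"role": "system", "content": prompt}]
    if mode == "chat" and history:
        messages.extend(history)
    messages.append({"role": "user", "content": current_message})
    return messages

# Example from the Variable row of the table:
prompt = render_prompt("Translate the content to {{translate}}.",
                       {"translate": "English"})
print(prompt)  # Translate the content to English.
```

Note that in chat mode the rendered prompt is sent with every request alongside the growing history, which is exactly where the extra token consumption comes from.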

Share AI Agent

Share with team members

Contribute to community

Use AI Agent in ILLA App

How-to

Demo

Fork and use

https://illa.ai/app/ILAfx4p1C71f/detail