JSON Format for LLM Tools
A Portable Way to Share Tools
Introduction
Sharing a tool so that it can be quickly added to a program or tool editor calls for a standard way to represent the tool and how to use it. We want to enable features such as the following:
An icon to visually represent the tool
Metadata for the prompt:
A name for the tool
A description for the tool
Usage notes for the tool
Placeholder parameters that are included in the tool string
Expected output
Versioning and timestamps.
JSON Format Specification
You can download our sample JSON here.
Fields Description
model_prompt: A string containing the GPT model prompt.
metadata: An object containing additional information about the GPT model prompt, including the following sub-fields:
model_version: A string indicating the version of the GPT model used.
creator: An object containing information about the creator of the GPT model prompt, with the following sub-fields:
name: A string representing the name of the creator.
email: A string representing the email of the creator.
organization: A string representing the organization the creator is affiliated with.
parameters: An object containing information about the GPT model parameters, with the following sub-fields:
temperature: A float indicating the temperature used for controlling the randomness of the output.
max_tokens: An integer indicating the maximum number of tokens in the generated response.
top_p: A float representing the nucleus sampling probability threshold.
frequency_penalty: A float representing the penalty applied to tokens based on how often they have already appeared in the generated text.
presence_penalty: A float representing the penalty applied to tokens that have already appeared in the text, encouraging the model to introduce new content.
timestamp: A string in ISO 8601 format representing the date and time when the GPT model prompt was created or last modified.
expected_output (Optional): An object containing fields related to the expected output from the model_prompt, including the following sub-fields:
type: A string indicating the type of output expected from the model_prompt.
format (Optional): A string representing the format of the expected output if applicable.
language (Optional): A string representing the programming language of the expected output if the type is code.
allowed_values (Optional): An array of strings containing a list of allowed output values if the type is limited.
variables (Optional): A list containing variables that might be inserted into the model_prompt string in an f-string style. Each variable contains the following sub-fields:
name: A string representing the variable name.
type: A string showing the type of variable. Currently the possible values of type are text for a default variable, and single-select or multi-select for selection variables.
description: A string describing the variable, including usage notes and examples.
default: A value showing the default value of the variable. This value is a string if type is text or single-select, and an array of strings for multi-select.
allowed_values: An array of strings containing a list of allowed values if the variable type is single-select or multi-select.
avatar (Optional): An object containing fields related to the graphic image acting as an avatar or icon for the prompt, including the following sub-fields:
avatar_type: A string specifying the type of avatar data included.
avatar: A string containing the URL pointing to the image if the avatar_type is url, or a base64-encoded string representing the image if the avatar_type is base64.
prompt_name (Optional): A string representing the name of the prompt.
description (Optional): A string providing a brief description of the tool and its purpose.
usage_notes (Optional): A string containing free-form notes from the creator about the usage or any specific considerations related to the tool.
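To make the layout concrete, here is a minimal illustrative example assembled from the fields described above. All values (the prompt text, names, email, organization, and parameter settings) are placeholders, not part of the specification.

{
  "model_prompt": "Summarize the following article in three bullet points: {article_text}",
  "metadata": {
    "model_version": "gpt-4",
    "creator": {
      "name": "Jane Doe",
      "email": "jane.doe@example.com",
      "organization": "Example Corp"
    },
    "parameters": {
      "temperature": 0.7,
      "max_tokens": 256,
      "top_p": 1.0,
      "frequency_penalty": 0.0,
      "presence_penalty": 0.0
    },
    "timestamp": "2023-05-01T12:00:00Z",
    "prompt_name": "Article Summarizer",
    "description": "Summarizes an article into three bullet points.",
    "usage_notes": "Works best on articles under 2,000 words."
  }
}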
To specify the format of the expected output from the model_prompt, you can add an expected_output object within the metadata object. Depending on the type of expected output, you can include the relevant sub-fields in the expected_output object.
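For example, a prompt that is expected to return Python code might declare its output like this (an illustrative sketch; the specific type, format, and language values are assumptions, not mandated vocabulary):

{
  "metadata": {
    "expected_output": {
      "type": "code",
      "format": "plain_text",
      "language": "python"
    }
  }
}

A prompt that should return only one of a fixed set of answers would instead set type to limited and list the options in allowed_values.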
To include fields for variables that might be inserted into the model_prompt string in an f-string style, you can add a separate variables list within the metadata object.
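As a sketch, a model_prompt such as "Translate {source_text} into {target_language}" could declare its placeholders as follows; the variable names, descriptions, and values are illustrative only.

{
  "metadata": {
    "variables": [
      {
        "name": "source_text",
        "type": "text",
        "description": "The text to translate.",
        "default": "Hello, world!"
      },
      {
        "name": "target_language",
        "type": "single-select",
        "description": "The language to translate into.",
        "default": "French",
        "allowed_values": ["French", "German", "Spanish"]
      }
    ]
  }
}

A consuming application would substitute these values into the {source_text} and {target_language} placeholders before sending the prompt to the model.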
To include a graphic image acting as an avatar or icon for the prompt, you can add an avatar field within the metadata object.
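For instance (an illustrative sketch; the URL is a placeholder):

{
  "metadata": {
    "avatar": {
      "avatar_type": "url",
      "avatar": "https://example.com/icons/summarizer.png"
    }
  }
}

With avatar_type set to base64, the avatar field would instead carry the base64-encoded image data directly.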
Including the expected_output, variables, avatar, prompt_name, description, and usage_notes fields within the metadata object helps keep all the contextual information about the prompt in one place, making it easier to manage and understand.
You can use the version field at the top level of the JSON object to explicitly track the version of the entire JSON file.
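For example (again illustrative; the version string and field values are placeholders), a file that has gone through a couple of revisions might begin like this:

{
  "version": "1.2.0",
  "model_prompt": "Summarize the following article in three bullet points: {article_text}",
  "metadata": {
    "model_version": "gpt-4",
    "timestamp": "2023-06-15T09:30:00Z"
  }
}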