JSON Format for Smart Tools
A portable way to share SkyDeck.AI smart tools written in Python
Overview
To create a smart tool on SkyDeck.AI, you need to upload a set of files according to the specifications mentioned in the File Structure section. Once uploaded, our platform will perform the initial setup, which may take a few minutes. Afterward, the tool will be available in the GenStudio Workspace.
File Structure
<tool_name>.json
The tool's behavior is configured through a JSON file. Here is a brief overview of the key fields in the configuration; a minimal skeleton follows the list:
version: The current version of the tool.
tool_name: The name of the tool. This name should be unique in your workspace.
tool_code: Contains the Python code to be executed. More details on this field are given in the next section.
description: A brief description of what the tool does.
usage_notes: Instructions on how to use the tool.
model_version: Specifies which models are allowed for follow-up questions. To allow all models, use ["gpt-4", "gpt-3.5", "claude", "chat-bison"].
creator: Information about the creator of the tool, including name, email, and organization.
variables: An array of variables used by the tool. Each variable has a name, description, and default value. The order of variables in the UI follows the order in this array.
expected_output: The type of output produced by the tool. During the development stage, the value should always be text.
avatar_type: The format of the avatar used in the tool's UI.
timestamp: The date and time when the tool was last updated.
requirements: Specifies the packages required to run the script in tool_code.
avatar: The string representing the tool's logo, in the format given by avatar_type.
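Putting these fields together, a minimal configuration file might look like the skeleton below. All values are placeholders; note that, as in the complete examples later on this page, every field except version is nested under a metadata object.
{
  "version": "0.1",
  "metadata": {
    "tool_name": "My Tool",
    "tool_code": "def execute(variables):\n    return 'Hello, ' + variables['Name']\n",
    "description": "A one-line summary of what the tool does.",
    "usage_notes": "How to fill in the tool's input fields.",
    "model_version": ["gpt-4", "gpt-3.5", "claude", "chat-bison"],
    "creator": {
      "name": "Your Name",
      "email": "you@example.com",
      "organization": "Your Organization"
    },
    "variables": [
      {
        "name": "Name",
        "description": "What this input field is for",
        "default": "World"
      }
    ],
    "expected_output": {
      "type": "text"
    },
    "avatar_type": "base64",
    "timestamp": "2023-05-23T10:00:00Z",
    "requirements": "",
    "avatar": ""
  }
}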
tool_code convention:
This script defines how your tool works. Its main component is the execute function, which must meet the following requirements (a minimal sketch follows the list):
The function should have a single input parameter called variables, which is a dictionary. Each key in this dictionary corresponds to a field that the user would input into your tool.
The function should return a string, which will be displayed as the response on the GenStudio UI.
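As an illustration only, a minimal execute function that follows this convention might look like the sketch below; the variable name "Text" is a placeholder and must match an entry in the variables array of your JSON file.
def execute(variables):
    # "Text" is a placeholder variable name; it must match an entry
    # in the "variables" array of the tool's JSON file.
    text = variables['Text']

    # The returned string is shown as the tool's response in GenStudio.
    return f"You entered: {text}"
GenStudio calls this function with the user's inputs, e.g. execute({'Text': 'hello'}), and displays the returned string.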
Example Tools
Image generation using DALL-E 2
Description:
This tool accepts an image description as input and generates a corresponding URL for the image. The output includes the URL along with an expiration note. The tool functions by sending the query to the OpenAI DALL-E API and retrieving the response.
Input:
Description: Image description, e.g., "A white furry cat"
Output:
A message with the generated URL for the image along with the expiration note.
Python script (stored in the tool_code field of image_generation.json):
import openai


def execute(variables):
    # Use your own OpenAI API key here.
    openai.api_key = '<USER API KEY>'

    # 'Description' matches the variable name declared in image_generation.json.
    description = variables['Description']

    # Request a single 1024x1024 image from the DALL-E API.
    response = openai.Image.create(
        prompt=description,
        n=1,
        size="1024x1024"
    )

    # The API returns a temporary URL for the generated image.
    image_url = response['data'][0]['url']
    return f"Here is the link to your image: {image_url}. The link will expire in 1 hour."
image_generation.json
{
  "version": "0.1",
  "metadata": {
    "tool_name": "Image generation",
    "tool_code": "import openai\n\ndef execute(variables):\n    openai.api_key = ''\n    description = variables['Description']\n    response = openai.Image.create(\n        prompt=description,\n        n=1,\n        size=\"1024x1024\"\n    )\n\n    image_url = response['data'][0]['url']\n    return f\"Here is the link to your image: {image_url}. The link will expire in 1 hour.\"\n",
    "description": "Generates an image based on the description with OpenAI's DALL-E model.",
    "usage_notes": "Describe the image in detail and put it in the description field. A URL of the image will be returned. The lifetime of the URL is about 1 hour, so make sure to download the image before it expires.",
    "model_version": ["gpt-3.5", "gpt-3.5-turbo", "gpt-4", "claude"],
    "creator": {
      "name": "SkyDeck AI",
      "email": "[email protected]",
      "organization": "East Agile"
    },
    "variables": [
      {
        "name": "Description",
        "description": "Image description",
        "default": "a white siamese cat"
      }
    ],
    "expected_output": {
      "type": "text"
    },
    "avatar_type": "base64",
    "timestamp": "2023-05-23T10:00:00Z",
    "requirements": "openai>=0.27.4",
    "avatar": ""
  }
}
Real-time weather report with Open-Meteo API
Description:
This tool leverages the Open-Meteo API to provide real-time weather information based on users' questions. By asking a question about the weather, such as temperature, precipitation, or wind conditions, the tool retrieves the most relevant and up-to-date data.
The tool relies on APIChain, a feature of the LangChain library, to read the Open-Meteo API documentation. This enables it to learn how to make the correct API calls and retrieve the required information seamlessly.
Input:
Question: Ask a specific question about the weather, e.g., "What is the current temperature in New York City?"
Output:
A response providing the requested weather information.
Python script (stored in the tool_code field of weather_reporter.json):
from langchain.chains.api import open_meteo_docs
from langchain.chat_models import ChatOpenAI
from langchain.chains import APIChain


def execute(variables):
    # 'Question' matches the variable name declared in weather_reporter.json.
    question = variables['Question']

    # The chat model used to interpret the question and the API documentation.
    llm = ChatOpenAI(
        model_name='gpt-3.5-turbo',
        openai_api_key='<USER API KEY>'
    )

    # APIChain reads the Open-Meteo docs and constructs the appropriate API call.
    api_chain = APIChain.from_llm_and_api_docs(
        llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=False
    )

    result = api_chain.run(question)
    return result
weather_reporter.json
{
  "version": "0.1",
  "metadata": {
    "tool_name": "Weather Reporter",
    "tool_code": "from langchain.chains.api import open_meteo_docs\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.chains import APIChain\n\n\ndef execute(variables):\n    question = variables['Question']\n    llm = ChatOpenAI(model_name='gpt-3.5-turbo',\n                     openai_api_key='')\n    api_chain = APIChain.from_llm_and_api_docs(\n        llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=False)\n    result = api_chain.run(question)\n    return result\n",
    "description": "Leverage the Open-Meteo API to retrieve real-time weather details",
    "usage_notes": "Enter your weather-related question in the provided field",
    "model_version": ["gpt-3.5", "gpt-3.5-turbo", "gpt-4", "claude"],
    "creator": {
      "name": "SkyDeck AI",
      "email": "[email protected]",
      "organization": "East Agile"
    },
    "variables": [
      {
        "name": "Question",
        "description": "Inquire about the weather conditions",
        "default": "What is the current temperature in Munich, Germany, expressed in degrees Celsius?"
      }
    ],
    "expected_output": {
      "type": "text"
    },
    "avatar_type": "base64",
    "timestamp": "2023-07-13T10:00:00Z",
    "requirements": "openai>=0.27.4\nlangchain>=0.0.229",
    "avatar": ""
  }
}
Limitations
AWS Lambda limits each function to a maximum of 15 minutes of execution time and 10 GB of RAM, so a tool must finish its execution within these constraints.