# Composio

Composio enables AI Agents and LLMs to authenticate and integrate with various tools via function calling.

- **Category:** ai agents
- **Auth:** NO_AUTH
- **Composio Managed App Available?** N/A
- **Tools:** 25
- **Triggers:** 0
- **Slug:** `COMPOSIO`
- **Version:** 20260316_00

## Tools

### Check active connection (deprecated)

**Slug:** `COMPOSIO_CHECK_ACTIVE_CONNECTION`

Deprecated: use Check multiple active connections (`COMPOSIO_CHECK_ACTIVE_CONNECTIONS`) instead for bulk operations. Checks the active connection status for a toolkit or a specific connected account ID. Returns connection details if a connection is active, or the parameters required to establish one if none exists. Active connections enable agent actions on the toolkit.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `connected_account_id` | string | No | Specific connected account ID to check status for |
| `toolkit` | string | No | Name of the toolkit to check |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
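Every tool in this toolkit returns the same output envelope (`data`, `error`, `successful`). A minimal, illustrative sketch of handling it error-first; the response values below are fabricated examples, not real API output:

```python
def handle_envelope(response: dict) -> dict:
    """Return the data payload, raising if the execution failed."""
    if not response.get("successful"):
        raise RuntimeError(response.get("error") or "unknown error")
    return response.get("data", {})

# Fabricated successful response, following the envelope shape above:
payload = handle_envelope({"data": {"status": "ACTIVE"}, "error": None, "successful": True})
```

The same pattern applies unchanged to every tool below, since they all share this envelope.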

### Check multiple active connections

**Slug:** `COMPOSIO_CHECK_ACTIVE_CONNECTIONS`

Checks the active connection status for multiple toolkits or specific connected account IDs. Returns connection details if a connection is active, or the parameters required to establish one if none exists. Active connections enable agent actions on toolkits.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `requests` | array | Yes | List of connection check requests |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Plan

**Slug:** `COMPOSIO_CREATE_PLAN`


This is a workflow builder that ensures the LLM produces a complete, step-by-step plan for any use case.
WHEN TO CALL:
- Call this tool based on COMPOSIO_SEARCH_TOOLS output. If the search tools response indicates create_plan should be called and the use case is not easy, call it.
- Use this tool after COMPOSIO_SEARCH_TOOLS or COMPOSIO_MANAGE_CONNECTIONS to generate an execution plan for the user's use case.
- USE for medium or hard tasks — skip it for easy ones.
- If the user switches to a new use case in the same chat and COMPOSIO_SEARCH_TOOLS again instructs you to call the planner, you MUST call this tool again for that new use case.

Memory Integration:
- You can choose to add the memory received from the search tool into the known_fields parameter of the plan function to enhance planning with discovered relationships and information.

Outputs a complete plan with sections such as "workflow_steps", "complexity_assessment", "decision_matrix", "failure_handling", "output_format", and more as needed.

If you skip this step for non-easy tasks, workflows will likely be incomplete or fail during execution.
Calling it guarantees reliable, accurate, and end-to-end workflows aligned with the available tools and connections.
    

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `difficulty` | string ("medium" or "hard") | Yes | Difficulty level for the plan. Choose "medium" for moderate complexity (e.g., summarize Slack messages from the last day) and "hard" for complex tasks requiring multiple steps or advanced logic (e.g., create personalized drafts for 100 emails). Do not call this tool for easy tasks. |
| `known_fields` | string | Yes | Workflow inputs you already know, as comma-separated key:value pairs (not an array), e.g. channel name, user email, timezone. This helps the tool infer or look up relevant memories (like resolving channel_id from a given channel_name). Keep to 2-3 short, structured values; include only stable identifiers, names, emails, or settings. Do not include free-form or long text (like messages, notes, or descriptions). Example: "channel_name:pod-sdk, channel_id:123, user_names:John,Maria, timezone:Asia/Kolkata" |
| `primary_tool_slugs` | array | Yes | List of primary tool slugs that can accomplish the main task. Never invent tool slugs, only use the ones given by Search. For example: ['GITHUB_LIST_PULL_REQUESTS', 'SLACK_SEND_MESSAGE'] |
| `reasoning` | string | Yes | Short reasoning from the search about the use case and how the selected tools can accomplish it |
| `related_tool_slugs` | array | No | List of related/supporting tool slugs that might be useful. These are optional tools that could help with the task. Never invent tool slugs, only use the ones given by Search. |
| `use_case` | string | Yes | Detailed explanation of the use case the user is trying to accomplish. Include as many details as possible for a better plan |
| `session_id` | string | No | ALWAYS pass the session_id that was provided in the SEARCH_TOOLS response. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
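The `known_fields` string format is easy to get wrong: it is comma-separated key:value text, not JSON or an array. A small sketch that builds such a string and parses it back to verify the round trip; the field names mirror the example in the parameter table:

```python
# Build the known_fields string for COMPOSIO_CREATE_PLAN from a dict.
# Keys and values mirror the example in the parameter table.
known = {
    "channel_name": "pod-sdk",
    "channel_id": "123",
    "timezone": "Asia/Kolkata",
}
known_fields = ", ".join(f"{k}:{v}" for k, v in known.items())

# Parse it back to sanity-check the format (each entry is key:value).
parsed = dict(pair.split(":", 1) for pair in known_fields.split(", "))
```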

### Download S3 File

**Slug:** `COMPOSIO_DOWNLOAD_S3_FILE`

Download a file from a public S3 (or R2) URL to a local path.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `local_path` | string | No | Optional local path where the file should be saved. If not provided, a temporary directory is used with the filename taken from the URL |
| `s3_url` | string | Yes | Public S3 URL to download the file from |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
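When `local_path` is omitted, the description says the filename comes from the URL. A hedged sketch of that fallback behavior; the helper name and temp-directory layout are assumptions for illustration, not the tool's actual implementation:

```python
import os
import tempfile
from urllib.parse import urlparse, unquote

def derive_local_path(s3_url, local_path=None):
    """Pick a save location: the given path, else temp dir + filename from the URL."""
    if local_path:
        return local_path
    filename = os.path.basename(unquote(urlparse(s3_url).path)) or "download"
    return os.path.join(tempfile.gettempdir(), filename)
```

The download itself would then be a plain HTTP GET of the public URL to that path.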

### Enable trigger

**Slug:** `COMPOSIO_ENABLE_TRIGGER`

Enable a specific trigger for the authenticated user.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `config_params` | object | No | Configuration parameters for the trigger |
| `connected_account_id` | string | Yes | Connected account ID to enable trigger for |
| `toolkit_slug` | string | Yes | Slug of the toolkit |
| `trigger_name` | string | Yes | Name of the trigger to enable |
| `user_id` | string | No | User ID for the trigger |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Execute Composio Tool

**Slug:** `COMPOSIO_EXECUTE_TOOL`

Execute a tool using the Composio API.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `allow_destructive` | boolean | No | Whether to allow destructive tools to be executed. If true, the tool will be executed even if it is destructive. |
| `arguments` | object | Yes | The arguments to be passed to the tool. The schema of the arguments is present in the retrieve_actions response |
| `connected_account_id` | string | No | The ID of the connected account to use. If not provided, uses the first active connection for the toolkit |
| `tool_slug` | string | Yes | The slug of the tool to execute, to be used from the list of tools retrieved using retrieve_actions |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Tool Dependency Graph

**Slug:** `COMPOSIO_GET_DEPENDENCY_GRAPH`

Get the dependency graph for a given tool, showing related parent tools that might be useful. This action calls the Composio Labs dependency graph API to retrieve tools that are commonly used together with, or before, the specified tool. This helps discover related tools and understand common workflows.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tool_name` | string | Yes | The name of the tool to get dependency graph for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get required parameters for connection

**Slug:** `COMPOSIO_GET_REQUIRED_PARAMETERS`

Gets the required parameters for connecting to a toolkit via Initiate connection. Returns the exact parameter names and types needed for Initiate connection's `parameters` field. Supports API keys, OAuth credentials, connection fields, and hybrid authentication scenarios. If "has default credentials" is true, you can call Initiate connection with empty parameters for a seamless OAuth flow.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `toolkit` | string | Yes | Name of the toolkit to analyze for authentication requirements. Returns parameters for API keys, OAuth credentials, or connection fields needed by initiate_connection. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get response schema

**Slug:** `COMPOSIO_GET_RESPONSE_SCHEMA`

Retrieves the response schema for a specified Composio tool. This action fetches the complete response schema definition for any valid Composio tool, returning it as a dictionary that describes the expected response structure.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tool` | string | Yes | Name of the tool. For example: GITHUB_LIST_PULL_REQUESTS. You can find the relevant tool names using COMPOSIO_RETRIEVE_ACTIONS tool. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Initiate connection

**Slug:** `COMPOSIO_INITIATE_CONNECTION`

Initiate a connection to a toolkit with comprehensive authentication support. Supports all authentication scenarios: (1) Composio default OAuth (no parameters needed); (2) custom OAuth (user's client ID/client secret); (3) API key / bearer token authentication; (4) basic auth (username/password); (5) hybrid scenarios (OAuth plus connection fields like site name); (6) connection-only fields (subdomain, API key at the connection level); (7) no authentication required. Automatically detects and validates auth config vs. connection fields, and provides helpful error messages for missing parameters.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `parameters` | object | No | Authentication parameters for the connection. Structure depends on auth type. API key auth: {"generic_api_key": "your_key"}; bearer token: {"bearer_token": "your_token"} or {"access_token": "your_token"}; basic auth: {"username": "user", "password": "pass"}; custom OAuth: {"client_id": "your_id", "client_secret": "your_secret"}; connection fields: {"subdomain": "your_subdomain", "site_name": "your_site"}. Examples: Exa {"generic_api_key": "your_exa_api_key"}; GitHub (token) {"access_token": "ghp_xxxxx"}; Google Super (OAuth) {"client_id": "xxx.apps.googleusercontent.com", "client_secret": "GOCSPX-xxx"}; SharePoint (hybrid) {"client_id": "your_id", "client_secret": "your_secret", "site_name": "your_site"}; Zendesk (connection only) {"subdomain": "your_subdomain"}. Leave empty {} for the default OAuth flow (if supported by the toolkit). Use the get_required_parameters action to see exact parameter names and requirements. |
| `toolkit` | string | Yes | Name of the toolkit to connect (e.g., 'gmail', 'exa', 'github', 'linear') |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
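The `parameters` shapes from the table, restated as Python dicts for readability. The key names come from the parameter table; every value is a placeholder:

```python
# Placeholder values throughout - only the key names are meaningful.
api_key_params      = {"generic_api_key": "your_key"}
bearer_params       = {"bearer_token": "your_token"}    # or {"access_token": "your_token"}
basic_auth_params   = {"username": "user", "password": "pass"}
custom_oauth_params = {"client_id": "your_id", "client_secret": "your_secret"}
hybrid_params       = {"client_id": "your_id", "client_secret": "your_secret",
                       "site_name": "your_site"}        # e.g. SharePoint
connection_only     = {"subdomain": "your_subdomain"}   # e.g. Zendesk
default_oauth       = {}  # empty dict -> Composio default OAuth flow, if supported
```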

### List toolkits

**Slug:** `COMPOSIO_LIST_TOOLKITS`

List all the available toolkits on composio with filtering options.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `category` | string | No | Filter toolkits by category |
| `min_tools` | integer | No | Filter toolkits by minimum number of tools |
| `name_filter` | string | No | Filter toolkits by name/slug |
| `no_auth_only` | boolean | No | Only return toolkits that don't require authentication |
| `size` | integer | No | Limit the number of results returned |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List triggers

**Slug:** `COMPOSIO_LIST_TRIGGERS`

List available triggers and their configuration schemas.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `toolkit_names` | array | No | List of toolkit names to filter triggers (optional); if not provided or empty, all triggers are returned |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Manage connections

**Slug:** `COMPOSIO_MANAGE_CONNECTIONS`


Create or manage connections to user's apps. Returns a branded authentication link that works for OAuth, API keys, and all other auth types.

Call policy:
- First call COMPOSIO_SEARCH_TOOLS for the user's query.
- If COMPOSIO_SEARCH_TOOLS indicates there is no active connection for a toolkit, call COMPOSIO_MANAGE_CONNECTIONS with the exact toolkit name(s) returned.
- Do not call COMPOSIO_MANAGE_CONNECTIONS if COMPOSIO_SEARCH_TOOLS returns no main tools and no related tools.
- Toolkit names in toolkits must exactly match toolkit identifiers returned by COMPOSIO_SEARCH_TOOLS; never invent names.
- NEVER execute any toolkit tool without an ACTIVE connection.

Tool Behavior:
- If a connection is Active, the tool returns the connection details. Always use this to verify connection status and fetch metadata.
- If a connection is not Active, the tool returns an authentication link (redirect_url) to create a new connection.
- If reinitiate_all is true, the tool forces reconnections for all toolkits, even if they already have active connections.

Workflow after initiating connection:
- Always show the returned redirect_url as a FORMATTED MARKDOWN LINK to the user, and ask them to click on the link to finish authentication.
- Begin executing tools only after the connection for that toolkit is confirmed Active.
    

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `toolkits` | array | Yes | List of toolkits to check or connect. Should be a valid toolkit returned by SEARCH_TOOLS (never invent one). If a toolkit is not connected, will initiate connection. Example: ['gmail', 'exa', 'github', 'outlook', 'reddit', 'googlesheets', 'one_drive'] |
| `reinitiate_all` | boolean | No | Force reconnection for ALL toolkits in the toolkits list, even if they already have Active connections. WHEN TO USE: you suspect existing connections are stale or broken; you want to refresh all connections with new credentials or settings; you're troubleshooting connection issues across multiple toolkits. BEHAVIOR: overrides any existing active connections for all specified toolkits and initiates new link-based authentication flows. DEFAULT: false (preserve existing active connections) |
| `session_id` | string | No | ALWAYS pass the session_id that was provided in the SEARCH_TOOLS response. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
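The workflow above requires presenting the redirect_url as a formatted Markdown link. A trivial sketch; the field names `toolkit` and `redirect_url` are taken from the description, and the URL below is a placeholder:

```python
def auth_link_markdown(toolkit, redirect_url):
    """Render the returned authentication link as a clickable Markdown link."""
    return f"[Connect your {toolkit} account]({redirect_url})"

# Placeholder URL - the real one comes back in the tool response.
line = auth_link_markdown("gmail", "https://auth.example/redirect")
```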

### Multi Execute Composio Tools

**Slug:** `COMPOSIO_MULTI_EXECUTE_TOOL`


  Fast and parallel tool executor for tools and recipes discovered through COMPOSIO_SEARCH_TOOLS. Use this tool to execute up to 50 tools in parallel across apps only when they're logically independent (no ordering/output dependencies). Response contains structured outputs ready for immediate analysis - avoid reprocessing them via remote bash/workbench tools.

Prerequisites:
- Always use valid tool slugs and their arguments discovered through COMPOSIO_SEARCH_TOOLS. NEVER invent tool slugs or argument fields. ALWAYS pass STRICTLY schema-compliant arguments with each tool execution.
- Ensure an ACTIVE connection exists for the toolkits that are going to be executed. If none exists, MUST initiate one via COMPOSIO_MANAGE_CONNECTIONS before execution.
- Only batch tools that are logically independent - no ordering, no output-to-input dependencies, and no intra-call chaining (tools in one call can't use each other's outputs). DO NOT pass dummy or placeholder inputs; always resolve required inputs using appropriate tools first.

Usage guidelines:
- Use this whenever a tool is discovered and has to be called, either as part of a multi-step workflow or as a standalone tool.
- If COMPOSIO_SEARCH_TOOLS returns a tool that can perform the task, prefer calling it via this executor. Do not write custom API calls or ad-hoc scripts for tasks that can be completed by available Composio tools.
- Prefer parallel execution: group independent tools into a single multi-execute call where possible.
- Predictively set sync_response_to_workbench=true if the response may be large or needed for later scripting. The response is still shown inline; if the actual response data turns out small and easy to handle, keep everything inline and SKIP workbench usage.
- Responses contain structured outputs for each tool. RULE: Small data - process yourself inline; large data - process in the workbench.
- ALWAYS include inline references/links to sources in MARKDOWN format directly next to the relevant text, e.g. provide Slack thread links alongside the summary, and render document links instead of raw IDs.

Restrictions: Some tools or toolkits may be disabled in this environment. If the response indicates a restriction, inform the user and STOP execution immediately. Do NOT attempt workarounds or speculative actions.


- CRITICAL: You MUST always include the 'memory' parameter - never omit it. Even if you think there's nothing to remember, include an empty object {} for memory.

Memory Storage:
- CRITICAL FORMAT: Memory must be a dictionary where keys are app names (strings) and values are arrays of strings. NEVER pass nested objects or dictionaries as values.
- CORRECT format: {"slack": ["Channel general has ID C1234567"], "gmail": ["John's email is john@example.com"]}
- Write memory entries in natural, descriptive language - NOT as key-value pairs. Use full sentences that clearly describe the relationship or information.
- ONLY store information that will be valuable for future tool executions - focus on persistent data that saves API calls.
- STORE: ID mappings, entity relationships, configs, stable identifiers.
- DO NOT STORE: Action descriptions, temporary status updates, logs, or "sent/fetched" confirmations.
- Examples of GOOD memory (store these):
  * "The important channel in Slack has ID C1234567 and is called #general"
  * "The team's main repository is owned by user 'teamlead' with ID 98765"
  * "The user prefers markdown docs with professional writing, no emojis" (user_preference)
- Examples of BAD memory (DON'T store these):
  * "Successfully sent email to john@example.com with message hi"
  * "Fetching emails from last day (Sep 6, 2025) for analysis"
- Do not repeat the memories stored or found previously.
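Because the memory format is strict (app-name keys, string-array values, no nesting), a quick shape check like the following can catch mistakes before calling the tool. This is an illustrative helper, not part of the Composio API:

```python
def is_valid_memory(memory):
    """True iff memory is {app_name: [str, ...]} with no nested objects."""
    if not isinstance(memory, dict):
        return False
    return all(
        isinstance(app, str)
        and isinstance(entries, list)
        and all(isinstance(e, str) for e in entries)
        for app, entries in memory.items()
    )

good = {"slack": ["Channel general has ID C1234567"],
        "gmail": ["John's email is john@example.com"]}
bad = {"slack": {"channel": "C1234567"}}  # nested dict - NOT allowed
```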


#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tools` | array | Yes | List of logically independent tools to execute in parallel. |
| `thought` | string | No | One-sentence, concise, high-level rationale (no step-by-step). |
| `sync_response_to_workbench` | boolean | Yes | Syncs the response to the remote workbench (for later scripting/processing) while still viewable inline. Predictively set true if the output may be large or need scripting; if it turns out small/manageable, skip workbench and use inline only. Default: false |
| `memory` | object | No | CRITICAL: Memory must be a dictionary with app names as keys and string arrays as values. NEVER use nested objects. Format: {"app_name": ["string1", "string2"]}. Store durable facts - stable IDs, mappings, roles, preferences. Exclude ephemeral data like message IDs or temp links. Use full sentences describing relationships. Always include this parameter. |
| `session_id` | string | No | ALWAYS pass the session_id that was provided in the SEARCH_TOOLS response. |
| `current_step` | string | No | Short enum for current step of the workflow execution. Eg FETCHING_EMAILS, GENERATING_REPLIES. Always include to keep execution aligned with the workflow. |
| `current_step_metric` | string | No | Progress metrics for the current step - use to track how far execution has advanced. Format as a string "done/total units" - example "10/100 emails", "0/n messages", "3/10 pages". |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
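Putting the parameters together, a hypothetical COMPOSIO_MULTI_EXECUTE_TOOL request for two independent tools might look like the following. The tool slugs, their arguments, and the per-item shape inside `tools` are illustrative assumptions; in practice use only slugs and schemas returned by COMPOSIO_SEARCH_TOOLS:

```python
multi_execute_args = {
    "tools": [  # two logically independent tools, safe to run in parallel
        {"tool_slug": "GITHUB_LIST_PULL_REQUESTS",       # hypothetical item shape
         "arguments": {"owner": "acme", "repo": "sdk"}},
        {"tool_slug": "SLACK_SEND_MESSAGE",
         "arguments": {"channel": "C1234567", "text": "Daily report incoming"}},
    ],
    "thought": "Fetch PRs and notify the channel; the calls are independent.",
    "sync_response_to_workbench": False,  # outputs expected to be small
    "memory": {"slack": ["Channel general has ID C1234567"]},  # never omit
    "current_step": "FETCHING_PRS",
    "current_step_metric": "0/2 tools",
}
```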

### Run bash commands

**Slug:** `COMPOSIO_REMOTE_BASH_TOOL`


Execute bash commands in a REMOTE sandbox for file operations, data processing, and system tasks. Essential for handling large tool responses saved to remote files.

PRIMARY USE CASES:
- Process large tool responses saved by COMPOSIO_MULTI_EXECUTE_TOOL to the remote sandbox
- File system operations; extract specific information from JSON with shell tools like jq, awk, sed, grep, etc.
- Commands run from the /home/user directory by default
    

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `command` | string | Yes | The bash command to execute |
| `session_id` | string | No | ALWAYS pass the session_id that was provided in the SEARCH_TOOLS response. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Execute Code remotely in work bench

**Slug:** `COMPOSIO_REMOTE_WORKBENCH`


  Process **REMOTE FILES** or script BULK TOOL EXECUTIONS using Python code IN A REMOTE SANDBOX. If you can see the data in chat, DON'T USE THIS TOOL.
**ONLY** use this when processing **data stored in a remote file** or when scripting bulk tool executions.

DO NOT USE
- When the complete response is already inline/in-memory, or you only need quick parsing, summarization, or basic math.

USE IF
- To parse/analyze tool outputs saved to a remote file in the sandbox or to script multi-tool chains there.
- For bulk or repeated executions of known Composio tools (e.g., add a label to 100 emails).
- To call APIs via proxy_execute when no Composio tool exists for that API.


OUTPUTS
- Returns a compact result or, if too long, artifacts under `/home/user/.code_out`.

IMPORTANT CODING RULES:
  1. Stepwise Execution: Split work into small steps. Save intermediate outputs in variables or temporary files in `/tmp/`. Call COMPOSIO_REMOTE_WORKBENCH again for the next step. This improves composability and avoids timeouts.
  2. Notebook Persistence: This is a persistent Jupyter notebook cell: variables, functions, imports, and in-memory state from previous code executions are preserved in the notebook's history and available for reuse in later calls. You also have a few helper functions available.
  3. Parallelism & Timeout (CRITICAL): There is a hard timeout of 4 minutes, so complete the code within that. Prioritize PARALLEL execution using ThreadPoolExecutor with suitable concurrency for bulk operations - e.g., call run_composio_tool or invoke_llm in parallel across rows to maximize efficiency.
    3.1 If the data is large, split into smaller batches and call the workbench multiple times to avoid timeouts.
  4. Checkpoints: Implement checkpoints (in memory or files) so that long runs can be resumed from the last completed step.
  5. Schema Safety: Never assume the response schema for run_composio_tool if not known already from previous tools. To inspect schema, either run a simple request **outside** the workbench via COMPOSIO_MULTI_EXECUTE_TOOL or use invoke_llm helper.
  6. LLM Helpers: Always use the invoke_llm helper for summaries, analysis, or field extraction on results. This is a smart LLM that will give much better results than any ad-hoc filtering.
  7. Avoid Meta Loops: Do not use run_composio_tool to call COMPOSIO_MULTI_EXECUTE_TOOL or other COMPOSIO_* meta tools to avoid cycles. Only use it for app tools.
  8. Pagination: Use when data spans multiple pages. Continue fetching pages with the returned next_page_token or cursor until none remains. Parallelize fetching pages if the tool supports page_number.
  9. No Hardcoding: Never hardcode data in code. Always load it from files or tool responses, iterating to construct intermediate or final inputs/outputs.
  10. If the final output is in a workbench file, use upload_local_file to download it - never expose the raw workbench file path to the user. Prefer to download useful artifacts after task is complete.
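Rules 3 and 4 above can be sketched as follows. `run_composio_tool` is stubbed here so the example runs standalone; in the workbench the real helper is already defined, so the stub must be omitted there, and the tool slug is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def run_composio_tool(tool_slug, arguments):
    # Stub for illustration only - in the workbench the real helper already exists.
    return {"data": {"id": arguments["id"], "labeled": True}}, ""

def label_email(email_id):
    # GMAIL_ADD_LABEL_TO_EMAIL is a hypothetical slug; take real ones from search.
    result, error = run_composio_tool(
        "GMAIL_ADD_LABEL_TO_EMAIL", {"id": email_id, "label": "processed"}
    )
    return email_id, (None if error else result)

email_ids = [f"msg_{i}" for i in range(20)]
with ThreadPoolExecutor(max_workers=8) as pool:   # parallel bulk execution (rule 3)
    results = dict(pool.map(label_email, email_ids))

done = [i for i, r in results.items() if r]       # in-memory checkpoint (rule 4)
```

For larger datasets, split `email_ids` into batches and call the workbench once per batch to stay under the 4-minute timeout (rule 3.1).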


ENV & HELPERS:
- Home directory: `/home/user`.
- NOTE: Helper functions are already initialized in the workbench - DO NOT import or redeclare them:
    - `run_composio_tool(tool_slug: str, arguments: dict) -> tuple[Dict[str, Any], str]`: Execute a known Composio **app** tool (from COMPOSIO_SEARCH_TOOLS). Do not invent names; match the tool's input schema. Suited for loops/parallel/bulk over datasets.
      i) run_composio_tool returns JSON with a top-level "data" key. Parse carefully; the structure may be nested.
    - `invoke_llm(query: str) -> tuple[str, str]`: Invoke an LLM for semantic tasks. Pass MAX 200k characters in the input.
      i) NOTE Prompting guidance: when building prompts for invoke_llm, prefer f-strings (or concatenation) so literal braces stay intact. If using str.format, escape braces by doubling them ({{ }}).
      ii) Define the exact JSON schema you want and batch items into smaller groups to stay within the token limit.
    - `upload_local_file(*file_paths) -> tuple[Dict[str, Any], str]`: Upload workbench files to Composio S3/R2 storage. Use this to download any generated files/artifacts from the workbench.
    - `proxy_execute(method, endpoint, toolkit, query_params=None, body=None, headers=None) -> tuple[Any, str]`: Call a toolkit API directly when no Composio tool exists. Only one toolkit can be invoked with proxy_execute per workbench call.
    - `web_search(query: str) -> tuple[str, str]`: Search the web for information.
    - `smart_file_extract(sandbox_file_path: str, show_preview: bool = True) -> tuple[str, str]`: Extracts text from files in the sandbox (e.g., PDF, image).
    - The workbench ships with comprehensive image processing (PIL/Pillow, OpenCV, scikit-image), PyTorch ML libraries, document and report handling tools (pandoc, python-docx, pdfplumber, reportlab), and standard data analysis tools (pandas, numpy, matplotlib) for advanced visual, analytical, and AI tasks.
  All helper functions return a tuple (result, error). Always check error before using result.

## Python Helper Functions for LLM Scripting


### run_composio_tool(tool_slug, arguments)
Executes a known Composio tool via backend API. Do NOT call COMPOSIO_* meta tools to avoid cyclic calls.

    def run_composio_tool(tool_slug: str, arguments: Dict[str, Any]) -> tuple[Dict[str, Any], str]
    # Returns: (tool_response_dict, error_message)
    #   Success: ({"data": {actual_data}}, "") - Note the top-level data
    #   Error:   ({}, "error_message") or (response_data, "error_message")

    result, error = run_composio_tool("GMAIL_FETCH_EMAILS", {"max_results": 1, "user_id": "me"})
    if error:
        print("GMAIL_FETCH_EMAILS error:", error)
    else:
        email_data = result.get("data", {})
        print("Fetched:", email_data)
    


### invoke_llm(query)
Calls LLM for reasoning, analysis, and semantic tasks. Pass MAX 200k characters input.

    def invoke_llm(query: str) -> tuple[str, str]
    # Returns: (llm_response, error_message)

    resp, error = invoke_llm("Summarize the key points from this data")
    if not error:
      print("LLM:", resp)

    # Example: analyze tool response with LLM
    tool_resp, err = run_composio_tool("GMAIL_FETCH_EMAILS", {"max_results": 5, "user_id": "me"})
    if not err:
      parsed = tool_resp.get("data", {})
      resp, err2 = invoke_llm(f"Analyze these emails and summarize: {parsed}")
      if not err2:
        print("LLM Gmail Summary:", resp)
    # TIP: batch prompts to reduce LLM calls.
    


### upload_local_file(*file_paths)
Uploads sandbox files to Composio S3/R2 storage. Single files upload directly; multiple files are auto-zipped.
Use this when you need to upload/download any generated artifacts from the sandbox.

    def upload_local_file(*file_paths) -> tuple[Dict[str, Any], str]
    # Returns: (result_dict, error_string)
    # Success: ({"s3_url": str, "uploaded_file": str, "type": str, "id": str, "s3key": str, "message": str}, "")
    # Error: ({}, "error_message")

    # Single file
    result, error = upload_local_file("/path/to/report.pdf")

    # Multiple files (auto-zipped)
    result, error = upload_local_file("/home/user/doc1.txt", "/home/user/doc2.txt")

    if not error:
      print("Uploaded:", result["s3_url"])


### proxy_execute(method, endpoint, toolkit, query_params=None, body=None, headers=None)
Direct API call to a connected toolkit service.

    def proxy_execute(
        method: Literal["GET","POST","PUT","DELETE","PATCH"],
        endpoint: str,
        toolkit: str,
        query_params: Optional[Dict[str, str]] = None,
        body: Optional[object] = None,
        headers: Optional[Dict[str, str]] = None,
    ) -> tuple[Any, str]
    # Returns: (response_data, error_message)

    # Example: GET request with query parameters
    query_params = {"q": "is:unread", "maxResults": "10"}
    data, error = proxy_execute("GET", "/gmail/v1/users/me/messages", "gmail", query_params=query_params)
    if not error:
      print("Success:", data)


### web_search(query)
Searches the web via Exa AI.

    def web_search(query: str) -> tuple[str, str]
    # Returns: (search_results_text, error_message)

    results, error = web_search("latest developments in AI")
    if not error:
        print("Results:", results)

## Best Practices


### Error-first pattern and defensive parsing (print keys while narrowing)
    res, err = run_composio_tool("GMAIL_FETCH_EMAILS", {"max_results": 5})
    if err:
        print("error:", err); return
    if isinstance(res, dict):
        print("res keys:", list(res.keys()))
        data = res.get("data") or {}
        print("data keys:", list(data.keys()))
        msgs = data.get("messages") or []
        print("messages count:", len(msgs))
        for m in msgs:
            print("subject:", m.get("subject", "<missing>"))

### Parallelize (4-min sandbox timeout)
Adjust concurrency so all tasks finish within 4 minutes.

    import concurrent.futures

    MAX_CONCURRENCY = 10 # Adjust as needed

    def send_bulk_emails(email_list):
        def send_single(email):
            result, error = run_composio_tool("GMAIL_SEND_EMAIL", {
                "to": email["recipient"], "subject": email["subject"], "body": email["body"]
            })
            if error:
                print(f"Failed {email['recipient']}: {error}")
                return {"status": "failed", "error": error}
            return {"status": "sent", "data": result}

        results = []
        with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_CONCURRENCY) as ex:
            futures = [ex.submit(send_single, e) for e in email_list]
            for f in concurrent.futures.as_completed(futures):
                results.append(f.result())
        return results

    email_list = [{"recipient": f"user{i}@example.com", "subject": "Test", "body": "Hello"} for i in range(1000)]
    results = send_bulk_emails(email_list)
    

    

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `code_to_execute` | string | Yes | Python to run inside the persistent **remote Jupyter sandbox**. State (imports, variables, files) is preserved across executions. Keep code concise to minimize tool call latency. Avoid unnecessary comments. |
| `thought` | string | No | Concise objective and high-level plan (no private chain-of-thought). 1 sentence describing what the cell should achieve and why the sandbox is needed. |
| `session_id` | string | No | ALWAYS pass the session_id that was provided in the SEARCH_TOOLS response. |
| `current_step` | string | No | Short enum for current step of the workflow execution. Eg FETCHING_EMAILS, GENERATING_REPLIES. Always include to keep execution aligned with the workflow. |
| `current_step_metric` | string | No | Progress metrics for the current step - use to track how far execution has advanced. Format as a string "done/total units" - example "10/100 emails", "0/n messages", "3/10 pages". |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Search Composio Tools

**Slug:** `COMPOSIO_SEARCH_TOOLS`


  MCP Server Info: COMPOSIO MCP connects 500+ apps—Slack, GitHub, Notion, Google Workspace (Gmail, Sheets, Drive, Calendar), Microsoft (Outlook, Teams), X/Twitter, Figma, Web Search / Deep research, Browser tool (scrape URLs, browser automation), Meta apps (Instagram, Meta Ads), TikTok, AI tools like Nano Banana & Veo3, and more—for seamless cross-app automation.
  Use this MCP server to discover the right tools and the recommended step-by-step plan to execute reliably.
  ALWAYS call this tool first whenever a user mentions or implies an external app, service, or workflow—never say "I don't have access to X/Y app" before calling it.

  Tool Info: Extremely fast discovery tool that returns relevant MCP-callable tools along with a recommended execution plan and common pitfalls for reliable execution.

Usage guidelines:
  - Use this tool whenever kicking off a task. Re-run it when you need additional tools/plans due to missing details, errors, or a changed use case.
  - If the user pivots to a different use case in same chat, you MUST call this tool again with the new use case and generate a new session_id.
  - Specify the use_case with a normalized description of the problem, query, or task. Be clear and precise. Queries can be simple single-app actions or multiple linked queries for complex cross-app workflows.
  - Pass known_fields along with use_case as a string of key–value hints (for example, "channel_name: general") to help the search resolve missing details such as IDs.
  

Splitting guidelines (Important):
  1. Atomic queries: 1 query = 1 tool call. Include hidden prerequisites (e.g., add "get Linear issue" before "update Linear issue").
  2. Include app names: If user names a toolkit, include it in every sub query so intent stays scoped (e.g., "fetch Gmail emails", "reply to Gmail email").
  3. English input: Translate non-English prompts while preserving intent and identifiers.

  Example:
  User query: "send an email to John welcoming him and create a meeting invite for tomorrow"
  Search call: queries: [
    {use_case: "send an email to someone", known_fields: "recipient_name: John"},
    {use_case: "create a meeting invite", known_fields: "meeting_date: tomorrow"}
  ]
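
The example above can be expressed as a concrete arguments payload. A hedged sketch (field values illustrative only):

```python
# Hedged sketch of arguments for a COMPOSIO_SEARCH_TOOLS call, following the
# splitting guidelines above; the values shown are illustrative.
search_args = {
    "session": {"generate_id": True},  # new workflow: ask the server for a session id
    "queries": [
        {"use_case": "send an email to someone",
         "known_fields": "recipient_name: John"},
        {"use_case": "create a meeting invite",
         "known_fields": "meeting_date: tomorrow"},
    ],
}
```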

Plan review checklist (Important):
  - The response includes a detailed execution plan and common pitfalls. You MUST review this plan carefully, adapt it to your current context, and generate your own final step-by-step plan before execution. Execute the steps in order to ensure reliable and accurate execution. Skipping or ignoring required steps can lead to unexpected failures.
  - Check the plan and pitfalls for input parameter nuances (required fields, IDs, formats, limits). Before executing any tool, you MUST review its COMPLETE input schema and provide STRICTLY schema-compliant arguments to avoid invalid-input errors.
  - Determine whether pagination is needed; if a response returns a pagination token and completeness is implied, paginate until exhaustion and do not return partial results.
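
A paginate-until-exhaustion loop consistent with the checklist above can be sketched as follows. The `GMAIL_FETCH_EMAILS` slug, `page_token` argument, and `nextPageToken` field are assumptions, and a stub stands in for the sandbox's `run_composio_tool` helper:

```python
# Hedged sketch: paginate until exhaustion so no partial results are returned.
def fetch_all_messages(run_tool):
    messages, token = [], None
    while True:
        args = {"max_results": 100, "user_id": "me"}
        if token:
            args["page_token"] = token          # assumed parameter name
        result, error = run_tool("GMAIL_FETCH_EMAILS", args)
        if error:
            raise Exception(f"fetch failed: {error}")  # fail loudly
        data = result.get("data", {})
        messages.extend(data.get("messages") or [])
        token = data.get("nextPageToken")       # assumed field name
        if not token:                           # exhausted: return the full set
            return messages

# Stub responses standing in for run_composio_tool, for illustration only.
_pages = [
    ({"data": {"messages": [{"id": "m1"}, {"id": "m2"}], "nextPageToken": "t2"}}, ""),
    ({"data": {"messages": [{"id": "m3"}]}}, ""),
]
all_msgs = fetch_all_messages(lambda slug, args: _pages.pop(0))
print("fetched", len(all_msgs), "messages")
```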

Response:
  - Tools & Input Schemas: The response lists toolkits (apps) and tools suitable for the task, along with their tool_slug, description, input schema / schemaRef, and related tools for prerequisites, alternatives, or next steps.
    - NOTE: Tools with schemaRef instead of input_schema require you to call COMPOSIO_GET_TOOL_SCHEMAS first to load their full input_schema before use.
  - Connection Info: If a toolkit has an active connection, the response includes it along with any available current user information. If no active connection exists, you MUST initiate a new connection via COMPOSIO_MANAGE_CONNECTIONS with the correct toolkit name. DO NOT execute any toolkit tool without an ACTIVE connection.
  - Time Info: The response includes the current UTC time for reference. You can reference UTC time from the response if needed.
  - The tools returned to you through this are to be called via COMPOSIO_MULTI_EXECUTE_TOOL. Ensure each tool execution specifies the correct tool_slug and arguments exactly as defined by the tool's input schema.
    - The response includes a memory parameter containing relevant information about the use case and the known fields that can be used to determine the flow of execution. Any user preferences in memory must be adhered to.

SESSION: ALWAYS set this parameter, first for any workflow. Pass session: {generate_id: true} for new workflows OR session: {id: "EXISTING_ID"} to continue. ALWAYS use the returned session_id in ALL subsequent meta tool calls.
    

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `queries` | array | Yes | List of structured search queries (in English) to process in parallel. Each query represents a specific use case or task. For multi-app or complex workflows, split them into smaller single-app, API-level actions for best accuracy, including implicit prerequisites (e.g., fetch the resource before updating it). Each query returns 5-10 tools. |
| `session` | object | No | Session context for correlating meta tool calls within a workflow. Always pass this parameter. Use {generate_id: true} for new workflows or {id: "EXISTING_ID"} to continue existing workflows. |
| `model` | string | No | Client LLM model name (recommended). Used to optimize planning/search behavior. Ignored if omitted or invalid. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution. Format: "X out of Y searches failed, reasons: <details>" |
| `successful` | boolean | Yes | Whether all searches completed successfully. False if any query failed |

### Wait for connection

**Slug:** `COMPOSIO_WAIT_FOR_CONNECTION`

Wait for connections to be established for given toolkits.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `mode` | string ("any" \| "all") | No | Wait for ANY connection or ALL connections to reach success/failed state (default: any) |
| `toolkits` | array | Yes | List of toolkit slugs to wait for |
| `timeout_seconds` | integer | No | Maximum time to wait in seconds (default: 300, max: 600) |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create / Update Recipe from Workflow

**Slug:** `COMPOSIO_CREATE_UPDATE_RECIPE`

Convert executed workflow into a reusable notebook. Only use when workflow is complete or user explicitly requests.

--- DESCRIPTION FORMAT (MARKDOWN) - MUST BE NEUTRAL ---

Description is for ANY user of this recipe, not just the creator. Keep it generic.
- NO PII (no real emails, names, channel names, repo names)
- NO user-specific defaults (defaults go in defaults_for_required_parameters only)
- Use placeholder examples only

Generate rich markdown with these sections:

## Overview
[2-3 sentences: what it does, what problem it solves]

## How It Works
[End-to-end flow in plain language]

## Key Features
- [Feature 1]
- [Feature 2]

## Step-by-Step Flow
1. **[Step]**: [What happens]
2. **[Step]**: [What happens]

## Apps & Integrations
| App | Purpose |
|-----|---------|
| [App] | [Usage] |

## Inputs Required
| Input | Description | Format |
|-------|-------------|--------|
| channel_name | Slack channel to post to | WITHOUT # prefix |

(No default values here - just format guidance)

## Output
[What the recipe produces]

## Notes & Limitations
- [Edge cases, rate limits, caveats]

--- CODE STRUCTURE ---

Code has 2 parts:
1. DOCSTRING HEADER (comments) - context, learnings, version history
2. EXECUTABLE CODE - clean Python that runs

DOCSTRING HEADER (preserve all history when updating):

"""
RECIPE: [Name]
FLOW: [App1] → [App2] → [Output]

VERSION HISTORY:
v2 (current): [What changed] - [Why]
v1: Initial version

API LEARNINGS:
- [API_NAME]: [Quirk, e.g., Response nested at data.data]

KNOWN ISSUES:
- [Issue and fix]
"""

Then EXECUTABLE CODE follows (keep code clean, learnings stay in docstring).

--- INPUT SCHEMA (USER-FRIENDLY) ---

Ask for: channel_name, repo_name, sheet_url, email_address
Never ask for: channel_id, spreadsheet_id, user_id (resolve in code)
Never ask for large inputs: use invoke_llm to generate content in code

GOOD DESCRIPTIONS (explicit format, generic examples - no PII):
  channel_name: Slack channel WITHOUT # prefix
  repo_name: Repository name only, NOT owner/repo
  google_sheet_url: Full URL from browser
  gmail_label: Label as shown in Gmail sidebar

REQUIRED vs OPTIONAL:
- Required: things that change every run (channel name, date range, search terms)
- Optional: generic settings with sensible defaults (sheet tab, row limits)

--- DEFAULTS FOR REQUIRED PARAMETERS ---

- Provide in defaults_for_required_parameters for all required inputs
- Use values from workflow context
- Use empty string if no value available - never hallucinate
- Match types: string param needs string default, number needs number
- Defaults are private to creator, not shared when recipe is published
- SCHEDULE-FRIENDLY DEFAULTS:
  - Use RELATIVE time references, not absolute dates, unless the user asks otherwise
    ✓ "last_24_hours", "past_week", "7" (days back)
    ✗ "2025-01-15", "December 18, 2025"
  - Never include timezone as an input parameter unless specifically asked
  - Test: "Will this default work if the recipe runs tomorrow?"
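
As an illustrative sketch (parameter names hypothetical), a schedule-friendly defaults object might look like:

```python
# Hypothetical defaults_for_required_parameters payload: values come from the
# workflow context, stay relative in time, and are type-matched to the schema.
defaults_for_required_parameters = {
    "channel_name": "general",       # from workflow context, no # prefix
    "date_range": "last_24_hours",   # relative reference, schedule-friendly
    "search_terms": "",              # no value available: empty string, never invented
}
```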

--- CODING RULES ---

SINGLE EXECUTION: Generate complete notebook that runs in one invocation.
CODE CORRECTNESS: Must be syntactically and semantically correct and executable.
ENVIRONMENT VARIABLES: All inputs via os.environ.get(). Code is shared - no PII.
TIMEOUT: 4 min hard limit. Use ThreadPoolExecutor for bulk operations.
SCHEMA SAFETY: Never assume API response schema. Use invoke_llm to parse unknown responses.
NESTED DATA: APIs often double-nest. Always extract properly before using.
ID RESOLUTION: Convert names to IDs in code using FIND/SEARCH tools.
FAIL LOUDLY: Raise Exception if expected data is empty. Never silently continue.
CONTENT GENERATION: Never hardcode text. Use invoke_llm() for generated content.
DEBUGGING: Timestamp all print statements.
NO META LOOPS: Never call RUBE_* or COMPOSIO_* meta tools via run_composio_tool.
OUTPUT: End with just output variable (no print).
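
Several of these rules (environment-variable inputs, ID resolution, nested-data extraction, failing loudly) can be sketched together. The `SLACK_FIND_CHANNELS` slug is an assumption, and a stub stands in for the sandbox's `run_composio_tool` helper:

```python
import os

def run_composio_tool(slug, args):
    # Stub standing in for the sandbox helper, for illustration only.
    return {"data": {"data": {"channels": [{"id": "C123", "name": "general"}]}}}, ""

channel_name = os.environ.get("channel_name", "general")  # inputs via env vars, no PII

result, error = run_composio_tool("SLACK_FIND_CHANNELS", {"query": channel_name})
if error:
    raise Exception(f"channel lookup failed: {error}")     # fail loudly

data = result.get("data", {})
if "data" in data:                                         # APIs often double-nest
    data = data["data"]
channels = data.get("channels") or []
if not channels:
    raise Exception(f"no channel named {channel_name!r}")  # never silently continue

channel_id = channels[0].get("id") or channels[0].get("channel_id")  # flexible field names
print("resolved channel_id:", channel_id)
```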

--- HELPERS ---

Available in the notebook (do not import them). See RUBE_REMOTE_WORKBENCH for details:
run_composio_tool(slug, args) returns (result, error)
invoke_llm(prompt, reasoning_effort="low") returns (response, error)
  # reasoning_effort: "low" (bulk classification), "medium" (summarization), "high" (creative/complex content)
  # Always specify based on task - use low by default, medium for analysis, high for creative generation
proxy_execute(method, endpoint, toolkit, ...) returns (result, error)
upload_local_file(*paths) returns (result, error)

--- CHECKLIST ---

- Description: Neutral, no PII, no defaults - for any user
- Docstring header: Version history, API learnings (preserve on update)
- Input schema: Human-friendly names, format guidance, no large inputs
- Defaults: In defaults_for_required_parameters, type-matched, from context
- Code: Single execution, os.environ.get(), no PII, fail loudly
- Output: Ends with just output

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `recipe_id` | string | No | Recipe id to update (optional). If not provided, will create a new recipe |
| `name` | string | Yes | Name for the notebook / recipe. Please keep it short (ideally less than five words) |
| `description` | string | Yes | Description for the notebook / recipe |
| `output_schema` | object | Yes | Expected output json schema of the Notebook / Recipe. If the schema has array, please ensure it has "items" in it, so we know what kind of array it is. If the schema has object, please ensure it has "properties" in it, so we know what kind of object it is |
| `input_schema` | object | Yes | Expected input json schema for the Notebook / Recipe. Please keep the schema simple, avoid nested objects and arrays. Types of all input fields should be string only. Each key of this schema will be a single environment variable input to your Notebook |
| `workflow_code` | string | Yes | The Python code that implements the workflow, generated by the LLM based on the executed workflow. Should include all necessary imports, tool executions (via run_composio_tool), and proper error handling. Notebook should always end with output cell (not print) |
| `defaults_for_required_parameters` | object | No | Defaults for required parameters of the notebook / recipe. PII-related values are stored separately after encryption. Ensure the parameters you provide match the recipe's input schema and that all required inputs are covered. It is fine to omit optional parameters |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Execute Recipe

**Slug:** `COMPOSIO_EXECUTE_RECIPE`

Executes a Recipe

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `recipe_id` | string | Yes | Recipe id of the recipe to execute |
| `input_data` | object | Yes | Input object to pass to the Recipe |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Output from the API execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create / Update Recipe from Workflow

**Slug:** `COMPOSIO_UPSERT_RECIPE`


    Convert the executed workflow into a reusable recipe using Python Pydantic code.
    The recipe_slug parameter is required: if a recipe with the provided slug already exists, a new version is created; otherwise a new recipe is created.

    This tool allows you to:

    1. Save the executed workflow as a reusable recipe defined in Python Pydantic code
    2. Recipes are defined using Python Pydantic models that extend the ComposioRecipe base class
    3. You should generate the Python Pydantic code for the recipe based on the executed workflow; see below for instructions on how to generate it
    4. The recipe code must define request and response Pydantic models and implement the execute method

    WHEN TO USE
    - Only run this tool when the workflow is completed and successful or if the user explicitly asked to run this tool

    DO NOT USE
    - When the workflow is still in progress or incomplete and the user has not explicitly asked to run this tool

    IMPORTANT CODING RULES:
    1. Single Execution: Please generate the code for the full recipe that can be executed in a single invocation
    2. Schema Safety: Never assume the response schema for run_composio_tool if not known already from previous tools. To inspect schema, either run a simple request **outside** the workbench via COMPOSIO_MULTI_EXECUTE_TOOL or use invoke_llm helper.
    3. Parallelism & Timeout (CRITICAL): There is a hard timeout of 4 minutes so complete the code within that. Prioritize PARALLEL execution using ThreadPoolExecutor with suitable concurrency for bulk operations - e.g., call run_composio_tool or invoke_llm parallelly across rows to maximize efficiency.
    4. LLM Helpers: You should always use invoke_llm helper for summary, analysis, or field extraction on results. This is a smart LLM that will give much better results than any adhoc filtering.
    5. Avoid Meta Loops: Do not use run_composio_tool to call COMPOSIO_MULTI_EXECUTE_TOOL or other COMPOSIO_* meta tools to avoid cycles. Only use it for app tools.
    6. Pagination: Use when data spans multiple pages. Continue fetching pages with the returned next_page_token or cursor until none remains. Parallelize fetching pages if tool supports page_number.
    7. No Hardcoding: Never hardcode data in code. Always load it from files or tool responses, iterating to construct intermediate or final inputs/outputs.
    8. No PII (CRITICAL): Do NOT hardcode any PII such as email addresses, names, home addresses, or social security numbers. This is very risky.
    9. NEVER HARDCODE CONTENT (CRITICAL): For ANY content generation use case, you MUST use invoke_llm helper instead of hardcoding. This includes but is not limited to: social media posts (Twitter, LinkedIn, Instagram, Facebook, Reddit, etc.), blog posts, articles, email content, SEO reports, market research, jokes, stories, creative writing, product descriptions, news summaries, documentation, or ANY text content that should be unique, personalized, or contextual. Always use invoke_llm with a specific prompt to generate fresh content every time.
    10. Dynamic Content Generation: Structure your code like this for content generation: content_prompt = f"Generate a [specific type] about [topic] that [requirements]" then generated_content, error = invoke_llm(content_prompt). Every execution should produce different, contextually appropriate content.
    11. Code Correctness (CRITICAL): Code must be syntactically and semantically correct and executable.
    12. Recipe Structure: The recipe code must follow this structure:
        - Import required modules: from pydantic import BaseModel, Field; from recipes.base import ComposioRecipe
        - Define Request model: class RecipeRequest(BaseModel) with Field descriptions
        - Define Response model: class RecipeResponse(BaseModel) with Field descriptions
        - Define Recipe class: class RecipeName(ComposioRecipe[RecipeRequest, RecipeResponse]) with name attribute and execute method
        - The execute method should use self.run_composio_tool() to call tools and return RecipeResponse instance
        - IMPORTANT: The recipe class must have a name attribute that is a slug-type identifier (lowercase, underscores, no spaces). Example: name = "weather_lookup" or name = "github_repo_analyzer". This name will be used as the recipe slug identifier.
    13. Pydantic Models: Use proper Field descriptions and types. Request model should have all input parameters with Field(..., description="...")
    14. Response Model: Should match the expected output structure with proper Field descriptions
    15. Tool Execution: Use self.run_composio_tool(tool_slug, arguments) to execute tools within the recipe
    16. Error Handling: Handle errors appropriately and return meaningful error messages
    17. Debugging (CRITICAL): Prefix every print statement in your code with the time at which it executes. This helps investigate latency issues.
    18. If any errors occur while running, raise them so that the person running the recipe can see and fix them
    19. FAIL LOUDLY (CRITICAL): If you expect data but get 0 results, raise Exception immediately. NEVER silently continue or create empty outputs. Recovery loop will fix the code - don't hide issues. Example: if len(items) == 0: raise Exception("Expected data but got none")
    20. NESTED DATA (CRITICAL): APIs often double-nest data. Always extract: data = result.get("data", {}); if "data" in data: data = data["data"]. Try flexible field names: item.get("id") or item.get("channel_id")

    IMPORTANT SCHEMA RULES:
    1. Keep request model simple - include only parameters users would want to vary between runs
    2. Do not ask for large inputs. Use invoke_llm helper to generate large content in the code
    3. HUMAN-FRIENDLY INPUTS (CRITICAL):
       - ✓ Ask for: channel_name, google_sheet_url, repo_name, email_address
       - ✗ Never ask for: channel_id, spreadsheet_id, document_id, user_id
       - Extract IDs in code: use FIND/SEARCH tools to convert names/URLs to IDs
       - For URLs: extract IDs in code with regex (e.g., spreadsheet_id from google_sheet_url)
    4. REQUIRED vs OPTIONAL: Mark as required only if it's specific to the user's workflow and would change every run. Generic settings should be optional with sensible defaults
    5. Identify what varies between runs: channel name, date range, search terms = required. Sheet tab name, row limits, formatting = optional
    6. [CRITICAL]: Use search/find tools in code to convert human inputs (names/URLs) to IDs before calling other tools

    ENV & HELPERS:
    Recipes inherit from ComposioRecipe and have access to these helper methods:

    1. self.run_composio_tool(tool_slug: str, params: Dict) -> Tuple[Dict, Optional[str]]
       - Execute a Composio tool
       - Returns: (result dict, error string or None)

    2. self.invoke_llm(prompt: str) -> Tuple[str, Optional[str]]
       - Invoke an LLM for semantic analysis, reasoning, summarization, and other generic tasks
       - Returns: (completion text, error string or None)

    3. self.web_search(query: str) -> Tuple[str, Optional[str]]
       - Perform web search
       - Returns: (answer text, error string or None)

    NOTE: The recipe code must be valid Python Pydantic code that extends ComposioRecipe base class.
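
A minimal sketch of the structure above, assuming pydantic is available. The real base class lives in `recipes.base` and provides `run_composio_tool`; here a stub stands in so the shape is self-contained, and the `WEATHERMAP_WEATHER` slug is hypothetical:

```python
from typing import Generic, Tuple, TypeVar
from pydantic import BaseModel, Field

Req = TypeVar("Req")
Res = TypeVar("Res")

class ComposioRecipe(Generic[Req, Res]):
    """Stub standing in for recipes.base.ComposioRecipe, for illustration only."""
    def run_composio_tool(self, slug: str, args: dict) -> Tuple[dict, str]:
        return {"data": {"temp_c": 21}}, ""   # stubbed tool response

class RecipeRequest(BaseModel):
    city_name: str = Field(..., description="City to look up, e.g. 'Paris'")

class RecipeResponse(BaseModel):
    summary: str = Field(..., description="One-line weather summary")

class WeatherLookup(ComposioRecipe[RecipeRequest, RecipeResponse]):
    name = "weather_lookup"  # slug-type identifier: lowercase, underscores

    def execute(self, request: RecipeRequest) -> RecipeResponse:
        result, error = self.run_composio_tool(
            "WEATHERMAP_WEATHER", {"location": request.city_name}  # hypothetical slug
        )
        if error:
            raise Exception(f"weather lookup failed: {error}")  # fail loudly
        data = result.get("data", {})
        if "data" in data:  # extract double-nested payloads
            data = data["data"]
        return RecipeResponse(summary=f"{request.city_name}: {data.get('temp_c')} C")

resp = WeatherLookup().execute(RecipeRequest(city_name="Paris"))
print(resp.summary)
```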
  

    

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `recipe_slug` | string | Yes | Recipe slug identifier (required). CREATE vs UPDATE behavior:   - **CREATE**: Pass slug WITHOUT "RECIPE_" prefix (e.g., "weather_lookup" → becomes "RECIPE_WEATHER_LOOKUP_C")   - **UPDATE**: Pass the EXACT full slug from create response (e.g., "RECIPE_WEATHER_LOOKUP_C"). A new version will be created.   Maximum length: 32 characters. |
| `name` | string | Yes | Name for the notebook / recipe. Please keep it short (ideally less than five words) |
| `description` | string | Yes | Description for the notebook / recipe |
| `recipe_code` | string | Yes | The Python Pydantic code that implements the recipe, generated by the LLM based on the executed workflow. Should include Pydantic models for request and response, and a recipe class extending ComposioRecipe with an execute method. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Recipe Details by Slug

**Slug:** `COMPOSIO_GET_RECIPE`


    Get the details of an existing recipe by its slug.
    Returns the recipe's name, description, input/output schemas, and the toolkits it uses.
    Use this to inspect a recipe's structure before executing it.
    

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `recipe_slug` | string | Yes | Recipe slug identifier (e.g., RECIPE_MY_WORKFLOW_C). Use the exact slug returned when the recipe was created. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Recipe details |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Existing Recipe Details

**Slug:** `COMPOSIO_GET_RECIPE_DETAILS`


    Get the details of the existing recipe for a given recipe id.
    

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `recipe_id` | string | Yes | Recipe id of the recipe to fetch details for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Wait for connections

**Slug:** `COMPOSIO_WAIT_FOR_CONNECTIONS`


Wait for user auth to finish. Call ONLY after you have shown the Auth link from COMPOSIO_MANAGE_CONNECTIONS.
Wait until mode=any/all toolkits reach a terminal state (ACTIVE/FAILED) or timeout.

Example Input: { toolkits: ["gmail","outlook"], mode: "any" }
    

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `toolkits` | array | Yes | List of toolkit slugs to wait for. |
| `mode` | string ("any" \| "all") | No | Wait for ANY connection or ALL connections to reach active/failed state (default: any) |
| `session_id` | string | No | ALWAYS pass the session_id that was provided in the SEARCH_TOOLS response. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Tool Schemas

**Slug:** `COMPOSIO_GET_TOOL_SCHEMAS`

Retrieve input schemas for tools by slug. Returns complete parameter definitions required to execute each tool. Make sure to call this tool whenever the response of COMPOSIO_SEARCH_TOOLS does not provide a complete schema for a tool - you must never invent or guess any input parameters.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tool_slugs` | array | Yes | Array of tool slugs to retrieve schemas for. Pass slugs exactly as returned by COMPOSIO_SEARCH_TOOLS. |
| `session_id` | string | No | ALWAYS pass the session_id that was provided in the SEARCH_TOOLS response. |
| `include` | array | No | Schema fields to include. Defaults to ["input_schema"]. Include "output_schema" when calling tools in the workbench to validate response structure. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
