# ScrapeGraph AI

ScrapeGraphAI is an AI-powered web scraping API that enables developers to extract structured data from any website using natural language prompts. Website: https://scrapegraphai.com

- **Category:** AI web scraping
- **Auth:** API_KEY
- **Composio Managed App Available?** N/A
- **Tools:** 27
- **Triggers:** 0
- **Slug:** `SCRAPEGRAPH_AI`
- **Version:** 20260316_00

## Tools

### Convert Webpage to Markdown (V2)

**Slug:** `SCRAPEGRAPH_AI_CONVERT_WEBPAGE_TO_MARKDOWN_V2`

Tool to convert any webpage into clean, well-formatted Markdown with full parameter control. Use when you need advanced options like stealth mode, custom headers, or webhook notifications. Supports all Markdownify API parameters.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `mock` | boolean | No | If true, return mock test data instead of actual conversion. Useful for testing without consuming credits. Default: false. |
| `steps` | array | No | Browser interaction steps to perform before extracting content (e.g., clicking buttons, filling forms, scrolling). |
| `stream` | boolean | No | Whether to return streaming response. Default: false. |
| `headers` | object | No | Optional headers to send with the request, including cookies and user agent. Use to customize request behavior or handle authentication. |
| `stealth` | boolean | No | Enable stealth mode to bypass bot protection using advanced anti-detection techniques. Adds +4 credits to the request cost. Default: false. |
| `wait_ms` | integer | No | The number of milliseconds to wait before scraping the website. Useful for pages that load content dynamically. Default: 3000ms. |
| `branding` | boolean | No | Include ScrapeGraphAI branding in the response. Default: false. |
| `webhook_url` | string | No | Webhook URL to send the job result to when processing completes. Enables async notification of completion. |
| `website_url` | string | Yes | The URL of the webpage to convert to Markdown. Must be a valid HTTP/HTTPS URL. |
| `country_code` | string | No | The country code to use for the scrape (e.g., 'US', 'GB', 'FR'). Determines the geographic location for the request. |
| `render_heavy_js` | boolean | No | Enable rendering of heavy JavaScript. Use for Single Page Applications (SPAs) that require full JavaScript execution. Default: false. |
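The parameters above can be assembled into a request payload as in the following sketch. The field names and defaults come from the table; the helper function name is ours, and sending optional fields only when they differ from the defaults is one reasonable convention, not a requirement of the API:

```python
def build_markdownify_payload(website_url, stealth=False, wait_ms=3000, headers=None):
    """Assemble arguments for SCRAPEGRAPH_AI_CONVERT_WEBPAGE_TO_MARKDOWN_V2.

    Only website_url is required; optional fields are included
    only when they differ from the documented defaults.
    """
    if not website_url.startswith(("http://", "https://")):
        raise ValueError("website_url must be a valid HTTP/HTTPS URL")
    payload = {"website_url": website_url}
    if stealth:            # stealth mode adds +4 credits to the request cost
        payload["stealth"] = True
    if wait_ms != 3000:    # documented default wait is 3000 ms
        payload["wait_ms"] = wait_ms
    if headers:
        payload["headers"] = headers
    return payload

payload = build_markdownify_payload("https://example.com", stealth=True)
```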

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Generate Schema

**Slug:** `SCRAPEGRAPH_AI_GENERATE_SCHEMA`

Generate or modify a JSON schema based on a search query for structured data extraction. Use when you need a schema template for scraping specific data fields.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user_prompt` | string | Yes | The user's search query describing what data structure to generate. Be specific about the fields and structure you need (e.g., 'Extract product name, price, and availability from an e-commerce product page'). |
| `existing_schema` | object | No | Optional existing JSON schema to modify or extend. If provided, the API will refine this schema based on the user_prompt. |
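The schema-refinement path can be illustrated as follows. The schema content here is invented for illustration; only the `user_prompt` and `existing_schema` argument names come from the table above:

```python
# A schema from a previous run, to be extended rather than regenerated.
existing_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number"},
    },
    "required": ["name", "price"],
}

# Arguments for SCRAPEGRAPH_AI_GENERATE_SCHEMA: the prompt describes the
# change, and existing_schema tells the API what to refine.
request = {
    "user_prompt": "Add an 'availability' boolean field to the product schema",
    "existing_schema": existing_schema,
}
```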

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Agentic Scraper History

**Slug:** `SCRAPEGRAPH_AI_GET_AGENTIC_SCRAPER_HISTORY`

Retrieve paginated history of agentic scraper jobs. Use to view past scraping requests, their status, and results.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `page` | integer | No | Page number for pagination. Use to navigate through pages of history results. |
| `page_size` | integer | No | Number of history records to return per page. Maximum is typically 100. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Crawler History

**Slug:** `SCRAPEGRAPH_AI_GET_CRAWLER_HISTORY`

Retrieve the history of crawler jobs for your account. Returns paginated list of past crawler requests with their status, results, and metadata.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `page` | integer | No | Page number for pagination. Must be a positive integer. |
| `page_size` | integer | No | Number of crawler history records to return per page. Must be between 1 and 100. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Credits

**Slug:** `SCRAPEGRAPH_AI_GET_CREDITS`

Retrieve remaining and used credits for your ScrapeGraphAI account. Useful for checking credit availability before bulk scraping operations to avoid mid-run failures.
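A pre-flight affordability check might look like the sketch below. The per-page credit figures are the ones quoted elsewhere in this document (10 credits/page with AI extraction, 2 credits/page for markdown-only mode, +4 for stealth); `remaining_credits` would come from parsing this tool's `data` field, and the function name is ours:

```python
def can_afford(remaining_credits, pages, extraction_mode=True, stealth=False):
    """Estimate whether a bulk run fits within the remaining credit balance.

    Costs per page follow the figures quoted in this document:
    10 with AI extraction, 2 for markdown-only, +4 when stealth is enabled.
    """
    per_page = 10 if extraction_mode else 2
    if stealth:
        per_page += 4
    return remaining_credits >= pages * per_page
```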

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Endpoint Suggestions

**Slug:** `SCRAPEGRAPH_AI_GET_ENDPOINT_SUGGESTIONS`

Tool to get AI-powered suggestions for creating scraping endpoints. Use when you need to identify what data can be extracted from a website and how to structure the scraping logic.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `prompt` | string | Yes | Natural language description of what data you want to extract. Be specific about the type of information needed (e.g., 'product listings', 'user profiles', 'news articles'). |
| `website_url` | string | Yes | The website URL to analyze for scraping opportunities. Must be a valid URL with https:// or http:// protocol. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Live Session URL

**Slug:** `SCRAPEGRAPH_AI_GET_LIVE_SESSION_URL`

Tool to get a URL for a live browser session. Use when you need to interact with a webpage in real-time through a controlled browser environment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `url` | string | Yes | The URL of the webpage to open in the live browser session. Must be a valid HTTP/HTTPS URL. |
| `timeout` | integer | No | Timeout for the live session in seconds. Default is 300 seconds (5 minutes). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Markdownify History

**Slug:** `SCRAPEGRAPH_AI_GET_MARKDOWNIFY_HISTORY`

Tool to retrieve the history of markdownify webpage-to-Markdown conversion jobs. Use when you need to view past markdownify requests and their statuses.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `page` | integer | No | Page number for pagination. Defaults to 1. |
| `page_size` | integer | No | Number of records to return per page. Defaults to 10. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Scrape History

**Slug:** `SCRAPEGRAPH_AI_GET_SCRAPE_HISTORY`

Retrieve the history of scrape jobs from your ScrapeGraphAI account. Use this to check the status of past scrapes, view results, and track credit usage.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `page` | integer | No | Page number for pagination. Starts at 1. |
| `page_size` | integer | No | Number of scrape requests to return per page. Maximum depends on API limits. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Searchscraper History

**Slug:** `SCRAPEGRAPH_AI_GET_SEARCHSCRAPER_HISTORY`

Get the history of searchscraper jobs with pagination support. Use this to retrieve past searchscraper requests, their status, and results.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `page` | integer | No | Page number for pagination. Must be 1 or greater. |
| `page_size` | integer | No | Number of records per page. Must be between 1 and 100. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Sitemap History

**Slug:** `SCRAPEGRAPH_AI_GET_SITEMAP_HISTORY`

Tool to retrieve the history of sitemap extraction jobs. Use when you need to view past sitemap extraction requests, their status, and results.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `page` | integer | No | Page number for pagination. Defaults to 1. |
| `page_size` | integer | No | Number of items per page. Defaults to 10. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Smartscraper History

**Slug:** `SCRAPEGRAPH_AI_GET_SMARTSCRAPER_HISTORY`

Tool to retrieve the history of smartscraper jobs. Use when you need to view past scraping requests and their results.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `page` | integer | No | Page number for pagination (starts at 1). Use to navigate through multiple pages of history. |
| `page_size` | integer | No | Number of records to return per page (1-100). Default is 10. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Usage Timeline

**Slug:** `SCRAPEGRAPH_AI_GET_USAGE_TIMELINE`

Tool to retrieve usage timeline statistics for your ScrapeGraphAI account. Use when you need to visualize or analyze service usage patterns over time.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `days` | string | No | Time range for usage timeline data: `"7"` for the last 7 days, `"14"` for the last 14 days, `"30"` for the last 30 days, or `"all"` for complete history. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Webhook Logs

**Slug:** `SCRAPEGRAPH_AI_GET_WEBHOOK_LOGS`

Tool to retrieve webhook delivery logs for a crawler job. Use when you need to check the status and history of webhook notifications sent for a specific crawler execution.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `crawler_id` | string | Yes | The unique identifier of the crawler job to retrieve webhook logs for. Obtained from starting a crawler job. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Scheduled Jobs

**Slug:** `SCRAPEGRAPH_AI_LIST_SCHEDULED_JOBS`

Retrieve a paginated list of all scheduled scraping jobs for your account. Use this action to view and manage your scheduled jobs, including their configuration, cron schedules, and active status. Supports filtering by service type and active status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `page` | integer | No | Page number for pagination. Must be 1 or greater. |
| `is_active` | string | No | Filter jobs by active status. Use 'true' to show only active jobs, 'false' for inactive jobs, or omit to show all jobs. |
| `page_size` | integer | No | Number of jobs to return per page. Must be between 1 and 100. |
| `service_type` | string | No | Filter jobs by service type (e.g., 'smartscraper', 'markdownify', 'searchscraper'). If not provided, returns jobs of all service types. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Markdownify Status

**Slug:** `SCRAPEGRAPH_AI_MARKDOWNIFY_STATUS`

Check the status and retrieve results of a Markdownify webpage-to-Markdown conversion job. Use this action to poll for the status of an async Markdownify request started via SCRAPEGRAPH_AI_MARKDOWNIFY. Note: The ScrapeGraph AI API typically returns completed results synchronously, so this status endpoint is primarily useful for long-running conversions of large or complex webpages.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `request_id` | string | Yes | The unique identifier (UUID) of the Markdownify request returned by the SCRAPEGRAPH_AI_MARKDOWNIFY action |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Save Endpoint Configuration

**Slug:** `SCRAPEGRAPH_AI_SAVE_ENDPOINT`

Tool to save custom scraping endpoint configurations to ScrapeGraphAI. Use when you need to create reusable scraping endpoints with specific parameters and extraction logic.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `suggestions` | array | Yes | List of endpoint configurations to save. Each configuration defines a custom scraping endpoint with its parameters and extraction logic. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Search Scraper

**Slug:** `SCRAPEGRAPH_AI_SEARCH_SCRAPER`

Perform AI-powered web searches with structured, parsed results. Some sites block scrapers and return empty bodies; treat these as unrecoverable for that URL. JS-rendered pages may yield incomplete content.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `mock` | boolean | No | If true, returns a mock response for testing without consuming credits. Default is false. |
| `query` | string | Yes | The search query describing what you want to find on the web. |
| `language` | string | No | ISO 639-1 language code for the search (e.g., 'en'). |
| `num_results` | integer | No | Number of websites to search (3-20). Each website incurs credit costs. Default is 3. Insufficient credits returns a 402 error; verify balance before setting values above 3. |
| `extraction_mode` | boolean | No | If true (default), uses AI to extract structured data (10 credits/page). If false, returns raw markdown content (2 credits/page). |
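Since each website searched consumes credits, it can help to validate `num_results` and estimate cost before calling the tool. A minimal sketch, using the documented 3–20 range and the per-page credit figures from the `extraction_mode` row (the helper name and the returned `(args, cost)` shape are ours):

```python
def build_search_args(query, num_results=3, extraction_mode=True):
    """Arguments for SCRAPEGRAPH_AI_SEARCH_SCRAPER, plus an estimated credit cost.

    num_results must stay within the documented 3-20 range; each site
    costs 10 credits with AI extraction or 2 credits for raw markdown.
    """
    if not 3 <= num_results <= 20:
        raise ValueError("num_results must be between 3 and 20")
    cost = num_results * (10 if extraction_mode else 2)
    args = {"query": query, "num_results": num_results,
            "extraction_mode": extraction_mode}
    return args, cost

args, estimated_cost = build_search_args("best rust web frameworks", num_results=5)
```

Checking the estimate against your balance first avoids the 402 error mentioned above.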

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Check SearchScraper Status

**Slug:** `SCRAPEGRAPH_AI_SEARCH_SCRAPER_STATUS`

Check the status and results of an asynchronous SearchScraper job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `request_id` | string | Yes | The unique identifier (UUID) of the SearchScraper request obtained from the SearchScraper action response. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### SmartCrawler Status

**Slug:** `SCRAPEGRAPH_AI_SMART_CRAWLER_STATUS`

Check the status and retrieve results of a SmartCrawler web crawling job. Use this action to poll for completion and get the extracted content from a previously started SmartCrawler job. Returns the job status, crawled URLs, page content in markdown/HTML format, and LLM extraction results (if enabled). Implement a polling timeout (e.g., max retries or elapsed time cap) to avoid indefinite loops when waiting for long-running jobs.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `task_id` | string | Yes | The unique identifier of the SmartCrawler task to check status for. Obtained from starting a SmartCrawler job. |
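The polling-timeout advice above can be sketched as follows. The `check_status` callable stands in for invoking this tool, and the assumption that its result carries a `status` key with `completed`/`failed` terminal values is ours, for illustration:

```python
import time

def poll_until_done(check_status, task_id, max_attempts=30, delay_s=2.0):
    """Poll a status callable until the job leaves its running state.

    check_status(task_id) is assumed to return a dict with a 'status' key.
    The attempt cap ensures a stuck job cannot spin forever.
    """
    for attempt in range(max_attempts):
        result = check_status(task_id)
        if result.get("status") in ("completed", "failed"):
            return result
        if attempt < max_attempts - 1:
            time.sleep(delay_s)  # back off between polls
    raise TimeoutError(f"job {task_id} still running after {max_attempts} polls")
```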

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Start Smart Scraper

**Slug:** `SCRAPEGRAPH_AI_SMART_SCRAPER_START`

Start AI-powered web scraping with natural language extraction prompts. When `wait` is false (default), returns a `request_id`; poll for results using SCRAPEGRAPH_AI_SMART_SCRAPER_STATUS. Check `error` and `job_status` fields in the response before using extracted data.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `mock` | boolean | No | If true, return mock test data instead of actual extraction. Useful for testing without consuming credits. |
| `wait` | boolean | No | If true, wait for job completion and return full results. If false (default), return request_id immediately for async polling. Poll using SCRAPEGRAPH_AI_SMART_SCRAPER_STATUS with the returned `request_id`. |
| `steps` | array | No | Browser interaction actions to perform before extraction (e.g., click buttons, fill forms). Each step is a dict with action type and parameters. |
| `cookies` | object | No | Cookies to include in the request for authentication or session management. |
| `headers` | object | No | Custom HTTP headers for the scraping request (e.g., User-Agent, Accept-Language). Used when fetching website_url. |
| `stealth` | boolean | No | Enable anti-bot detection techniques. Adds +4 credit cost per request. Use for sites with bot protection. |
| `plain_text` | boolean | No | Return extracted content as plain text instead of JSON. Useful for simple text extraction. |
| `total_pages` | integer | No | Number of pages to scrape for paginated content (1-100). Default is 1. |
| `user_prompt` | string | Yes | Natural language description of what information to extract from the webpage. Be specific about the data you want (e.g., 'Extract the product name, price, and description'). |
| `website_url` | string | No | Full URL of the webpage to scrape (must include https://). Required if website_html and website_markdown are not provided. |
| `website_html` | string | No | Raw HTML content to scrape (max 2MB). Use this if you already have the page HTML. Required if website_url and website_markdown are not provided. |
| `output_schema` | object | No | JSON Schema defining the structure of extracted data. Helps ensure consistent, structured output. |
| `render_heavy_js` | boolean | No | Enable enhanced JavaScript rendering for Single Page Applications (SPAs) and heavy JS sites. May increase processing time. |
| `website_markdown` | string | No | Raw Markdown content to scrape (max 2MB). Use this if you already have the page content as Markdown. Required if website_url and website_html are not provided. |
| `number_of_scrolls` | integer | No | Number of scroll iterations for infinite scroll pages (0-50). Default is 0. Use for pages that load content on scroll. |
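Because `website_url`, `website_html`, and `website_markdown` are conditionally required (at least one must be supplied), it is worth validating the arguments before calling the tool. A minimal sketch; the helper name is ours, and only the argument names come from the table above:

```python
def build_smart_scraper_args(user_prompt, website_url=None,
                             website_html=None, website_markdown=None,
                             wait=False):
    """Assemble arguments for SCRAPEGRAPH_AI_SMART_SCRAPER_START.

    Per the table above, at least one content source is required, and
    website_url must carry an explicit http(s) scheme.
    """
    sources = {k: v for k, v in {
        "website_url": website_url,
        "website_html": website_html,
        "website_markdown": website_markdown,
    }.items() if v is not None}
    if not sources:
        raise ValueError(
            "one of website_url, website_html, or website_markdown is required")
    if website_url is not None and not website_url.startswith(("http://", "https://")):
        raise ValueError("website_url must include http:// or https://")
    return {"user_prompt": user_prompt, "wait": wait, **sources}

args = build_smart_scraper_args(
    "Extract the product name, price, and description",
    website_url="https://example.com/product",
)
```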

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### SmartScraper Status

**Slug:** `SCRAPEGRAPH_AI_SMART_SCRAPER_STATUS`

Check the status and retrieve results of a SmartScraper web scraping job. Use this action to poll for completion after starting a SmartScraper job with `wait=false`. The `request_id` is returned by the Start Smart Scraper action. Typical workflow:

1. Start a scraping job with SCRAPEGRAPH_AI_SMART_SCRAPER_START (`wait=false`).
2. Check status with this action using the returned `request_id`.
3. Poll until the status is `completed` or `failed`.
4. On `completed`, the `result` field contains the extracted data; a `failed` status populates `error` instead of `result`, so check the `error` field before consuming `result`.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `request_id` | string | Yes | The unique identifier (UUID) of the SmartScraper request returned by the Start SmartScraper action |
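The "check `error` before consuming `result`" step of the workflow can be sketched like this. The response dict shape (`status`/`result`/`error` keys) is an assumption for illustration, as is the helper name:

```python
def handle_smartscraper_response(response):
    """Interpret a polled SmartScraper status response.

    Returns ('pending', None) while the job runs, ('completed', result)
    on success, and raises if the job failed or reported an error.
    """
    status = response.get("status")
    if status == "failed" or response.get("error"):
        raise RuntimeError(response.get("error") or "scrape failed")
    if status == "completed":
        return "completed", response.get("result")
    return "pending", None
```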

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Start Smart Crawler (Async)

**Slug:** `SCRAPEGRAPH_AI_START_SMART_CRAWLER`

Tool to start a multi-page web crawl using SmartCrawler for AI-powered data extraction. Use when you need to extract structured data from multiple pages of a website. Returns immediately with a task_id - use the status check action to monitor progress and retrieve results.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `url` | string | Yes | The starting URL for the crawl. Must be a valid HTTP or HTTPS URL. |
| `depth` | integer | No | Maximum crawl depth - how many link levels to follow from the start URL. Default is 1. |
| `rules` | object | No | Crawl rules for filtering URLs during the crawl process. |
| `prompt` | string | No | Instructions for what data to extract during the crawl. Required when extraction_mode=true. Be specific about the information you want extracted. |
| `schema` | object | No | JSON Schema object defining the structure of extracted data. Helps ensure consistent, structured output in extraction mode. |
| `breadth` | integer | No | Maximum number of links to crawl per depth level. If not specified (null), unlimited breadth is allowed. Ignored when sitemap=true. |
| `sitemap` | boolean | No | Whether to use sitemap.xml for URL discovery instead of link following. When enabled, ignores breadth parameter. Default is false. |
| `stealth` | boolean | No | Enable stealth mode to bypass bot detection and anti-scraping measures. Adds +4 credits per page. Use for sites with bot protection. Default is false. |
| `max_pages` | integer | No | Maximum total number of pages to crawl across all depth levels. Default is 10. |
| `batch_size` | integer | No | Number of pages to process in each batch during the crawl. Higher values may speed up large crawls. Default is 1. |
| `webhook_url` | string | No | Webhook URL to receive the job completion notification. The result will be POSTed to this URL when the crawl finishes. |
| `cache_website` | boolean | No | Whether to cache the website content for faster subsequent crawls. Default is false. |
| `extraction_mode` | boolean | No | When true, enables AI-powered extraction using LLM (default, 10 credits/page). When false, enables markdown conversion mode (NO AI processing, 2 credits/page, 80% cheaper). |
| `render_heavy_js` | boolean | No | Enable enhanced JavaScript rendering for Single Page Applications (SPAs) and sites with heavy JavaScript. May increase processing time. Default is false. |
| `same_domain_only` | boolean | No | Whether to restrict crawling to only pages on the same domain as the starting URL. Default is true. |
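Two interactions in the table above are easy to get wrong: `prompt` is required when `extraction_mode` is true, and `breadth` is ignored when `sitemap` is enabled. A minimal argument-builder sketch encoding both rules (the helper name is ours; argument names come from the table):

```python
def build_crawl_args(url, prompt=None, extraction_mode=True,
                     sitemap=False, breadth=None, max_pages=10):
    """Assemble arguments for SCRAPEGRAPH_AI_START_SMART_CRAWLER.

    Enforces that prompt accompanies extraction mode, and drops
    breadth when sitemap discovery makes it irrelevant.
    """
    if extraction_mode and not prompt:
        raise ValueError("prompt is required when extraction_mode=True")
    args = {"url": url, "extraction_mode": extraction_mode,
            "sitemap": sitemap, "max_pages": max_pages}
    if prompt:
        args["prompt"] = prompt
    if breadth is not None and not sitemap:  # ignored when sitemap=True
        args["breadth"] = breadth
    return args

crawl = build_crawl_args("https://example.com",
                         prompt="Extract article titles and dates",
                         sitemap=True, breadth=5)
```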

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Submit Feedback

**Slug:** `SCRAPEGRAPH_AI_SUBMIT_FEEDBACK`

Submit feedback and ratings for completed ScrapeGraphAI requests.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `rating` | integer | Yes | Star rating from 0 (lowest) to 5 (highest) |
| `request_id` | string | Yes | UUID of the request/session this feedback is for |
| `feedback_text` | string | No | Optional comments about the request |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Submit Product Feedback

**Slug:** `SCRAPEGRAPH_AI_SUBMIT_PRODUCT_FEEDBACK`

Submit product feedback for ScrapeGraphAI. Use to provide ratings, comments, suggestions, and other feedback about the product itself.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Your name |
| `email` | string | No | Your email address |
| `issues` | string | No | Any issues you've encountered |
| `rating` | integer | No | Overall rating from 1 (lowest) to 5 (highest) |
| `company` | string | No | Your company name |
| `disliked` | string | No | What you disliked about the product |
| `use_cases` | array | No | Your use cases for the product |
| `liked_most` | string | No | What you liked most about the product |
| `setup_easy` | boolean | No | Whether the setup process was easy |
| `can_contact` | boolean | No | Whether you consent to be contacted about your feedback |
| `feedback_id` | string | Yes | UUID of the feedback submission |
| `contact_method` | string | No | Preferred method of contact |
| `how_discovered` | string | No | How you discovered ScrapeGraphAI |
| `recommend_score` | integer | No | How likely you are to recommend this product (0-10 scale) |
| `usage_frequency` | string | No | How frequently you use the product |
| `requested_features` | string | No | Features you'd like to see added |
| `improvement_suggestions` | string | No | Your suggestions for product improvements |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Convert JSON to TOON Format

**Slug:** `SCRAPEGRAPH_AI_TOONIFY`

Tool to convert JSON data to TOON (Token-Oriented Object Notation) format. Use when you need to reduce token usage for LLM processing while maintaining data structure.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | No | The JSON data to convert to TOON (Token-Oriented Object Notation) format. Can be any valid JSON object or array. TOON format reduces token usage by 30-60% compared to JSON while maintaining structure and readability. If not provided, returns empty response. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Validate API Key

**Slug:** `SCRAPEGRAPH_AI_VALIDATE_API_KEY`

Validate your ScrapeGraphAI API key to ensure it is active and authorized. Use this action to check API key validity before making other API calls.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
