# DataRobot

DataRobot is a machine learning platform that automates model building, deployment, and monitoring, enabling organizations to derive predictive insights from large datasets.

- **Category:** artificial intelligence
- **Auth:** API_KEY
- **Composio Managed App Available?** N/A
- **Tools:** 840
- **Triggers:** 0
- **Slug:** `DATAROBOT`
- **Version:** 20260312_00

## Tools

### Add Users to Group

**Slug:** `DATAROBOT_ADD_USERS_TO_GROUP`

Tool to add one or more users to a DataRobot user group by groupId. Use when you have a valid groupId and want to add existing usernames. Limit 100 users per request.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `users` | array | Yes | List of users to add; must contain between 1 and 100 items. |
| `groupId` | string | Yes | The identifier of the user group to which users will be added. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
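The input above can be assembled and validated client-side before invoking the tool. A minimal sketch; the shape of each `users` item (a dict with a `username` key) is an assumption, so confirm it against the live schema:

```python
def build_add_users_payload(group_id, usernames):
    """Build the DATAROBOT_ADD_USERS_TO_GROUP input payload.

    Enforces the documented 1-100 users-per-request limit. Each item in
    `users` is assumed to be a dict with a `username` key.
    """
    if not 1 <= len(usernames) <= 100:
        raise ValueError("users must contain between 1 and 100 items")
    return {
        "groupId": group_id,
        "users": [{"username": name} for name in usernames],
    }

# Placeholder groupId and emails for illustration.
payload = build_add_users_payload(
    "64f0c0ffee0123456789abcd",
    ["alice@example.com", "bob@example.com"],
)
```

Batches larger than 100 users should be split into multiple calls rather than sent in one request.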

### Add User to Organization

**Slug:** `DATAROBOT_ADD_USER_TO_ORGANIZATION`

Tool to add a user to an existing organization. Use when you have an organizationId and wish to add or create a user within it.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `create` | boolean | No | Set to true to create a new user account if the user doesn't exist. When true, 'password' is required. |
| `language` | string ("ar_001" | "de_DE" | "en" | "es_419" | "fr" | "ja" | "ko" | "pt_BR" | "test" | "uk_UA") | No | UI language preference. Options: ar_001 (Arabic), de_DE (German), en (English), es_419 (Spanish), fr (French), ja (Japanese), ko (Korean), pt_BR (Portuguese), uk_UA (Ukrainian). |
| `lastName` | string | No | Last name of the user (max 100 characters). |
| `orgAdmin` | boolean | No | Set to true to grant organization administrator privileges to this user. |
| `password` | string | No | Password for the new user account. Required when 'create' is true. Must meet password complexity requirements. |
| `username` | string | Yes | The email address of the user to add. Must be a valid email format. |
| `firstName` | string | No | First name of the user (max 100 characters). |
| `accessRoleIds` | array | No | List of access role IDs to assign to the user (max 100). Get valid IDs from list_access_roles action. |
| `organizationId` | string | Yes | The unique identifier of the organization. Obtain this from get_account_info or list_organization_users. |
| `requireClickthrough` | boolean | No | Set to true to require the user to accept a clickthrough agreement before accessing the platform. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
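The coupling between `create` and `password` in the input table can be enforced before submission. A minimal sketch; the helper name and keyword pass-through are illustrative, not part of the API:

```python
def build_add_user_payload(organization_id, username, *, create=False,
                           password=None, **optional):
    """Assemble the DATAROBOT_ADD_USER_TO_ORGANIZATION input payload.

    Mirrors the rule from the input table: 'password' is required
    whenever 'create' is true. Other optional fields (firstName,
    orgAdmin, ...) pass through unchanged.
    """
    if create and not password:
        raise ValueError("password is required when create is true")
    payload = {"organizationId": organization_id, "username": username,
               "create": create}
    if password is not None:
        payload["password"] = password
    payload.update(optional)
    return payload

# Placeholder IDs and credentials for illustration.
payload = build_add_user_payload(
    "64f0c0ffee0123456789abcd", "alice@example.com",
    create=True, password="S3cure-Example!", firstName="Alice",
)
```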

### Analyze Dataset Definition

**Slug:** `DATAROBOT_ANALYZE_DATASET_DEFINITION`

Tool to analyze a dataset definition by ID. Use when you need to trigger an analysis job that retrieves dataset metadata, schema, and statistics.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetDefinitionId` | string | Yes | The ID of the dataset definition to analyze. Use DATAROBOT_GET_DATASET_DEFINITION or DATAROBOT_LIST_DATASET_DEFINITIONS to find available dataset definition IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Archive Model Package

**Slug:** `DATAROBOT_ARCHIVE_MODEL_PACKAGE`

Tool to archive a DataRobot model package. Use when you need to archive a model package that is no longer actively used.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelPackageId` | string | Yes | ID of the model package to archive. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Build Java Scoring Code Package

**Slug:** `DATAROBOT_BUILD_SCORING_CODE_JAVA_PACKAGE`

Initiates an asynchronous build of a Java JAR package containing DataRobot Scoring Code for a deployment. The JAR can be executed locally for predictions or used as a library in Java applications. Use after confirming the deployment is active and supports scoring code. Poll the returned statusId or location URL to track build progress.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `options` | object | No | Options controlling what features to include in the Java scoring code package. |
| `deploymentId` | string | Yes | Unique identifier of the deployment to build scoring code for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
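Because the build is asynchronous, callers typically poll until a terminal status appears. A generic polling sketch; the status strings are assumptions, and fetching the actual status from the returned statusId or location URL is left to your HTTP client:

```python
import time


def poll(check, interval=5.0, timeout=600.0):
    """Poll `check` (a callable returning a status string) until a
    terminal status is seen or `timeout` seconds elapse.

    The terminal status names below are assumptions for illustration;
    match them to whatever the status endpoint actually returns.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = check()
        if status in ("COMPLETED", "ERROR", "ABORTED"):
            return status
        time.sleep(interval)
    raise TimeoutError("scoring code build did not finish in time")
```

Usage: pass a closure that fetches and returns the current build status, e.g. `poll(lambda: fetch_status(status_url))`.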

### Cancel Notebook Job

**Slug:** `DATAROBOT_CANCEL_NOTEBOOK_JOB`

Tool to cancel a running or pending notebook job execution. Use when you need to stop a notebook job that is currently executing or queued. The cancellation is immediate and cannot be undone.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID of the notebook job to cancel. Must be a valid 24-character hexadecimal ObjectId. Obtain from DATAROBOT_LIST_NOTEBOOK_JOBS_RUN_HISTORY. |
| `useCaseId` | string | Yes | The ID of the use case that contains the notebook job. Required to identify the project context for the cancellation. Obtain from DATAROBOT_LIST_PROJECTS or project creation. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
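The `id` field's 24-character hexadecimal ObjectId format can be checked locally before calling the tool. A small sketch (the case-insensitive match is a conservative assumption):

```python
import re

# 24-character hexadecimal ObjectId, as required by the `id` field above.
OBJECT_ID = re.compile(r"^[0-9a-fA-F]{24}$")


def is_valid_object_id(value: str) -> bool:
    """Return True if `value` looks like a valid notebook job ObjectId."""
    return bool(OBJECT_ID.match(value))
```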

### Cancel Project Job

**Slug:** `DATAROBOT_CANCEL_PROJECT_JOB`

Tool to cancel a pending job for a project. Use when you need to stop a queued or running job after confirming its ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `job_id` | string | Yes | The ID of the job to cancel. Obtain from DATAROBOT_LIST_PROJECT_JOBS by filtering for 'queue' or 'inprogress' status. |
| `project_id` | string | Yes | The ID of the DataRobot project containing the job to cancel. Obtain from DATAROBOT_LIST_PROJECTS. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Check Project Status

**Slug:** `DATAROBOT_CHECK_PROJECT_STATUS`

Tool to check the status of a DataRobot project. Use after creating or loading a project to monitor its stage and Autopilot completion.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The unique identifier of the DataRobot project to check status for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Clone Application Template

**Slug:** `DATAROBOT_CLONE_APPLICATION_TEMPLATE`

Tool to clone an application template into a codespace. Use when you need to create a new codespace from an existing template.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `applicationTemplateId` | string | Yes | The ID of the application template to clone |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Clone Files Collection

**Slug:** `DATAROBOT_CLONE_FILES`

Tool to create a duplicate files collection in DataRobot. Use when you need to clone an existing files catalog item, optionally excluding specific files. The cloning operation is asynchronous; poll the returned location URL to check status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `omit` | array | No | File names to skip when cloning the files. Provide a list of filenames (e.g., ['file1.csv', 'file2.txt']) to exclude from the cloned collection. |
| `catalogId` | string | Yes | The catalog item ID of the files collection to clone |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Copy File or Folder

**Slug:** `DATAROBOT_COPY_FILE`

Tool to copy a file or folder within the same DataRobot catalog item. Use when you need to duplicate files or folders in the data registry with configurable overwrite strategies.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `source` | string | Yes | The file or folder path to copy. Folder paths should end with '/'. |
| `target` | string | Yes | The new file or folder path to copy to. Folder paths should end with '/'. |
| `catalogId` | string | Yes | The catalog item ID containing the file or folder to copy. |
| `overwrite` | string ("RENAME" | "REPLACE" | "SKIP" | "ERROR") | No | How to deal with a name conflict in the target location. RENAME (default): rename a duplicate file using the `<filename> (n).ext` pattern. REPLACE: prefer the file you copy. SKIP: prefer the file existing in the target. ERROR: fail with an error in case of a naming conflict. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
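The trailing-slash rule and the overwrite strategies above can be validated before submission. A sketch; the cross-check that source and target are both files or both folders is an assumption of this helper, not a documented constraint:

```python
def build_copy_payload(catalog_id, source, target, overwrite="RENAME"):
    """Input payload for DATAROBOT_COPY_FILE.

    Validates the documented overwrite strategies; also checks (as an
    extra assumption) that source and target are both folder paths or
    both file paths, since folder paths must end with '/'.
    """
    if overwrite not in {"RENAME", "REPLACE", "SKIP", "ERROR"}:
        raise ValueError(f"unknown overwrite strategy: {overwrite}")
    if source.endswith("/") != target.endswith("/"):
        raise ValueError("source and target must both be files or both be folders")
    return {"catalogId": catalog_id, "source": source,
            "target": target, "overwrite": overwrite}

# Placeholder catalog ID and paths for illustration.
payload = build_copy_payload("64f0c0ffee0123456789abcd",
                             "reports/2024/", "reports/archive/2024/",
                             overwrite="SKIP")
```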

### Create Access Role

**Slug:** `DATAROBOT_CREATE_ACCESS_ROLE`

Create a custom Access Role for an organization. Use to define tailored permission sets controlling user access to DataRobot entities (projects, deployments, datasets, etc.). Requires an organization ID which can be obtained from the account info endpoint.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Unique name for the Access Role within the organization. |
| `permissions` | object | Yes | Mapping of entity to its permission settings. Include only entities you wish to set. |
| `organizationId` | string | Yes | Organization ID to associate the role with. Required; obtain your organization ID from the account info endpoint. Creating roles with null organizationId is typically restricted to system administrators. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
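A payload sketch for this tool is below. The entity names and permission flags inside `permissions` are hypothetical placeholders; fetch the real permission schema from the API (for example via the access-role listing endpoint) before relying on them:

```python
# Illustrative payload for DATAROBOT_CREATE_ACCESS_ROLE.
payload = {
    "name": "project-readers",
    # Obtain the real ID from the account info endpoint; this one is a placeholder.
    "organizationId": "64f0c0ffee0123456789abcd",
    # Hypothetical entity names and flags; consult the live schema.
    "permissions": {
        "projects": {"canView": True, "canEdit": False},
        "deployments": {"canView": True},
    },
}
```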

### Create and Start Notebook Codespace

**Slug:** `DATAROBOT_CREATE_AND_START_NOTEBOOK_CODESPACE`

Tool to create and start a new notebook codespace in DataRobot. Use when you need to set up an interactive Jupyter notebook environment within a use case. Note: DataRobot limits concurrent notebooks to 4 running in parallel. If the limit is reached, stop or delete an existing notebook before creating a new one.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `useCaseId` | string | Yes | The ID of the use case to associate the notebook codespace with. Use DATAROBOT_LIST_USE_CASES to find available use case IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Automated Document

**Slug:** `DATAROBOT_CREATE_AUTOMATED_DOCUMENT`

Tool to request generation of automated compliance documents in DataRobot. Use when you need to generate documentation for projects, models, or deployments. The document generation is asynchronous. Poll the returned location URL to check status and retrieve the document once generation completes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `locale` | string ("EN_US" | "JA_JP") | No | Enum for document locale options. |
| `entityId` | string | Yes | ID of the entity to generate the document for. This can be a model ID or project ID, depending on the document type. For AUTOPILOT_SUMMARY use project ID, for MODEL_COMPLIANCE use model ID. |
| `templateId` | string | No | Template ID to use for the document outline. If not provided, DataRobot's standard template is used. |
| `documentType` | string ("AUTOPILOT_SUMMARY" | "MODEL_COMPLIANCE" | "DEPLOYMENT_REPORT" | "MODEL_COMPLIANCE_GEN_AI") | Yes | Type of automated document to generate: AUTOPILOT_SUMMARY (project summary), MODEL_COMPLIANCE (model compliance documentation), DEPLOYMENT_REPORT (deployment report with metrics), or MODEL_COMPLIANCE_GEN_AI (generative AI model compliance). |
| `outputFormat` | string ("docx" | "html") | Yes | Format for the generated document: 'docx' (Microsoft Word) or 'html' (HTML format). |
| `documentTypeSpecificParameters` | object | No | Parameters specific to DEPLOYMENT_REPORT document type. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
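The documentType/entityId pairing in the table (project ID for AUTOPILOT_SUMMARY, model ID for MODEL_COMPLIANCE) can be captured in a small builder. A sketch; the helper itself is illustrative:

```python
def build_document_request(document_type, entity_id,
                           output_format="docx", locale=None):
    """Input payload for DATAROBOT_CREATE_AUTOMATED_DOCUMENT.

    Per the input table, AUTOPILOT_SUMMARY takes a project ID and
    MODEL_COMPLIANCE takes a model ID as `entityId`.
    """
    if document_type not in {"AUTOPILOT_SUMMARY", "MODEL_COMPLIANCE",
                             "DEPLOYMENT_REPORT", "MODEL_COMPLIANCE_GEN_AI"}:
        raise ValueError(f"unknown documentType: {document_type}")
    if output_format not in {"docx", "html"}:
        raise ValueError(f"unknown outputFormat: {output_format}")
    payload = {"documentType": document_type, "entityId": entity_id,
               "outputFormat": output_format}
    if locale is not None:
        payload["locale"] = locale
    return payload

# Placeholder project ID for illustration.
payload = build_document_request("AUTOPILOT_SUMMARY",
                                 "64f0c0ffee0123456789abcd",
                                 output_format="html", locale="EN_US")
```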

### Create Batch Monitoring Job

**Slug:** `DATAROBOT_CREATE_BATCH_MONITORING`

Tool to create a DataRobot Batch Monitoring job for tracking model performance and data drift on batch predictions. Use when you need to monitor predictions at scale. The job is created asynchronously and returns immediately with a job ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | No | ID of the leaderboard model used to process the predictions dataset. |
| `chunkSize` | string | No | Strategy for determining chunk size. Can be "auto", "fixed", "dynamic", or a fixed size in bytes. |
| `csvSettings` | object | Yes | CSV settings for intake and output files. |
| `abortOnError` | boolean | No | Should this job abort if too many errors are encountered. |
| `batchJobType` | string ("monitoring" | "prediction") | No | Batch job type enumeration |
| `deploymentId` | string | Yes | ID of the deployment used in the job to process the predictions dataset. Use LIST_DEPLOYMENTS to find available deployments. |
| `thresholdLow` | number | No | Compute explanations for predictions below this threshold. |
| `numConcurrent` | integer | No | Number of simultaneous requests to run against the prediction instance. |
| `pinnedModelId` | string | No | Specify a model ID used for scoring. |
| `thresholdHigh` | number | No | Compute explanations for predictions above this threshold. |
| `intakeSettings` | object | Yes | Intake option for the job. Must include "type" field. Example for HTTP: {"type": "http", "url": "https://example.com/test.csv"}. Example for local file: {"type": "localFile"}. |
| `modelPackageId` | string | No | ID of model package from registry used for processing predictions dataset. |
| `outputSettings` | object | No | Output option for the job. Must include "type" field if provided. Example: {"type": "localFile"}. |
| `maxExplanations` | integer | No | Number of explanations requested. Will be ordered by strength. 0 means no explanations. |
| `monitoringColumns` | object | No | Column names mapping for monitoring |
| `skipDriftTracking` | boolean | No | Skip drift tracking for this job. |
| `passthroughColumns` | array | No | Pass through columns from the original dataset. Each column name must be 1-50 characters. |
| `predictionInstance` | object | No | Override the default prediction instance from the deployment |
| `timeseriesSettings` | object | No | Time Series settings for time series jobs |
| `predictionThreshold` | number | No | Threshold for binary classification. Sets the boundary between FALSE and TRUE predictions (0.0-1.0). |
| `columnNamesRemapping` | string | No | Remap (rename or remove columns from) the output from this job. |
| `explanationAlgorithm` | string ("shap" | "xemp") | No | Algorithm for calculating prediction explanations |
| `includeProbabilities` | boolean | No | Include probabilities for all classes. |
| `explanationClassNames` | array | No | Class names for which explanations are returned. Mutually exclusive with explanationNumTopClasses. |
| `monitoringAggregation` | object | No | Defines the aggregation policy for monitoring jobs |
| `monitoringBatchPrefix` | string | No | Name of the batch to create with this job. |
| `passthroughColumnsSet` | string | No | Pass through all columns from the original dataset. Set to "all" to enable. |
| `includePredictionStatus` | boolean | No | Include prediction status column in the output. |
| `explanationNumTopClasses` | integer | No | Number of top predicted classes for explanations. Mutually exclusive with explanationClassNames. |
| `monitoringOutputSettings` | object | No | Output settings for monitoring jobs |
| `predictionWarningEnabled` | boolean | No | Enable prediction warnings. |
| `secondaryDatasetsConfigId` | string | No | Configuration id for secondary datasets to use when making a prediction. |
| `includeProbabilitiesClasses` | array | No | Include only probabilities for these specific class names. |
| `disableRowLevelErrorHandling` | boolean | No | Skip row by row error handling. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
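A minimal payload sketch using the HTTP intake example from the table. The `csvSettings` keys shown are common CSV options and are an assumption; confirm them against the live schema before use:

```python
# Minimal DATAROBOT_CREATE_BATCH_MONITORING payload sketch.
payload = {
    # Placeholder deployment ID for illustration.
    "deploymentId": "64f0c0ffee0123456789abcd",
    # Assumed csvSettings keys; verify against the actual schema.
    "csvSettings": {"delimiter": ",", "quotechar": '"', "encoding": "utf-8"},
    # HTTP intake example taken from the input table.
    "intakeSettings": {"type": "http", "url": "https://example.com/test.csv"},
    "outputSettings": {"type": "localFile"},
}
```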

### Create Batch Monitoring Job Definition

**Slug:** `DATAROBOT_CREATE_BATCH_MONITORING_JOB_DEFINITION`

Tool to create a Batch Monitoring job definition for tracking deployment performance and data drift. Use when you need to set up scheduled or manual monitoring jobs for a DataRobot deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | A human-readable name for the definition; must be unique across organizations. |
| `enabled` | boolean | No | If this job definition is enabled as a scheduled job. Optional if no schedule is supplied. |
| `modelId` | string | No | ID of the leaderboard model used in the job to process the predictions dataset. |
| `schedule` | object | No | Cron-like schedule configuration |
| `chunkSize` | string | No | Which strategy should be used to determine the chunk size. Can be either a named strategy like 'auto' or a fixed size in bytes. |
| `csvSettings` | object | Yes | The CSV settings used for this job |
| `abortOnError` | boolean | No | Should this job abort if too many errors are encountered |
| `batchJobType` | string ("monitoring" | "prediction") | No | Type of batch job |
| `deploymentId` | string | Yes | ID of deployment that the monitoring job is associated with. |
| `thresholdLow` | number | No | Compute explanations for predictions below this threshold |
| `numConcurrent` | integer | No | Number of simultaneous requests to run against the prediction instance |
| `pinnedModelId` | string | No | Specify a model ID used for scoring |
| `thresholdHigh` | number | No | Compute explanations for predictions above this threshold |
| `intakeSettings` | object | Yes | The intake option configured for this job. Must include "type" field (e.g., http, s3, localFile, etc.) and corresponding configuration. |
| `modelPackageId` | string | No | ID of the model package from the registry used in the job to process the predictions dataset. |
| `outputSettings` | object | No | The output option configured for this job |
| `maxExplanations` | integer | No | Number of explanations requested. Will be ordered by strength. |
| `monitoringColumns` | object | No | Column names mapping for monitoring |
| `skipDriftTracking` | boolean | No | Skip drift tracking for this job. |
| `passthroughColumns` | array | No | Pass through columns from the original dataset |
| `predictionInstance` | object | No | Override prediction instance configuration |
| `timeseriesSettings` | object | No | Time Series settings for time series jobs |
| `predictionThreshold` | number | No | Threshold is the point that sets the class boundary for a predicted value. |
| `columnNamesRemapping` | string | No | Remap (rename or remove columns from) the output from this job |
| `explanationAlgorithm` | string ("shap" | "xemp") | No | Algorithm used to calculate prediction explanations |
| `includeProbabilities` | boolean | No | Include probabilities for all classes |
| `explanationClassNames` | array | No | Sets a list of selected class names for which corresponding explanations are returned in each row. |
| `monitoringAggregation` | object | No | Aggregation policy for monitoring jobs |
| `monitoringBatchPrefix` | string | No | Name of the batch to create with this job |
| `passthroughColumnsSet` | string | No | Pass through all columns from the original dataset (set to "all") |
| `includePredictionStatus` | boolean | No | Include prediction status column in the output |
| `explanationNumTopClasses` | integer | No | Sets the number of most probable (top predicted) classes for which corresponding explanations are returned in each row. |
| `monitoringOutputSettings` | object | No | Output settings for monitoring jobs |
| `predictionWarningEnabled` | boolean | No | Enable prediction warnings. |
| `secondaryDatasetsConfigId` | string | No | Configuration id for secondary datasets to use when making a prediction. |
| `includeProbabilitiesClasses` | array | No | Include only probabilities for these specific class names. |
| `disableRowLevelErrorHandling` | boolean | No | Skip row by row error handling |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Batch Prediction Job Definition

**Slug:** `DATAROBOT_CREATE_BATCH_PREDICTION_JOB_DEFINITION`

Tool to create a Batch Prediction job definition. Use when you need to run manual or scheduled scoring on large datasets.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Human-readable unique name for the job definition. |
| `enabled` | boolean | No | Enable scheduler execution for this definition. |
| `schedule` | object | No | Cron-like schedule configuration |
| `chunkSize` | string | No | Chunking strategy: 'auto', 'fixed', 'dynamic', or byte size 20–41943040. |
| `abortOnError` | boolean | No | Abort job if too many row errors occur. Default true. |
| `batchJobType` | string ("prediction" | "monitoring") | No | Type of batch job. Defaults to 'prediction' if omitted. |
| `deploymentId` | string | Yes | Target deployment ID for scoring. |
| `intakeSettings` | object | Yes | Intake adapter configuration; see Prediction intake options. |
| `outputSettings` | object | No | Output adapter configuration; see Prediction output options. |
| `includeProbabilities` | boolean | No | Include probabilities for all classes. Default true. |
| `includePredictionStatus` | boolean | No | Include prediction status column. Default false. |
| `includeProbabilitiesClasses` | array | No | Only include probabilities for specified class names. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Batch Predictions

**Slug:** `DATAROBOT_CREATE_BATCH_PREDICTIONS`

Tool to create a new batch predictions job in DataRobot. Use when you need to score large datasets using a deployment, model, or model package. The job processes predictions asynchronously.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | No | ID of leaderboard model for processing predictions. Either deploymentId, modelId, or modelPackageId is required. |
| `chunkSize` | string | No | Chunk size strategy: "auto", or byte size (20-41943040). |
| `csvSettings` | object | No | CSV format settings for intake and output. |
| `abortOnError` | boolean | No | Abort job if too many errors are encountered. |
| `batchJobType` | string ("monitoring" | "prediction") | No | Batch job type options |
| `deploymentId` | string | No | ID of deployment for processing predictions. Either deploymentId, modelId, or modelPackageId is required. |
| `thresholdLow` | number | No | Compute explanations for predictions below this threshold. |
| `numConcurrent` | integer | No | Number of simultaneous requests to prediction instance. |
| `pinnedModelId` | string | No | Specify a model ID for scoring. |
| `thresholdHigh` | number | No | Compute explanations for predictions above this threshold. |
| `intakeSettings` | object | Yes | Intake configuration for input data. |
| `modelPackageId` | string | No | ID of model package from registry for processing predictions. Either deploymentId, modelId, or modelPackageId is required. |
| `outputSettings` | object | No | Output settings for batch prediction results |
| `maxExplanations` | integer | No | Number of explanations to include, ordered by strength. |
| `skipDriftTracking` | boolean | No | Skip drift tracking for this job. |
| `passthroughColumns` | array | No | Specific column names to pass through from original dataset (max 100 items). |
| `predictionInstance` | object | No | Override default prediction instance settings |
| `timeseriesSettings` | object | No | Time series specific settings |
| `predictionThreshold` | number | No | Threshold for binary classification (0.0-1.0). |
| `columnNamesRemapping` | string | No | Remap or remove columns from output (dict for rename, list for removal). |
| `explanationAlgorithm` | string ("shap" | "xemp") | No | Algorithm for prediction explanations |
| `includeProbabilities` | boolean | No | Include probabilities for all classes. |
| `explanationClassNames` | array | No | Class names for which to return explanations (mutually exclusive with explanationNumTopClasses). Must have 1-100 items. |
| `monitoringBatchPrefix` | string | No | Name of the batch to create with this job. |
| `passthroughColumnsSet` | string ("all") | No | Pass through all columns |
| `includePredictionStatus` | boolean | No | Include prediction status column in output. |
| `explanationNumTopClasses` | integer | No | Number of top predicted classes to explain (mutually exclusive with explanationClassNames). Default is 1 if neither specified. |
| `predictionWarningEnabled` | boolean | No | Enable prediction warnings. |
| `secondaryDatasetsConfigId` | string | No | Configuration ID for secondary datasets. |
| `includeProbabilitiesClasses` | array | No | Include only probabilities for these specific class names (max 100 items). |
| `disableRowLevelErrorHandling` | boolean | No | Skip row-by-row error handling. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
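Two constraints from the input table are easy to check client-side: a scoring target (one of `deploymentId`, `modelId`, `modelPackageId`) must be set, and the two explanation selectors are mutually exclusive. A validation sketch:

```python
def validate_batch_prediction_payload(payload):
    """Pre-flight checks for a DATAROBOT_CREATE_BATCH_PREDICTIONS payload.

    Enforces: at least one scoring target, and mutual exclusivity of
    explanationClassNames and explanationNumTopClasses.
    """
    targets = [k for k in ("deploymentId", "modelId", "modelPackageId")
               if payload.get(k)]
    if not targets:
        raise ValueError(
            "one of deploymentId, modelId, or modelPackageId is required")
    if payload.get("explanationClassNames") and payload.get("explanationNumTopClasses"):
        raise ValueError(
            "explanationClassNames and explanationNumTopClasses are mutually exclusive")
    return payload
```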

### Create Batch Predictions From Existing

**Slug:** `DATAROBOT_CREATE_BATCH_PREDICTIONS_FROM_EXISTING`

Tool to create a new Batch Prediction job based on an existing job's configuration. Use when you need to re-run predictions with the same settings.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `partNumber` | integer | No | The part number of the CSV being uploaded when using multipart upload. Defaults to 0 for single-part uploads. |
| `predictionJobId` | string | Yes | ID of the existing Batch Prediction job to use as a template |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Batch Predictions From Job Definition

**Slug:** `DATAROBOT_CREATE_BATCH_PREDICTIONS_FROM_JOB_DEF`

Tool to launch a Batch Prediction job from a job definition. Use when you need to execute a previously created batch prediction job definition for scoring.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `jobDefinitionId` | string | Yes | ID of the Batch Prediction job definition to execute. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Calendar from File Upload

**Slug:** `DATAROBOT_CREATE_CALENDARS_FILE_UPLOAD`

Tool to create a DataRobot calendar by uploading a CSV or XLSX file containing date events. Use when you need to define custom calendars for time-series modeling with specific event dates. The operation is asynchronous - poll the returned location URL to check completion status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | object | Yes | Calendar file to upload (CSV or XLSX format). Must have: header row, single date column in YYYY-MM-DD format. May optionally have: name column (2nd column), series ID column (specify in multiseriesIdColumns). |
| `name` | string | No | Name for the calendar. If not provided, defaults to the uploaded filename. |
| `multiseriesIdColumns` | string | No | Name of the multiseries ID column in the calendar file. Currently only one multiseries ID column is supported. If not specified, the calendar is treated as single series. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
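Because upload failures only surface after the asynchronous job runs, it can help to verify the calendar file locally first. A minimal sketch, assuming a CSV with the date in the first column as the `file` description requires (the helper is illustrative, not part of the tool):

```python
import csv
import io
import re

DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def validate_calendar_csv(text):
    """Check that the calendar file has a header row and that every value
    in the first column is a YYYY-MM-DD date, per the `file` parameter
    description above. Returns the number of event rows."""
    rows = list(csv.reader(io.StringIO(text)))
    if len(rows) < 2:
        raise ValueError("file must contain a header row and at least one event")
    count = 0
    for row in rows[1:]:
        if not row or not DATE_RE.match(row[0]):
            raise ValueError(f"bad date value in row: {row}")
        count += 1
    return count
```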

### Create Calendars from Country Code

**Slug:** `DATAROBOT_CREATE_CALENDARS_FROM_COUNTRY_CODE`

Tool to initialize generation of preloaded calendars from a country code. Creates calendars with national holidays for the specified country and date range. Calendar generation is asynchronous - poll the returned location URL to check completion status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `endDate` | string | Yes | Last date of the range for which holidays are generated. Must be in ISO 8601 date-time format (e.g., '2024-12-31T23:59:59Z'). Must be after startDate. |
| `startDate` | string | Yes | First date of the range for which holidays are generated. Must be in ISO 8601 date-time format (e.g., '2024-01-01T00:00:00Z'). |
| `countryCode` | string ("AR" | "AT" | "AU" | "AW" | "BE" | "BG" | "BR" | "BY" | "CA" | "CH" | "CL" | "CO" | "CZ" | "DE" | "DK" | "DO" | "EE" | "ES" | "FI" | "FRA" | "GB" | "HK" | "HND" | "HR" | "HU" | "IE" | "IND" | "IS" | "IT" | "JP" | "KE" | "LT" | "LU" | "MX" | "NG" | "NI" | "NL" | "NO" | "NZ" | "PE" | "PL" | "PT" | "RU" | "SE" | "SE(NS)" | "SI" | "SK" | "TAR" | "UA" | "UK" | "US" | "ZA") | Yes | Code of the country for which holidays should be generated. Must be uppercase and from the supported list. Use GET /api/v2/calendarCountryCodes/ to retrieve available country codes. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
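The date-range rules above (ISO 8601 date-times, `endDate` after `startDate`, uppercase country code) can be enforced before calling the tool. A hedged sketch; the helper name and returned dict shape are assumptions for illustration:

```python
from datetime import datetime

ISO_FMT = "%Y-%m-%dT%H:%M:%SZ"

def build_country_calendar_payload(country_code, start_date, end_date):
    """Validate inputs for DATAROBOT_CREATE_CALENDARS_FROM_COUNTRY_CODE:
    both dates must parse as ISO 8601 date-times, endDate must be after
    startDate, and the country code must be uppercase."""
    start = datetime.strptime(start_date, ISO_FMT)
    end = datetime.strptime(end_date, ISO_FMT)
    if end <= start:
        raise ValueError("endDate must be after startDate")
    if country_code != country_code.upper():
        raise ValueError("countryCode must be uppercase")
    return {"countryCode": country_code, "startDate": start_date, "endDate": end_date}
```

Note the supported-code list is best fetched at runtime from `GET /api/v2/calendarCountryCodes/`, as the description suggests, rather than hard-coded.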

### Create Calendar from Dataset

**Slug:** `DATAROBOT_CREATE_CALENDARS_FROM_DATASET`

Tool to create a calendar from a dataset in DataRobot. Use when preparing time-series projects that require custom calendar definitions. Calendar creation is asynchronous - use the returned statusId to monitor progress.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Optional name for the calendar in the catalog. |
| `datasetId` | string | Yes | The ID of the dataset from which to create the calendar. |
| `deleteOnError` | boolean | No | Whether to delete the calendar file from Catalog if it's not valid. Defaults to false. |
| `datasetVersionId` | string | No | The ID of the dataset version from which to create the calendar. If omitted, uses the latest version. |
| `multiseriesIdColumns` | array | No | Optional multiseries ID columns for the calendar. Maximum of 1 column allowed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Change Request

**Slug:** `DATAROBOT_CREATE_CHANGE_REQUEST`

Create a Change Request for a DataRobot deployment to enable governance workflows. Use when you need to request approval for deployment changes such as changing importance, replacing models, updating status, or deleting deployments. Change requests require approval from authorized reviewers before changes can be applied.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `action` | string | Yes | Action to perform on the entity. Valid actions: approve, changeStatus, changeImportance, cleanupStats, delete, replaceModel, replaceModelPackage, updateSecondaryDatasetConfigs |
| `change` | object | No | Change parameters required for the action. Structure varies by action type: For changeImportance: {importance: 'HIGH'\|'MODERATE'\|'LOW'\|'CRITICAL'}, For changeStatus: {status: 'active'\|'inactive'}, For replaceModel: {modelId: 'string', replacementReason: 'string'}, For replaceModelPackage: {modelPackageId: 'string', replacementReason: 'string'}, For approve: {approvalStatus: 'APPROVED'}, For updateSecondaryDatasetConfigs: {secondaryDatasetsConfigId: 'string'}, For cleanupStats: {dataType: 'string', modelId: 'string'\|null, start: 'string'\|null, end: 'string'\|null}, For delete: null (no change object needed) |
| `comment` | string | No | Free form text comment on the requested changes (max 10,000 characters) |
| `entityId` | string | Yes | ID of the Product Entity the request is intended to change (24-character hex string). Use LIST_DEPLOYMENTS to find deployment IDs. |
| `autoApply` | boolean | Yes | Whether to automatically apply the change when the request is approved. If true, requested changes will be applied on approval. |
| `entityType` | string | Yes | Type of the Product Entity that is requested to be changed. Currently only 'deployment' is supported. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
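The `change` object's shape varies by `action`, so assembling the body programmatically helps avoid mismatches. A minimal sketch of the cross-field rules from the table above (the builder function itself is an illustrative assumption):

```python
ALLOWED_ACTIONS = {
    "approve", "changeStatus", "changeImportance", "cleanupStats",
    "delete", "replaceModel", "replaceModelPackage",
    "updateSecondaryDatasetConfigs",
}

def build_change_request(entity_id, action, change=None, comment=None, auto_apply=False):
    """Assemble a change-request body per the table above. Per-action
    `change` shapes (e.g. {'importance': 'HIGH'} for changeImportance)
    follow the `change` parameter description; `delete` takes none."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unsupported action: {action}")
    if action == "delete" and change is not None:
        raise ValueError("delete takes no change object")
    if comment and len(comment) > 10_000:
        raise ValueError("comment exceeds 10,000 characters")
    body = {"entityType": "deployment", "entityId": entity_id,
            "action": action, "autoApply": auto_apply}
    if change is not None:
        body["change"] = change
    if comment:
        body["comment"] = comment
    return body
```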

### Create Code Snippets

**Slug:** `DATAROBOT_CREATE_CODE_SNIPPETS`

Tool to generate code snippets for DataRobot models, predictions, or workloads. Use when you need sample code to interact with DataRobot models, make predictions, or process workloads in various programming languages.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `config` | string | Yes | Template-specific configuration. Use ModelConfig for 'model' templates, PredictionConfig for 'prediction' templates, or WorkloadConfig for 'workload' templates. |
| `language` | string ("curl" | "powershell" | "python" | "qlik") | Yes | The programming language or tool for the generated snippet: 'python', 'curl', 'powershell', or 'qlik'. |
| `snippetId` | string | No | Optional specific snippet ID to return. This field is optional for prediction snippets. |
| `templateType` | string ("model" | "prediction" | "workload") | Yes | The template type for the code snippet: 'model' for model-related code, 'prediction' for making predictions, or 'workload' for workload processing. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Download Code Snippet

**Slug:** `DATAROBOT_CREATE_CODE_SNIPPETS_DOWNLOAD`

Tool to download code snippets for DataRobot deployments, models, or workloads. Use when you need sample code to make predictions, interact with models, or work with workloads. Generates code in Python, cURL, PowerShell, or Qlik based on your deployment/model configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `language` | string ("curl" | "powershell" | "python" | "qlik") | Yes | Programming language for the generated code snippet: 'python', 'curl', 'powershell', or 'qlik'. |
| `snippetId` | string | No | The specific snippet ID to return. Optional for prediction snippets. |
| `modelConfig` | object | No | Configuration for model template type |
| `templateType` | string ("model" | "prediction" | "workload") | Yes | The template type for the code snippet: 'model' for model-related code, 'prediction' for deployment predictions, or 'workload' for workload-related code. |
| `workloadConfig` | object | No | Configuration for workload template type |
| `predictionConfig` | object | No | Configuration for prediction template type |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Comment

**Slug:** `DATAROBOT_CREATE_COMMENT`

Tool to create a comment on a DataRobot entity (deployment, use case, model, catalog, etc.). Use when you need to add notes, feedback, or documentation to DataRobot objects.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `content` | string | Yes | Content of the comment. Maximum length is 10,000 characters. |
| `entityId` | string | Yes | ID of the entity to post the comment to (e.g., deployment ID, use case ID, model ID). |
| `mentions` | array | No | A list of user IDs mentioned in the comment content. Maximum 100 user IDs. |
| `entityType` | string ("useCase" | "model" | "catalog" | "experimentContainer" | "deployment" | "workloadDeployment") | Yes | Type of the entity to post the comment to. Supported types: useCase, model, catalog, experimentContainer, deployment, workloadDeployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
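The comment limits above (content length, mention count, supported entity types) are easy to pre-check. An illustrative sketch; the helper is an assumption, not part of the tool surface:

```python
SUPPORTED_ENTITY_TYPES = {
    "useCase", "model", "catalog", "experimentContainer",
    "deployment", "workloadDeployment",
}

def build_comment(entity_type, entity_id, content, mentions=()):
    """Validate the limits from the table above: content at most 10,000
    characters, at most 100 mentioned user IDs, entityType supported."""
    if entity_type not in SUPPORTED_ENTITY_TYPES:
        raise ValueError(f"unsupported entityType: {entity_type}")
    if not 1 <= len(content) <= 10_000:
        raise ValueError("content must be 1 to 10,000 characters")
    if len(mentions) > 100:
        raise ValueError("at most 100 mentions allowed")
    body = {"entityType": entity_type, "entityId": entity_id, "content": content}
    if mentions:
        body["mentions"] = list(mentions)
    return body
```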

### Create Compliance Doc Template

**Slug:** `DATAROBOT_CREATE_COMPLIANCE_DOC_TEMPLATES`

Tool to create a new compliance documentation template in DataRobot. Use when you need to define a reusable structure for model compliance documentation. Templates can include DataRobot-generated content, user-provided text, custom sections, and table of contents. Sections can be nested up to 5 levels deep with a maximum of 500 total sections.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name of the new template. Must be unique among templates created by the user. |
| `labels` | array | No | Names of labels to assign to this template for organization and filtering. |
| `sections` | array | Yes | Array of section objects defining the document structure. Each section must have a 'type' field. Types: 'datarobot' (requires contentId + title), 'user' (requires title + regularText + highlightedText), 'custom' (requires title + regularText + highlightedText), 'table_of_contents' (requires only title). Max nesting depth: 5 levels. Max total sections: 500. |
| `projectType` | string ("autoMl" | "textGeneration" | "timeSeries") | No | Enum for project template types. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
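The nesting rules for `sections` (max depth 5, max 500 total, every section typed) lend themselves to a recursive pre-check. A sketch under one stated assumption: nested children are carried under a `sections` key, which the table does not specify:

```python
def check_sections(sections, _depth=1):
    """Recursively count sections, enforcing the documented limits:
    nesting at most 5 levels deep, at most 500 sections in total, and a
    'type' field on every section. Returns the total section count."""
    if _depth > 5:
        raise ValueError("sections nested deeper than 5 levels")
    total = 0
    for section in sections:
        if "type" not in section:
            raise ValueError("every section needs a 'type' field")
        # Assumed key for nested children; adjust if the API differs.
        total += 1 + check_sections(section.get("sections", []), _depth + 1)
    if _depth == 1 and total > 500:
        raise ValueError("more than 500 sections")
    return total
```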

### Create Credentials

**Slug:** `DATAROBOT_CREATE_CREDENTIALS`

Store a new set of credentials in DataRobot for use with data sources and connections. Supports multiple credential types including basic auth, OAuth, cloud provider credentials (AWS, GCP, Azure), and database-specific authentication. Use when you need to securely store credentials for connecting to external data sources or services.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name of the credentials. Must be unique within your DataRobot account. |
| `user` | string | No | Username for basic authentication or database connections (required for basic and snowflake_key_pair_user_account types). |
| `token` | string | No | OAuth token (required for bearer type). |
| `gcpKey` | object | No | Google Cloud Platform service account key JSON object (for gcp credential type). |
| `authUrl` | string | No | Authentication URL (required for sap_oauth type). |
| `apiToken` | string | No | API token (required for api_token and tableau_access_token types). |
| `clientId` | string | No | OAuth client ID (for oauth, azure_oauth, azure_service_principal, snowflake_oauth_user_account, databricks_service_principal_account, box_jwt types). |
| `configId` | string | No | Configuration ID for various credential types (s3, gcp, azure_service_principal, snowflake_key_pair_user_account, databricks_service_principal_account). |
| `password` | string | No | Password for basic authentication (required for basic type). |
| `passphrase` | string | No | Passphrase for private key (for snowflake_key_pair_user_account and box_jwt types). |
| `description` | string | No | Optional description of the credentials. |
| `oauthScopes` | array | No | List of OAuth scopes (for snowflake_oauth_user_account, databricks_service_principal_account, oauth types). |
| `publicKeyId` | string | No | Public key ID (required for box_jwt type). |
| `sapAiApiUrl` | string | No | SAP AI API URL (required for sap_oauth type). |
| `clientSecret` | string | No | OAuth client secret (for oauth, azure_oauth, azure_service_principal, snowflake_oauth_user_account, databricks_service_principal_account, box_jwt types). |
| `enterpriseId` | string | No | Box enterprise ID (required for box_jwt type). |
| `refreshToken` | string | No | OAuth refresh token (required for bearer type). |
| `azureTenantId` | string | No | Azure tenant ID (for azure_oauth and azure_service_principal types). |
| `oauthConfigId` | string | No | OAuth configuration ID (for snowflake_oauth_user_account type). |
| `privateKeyStr` | string | No | Private key as string (for snowflake_key_pair_user_account and box_jwt types). |
| `awsAccessKeyId` | string | No | AWS access key ID (for s3 credential type). |
| `credentialType` | string ("adls_gen2_oauth" | "api_token" | "azure" | "azure_oauth" | "azure_service_principal" | "basic" | "bearer" | "box_jwt" | "databricks_access_token_account" | "databricks_service_principal_account" | "external_oauth_provider" | "gcp" | "oauth" | "rsa" | "s3" | "sap_oauth" | "snowflake_key_pair_user_account" | "snowflake_oauth_user_account" | "tableau_access_token") | Yes | Type of credentials to create. Determines which additional fields are required. |
| `googleConfigId` | string | No | Google OAuth configuration ID (for gcp type). |
| `oauthIssuerUrl` | string | No | OAuth issuer URL (for snowflake_oauth_user_account type). |
| `awsSessionToken` | string | No | AWS session token for temporary credentials (optional for s3 type). |
| `oauthIssuerType` | string | No | OAuth issuer type (for snowflake_oauth_user_account type). |
| `authenticationId` | string | No | Authentication ID (required for external_oauth_provider type). |
| `awsSecretAccessKey` | string | No | AWS secret access key (for s3 credential type). |
| `snowflakeAccountName` | string | No | Snowflake account name (for basic and snowflake_oauth_user_account credential types). |
| `azureConnectionString` | string | No | Azure storage connection string (required for azure credential type). |
| `databricksAccessToken` | string | No | Databricks access token (required for databricks_access_token_account type). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
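Because the required fields depend entirely on `credentialType`, a small lookup table can catch missing fields before the request is sent. The mapping below covers a few representative types from the descriptions above and treats their listed fields as required for illustration; it is an assumption, not an exhaustive spec:

```python
# Type-specific fields for a few representative credentialType values,
# derived from the parameter descriptions above (illustrative subset).
REQUIRED_BY_TYPE = {
    "basic": {"user", "password"},
    "s3": {"awsAccessKeyId", "awsSecretAccessKey"},
    "azure": {"azureConnectionString"},
    "databricks_access_token_account": {"databricksAccessToken"},
}

def check_credentials(payload):
    """Verify that `name` and `credentialType` are present and that the
    type-specific fields in REQUIRED_BY_TYPE accompany them."""
    for key in ("name", "credentialType"):
        if key not in payload:
            raise ValueError(f"missing required field: {key}")
    missing = REQUIRED_BY_TYPE.get(payload["credentialType"], set()) - payload.keys()
    if missing:
        raise ValueError(f"missing fields for this credentialType: {sorted(missing)}")
    return payload
```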

### Create Custom Application Source

**Slug:** `DATAROBOT_CREATE_CUSTOM_APPLICATION_SOURCE`

Tool to create a custom application source in DataRobot. Use when you need to create a new source for custom applications.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the custom application source. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Custom Application Source Version

**Slug:** `DATAROBOT_CREATE_CUSTOM_APPLICATION_SOURCE_VERSION`

Create a new custom application source version in DataRobot. Use when you need to create a new version of an existing custom application source with optional file uploads, environment configuration, or based on a previous version.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `label` | string | No | The label for the new custom application source version (1-255 characters). |
| `appSourceId` | string | Yes | The ID of the application source to create a version for. |
| `baseVersion` | string | No | The ID of the version used as the source for parameter duplication. |
| `baseEnvironmentId` | string | No | The base environment ID to use with this source version. |
| `baseEnvironmentVersionId` | string | No | The base environment version ID to use with this source version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Custom Application Source From Template

**Slug:** `DATAROBOT_CREATE_CUSTOM_APP_SOURCES_FROM_CUSTOM_TPL`

Tool to create a custom application source from a custom template. Use when you need to instantiate a new application source based on an existing custom template.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `customTemplateId` | string | Yes | The custom template ID for the custom application. Use LIST_APPLICATION_TEMPLATES to find available template IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Custom Job

**Slug:** `DATAROBOT_CREATE_CUSTOM_JOB`

Tool to create a new DataRobot custom job. Use when you need to define a custom execution task that runs arbitrary Python code in DataRobot's managed environment. Only the 'name' field is required; DataRobot applies sensible defaults for other fields.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name of the custom job (max 255 characters). This is the only required field. |
| `jobType` | string ("default" | "hostedCustomMetric" | "notification" | "retraining") | No | Enum for custom job types |
| `resources` | object | No | Custom job resources configuration for Kubernetes cluster |
| `description` | string | No | Optional description of what this custom job does (max 10,000 characters). |
| `environmentId` | string | No | ID of the execution environment to use for this custom job. If not provided, DataRobot may use a default environment or require it based on other parameters. |
| `environmentVersionId` | string | No | ID of the execution environment version to use. If not provided, the latest execution environment version will be used. Only valid when environmentId is also specified. |
| `runtimeParameterValues` | string | No | JSON string of runtime parameter values to inject at execution time. Field names must match those defined in the metadata.yaml file's runtimeParameterDefinitions. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
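One easy mistake with this tool is passing `runtimeParameterValues` as a dict: the table specifies a JSON *string*. A hedged sketch of assembling the body (the helper is illustrative):

```python
import json

def build_custom_job(name, runtime_params=None, **optional):
    """Only `name` is required (max 255 chars). runtimeParameterValues
    must be a JSON string, not a dict, so serialize it before sending;
    keys must match the metadata.yaml runtimeParameterDefinitions."""
    if not name or len(name) > 255:
        raise ValueError("name must be 1-255 characters")
    body = {"name": name, **optional}
    if runtime_params is not None:
        body["runtimeParameterValues"] = json.dumps(runtime_params)
    return body
```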

### Create Custom Jobs Cleanup

**Slug:** `DATAROBOT_CREATE_CUSTOM_JOBS_CLEANUP`

Tool to permanently delete a custom job. Use when you need to permanently remove a soft-deleted custom job and all its components. The custom job must be soft-deleted before it can be permanently deleted.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `customJobId` | string | Yes | ID of the custom job to permanently delete. The custom job and its components must be soft-deleted before permanent deletion. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Custom Job Hosted Custom Metric Template

**Slug:** `DATAROBOT_CREATE_CUSTOM_JOBS_HOSTED_CUSTOM_METRIC_TEMPLATE`

Tool to create a hosted custom metric template for a DataRobot custom job. Use when defining how custom metrics should be collected and aggregated for observability. The template must specify whether metrics are model-specific, the aggregation type, and for numeric metrics, directionality and units.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `type` | string ("average" | "categorical" | "gauge" | "sum") | Yes | Type and aggregation method for the metric: 'average' for mean values, 'sum' for totals, 'gauge' for point-in-time measurements, or 'categorical' for category-based metrics. |
| `units` | string | No | Units or Y-axis label for the custom metric (e.g., 'count', 'seconds', 'percentage'). Required for numeric custom metrics, not applicable for categorical metrics. |
| `timeStep` | string ("hour") | Yes | Time bucket size for aggregating custom metric data. Currently only 'hour' is supported. |
| `categories` | array | No | List of category definitions (max 25). Required for categorical custom metrics, not applicable for numeric metrics. |
| `customJobId` | string | Yes | ID of the custom job to create the hosted custom metric template for. Use GET_CUSTOM_JOB or LIST_CUSTOM_JOBS to find custom job IDs. |
| `isGeospatial` | boolean | No | Whether the metric is geospatial. Defaults to false if not specified. |
| `directionality` | string ("higherIsBetter" | "lowerIsBetter") | No | Directionality of the custom metric. |
| `isModelSpecific` | boolean | Yes | Whether the metric is related to the model (true) or deployment (false). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
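The cross-field rules above (numeric metrics need `units`; categorical metrics need `categories`, capped at 25, and take no `units`) can be validated up front. An illustrative sketch:

```python
NUMERIC_TYPES = {"average", "sum", "gauge"}

def check_metric_template(payload):
    """Enforce the cross-field rules in the table above for a hosted
    custom metric template payload."""
    metric_type = payload["type"]
    if metric_type == "categorical":
        categories = payload.get("categories") or []
        if not categories or len(categories) > 25:
            raise ValueError("categorical metrics need 1-25 categories")
        if "units" in payload:
            raise ValueError("units do not apply to categorical metrics")
    elif metric_type in NUMERIC_TYPES:
        if not payload.get("units"):
            raise ValueError("numeric metrics require units")
    else:
        raise ValueError(f"unknown metric type: {metric_type}")
    return payload
```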

### Create Custom Model

**Slug:** `DATAROBOT_CREATE_CUSTOM_MODEL`

Tool to create a new DataRobot custom model for training or inference. Use when you need to register a custom model in DataRobot. For inference models, targetType and targetName are typically required (e.g., targetType='Regression' requires targetName).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Human-readable name for the custom model (max 255 characters). |
| `tags` | array | No | A list of tag name/value pairs for categorizing and organizing custom models. Minimum 1, maximum 50 tags. |
| `language` | string | No | Programming language name in which model is written. |
| `replicas` | integer | No | A fixed number of replicas that will be set for the given custom model. Maximum 25. |
| `requiresHa` | boolean | No | Require all custom model replicas to be deployed on different Kubernetes nodes for predictions fault tolerance. |
| `targetName` | string | No | The name of the target for labeling predictions. Required for inference models. Not allowed for training models. |
| `targetType` | string ("Binary" | "Regression" | "Multiclass" | "Anomaly" | "Transform" | "TextGeneration" | "GeoPoint" | "Unstructured" | "VectorDatabase" | "AgenticWorkflow" | "MCP") | No | The target type of the custom model. |
| `classLabels` | array | No | The class labels for multiclass classification. Required for multiclass inference models. If using DataRobot base environments and your model produces unlabeled class probabilities, the order should match the predicted output. Maximum 100 labels. |
| `description` | string | No | User-friendly description of the model (max 10,000 characters). |
| `playgroundId` | string | No | ID of the GenAI Playground associated with the given custom inference model. |
| `desiredMemory` | integer | No | The amount of memory (in bytes) that is expected to be allocated by the custom model. Minimum 128MB (134217728), maximum ~14GB (15032385536). |
| `maximumMemory` | integer | No | The maximum memory (in bytes) that might be allocated by the custom model. If exceeded, the custom model will be killed. Minimum 128MB (134217728), maximum ~14GB (15032385536). |
| `userProvidedId` | string | No | A user-provided unique ID associated with the given custom inference model (max 100 characters). |
| `customModelType` | string ("training" | "inference") | Yes | The type of custom model: 'training' for AutoML training models, or 'inference' for custom inference models. |
| `gitModelVersion` | object | No | Git-related attributes associated with a custom model version. |
| `negativeClassLabel` | string | No | The negative class label for binary classification models. If specified, positiveClassLabel must also be specified. Default value is '0'. |
| `positiveClassLabel` | string | No | The positive class label for binary classification models. If specified, negativeClassLabel must also be specified. Default value is '1'. |
| `supportsRegression` | boolean | No | Whether the model supports regression. |
| `networkEgressPolicy` | string ("NONE" | "DR_API_ACCESS" | "PUBLIC") | No | Network egress policy for the custom model. |
| `predictionThreshold` | number | No | The prediction threshold for binary classification custom models. Value between 0.0 and 1.0. Default is 0.5. |
| `calibratePredictions` | boolean | No | Whether model predictions should be calibrated by DataRobot. Only applies to anomaly detection training tasks. Improves probability estimates and facilitates comparison to DataRobot models. |
| `supportsBinaryClassification` | boolean | No | Whether the model supports binary classification. |
| `isTrainingDataForVersionsPermanentlyEnabled` | boolean | No | Indicates that training data assignment is permanently at the version level only for the custom model. Once enabled, this cannot be disabled. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
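Several fields above are only valid in combination or within a range: the class labels must be set together, memory values must sit between 128MB and ~14GB, and `predictionThreshold` must be in [0, 1]. A minimal client-side sketch of those checks (the helper is an assumption for illustration):

```python
MIN_MEM, MAX_MEM = 134_217_728, 15_032_385_536  # 128MB .. ~14GB, per the table

def check_custom_model(payload):
    """Check the pairing and range rules documented above before
    submitting a create-custom-model request."""
    has_pos = "positiveClassLabel" in payload
    has_neg = "negativeClassLabel" in payload
    if has_pos != has_neg:
        raise ValueError("positiveClassLabel and negativeClassLabel go together")
    for key in ("desiredMemory", "maximumMemory"):
        if key in payload and not MIN_MEM <= payload[key] <= MAX_MEM:
            raise ValueError(f"{key} out of range")
    threshold = payload.get("predictionThreshold")
    if threshold is not None and not 0.0 <= threshold <= 1.0:
        raise ValueError("predictionThreshold must be between 0 and 1")
    return payload
```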

### Get Custom Model Deployment Logs

**Slug:** `DATAROBOT_CREATE_CUSTOM_MODEL_DEPLOYMENT_LOGS`

Tool to request logs from a deployed custom model. Use when troubleshooting failed prediction requests or debugging custom model behavior. Returns a status ID for polling - logs are generated asynchronously.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | The ID of the custom model deployment to retrieve logs from. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Clone Custom Model

**Slug:** `DATAROBOT_CREATE_CUSTOM_MODELS_FROM_CUSTOM_MODEL`

Tool to clone an existing custom model in DataRobot. Use when you need to create a copy of a custom model for reuse or experimentation.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `customModelId` | string | Yes | ID of the custom model to copy |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Custom Model Version

**Slug:** `DATAROBOT_CREATE_CUSTOM_MODELS_VERSIONS`

Tool to create a new version for an existing DataRobot custom model. Use when you need to update a custom model with new code files, environment changes, or configuration updates. Creates either a major or minor version based on the isMajorUpdate parameter.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | object | No | A file with code for a custom task or a custom model. For each file supplied as form data, you must supply a corresponding filePath that gives the file's relative location. For example, given two files, /home/username/custom-task/main.py and /home/username/custom-task/helpers/helper.py, you would include two filePath fields: 'main.py' and 'helpers/helper.py'. If a file already exists at the supplied filePath, the old file is replaced by the new one. |
| `filePath` | string | No | The local path of the file being uploaded. See the file field explanation for more details. |
| `replicas` | integer | No | A fixed number of replicas that will be set for the given custom model. Maximum of 25. |
| `requiresHa` | boolean | No | Require all custom model replicas to be deployed on different Kubernetes nodes for predictions fault tolerance. |
| `holdoutData` | string | No | Holdout data configuration ID for version. This functionality has to be explicitly enabled for the current model. See isTrainingDataForVersionsPermanentlyEnabled parameter. |
| `trainingData` | string | No | Training data configuration ID for version. This functionality has to be explicitly enabled for the current model. See isTrainingDataForVersionsPermanentlyEnabled parameter. |
| `customModelId` | string | Yes | The ID of the custom model to create a version for. |
| `desiredMemory` | integer | No | The amount of memory (in bytes) that is expected to be allocated by the custom model. This setting is incompatible with setting the resourceBundleId. Must be between 134217728 (128MB) and 15032385536 (~14GB). |
| `isMajorUpdate` | string ("false" | "False" | "true" | "True") | Yes | If set to true, a new major version will be created; otherwise, a minor version will be created. |
| `maximumMemory` | integer | No | The maximum memory (in bytes) that might be allocated by the custom model. If exceeded, the custom model will be killed. This setting is incompatible with setting the resourceBundleId. Must be between 134217728 (128MB) and 15032385536 (~14GB). |
| `gitModelVersion` | object | No | Git-related attributes associated with a custom model version |
| `requiredMetadata` | string | No | Additional parameters required by the execution environment. The required keys are defined by the fieldNames in the base environment's requiredMetadataKeys. Once set, they cannot be changed. If you need to change them, create a new version. |
| `resourceBundleId` | string | No | A single identifier that represents a bundle of resources: Memory, CPU, GPU, etc. A list of available bundles can be obtained via the resource bundles endpoint. |
| `baseEnvironmentId` | string | No | The base environment to use with this model version. At least one of baseEnvironmentId and baseEnvironmentVersionId must be provided. If both are specified, the version must belong to the environment. |
| `networkEgressPolicy` | string ("NONE" | "PUBLIC") | No | Enum for network egress policy values |
| `requiredMetadataValues` | string | No | Additional parameters required by the execution environment. The required keys are defined by the fieldNames in the base environment's requiredMetadataKeys. Field names and values are exposed as environment variables when running the custom model. Example: 'required_metadata_values': [{'field_name': 'hi', 'value': 'there'}] |
| `keepTrainingHoldoutData` | boolean | No | If the version should inherit training and holdout data from the previous version. Defaults to true. This field is only applicable if the model has training data for versions enabled. Otherwise the field value will be ignored. |
| `baseEnvironmentVersionId` | string | No | The base environment version ID to use with this model version. At least one of baseEnvironmentId and baseEnvironmentVersionId must be provided. If both are specified, the version must belong to the environment. If not specified: in the case where the previous model versions exist, the value from the latest model version is inherited, otherwise, the latest successfully built version of the environment specified in baseEnvironmentId is used. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |
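
The memory bounds and the string-typed `isMajorUpdate` flag are the parameters most easily gotten wrong when calling this tool. A minimal sketch of client-side payload assembly, assuming the constraints from the table above (the helper name and the example model ID are illustrative, not part of the tool):

```python
# Illustrative client-side checks for DATAROBOT_CREATE_CUSTOM_MODELS_VERSIONS.
# The byte bounds come from the parameter table above.
MIN_MEMORY = 134_217_728      # 128 MB
MAX_MEMORY = 15_032_385_536   # ~14 GB

def build_version_params(custom_model_id, is_major, desired_memory=None,
                         resource_bundle_id=None):
    """Assemble request params, enforcing the documented constraints."""
    if desired_memory is not None and resource_bundle_id is not None:
        raise ValueError("desiredMemory is incompatible with resourceBundleId")
    if desired_memory is not None and not (MIN_MEMORY <= desired_memory <= MAX_MEMORY):
        raise ValueError("desiredMemory must be between 128MB and ~14GB")
    params = {
        "customModelId": custom_model_id,
        # Note: the API expects a string here, not a JSON boolean.
        "isMajorUpdate": "true" if is_major else "false",
    }
    if desired_memory is not None:
        params["desiredMemory"] = desired_memory
    if resource_bundle_id is not None:
        params["resourceBundleId"] = resource_bundle_id
    return params

params = build_version_params("64f0c0ffee0000000000abcd", is_major=True,
                              desired_memory=512 * 1024 * 1024)
```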

### Create Data Disparity Insights

**Slug:** `DATAROBOT_CREATE_DATA_DISPARITY_INSIGHTS`

Tool to start data disparity insight calculations for a DataRobot model. Use when you need to analyze data disparity between two classes for a specific feature. The calculation is asynchronous; poll the returned location URL to check job status and retrieve results.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `feature` | string | Yes | Feature name for which the insight is computed |
| `modelId` | string | Yes | The DataRobot model ID to compute data disparity insights for |
| `projectId` | string | Yes | The DataRobot project ID containing the model |
| `comparedClassNames` | array | Yes | An array of exactly two class names to calculate data disparity for. Must contain exactly 2 elements. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |
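
Since the calculation is asynchronous, the caller builds the request (with exactly two class names) and then polls the returned location URL. A sketch of that pattern, with the status fetch abstracted behind a callable so no network is assumed (the function names and the simulated status payloads are illustrative):

```python
import time

def make_params(project_id, model_id, feature, class_a, class_b):
    # comparedClassNames must contain exactly two class names.
    return {
        "projectId": project_id,
        "modelId": model_id,
        "feature": feature,
        "comparedClassNames": [class_a, class_b],
    }

def poll_until_done(fetch_status, interval=0.0, max_tries=30):
    """Generic polling loop for DataRobot's async jobs: call the status
    location until it reports a terminal state. fetch_status is any
    callable returning a dict like {"status": "RUNNING"}."""
    for _ in range(max_tries):
        state = fetch_status()
        if state.get("status") in ("COMPLETED", "ERROR"):
            return state
        time.sleep(interval)
    raise TimeoutError("job did not finish in time")

# Simulated status sequence standing in for GET requests to the
# returned location URL (no network needed for this sketch).
responses = iter([{"status": "RUNNING"}, {"status": "COMPLETED"}])
result = poll_until_done(lambda: next(responses))
```

The same loop applies to every tool in this section whose description says "poll the returned location URL".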

### Create Data Engine Workspace State

**Slug:** `DATAROBOT_CREATE_DATA_ENGINE_WORKSPACE_STATES`

Create a Data Engine workspace state by executing a SQL query against DataRobot datasets. Use when you need to transform, join, or prepare data before modeling. The workspace state captures the query definition and can be used to generate new datasets or feed into projects.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `query` | string | Yes | The SQL query to execute against the specified datasets. Must be valid SQL syntax. Maximum length is 320,000 characters. |
| `datasets` | array | No | List of source datasets to make available to the query. Each dataset needs an alias that you'll use to reference it in your SQL query. Only required if your query references existing datasets. |
| `language` | string ("SQL") | Yes | Query language to use. Currently only 'SQL' is supported. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |
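
A sketch of the payload this tool expects: each source dataset carries an alias that the SQL references as a table name. The `alias`/`datasetId` field names inside each dataset entry are an assumption about the expected shape, and the IDs are placeholders:

```python
# Payload sketch for DATAROBOT_CREATE_DATA_ENGINE_WORKSPACE_STATES.
MAX_QUERY_LEN = 320_000  # documented maximum query length

def workspace_state_params(query, datasets=None):
    if len(query) > MAX_QUERY_LEN:
        raise ValueError("query exceeds 320,000 characters")
    params = {"language": "SQL", "query": query}
    if datasets:
        params["datasets"] = datasets
    return params

params = workspace_state_params(
    "SELECT o.id, c.segment FROM orders o JOIN customers c ON o.cust_id = c.id",
    datasets=[
        {"alias": "orders", "datasetId": "64f0c0ffee0000000000aaaa"},
        {"alias": "customers", "datasetId": "64f0c0ffee0000000000bbbb"},
    ],
)
```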

### Create Dataset Definition

**Slug:** `DATAROBOT_CREATE_DATASET_DEFINITIONS`

Create a dataset definition in DataRobot to define how to access and use a dataset from the AI Catalog. Use when you need to establish a reusable definition for a dataset that can be referenced in projects and deployments.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The name of the dataset definition. If omitted, a default name will be generated based on the dataset. |
| `datasetId` | string | Yes | The ID of the AI Catalog dataset (24-character hex string). Use DATAROBOT_LIST_DATASETS to find available dataset IDs. |
| `credentialsId` | string | No | The ID of the credentials to access the data store. Use DATAROBOT_LIST_CREDENTIALS to find available credentials. |
| `datasetVersionId` | string | No | The version ID of the AI Catalog dataset. If omitted, uses the latest version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |

### Create Dataset Chunk Definition

**Slug:** `DATAROBOT_CREATE_DATASET_DEFINITIONS_CHUNK_DEFINITIONS`

Tool to create a chunk definition for a dataset definition in DataRobot. Use when you need to define how to partition a dataset into chunks for distributed processing or validation strategies.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The name of the chunk definition. If omitted, a default name will be generated. |
| `targetClass` | string | No | Target class value. Required when partitionMethod is 'stratified'. |
| `targetColumn` | string | No | Target column name. Required when partitionMethod is 'stratified'. |
| `orderByColumns` | array | No | List of column names to use for sorting the data. Only applicable for row-based chunking. Maximum 10 columns allowed. |
| `partitionMethod` | string ("random" | "stratified" | "date") | Yes | The partition method to use for chunking the dataset. |
| `isDescendingOrder` | boolean | No | Whether to sort data in descending order. Only applicable for row-based chunking with orderByColumns. |
| `otvTrainingEndDate` | string | No | The end date of training data in ISO 8601 format (e.g., '2023-01-31' or '2023-01-31T23:59:59'). Used with 'date' partition method for Over Time Validation (OTV). |
| `datasetDefinitionId` | string | Yes | The ID of the dataset definition to create a chunk definition for. Use DATAROBOT_LIST_DATASET_DEFINITIONS to find available dataset definition IDs. |
| `chunkingStrategyType` | string ("features" | "rows") | No | Chunking strategy type options. |
| `otvValidationEndDate` | string | No | The end date of validation scoring data in ISO 8601 format. Required when otvValidationStartDate is specified. Used with 'date' partition method for Over Time Validation (OTV). |
| `otvValidationStartDate` | string | No | The start date of validation scoring data in ISO 8601 format. Required when otvValidationEndDate is specified. Used with 'date' partition method for Over Time Validation (OTV). |
| `datetimePartitionColumn` | string | No | Date/time column name for partitioning. Required when partitionMethod is 'date'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |
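
The cross-field rules in the table above (stratified needs a target, date needs a datetime column, at most 10 sort columns) are easy to miss. A hedged sketch of validating them client-side before calling the tool (the helper name and IDs are illustrative):

```python
def chunk_definition_params(dataset_definition_id, partition_method, **opts):
    """Enforce the cross-field rules from the parameter table:
    'stratified' needs targetColumn/targetClass, 'date' needs
    datetimePartitionColumn, and orderByColumns caps at 10."""
    if partition_method == "stratified":
        if not (opts.get("targetColumn") and opts.get("targetClass")):
            raise ValueError(
                "stratified partitioning requires targetColumn and targetClass")
    if partition_method == "date" and not opts.get("datetimePartitionColumn"):
        raise ValueError("date partitioning requires datetimePartitionColumn")
    if len(opts.get("orderByColumns", [])) > 10:
        raise ValueError("at most 10 orderByColumns are allowed")
    return {"datasetDefinitionId": dataset_definition_id,
            "partitionMethod": partition_method, **opts}

params = chunk_definition_params(
    "64f0c0ffee0000000000cccc", "stratified",
    targetColumn="churned", targetClass="True")
```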

### Create Dataset Feature List

**Slug:** `DATAROBOT_CREATE_DATASET_FEATURELIST`

Tool to create a custom feature list within a dataset. Use when you need to define a specific subset of features for analysis or modeling.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name of the feature list to be created. Must be unique within the dataset. |
| `features` | array | Yes | List of feature names to include in the feature list. All features must exist in the dataset. At least one feature is required. |
| `datasetId` | string | Yes | The ID of the dataset to create the feature list for. Use DATAROBOT_LIST_DATASETS to find available dataset IDs. |
| `description` | string | No | Optional description of the feature list to help identify its purpose. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |

### Create Dataset from File

**Slug:** `DATAROBOT_CREATE_DATASET_FROM_FILE`

Tool to create a DataRobot dataset by uploading a file (CSV, Excel, etc.). Use when you need to upload data files to DataRobot's global catalog for modeling or prediction. Dataset creation is asynchronous - poll the statusId to monitor completion.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | object | Yes | The data file to upload for dataset creation. |
| `categories` | array | No | An array of strings describing the intended use of the dataset (e.g., 'PREDICTION', 'TRAINING'). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |
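
A file upload is sent as multipart form data. The sketch below only assembles the form fields; the endpoint path and auth header in the trailing comment are inferred from DataRobot's public REST API and should be verified against your installation:

```python
import io

def dataset_from_file_fields(filename, content, categories=None):
    """Build multipart fields for a file-based dataset upload."""
    fields = {"file": (filename, io.BytesIO(content), "text/csv")}
    data = {}
    if categories:
        data["categories"] = categories
    return fields, data

fields, data = dataset_from_file_fields(
    "loans.csv", b"id,amount\n1,1000\n", categories=["TRAINING"])

# With the requests library this would be sent roughly as:
#   requests.post(f"{base}/api/v2/datasets/fromFile/",
#                 headers={"Authorization": f"Bearer {token}"},  # or "Token {token}"
#                 files=fields, data=data)
# then poll the returned statusId as described above.
```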

### Create Dataset Feature Transform

**Slug:** `DATAROBOT_CREATE_DATASETS_FEATURE_TRANSFORMS`

Tool to create a feature transform on a DataRobot dataset. Use when you need to transform an existing feature into a new feature with a different type or extract date components. Feature creation is asynchronous - poll the returned location URL to check status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the new feature. Must not be the same as any existing features for this project. Must not contain '/' character. |
| `datasetId` | string | Yes | The ID of the dataset to create a feature transform on (24-character hex string). Use DATAROBOT_LIST_DATASETS to find available dataset IDs. |
| `parentName` | string | Yes | The name of the parent feature to transform. |
| `replacement` | string | No | The replacement value in case of a failed transformation. Can be a string, boolean, number, or null. |
| `variableType` | string ("text" | "categorical" | "numeric" | "categoricalInt") | Yes | The type of the new feature. Must be one of: text, categorical (Deprecated in v2.21), numeric, or categoricalInt. |
| `dateExtraction` | string ("year" | "yearDay" | "month" | "monthDay" | "week" | "weekDay") | No | Date extraction options for date column transformations. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |
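
A sketch of assembling a transform request that enforces the naming and type constraints from the table (the helper name and dataset ID are illustrative):

```python
VALID_TYPES = {"text", "categorical", "numeric", "categoricalInt"}

def feature_transform_params(dataset_id, name, parent_name, variable_type,
                             date_extraction=None, replacement=None):
    """Client-side checks mirroring the documented constraints:
    no '/' in the feature name, and a whitelisted variableType."""
    if "/" in name:
        raise ValueError("feature name must not contain '/'")
    if variable_type not in VALID_TYPES:
        raise ValueError(f"variableType must be one of {sorted(VALID_TYPES)}")
    params = {"datasetId": dataset_id, "name": name,
              "parentName": parent_name, "variableType": variable_type}
    if date_extraction is not None:
        params["dateExtraction"] = date_extraction
    if replacement is not None:
        params["replacement"] = replacement
    return params

# Extract the month from a date column as a categorical integer feature.
params = feature_transform_params(
    "64f0c0ffee0000000000dddd", "signup_month", "signup_date",
    "categoricalInt", date_extraction="month")
```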

### Create Dataset from Data Source

**Slug:** `DATAROBOT_CREATE_DATASETS_FROM_DATA_SOURCE`

Tool to create a dataset from an external data source connector (database, S3, etc.). Use when you need to import data from a configured data source into DataRobot. The dataset creation is asynchronous - poll the returned location URL to check completion status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user` | string | No | [Deprecated] Username for database authentication. Use credentialId or credentialData instead for better security. |
| `password` | string | No | [Deprecated] Password (in cleartext) for database authentication. The password will be encrypted on the server side and never stored. Use credentialId or credentialData instead for better security. |
| `categories` | string | No | Array of strings or single string describing the intended use of the dataset (e.g., 'TRAINING', 'PREDICTION'). |
| `doSnapshot` | boolean | No | If true, create a snapshot dataset (immutable copy). If false, create a remote dataset (query data source on demand). Creating snapshots from non-file sources requires 'Enable Create Snapshot Data Source' permission. |
| `sampleSize` | object | No | Sample size configuration for dataset ingestion. |
| `useKerberos` | boolean | No | If true, use Kerberos authentication for database connection. |
| `credentialId` | string | No | ID of stored credentials to authenticate with the database. Use DATAROBOT_LIST_CREDENTIALS to find available credentials. Use this instead of credentialData for better security. |
| `dataSourceId` | string | Yes | ID of the DataSource to use as the source of data. Use DATAROBOT_LIST_DATA_SOURCES to find available data sources. |
| `credentialData` | string | No | Credentials to authenticate with the database. Use this to provide credentials directly instead of using credentialId. For better security, use credentialId instead. |
| `persistDataAfterIngestion` | boolean | No | If true, save all data for download and sampling, enabling extended data profile with statistics (min/max/median/mean, histogram, etc.). If false, only schema (feature names and types) is available. Cannot be false if doSnapshot is true. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |
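
Three authentication styles overlap here: stored credentials (`credentialId`), inline credentials (`credentialData`), and the deprecated `user`/`password` pair. A small sketch of picking exactly one, preferring the more secure options, with hypothetical helper and ID names:

```python
def credential_fields(credential_id=None, credential_data=None,
                      user=None, password=None):
    """Pick exactly one authentication style, preferring stored
    credentials over inline ones over the deprecated user/password."""
    if credential_id:
        return {"credentialId": credential_id}
    if credential_data:
        return {"credentialData": credential_data}
    if user and password:
        # Deprecated path; kept only for legacy callers.
        return {"user": user, "password": password}
    return {}

params = {
    "dataSourceId": "64f0c0ffee0000000000eeee",
    "doSnapshot": True,
    **credential_fields(credential_id="64f0c0ffee0000000000ffff"),
}
```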

### Create Dataset from Recipe

**Slug:** `DATAROBOT_CREATE_DATASETS_FROM_RECIPE`

Tool to create a dataset from a DataRobot wrangling recipe. Use when you need to materialize a recipe into a reusable dataset. Creation is asynchronous - poll the location URL to check completion status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Name to be assigned to the new dataset. If omitted, a name will be auto-generated. |
| `recipeId` | string | Yes | The identifier for the Wrangling Recipe to use as the source of data. Use DATAROBOT_LIST_RECIPES to find available recipes. |
| `categories` | string | No | An array of strings or a single string describing the intended use of the dataset (e.g., 'TRAINING', 'PREDICTION'). |
| `doSnapshot` | boolean | No | If true, create a snapshot dataset; if false, create a remote dataset. Creating snapshots from non-file sources requires an additional permission. |
| `useKerberos` | boolean | No | If true, use Kerberos authentication for database authentication. |
| `credentialId` | string | No | The ID of stored credentials to authenticate with the database. Use DATAROBOT_LIST_CREDENTIALS to find available credentials. Use this OR credentialData, not both. |
| `credentialData` | string | No | The credentials to authenticate with the database, to be used instead of credentialId. Use this OR credentialId, not both. |
| `persistDataAfterIngestion` | boolean | No | If true, saves all data for download and sampling, enabling extended data profiles. If false, only saves schema. Cannot be false when doSnapshot is true. |
| `materializationDestination` | object | No | Destination table information for materializing the recipe. |
| `skipDuplicateDatesValidation` | boolean | No | If true, skip validation for date duplicates in time series recipes. By default, publishing fails if duplicates exist to prevent data quality issues. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |

### Create Dataset from URL

**Slug:** `DATAROBOT_CREATE_DATASETS_FROM_URL`

Tool to create a DataRobot dataset from a publicly accessible URL. Use when you need to import data from HTTP/HTTPS URLs into DataRobot's data catalog. Returns immediately with catalog and status IDs - dataset ingestion happens asynchronously.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `url` | string | Yes | The URL to download the dataset used to create the dataset item and version. Must be a valid HTTP/HTTPS URL pointing to a dataset file (CSV, Excel, etc.). |
| `categories` | string | No | An array of strings or a single string describing the intended use of the dataset (e.g., 'TRAINING', 'PREDICTION'). |
| `doSnapshot` | boolean | No | If true, create a snapshot dataset (immutable copy); if false, create a remote dataset (linked to source). Creating snapshots from non-file sources requires the 'Enable Create Snapshot Data Source' permission. |
| `sampleSize` | object | No | Sample size specification for dataset ingestion. |
| `persistDataAfterIngestion` | boolean | No | If true, enforce saving all data for download and sampling, and allow extended data profile (statistics, histograms, etc.). If false, do not enforce saving data (only schema will be available). Cannot be false when doSnapshot is true. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |

### Create Dataset Refresh Job

**Slug:** `DATAROBOT_CREATE_DATASETS_REFRESH_JOBS`

Tool to schedule a dataset refresh job in DataRobot. Use when you need to automate periodic data updates for a dataset. The schedule must be at least daily (hourly schedules are not supported).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Scheduled job name. |
| `enabled` | boolean | No | Whether the scheduled job is active (true) or inactive (false). |
| `schedule` | object | Yes | Schedule describing when to refresh the dataset. The smallest schedule allowed is daily. |
| `datasetId` | string | Yes | The ID of the dataset to schedule refresh jobs for. |
| `categories` | string | No | An array of strings describing the intended use of the dataset. The supported options are 'TRAINING' and 'PREDICTION'. |
| `credentials` | string | No | A JSON string describing the data engine queries credentials to use when refreshing. |
| `useKerberos` | boolean | No | If true, the Kerberos authentication system is used in conjunction with a credential ID. |
| `credentialId` | string | No | The ID of the set of credentials to use to run the scheduled job when the Kerberos authentication service is utilized. Required when useKerberos is true. |
| `scheduleReferenceDate` | string | No | The UTC reference date in RFC-3339 format of when the schedule starts from. This value is returned in /api/v2/datasets/(datasetId)/refreshJobs/(jobId)/ to help build a more intuitive schedule picker. The default is the current time. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |
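
A sketch of a daily-at-03:00-UTC `schedule` object. The cron-like key names mirror DataRobot's schedule objects elsewhere in the API, but treat them as an assumption and confirm against this endpoint's schema; the job name and dataset ID are placeholders:

```python
# Assumed schedule shape: one entry per cron field, "*" for "any".
daily_schedule = {
    "minute": [0],
    "hour": [3],
    "dayOfMonth": ["*"],
    "month": ["*"],
    "dayOfWeek": ["*"],
}

refresh_job_params = {
    "name": "nightly-refresh",
    "enabled": True,
    "datasetId": "64f0c0ffee0000000000abcd",
    "schedule": daily_schedule,
}

# The smallest allowed cadence is daily, so 'hour' must pin specific
# hours rather than being a wildcard (hourly schedules are rejected).
is_at_least_daily = daily_schedule["hour"] != ["*"]
```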

### Create Dataset Relationships

**Slug:** `DATAROBOT_CREATE_DATASETS_RELATIONSHIPS`

Tool to create dataset relationships in DataRobot by linking features between two datasets. Use when you need to enable advanced feature engineering across multiple datasets for modeling.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The ID of the source dataset (24-character hex string). Use DATAROBOT_LIST_DATASETS to find available dataset IDs. |
| `linkedFeatures` | array | Yes | List of features belonging to the linked dataset that will be linked. Must contain at least one feature. |
| `sourceFeatures` | array | Yes | List of features belonging to the source dataset that will be linked. Must contain at least one feature. |
| `linkedDatasetId` | string | Yes | The ID of another dataset with which to create relationships (24-character hex string). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |
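
A sketch of linking a key column in one dataset to its counterpart in another; the helper name and IDs are illustrative, and both feature lists must be non-empty:

```python
def relationship_params(dataset_id, linked_dataset_id,
                        source_features, linked_features):
    """Build the linking payload; passing the feature lists in
    corresponding order keeps the intended pairing clear."""
    if not source_features or not linked_features:
        raise ValueError("both feature lists need at least one feature")
    return {"datasetId": dataset_id,
            "linkedDatasetId": linked_dataset_id,
            "sourceFeatures": source_features,
            "linkedFeatures": linked_features}

# Link customers.customer_id to transactions.cust_id.
params = relationship_params(
    "64f0c0ffee0000000000aaaa", "64f0c0ffee0000000000bbbb",
    ["customer_id"], ["cust_id"])
```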

### Create Dataset Version from Latest Version

**Slug:** `DATAROBOT_CREATE_DATASETS_VERSIONS_FROM_LATEST_VERSION`

Tool to create a new dataset version from the latest version of its data source. Use when you need to refresh a dataset with updated data from its original source. The dataset version creation is asynchronous - poll the returned location URL to check completion status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user` | string | No | [Deprecated] The username for database authentication. Required only if the dataset was initially created from a data source. Use credentialId or credentialData instead for better security. |
| `password` | string | No | [Deprecated] The password (in cleartext) for database authentication. The password will be encrypted in the server-side HTTP request and never saved or stored. Required only if the dataset was originally created from a data source. Use credentialId or credentialData instead for better security. |
| `datasetId` | string | Yes | The ID of the dataset to create a new version for. Use DATAROBOT_LIST_DATASETS to find dataset IDs. |
| `categories` | string | No | An array of strings or single string describing the intended use of the dataset (e.g., 'TRAINING', 'PREDICTION'). |
| `credentials` | string | No | A list of credentials to use if this is a Spark dataset that requires credentials. |
| `useKerberos` | boolean | No | If true, use Kerberos for database authentication. |
| `credentialId` | string | No | The ID of stored credentials to authenticate with the database. Use DATAROBOT_LIST_CREDENTIALS to find available credentials. Use this instead of credentialData for better security. |
| `credentialData` | string | No | The credentials to authenticate with the database, to be used instead of credentialId. Provide credentials directly for one-time use. |
| `useLatestSuccess` | boolean | No | If true, use the latest version that was successfully ingested instead of the latest version, which might be in an errored state. If no successful version is present, the operation fails. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |

### Create Dataset Version from URL

**Slug:** `DATAROBOT_CREATE_DATASETS_VERSIONS_FROM_URL`

Tool to create a new version for an existing DataRobot dataset from a publicly accessible URL. Use when you need to update an existing dataset with new data from HTTP/HTTPS URLs. Returns immediately with status ID - version creation happens asynchronously.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `url` | string | Yes | The URL to download the dataset used to create the dataset version. Must be a valid HTTP/HTTPS URL pointing to a dataset file (CSV, Excel, etc.). |
| `datasetId` | string | Yes | The ID of the dataset to create a version for. |
| `categories` | string | No | An array of strings or a single string describing the intended use of the dataset (e.g., 'TRAINING', 'PREDICTION'). |
| `doSnapshot` | boolean | No | If true, create a snapshot dataset (immutable copy); if false, create a remote dataset (linked to source). Creating snapshots from non-file sources requires the 'Enable Create Snapshot Data Source' permission. |
| `sampleSize` | object | No | Sample size specification for dataset ingestion. |
| `persistDataAfterIngestion` | boolean | No | If true, enforce saving all data for download and sampling, and allow extended data profile (statistics, histograms, etc.). If false, do not enforce saving data (only schema will be available). Cannot be false when doSnapshot is true. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |

### Create Dataset Version from Previous Version

**Slug:** `DATAROBOT_CREATE_DATASETS_VERSIONS_FROM_VERSION`

Tool to create a new dataset version from a specific previous version of its data source. Use when you need to refresh a dataset based on a particular historical version. The dataset version creation is asynchronous - poll the returned location URL to check completion status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user` | string | No | [Deprecated] The username for database authentication. Required only if the dataset was initially created from a data source. Use credentialId or credentialData instead for better security. |
| `password` | string | No | [Deprecated] The password (in cleartext) for database authentication. The password will be encrypted in the server-side HTTP request and never saved or stored. Required only if the dataset was originally created from a data source. Use credentialId or credentialData instead for better security. |
| `datasetId` | string | Yes | The ID of the dataset to create a new version for. Use DATAROBOT_LIST_DATASETS to find dataset IDs. |
| `categories` | string | No | An array of strings or single string describing the intended use of the dataset (e.g., 'TRAINING', 'PREDICTION'). |
| `credentials` | string | No | A list of credentials to use if this is a Spark dataset that requires credentials. |
| `useKerberos` | boolean | No | If true, use Kerberos for database authentication. |
| `credentialId` | string | No | The ID of stored credentials to authenticate with the database. Use DATAROBOT_LIST_CREDENTIALS to find available credentials. Use this instead of credentialData for better security. |
| `credentialData` | string | No | The credentials to authenticate with the database, to be used instead of credentialId. Provide credentials directly for one-time use. |
| `datasetVersionId` | string | Yes | The ID of the dataset version to use as the source. Use DATAROBOT_LIST_DATASETS_VERSIONS to find version IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |

### Create Dataset Version from File

**Slug:** `DATAROBOT_CREATE_DATASET_VERSIONS_FROM_FILE`

Tool to create a new version of an existing DataRobot dataset by uploading a file (CSV, Excel, etc.). Use when you need to update an existing dataset with new data. Dataset version creation is asynchronous - poll the statusId to monitor completion.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | object | Yes | The data file to upload for the new dataset version. |
| `datasetId` | string | Yes | The ID of the dataset to create a new version for. |
| `categories` | string | No | An array of strings describing the intended use of the dataset (e.g., 'PREDICTION', 'TRAINING'). Can be a single string or a list of strings. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful or not |

### Create Data Slice

**Slug:** `DATAROBOT_CREATE_DATA_SLICES`

Create a new data slice in a DataRobot project to define a subset of data based on feature filters. Use when you need to analyze or model specific segments of your data (e.g., high-value customers, specific regions). Supports up to 3 filters with operators: eq, in, <, >, between, notBetween.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | User-provided name for the data slice. Must be between 1 and 500 characters. |
| `filters` | array | Yes | List of filters defining the data slice. Minimum 1 filter, maximum 3 filters. Each filter specifies criteria for selecting data. |
| `projectId` | string | Yes | The project ID where the data slice will be created. Use LIST_PROJECTS to find available project IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
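
The documented limits (1-3 filters, name of 1-500 characters, operators `eq`, `in`, `<`, `>`, `between`, `notBetween`) can be checked client-side before calling the tool. A minimal sketch follows; the per-filter keys (`column`, `operator`, `values`) are an assumption for illustration, since the table above only documents `filters` as an array.

```python
# Sketch of a DATAROBOT_CREATE_DATA_SLICES input payload with local
# validation of the documented limits. The filter keys (column/operator/
# values) are assumed, not taken from the table above.

ALLOWED_OPERATORS = {"eq", "in", "<", ">", "between", "notBetween"}

def build_data_slice_payload(project_id, name, filters):
    if not (1 <= len(name) <= 500):
        raise ValueError("name must be between 1 and 500 characters")
    if not (1 <= len(filters) <= 3):
        raise ValueError("a data slice takes between 1 and 3 filters")
    for f in filters:
        if f["operator"] not in ALLOWED_OPERATORS:
            raise ValueError(f"unsupported operator: {f['operator']}")
    return {"projectId": project_id, "name": name, "filters": filters}

payload = build_data_slice_payload(
    "64f0c0ffee0123456789abcd",  # hypothetical project ID
    "High-value customers",
    [{"column": "lifetime_value", "operator": ">", "values": [10000]}],
)
```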

### Create Data Slice Size Computation

**Slug:** `DATAROBOT_CREATE_DATA_SLICES_SLICE_SIZES`

Tool to compute the number of rows available after applying a data slice to a dataset subset. Use when validating data slice filters or checking how many rows will be included in analysis for a specific data partition. Returns slice size and validation messages. Status 202 indicates successful validation.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `source` | string ("backtest_0" | "backtest_0_training" | "backtest_1" | "backtest_1_training" | "backtest_2" | "backtest_2_training" | "backtest_3" | "backtest_3_training" | "backtest_4" | "backtest_4_training" | "backtest_5" | "backtest_5_training" | "backtest_6" | "backtest_6_training" | "backtest_7" | "backtest_7_training" | "backtest_8" | "backtest_8_training" | "backtest_9" | "backtest_9_training" | "backtest_10" | "backtest_10_training" | "backtest_11" | "backtest_11_training" | "backtest_12" | "backtest_12_training" | "backtest_13" | "backtest_13_training" | "backtest_14" | "backtest_14_training" | "backtest_15" | "backtest_15_training" | "backtest_16" | "backtest_16_training" | "backtest_17" | "backtest_17_training" | "backtest_18" | "backtest_18_training" | "backtest_19" | "backtest_19_training" | "backtest_20" | "backtest_20_training" | "crossValidation" | "externalTestSet" | "holdout" | "holdout_training" | "training" | "validation" | "vectorDatabase") | Yes | The source of data to use when calculating the slice size. Common values: 'training', 'validation', 'holdout', 'externalTestSet', or backtest partitions. |
| `modelId` | string | No | The model ID whose training dataset should be sliced. Only use this parameter when source is 'training'. |
| `projectId` | string | Yes | The project ID where the data slice exists. Use LIST_PROJECTS to find available project IDs. |
| `data_slice_id` | string | Yes | ID of the data slice to compute size for. Use GET_DATA_SLICE or CREATE_DATA_SLICES to get this ID. |
| `externalDatasetId` | string | No | The external dataset ID to use when calculating the size of a slice. Only use this parameter when source is 'externalTestSet'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
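
Because `modelId` and `externalDatasetId` are only meaningful for specific `source` values, a small guard before calling the tool avoids rejected requests. A sketch under those documented constraints (the IDs shown are hypothetical):

```python
# Sketch of input validation for DATAROBOT_CREATE_DATA_SLICES_SLICE_SIZES:
# modelId applies only when source is 'training', and externalDatasetId
# only when source is 'externalTestSet', per the parameter notes above.

def build_slice_size_request(project_id, data_slice_id, source,
                             model_id=None, external_dataset_id=None):
    if model_id is not None and source != "training":
        raise ValueError("modelId is only valid when source is 'training'")
    if external_dataset_id is not None and source != "externalTestSet":
        raise ValueError(
            "externalDatasetId is only valid when source is 'externalTestSet'")
    body = {"projectId": project_id, "data_slice_id": data_slice_id,
            "source": source}
    if model_id:
        body["modelId"] = model_id
    if external_dataset_id:
        body["externalDatasetId"] = external_dataset_id
    return body

req = build_slice_size_request("proj123", "slice456", "holdout")
```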

### Create Data Stage

**Slug:** `DATAROBOT_CREATE_DATA_STAGES`

Tool to create a data stage in DataRobot. Use when you need to create a new data stage with a specified filename for data staging operations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `filename` | string | Yes | The filename associated with the data stage. This identifies the data stage and is used as a reference for subsequent operations. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Deployment

**Slug:** `DATAROBOT_CREATE_DEPLOYMENT`

Create a DataRobot deployment from a model package to enable real-time or batch predictions. Prerequisites:

1. Obtain a `modelPackageId` using LIST_MODEL_PACKAGES or CREATE_MODEL_PACKAGE.
2. Optionally get a `predictionEnvironmentId` using LIST_PREDICTION_ENVIRONMENTS.
3. For managed SaaS, get a `defaultPredictionServerId` using LIST_PREDICTION_SERVERS.

The deployment is created asynchronously. Poll the returned location URL to check readiness status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `label` | string | Yes | Human-readable name for the deployment (max 512 characters) |
| `status` | string | No | Initial deployment status. One of: active (default) or inactive |
| `importance` | string | No | Deployment priority level. One of: CRITICAL, HIGH, MODERATE, or LOW |
| `description` | string | No | Optional description for the deployment (max 10,000 characters) |
| `modelPackageId` | string | Yes | ID of the DataRobot model package to deploy. Use LIST_MODEL_PACKAGES to find available packages or CREATE_MODEL_PACKAGE to create one from a trained model. |
| `predictionEnvironmentId` | string | No | ID of the prediction environment to run the deployment. Use LIST_PREDICTION_ENVIRONMENTS to find available environments. |
| `defaultPredictionServerId` | string | No | ID of the default prediction server. Required for DataRobot Cloud managed SaaS environments; omit for Self-Managed/Enterprise installations. Use LIST_PREDICTION_SERVERS to find available servers. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
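
The length limits and the SaaS-only `defaultPredictionServerId` note above lend themselves to a client-side payload builder. A minimal sketch, with hypothetical IDs:

```python
# Sketch of a DATAROBOT_CREATE_DEPLOYMENT payload builder that enforces
# the documented limits (label <= 512 chars, description <= 10,000 chars)
# and the managed-SaaS-only defaultPredictionServerId note.

def build_deployment_payload(label, model_package_id, *,
                             description=None, importance=None,
                             default_prediction_server_id=None):
    if not label or len(label) > 512:
        raise ValueError("label is required and limited to 512 characters")
    if description and len(description) > 10_000:
        raise ValueError("description is limited to 10,000 characters")
    if importance and importance not in {"CRITICAL", "HIGH", "MODERATE", "LOW"}:
        raise ValueError("importance must be CRITICAL, HIGH, MODERATE, or LOW")
    body = {"label": label, "modelPackageId": model_package_id}
    if description:
        body["description"] = description
    if importance:
        body["importance"] = importance
    if default_prediction_server_id:  # managed SaaS only; omit on Self-Managed
        body["defaultPredictionServerId"] = default_prediction_server_id
    return body

payload = build_deployment_payload(
    "churn-model-v2", "65a1b2c3d4e5f6a7b8c9d0e1", importance="HIGH")
```

Since creation is asynchronous, the caller would still poll the returned location URL for readiness after submitting this payload.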

### Create Deployment Custom Metric from Custom Job

**Slug:** `DATAROBOT_CREATE_DEPLOYMENT_CUSTOM_METRIC_FROM_CUSTOM_JOB`

Tool to create a deployment custom metric from an existing custom job in DataRobot. Use when you need to track custom business or operational metrics for a deployment. The custom job must have an associated hosted custom metric template.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name of the custom metric (max 255 characters) |
| `value` | object | No | Value source configuration for custom metric |
| `schedule` | object | No | Scheduling configuration for the custom job execution |
| `timestamp` | object | No | Timestamp configuration for custom metric values |
| `customJobId` | string | Yes | ID of the custom job that provides metric data. The custom job must have an associated hosted custom metric template. |
| `description` | string | No | Description of the custom metric (max 1000 characters) |
| `sampleCount` | object | No | Points to a weight column for pre-aggregated metric values |
| `deploymentId` | string | Yes | Unique identifier of the deployment. Use LIST_DEPLOYMENTS to find available deployments. |
| `baselineValues` | array | No | Baseline values for the custom metric (max 5 values) |
| `geospatialSegmentAttribute` | string | No | Name of the column containing geospatial values for geospatial segmentation |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Deployments Actuals Data Exports

**Slug:** `DATAROBOT_CREATE_DEPLOYMENTS_ACTUALS_DATA_EXPORTS`

Tool to create a deployment actuals data export for a specified time period. Use when you need to export actuals data from a deployment to analyze model performance and accuracy.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | Yes | End of the period of actuals data to collect in ISO 8601 format (e.g., '2026-02-13T23:59:59Z'). |
| `start` | string | Yes | Start of the period of actuals data to collect in ISO 8601 format (e.g., '2026-01-25T00:00:00Z'). |
| `modelId` | string | No | The ID of the model to filter actuals data. If not specified, exports actuals for all models in the deployment. |
| `deploymentId` | string | Yes | Unique identifier of the deployment to export actuals data from. Use LIST_DEPLOYMENTS or GET_DEPLOYMENT to find available deployment IDs. |
| `onlyMatchedPredictions` | boolean | No | If true, exports only actuals with matching predictions. Defaults to true. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
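
The `start`/`end` window must be ISO 8601 strings like the examples above. A sketch of building, say, a 30-day window (the deployment ID is hypothetical):

```python
# Build the ISO 8601 start/end window expected by
# DATAROBOT_CREATE_DEPLOYMENTS_ACTUALS_DATA_EXPORTS.
from datetime import datetime, timedelta, timezone

def iso8601(dt):
    # Render as e.g. '2026-02-13T23:59:59Z'
    return dt.strftime("%Y-%m-%dT%H:%M:%SZ")

end = datetime(2026, 2, 13, 23, 59, 59, tzinfo=timezone.utc)
start = end - timedelta(days=30)

payload = {
    "deploymentId": "65a1b2c3d4e5f6a7b8c9d0e1",  # hypothetical
    "start": iso8601(start),
    "end": iso8601(end),
    "onlyMatchedPredictions": True,
}
```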

### Create Deployment Actuals from Dataset

**Slug:** `DATAROBOT_CREATE_DEPLOYMENTS_ACTUALS_FROM_DATASET`

Tool to submit actuals values from AI Catalog dataset for deployment monitoring. Use when you need to upload actual outcomes for predictions to enable accuracy tracking and monitoring. The submission is asynchronous - poll the Location header URL to check completion status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user` | string | No | The username for database authentication. |
| `password` | string | No | The password for database authentication. |
| `datasetId` | string | Yes | The ID of the dataset from the AI Catalog. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `timestampColumn` | string | No | Column name that contains the datetime when actual values were obtained. |
| `datasetVersionId` | string | No | The version of the dataset to retrieve. |
| `wasActedOnColumn` | string | No | Column name that contains boolean values indicating whether any action was taken based on the prediction. |
| `actualValueColumn` | string | Yes | Column name that contains the actual values. |
| `associationIdColumn` | string | Yes | Column name that contains the unique identifiers used to match predicted rows. |
| `keepActualsWithoutPredictions` | boolean | No | Indicates whether actuals without predictions are kept. Defaults to true. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Deployment Actuals from JSON

**Slug:** `DATAROBOT_CREATE_DEPLOYMENTS_ACTUALS_FROM_JSON`

Submit actual values for predictions to a DataRobot deployment. Use this to track model accuracy and performance over time by providing ground truth values that can be compared against predictions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | array | Yes | A list of actual value records. Minimum size is 1 and maximum size is 10,000 items per request. |
| `deploymentId` | string | Yes | Unique identifier of the deployment to submit actuals for. Use LIST_DEPLOYMENTS to find available deployment IDs. |
| `keepActualsWithoutPredictions` | boolean | No | Indicates whether actuals without matching predictions should be kept. Defaults to true. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
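
Because `data` is capped at 10,000 items per request, larger actuals sets need to be split across calls. A sketch of that chunking; the record fields (`associationId`, `actualValue`) follow common DataRobot actuals conventions but are an assumption here:

```python
# Split actual-value records into chunks of at most 10,000, the documented
# per-request maximum for DATAROBOT_CREATE_DEPLOYMENTS_ACTUALS_FROM_JSON.

MAX_ACTUALS_PER_REQUEST = 10_000

def chunk_actuals(records, size=MAX_ACTUALS_PER_REQUEST):
    if not records:
        raise ValueError("at least one actual record is required")
    return [records[i:i + size] for i in range(0, len(records), size)]

records = [{"associationId": str(i), "actualValue": i % 2}
           for i in range(25_000)]
batches = chunk_actuals(records)
# Each batch would be sent as the `data` field of one tool call.
```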

### Create Deployment Custom Metric

**Slug:** `DATAROBOT_CREATE_DEPLOYMENTS_CUSTOM_METRICS`

Create a deployment custom metric for monitoring deployment or model-specific metrics. Use for numeric metrics (average, gauge, sum) or categorical metrics to track custom KPIs beyond standard monitoring.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name of the custom metric |
| `type` | string ("average" | "categorical" | "gauge" | "sum") | Yes | Type and aggregation character of the metric. For numeric metrics use average, gauge, or sum. For categorical metrics use categorical. |
| `units` | string | No | Units or Y-axis label for numeric custom metrics (e.g., 'requests', 'seconds', 'errors'). Required for numeric metrics. |
| `value` | object | No | Reference to a column in a columnar dataset. |
| `timeStep` | string ("hour") | No | Time bucket size for aggregating metric values. Currently only 'hour' is supported. |
| `timestamp` | object | No | Configuration for timestamp spoofing when reading values from datasets. |
| `categories` | array | No | Category definitions for categorical custom metrics. Required for categorical metrics. Maximum 25 categories. |
| `description` | string | No | Optional description of the custom metric (max 1000 characters) |
| `sampleCount` | object | No | Reference to a column in a columnar dataset. |
| `deploymentId` | string | Yes | Unique identifier of the deployment to add the custom metric to. Use LIST_DEPLOYMENTS to find available deployment IDs. |
| `baselineValues` | array | No | Baseline values for numeric custom metrics (average, gauge, sum). Required for numeric metrics. Maximum 5 values. |
| `directionality` | string ("higherIsBetter" | "lowerIsBetter") | No | Directionality of a numeric custom metric. |
| `isModelSpecific` | boolean | Yes | Determines whether the metric is related to the model (true) or deployment (false) |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
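
The conditional requirements above (numeric metrics need `units` and `baselineValues`; categorical metrics need `categories`) can be enforced before calling the tool. A sketch, assuming baseline values are objects with a `value` field, which the table does not specify:

```python
# Enforce the conditional requirements for DATAROBOT_CREATE_DEPLOYMENTS_
# CUSTOM_METRICS: numeric types need units + baselineValues (max 5),
# categorical needs categories (max 25).

NUMERIC_TYPES = {"average", "gauge", "sum"}

def build_custom_metric(name, metric_type, is_model_specific, *,
                        units=None, baseline_values=None, categories=None):
    body = {"name": name, "type": metric_type,
            "isModelSpecific": is_model_specific}
    if metric_type in NUMERIC_TYPES:
        if not units or not baseline_values:
            raise ValueError("numeric metrics require units and baselineValues")
        if len(baseline_values) > 5:
            raise ValueError("at most 5 baseline values are allowed")
        body.update(units=units, baselineValues=baseline_values)
    elif metric_type == "categorical":
        if not categories or len(categories) > 25:
            raise ValueError("categorical metrics require 1-25 categories")
        body["categories"] = categories
    else:
        raise ValueError(f"unknown metric type: {metric_type}")
    return body

metric = build_custom_metric("Revenue per request", "sum", False,
                             units="USD", baseline_values=[{"value": 0.0}])
```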

### Create Deployment Custom Metrics Bulk Upload

**Slug:** `DATAROBOT_CREATE_DEPLOYMENTS_CUSTOM_METRICS_BULK_UPLOAD`

Bulk upload custom metric values to a DataRobot deployment. Use this to efficiently submit multiple timestamped custom metric values in a single request for model monitoring and observability.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `buckets` | array | Yes | A list of timestamped buckets with custom metric values. Minimum 1 item, maximum 10,000 items per request. |
| `modelId` | string | No | For a model metric, the ID of the related champion/challenger model whose metric values should be updated. Not needed for a deployment metric. |
| `deploymentId` | string | Yes | Unique identifier of the deployment to upload custom metrics for. Use LIST_DEPLOYMENTS to find available deployment IDs. |
| `modelPackageId` | string | No | For a model metric, the ID of the related champion/challenger model package whose metric values should be updated. Not needed for a deployment metric. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
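
A sketch of assembling the `buckets` array. The per-bucket shape shown here (`customMetricId`, `timestamp`, `value`) is an assumption based on the descriptions above, not a documented schema; only the 10,000-item cap comes from the table:

```python
# Assemble timestamped buckets for DATAROBOT_CREATE_DEPLOYMENTS_CUSTOM_
# METRICS_BULK_UPLOAD, staying under the 10,000-per-request maximum.
from datetime import datetime, timezone

def make_bucket(custom_metric_id, value, ts):
    return {"customMetricId": custom_metric_id,
            "timestamp": ts.strftime("%Y-%m-%dT%H:%M:%SZ"),
            "value": value}

buckets = [
    make_bucket("metric123", v, datetime(2026, 3, 1, h, tzinfo=timezone.utc))
    for h, v in enumerate([0.91, 0.88, 0.93])
]
assert len(buckets) <= 10_000  # documented per-request maximum
payload = {"deploymentId": "deploy456", "buckets": buckets}
```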

### Upload Custom Metric Values from Dataset

**Slug:** `DATAROBOT_CREATE_DEPLOYMENTS_CUSTOM_METRICS_FROM_DATASET`

Tool to upload custom metric values from a dataset to a deployment. Use when you need to populate custom metrics with data from an existing dataset. The operation is asynchronous and returns a location URL to check the import status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `batch` | object | No | Column name specification for dataset column mapping |
| `value` | object | No | Column name specification for dataset column mapping |
| `modelId` | string | No | For a model metric, the ID of the related champion/challenger model whose metric values should be updated. Not needed for a deployment metric. |
| `segments` | array | No | List of segments for the custom metric used in segmented analysis. Cannot be used with geospatial custom metrics. |
| `datasetId` | string | Yes | ID of the dataset to process for custom metric values. |
| `timestamp` | object | No | Timestamp column specification with optional format |
| `geospatial` | object | No | Column name specification for dataset column mapping |
| `sampleCount` | object | No | Column name specification for dataset column mapping |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `associationId` | object | No | Column name specification for dataset column mapping |
| `customMetricId` | string | Yes | Unique identifier of the custom metric. |
| `modelPackageId` | string | No | For a model metric, the ID of the related champion/challenger model package whose metric values should be updated. Not needed for a deployment metric. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Upload Custom Metrics from JSON

**Slug:** `DATAROBOT_CREATE_DEPLOYMENTS_CUSTOM_METRICS_FROM_JSON`

Tool to upload custom metric values from JSON for a DataRobot deployment. Use when you need to submit custom monitoring metrics (e.g., business KPIs, external performance measures) to track alongside model predictions. The operation is asynchronous - returns immediately with a location URL to poll for job status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dryRun` | boolean | No | If true, validates the request without saving data to the database. Use for testing before actual submission. Defaults to false. |
| `buckets` | array | Yes | List of timestamped buckets with custom metric values. Maximum 10,000 buckets per request. Each bucket must contain a 'value' field. |
| `modelId` | string | No | For model-level metrics, the ID of the model (champion/challenger) to update. Not needed for deployment-level metrics. |
| `deploymentId` | string | Yes | Unique identifier of the DataRobot deployment (24-character hex string). Use DATAROBOT_LIST_DEPLOYMENTS to find deployment IDs. |
| `customMetricId` | string | Yes | Unique identifier of the custom metric to update (24-character hex string). Use DATAROBOT_LIST_CUSTOM_METRICS to find metric IDs. |
| `modelPackageId` | string | No | For model-level metrics, the ID of the model package to update. Not needed for deployment-level metrics. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Deployment Monitoring Batch

**Slug:** `DATAROBOT_CREATE_DEPLOYMENTS_MONITORING_BATCHES`

Tool to create a monitoring batch for a DataRobot deployment. Use when you need to organize predictions into named batches for monitoring and tracking purposes. Monitoring batches help group related predictions for analysis and reporting.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `isLocked` | boolean | No | Whether or not predictions can be added to the batch. Set to true to prevent new predictions from being added. |
| `batchName` | string | Yes | Name of the monitoring batch. |
| `description` | string | No | Description of the monitoring batch. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. Use DATAROBOT_LIST_DEPLOYMENTS to find available deployments. |
| `externalContextUrl` | string | No | External URL associated with the batch. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Delete Deployment Monitoring Data

**Slug:** `DATAROBOT_CREATE_DEPLOYMENTS_MONITORING_DATA_DELETIONS`

Tool to delete deployment monitoring data for a specific model within a time period. Use when you need to remove historical monitoring data for a model in a deployment. Specify time ranges using top-of-the-hour RFC3339 datetime strings.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to delete monitoring data, defaults to the next top of the hour. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z |
| `start` | string | No | Start of the period to delete monitoring data, defaults to 7 days ago from the end of the period. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z |
| `modelId` | string | Yes | The ID of the model for which monitoring data are being deleted. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. Use DATAROBOT_LIST_DEPLOYMENTS to find available deployments. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
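
Since this endpoint only accepts top-of-the-hour RFC3339 timestamps, arbitrary datetimes need to be rounded to hour boundaries first: down for `start`, up for `end`, so the requested window is never silently narrowed. A sketch with hypothetical IDs:

```python
# Round datetimes to hour boundaries before building a
# DATAROBOT_CREATE_DEPLOYMENTS_MONITORING_DATA_DELETIONS request.
from datetime import datetime, timedelta, timezone

def floor_hour(dt):
    return dt.replace(minute=0, second=0, microsecond=0)

def ceil_hour(dt):
    floored = floor_hour(dt)
    return floored if floored == dt else floored + timedelta(hours=1)

start = floor_hour(datetime(2019, 8, 1, 9, 42, tzinfo=timezone.utc))
end = ceil_hour(datetime(2019, 8, 3, 17, 5, tzinfo=timezone.utc))

payload = {
    "deploymentId": "deploy456",  # hypothetical
    "modelId": "model789",        # hypothetical
    "start": start.strftime("%Y-%m-%dT%H:%M:%SZ"),
    "end": end.strftime("%Y-%m-%dT%H:%M:%SZ"),
}
```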

### Create Deployment Prediction Data Export

**Slug:** `DATAROBOT_CREATE_DEPLOYMENTS_PREDICTION_DATA_EXPORTS`

Tool to create a prediction data export for a deployment. Use when you need to export prediction data for observability and data exploration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period of prediction data to collect. Defaults to now, or a week after the start time. |
| `start` | string | No | Start of the period of prediction data to collect. Defaults to a week before the end time. |
| `modelId` | string | No | The ID of the model. |
| `batchIds` | array | No | IDs of batches to export (1-100 items). Null for real-time data exports. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `augmentationType` | string ("NO_AUGMENTATION" | "ACTUALS_AND_METRICS") | No | Type of augmentation to apply to prediction data. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Deployment Retraining Policy

**Slug:** `DATAROBOT_CREATE_DEPLOYMENTS_RETRAINING_POLICIES`

Tool to create a deployment retraining policy in DataRobot. Use when you need to set up automated model retraining based on schedules, data drift, or accuracy decline triggers.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name of the retraining policy. |
| `action` | string ("create_challenger" | "create_model_package" | "model_replacement") | No | Action to take with the resulting new model |
| `trigger` | object | No | Retraining policy trigger configuration |
| `useCaseId` | string | No | The ID of the use case to be used in this policy. |
| `customJobId` | string | No | The ID of the custom job to be used in this policy. |
| `description` | string | No | Description of the retraining policy. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `projectOptions` | object | No | Options for projects used to build new models |
| `autopilotOptions` | object | No | Autopilot options for projects used to build new models |
| `timeSeriesOptions` | object | No | Time series project options used to build new models |
| `featureListStrategy` | string ("informative_features" | "same_as_champion") | No | Configure the feature list strategy used for modeling |
| `modelSelectionStrategy` | string ("autopilot_recommended" | "same_blueprint" | "same_hyperparameters" | "custom_job") | No | Configure how the new model is selected when the retraining policy runs |
| `projectOptionsStrategy` | string ("same_as_champion" | "override_champion" | "custom") | No | Configure the project option strategy used for modeling |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
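
The enumerated `action` and `modelSelectionStrategy` values above can be validated client-side before creating a policy. A minimal sketch with hypothetical IDs:

```python
# Validate the enumerated strategy fields before building a
# DATAROBOT_CREATE_DEPLOYMENTS_RETRAINING_POLICIES request body.

ACTIONS = {"create_challenger", "create_model_package", "model_replacement"}
MODEL_SELECTION = {"autopilot_recommended", "same_blueprint",
                   "same_hyperparameters", "custom_job"}

def build_retraining_policy(deployment_id, name, *, action=None,
                            model_selection_strategy=None):
    if action is not None and action not in ACTIONS:
        raise ValueError(f"invalid action: {action}")
    if (model_selection_strategy is not None
            and model_selection_strategy not in MODEL_SELECTION):
        raise ValueError(f"invalid strategy: {model_selection_strategy}")
    body = {"deploymentId": deployment_id, "name": name}
    if action:
        body["action"] = action
    if model_selection_strategy:
        body["modelSelectionStrategy"] = model_selection_strategy
    return body

policy = build_retraining_policy("deploy456", "Weekly drift retrain",
                                 action="create_challenger",
                                 model_selection_strategy="same_blueprint")
```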

### Create Deployment Training Data Export

**Slug:** `DATAROBOT_CREATE_DEPLOYMENTS_TRAINING_DATA_EXPORTS`

Tool to create a deployment training data export in DataRobot. Use when you need to export the training data used for a deployed model. The export is enqueued asynchronously and processes in the background. Poll the returned location URL to check export status and retrieve the data once ready.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | No | Optional ID of the specific model to export training data for. If not provided, uses the deployment's current champion model. |
| `deploymentId` | string | Yes | Unique identifier of the deployment for which to create a training data export. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Entity Notification Channel

**Slug:** `DATAROBOT_CREATE_ENTITY_NOTIFICATION_CHANNELS`

Tool to create an entity notification channel in DataRobot. Use when you need to set up notifications for deployments or custom jobs.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the new notification channel (max 100 characters) |
| `orgId` | string | No | The ID of the organization that the notification channel belongs to |
| `drEntities` | array | No | The IDs of the DataRobot users, groups, or custom jobs associated with the DataRobotUser, DataRobotGroup, or DataRobotCustomJob channel types (1-100 items) |
| `payloadUrl` | string | No | The payload URL of the new notification channel (required for Webhook, Slack, MSTeams channel types) |
| `channelType` | string | Yes | The type of the new notification channel. Valid values: DataRobotCustomJob, DataRobotGroup, DataRobotUser, Database, Email, InApp, InsightsComputations, MSTeams, Slack, Webhook |
| `contentType` | string | No | The content type of the messages of the new notification channel. Valid values: application/json, application/x-www-form-urlencoded |
| `secretToken` | string | No | Secret token to be used for the new notification channel |
| `validateSsl` | boolean | No | Defines whether SSL should be validated for the notification channel |
| `emailAddress` | string | No | The email address to be used in the new notification channel (required for Email channel type) |
| `languageCode` | string | No | The preferred language code. Valid values: en, es_419, fr, ja, ko, ptBR |
| `customHeaders` | array | No | Custom headers and their values to be sent in the new notification channel (max 100 items) |
| `relatedEntityId` | string | Yes | The ID of the related entity (deployment or custom job) |
| `verificationCode` | string | No | Required if the channel type is Email |
| `relatedEntityType` | string | Yes | Type of related entity. Valid values: deployment, Deployment, DEPLOYMENT, customjob, Customjob, CUSTOMJOB |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
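
Several parameters above are required only for certain channel types (`payloadUrl` for Webhook/Slack/MSTeams, `emailAddress` for Email, `drEntities` for the DataRobot* types). A sketch of checking those rules before submission; the Slack webhook URL shown is a placeholder:

```python
# Check the per-channel-type requirements documented above for
# DATAROBOT_CREATE_ENTITY_NOTIFICATION_CHANNELS.

URL_CHANNELS = {"Webhook", "Slack", "MSTeams"}
ENTITY_CHANNELS = {"DataRobotUser", "DataRobotGroup", "DataRobotCustomJob"}

def validate_channel(body):
    ctype = body["channelType"]
    if ctype in URL_CHANNELS and "payloadUrl" not in body:
        raise ValueError(f"{ctype} channels require payloadUrl")
    if ctype == "Email" and "emailAddress" not in body:
        raise ValueError("Email channels require emailAddress")
    if ctype in ENTITY_CHANNELS and not body.get("drEntities"):
        raise ValueError(f"{ctype} channels require drEntities")
    return body

channel = validate_channel({
    "name": "Deployment alerts",
    "channelType": "Slack",
    "payloadUrl": "https://hooks.slack.com/services/T000/B000/XXXX",
    "relatedEntityId": "deploy456",      # hypothetical
    "relatedEntityType": "deployment",
})
```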

### Create Entity Notification Policy

**Slug:** `DATAROBOT_CREATE_ENTITY_NOTIFICATION_POLICIES`

Tool to create an entity notification policy in DataRobot. Use when you need to set up automated notifications for specific events on deployments or custom jobs (e.g., health changes, model replacements, batch job completions). Requires a pre-existing notification channel; create one via the entity notification channels endpoint first.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the new notification policy. Must not exceed 100 characters. |
| `orgId` | string | No | The ID of the organization that owns the notification policy. If not specified, defaults to the user's current organization. |
| `active` | boolean | No | Defines if the notification policy is active or not. If not specified, defaults to true. |
| `channelId` | string | Yes | The ID of the notification channel to be used to send the notification. Must be a pre-existing channel created via the entity notification channels endpoint. |
| `eventType` | string | No | The specific type of event that triggers the notification. Examples include 'model_deployments.deployment_creation', 'model_deployments.accuracy_red', or 'batch_predictions.success'. Mutually exclusive with eventGroup. |
| `eventGroup` | string | No | The group of events that trigger the notification. Use this for broad event categories like 'model_deployments.all' or 'batch_predictions.all'. Mutually exclusive with eventType. |
| `channelScope` | string ("organization" | "Organization" | "ORGANIZATION" | "entity" | "Entity" | "ENTITY" | "template" | "Template" | "TEMPLATE") | Yes | Scope of the channel. Determines the visibility level of the notification channel (organization-level, entity-level, or template-based). |
| `relatedEntityId` | string | Yes | The ID of the related entity (deployment or custom job) that this policy monitors. |
| `maximalFrequency` | string | No | Maximal frequency between policy runs in ISO 8601 duration string format (e.g., 'PT1H' for 1 hour, 'P1D' for 1 day). Limits how often notifications are sent. |
| `relatedEntityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "customjob" | "Customjob" | "CUSTOMJOB") | Yes | Type of related entity. Specifies whether this policy monitors a deployment or a custom job. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
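The constraints above (100-character name limit, mutually exclusive `eventType`/`eventGroup`) can be checked client-side before invoking the tool. A minimal sketch, with `build_notification_policy` as a hypothetical helper (not part of the DataRobot API):

```python
# Hypothetical helper: builds a request body for
# DATAROBOT_CREATE_ENTITY_NOTIFICATION_POLICIES and enforces the
# documented constraints before anything is sent over the wire.
def build_notification_policy(name, channel_id, channel_scope,
                              related_entity_id, related_entity_type,
                              event_type=None, event_group=None,
                              maximal_frequency=None, active=True):
    if len(name) > 100:
        raise ValueError("name must not exceed 100 characters")
    # eventType and eventGroup are mutually exclusive per the docs.
    if event_type and event_group:
        raise ValueError("eventType and eventGroup are mutually exclusive")
    body = {
        "name": name,
        "channelId": channel_id,
        "channelScope": channel_scope,
        "relatedEntityId": related_entity_id,
        "relatedEntityType": related_entity_type,
        "active": active,
    }
    if event_type:
        body["eventType"] = event_type
    if event_group:
        body["eventGroup"] = event_group
    if maximal_frequency:
        body["maximalFrequency"] = maximal_frequency  # ISO 8601, e.g. 'PT1H'
    return body
```

Validating locally gives a clearer error than a rejected API call when both event fields are supplied by mistake.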

### Create Entity Notification Policy from Template

**Slug:** `DATAROBOT_CREATE_ENTITY_NOTIFICATION_POLICIES_FROM_TEMPLATE`

Tool to create an entity notification policy from a template. Use when you need to set up automated notifications for events on deployments or custom jobs. The policy defines when notifications should be sent and through which channel (email, Slack, webhook, etc.).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the new notification policy (max 100 characters). |
| `active` | boolean | No | Defines if the notification policy is active or not. Defaults to True if not specified. |
| `templateId` | string | Yes | The ID of the notification policy template to use. Use the API to list available templates for a notification channel. |
| `relatedEntityId` | string | Yes | The ID of the related entity (deployment or custom job). Use LIST_DEPLOYMENTS or LIST_CUSTOM_JOBS to find entity IDs. |
| `relatedEntityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "customjob" | "Customjob" | "CUSTOMJOB") | Yes | Type of the related entity. Must be 'deployment' or 'customjob'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Entity Notification Policy Template

**Slug:** `DATAROBOT_CREATE_ENTITY_NOTIFICATION_POLICY_TEMPLATE`

Tool to create an entity notification policy template in DataRobot. Use when you need to set up notification rules for deployments or custom jobs. Templates allow you to define reusable notification policies that can be applied to multiple entities.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the new notification policy template (max 100 characters). |
| `orgId` | string | No | The ID of the organization that owns the notification policy. If not specified, uses the default organization. |
| `active` | boolean | No | Defines if the notification policy is active or not. Defaults to true if not specified. |
| `channelId` | string | Yes | The ID of the notification channel to be used to send the notification. Use LIST_NOTIFICATION_CHANNELS to find available channels. |
| `eventType` | string | No | The specific type of the event that triggers the notification. Use either eventGroup or eventType, but not both. |
| `eventGroup` | string ("secure_config.all" | "dataset.all" | "file.all" | "comment.all" | "invite_job.all" | "deployment_prediction_explanations_computation.all" | "model_deployments.critical_health" | "model_deployments.critical_frequent_health_change" | "model_deployments.frequent_health_change" | "model_deployments.health" | "model_deployments.retraining_policy" | "inference_endpoints.health" | "model_deployments.management_agent" | "model_deployments.management_agent_health" | "prediction_request.all" | "challenger_management.all" | "challenger_replay.all" | "model_deployments.all" | "project.all" | "perma_delete_project.all" | "users_delete.all" | "applications.all" | "model_version.stage_transitions" | "model_version.all" | "use_case.all" | "batch_predictions.all" | "change_requests.all" | "custom_job_run.all" | "custom_job_run.unsuccessful" | "insights_computation.all" | "notebook_schedule.all" | "monitoring.all") | No | Event groups that trigger notifications. |
| `maximalFrequency` | string | No | Maximal frequency between policy runs in ISO 8601 duration string (e.g., 'PT1H' for 1 hour, 'P1D' for 1 day). |
| `relatedEntityType` | string ("deployment" | "customjob") | Yes | Type of related entity that this template applies to (deployment or customjob). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Entity Tag

**Slug:** `DATAROBOT_CREATE_ENTITY_TAG`

Tool to create a new entity tag in DataRobot. Use when you need to tag experiment containers for organization and categorization.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the tag to create. Must be 100 characters or less. |
| `entityType` | string ("experiment_container") | Yes | The type of entity to tag. Currently only 'experiment_container' is supported. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Execution Environment Version

**Slug:** `DATAROBOT_CREATE_EXECUTION_ENVIRONMENTS_VERSIONS`

Tool to create a new version for an existing DataRobot execution environment. Use when you need to add a new version to an execution environment by specifying a Docker image URI, Docker context, or Docker image file. The environment version build is asynchronous - the API returns immediately with a 202 status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `label` | string | No | Human-readable version indicator. If not specified, defaults to empty string. |
| `contextUrl` | string | No | Docker context URL for the environment version. Informational only; not used for automated processing. If not specified, defaults to empty string. |
| `description` | string | No | The description of the environment version. If not specified, defaults to empty string. |
| `environmentId` | string | Yes | The ID of the execution environment to create a version for. |
| `dockerImageUri` | string | No | The URI of the Docker image that is used to build the environment version (e.g., 'python:3.11-slim'). Parameter dockerContext may also be provided to upload context, but the image URI is used for the build. Use this for images hosted on Docker registries. |
| `environmentVersionId` | string | No | The ID the new environment version should use. Only admins can create environment versions with pre-defined IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
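Because the build is asynchronous and the API returns 202 immediately, callers typically poll until the version is ready. A sketch under assumptions: `fetch_status` stands in for a real status request (for example a GET on a returned Location URL) and the status strings are illustrative, not confirmed DataRobot values.

```python
import time

# Poll an asynchronous build started by
# DATAROBOT_CREATE_EXECUTION_ENVIRONMENTS_VERSIONS until it reaches a
# terminal state or the timeout elapses.
def wait_for_build(fetch_status, poll_seconds=5, timeout_seconds=600):
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = fetch_status()  # e.g. a GET on the build's status URL
        if status in ("success", "failed"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("environment version build did not finish in time")
```

Injecting `fetch_status` keeps the polling logic independent of how the tool is actually invoked.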

### Request Execution Environment Version Build

**Slug:** `DATAROBOT_CREATE_EXECUTION_ENVIRONMENTS_VERSIONS_DOWNLOAD`

Tool to request an on-demand image build for an execution environment version. Use when you need to trigger a build for a specific environment version before downloading or deploying it.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `environmentId` | string | Yes | The ID of the execution environment. |
| `environmentVersionId` | string | Yes | The ID of the environment version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create External Data Store Standard UDF

**Slug:** `DATAROBOT_CREATE_EXT_DS_STANDARD_USER_DEF_FUNCTIONS`

Tool to start a job that creates a standard user-defined function in an external data store. Use when you need to add rolling aggregation functions (median or most frequent) to a database for use in DataRobot feature engineering. Requires write permissions to the data store.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `schema` | string | Yes | The database schema name where the user-defined function will be created or detected. Typically uppercase (e.g., 'PUBLIC' for Snowflake). |
| `dataStoreId` | string | Yes | ID of the external data store where the user-defined function will be created. Use DATAROBOT_LIST_EXTERNAL_DATA_STORES to find available data store IDs. |
| `credentialId` | string | No | ID of the credentials to use for authentication to the data store. Use DATAROBOT_LIST_CREDENTIALS to find available credential IDs. If omitted, the default credentials for the data store will be used. |
| `functionType` | string ("rolling_median" | "rolling_most_frequent") | Yes | Type of standard user-defined function to create. Choose 'rolling_median' for median calculations over rolling windows, or 'rolling_most_frequent' for mode calculations over rolling windows. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create External Data Source

**Slug:** `DATAROBOT_CREATE_EXTERNAL_DATA_SOURCES`

Tool to create a new external data source in DataRobot. Use when you need to create a connection to an external database or filesystem. Requires a valid dataStoreId from an existing data store connection.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `type` | string ("dr-connector-v1" | "dr-database-v1" | "jdbc") | Yes | Data source type. |
| `params` | string | Yes | Data source configuration parameters. Must match the data source type. |
| `canonicalName` | string | Yes | Data source canonical name. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create External Data Store

**Slug:** `DATAROBOT_CREATE_EXTERNAL_DATA_STORES`

Tool to create a new external data store in DataRobot. Use when establishing a connection to external databases or data systems. The created data store ID can then be used with CREATE_EXTERNAL_DATA_SOURCES to define specific tables or queries.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `type` | string ("jdbc" | "dr-connector-v1" | "dr-database-v1") | Yes | Data store type. Use 'jdbc' for JDBC connections, 'dr-connector-v1' for DataRobot connectors, 'dr-database-v1' for DataRobot database drivers. |
| `params` | string | Yes | Data store configuration parameters. Must match the selected data store type. |
| `canonicalName` | string | Yes | The user-friendly name of the data store. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
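Since `params` is declared as a string, a structured configuration has to be serialized before it is passed to the tool. The sketch below assumes a JDBC store whose parameters include a driver ID and a JDBC URL; the exact keys inside `params` depend on the connection type and are an assumption here, not confirmed field names.

```python
import json

# Illustrative payload for DATAROBOT_CREATE_EXTERNAL_DATA_STORES.
# `driverId` and `jdbcUrl` are assumed parameter names for a JDBC
# store; `params` is serialized because the tool declares it a string.
def jdbc_data_store_payload(canonical_name, driver_id, jdbc_url):
    params = {"driverId": driver_id, "jdbcUrl": jdbc_url}
    return {
        "type": "jdbc",
        "canonicalName": canonical_name,
        "params": json.dumps(params),
    }
```

The returned data store ID can then feed CREATE_EXTERNAL_DATA_SOURCES, as noted above.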

### Get External Data Store Columns

**Slug:** `DATAROBOT_CREATE_EXTERNAL_DATA_STORES_COLUMNS`

Tool to retrieve column metadata from an external data store. Use when you need to discover the schema and data types of tables in an external database connection. Requires a valid dataStoreId and authentication credentials.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user` | string | No | Username for the database connection. |
| `query` | string | No | Schema query to execute for retrieving columns. |
| `table` | string | No | Table name to retrieve columns from. |
| `schema` | string | No | Schema name to retrieve columns from. |
| `catalog` | string | No | Name of specified database catalog. |
| `password` | string | No | Password for the database connection. |
| `dataStoreId` | string | Yes | ID of the external data store to retrieve columns from. |
| `credentialId` | string | No | ID of credential mapping for authentication. Use LIST_CREDENTIALS to find available credentials. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Data Store Columns Info

**Slug:** `DATAROBOT_CREATE_EXTERNAL_DATA_STORES_COLUMNS_INFO`

Tool to retrieve column metadata for a table in an external data store. Use when you need to inspect the schema and structure of a database table. This operation is only supported for JDBC data stores.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user` | string | No | Username for data store authentication (deprecated - use credentialId instead). |
| `limit` | integer | No | Maximum number of results to return per page. |
| `table` | string | Yes | Name of the table to retrieve column information for. |
| `types` | string | No | Include only credentials of the specified type. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `schema` | string | No | Schema name where the table resides. |
| `catalog` | string | No | Name of the database catalog. |
| `orderBy` | string ("creationDate" | "-creationDate") | No | Sort order for credentials. Defaults to -creationDate (newest first). |
| `password` | string | No | Password for data store authentication (deprecated - use credentialId instead). |
| `dataStoreId` | string | Yes | ID of the external data store (24-character hex string). |
| `useKerberos` | boolean | No | Whether to use Kerberos for data store authentication. |
| `credentialId` | string | No | ID of the credentials to use for authentication instead of username and password. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
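The `limit`/`offset` pair supports standard pagination, so fetching a wide table's full column list means looping until a short page comes back. A minimal sketch; `fetch_page` stands in for the actual tool call and is assumed to return a list of column records.

```python
# Page through DATAROBOT_CREATE_EXTERNAL_DATA_STORES_COLUMNS_INFO
# results using limit/offset until a short (or empty) page signals
# the end of the result set.
def fetch_all_columns(fetch_page, page_size=100):
    columns, offset = [], 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        columns.extend(page)
        if len(page) < page_size:  # short page means no more results
            return columns
        offset += page_size
```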

### Retrieve Data Store Schemas

**Slug:** `DATAROBOT_CREATE_EXTERNAL_DATA_STORES_SCHEMAS`

Tool to retrieve data store schemas. Use when you need to list available schemas and catalogs from an external data store.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user` | string | No | Username for data store authentication. |
| `password` | string | No | Password for data store authentication. |
| `dataStoreId` | string | Yes | ID of the external data store. |
| `useKerberos` | boolean | No | Whether to use Kerberos for data store authentication. |
| `credentialId` | string | No | ID of the set of credentials to use instead of username and password. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List External Data Store Tables

**Slug:** `DATAROBOT_CREATE_EXTERNAL_DATA_STORES_TABLES`

Tool to retrieve database tables and views from a DataRobot external data store. Use when you need to browse available tables before creating a dataset from an external data source.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user` | string | No | Username for data store authentication. Use credentialId instead for better security. |
| `schema` | string | No | Filter to show only tables in this specific schema. |
| `catalog` | string | No | Filter to show only tables in this specific catalog. |
| `password` | string | No | Password for data store authentication. Use credentialId instead for better security. |
| `dataStoreId` | string | Yes | ID of the external data store to query for available tables. |
| `useKerberos` | boolean | No | Whether to use Kerberos for data store authentication. Defaults to false. |
| `credentialId` | string | No | ID of the credentials to use for authenticating to the data store. Use DATAROBOT_LIST_CREDENTIALS to find available credentials. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create External OAuth Provider

**Slug:** `DATAROBOT_CREATE_EXTERNAL_O_AUTH_PROVIDERS`

Create an external OAuth provider configuration in DataRobot for integrating with external services. Use this to configure OAuth authentication for services like GitHub, GitLab, Bitbucket, Google, Box, Microsoft, Jira, or Confluence. The provider enables secure authentication flows for accessing external resources from DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Human-readable name for the OAuth provider configuration. |
| `type` | string ("github" | "gitlab" | "bitbucket" | "google" | "box" | "microsoft" | "jira" | "confluence") | Yes | Type of OAuth provider to configure. Must be one of the supported provider types. |
| `clientId` | string | Yes | OAuth client ID for the external provider. Obtain this from the OAuth provider's application settings. |
| `skipConsent` | boolean | No | Whether to bypass the OAuth consent screen. Set to true to skip user consent prompts. |
| `clientSecret` | string | Yes | OAuth client secret for the external provider. Keep this secure - obtain from the OAuth provider's application settings. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Empty Files Catalog Item

**Slug:** `DATAROBOT_CREATE_FILES`

Tool to create an empty files catalog item in DataRobot. Use when you need to create a placeholder catalog entry before uploading files to it. After creating the empty catalog item, use CREATE_FILES_FROM_FILE or CREATE_FILES_FROM_URL to populate it with actual files.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Copy Files in Batch

**Slug:** `DATAROBOT_CREATE_FILES_COPY_BATCH`

Tool to copy multiple files or folders in a batch operation within DataRobot's data registry. Use when you need to copy multiple files at once, either within the same catalog item or to a different catalog item.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `target` | string | No | Either a target folder to copy sources into or a target file name to copy a single file and rename it on copy. Folder paths must end with '/'. If not provided, files are copied within the same catalog item. |
| `sources` | array | Yes | List of file and folder names to copy. Folder paths must end with '/'. Minimum 1 item, maximum 100 items. |
| `catalogId` | string | Yes | The catalog item ID containing the files or folders to copy. |
| `overwrite` | string ("RENAME" | "REPLACE" | "SKIP" | "ERROR") | No | How to deal with a name conflict in the target location. RENAME (default): rename a duplicate file using '<filename> (n).ext' pattern. REPLACE: prefer files you copy. SKIP: prefer files existing in the target. ERROR: fail with an error in case of a naming conflict. |
| `targetCatalogId` | string | No | Target catalog ID to copy files into. If not provided, files are copied within the same catalog item. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
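Because a single request accepts at most 100 sources, a larger copy job has to be split across several calls. A minimal chunking sketch (the helper name is illustrative):

```python
# Split a source list into batches that respect the 1-100 item limit
# of DATAROBOT_CREATE_FILES_COPY_BATCH; each batch becomes one call.
def batch_sources(sources, batch_size=100):
    if not sources:
        raise ValueError("at least one source is required")
    return [sources[i:i + batch_size]
            for i in range(0, len(sources), batch_size)]
```

Remember that folder paths in each batch must still end with '/'.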

### Create Files Downloads

**Slug:** `DATAROBOT_CREATE_FILES_DOWNLOADS`

Tool to generate a temporary download URL for a DataRobot catalog file. Use when you need to download data from a catalog item. The download URL is valid for a limited time (default 60 seconds, maximum 5 minutes).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `duration` | integer | No | Access TTL (time-to-live) in seconds for the download URL. Maximum value is 300 seconds (5 minutes). Defaults to 60 seconds. |
| `fileName` | string | No | The name of a specific file to download from an unstructured dataset. If not specified, it will download either the original archive file or, if there is only one file, it will download that single file. |
| `catalog_id` | string | Yes | The catalog item ID of the file to download. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Add Files to Existing Catalog Item

**Slug:** `DATAROBOT_CREATE_FILES_FROM_FILE_BY_ID`

Tool to add file(s) to an existing DataRobot files catalog item. Use when you need to upload additional files to an existing catalog entry. The file addition is asynchronous - poll the returned location URL to check status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | object | Yes | The file to upload to the catalog item. |
| `prefix` | string | No | Folder path to prepend to uploaded file paths. Must end with '/'. Example: 'data/raw/' |
| `catalogId` | string | Yes | The catalog item ID of the existing files catalog entry to add files to. |
| `overwrite` | string ("RENAME" | "REPLACE" | "SKIP" | "ERROR") | No | How to handle name conflicts between existing files and the uploaded one. RENAME (default): rename uploaded file using '<filename> (n).ext' pattern. REPLACE: replace existing file with uploaded one. SKIP: keep existing file, skip uploaded one. ERROR: return HTTP 409 Conflict on naming conflict. |
| `useArchiveContents` | string ("True" | "False") | No | If true, extract archive contents (zip, tar, etc.) and associate them with the catalog entity. Defaults to 'True'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Files From Stage

**Slug:** `DATAROBOT_CREATE_FILES_FROM_STAGE`

Tool to apply staged files to a catalog item. Use when you need to finalize and commit files from a stage to a files catalog item.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `stageId` | string | Yes | The ID of the stage containing files to apply to the catalog item. |
| `catalogId` | string | Yes | The catalog item ID where staged files will be applied. |
| `overwrite` | string ("RENAME" | "REPLACE" | "SKIP" | "ERROR") | No | How to handle name conflicts between existing files and uploaded files. RENAME (default): rename uploaded file using '<filename> (n).ext' pattern. REPLACE: prefer uploaded file, replacing existing file. SKIP: prefer existing file, skip uploaded file. ERROR: return HTTP 409 Conflict if naming conflict occurs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Files From URL

**Slug:** `DATAROBOT_CREATE_FILES_FROM_URL`

Tool to create a files catalog item in DataRobot from a URL. Use when you need to import files from a publicly accessible URL into the DataRobot catalog. The file is downloaded and processed asynchronously - use GET_CATALOG_ITEM to check the import status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `url` | string | Yes | The URL to download the file from. Must be a valid, accessible URL pointing to a file to be added to the catalog. |
| `useArchiveContents` | string | No | If true, extract archive contents and associate them with the catalog entity. Valid values are 'true', 'True', 'false', or 'False'. Defaults to 'True'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
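Note that `useArchiveContents` is a string, not a boolean, so a Python `bool` needs converting before the call. A small payload sketch under that assumption (the helper and its URL check are illustrative, not part of the tool):

```python
# Build a request body for DATAROBOT_CREATE_FILES_FROM_URL, converting
# a Python bool into the string form the tool expects.
def files_from_url_payload(url, extract_archives=True):
    if not url.lower().startswith(("http://", "https://")):
        raise ValueError("url must be an HTTP(S) URL")
    return {
        "url": url,
        "useArchiveContents": "True" if extract_archives else "False",
    }
```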

### Add Files to Catalog Item from URL

**Slug:** `DATAROBOT_CREATE_FILES_FROM_URL_BY_ID`

Tool to add file(s) into an existing files catalog item from a URL. Use when you need to add additional files from a URL to an existing DataRobot files catalog item. The file download and processing happens asynchronously - poll the returned location URL to check status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `url` | string | Yes | The URL to download the file(s) to add to the catalog entity. Must be a valid, accessible HTTP/HTTPS URL. |
| `prefix` | string | No | Folder path to prepend to uploaded file paths. Must end with "/". Example: "data/raw/" |
| `catalogId` | string | Yes | The catalog item ID to add files to. This must be an existing files catalog item in DataRobot. |
| `overwrite` | string ("RENAME" | "REPLACE" | "SKIP" | "ERROR") | No | How to deal with a name conflict between an existing file and an uploaded one. RENAME (default): rename the uploaded file using '<filename> (n).ext' pattern. REPLACE: prefer the uploaded file and overwrite the existing one. SKIP: prefer the existing file and skip the upload. ERROR: return HTTP 409 Conflict response in case of a naming conflict. |
| `useArchiveContents` | string | No | If true, extract archive contents and associate them with the catalog entity. Valid values are 'true', 'True', 'false', or 'False'. Defaults to 'True'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Files Download Links

**Slug:** `DATAROBOT_CREATE_FILES_LINKS`

Tool to generate temporary download URLs for files in a catalog item. Use when you need to retrieve data from an unstructured dataset via direct download links. The generated URLs are temporary and valid for the specified duration (default 10 minutes, max 50 minutes).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `duration` | integer | No | Access time-to-live (TTL) in seconds. Controls how long the download URLs remain valid. Default is 600 seconds (10 minutes). Maximum value is 3000 seconds (50 minutes). |
| `catalogId` | string | Yes | The catalog item ID containing the files to download. |
| `fileNames` | array | Yes | The names of files to download from an unstructured dataset. Maximum 100 items. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create File Stage

**Slug:** `DATAROBOT_CREATE_FILES_STAGES`

Tool to create an empty stage for File Registry files upload. Use when you need to upload files to DataRobot's File Registry. The stage acts as a temporary upload container before finalizing the file.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `catalogId` | string | Yes | The catalog item ID of the file in the File Registry. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Stage File for Batch Upload

**Slug:** `DATAROBOT_CREATE_FILES_STAGES_UPLOAD`

Tool to stage a file for a batch upload in DataRobot. Use when uploading files to a specific catalog item's stage. Returns the catalog ID and stage ID upon successful upload.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | object | Yes | The file to upload to the stage. |
| `stage_id` | string | Yes | The stage ID (24-character hex string) where the file will be uploaded. |
| `catalog_id` | string | Yes | The catalog item ID (24-character hex string) where the file will be staged. |
| `original_file_name` | string | No | If the contents of the file being uploaded are derived from a file in the catalog entity, the name of the file it is derived from. |
| `original_catalog_id` | string | No | If the contents of the file being uploaded are derived from a file in a catalog entity which was used to clone the current catalog entity, the original catalog entity id in which the file exists. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
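
Both `stage_id` and `catalog_id` must be 24-character hex strings, so it is cheap to verify them before issuing the upload. A hedged sketch; the helper and the sample IDs in the test are hypothetical, and only the ID format comes from the parameter table above.

```python
import re

# Illustrative sketch: validate the 24-character hex IDs this staging tool
# expects, then return the non-file form fields for the request.

HEX24 = re.compile(r"^[0-9a-f]{24}$")

def build_stage_upload_fields(catalog_id, stage_id, original_file_name=None):
    for label, value in (("catalog_id", catalog_id), ("stage_id", stage_id)):
        if not HEX24.fullmatch(value):
            raise ValueError(f"{label} must be a 24-character hex string")
    fields = {"catalog_id": catalog_id, "stage_id": stage_id}
    if original_file_name is not None:
        fields["original_file_name"] = original_file_name
    return fields
```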

### Create File Version Download Links

**Slug:** `DATAROBOT_CREATE_FILES_VERSIONS_LINKS`

Tool to generate temporary download URLs for catalog file versions. Use when you need to retrieve data from a catalog item and version via URLs. The generated URLs expire after the specified duration (default 600 seconds, maximum 3000 seconds).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `duration` | integer | No | Access time-to-live (TTL) in seconds. Default is 600 seconds (10 minutes); maximum is 3000 seconds (50 minutes). |
| `catalogId` | string | Yes | The catalog item ID. |
| `fileNames` | array | No | The names of files to download from an unstructured dataset. If not specified, downloads either the original archive file or, if there is only one file, that file. Maximum of 100 file names. |
| `catalogVersionId` | string | Yes | The catalog version item ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create GenAI Chat Prompts

**Slug:** `DATAROBOT_CREATE_GENAI_CHAT_PROMPTS`

Tool to create a GenAI chat prompt and execute it with an LLM. Use when you want to send a prompt to a DataRobot LLM for completion, optionally leveraging vector database retrieval for RAG workflows.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `text` | string | Yes | The text of the user prompt (max 500,000 characters) |
| `llmId` | string | No | The ID of the LLM to use for this prompt. Updates associated chat/blueprint settings if provided. |
| `chatId` | string | No | The ID of the chat this prompt belongs to. If omitted, the prompt uses current chat settings. |
| `llmSettings` | object | No | LLM settings to use for the prompt. |
| `llmBlueprintId` | string | No | The ID of the LLM blueprint this prompt belongs to |
| `metadataFilter` | object | No | Metadata fields to filter vector database results |
| `vectorDatabaseId` | string | No | ID of the vector database to use for retrieval augmented generation |
| `vectorDatabaseSettings` | object | No | Vector database settings for the prompt. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
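
A chat-prompt request can be assembled with a small builder that enforces the 500,000-character limit from the parameter table. A hedged sketch; the helper name and the example LLM ID are hypothetical.

```python
# Illustrative sketch: assemble a chat-prompt request body, enforcing the
# documented 500,000-character limit on prompt text.

MAX_PROMPT_CHARS = 500_000

def build_chat_prompt_payload(text, llm_id=None, chat_id=None, vector_database_id=None):
    if not text or len(text) > MAX_PROMPT_CHARS:
        raise ValueError("text must be 1 to 500,000 characters")
    payload = {"text": text}
    if llm_id is not None:
        payload["llmId"] = llm_id          # updates associated chat/blueprint settings
    if chat_id is not None:
        payload["chatId"] = chat_id
    if vector_database_id is not None:
        payload["vectorDatabaseId"] = vector_database_id  # enables RAG retrieval
    return payload
```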

### Create GenAI Comparison Chat

**Slug:** `DATAROBOT_CREATE_GENAI_COMPARISON_CHATS`

Tool to create a new GenAI comparison chat within a playground for comparing different LLM responses. Use when you need to set up a new comparison chat environment to evaluate multiple model outputs side-by-side.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the comparison chat. Required field. |
| `playgroundId` | string | Yes | The ID of the playground to associate with the comparison chat. Required field. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create GenAI Custom Model Embedding Validations

**Slug:** `DATAROBOT_CREATE_GENAI_CUSTOM_MODEL_EMBEDDING_VALIDATIONS`

Tool to create and run validation tests for GenAI custom model embeddings. Use when you need to verify that a custom model deployment meets DataRobot's requirements for embedding generation before using it in production.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Custom name for this validation (max 5000 characters). Defaults to 'Untitled' if not provided. |
| `modelId` | string | No | The ID of the model used in the deployment. If provided, helps identify which specific model version is being validated. |
| `useCaseId` | string | No | ID of the use case associated with this validation. Helps organize validations by business use case. |
| `deploymentId` | string | Yes | The ID of the custom model deployment to validate. Use DATAROBOT_LIST_DEPLOYMENTS to find available deployments. |
| `promptColumnName` | string | Yes | The name of the column the custom model uses for prompt text input. This column should contain the text data that will be embedded. |
| `targetColumnName` | string | Yes | The name of the column the custom model uses for prediction output. This column should contain the embedding vector output. |
| `predictionTimeout` | integer | No | Timeout in seconds for API prediction requests during validation (range: 1-600, default: 300). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
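
The required fields and the `predictionTimeout` bounds (1-600 seconds, default 300) can be checked before submission. A minimal sketch; the helper name and placeholder IDs are hypothetical, and only the constraints from the parameter table are taken from the source.

```python
# Illustrative sketch: build the embedding-validation request body, checking
# the documented predictionTimeout range before submission.

def build_embedding_validation_payload(deployment_id, prompt_column, target_column,
                                       prediction_timeout=300, name=None):
    if not 1 <= prediction_timeout <= 600:
        raise ValueError("predictionTimeout must be between 1 and 600 seconds")
    payload = {
        "deploymentId": deployment_id,
        "promptColumnName": prompt_column,
        "targetColumnName": target_column,
        "predictionTimeout": prediction_timeout,
    }
    if name is not None:
        payload["name"] = name
    return payload
```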

### Create GenAI Custom Model LLM Validations

**Slug:** `DATAROBOT_CREATE_GENAI_CUSTOM_MODEL_LLM_VALIDATIONS`

Tool to create a GenAI custom model LLM validation. Use when you need to test and validate LLM deployment behavior. Returns a validation object with status TESTING initially; poll the validation status to check completion.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Display name for the validation (default: 'Untitled', max 5000 chars) |
| `useCaseId` | string | No | Associated use case identifier for organizing validations |
| `chatModelId` | string | No | OpenAI chat completion API model identifier (max 5000 chars) |
| `deploymentId` | string | Yes | The custom model deployment identifier to validate |
| `promptColumnName` | string | No | Column name for prompt input (max 5000 chars) |
| `targetColumnName` | string | No | Column name for prediction output (max 5000 chars) |
| `predictionTimeout` | integer | No | Timeout in seconds for predictions (1-600, default: 300) |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create GenAI Custom Model Vector Database Validation

**Slug:** `DATAROBOT_CREATE_GENAI_CUSTOM_MODEL_VECTOR_DB_VALIDATION`

Create a validation for a GenAI custom model deployment's vector database capabilities. Returns immediately with a validation ID; the validation runs asynchronously, and its status can be checked using the returned ID. Use when you need to verify that a custom model deployment can properly handle vector database operations for RAG (Retrieval-Augmented Generation) use cases.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Custom name for the validation. Defaults to 'Untitled' if not provided. Maximum length: 5000 characters. |
| `modelId` | string | No | The ID of the specific model version used in the deployment. If not provided, uses the current model from the deployment. |
| `useCaseId` | string | No | The ID of the use case to associate with the validated custom model. Links the validation to a specific business use case for tracking and governance. |
| `deploymentId` | string | Yes | The ID of the custom model deployment to validate. This must be a valid deployment ID from a custom GenAI model. |
| `promptColumnName` | string | Yes | The name of the column the custom model uses for prompt text input. This column will be used to send test prompts during validation. |
| `targetColumnName` | string | Yes | The name of the column the custom model uses for prediction output. This column will be checked for validation results. |
| `predictionTimeout` | integer | No | Timeout in seconds for prediction requests during validation. Must be between 1 and 600 seconds. Defaults to 300 seconds (5 minutes) if not specified. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
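
Because these validations run asynchronously, callers typically poll until the status leaves its initial state. A generic, hedged sketch of such a loop: `fetch_status` stands in for whatever call retrieves the validation (no specific DataRobot endpoint is assumed), and the `TESTING` in-progress value follows the lifecycle described for the LLM validation tool above.

```python
import time

def wait_for_validation(fetch_status, timeout_s=300, poll_s=5):
    """Poll an asynchronous validation until it leaves the in-progress
    'TESTING' state or the timeout expires. `fetch_status` is any
    callable returning the current status string."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status != "TESTING":
            return status
        time.sleep(poll_s)
    raise TimeoutError("validation did not complete within the timeout")
```

A fixed poll interval keeps the sketch simple; exponential backoff would be the usual refinement for longer-running validations.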

### Create GenAI Custom Model Version

**Slug:** `DATAROBOT_CREATE_GENAI_CUSTOM_MODEL_VERSIONS`

Tool to create a GenAI custom model version from an LLM blueprint in DataRobot. Use when you need to register a custom model version for deployment or testing. Prerequisite: an LLM blueprint must already exist before calling this endpoint.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `llmBlueprintId` | string | Yes | ID of the LLM blueprint to use for creating the custom model version (24-character hex string). Obtain from creating an LLM blueprint first. |
| `promptColumnName` | string | No | The name of the column containing prompts in the training dataset. Defaults to 'promptText'. |
| `targetColumnName` | string | No | The name of the column containing target/result text in the training dataset. Defaults to 'resultText'. |
| `insightsConfiguration` | array | No | Array of insight configurations for evaluating the model (e.g., toxicity, jailbreak, PII checks) |
| `llmTestConfigurationIds` | array | No | Array of LLM test configuration IDs to run against the custom model version (24-character hex strings) |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create GenAI Evaluation Dataset Configuration

**Slug:** `DATAROBOT_CREATE_GENAI_EVALUATION_DATASET_CONFIGURATIONS`

Tool to create a GenAI evaluation dataset configuration for testing LLM applications. Use when you need to set up evaluation datasets with prompt columns and optional response/tool call columns for assessing GenAI playground outputs. Required for running AI robustness tests on GenAI use cases.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Custom configuration name for the evaluation dataset (max 5000 characters). If omitted, DataRobot will auto-generate a name. |
| `datasetId` | string | Yes | The evaluation dataset identifier (24-character hex string). Use DATAROBOT_LIST_DATASETS to find available datasets. |
| `useCaseId` | string | Yes | The use case identifier (24-character hex string). This links the evaluation dataset to a specific use case. |
| `playgroundId` | string | Yes | The playground identifier (24-character hex string). This links the evaluation to a specific GenAI playground. |
| `promptColumnName` | string | Yes | The name of the dataset column containing prompt text. This column provides the input prompts for evaluation. |
| `correctnessEnabled` | boolean | No | Enable legacy correctness evaluation. This flag controls whether to use DataRobot's built-in correctness metrics. |
| `isSyntheticDataset` | boolean | No | Indicates whether the dataset contains synthetic data (default: false). Set to true if using AI-generated evaluation data. |
| `responseColumnName` | string | No | The name of the dataset column containing expected response text. Use this to compare actual LLM responses against ground truth. |
| `toolCallsColumnName` | string | No | The name of the dataset column containing expected tool calls for agentic workflows. Use this for evaluating agents that call external tools. |
| `agentGoalsColumnName` | string | No | The name of the dataset column containing expected agent goals for agentic workflows. Use this for evaluating multi-step agent behavior. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
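
With four required identifiers and a required prompt column, it helps to check for missing fields before calling the tool. A hedged sketch; the helper name is hypothetical, and the required-field set is taken directly from the parameter table above.

```python
# Illustrative sketch: verify that every required field from the parameter
# table is present before creating an evaluation dataset configuration.

REQUIRED_FIELDS = {"datasetId", "useCaseId", "playgroundId", "promptColumnName"}

def build_eval_dataset_config(**fields):
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return dict(fields)
```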

### Create GenAI LLM Blueprint

**Slug:** `DATAROBOT_CREATE_GENAI_LLM_BLUEPRINTS`

Tool to create a new GenAI LLM blueprint for generative AI applications. Use when you need to configure an LLM with specific settings, system prompts, and optional RAG capabilities. Requires a valid playgroundId and llmId. Set llmId to 'custom-model' when using customModelLlmSettings.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the LLM Blueprint |
| `llmId` | string | Yes | The ID of the LLM to use. Must be 'custom-model' if customModelLlmSettings is provided. Use DATAROBOT_LIST_GENAI_LLMS to find available LLM IDs. |
| `promptType` | string | No | The prompt type for the LLM Blueprint (e.g., CHAT_HISTORY_AWARE for conversational context, ONE_TIME_PROMPT for single prompts) |
| `description` | string | No | Optional description of the LLM Blueprint's purpose and functionality |
| `llmSettings` | object | No | LLM configuration settings for the blueprint. |
| `playgroundId` | string | Yes | The ID of the Playground where the LLM Blueprint will be created |
| `vectorDatabaseId` | string | No | The ID of the Vector Database to enable RAG (Retrieval Augmented Generation) |
| `customModelLlmSettings` | object | No | Custom model LLM configuration settings. |
| `vectorDatabaseSettings` | object | No | Vector database configuration for RAG (Retrieval Augmented Generation). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
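
The coupling rule stated above (llmId must be `'custom-model'` whenever `customModelLlmSettings` is supplied) is easy to enforce in a builder. A minimal sketch; the helper name and example values are hypothetical.

```python
# Illustrative sketch: assemble an LLM-blueprint request, enforcing the
# documented rule that llmId must be 'custom-model' whenever
# customModelLlmSettings is provided.

def build_llm_blueprint_payload(name, playground_id, llm_id,
                                custom_model_llm_settings=None, **optional):
    if custom_model_llm_settings is not None and llm_id != "custom-model":
        raise ValueError("llmId must be 'custom-model' when customModelLlmSettings is set")
    payload = {"name": name, "playgroundId": playground_id, "llmId": llm_id}
    if custom_model_llm_settings is not None:
        payload["customModelLlmSettings"] = custom_model_llm_settings
    payload.update(optional)  # e.g. promptType, vectorDatabaseId
    return payload
```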

### Create GenAI LLM Blueprints from Chat Prompt

**Slug:** `DATAROBOT_CREATE_GENAI_LLM_BLUEPRINTS_FROM_CHAT_PROMPT`

Tool to create a GenAI LLM blueprint from an existing chat prompt. Use when you have a chat prompt ID and want to convert it into a reusable LLM blueprint for experimentation and deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name for the LLM blueprint |
| `description` | string | No | Optional description for the LLM blueprint |
| `chatPromptId` | string | Yes | The ID of the chat prompt to convert into an LLM blueprint (24-character hex string). Use a GenAI chat prompt creation action to obtain this ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create GenAI LLM Blueprint from LLM Blueprint

**Slug:** `DATAROBOT_CREATE_GENAI_LLM_BLUEPRINTS_FROM_LLMBLUEPRINT`

Tool to create a new GenAI LLM Blueprint by copying from an existing LLM Blueprint. Use when you need to duplicate an LLM blueprint configuration with modifications or for version control. The new blueprint inherits all properties from the source blueprint except for the name and description.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name for the new LLM blueprint. Should be descriptive and unique. |
| `description` | string | No | Optional description for the new LLM blueprint. Use this to document the purpose or modifications of the copied blueprint. |
| `llmBlueprintId` | string | Yes | The ID of the source LLM blueprint to copy from (24-character hex string). Use DATAROBOT_LIST_GENAI_LLM_BLUEPRINTS to find available blueprint IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create GenAI LLM Test Configuration

**Slug:** `DATAROBOT_CREATE_GENAI_LLM_TEST_CONFIGURATION`

Tool to create a GenAI LLM test configuration for AI robustness testing. Use when you need to set up automated testing for LLM outputs against metrics like toxicity, jailbreak attempts, or PII leakage.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name for this LLM test configuration |
| `useCaseId` | string | Yes | The ID of the use case to associate with this LLM test configuration (24-character hex string) |
| `description` | string | No | Optional description for the LLM test configuration |
| `dataset_evaluations` | array | Yes | Array of dataset evaluation configurations (must have at least 1 item) |
| `llm_test_grading_criteria` | object | Yes | Overall grading criteria for the LLM test |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
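
A builder for this request can enforce the at-least-one-item rule on `dataset_evaluations`; note that the schema documented above mixes camelCase (`useCaseId`) with snake_case (`dataset_evaluations`, `llm_test_grading_criteria`), which the sketch reproduces as written. The helper name and example values are hypothetical.

```python
# Illustrative sketch: build an LLM test-configuration body; the parameter
# table requires at least one dataset evaluation.

def build_llm_test_configuration(name, use_case_id, dataset_evaluations,
                                 llm_test_grading_criteria, description=None):
    if not dataset_evaluations:
        raise ValueError("dataset_evaluations must contain at least one item")
    payload = {
        "name": name,
        "useCaseId": use_case_id,
        "dataset_evaluations": list(dataset_evaluations),
        "llm_test_grading_criteria": llm_test_grading_criteria,
    }
    if description is not None:
        payload["description"] = description
    return payload
```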

### Create GenAI LLM Test Suite

**Slug:** `DATAROBOT_CREATE_GENAI_LLM_TEST_SUITE`

Tool to create a new GenAI LLM test suite for evaluating LLM applications. Use when you need to group LLM test configurations together for AI robustness testing. Requires a valid useCaseId - use DATAROBOT_LIST_USE_CASES to find available use cases.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the LLM test suite |
| `useCaseId` | string | Yes | The ID of the use case to associate with this test suite. Use DATAROBOT_LIST_USE_CASES to find available use case IDs. |
| `description` | string | No | Optional description for the test suite |
| `llmTestConfigurationIds` | array | No | Optional array of LLM test configuration IDs to include in the suite |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create GenAI Playground

**Slug:** `DATAROBOT_CREATE_GENAI_PLAYGROUND`

Tool to create a new GenAI playground for experimenting with LLM applications. Use when you need to set up a new playground environment associated with a specific use case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the playground. Required field. |
| `useCaseId` | string | Yes | The ID of the use case to associate the playground with. Required field. |
| `description` | string | Yes | A description for the playground. Required field. |
| `copyInsights` | object | No | Configuration for copying insights from another playground. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create GenAI Playground OOTB Metric Configurations

**Slug:** `DATAROBOT_CREATE_GENAI_PLAYGROUNDS_OOTB_METRIC_CONFIGS`

Tool to create OOTB (Out-of-the-Box) metric configurations for a GenAI playground. Use when you need to add performance, safety, and governance metrics to monitor LLM responses in a playground.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The playground identifier (24-character hex string). Use DATAROBOT_LIST_GENAI_PLAYGROUNDS to find available playground IDs. |
| `ootbMetricConfigurations` | array | Yes | Array of OOTB metric configurations to create. Each configuration specifies a metric to add to the playground for monitoring LLM responses. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create GenAI Playground Trace Dataset

**Slug:** `DATAROBOT_CREATE_GENAI_PLAYGROUNDS_TRACE_DATASETS`

Tool to create a trace dataset for a GenAI playground. Use when you need to create a dataset for storing playground traces associated with a specific playground ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The unique identifier of the playground to create the trace dataset for. |
| `name` | string | Yes | The name for the trace dataset. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Guard Configurations to New Custom Model Version

**Slug:** `DATAROBOT_CREATE_GUARD_CONFIG_TO_NEW_CUSTOM_MODEL_VER`

Tool to apply guard configurations to a new custom model version in DataRobot. Creates a new version with the specified moderation guards. Use when you need to add toxicity detection, PII filtering, or other content moderation to a custom model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | array | Yes | List of complete guard configurations to push. Must contain 1-200 configurations. |
| `customModelId` | string | Yes | ID of the custom model to apply guard configurations to |
| `overallConfig` | object | No | Overall moderation configuration (not specific to one guard). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
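
Since each request accepts 1-200 guard configurations, a larger set has to be split into batches. A hedged sketch of that chunking; the helper name is hypothetical, and only the per-request limit comes from the parameter table above.

```python
# Illustrative sketch: split guard configurations into batches that respect
# the documented 1-200 per-request limit.

def chunk_guard_configurations(configs, batch_size=200):
    if not configs:
        raise ValueError("at least one guard configuration is required")
    if not 1 <= batch_size <= 200:
        raise ValueError("batch_size must be between 1 and 200")
    return [configs[i:i + batch_size] for i in range(0, len(configs), batch_size)]
```

Note that each batch would create a separate new custom model version, so in practice a single request under the limit is usually preferable.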

### Create Guard Configuration

**Slug:** `DATAROBOT_CREATE_GUARD_CONFIGURATION`

Create a guard configuration to monitor and control AI model behavior during prompt and response stages. Guards can detect issues like PII, toxic content, off-topic responses, or custom validation rules. Requires a templateId (from DATAROBOT_LIST_GUARD_TEMPLATES) and entityId (custom model or playground).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Guard configuration name (max 255 characters) |
| `stages` | array | Yes | The stages where the guard can run (prompt, response, or both) |
| `llmType` | string ("openAi" | "azureOpenAi" | "google" | "amazon" | "datarobot" | "nim") | No | Type of LLM used by the guard. |
| `awsModel` | string ("amazon-titan" | "anthropic-claude-2" | "anthropic-claude-3-haiku" | "anthropic-claude-3-sonnet" | "anthropic-claude-3-opus" | "anthropic-claude-3.5-sonnet-v1" | "anthropic-claude-3.5-sonnet-v2" | "amazon-nova-lite" | "amazon-nova-micro" | "amazon-nova-pro") | No | AWS model options for guard configurations. |
| `entityId` | string | Yes | ID of custom model or playground for this guard. Use DATAROBOT_LIST_CUSTOM_MODELS to find available entities. |
| `nemoInfo` | object | No | Configuration for NeMo guardrails guards. |
| `awsRegion` | string | No | AWS model region |
| `modelInfo` | object | No | Configuration for guards using deployed models. |
| `awsAccount` | string | No | ID of user credential containing an AWS account |
| `entityType` | string ("customModel" | "customModelVersion" | "playground") | Yes | Type of associated entity: customModel, customModelVersion, or playground |
| `templateId` | string | Yes | ID of template this guard is based on. Use DATAROBOT_LIST_GUARD_TEMPLATES to find available templates. |
| `description` | string | No | Guard configuration description (max 4096 characters) |
| `googleModel` | string ("chat-bison" | "google-gemini-1.5-flash" | "google-gemini-1.5-pro") | No | Google model options for guard configurations. |
| `deploymentId` | string | No | ID of deployed model, for model guards |
| `googleRegion` | string | No | Google model region |
| `intervention` | object | No | Intervention configuration specifying what action to take when conditions are met. |
| `openaiApiKey` | string | No | Deprecated; use openaiCredential instead |
| `openaiApiBase` | string | No | Azure OpenAI API Base URL |
| `allowedActions` | array | No | The actions this guard is allowed to take |
| `openaiCredential` | string | No | ID of user credential containing an OpenAI token |
| `llmGatewayModelId` | string | No | LLM Gateway model ID to use as judge |
| `nemoEvaluatorInfo` | object | No | Configuration for NeMo Evaluator guards. |
| `openaiDeploymentId` | string | No | OpenAI Deployment ID |
| `googleServiceAccount` | string | No | ID of user credential containing a Google service account |
| `additionalGuardConfig` | object | No | Additional configuration options for the guard. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Insights Lift Chart

**Slug:** `DATAROBOT_CREATE_INSIGHTS_LIFT_CHART`

Tool to request calculation of a Lift chart insight in DataRobot. Use when you need to analyze model performance across population segments with optional data slicing. The Lift chart calculation is asynchronous. Poll the returned location URL to check status and retrieve results once calculation completes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `source` | string ("validation" | "crossValidation" | "holdout" | "externalTestSet" | "backtest_2" | "backtest_3" | "backtest_4" | "backtest_5" | "backtest_6" | "backtest_7" | "backtest_8" | "backtest_9" | "backtest_10" | "backtest_11" | "backtest_12" | "backtest_13" | "backtest_14" | "backtest_15" | "backtest_16" | "backtest_17" | "backtest_18" | "backtest_19" | "backtest_20") | Yes | The subset of data used to compute the Lift chart insight (e.g., validation, holdout, crossValidation, or backtest partitions). |
| `entityId` | string | Yes | ID of the entity (model, custom model, or vector database) for which to calculate the Lift chart insight. |
| `entityType` | string ("datarobotModel" | "customModel" | "vectorDatabase") | No | Type of entity for which insights will be calculated. Defaults to 'datarobotModel' if not specified. |
| `dataSliceId` | string | No | Optional ID of a data slice to filter the data used for Lift chart calculation. If provided, the insight will be computed only on the specified data slice. |
| `externalDatasetId` | string | No | Optional ID of an external dataset to use for Lift chart calculation. Use this when computing insights on data not in the training/validation sets. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create ROC Curve Insight

**Slug:** `DATAROBOT_CREATE_INSIGHTS_ROC_CURVE`

Request calculation of ROC curve insights for a DataRobot model, custom model, or vector database. The ROC curve is computed asynchronously and can optionally be filtered by a data slice. Use this to evaluate binary classification model performance across different data partitions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `source` | string ("validation" | "crossValidation" | "holdout" | "externalTestSet" | "backtest_2" | "backtest_3" | "backtest_4" | "backtest_5" | "backtest_6" | "backtest_7" | "backtest_8" | "backtest_9" | "backtest_10" | "backtest_11" | "backtest_12" | "backtest_13" | "backtest_14" | "backtest_15" | "backtest_16" | "backtest_17" | "backtest_18" | "backtest_19" | "backtest_20") | Yes | The subset of data used to compute the ROC curve insight (e.g., validation, holdout, cross-validation, or backtest partitions). |
| `entityId` | string | Yes | The ID of the entity (model, custom model, or vector database) for which to calculate the ROC curve. |
| `entityType` | string ("datarobotModel" | "customModel" | "vectorDatabase") | No | The type of entity for which insights will be calculated. Defaults to 'datarobotModel'. |
| `dataSliceId` | string | No | The ID of a data slice to filter the data used for ROC curve computation. Optional. |
| `externalDatasetId` | string | No | The ID of an external dataset to use for ROC curve computation. Required when source is 'externalTestSet'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
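As a minimal sketch of how the input parameters above fit together, the following Python helper assembles the request body for `DATAROBOT_CREATE_INSIGHTS_ROC_CURVE` and enforces the one documented cross-field rule (`externalDatasetId` is required when `source` is `externalTestSet`). The helper name and the exact field casing are illustrative assumptions based on the parameter table, not a confirmed DataRobot API contract.

```python
def build_roc_curve_request(entity_id, source, entity_type="datarobotModel",
                            data_slice_id=None, external_dataset_id=None):
    """Assemble the ROC curve insight payload from the documented parameters."""
    if source == "externalTestSet" and not external_dataset_id:
        raise ValueError("externalDatasetId is required when source is 'externalTestSet'")
    body = {"entityId": entity_id, "source": source, "entityType": entity_type}
    if data_slice_id:
        body["dataSliceId"] = data_slice_id
    if external_dataset_id:
        body["externalDatasetId"] = external_dataset_id
    return body

# Hypothetical 24-character entity ID; optional fields are omitted when unset.
payload = build_roc_curve_request("0123456789abcdef01234567", "holdout")
```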

### Create Insights SHAP Distributions

**Slug:** `DATAROBOT_CREATE_INSIGHTS_SHAP_DISTRIBUTIONS`

Tool to request calculation of SHAP Distributions in DataRobot. Use when you need to analyze the distribution of SHAP values across features, with optional data slicing. SHAP (SHapley Additive exPlanations) Distributions help you understand how feature values contribute to model predictions across the dataset. The calculation is asynchronous; poll the returned location URL to check status and retrieve results once the calculation completes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `source` | string ("backtest_0" | "backtest_0_training" | "backtest_1" | "backtest_1_training" | "backtest_2" | "backtest_2_training" | "backtest_3" | "backtest_3_training" | "backtest_4" | "backtest_4_training" | "backtest_5" | "backtest_5_training" | "backtest_6" | "backtest_6_training" | "backtest_7" | "backtest_7_training" | "backtest_8" | "backtest_8_training" | "backtest_9" | "backtest_9_training" | "backtest_10" | "backtest_10_training" | "backtest_11" | "backtest_11_training" | "backtest_12" | "backtest_12_training" | "backtest_13" | "backtest_13_training" | "backtest_14" | "backtest_14_training" | "backtest_15" | "backtest_15_training" | "backtest_16" | "backtest_16_training" | "backtest_17" | "backtest_17_training" | "backtest_18" | "backtest_18_training" | "backtest_19" | "backtest_19_training" | "backtest_20" | "backtest_20_training" | "externalTestSet" | "holdout" | "holdout_training" | "training" | "validation") | Yes | The subset of data used to compute the SHAP Distributions insight (e.g., validation, holdout, training, or backtest partitions). |
| `entityId` | string | Yes | ID of the entity (model, custom model, or vector database) for which to calculate SHAP Distributions. |
| `rowCount` | integer | No | (Deprecated) The number of rows to use for calculating SHAP Distributions. |
| `entityType` | string ("datarobotModel" | "customModel" | "vectorDatabase") | No | Type of entity for which insights will be calculated. Defaults to 'datarobotModel' if not specified. |
| `dataSliceId` | string | No | Optional ID of a data slice to filter the data used for SHAP Distributions calculation. If provided, the insight will be computed only on the specified data slice. |
| `quickCompute` | boolean | No | (Deprecated) Limits the number of rows used from the selected source by default. Cannot be set to False for this insight. |
| `externalDatasetId` | string | No | Optional ID of an external dataset to use for SHAP Distributions calculation. Use this when computing insights on data not in the training/validation sets. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create SHAP Impact Insight

**Slug:** `DATAROBOT_CREATE_INSIGHTS_SHAP_IMPACT`

Tool to request calculation of SHAP Impact insights in DataRobot. Use when you need to understand feature importance based on Shapley values, with optional data slicing. SHAP (SHapley Additive exPlanations) Impact provides model-agnostic feature importance by quantifying how much each feature contributes to individual predictions. The calculation is asynchronous; poll the returned location URL to check status and retrieve results once computation completes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `source` | string ("backtest_0" | "backtest_0_training" | "backtest_1" | "backtest_1_training" | "backtest_2" | "backtest_2_training" | "backtest_3" | "backtest_3_training" | "backtest_4" | "backtest_4_training" | "backtest_5" | "backtest_5_training" | "backtest_6" | "backtest_6_training" | "backtest_7" | "backtest_7_training" | "backtest_8" | "backtest_8_training" | "backtest_9" | "backtest_9_training" | "backtest_10" | "backtest_10_training" | "backtest_11" | "backtest_11_training" | "backtest_12" | "backtest_12_training" | "backtest_13" | "backtest_13_training" | "backtest_14" | "backtest_14_training" | "backtest_15" | "backtest_15_training" | "backtest_16" | "backtest_16_training" | "backtest_17" | "backtest_17_training" | "backtest_18" | "backtest_18_training" | "backtest_19" | "backtest_19_training" | "backtest_20" | "backtest_20_training" | "externalTestSet" | "holdout" | "holdout_training" | "training" | "validation") | Yes | The subset of data used to compute the SHAP Impact insight (e.g., validation, holdout, training, or backtest partitions). |
| `entityId` | string | Yes | ID of the entity (model, custom model, or vector database) for which to calculate SHAP Impact. |
| `rowCount` | integer | No | (Deprecated) The number of rows to use for calculating SHAP Impact. Use quickCompute parameter instead. |
| `entityType` | string ("datarobotModel" | "customModel" | "vectorDatabase") | No | Type of entity for which insights will be calculated. Defaults to 'datarobotModel' if not specified. |
| `dataSliceId` | string | No | Optional ID of a data slice to filter the data used for SHAP Impact calculation. If provided, the insight will be computed only on the specified data slice. |
| `quickCompute` | boolean | No | When enabled, limits the number of rows used from the selected source by default for faster computation. When disabled, all rows are used. Defaults to true. |
| `externalDatasetId` | string | No | Optional ID of an external dataset to use for SHAP Impact calculation. Use this when computing insights on data not in the training/validation sets. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
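The SHAP tools in this section share an asynchronous pattern: the create call returns a status URL, which is polled until the job completes. The loop below is a hedged sketch of that pattern with the HTTP transport injected as a callable, so it can be exercised with a stub; real use would wrap an authenticated GET against the returned location. The status names (`RUNNING`, `COMPLETED`, `ERROR`) are assumptions, not confirmed DataRobot response values.

```python
def poll_insight(fetch, status_url, max_attempts=30):
    """Poll an insight job's status URL until it reaches a terminal state."""
    for _ in range(max_attempts):
        status = fetch(status_url)  # expected to return a dict like {"status": "..."}
        if status.get("status") in ("COMPLETED", "ERROR"):
            return status
    raise TimeoutError("insight job did not finish within max_attempts polls")

# Stubbed usage: the job reports RUNNING twice, then COMPLETED on the third poll.
responses = iter([{"status": "RUNNING"}, {"status": "RUNNING"}, {"status": "COMPLETED"}])
result = poll_insight(lambda url: next(responses), "https://app.example.com/status/123/")
```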

### Create Insights SHAP Matrix

**Slug:** `DATAROBOT_CREATE_INSIGHTS_SHAP_MATRIX`

Tool to request calculation of a SHAP Matrix insight in DataRobot. Use when you need feature importance explanations showing how each feature impacts model predictions, with optional data slicing. The SHAP Matrix calculation is asynchronous. Poll the returned location URL to check status and retrieve results once calculation completes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `source` | string ("backtest_0" | "backtest_0_training" | "backtest_1" | "backtest_1_training" | "backtest_2" | "backtest_2_training" | "backtest_3" | "backtest_3_training" | "backtest_4" | "backtest_4_training" | "backtest_5" | "backtest_5_training" | "backtest_6" | "backtest_6_training" | "backtest_7" | "backtest_7_training" | "backtest_8" | "backtest_8_training" | "backtest_9" | "backtest_9_training" | "backtest_10" | "backtest_10_training" | "backtest_11" | "backtest_11_training" | "backtest_12" | "backtest_12_training" | "backtest_13" | "backtest_13_training" | "backtest_14" | "backtest_14_training" | "backtest_15" | "backtest_15_training" | "backtest_16" | "backtest_16_training" | "backtest_17" | "backtest_17_training" | "backtest_18" | "backtest_18_training" | "backtest_19" | "backtest_19_training" | "backtest_20" | "backtest_20_training" | "externalTestSet" | "holdout" | "holdout_training" | "training" | "validation") | Yes | The subset of data used to compute the SHAP Matrix insight (e.g., validation, training, holdout, or backtest partitions). |
| `entityId` | string | Yes | ID of the entity (model, custom model, or vector database) for which to calculate the SHAP Matrix insight. |
| `rowCount` | integer | No | (Deprecated) The number of rows to use for calculating the SHAP Matrix. Use quickCompute parameter instead. |
| `entityType` | string ("datarobotModel" | "customModel" | "vectorDatabase") | No | Type of entity for which insights will be calculated. Defaults to 'datarobotModel' if not specified. |
| `dataSliceId` | string | No | Optional ID of a data slice to filter the data used for SHAP Matrix calculation. If provided, the insight will be computed only on the specified data slice. |
| `quickCompute` | boolean | No | When enabled (default), limits the number of rows used from the selected source. When disabled, all rows are used for SHAP calculation. |
| `externalDatasetId` | string | No | Optional ID of an external dataset to use for SHAP Matrix calculation. Use this when computing insights on data not in the training/validation sets. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create SHAP Preview Insights

**Slug:** `DATAROBOT_CREATE_INSIGHTS_SHAP_PREVIEW`

Request calculation of SHAP Preview insights with an optional data slice. Returns immediately with a queue ID; the SHAP computation happens asynchronously. Use the queue ID to poll for job status and retrieve results when complete.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `source` | string ("backtest_0" | "backtest_0_training" | "backtest_1" | "backtest_1_training" | "backtest_2" | "backtest_2_training" | "backtest_3" | "backtest_3_training" | "backtest_4" | "backtest_4_training" | "backtest_5" | "backtest_5_training" | "backtest_6" | "backtest_6_training" | "backtest_7" | "backtest_7_training" | "backtest_8" | "backtest_8_training" | "backtest_9" | "backtest_9_training" | "backtest_10" | "backtest_10_training" | "backtest_11" | "backtest_11_training" | "backtest_12" | "backtest_12_training" | "backtest_13" | "backtest_13_training" | "backtest_14" | "backtest_14_training" | "backtest_15" | "backtest_15_training" | "backtest_16" | "backtest_16_training" | "backtest_17" | "backtest_17_training" | "backtest_18" | "backtest_18_training" | "backtest_19" | "backtest_19_training" | "backtest_20" | "backtest_20_training" | "externalTestSet" | "holdout" | "holdout_training" | "training" | "validation") | Yes | The subset of data used to compute the insight. Common values: 'validation' (validation set), 'holdout' (holdout data), 'training' (training data), or specific backtest partitions like 'backtest_0' for time-series models. |
| `entityId` | string | Yes | The ID of the entity (model, custom model, or vector database) for which SHAP Preview insights will be calculated. For DataRobot models, use the model ID from LIST_MODEL_RECORDS. |
| `rowCount` | integer | No | (Deprecated) The number of rows to use for calculating SHAP Preview. Use the 'quickCompute' parameter instead for controlling computation size. |
| `entityType` | string ("datarobotModel" | "customModel" | "vectorDatabase") | No | The type of entity for which insights will be calculated. Defaults to 'datarobotModel'. |
| `dataSliceId` | string | No | The ID of a data slice to filter the data used for SHAP calculation. Data slices allow you to compute insights on specific subsets of your data. |
| `quickCompute` | boolean | No | When enabled (default), limits the number of rows used from the selected source (typically 2500 rows) for faster computation. When disabled, all rows from the source are used, which provides more accurate results but takes longer. |
| `externalDatasetId` | string | No | The ID of an external dataset to use for SHAP calculation instead of the model's original training data. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Modeling Featurelist

**Slug:** `DATAROBOT_CREATE_MODELING_FEATURELIST`

Tool to create a new modeling featurelist in a DataRobot project. Use when you need to define a custom set of features for model training.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name of the new featurelist. Must be unique within the project and not exceed 100 characters. |
| `features` | array | Yes | List of feature names to include in the featurelist. Must contain at least one feature. Feature names must match those in the project dataset. |
| `projectId` | string | Yes | Unique identifier of the DataRobot project to create the featurelist in. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_CREATE_PROJECT. |
| `skipDatetimePartitionColumn` | boolean | No | Whether to exclude the datetime partition column from the featurelist. Default is False (include the column if present). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
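A minimal sketch of assembling the featurelist body, enforcing the two constraints the table states (at least one feature, name at most 100 characters). It assumes `projectId` is carried separately (e.g., in the request path) rather than in the body; the helper name is illustrative.

```python
def build_featurelist_request(name, features, skip_datetime_partition=False):
    """Validate and assemble the payload for a new modeling featurelist."""
    if not features:
        raise ValueError("features must contain at least one feature")
    if len(name) > 100:
        raise ValueError("name must not exceed 100 characters")
    return {
        "name": name,
        "features": list(features),
        "skipDatetimePartitionColumn": skip_datetime_partition,
    }

# Feature names must match columns in the project dataset (hypothetical here).
body = build_featurelist_request("top-signals", ["age", "income", "tenure"])
```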

### Create Model Package

**Slug:** `DATAROBOT_CREATE_MODEL_PACKAGE`

Tool to create a model package from a DataRobot Leaderboard model. Use after a model is trained and you need an offline package.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Optional name for the model package (max 1024 characters). |
| `modelId` | string | Yes | ID of the DataRobot model to package. |
| `description` | string | No | Optional description for the model package (max 2048 characters). |
| `predictionThreshold` | number | No | Binary classification threshold for predictions, between 0.0 and 1.0. |
| `computeAllTsIntervals` | boolean | No | If true, compute all time series prediction intervals (percentiles 1–100). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
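The following sketch builds the model package body and checks the documented bounds (`predictionThreshold` between 0.0 and 1.0, `name` up to 1024 characters), omitting unset optional fields. The helper name and the practice of dropping unset keys are assumptions for illustration.

```python
def build_model_package_request(model_id, name=None, prediction_threshold=None,
                                compute_all_ts_intervals=None):
    """Assemble a model package payload, validating documented bounds."""
    if prediction_threshold is not None and not 0.0 <= prediction_threshold <= 1.0:
        raise ValueError("predictionThreshold must be between 0.0 and 1.0")
    if name is not None and len(name) > 1024:
        raise ValueError("name must be at most 1024 characters")
    body = {"modelId": model_id}
    if name is not None:
        body["name"] = name
    if prediction_threshold is not None:
        body["predictionThreshold"] = prediction_threshold
    if compute_all_ts_intervals is not None:
        body["computeAllTsIntervals"] = compute_all_ts_intervals
    return body

# Hypothetical Leaderboard model ID with a binary classification threshold.
pkg = build_model_package_request("0123456789abcdef01234567", prediction_threshold=0.5)
```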

### Create Model Package from JSON

**Slug:** `DATAROBOT_CREATE_MODEL_PACKAGES_FROM_JSON`

Tool to create a DataRobot model package from JSON metadata. Use when you have custom model metadata and want to register it as a model package without a Leaderboard model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The model package name. |
| `target` | object | Yes | The target information for the model package. Includes target name, type, and classification details if applicable. |
| `modelId` | string | No | The ID of the model. Optional if creating from scratch. |
| `datasets` | object | No | Dataset information for the model package |
| `timeseries` | object | No | Time series information for the model package |
| `textGeneration` | object | No | Text generation information for the model package |
| `modelDescription` | object | No | Model description information for the model package |
| `registeredModelName` | string | No | The registered model name. |
| `geospatialMonitoring` | object | No | Geospatial monitoring information for the model package |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Notebook

**Slug:** `DATAROBOT_CREATE_NOTEBOOK`

Tool to create a new notebook in DataRobot Workbench for interactive code development. Use when you need to create a Jupyter notebook for data exploration, analysis, or model development.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the notebook to create. |
| `tags` | array | No | Optional array of tags for the notebook. |
| `useCaseId` | string | No | Optional use case ID to associate the notebook with. |
| `description` | string | No | Optional description for the notebook. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Notebook Environment Variables

**Slug:** `DATAROBOT_CREATE_NOTEBOOK_ENVIRONMENT_VARIABLES`

Tool to create one or more environment variables for a specific notebook. Use when you need to add environment variables like API keys, credentials, or configuration values to a notebook.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The notebook ID for which to create environment variables |
| `data` | array | Yes | List of environment variables to create. Must contain at least one variable. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Expose Notebook Port

**Slug:** `DATAROBOT_CREATE_NOTEBOOK_EXECUTION_ENVIRONMENT_PORT`

Tool to expose a port on a DataRobot notebook execution environment. Use when you need to access a web service or application running inside a notebook (e.g., Flask/FastAPI apps, Jupyter extensions, or custom web servers). The notebook must be running to expose ports. Maximum 5 ports can be exposed per notebook.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | Notebook execution environment ID (24-character hex string). The notebook must be running to expose ports. |
| `port` | integer | Yes | Port number to expose (must be between 1024-65535). Reserved ports 8888, 8889, and 8022 are not allowed. |
| `description` | string | No | Optional description of what the exposed port is used for (e.g., 'Test application port', 'Web service endpoint'). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
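The port constraints above can be pre-checked client-side before calling the tool. This is a small sketch of that check; the helper name is hypothetical, and the reserved ports are exactly the three the table lists.

```python
RESERVED_PORTS = {8888, 8889, 8022}  # disallowed per the parameter table

def port_allowed(port):
    """True if the port is in 1024-65535 and not one of the reserved ports."""
    return 1024 <= port <= 65535 and port not in RESERVED_PORTS

# Example request body for a permitted port (description is free text).
request = {"port": 8080, "description": "Test application port"}
ok = port_allowed(request["port"])
```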

### Clone Notebook from Revision

**Slug:** `DATAROBOT_CREATE_NOTEBOOK_FROM_REVISION`

Tool to clone a notebook from an existing revision, creating a new notebook as a copy. Use when you need to create a new notebook based on a specific revision of an existing notebook. The operation is asynchronous and returns immediately with the new notebook ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Name for the new notebook created from the revision. If omitted, a default name will be assigned. |
| `isAuto` | boolean | No | Whether the revision is autosaved. Defaults to false if not provided. |
| `notebookId` | string | Yes | Notebook ID associated with the revision to clone. |
| `revisionId` | string | Yes | Revision ID to clone as a new notebook. |
| `notebookPath` | string | No | Path to the notebook file, if using Codespaces integration. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Notebook Job

**Slug:** `DATAROBOT_CREATE_NOTEBOOK_JOBS`

Tool to create a scheduled notebook job in DataRobot. Use when you need to run a Jupyter notebook on a schedule or programmatically. The notebook must exist in DataRobot Codespaces before creating the job. Configure cron-like schedules to automate notebook execution.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `enabled` | boolean | No | Whether the scheduled notebook job is enabled. Set to false to create a disabled job that won't run on schedule. Default is true. |
| `schedule` | object | Yes | Cron-like schedule configuration defining when the notebook should run automatically. Specify minute, hour, dayOfMonth, dayOfWeek, and month arrays. |
| `useCaseId` | string | Yes | The ID of the use case (project) this notebook is associated with. Obtain from DATAROBOT_LIST_PROJECTS or project creation. |
| `notebookId` | string | Yes | The ID of the notebook to schedule. Must be a valid 24-character hexadecimal ObjectId. Obtain from notebook listing or creation. |
| `parameters` | array | No | List of environment variables to pass to the notebook execution. Each parameter has a 'name' (max 256 chars, alphanumeric + underscores) and 'value' (max 131,072 chars). |
| `notebookPath` | string | No | The path to the notebook file in the DataRobot Codespaces file system. Required for Codespaces notebooks, but should be omitted for standalone notebooks. Must start with '/' (e.g., '/notebooks/notebook.ipynb'). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
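A hedged example of the `schedule` object and surrounding job body: per the table, the schedule takes minute, hour, dayOfMonth, dayOfWeek, and month arrays (cron-like). The `"*"` wildcard convention and the exact key names inside the object are assumptions; the IDs are hypothetical 24-character hex ObjectIds.

```python
# Run the notebook daily at 06:00 (assumed wildcard convention "*").
schedule = {
    "minute": [0],
    "hour": [6],
    "dayOfMonth": ["*"],
    "dayOfWeek": ["*"],
    "month": ["*"],
}

job = {
    "notebookId": "0123456789abcdef01234567",  # hypothetical ObjectId
    "useCaseId": "fedcba9876543210fedcba98",   # hypothetical ObjectId
    "schedule": schedule,
    "enabled": True,
    "parameters": [{"name": "RUN_MODE", "value": "nightly"}],  # passed as env vars
}
```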

### Create Notebook Job Manual Run

**Slug:** `DATAROBOT_CREATE_NOTEBOOK_JOBS_MANUAL_RUN`

Tool to manually trigger a notebook job run in DataRobot. Use when you need to execute a notebook on-demand with optional parameters. For Codespace notebooks, both notebookId and notebookPath are required.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `notebookId` | string | Yes | The ID of the notebook to run manually. Must be a valid 24-character hex ObjectId. |
| `parameters` | array | No | Optional array of parameters to pass to the notebook as environment variables. Each parameter should have 'name' and 'value' properties. |
| `notebookPath` | string | Yes | The path to the notebook in the file system (required for Codespace notebooks). Example: '/notebook.ipynb' |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Notebook Revision

**Slug:** `DATAROBOT_CREATE_NOTEBOOK_REVISIONS`

Tool to create a new revision for a DataRobot notebook. Use when you need to save a checkpoint of a notebook's current state for version control or tracking changes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID of the notebook for which to create a revision. Must be a valid 24-character hexadecimal ObjectId. |
| `name` | string | No | Name for the notebook revision. If not provided, a timestamp will be used automatically. |
| `isAuto` | boolean | No | Whether the revision is an autosave revision. Set to true for automatic saves, false for manual saves. Default is false. |
| `notebookPath` | string | No | Path to the notebook file in Codespaces. Required when working with Codespaces notebooks. Must start with '/' (e.g., '/notebooks/notebook.ipynb'). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Notebook Runtime Client Activity

**Slug:** `DATAROBOT_CREATE_NOTEBOOK_RUNTIME_CLIENT_ACTIVITY`

Tool to record client activity for a running notebook session. Use when tracking notebook session heartbeats or activity. This endpoint helps DataRobot track that a notebook session is still active.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The notebook ID (24-character MongoDB ObjectId format). Must be a running notebook session. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Batch Create Notebook Cells

**Slug:** `DATAROBOT_CREATE_NOTEBOOKS_CELLS_BATCH_CREATE`

Tool to batch create multiple cells in a DataRobot notebook. Use when you need to add multiple code or markdown cells to an existing notebook at a specific position. The cells are inserted after the specified cell ID, allowing precise control over cell placement.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | Notebook ID (24-character hexadecimal ObjectId) where cells will be created. Obtain from notebook listing or creation. |
| `cells` | array | Yes | Array of cell objects to create. Each cell must have 'cell_type' (e.g., 'code', 'markdown') and 'source' (cell content) fields. Cells will be inserted in the order provided. |
| `afterCellId` | string | Yes | Cell ID after which to insert the new cells. Use an existing cell ID to specify the insertion point. The new cells will be inserted immediately after this cell. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Batch Delete Notebook Cells

**Slug:** `DATAROBOT_CREATE_NOTEBOOKS_CELLS_BATCH_DELETE`

Tool to batch delete multiple cells from a DataRobot notebook. Use when you need to remove multiple cells at once by their cell IDs. The deletion is permanent and cannot be undone.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | Notebook ID (24-character hexadecimal ObjectId) from which cells will be deleted. Obtain from notebook listing or creation. |
| `cellIds` | array | Yes | Array of cell IDs to delete from the notebook. Each ID must be a valid 24-character hexadecimal ObjectId. All specified cells will be permanently removed from the notebook. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Notebook from File

**Slug:** `DATAROBOT_CREATE_NOTEBOOKS_FROM_FILE`

Tool to create a new notebook in DataRobot by uploading a notebook file. Use when you need to import an existing Jupyter notebook (.ipynb) into DataRobot. The notebook is created asynchronously and returns immediately with notebook metadata.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | object | Yes | Notebook file to upload (typically.ipynb format). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Create Notification Channel Template

**Slug:** `DATAROBOT_CREATE_NOTIFICATION_CHANNEL_TEMPLATE`

Tool to create a notification channel template in DataRobot. Use when you need to set up notification channels for alerts, monitoring, or integration with external services. Different channel types require different parameters (e.g., Email requires emailAddress, Webhook requires payloadUrl).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the notification channel template. |
| `orgId` | string | No | The ID of organization that the notification channel belongs to. |
| `drEntities` | array | No | The IDs of DataRobot Users, Groups, or Custom Jobs. Required for DataRobotUser, DataRobotGroup, or DataRobotCustomJob channel types. Minimum 1, maximum 100 entities. |
| `payloadUrl` | string | No | The payload URL for the notification channel. Required for certain channel types like Webhook, Slack, or MSTeams. |
| `channelType` | string ("DataRobotCustomJob" | "DataRobotGroup" | "DataRobotUser" | "Database" | "Email" | "InApp" | "InsightsComputations" | "MSTeams" | "Slack" | "Webhook") | Yes | The type of notification channel to create. |
| `contentType` | string ("application/json" | "application/x-www-form-urlencoded") | No | The content type of notification messages. |
| `secretToken` | string | No | Secret token to be used for the notification channel. Used for authentication with external services. |
| `validateSsl` | boolean | No | Whether to validate SSL certificates for the notification channel. |
| `emailAddress` | string | No | The email address for the notification channel. Required when channelType is Email. |
| `languageCode` | string ("en" | "es_419" | "fr" | "ja" | "ko" | "ptBR") | No | The preferred language code. |
| `customHeaders` | array | No | Custom headers and their values to be sent in the notification channel. Maximum 100 headers. |
| `verificationCode` | string | No | Required if the channel type is Email. Verification code to confirm email ownership. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Send Email Verification Code for Notification Channel

**Slug:** `DATAROBOT_CREATE_NOTIFICATION_EMAIL_CHANNEL_VERIFICATION`

Tool to send a 6-digit verification code to a user's email address for setting up a notification channel. Use this action before creating an email notification channel to verify the email address. The verification code sent via this endpoint should be used when creating the actual email notification channel.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the new notification channel (max 100 characters). |
| `orgId` | string | No | The ID of the organization that the notification channel belongs to. |
| `channelType` | string | Yes | The type of the new notification channel. For email verification, this should be 'email'. |
| `emailAddress` | string | Yes | The email address of the recipient where the 6-digit verification code will be sent. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Notification Webhook Channel Test

**Slug:** `DATAROBOT_CREATE_NOTIFICATION_WEBHOOK_CHANNEL_TESTS`

Tool to test webhook notification channel configuration by creating a test notification. Use when you need to validate webhook settings before creating a production notification channel.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the new notification channel (max 100 characters) |
| `orgId` | string | No | The identifier of the organization that the notification channel belongs to |
| `payloadUrl` | string | No | The payload URL of the new notification channel. Required for Webhook channel type |
| `channelType` | string ("Database" | "Email" | "InApp" | "InsightsComputations" | "MSTeams" | "Slack" | "Webhook") | Yes | The type of the new notification channel. Must be one of: Database, Email, InApp, InsightsComputations, MSTeams, Slack, Webhook |
| `contentType` | string ("application/json" | "application/x-www-form-urlencoded") | No | Content type for webhook messages. |
| `secretToken` | string | No | Secret token to be used for new notification channel |
| `validateSsl` | boolean | No | Whether SSL will be validated in the notification channel. Defaults to true for Webhook channels |
| `emailAddress` | string | No | The email address to be used in the new notification channel. Required for Email channel type |
| `customHeaders` | array | No | Custom headers and their values to be sent in the new notification channel (max 100 headers) |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
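As an illustration of the parameter contract above, here is a minimal sketch of assembling the request body for a webhook channel test in Python. The field names come from the parameter table; the helper name and defaults are hypothetical.

```python
def build_webhook_test_payload(name, payload_url, secret_token=None,
                               validate_ssl=True,
                               content_type="application/json"):
    """Assemble a request body for a Webhook channel test.

    Validation follows the documented constraints: name is limited to
    100 characters and payloadUrl is required for Webhook channels.
    """
    if len(name) > 100:
        raise ValueError("name must be at most 100 characters")
    if not payload_url:
        raise ValueError("payloadUrl is required for Webhook channels")
    body = {
        "name": name,
        "channelType": "Webhook",
        "payloadUrl": payload_url,
        "contentType": content_type,
        "validateSsl": validate_ssl,  # defaults to true for Webhook channels
    }
    if secret_token is not None:
        body["secretToken"] = secret_token
    return body
```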

### Verify Email Channel Code

**Slug:** `DATAROBOT_CREATE_NOTIFY_EMAIL_CHANNEL_VERIFY_STATUS`

Verify the notification email channel verification code. Use when an admin needs to confirm their email address for notifications by entering a 6-digit verification code.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `emailAddress` | string | Yes | The email address of the recipient that received the verification code |
| `verificationCode` | string | Yes | The 6-digit verification code entered by the admin to verify email channel |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
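The two email tools above form a two-step flow: request a code, then confirm it. A minimal sketch of the two request bodies, with hypothetical helper names and field names taken from the parameter tables:

```python
def email_verification_request(name, email_address, org_id=None):
    """Step 1 body: DATAROBOT_CREATE_NOTIFICATION_EMAIL_CHANNEL_VERIFICATION."""
    if len(name) > 100:
        raise ValueError("name must be at most 100 characters")
    body = {"name": name, "channelType": "email",
            "emailAddress": email_address}
    if org_id is not None:
        body["orgId"] = org_id
    return body


def email_verification_confirm(email_address, verification_code):
    """Step 2 body: DATAROBOT_CREATE_NOTIFY_EMAIL_CHANNEL_VERIFY_STATUS."""
    if not (len(verification_code) == 6 and verification_code.isdigit()):
        raise ValueError("verification code must be exactly 6 digits")
    return {"emailAddress": email_address,
            "verificationCode": verification_code}
```

The code produced by step 1 is also what `verificationCode` expects when creating the actual email channel.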

### Create OCR Job Resource

**Slug:** `DATAROBOT_CREATE_OCR_JOB_RESOURCES`

Tool to create an OCR (Optical Character Recognition) job resource in DataRobot. Use when you need to extract text from images or scanned documents in a dataset. The OCR job processes the input dataset and creates an output dataset with extracted text.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `language` | string ("ENGLISH" | "JAPANESE") | Yes | The language of the OCR input dataset. Must be one of the supported languages (ENGLISH or JAPANESE). |
| `datasetId` | string | Yes | OCR input dataset ID. The dataset must exist and be accessible to the user. |
| `engineSpecificParameters` | object | No | OCR engine-specific parameters. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
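A minimal sketch of building the OCR job payload, enforcing the documented language enum (helper name is hypothetical):

```python
SUPPORTED_OCR_LANGUAGES = {"ENGLISH", "JAPANESE"}


def build_ocr_job_payload(dataset_id, language,
                          engine_specific_parameters=None):
    """Assemble a request body for creating an OCR job resource."""
    if language not in SUPPORTED_OCR_LANGUAGES:
        raise ValueError(
            f"language must be one of {sorted(SUPPORTED_OCR_LANGUAGES)}")
    body = {"datasetId": dataset_id, "language": language}
    if engine_specific_parameters:
        body["engineSpecificParameters"] = engine_specific_parameters
    return body
```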

### Create OpenTelemetry Metric Configuration

**Slug:** `DATAROBOT_CREATE_OTEL_METRICS_CONFIGS`

Tool to create an OpenTelemetry metric configuration for a DataRobot entity (deployment, use case, etc.). Use when you need to set up custom metric tracking for monitoring entity performance.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `unit` | string ("bytes" | "nanocores" | "percentage") | No | Unit of measurement for metrics. |
| `enabled` | boolean | No | Whether the OTel metric is enabled. Defaults to true. |
| `entityId` | string | Yes | ID of the entity to which the metric belongs. |
| `otelName` | string | Yes | The OTel key of the metric. This is the unique identifier for the metric in the OpenTelemetry system. |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs. Supported types: deployment, use_case, experiment_container, custom_application, workload, workload_deployment. |
| `percentile` | number | No | The metric percentile for the percentile aggregation of histograms. Value must be between 0 and 1. Only relevant when aggregation is 'percentiles'. |
| `aggregation` | string ("sum" | "average" | "min" | "max" | "cardinality" | "percentiles" | "histogram") | No | Aggregation methods for metric display. |
| `displayName` | string | No | The display name of the metric. Human-readable name shown in the UI. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
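The `percentile` parameter only applies to the `percentiles` aggregation, and must lie between 0 and 1. A hedged sketch of a payload builder that enforces those rules (helper name hypothetical):

```python
VALID_AGGREGATIONS = {"sum", "average", "min", "max",
                      "cardinality", "percentiles", "histogram"}


def build_otel_metric_config(entity_id, entity_type, otel_name,
                             aggregation=None, percentile=None,
                             display_name=None, enabled=True):
    """Assemble a request body for an OTel metric configuration."""
    if aggregation is not None and aggregation not in VALID_AGGREGATIONS:
        raise ValueError(f"unknown aggregation: {aggregation}")
    if percentile is not None:
        if aggregation != "percentiles":
            raise ValueError(
                "percentile is only valid with the 'percentiles' aggregation")
        if not 0 <= percentile <= 1:
            raise ValueError("percentile must be between 0 and 1")
    body = {"entityId": entity_id, "entityType": entity_type,
            "otelName": otel_name, "enabled": enabled}
    if aggregation is not None:
        body["aggregation"] = aggregation
    if percentile is not None:
        body["percentile"] = percentile
    if display_name is not None:
        body["displayName"] = display_name
    return body
```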

### Get OTEL Metrics Values Over Time Segments

**Slug:** `DATAROBOT_CREATE_OTEL_METRICS_VALUES_OVER_TIME_SEGMENTS`

Tool to get OpenTelemetry metric values for a specified entity, grouped by multiple attributes. Use when analyzing metrics segmented by attributes like HTTP method, status code, or other OpenTelemetry dimensions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `endTime` | string | No | End time of the metric list in ISO 8601 format (e.g., '2026-02-13T23:59:59Z') |
| `entityId` | string | Yes | ID of the entity to which the metric belongs (e.g., deployment ID) |
| `interval` | string ("PT1M" | "PT5M" | "PT1H" | "P1D" | "P7D") | No | Time interval for metric values |
| `otelName` | string | Yes | The OpenTelemetry metric name to query (e.g., 'http.server.request.duration') |
| `segments` | array | Yes | List of segments to group results by. Each segment contains attribute filters. |
| `startTime` | string | No | Start time of the metric list in ISO 8601 format (e.g., '2026-02-12T00:00:00Z') |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs (e.g., deployment, use_case) |
| `aggregation` | string ("sum" | "average" | "min" | "max" | "cardinality" | "percentiles" | "histogram") | Yes | Aggregation method used for metric display |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
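A sketch of assembling the segmented-values query. The shape of each entry in `segments` (a mapping of attribute names to values) is an assumption inferred from the description; confirm against the API reference.

```python
def build_otel_values_query(entity_id, entity_type, otel_name, aggregation,
                            segments, start_time=None, end_time=None,
                            interval=None):
    """Assemble a request body for querying segmented OTel metric values."""
    if not segments:
        raise ValueError("at least one segment is required")
    body = {"entityId": entity_id, "entityType": entity_type,
            "otelName": otel_name, "aggregation": aggregation,
            "segments": segments}
    if start_time:
        body["startTime"] = start_time  # ISO 8601, e.g. '2026-02-12T00:00:00Z'
    if end_time:
        body["endTime"] = end_time
    if interval:
        body["interval"] = interval     # ISO 8601 duration, e.g. 'PT1H'
    return body
```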

### Create Prediction Dataset from Data Source

**Slug:** `DATAROBOT_CREATE_PREDICTION_DATASET_FROM_DATA_SOURCE`

Upload a prediction dataset from a DataSource for making predictions on a DataRobot project. Returns immediately with a status URL - the upload happens asynchronously. Use when you need to make predictions using data from an external data source connector.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user` | string | No | [Deprecated] The username for database authentication. Use credentialId or credentialData instead. |
| `password` | string | No | [Deprecated] The password for database authentication. Use credentialId or credentialData instead. |
| `projectId` | string | Yes | The project ID to which the data source will be uploaded. |
| `credentials` | array | No | A list of credentials for the secondary datasets used in feature discovery project (max 30 items). |
| `useKerberos` | boolean | No | If true, use Kerberos authentication for database authentication. Default is false. |
| `credentialId` | string | No | The credential ID to use for database authentication. |
| `dataSourceId` | string | Yes | The ID of the DataSource to use for the prediction dataset. |
| `forecastPoint` | string | No | For time series projects only. The time in the dataset relative to which predictions are generated. If not specified, defaults to the latest timestamp. Not valid for non-time-series projects. |
| `credentialData` | string | No | The credentials to authenticate with the database, to use instead of user/password or credential ID. |
| `actualValueColumn` | string | No | The name of the actual value column. Valid for prediction files when the project is unsupervised and the dataset is treated as a bulk predictions dataset. |
| `predictionsEndDate` | string | No | For time series projects only. The end date for bulk predictions (exclusive). Used for historical predictions with training data. Must be provided with predictionsStartDate, cannot be used with forecastPoint. |
| `predictionsStartDate` | string | No | For time series projects only. The start date for bulk predictions. Used for historical predictions with training data. Must be provided with predictionsEndDate, cannot be used with forecastPoint. |
| `secondaryDatasetsConfigId` | string | No | For feature discovery projects only. The ID of the alternative secondary dataset config to use during prediction. |
| `relaxKnownInAdvanceFeaturesCheck` | boolean | No | For time series projects only. If true, missing values in known-in-advance features are allowed in the forecast window at prediction time. Default is false. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
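The time series parameters carry mutual-exclusivity rules: `predictionsStartDate` and `predictionsEndDate` travel together, and neither may be combined with `forecastPoint`. A hedged sketch of a payload builder that enforces them (helper name hypothetical):

```python
def build_prediction_dataset_request(project_id, data_source_id,
                                     credential_id=None, forecast_point=None,
                                     predictions_start_date=None,
                                     predictions_end_date=None):
    """Assemble a request body, enforcing the documented time series rules."""
    if (predictions_start_date is None) != (predictions_end_date is None):
        raise ValueError("predictionsStartDate and predictionsEndDate "
                         "must be provided together")
    if forecast_point is not None and predictions_start_date is not None:
        raise ValueError("forecastPoint cannot be combined with a "
                         "predictions date range")
    body = {"projectId": project_id, "dataSourceId": data_source_id}
    if credential_id is not None:
        body["credentialId"] = credential_id  # preferred over user/password
    if forecast_point is not None:
        body["forecastPoint"] = forecast_point
    if predictions_start_date is not None:
        body["predictionsStartDate"] = predictions_start_date
        body["predictionsEndDate"] = predictions_end_date
    return body
```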

### Initialize Prediction Explanations

**Slug:** `DATAROBOT_CREATE_PREDICTION_EXPLANATIONS_INITIALIZATION`

Tool to initialize prediction explanations for a DataRobot model. Prediction explanations help understand which features most influenced individual predictions. Use after a model is trained to enable prediction explanation insights for that model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The unique identifier of the model within the project. Obtain from DATAROBOT_LIST_MODEL_RECORDS using the project_id. |
| `project_id` | string | Yes | The unique identifier of the DataRobot project. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT. |
| `threshold_low` | number | No | The lower threshold; a prediction must score below it for prediction explanations to be computed for that row. If neither threshold_high nor threshold_low is specified, prediction explanations are computed for all rows. |
| `threshold_high` | number | No | The upper threshold; a prediction must score above it for prediction explanations to be computed for that row. If neither threshold_high nor threshold_low is specified, prediction explanations are computed for all rows. |
| `max_explanations` | integer | No | The maximum number of prediction explanations to supply per row of the dataset. Must be between 1 and 10. Defaults to 3 if not specified. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
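A sketch of assembling the initialization inputs, enforcing the documented 1-10 range for `max_explanations` (helper name hypothetical):

```python
def build_explanations_init(project_id, model_id, threshold_low=None,
                            threshold_high=None, max_explanations=3):
    """Assemble inputs for initializing prediction explanations."""
    if not 1 <= max_explanations <= 10:
        raise ValueError("max_explanations must be between 1 and 10")
    body = {"project_id": project_id, "model_id": model_id,
            "max_explanations": max_explanations}
    # When neither threshold is set, explanations are computed for all rows.
    if threshold_low is not None:
        body["threshold_low"] = threshold_low
    if threshold_high is not None:
        body["threshold_high"] = threshold_high
    return body
```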

### Create DataRobot Project

**Slug:** `DATAROBOT_CREATE_PROJECT`

Create a new DataRobot project from a dataset URL, existing dataset ID, or data source connector. Returns immediately with a project ID and status URL - project creation happens asynchronously. Use DATAROBOT_CHECK_PROJECT_STATUS or DATAROBOT_GET_PROJECT to verify the project is ready before starting modeling.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `url` | string | No | Public HTTP(S) URL pointing to a dataset file (CSV, Excel, etc.) to import. Use this for quick imports from public URLs. |
| `user` | string | No | [Deprecated] Database username for authentication. Use credentialId instead for better security. |
| `password` | string | No | [Deprecated] Database password for authentication. Use credentialId instead for better security. |
| `recipeId` | string | No | ID of a wrangling recipe to apply during project creation. Use DATAROBOT_LIST_RECIPES to find available recipes. |
| `datasetId` | string | No | ID of an existing DataRobot dataset (24-character hex string). Use DATAROBOT_LIST_DATASETS to find available dataset IDs. |
| `projectName` | string | No | Human-readable name for the project. If omitted, defaults to 'Untitled Project' for database sources or the dataset filename for URL-based imports. |
| `useKerberos` | boolean | No | Set to true to use Kerberos authentication for database connections. Only valid with datasetId or dataSourceId. |
| `credentialId` | string | No | ID of stored credentials for authenticating to dataSourceId or secured datasets. Use DATAROBOT_LIST_CREDENTIALS to find available credentials. |
| `dataSourceId` | string | No | ID of an external data source connector (e.g., database connection). Use DATAROBOT_LIST_DATA_SOURCES to find available connectors. |
| `datasetVersionId` | string | No | Specific version ID of the dataset to use. Only valid when datasetId is also provided. If omitted, uses the latest version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
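A sketch of assembling the create-project body. It assumes exactly one data source (`url`, `datasetId`, or `dataSourceId`) is required per request, which is inferred from the description rather than stated outright; confirm against the API reference.

```python
def build_create_project_request(url=None, dataset_id=None,
                                 data_source_id=None, project_name=None,
                                 credential_id=None, dataset_version_id=None):
    """Assemble a request body for project creation."""
    sources = [s for s in (url, dataset_id, data_source_id) if s is not None]
    if len(sources) != 1:
        raise ValueError("provide exactly one of url, datasetId, dataSourceId")
    if dataset_version_id is not None and dataset_id is None:
        raise ValueError("datasetVersionId is only valid with datasetId")
    body = {}
    if url is not None:
        body["url"] = url
    if dataset_id is not None:
        body["datasetId"] = dataset_id
        if dataset_version_id is not None:
            body["datasetVersionId"] = dataset_version_id
    if data_source_id is not None:
        body["dataSourceId"] = data_source_id
        if credential_id is not None:
            body["credentialId"] = credential_id
    if project_name is not None:
        body["projectName"] = project_name
    return body
```

Because creation is asynchronous, follow up with DATAROBOT_CHECK_PROJECT_STATUS before modeling.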

### Clone DataRobot Project

**Slug:** `DATAROBOT_CREATE_PROJECT_CLONES`

Tool to clone an existing DataRobot project. Use when you need to create a copy of a project with its dataset and optionally its settings. Project cloning happens asynchronously - use DATAROBOT_CHECK_PROJECT_STATUS to verify completion.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The ID of the project to clone. Use DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT to find available project IDs. |
| `copyOptions` | boolean | No | Whether all project options should be copied to the cloned project. Set to true to preserve all settings from the original project. |
| `projectName` | string | No | The name of the new cloned project. If omitted, DataRobot will generate a name based on the source project. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Projects Autopilots

**Slug:** `DATAROBOT_CREATE_PROJECTS_AUTOPILOTS`

Tool to start Autopilot on a DataRobot project with a specific feature list. Use when you need to initiate automated model building with specified Autopilot settings. Prerequisites: The project must be in 'modeling' stage with a target already set.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `mode` | string ("auto" | "comprehensive" | "quick") | No | Autopilot mode: 'auto' (default, DataRobot chooses optimal settings), 'quick' (faster execution with fewer models), or 'comprehensive' (more thorough analysis with longer runtime). |
| `useGpu` | boolean | No | Use GPU workers for the Autopilot run. |
| `projectId` | string | Yes | The unique identifier of the DataRobot project. The project must be in 'modeling' stage (target already set). Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT. |
| `featurelistId` | string | Yes | ID of the feature list to use for Autopilot. Obtain from DATAROBOT_LIST_FEATURE_LISTS using the project_id. |
| `blendBestModels` | boolean | No | Whether to blend best models during Autopilot run. Not supported in SHAP-only mode or multilabel projects. |
| `scoringCodeOnly` | boolean | No | Keep only models that can be converted to scorable Java code during Autopilot run. |
| `autopilotClusterList` | array | No | List of cluster counts for unsupervised clustering projects. Each value must be between 2 and 100; list length max 10. Only valid when unsupervisedMode is true and unsupervisedType is 'clustering'. |
| `prepareModelForDeployment` | boolean | No | Prepare model for deployment during Autopilot run. This includes creating reduced feature list models, retraining best model on higher sample size, computing insights and assigning 'RECOMMENDED FOR DEPLOYMENT' label. |
| `runLeakageRemovedFeatureList` | boolean | No | Run Autopilot on the Leakage Removed feature list if it exists. |
| `considerBlendersInRecommendation` | boolean | No | Include blenders when selecting a model to prepare for deployment. Not supported in SHAP-only mode or multilabel projects. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
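A sketch of assembling the Autopilot start request, enforcing the documented mode enum and cluster-list bounds (helper name hypothetical):

```python
def build_autopilot_request(project_id, featurelist_id, mode="auto",
                            autopilot_cluster_list=None,
                            blend_best_models=None):
    """Assemble a request body for starting Autopilot."""
    if mode not in {"auto", "comprehensive", "quick"}:
        raise ValueError("mode must be 'auto', 'comprehensive', or 'quick'")
    if autopilot_cluster_list is not None:
        if len(autopilot_cluster_list) > 10:
            raise ValueError("autopilotClusterList may contain at most "
                             "10 values")
        if any(not 2 <= n <= 100 for n in autopilot_cluster_list):
            raise ValueError("each cluster count must be between 2 and 100")
    body = {"projectId": project_id, "featurelistId": featurelist_id,
            "mode": mode}
    if autopilot_cluster_list is not None:
        body["autopilotClusterList"] = autopilot_cluster_list
    if blend_best_models is not None:
        body["blendBestModels"] = blend_best_models
    return body
```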

### Batch Transform Feature Types

**Slug:** `DATAROBOT_CREATE_PROJECTS_BATCH_TYPE_TRANSFORM_FEATURES`

Create multiple new features by transforming existing features to a different variable type. Use when you need to convert feature types in bulk (e.g., numeric to categorical, text to numeric). The operation is asynchronous - monitor the returned Location URL for completion status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `prefix` | string | No | The string that will preface all feature names. Optional if suffix is present. One or both (prefix/suffix) are required. |
| `suffix` | string | No | The string that will be appended to all feature names. Optional if prefix is present. One or both (prefix/suffix) are required. |
| `projectId` | string | Yes | The project ID to create the features in. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT. |
| `parentNames` | array | Yes | List of feature names that will be transformed into a new variable type. Must contain 1-500 feature names. |
| `variableType` | string ("text" | "categorical" | "numeric" | "categoricalInt") | Yes | The type of the new feature. Must be one of: 'text', 'categorical' (deprecated since v2.21), 'numeric', or 'categoricalInt'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
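A sketch of the batch transform body, enforcing the one-of-prefix/suffix rule and the 1-500 bound on `parentNames` (helper name hypothetical):

```python
def build_type_transform_request(project_id, parent_names, variable_type,
                                 prefix=None, suffix=None):
    """Assemble a request body for a batch feature-type transform."""
    if prefix is None and suffix is None:
        raise ValueError("at least one of prefix or suffix is required")
    if not 1 <= len(parent_names) <= 500:
        raise ValueError("parentNames must contain 1-500 feature names")
    if variable_type not in {"text", "categorical", "numeric",
                             "categoricalInt"}:
        raise ValueError(f"unsupported variableType: {variable_type}")
    body = {"projectId": project_id, "parentNames": list(parent_names),
            "variableType": variable_type}
    if prefix is not None:
        body["prefix"] = prefix
    if suffix is not None:
        body["suffix"] = suffix
    return body
```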

### Create Bias Mitigation Feature Info

**Slug:** `DATAROBOT_CREATE_PROJECTS_BIAS_MITIGATION_FEATURE_INFO`

Tool to submit a job to create bias mitigation data quality information for a given project and feature. Use when you need to create bias mitigation feature info for fairness analysis.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `project_id` | string | Yes | ID of the DataRobot project for which to create bias mitigation feature information |
| `feature_name` | string | Yes | Name of the feature for which to create bias mitigation information |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Blender Model

**Slug:** `DATAROBOT_CREATE_PROJECTS_BLENDER_MODELS`

Tool to create a blender model from multiple existing models in a DataRobot project. Blenders combine predictions from multiple models using methods like averaging or stacking. Use this after training multiple models to create an ensemble that may improve prediction accuracy.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelIds` | array | Yes | List of model IDs to blend together. Must contain at least one model ID. Use DATAROBOT_LIST_MODEL_RECORDS or similar actions to find available model IDs in the project. |
| `project_id` | string | Yes | The ID of the DataRobot project where the blender will be created. Use DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT to find project IDs. |
| `blenderMethod` | string ("PLS" | "GLM" | "ENET" | "AVG" | "MED" | "MAE" | "MAEL1" | "FORECAST_DISTANCE_AVG" | "FORECAST_DISTANCE_ENET" | "MAX" | "MIN") | Yes | The blender method to use for combining models. Common choices: "AVG" for simple averaging, "GLM" for generalized linear model, "ENET" for elastic net. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
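A sketch of assembling the blender request, validating the method enum and the non-empty `modelIds` list (helper name hypothetical; parameter names mirror the table, including its mixed casing):

```python
BLENDER_METHODS = {"PLS", "GLM", "ENET", "AVG", "MED", "MAE", "MAEL1",
                   "FORECAST_DISTANCE_AVG", "FORECAST_DISTANCE_ENET",
                   "MAX", "MIN"}


def build_blender_request(project_id, model_ids, blender_method="AVG"):
    """Assemble a request body for creating a blender model."""
    if not model_ids:
        raise ValueError("modelIds must contain at least one model ID")
    if blender_method not in BLENDER_METHODS:
        raise ValueError(f"unknown blenderMethod: {blender_method}")
    return {"project_id": project_id, "modelIds": list(model_ids),
            "blenderMethod": blender_method}
```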

### Validate Cross-Series Properties

**Slug:** `DATAROBOT_CREATE_PROJECTS_CROSS_SERIES_PROPERTIES`

Tool to validate columns for potential use as the group-by column for cross-series functionality in a DataRobot project. Use when configuring multiseries or time series projects with cross-series features.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The unique identifier of the DataRobot project. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_CREATE_PROJECT. |
| `multiseriesIdColumn` | string | Yes | The name of the column that will be used as the multiseries ID column for this project. |
| `datetimePartitionColumn` | string | Yes | The name of the column that will be used as the datetime partitioning column for the project. |
| `crossSeriesGroupByColumns` | array | No | If specified, these columns will be validated for usage as the group-by column for creating cross-series features. If not present, all columns from the dataset will be validated and only the eligible ones returned. To be valid, a column should be categorical or numerical (but not float), not be the series ID or equivalent to the series ID, not split any series, and not consist of only one value. |
| `userDefinedSegmentIdColumn` | string | No | The name of the column that will be used as the user defined segment ID column for this project. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Datetime Model From Model

**Slug:** `DATAROBOT_CREATE_PROJECTS_DATETIME_MODELS_FROM_MODEL`

Tool to retrain an existing datetime model with specified parameters. Use when you need to create a new version of a datetime model with different training settings, sample size, or feature list. This action is asynchronous and returns a job URL to monitor progress.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The ID of an existing model to use as the source for the training parameters. |
| `nClusters` | integer | No | The number of clusters to use in the specified unsupervised clustering model. Only valid in unsupervised clustering projects. Must be between 2 and 100. |
| `projectId` | string | Yes | The ID of the DataRobot project. Use DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT to find project IDs. |
| `featurelistId` | string | No | If specified, the new model will be trained using this featurelist. Otherwise, the model will be trained on the same feature list as the source model. |
| `samplingMethod` | string ("random" | "latest") | No | Method for selecting training data when subsampling is used. |
| `trainingEndDate` | string | No | A datetime string representing the end date of the data to use for training this model. Note that only one of trainingDuration or trainingRowCount or trainingStartDate and trainingEndDate should be specified. If trainingStartDate and trainingEndDate are specified, the source model must be frozen. |
| `trainingDuration` | string | No | A duration string representing the training duration to use for training the new model. If specified, the model will be trained using the specified training duration. Otherwise, the original model's duration will be used. Only one of trainingRowCount, trainingDuration, trainingStartDate and trainingEndDate, or useProjectSettings may be specified. |
| `trainingRowCount` | integer | No | The number of rows of data that should be used to train the model. If not specified, the original model's row count will be used. Only one of trainingRowCount, trainingDuration, trainingStartDate and trainingEndDate, or useProjectSettings may be specified. |
| `trainingStartDate` | string | No | A datetime string representing the start date of the data to use for training this model. Note that only one of trainingDuration or trainingRowCount or trainingStartDate and trainingEndDate should be specified. If trainingStartDate and trainingEndDate are specified, the source model must be frozen. |
| `useProjectSettings` | boolean | No | If True, the model will be trained using the previously-specified custom backtest training settings. Only one of trainingRowCount, trainingDuration, trainingStartDate and trainingEndDate, or useProjectSettings may be specified. |
| `timeWindowSamplePct` | integer | No | An integer between 1 and 99 indicating the percentage of sampling within the time window. The points kept are determined by the samplingMethod option. If specified, trainingRowCount may not be specified, and the specified model must either be a duration or selectedDateRange model, or one of trainingDuration or trainingStartDate and trainingEndDate must be specified. |
| `monotonicDecreasingFeaturelistId` | string | No | The ID of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. If null, no such constraints are enforced. |
| `monotonicIncreasingFeaturelistId` | string | No | The ID of the featurelist that defines the set of features with a monotonically increasing relationship to the target. If null, no such constraints are enforced. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
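The training-window options above are mutually exclusive: only one of `trainingRowCount`, `trainingDuration`, the `trainingStartDate`/`trainingEndDate` pair, or `useProjectSettings` may be supplied. A hedged sketch of a builder that enforces this (helper name hypothetical):

```python
def build_datetime_retrain_request(project_id, model_id,
                                   training_row_count=None,
                                   training_duration=None,
                                   training_start_date=None,
                                   training_end_date=None,
                                   use_project_settings=False,
                                   featurelist_id=None):
    """Assemble a request body, allowing only one training-window option."""
    if (training_start_date is None) != (training_end_date is None):
        raise ValueError("trainingStartDate and trainingEndDate "
                         "must be provided together")
    chosen = [training_row_count is not None,
              training_duration is not None,
              training_start_date is not None,
              use_project_settings]
    if sum(chosen) > 1:
        raise ValueError("only one of trainingRowCount, trainingDuration, "
                         "trainingStartDate/trainingEndDate, or "
                         "useProjectSettings may be specified")
    body = {"projectId": project_id, "modelId": model_id}
    if featurelist_id is not None:
        body["featurelistId"] = featurelist_id
    if training_row_count is not None:
        body["trainingRowCount"] = training_row_count
    if training_duration is not None:
        body["trainingDuration"] = training_duration
    if training_start_date is not None:
        body["trainingStartDate"] = training_start_date
        body["trainingEndDate"] = training_end_date
    if use_project_settings:
        body["useProjectSettings"] = True
    return body
```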

### Prepare Model for Deployment

**Slug:** `DATAROBOT_CREATE_PROJECTS_DEPLOYMENT_READY_MODELS`

Tool to prepare a DataRobot model for deployment by marking it as deployment-ready. Use when you have a trained model that needs to be packaged for deployment. The operation is asynchronous - poll the returned location URL to check completion status. After the model is deployment-ready, use DATAROBOT_CREATE_MODEL_PACKAGE to create a deployable package.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | ID of the model to prepare for deployment (24-character hex string). Use DATAROBOT_LIST_MODELS or DATAROBOT_LIST_PROJECT_MODELS to find model IDs within a project. |
| `projectId` | string | Yes | ID of the DataRobot project containing the model (24-character hex string). Use DATAROBOT_LIST_PROJECTS to find project IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Feature Association Matrix

**Slug:** `DATAROBOT_CREATE_PROJECTS_FEATURE_ASSOCIATION_MATRIX`

Tool to compute a feature association matrix for a DataRobot project using a specific featurelist. Use when you need to analyze feature correlations and associations in a project. The computation is asynchronous and returns immediately with a job ID to monitor progress.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | Unique identifier of the DataRobot project. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT. |
| `featurelistId` | string | Yes | A featurelist ID to calculate feature association matrix for. Obtain from DATAROBOT_LIST_FEATURELISTS using the project_id. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Feature List

**Slug:** `DATAROBOT_CREATE_PROJECTS_FEATURELISTS`

Tool to create a new featurelist in a DataRobot project. Use when you need to define a custom set of features for model training.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name for the new featurelist. Must be unique within the project and not exceed 100 characters. |
| `features` | array | Yes | List of feature names to include in the featurelist. Must contain at least one feature. Feature names must exist in the project. |
| `projectId` | string | Yes | The unique identifier of the DataRobot project. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_CREATE_PROJECT. |
| `skipDatetimePartitionColumn` | boolean | No | Whether the featurelist should exclude the datetime partition column. Defaults to false. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
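
A minimal sketch of assembling the input for this tool, enforcing the documented constraints (unique name up to 100 characters, at least one feature). The IDs and feature names are hypothetical:

```python
def build_featurelist_payload(project_id, name, features,
                              skip_datetime_partition=False):
    """Validate and assemble input for DATAROBOT_CREATE_PROJECTS_FEATURELISTS."""
    if not (1 <= len(name) <= 100):
        raise ValueError("name must be 1-100 characters and unique in the project")
    if not features:
        raise ValueError("features must contain at least one feature name")
    return {
        "projectId": project_id,
        "name": name,
        "features": list(features),
        "skipDatetimePartitionColumn": skip_datetime_partition,
    }

payload = build_featurelist_payload("64a1f0c2e4b0a1b2c3d4e5f6",
                                    "top-features", ["age", "income"])
```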

### Create Frozen Model

**Slug:** `DATAROBOT_CREATE_PROJECTS_FROZEN_MODELS`

Train a new frozen model with parameters from an existing model. Frozen models replicate the training parameters and hyperparameters of a source model, allowing you to retrain with different sample sizes or cluster counts while maintaining consistent methodology. Use when you want to test how a model performs with different data sizes or clustering configurations without changing its fundamental approach.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The ID of an existing model to use as a source of training parameters. The frozen model will replicate this model's hyperparameters and training approach. |
| `nClusters` | integer | No | The number of clusters to use in the specified unsupervised clustering model. Only valid in unsupervised clustering projects. Must be between 2 and 100. |
| `projectId` | string | Yes | The unique identifier of the DataRobot project. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT. |
| `samplePct` | number | No | The percentage of the dataset to use with the model (between 0.0 and 100.0). Only one of samplePct and trainingRowCount should be specified. |
| `trainingRowCount` | integer | No | The integer number of rows of the dataset to use with the model. Only one of samplePct and trainingRowCount should be specified. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
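
The table documents two mutual-exclusion and range rules (`samplePct` vs. `trainingRowCount`, `nClusters` in 2-100). A sketch of client-side validation before calling the tool, with hypothetical IDs:

```python
def build_frozen_model_payload(project_id, model_id, sample_pct=None,
                               training_row_count=None, n_clusters=None):
    """Assemble input for DATAROBOT_CREATE_PROJECTS_FROZEN_MODELS,
    enforcing the documented parameter constraints."""
    if sample_pct is not None and training_row_count is not None:
        raise ValueError("specify only one of samplePct and trainingRowCount")
    payload = {"projectId": project_id, "modelId": model_id}
    if sample_pct is not None:
        if not 0.0 < sample_pct <= 100.0:
            raise ValueError("samplePct must be between 0.0 and 100.0")
        payload["samplePct"] = sample_pct
    if training_row_count is not None:
        payload["trainingRowCount"] = training_row_count
    if n_clusters is not None:
        if not 2 <= n_clusters <= 100:
            raise ValueError("nClusters must be between 2 and 100")
        payload["nClusters"] = n_clusters
    return payload

payload = build_frozen_model_payload("64a1f0c2e4b0a1b2c3d4e5f6",
                                     "64a1f0c2e4b0a1b2c3d4e5f7",
                                     sample_pct=64.0)
```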

### Train DataRobot Model

**Slug:** `DATAROBOT_CREATE_PROJECTS_MODELS`

Tool to train a new model in a DataRobot project using a specific blueprint. Use this after obtaining a blueprintId from DATAROBOT_LIST_BLUEPRINTS or from existing models. The model training happens asynchronously - poll the returned location URL to monitor progress.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `nClusters` | integer | No | The number of clusters to use in the specified unsupervised clustering model. Only valid in unsupervised clustering projects. Must be between 2 and 100. |
| `samplePct` | number | No | The percentage of the dataset to use with the model. Only one of samplePct and trainingRowCount should be specified. The specified percentage should be between 0 and 100. |
| `project_id` | string | Yes | The ID of the DataRobot project where the model will be trained. Use DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT to find project IDs. |
| `blueprintId` | string | Yes | The ID of a blueprint to use to generate the model. Allowed blueprints can be retrieved using DATAROBOT_LIST_BLUEPRINTS or taken from existing models. |
| `scoringType` | string ("validation" | "crossValidation") | No | Scoring type for model training validation. |
| `featurelistId` | string | No | If specified, the model will be trained using this featurelist. If not specified, the recommended featurelist for the specified blueprint will be used. If there is no recommended featurelist, the project's default will be used. |
| `sourceProjectId` | string | No | The project the blueprint comes from. Required only if the blueprintId comes from a different project. |
| `trainingRowCount` | integer | No | An integer representing the number of rows of the dataset to use with the model. Only one of samplePct and trainingRowCount should be specified. |
| `monotonicDecreasingFeaturelistId` | string | No | The ID of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. If null, no constraints will be enforced. If omitted, the project default is used. |
| `monotonicIncreasingFeaturelistId` | string | No | The ID of the featurelist that defines the set of features with a monotonically increasing relationship to the target. If null, no constraints will be enforced. If omitted, the project default is used. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Advanced-Tuned Model

**Slug:** `DATAROBOT_CREATE_PROJECTS_MODELS_ADVANCED_TUNING`

Submit a job to create a new version of a DataRobot model with different advanced tuning parameters. Use this to fine-tune model hyperparameters beyond the default Autopilot settings. The operation is asynchronous - poll the returned location URL to check job status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The model ID to create an advanced-tuned version from. |
| `projectId` | string | Yes | The project ID containing the model to tune. |
| `tuningParameters` | array | Yes | List of parameters to tune with their new values. At least one parameter must be specified. |
| `tuningDescription` | string | No | Human-readable description of this advanced-tuning request to help identify the tuned model version. |
| `gridSearchArguments` | object | No | Grid search configuration for advanced model tuning. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
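
A sketch of an advanced-tuning request body. The table does not spell out the item shape of `tuningParameters`; the `parameterId`/`value` pairing below follows DataRobot's advanced-tuning convention but should be treated as an assumption, and the parameter IDs are purely illustrative (real IDs come from the model's advanced-tuning parameter listing):

```python
tuning_request = {
    "tuningDescription": "lower learning rate, deeper trees",
    # At least one entry is required.
    "tuningParameters": [
        {"parameterId": "learning_rate", "value": 0.05},
        {"parameterId": "max_depth", "value": 8},
    ],
}
```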

### Create Feature Effects

**Slug:** `DATAROBOT_CREATE_PROJECTS_MODELS_FEATURE_EFFECTS`

Tool to request Feature Effects calculation for a DataRobot model. Feature Effects show how each feature impacts predictions, including partial dependence and predicted vs actual relationships. Use when you need to compute Feature Effects for model interpretability analysis. The computation is asynchronous - use the returned location URL to poll for completion status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The unique identifier of the model |
| `rowCount` | integer | No | The number of rows from dataset to use for Feature Effects calculation. Must be between 10 and 100000, or the training sample size of the model, whichever is less. Optional. |
| `project_id` | string | Yes | The unique identifier of the DataRobot project |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Calculate Model Feature Impact

**Slug:** `DATAROBOT_CREATE_PROJECTS_MODELS_FEATURE_IMPACT`

Tool to add a request to calculate feature impact for a DataRobot model to the job queue. Use this to understand which features have the most influence on model predictions. The operation is asynchronous - poll the returned location URL to check job status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `backtest` | string | No | The backtest value used for Feature Impact computation. Applicable for datetime aware models only. Can be an integer backtest index or string identifier. |
| `model_id` | string | Yes | The ID of the model for which to compute feature impact. Use DATAROBOT_LIST_PROJECTS_MODELS or DATAROBOT_GET_PROJECTS_MODELS to find model IDs. |
| `rowCount` | integer | No | The sample size to use for Feature Impact computation (10-100000 rows). If not specified, defaults to the training sample size of the model. Maximum is 100000 rows or the training sample size, whichever is less. |
| `project_id` | string | Yes | The ID of the DataRobot project containing the model. Use DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT to find project IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
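
The `rowCount` rules above (default to the model's training sample size, upper bound of 100000 rows or the training sample size, whichever is less) can be resolved client-side before submitting the job. A minimal sketch:

```python
def effective_row_count(requested, training_sample_size):
    """Resolve rowCount for a Feature Impact request per the documented rules:
    default to the training sample size, capped at 100000; explicit values
    must fall within [10, cap]."""
    cap = min(100_000, training_sample_size)
    if requested is None:
        return cap
    if not 10 <= requested <= cap:
        raise ValueError(f"rowCount must be between 10 and {cap}")
    return requested

default_rows = effective_row_count(None, 250_000)   # capped default
explicit_rows = effective_row_count(5_000, 250_000)  # passes range check
```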

### Retrain Model From Existing Model

**Slug:** `DATAROBOT_CREATE_PROJECTS_MODELS_FROM_MODEL`

Tool to retrain an existing model with specified parameters. Use when you need to create a new version of a model with different training settings, sample size, or feature list. This action is asynchronous and returns a job URL to monitor progress.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The ID of an existing model to use as the source for the training parameters. |
| `nClusters` | integer | No | The number of clusters to use in the specified unsupervised clustering model. Only valid in unsupervised clustering projects. Must be between 2 and 100. |
| `projectId` | string | Yes | The ID of the DataRobot project. Use DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT to find project IDs. |
| `samplePct` | number | No | The percentage of the dataset to use to train the model, between 0 and 100. If not specified, the original model's sample percent is used. |
| `scoringType` | string ("validation" | "crossValidation") | No | Validation type for model retraining. |
| `featurelistId` | string | No | If specified, the model will be trained using that featurelist, otherwise the model will be trained on the same feature list as before. |
| `trainingRowCount` | integer | No | The number of rows to use to train the model. If not specified, the original model's training row count is used. |
| `monotonicDecreasingFeaturelistId` | string | No | The ID of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. If null, no such constraints are enforced. |
| `monotonicIncreasingFeaturelistId` | string | No | The ID of the featurelist that defines the set of features with a monotonically increasing relationship to the target. If null, no such constraints are enforced. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create SHAP Impact for Model (Deprecated)

**Slug:** `DATAROBOT_CREATE_PROJECTS_MODELS_SHAP_IMPACT`

Tool to create SHAP-based Feature Impact for a specific DataRobot model. DEPRECATED API - prefer CreateInsightsShapImpact for new implementations. This endpoint initiates an asynchronous calculation of SHAP values for feature importance. Poll the returned location URL to check status and retrieve results once the calculation completes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The unique identifier of the model for which to calculate SHAP Impact. |
| `rowCount` | integer | No | The sample size to use for Feature Impact computation (10-100000 rows). It is possible to re-compute Feature Impact with a different row count. |
| `projectId` | string | Yes | The unique identifier of the DataRobot project containing the model. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Projects Payoff Matrices

**Slug:** `DATAROBOT_CREATE_PROJECTS_PAYOFF_MATRICES`

Tool to create a payoff matrix for a binary classification project in DataRobot. Use when you need to define cost-benefit values for model predictions to evaluate profit curves. Only works with binary classification projects.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name of the payoff matrix to be created. |
| `projectId` | string | Yes | The project ID to create a payoff matrix for. Must be a binary classification project. |
| `trueNegativeValue` | number | Yes | True negative value to use for profit curve calculation. Value assigned when model correctly predicts negative class. |
| `truePositiveValue` | number | Yes | True positive value to use for profit curve calculation. Value assigned when model correctly predicts positive class. |
| `falseNegativeValue` | number | Yes | False negative value to use for profit curve calculation. Value assigned when model incorrectly predicts negative class (Type II error). |
| `falsePositiveValue` | number | Yes | False positive value to use for profit curve calculation. Value assigned when model incorrectly predicts positive class (Type I error). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
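
To make the four payoff values concrete: DataRobot uses them to weight a confusion matrix into a total profit on the profit curve. A worked sketch with illustrative values and counts:

```python
payoff = {
    "truePositiveValue": 50.0,    # gain per correctly flagged positive
    "trueNegativeValue": 0.0,     # correctly ignored negative
    "falsePositiveValue": -10.0,  # cost of a Type I error
    "falseNegativeValue": -50.0,  # cost of a Type II error (missed positive)
}

def total_payoff(tp, tn, fp, fn, matrix):
    """Weight confusion-matrix counts by the payoff matrix values."""
    return (tp * matrix["truePositiveValue"]
            + tn * matrix["trueNegativeValue"]
            + fp * matrix["falsePositiveValue"]
            + fn * matrix["falseNegativeValue"])

# 80*50 + 800*0 + 100*(-10) + 20*(-50) = 2000
profit = total_payoff(tp=80, tn=800, fp=100, fn=20, matrix=payoff)
```

Varying the classification threshold changes the counts, and the threshold that maximizes this total is what the profit curve surfaces.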

### Create Prediction Dataset Upload

**Slug:** `DATAROBOT_CREATE_PROJECTS_PRED_DATASETS_DATASET_UPLOADS`

Tool to create a prediction dataset upload for a DataRobot project. Use when you need to upload a dataset that will be used for making predictions on a project. Returns a new dataset ID for the prediction dataset. For time series projects, you can specify forecast points or bulk prediction date ranges.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user` | string | No | [DEPRECATED] The username for database authentication. Use credentialId or credentialData instead. |
| `password` | string | No | [DEPRECATED] The password (in cleartext) for database authentication. Use credentialId or credentialData instead. |
| `datasetId` | string | Yes | The ID of the dataset entry to use for prediction dataset. |
| `projectId` | string | Yes | The ID of the project for which to create the prediction dataset upload. |
| `credentials` | array | No | List of credentials for the secondary datasets used in feature discovery project. Maximum 30 items. |
| `useKerberos` | boolean | No | If true, use kerberos authentication for database authentication. Default is false. |
| `credentialId` | string | No | The ID of the set of credentials to authenticate with the database. |
| `forecastPoint` | string | No | For time series projects only. The time in the dataset relative to which predictions are generated. If not specified, uses the row with the latest timestamp. Cannot be used with predictionsStartDate/predictionsEndDate. |
| `credentialData` | object | No | The credentials to authenticate with the database, to be used instead of credential ID. Can be one of several credential types (basic, AWS, OAuth, SSH, GCP, Databricks, Azure). |
| `datasetVersionId` | string | No | The ID of the dataset version to use for the prediction dataset. If not specified, uses latest version associated with datasetId. |
| `actualValueColumn` | string | No | Actual value column name, valid for the prediction files if the project is unsupervised and the dataset is considered a bulk predictions dataset. |
| `predictionsEndDate` | string | No | For time series projects only. The end date (exclusive) for bulk predictions using training data. Must be provided with predictionsStartDate. Cannot be used with forecastPoint. |
| `predictionsStartDate` | string | No | For time series projects only. The start date for bulk predictions using training data (historical predictions, not future). Must be provided with predictionsEndDate. Cannot be used with forecastPoint. |
| `secondaryDatasetsConfigId` | string | No | For feature discovery projects only. The ID of the alternative secondary dataset config to use during prediction. |
| `relaxKnownInAdvanceFeaturesCheck` | boolean | No | For time series projects only. If True, missing values in the known in advance features are allowed in the forecast window. Default is False. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
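
The time series parameters above carry two easy-to-miss rules: `forecastPoint` cannot be combined with the bulk-prediction date range, and the range must be supplied as a pair. A small validation sketch (dates are illustrative):

```python
def validate_time_series_args(forecast_point=None,
                              predictions_start_date=None,
                              predictions_end_date=None):
    """Enforce the documented rules for time series prediction uploads."""
    has_range = (predictions_start_date is not None
                 or predictions_end_date is not None)
    if forecast_point is not None and has_range:
        raise ValueError("forecastPoint cannot be used with "
                         "predictionsStartDate/predictionsEndDate")
    if (predictions_start_date is None) != (predictions_end_date is None):
        raise ValueError("predictionsStartDate and predictionsEndDate "
                         "must be provided together")
    return True

ok = validate_time_series_args(predictions_start_date="2024-01-01",
                               predictions_end_date="2024-02-01")
```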

### Create Prediction Dataset from URL

**Slug:** `DATAROBOT_CREATE_PROJECTS_PREDICTION_DATASETS_URL_UPLOADS`

Tool to upload a prediction dataset from a URL to a DataRobot project. Use when you need to add prediction data from a publicly accessible URL to an existing project for scoring. The upload happens asynchronously - use the returned statusId to monitor progress.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `url` | string | Yes | The URL to download the dataset from for predictions. Must be a valid HTTP/HTTPS URL pointing to a dataset file (CSV, Excel, etc.). |
| `projectId` | string | Yes | The project ID to which the data will be uploaded for prediction. |
| `credentials` | array | No | A list of credentials for the secondary datasets used in feature discovery project. Maximum 30 items. |
| `forecastPoint` | string | No | For time series projects only. The time in the dataset relative to which predictions are generated. If not specified, defaults to the row with the latest timestamp. Specifying this value for a project that is not a time series project results in an error. |
| `actualValueColumn` | string | No | Actual value column name, valid for the prediction files if the project is unsupervised and the dataset is considered a bulk predictions dataset. |
| `predictionsEndDate` | string | No | Used for time series projects only. The end date for bulk predictions, exclusive. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with predictionsStartDate, and cannot be provided with the forecastPoint parameter. |
| `predictionsStartDate` | string | No | Used for time series projects only. The start date for bulk predictions. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with predictionsEndDate, and cannot be provided with the forecastPoint parameter. |
| `secondaryDatasetsConfigId` | string | No | For feature discovery projects only. The ID of the alternative secondary dataset config to use during prediction. |
| `relaxKnownInAdvanceFeaturesCheck` | boolean | No | For time series projects only. If true, missing values in the known in advance features are allowed in the forecast window at the prediction time. If omitted or false, missing values are not allowed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Project Predictions

**Slug:** `DATAROBOT_CREATE_PROJECTS_PREDICTIONS`

Tool to create new predictions for a dataset using a trained model within a DataRobot project. Returns immediately with a status URL - prediction computation happens asynchronously. Use the location URL to poll for completion and retrieve results.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The model ID to make predictions with. Use DATAROBOT_LIST_MODEL_RECORDS to find available models in the project. |
| `datasetId` | string | Yes | The dataset ID to compute predictions for. Must have been previously uploaded. Use DATAROBOT_LIST_DATASETS to find available datasets. |
| `projectId` | string | Yes | The project ID to make predictions within. Use DATAROBOT_LIST_PROJECTS to find available projects. |
| `forecastPoint` | string | No | For time series projects only. The time in the dataset relative to which predictions are generated (ISO 8601 date-time format). If not specified, defaults to the latest timestamp in the dataset. |
| `maxExplanations` | integer | No | Maximum number of explanation values to return for each row, ordered by absolute value. Defaults to null for datasets narrower than 100 columns, 100 for wider datasets. Only valid when explanationAlgorithm is set. |
| `includeFdwCounts` | boolean | No | For time series projects with partial history only. Indicates if feature derivation window counts will be part of the response. |
| `actualValueColumn` | string | No | For time series projects only. Actual value column name, valid for prediction files if the project is unsupervised and the dataset is considered a bulk predictions dataset. |
| `predictionsEndDate` | string | No | For time series projects only. The end date for bulk predictions, exclusive (ISO 8601 date-time format). Used for generating historical predictions using training data. Must be provided with predictionsStartDate, cannot be used with forecastPoint. |
| `predictionThreshold` | number | No | Threshold used for binary classification in predictions (0.0-1.0). If not specified, model default prediction threshold will be used. |
| `explanationAlgorithm` | string ("shap") | No | Explanation algorithm for predictions. |
| `predictionsStartDate` | string | No | For time series projects only. The start date for bulk predictions (ISO 8601 date-time format). Used for generating historical predictions using training data. Must be provided with predictionsEndDate, cannot be used with forecastPoint. |
| `predictionIntervalsSize` | integer | No | Represents the percentile to use for the size of the prediction intervals (1-100). Defaults to 80 if includePredictionIntervals is True. |
| `includePredictionIntervals` | boolean | No | Specifies whether prediction intervals should be calculated. Defaults to True if predictionIntervalsSize is specified, otherwise False. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
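
The interlocking defaults for `includePredictionIntervals` and `predictionIntervalsSize` described above can be resolved ahead of the call. A minimal sketch:

```python
def resolve_prediction_intervals(include=None, size=None):
    """Apply the documented defaults: includePredictionIntervals defaults to
    True when a size is given, and size defaults to 80 when intervals are
    enabled; size must be within 1-100."""
    if include is None:
        include = size is not None
    if include and size is None:
        size = 80
    if size is not None and not 1 <= size <= 100:
        raise ValueError("predictionIntervalsSize must be between 1 and 100")
    return include, size

include, size = resolve_prediction_intervals(size=90)      # size given
default_include, default_size = resolve_prediction_intervals()  # all defaults
```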

### Create Project Rating Table

**Slug:** `DATAROBOT_CREATE_PROJECTS_RATING_TABLES`

Tool to upload a modified rating table file to a DataRobot project. Use when you need to create a new rating table from a parent model. Returns the created rating table ID and name.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The unique identifier of the DataRobot project that owns this rating table data. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT. |
| `parentModelId` | string | Yes | The parent model ID from which this rating table file was derived. This links the rating table to a specific model in the project. |
| `ratingTableFile` | object | Yes | Rating table file to upload. |
| `ratingTableName` | string | Yes | Human-readable name for the new rating table to create. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Project Training Predictions

**Slug:** `DATAROBOT_CREATE_PROJECTS_TRAINING_PREDICTIONS`

Tool to create training predictions for a DataRobot project. Use when you need to compute predictions on the training data using a trained model. Returns immediately with a status URL - computation happens asynchronously.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The model ID to make predictions with. Use DATAROBOT_LIST_MODEL_RECORDS to find available models in the project. |
| `projectId` | string | Yes | Project ID to compute training predictions for. Use DATAROBOT_LIST_PROJECTS to find available projects. |
| `dataSubset` | string ("all" | "validationAndHoldout" | "holdout" | "allBacktests" | "validation" | "crossValidation") | No | Subset of data predicted on: 'all' returns predictions for all rows including training, validation, holdout and discarded rows (not available for large datasets or Date/Time partitioning). 'validationAndHoldout' returns predictions for validation and holdout scores (not available for large projects or Date/Time projects trained into validation). 'holdout' returns predictions for holdout score calculation (not available without holdout or for large datasets). 'allBacktests' returns predictions for backtesting scores in Date/Time projects. 'validation' returns predictions for validation score calculation. 'crossValidation' returns cross-validation predictions. |
| `maxExplanations` | integer | No | Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. Defaults to null for datasets narrower than 100 columns, 100 for wider datasets. Cannot be set if explanationAlgorithm is omitted. |
| `explanationAlgorithm` | string | No | If set to 'shap', the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to null (no prediction explanations). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
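
A sketch of assembling this tool's input, enforcing the `dataSubset` enum and the rule that `maxExplanations` cannot be set without `explanationAlgorithm`. The IDs are hypothetical:

```python
VALID_SUBSETS = {"all", "validationAndHoldout", "holdout",
                 "allBacktests", "validation", "crossValidation"}

def build_training_predictions_payload(project_id, model_id, data_subset=None,
                                       explanation_algorithm=None,
                                       max_explanations=None):
    """Assemble input for DATAROBOT_CREATE_PROJECTS_TRAINING_PREDICTIONS."""
    if data_subset is not None and data_subset not in VALID_SUBSETS:
        raise ValueError(f"dataSubset must be one of {sorted(VALID_SUBSETS)}")
    if max_explanations is not None and explanation_algorithm is None:
        raise ValueError("maxExplanations cannot be set "
                         "if explanationAlgorithm is omitted")
    payload = {"projectId": project_id, "modelId": model_id}
    if data_subset is not None:
        payload["dataSubset"] = data_subset
    if explanation_algorithm is not None:
        payload["explanationAlgorithm"] = explanation_algorithm
        if max_explanations is not None:
            payload["maxExplanations"] = max_explanations
    return payload

payload = build_training_predictions_payload("proj-id", "model-id",
                                             data_subset="holdout")
```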

### Transform Feature Type

**Slug:** `DATAROBOT_CREATE_PROJECTS_TYPE_TRANSFORM_FEATURES`

Create a new feature by changing the type of an existing feature in a DataRobot project. Use when you need to convert a single feature to a different variable type (e.g., numeric to text, categorical to numeric). The operation is asynchronous - monitor the returned Location URL for completion status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the new feature. Must not be the same as any existing features for this project. Must not contain '/' character. |
| `projectId` | string | Yes | The project ID to create the feature in. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT. |
| `parentName` | string | Yes | The name of the parent feature to transform. |
| `replacement` | string | No | The replacement value in case of a failed transformation. Can be string, boolean, number, or null. |
| `variableType` | string ("text" | "categorical" | "numeric" | "categoricalInt") | Yes | The type of the new feature. Must be one of: 'text', 'categorical' (deprecated in v2.21), 'numeric', or 'categoricalInt'. |
| `dateExtraction` | string ("year" | "yearDay" | "month" | "monthDay" | "week" | "weekDay") | No | Enum for date extraction values. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
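
A sketch of building a type-transform request, checking the documented naming rule (no '/' in the new feature name) and the `variableType` enum. Names and IDs are illustrative:

```python
VARIABLE_TYPES = {"text", "categorical", "numeric", "categoricalInt"}

def build_type_transform_payload(project_id, name, parent_name, variable_type,
                                 date_extraction=None):
    """Assemble input for DATAROBOT_CREATE_PROJECTS_TYPE_TRANSFORM_FEATURES."""
    if "/" in name:
        raise ValueError("name must not contain the '/' character")
    if variable_type not in VARIABLE_TYPES:
        raise ValueError(f"variableType must be one of {sorted(VARIABLE_TYPES)}")
    payload = {"projectId": project_id, "name": name,
               "parentName": parent_name, "variableType": variable_type}
    if date_extraction is not None:
        payload["dateExtraction"] = date_extraction
    return payload

payload = build_type_transform_payload("proj-id", "age_as_text", "age", "text")
```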

### Create Recipe from Recipe

**Slug:** `DATAROBOT_CREATE_RECIPE_FROM_RECIPE`

Tool to clone an existing wrangling recipe in DataRobot. Use when you need to create a copy of an existing recipe. The cloned recipe will have the same operations and settings as the original recipe.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The recipe name for the cloned recipe. If omitted, a default name will be generated. |
| `recipeId` | string | Yes | Recipe ID to create a Recipe from. Use DATAROBOT_LIST_RECIPES to find available recipe IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Recipe from Dataset

**Slug:** `DATAROBOT_CREATE_RECIPES_FROM_DATASET`

Tool to create a DataRobot recipe from a dataset. Use when creating wrangling, SQL, or feature discovery recipes. Supports two modes: simple mode (requires dialect and status) and extended mode (requires recipeType with optional inputs and settings).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `inputs` | array | No | List of input data sources for the recipe. Optional in extended mode. |
| `status` | string ("draft" | "preview" | "published") | No | Recipe publication status. Required in simple mode, optional in extended mode. |
| `dialect` | string ("snowflake" | "bigquery" | "spark-feature-discovery" | "databricks" | "spark" | "postgres") | No | Source type data was retrieved from. Required in simple mode, optional in extended mode. |
| `datasetId` | string | Yes | ID of the dataset to create the recipe from. Use DATAROBOT_LIST_DATASETS to find available datasets. |
| `useCaseId` | string | No | ID of the use case to associate with the recipe. Optional in extended mode. |
| `recipeType` | string ("sql" | "Sql" | "SQL" | "wrangling" | "Wrangling" | "WRANGLING" | "featureDiscovery" | "FeatureDiscovery" | "FEATURE_DISCOVERY" | "featureDiscoveryPrivatePreview" | "FeatureDiscoveryPrivatePreview" | "FEATURE_DISCOVERY_PRIVATE_PREVIEW") | No | Type of the recipe workflow. Required in extended mode, optional in simple mode. |
| `snapshotPolicy` | string ("latest" | "specified") | No | Snapshot policy for the dataset version. Optional in extended mode. |
| `datasetVersionId` | string | No | Specific version ID of the dataset. If omitted, uses the latest version. Optional in extended mode. |
| `experimentContainerId` | string | No | ID of the experiment container. Optional in extended mode. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
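
The two modes described above differ in which fields are mandatory. The sketch below builds the request body for each mode; the helper function is hypothetical and only mirrors the parameter table, it is not part of any official client:

```python
# Sketch: build the request body for DATAROBOT_CREATE_RECIPES_FROM_DATASET.
# Hypothetical helper; field names follow the parameter table above.

def build_recipe_from_dataset_payload(dataset_id, *, dialect=None, status=None,
                                      recipe_type=None, use_case_id=None):
    """Simple mode needs dialect + status; extended mode needs recipeType."""
    payload = {"datasetId": dataset_id}
    if recipe_type is not None:                      # extended mode
        payload["recipeType"] = recipe_type
        if use_case_id is not None:
            payload["useCaseId"] = use_case_id
    else:                                            # simple mode
        if dialect is None or status is None:
            raise ValueError("simple mode requires both dialect and status")
        payload["dialect"] = dialect
        payload["status"] = status
    return payload

# Simple mode: dialect and status are mandatory.
simple = build_recipe_from_dataset_payload("ds_123", dialect="snowflake",
                                           status="draft")

# Extended mode: recipeType drives the workflow type.
extended = build_recipe_from_dataset_payload("ds_123", recipe_type="wrangling",
                                             use_case_id="uc_456")
```
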

### Create Recipe from Data Store

**Slug:** `DATAROBOT_CREATE_RECIPES_FROM_DATA_STORE`

Create a recipe and data source from a DataRobot data store. Use when connecting to external databases like Snowflake, BigQuery, Databricks, or Postgres to create a wrangling or SQL recipe. Requires an existing data store ID (from DATAROBOT_LIST_DATA_SOURCES) and use case ID (from GET /api/v2/useCases/).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `inputs` | array | Yes | List of recipe inputs (data sources). Must contain at least 1 input and no more than 1000. |
| `dialect` | string ("snowflake" | "bigquery" | "databricks" | "spark" | "postgres") | Yes | SQL dialect of the data source: 'snowflake', 'bigquery', 'databricks', 'spark', or 'postgres' |
| `useCaseId` | string | No | ID of the Use Case to associate with the recipe. Use GET /api/v2/useCases/ to find available use cases. This field is required in practice even though marked as optional in the schema. |
| `recipeType` | string ("sql" | "wrangling") | Yes | Type of recipe workflow: 'sql' or 'wrangling' |
| `dataStoreId` | string | Yes | ID of the data store to use. Use DATAROBOT_LIST_DATA_SOURCES or API endpoint GET /api/v2/externalDataStores/ to find available data store IDs. |
| `dataSourceType` | string ("dr-database-v1" | "jdbc") | Yes | Data source type: 'dr-database-v1' or 'jdbc' |
| `experimentContainerId` | string | No | [DEPRECATED - use useCaseId instead] ID of the experimental container for the recipe |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
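
A request body for this tool can be sketched as follows. The top-level field names follow the parameter table, but the shape of each item inside `inputs` is not documented above, so the item used here is illustrative only:

```python
# Sketch of a DATAROBOT_CREATE_RECIPES_FROM_DATA_STORE request body.
# Hypothetical helper; only the top-level keys are taken from the docs.

def build_recipe_from_data_store_payload(data_store_id, inputs, *, dialect,
                                         recipe_type, data_source_type,
                                         use_case_id=None):
    if not 1 <= len(inputs) <= 1000:
        raise ValueError("inputs must contain between 1 and 1000 items")
    payload = {
        "dataStoreId": data_store_id,
        "inputs": inputs,
        "dialect": dialect,                  # e.g. "snowflake"
        "recipeType": recipe_type,           # "sql" or "wrangling"
        "dataSourceType": data_source_type,  # "dr-database-v1" or "jdbc"
    }
    # Marked optional in the schema but required in practice (see note above).
    if use_case_id is not None:
        payload["useCaseId"] = use_case_id
    return payload

payload = build_recipe_from_data_store_payload(
    "dstore_1",
    inputs=[{"dataSourceId": "src_1"}],      # illustrative input-item shape
    dialect="snowflake", recipe_type="sql",
    data_source_type="jdbc", use_case_id="uc_9",
)
```
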

### Create Recipe Preview

**Slug:** `DATAROBOT_CREATE_RECIPES_PREVIEW`

Tool to start the job that generates a preview of the data after applying a wrangling recipe. Returns immediately with a status URL; preview generation happens asynchronously.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `recipeId` | string | Yes | The ID of the recipe to generate a preview for. Use DATAROBOT_LIST_RECIPES to find available recipe IDs. |
| `credentialId` | string | No | The ID of the credentials to use for the connection. If not given, the default credentials for the connection will be used. |
| `numberOfOperationsToUse` | integer | No | The number indicating how many operations from the beginning to compute a preview for. If not specified, all operations in the recipe will be used. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
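
Because preview generation is asynchronous, callers typically submit the job and then poll the returned status URL. The sketch below shows that pattern; `submit_preview` and `poll_status` are stand-ins for the actual HTTP calls, not real endpoints:

```python
# Sketch of the submit-then-poll pattern for DATAROBOT_CREATE_RECIPES_PREVIEW.
# Both helper functions are hypothetical stand-ins for HTTP calls.
import time

def submit_preview(recipe_id):
    # A real call would POST the preview request and return the status URL.
    return f"https://app.example.com/status/{recipe_id}"

def poll_status(url):
    # A real call would GET the status URL and return its JSON body.
    return {"status": "COMPLETED"}

def wait_for_preview(recipe_id, interval=2.0, max_polls=30):
    status_url = submit_preview(recipe_id)
    for _ in range(max_polls):
        state = poll_status(status_url)
        if state["status"] in ("COMPLETED", "ERROR"):
            return state
        time.sleep(interval)   # back off between polls
    raise TimeoutError("preview did not finish in time")

result = wait_for_preview("recipe_1")
```
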

### Create Recipe SQL Query

**Slug:** `DATAROBOT_CREATE_RECIPES_SQL`

Tool to build the SQL query for a DataRobot recipe. Use when you need to generate SQL from recipe operations or preview the SQL that would be executed. You can optionally override the recipe operations or use input aliases instead of real table names.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `recipeId` | string | Yes | The ID of the recipe to generate SQL for. Use DATAROBOT_LIST_RECIPES to find available recipe IDs. |
| `operations` | array | No | List of operations to override the recipe operations when building SQL. If null or omitted, uses the original recipe operations. If empty list, produces basic SELECT query: 'SELECT <columns> FROM <table>'. Maximum 1000 operations allowed. |
| `inputsAsAliases` | boolean | No | Produce SQL that uses input aliases instead of real table names. Defaults to false. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
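
The `operations` parameter has three distinct states: omitted (use the recipe's own operations), an empty list (produce a bare SELECT), or an explicit override. A small sketch, using a hypothetical helper, makes the distinction concrete:

```python
# Sketch of how "operations" changes the DATAROBOT_CREATE_RECIPES_SQL request
# body. The helper is hypothetical; field names follow the table above.

def build_sql_request(recipe_id, operations=None, inputs_as_aliases=False):
    body = {"inputsAsAliases": inputs_as_aliases}
    if operations is not None:
        if len(operations) > 1000:
            raise ValueError("maximum 1000 operations allowed")
        # [] => basic 'SELECT <columns> FROM <table>'
        body["operations"] = operations
    return recipe_id, body

_, default_body = build_sql_request("r1")                # recipe's operations
_, bare_select = build_sql_request("r1", operations=[])  # basic SELECT
```
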

### Create Relationships Configuration

**Slug:** `DATAROBOT_CREATE_RELATIONSHIPS_CONFIGURATIONS`

Tool to create a relationships configuration in DataRobot for connecting multiple datasets for feature engineering. Use when you need to define how datasets relate to each other for time-aware or multi-table machine learning projects. The relationships configuration enables DataRobot to automatically generate features from related datasets by defining join keys and temporal relationships.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `relationships` | array | Yes | List of relationships between the datasets. Defines how datasets are joined together for feature engineering. |
| `datasetDefinitions` | array | Yes | List of dataset definitions that will be used in the relationships. Each dataset must have a unique identifier within this configuration. |
| `featureDiscoveryMode` | string ("default" | "manual") | No | Mode of feature discovery. |
| `featureDiscoverySettings` | array | No | List of feature discovery settings to customize the feature discovery process. Only applicable when featureDiscoveryMode is 'manual'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
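
A minimal configuration joins one secondary dataset to the primary table. The sketch below shows the overall body shape; the field names inside each dataset definition and relationship are illustrative assumptions, not a documented schema:

```python
# Sketch of a minimal DATAROBOT_CREATE_RELATIONSHIPS_CONFIGURATIONS body
# joining one secondary dataset to the primary one. Inner field names are
# illustrative only.

def build_relationships_config(primary_id, secondary_id, join_key):
    dataset_definitions = [
        {"identifier": "primary", "catalogId": primary_id},
        {"identifier": "transactions", "catalogId": secondary_id},
    ]
    relationships = [
        {
            "dataset2Identifier": "transactions",
            "dataset1Keys": [join_key],   # join key on the primary table
            "dataset2Keys": [join_key],   # matching key on the secondary table
        }
    ]
    return {
        "datasetDefinitions": dataset_definitions,
        "relationships": relationships,
        "featureDiscoveryMode": "default",
    }

config = build_relationships_config("cat_1", "cat_2", "customer_id")
```
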

### Create Remote Events

**Slug:** `DATAROBOT_CREATE_REMOTE_EVENTS`

Tool to post a remote deployment event to DataRobot. Use when you need to record custom events related to deployments such as health changes, model replacements, or prediction failures.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | object | No | Event payload for model replacement events. |
| `orgId` | string | No | The identifier of the organization associated with the event. |
| `title` | string | No | The title of the event. |
| `message` | string | No | Descriptive message for health events. |
| `eventType` | string ("deploymentInfo" | "externalNaNPredictions" | "management.deploymentInfo" | "model_deployments.accuracy_green" | "model_deployments.accuracy_red" | "model_deployments.accuracy_yellow_from_green" | "model_deployments.data_drift_green" | "model_deployments.data_drift_red" | "model_deployments.data_drift_yellow_from_green" | "model_deployments.model_replacement" | "model_deployments.service_health_green" | "model_deployments.service_health_red" | "model_deployments.service_health_yellow_from_green" | "moderationMetricCreationError" | "moderationMetricReportingError" | "moderationModelConfigError" | "moderationModelModerationCompleted" | "moderationModelModerationStarted" | "moderationModelPostScorePhaseCompleted" | "moderationModelPostScorePhaseStarted" | "moderationModelPreScorePhaseCompleted" | "moderationModelPreScorePhaseStarted" | "moderationModelRuntimeError" | "moderationModelScoringCompleted" | "moderationModelScoringError" | "moderationModelScoringStarted" | "monitoring.external_model_nan_predictions" | "monitoring.spooler_channel_green" | "monitoring.spooler_channel_red" | "predictionRequestFailed" | "prediction_request.failed" | "serviceHealthChangeGreen" | "serviceHealthChangeRed" | "serviceHealthChangeYellowFromGreen" | "spoolerChannelGreen" | "spoolerChannelRed") | Yes | The type of the event. Labels in all_lower_case are deprecated. |
| `timestamp` | string | Yes | The time when the event occurred in ISO 8601 format (e.g., '2026-02-13T20:00:00Z'). |
| `deploymentId` | string | No | The identifier of the deployment associated with the event. |
| `moderationData` | object | No | Moderation event information. |
| `spoolerChannelData` | object | No | Spooler channel event payload. |
| `predictionRequestData` | object | No | Prediction event payload. |
| `predictionEnvironmentId` | string | No | The identifier of the prediction environment associated with the event. |
| `externalNanPredictionsData` | object | No | External NaN Predictions event payload. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
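
A typical use is reporting a service-health change for a deployment. The sketch below builds such an event body, using a camelCase label since the all_lower_case labels are deprecated; the helper itself is hypothetical:

```python
# Sketch of a service-health event body for DATAROBOT_CREATE_REMOTE_EVENTS.
# Hypothetical helper; field names and the event label come from the table.
from datetime import datetime, timezone

def build_health_event(deployment_id, message):
    return {
        "eventType": "serviceHealthChangeRed",   # camelCase, non-deprecated
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "deploymentId": deployment_id,
        "title": "Service health degraded",
        "message": message,
    }

event = build_health_event("dep_42", "Error rate above threshold")
```
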

### Create Secure Configuration

**Slug:** `DATAROBOT_CREATE_SECURE_CONFIG`

Tool to create a secure configuration for storing credentials and sensitive data. Use when you need to securely store OAuth tokens, API keys, database credentials, or LLM provider credentials. The configuration can then be referenced in data sources, deployments, or other DataRobot resources.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Human-readable name for the secure configuration. Must be unique within your organization. |
| `values` | array | Yes | Array of key-value pairs containing the secure configuration values. Required keys depend on the schemaName selected. For example, 'OAuth 2.0' requires: clientId, clientSecret, tokenEndpointUrl, authorizationEndpointUrl. |
| `schemaName` | string ("OAuth 2.0" | "Azure OAuth 2.0" | "Google Service Account" | "AWS Credentials" | "Key Pair Credentials" | "Databricks Service Principal Account" | "Azure Service Principal" | "[GenAI] AWS Bedrock LLM Credentials" | "[GenAI] Azure OpenAI LLM Credentials" | "[GenAI] Google VertexAI LLM Credentials" | "[GenAI] Anthropic LLM Credentials" | "[GenAI] Cohere LLM Credentials" | "[GenAI] TogetherAI LLM Credentials" | "[GenAI] OpenAI LLM Credentials" | "[GenAI] Groq LLM Credentials" | "[GenAI] Cerebras LLM Credentials" | "NGC API Token" | "OAuth Client Secret") | Yes | Type of secure configuration schema to use. Determines which keys are required in the values array. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
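
For the 'OAuth 2.0' schema, the `values` array must carry the four required keys named above. The sketch assumes each array item is a key/value pair object; that item shape is an assumption, not taken from the table:

```python
# Sketch of an 'OAuth 2.0' body for DATAROBOT_CREATE_SECURE_CONFIG.
# The {"key": ..., "value": ...} item shape inside "values" is assumed.

def build_oauth_secure_config(name, client_id, client_secret,
                              token_url, auth_url):
    required = {
        "clientId": client_id,
        "clientSecret": client_secret,
        "tokenEndpointUrl": token_url,
        "authorizationEndpointUrl": auth_url,
    }
    return {
        "name": name,
        "schemaName": "OAuth 2.0",
        "values": [{"key": k, "value": v} for k, v in required.items()],
    }

config = build_oauth_secure_config(
    "crm-oauth", "my-client", "s3cret",
    "https://idp.example.com/token", "https://idp.example.com/authorize",
)
```
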

### Create String Encryption

**Slug:** `DATAROBOT_CREATE_STRING_ENCRYPTIONS`

Tool to encrypt a string which DataRobot can decrypt when needed. Use when storing sensitive data like passwords or credentials that DataRobot needs to access data stores.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `plainText` | string | Yes | String to be encrypted. DataRobot will decrypt the string when needed to access data stores. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Usage Data Exports

**Slug:** `DATAROBOT_CREATE_USAGE_DATA_EXPORTS`

Tool to create a customer usage data artifact request in DataRobot. Use when you need to export usage tracking data for audit, billing, or compliance purposes. Requires "CAN_ACCESS_USER_ACTIVITY" permission. The artifact generation is asynchronous. Poll the returned location URL to check status and retrieve the artifact ID once generation completes. The artifact will be in .zip format.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | The upper bound of stored events timestamp to include within the artifact (ISO 8601 date-time format, e.g., '2025-12-31T23:59:59Z'). |
| `start` | string | No | The lower bound of stored events timestamp to include within the artifact (ISO 8601 date-time format, e.g., '2025-01-01T00:00:00Z'). |
| `userId` | string | No | Only actions performed by this user will be retrieved. Use Get Account Info or List Organization Users to obtain user IDs. |
| `include` | array | No | Additional fields to be included in the export. |
| `noCache` | boolean | No | Switches off caching when set to true. When false or not specified, uses cached data if available. |
| `projectId` | string | No | Only actions that are connected with the project will be retrieved. Use List Projects to obtain project IDs. |
| `includeReport` | array | No | The list of reports that should be generated. Will default to None if not specified. Options: ADMIN_USAGE (admin actions), APP_USAGE (application usage), PREDICTION_USAGE (prediction activity), SYSTEM_INFO (system information). |
| `includeIdentifyingFields` | boolean | No | Indicates if identifying information like user names, project names, etc. should be included. Defaults to True. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
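
A common export request scopes the artifact to a time window and a subset of reports. The sketch below builds such a request for one calendar year; the helper is hypothetical, while the field names and report labels follow the table above:

```python
# Sketch of a DATAROBOT_CREATE_USAGE_DATA_EXPORTS request covering one
# calendar year of admin and prediction activity. Hypothetical helper.

def build_usage_export_request(year, reports, include_identifying=True):
    return {
        "start": f"{year}-01-01T00:00:00Z",   # ISO 8601 lower bound
        "end": f"{year}-12-31T23:59:59Z",     # ISO 8601 upper bound
        "includeReport": reports,             # e.g. ["ADMIN_USAGE", ...]
        "includeIdentifyingFields": include_identifying,
    }

request = build_usage_export_request(2025, ["ADMIN_USAGE", "PREDICTION_USAGE"])
```
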

### Create Use Case

**Slug:** `DATAROBOT_CREATE_USE_CASE`

Tool to create a new DataRobot use case. Use when you need to create a container for organizing related projects, deployments, models, and other resources around a business problem or initiative.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Name of the use case (max 100 characters). If omitted, DataRobot will generate a default name. |
| `description` | string | No | Description of the use case providing context about its purpose and goals. |
| `advancedTour` | string ("flightDelays" | "hospital") | No | Advanced tour options for use cases. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Link Multiple Entities to Use Case

**Slug:** `DATAROBOT_CREATE_USE_CASES_MULTILINK`

Tool to link multiple entities to a DataRobot use case in bulk. Use when you need to associate projects, notebooks, deployments, datasets, or other resources with a use case for organizational purposes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `workflow` | string ("migration" | "creation" | "move" | "unspecified") | No | The workflow that is attaching this entity. Used for analytics only, does not affect the operation. Options: migration, creation, move, or unspecified (default). |
| `useCaseId` | string | Yes | The ID of the use case to link entities to. Use LIST_USE_CASES or GET_USE_CASES to find available use case IDs. |
| `entitiesList` | array | Yes | List of entities to link to this use case. Minimum 1, maximum 100 entities per request. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
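
Since each request accepts at most 100 entities, larger entity lists must be split across multiple calls. The sketch below chunks a list into request bodies; the helper and the shape of each entity item are illustrative assumptions:

```python
# Sketch of batching for DATAROBOT_CREATE_USE_CASES_MULTILINK, which allows
# at most 100 entities per request. Entity-item shape is illustrative.

def chunk_entities(entities, size=100):
    return [entities[i:i + size] for i in range(0, len(entities), size)]

def build_multilink_requests(use_case_id, entities):
    return [
        {"useCaseId": use_case_id, "entitiesList": chunk}
        for chunk in chunk_entities(entities)
    ]

entities = [{"entityType": "dataset", "entityId": f"ds_{i}"} for i in range(250)]
bodies = build_multilink_requests("uc_1", entities)   # 3 requests: 100+100+50
```
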

### Create User Blueprint From Blueprint

**Slug:** `DATAROBOT_CREATE_USER_BLUEPRINT_FROM_BLUEPRINT_ID`

Clone a blueprint from a project to create a user blueprint. Use when you need to save a project blueprint to your catalog for reuse in other projects. The created user blueprint can be customized and applied to different datasets.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelType` | string | No | The title/name to give to the blueprint (max 1000 characters). |
| `projectId` | string | Yes | The ID of the project where the user blueprint will be created. This is the active project context. |
| `blueprintId` | string | Yes | The ID of the blueprint to clone. Obtain from project blueprints list. |
| `description` | string | No | Optional description for the user blueprint. |
| `saveToCatalog` | boolean | No | Whether to save the blueprint to the catalog. Default is True. |
| `isInplaceEditor` | boolean | No | Whether the request is sent from the in-place user blueprint editor. Default is False. |
| `getDynamicLabels` | boolean | No | Whether to add dynamic labels to a decompressed blueprint. Only valid when decompressedBlueprint is True. |
| `decompressedBlueprint` | boolean | No | Whether to retrieve the blueprint in decompressed format. Default is False. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Validate User Blueprints in Bulk

**Slug:** `DATAROBOT_CREATE_USER_BLUEPRINTS_BULK_VALIDATIONS`

Tool to validate multiple user blueprints in bulk and check their configuration correctness. Use when you need to verify that custom blueprints are properly configured before running them in a project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | No | String representation of ObjectId for the currently active project. The user blueprint is validated against this project context. Required when blueprints contain project-specific tasks like column selection. |
| `userBlueprintIds` | array | Yes | List of user blueprint IDs to validate (at least 1 required). Each ID should be a valid ObjectId string (24-character hex or UUID format). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Add User Blueprints to Project

**Slug:** `DATAROBOT_CREATE_USER_BLUEPRINTS_PROJECT_BLUEPRINTS`

Add user blueprints to a DataRobot project's repository. Use when you want to make custom blueprints available for modeling in a specific project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The ID of the project to add the user blueprints to. Use DATAROBOT_LIST_PROJECTS to find available projects. |
| `deleteAfter` | boolean | No | Whether to delete the user blueprint(s) after adding them to the project menu. Default is False. |
| `describeFailures` | boolean | No | Whether to include extra fields describing why any blueprints were not added to the project. When True, the notAddedToMenu field will contain detailed error information. Default is False. |
| `userBlueprintIds` | array | Yes | List of user blueprint IDs to add to the project's repository. Use DATAROBOT_LIST_USER_BLUEPRINTS to find available user blueprint IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Validate User Blueprint Task Parameters

**Slug:** `DATAROBOT_CREATE_USER_BLUEPRINTS_TASK_PARAMETERS`

Tool to validate task parameters for custom tasks in DataRobot User Blueprints. Use when building custom blueprints to verify that parameter values are acceptable before saving. Returns validation errors for any invalid parameters, or an empty errors list if all are valid.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `taskCode` | string | Yes | The task code identifying the custom task to validate parameters for. Examples: PNI2, RDT5, BINNING, KERASC, ENETCD, PDM3. |
| `projectId` | string | No | The project ID where this user blueprint is being edited. Optional context for validation. |
| `outputMethod` | string ("P" | "Pm" | "S" | "Sm" | "T" | "TS") | Yes | The method representing how the task will output data. Valid values: P, Pm, S, Sm, T, TS. |
| `taskParameters` | array | Yes | A list of task parameters with their proposed values to be validated. Can be an empty array to validate the task itself without specific parameters. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create User Group

**Slug:** `DATAROBOT_CREATE_USER_GROUP`

Tool to create a new user group. Use when you need to add a group to an existing DataRobot organization and have a confirmed orgId.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Unique name for the new user group (max length 100). |
| `email` | string | No | Optional contact email for this user group. |
| `orgId` | string | Yes | Identifier of the organization to which the group will belong. |
| `description` | string | No | Optional human-readable description for the group (max length 1000). |
| `accessRoleId` | string | No | Identifier of the access role to assign to the group. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Value Tracker

**Slug:** `DATAROBOT_CREATE_VALUE_TRACKERS`

Tool to create a new DataRobot value tracker for tracking ML project value and lifecycle stages. Use when you need to register a new ML initiative and track its business impact, feasibility, and stage progression. Value trackers help measure ROI and manage the lifecycle from ideation through production to retirement.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name of the value tracker (max 512 characters) |
| `notes` | string | No | User notes about the value tracker |
| `stage` | string ("ideation" | "queued" | "dataPrepAndModeling" | "validatingAndDeploying" | "inProduction" | "retired" | "onHold") | Yes | Current stage of the value tracker lifecycle (required). Options: ideation, queued, dataPrepAndModeling, validatingAndDeploying, inProduction, retired, onHold |
| `description` | string | No | Description of the value tracker (max 1024 characters) |
| `feasibility` | integer | No | Assessment of how the value tracker can be accomplished across multiple dimensions, rated from 1 (low) to 5 (high) |
| `targetDates` | array | No | Array of target dates for different stages |
| `businessImpact` | integer | No | Expected effects on overall business operations, rated from 1 (low) to 5 (high) |
| `potentialValue` | object | No | Monetary value with currency and optional details. |
| `predictionTargets` | array | No | List of prediction target names associated with this value tracker |
| `potentialValueTemplate` | object | No | Template type and parameter information for potential value calculation. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
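
The required `stage` field and the 1-5 rating fields can be validated locally before sending. A hypothetical sketch under the constraints stated in the table:

```python
# Sketch of a DATAROBOT_CREATE_VALUE_TRACKERS body with local validation of
# the stage enum and the 1-5 businessImpact rating. Hypothetical helper.

STAGES = {"ideation", "queued", "dataPrepAndModeling", "validatingAndDeploying",
          "inProduction", "retired", "onHold"}

def build_value_tracker(name, stage, business_impact=None):
    if stage not in STAGES:
        raise ValueError(f"invalid stage: {stage}")
    body = {"name": name[:512], "stage": stage}   # name is capped at 512 chars
    if business_impact is not None:
        if not 1 <= business_impact <= 5:
            raise ValueError("businessImpact must be between 1 and 5")
        body["businessImpact"] = business_impact
    return body

tracker = build_value_tracker("Churn reduction", "ideation", business_impact=4)
```
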

### Create Value Tracker Attachments

**Slug:** `DATAROBOT_CREATE_VALUE_TRACKERS_ATTACHMENTS`

Tool to attach resources to a DataRobot value tracker. Use when you need to link datasets, projects, deployments, or other objects to a value tracker for monitoring. Returns the list of successfully attached objects. Requires that the value tracker is writable and all specified objects are readable.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `valueTrackerId` | string | Yes | The unique identifier of the value tracker to attach objects to (24-character hex string). |
| `attachedObjects` | array | Yes | An array of attachment objects specifying the resources to attach to the value tracker. At least one object is required. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Access Role

**Slug:** `DATAROBOT_DELETE_ACCESS_ROLE`

Tool to delete a custom Access Role. Use when you need to remove a custom role by its verified ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roleId` | string | Yes | The ID of the custom Access Role to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Batch Job

**Slug:** `DATAROBOT_DELETE_BATCH_JOB`

Tool to cancel a DataRobot batch job. Use when you need to abort a queued or running batch prediction job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `batch_job_id` | string | Yes | ID of the batch job to cancel. Obtain from DATAROBOT_LIST_BATCH_JOBS. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Batch Monitoring Job Definition

**Slug:** `DATAROBOT_DELETE_BATCH_MONITORING_JOB_DEFINITION`

Tool to delete a Batch Prediction job definition. Use when you need to permanently remove a batch monitoring job definition by its ID. The definition cannot be deleted while jobs from it are running in the queue.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `jobDefinitionId` | string | Yes | ID of the Batch Prediction job definition to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Batch Prediction

**Slug:** `DATAROBOT_DELETE_BATCH_PREDICTION`

Tool to cancel a DataRobot Batch Prediction job. Use when you need to abort a running or queued batch prediction job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `prediction_job_id` | string | Yes | ID of the Batch Prediction job to cancel. Obtain from DATAROBOT_LIST_BATCH_JOBS. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Batch Prediction Job Definition

**Slug:** `DATAROBOT_DELETE_BATCH_PREDICTION_JOB_DEFINITION`

Tool to delete a Batch Prediction job definition. Use when you need to permanently remove a job definition by its ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `jobDefinitionId` | string | Yes | ID of the Batch Prediction job definition to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Calendar

**Slug:** `DATAROBOT_DELETE_CALENDARS`

Tool to delete a DataRobot calendar. Use when you need to permanently remove a calendar by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `calendarId` | string | Yes | The unique identifier of the calendar to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Comment

**Slug:** `DATAROBOT_DELETE_COMMENT`

Tool to delete a DataRobot comment. Use when you need to permanently remove a comment by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `commentId` | string | Yes | The unique identifier of the comment to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Compliance Doc Template

**Slug:** `DATAROBOT_DELETE_COMPLIANCE_DOC_TEMPLATE`

Tool to delete a compliance documentation template. Use when you need to permanently remove a compliance doc template by its ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `templateId` | string | Yes | The ID of the model compliance document template to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Credentials

**Slug:** `DATAROBOT_DELETE_CREDENTIALS`

Tool to delete a credentials set. Use when you need to permanently remove credentials by their unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `credentialId` | string | Yes | Credentials entity ID to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Custom Application

**Slug:** `DATAROBOT_DELETE_CUSTOM_APPLICATION`

Tool to delete a DataRobot custom application. Use when you need to permanently remove an application by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `hardDelete` | string ("false" | "False" | "true" | "True") | No | Marks that this application should be hard deleted instead of soft deleted. Defaults to false (soft delete). |
| `applicationId` | string | Yes | The ID of the custom application to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
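
A sketch of how the `hardDelete` flag could be passed as a query parameter. The `customApplications` path is an assumption; the parameter name and its soft-delete default come from the table above:

```python
from urllib.parse import urlencode

def custom_application_delete_url(base: str, application_id: str,
                                  hard_delete: bool = False) -> str:
    """Compose the DELETE URL; omitting hardDelete keeps the default soft delete."""
    url = f"{base.rstrip('/')}/customApplications/{application_id}/"
    if hard_delete:
        url += "?" + urlencode({"hardDelete": "true"})
    return url

soft = custom_application_delete_url("https://app.datarobot.com/api/v2", "app123")
hard = custom_application_delete_url("https://app.datarobot.com/api/v2", "app123",
                                     hard_delete=True)
```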

### Delete Custom Application Source

**Slug:** `DATAROBOT_DELETE_CUSTOM_APPLICATION_SOURCES`

Tool to delete a custom application source with all its versions. Use when you need to permanently remove a custom application source by its ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `hardDelete` | string ("false" | "False" | "true" | "True") | No | Marks that this application source should be hard deleted instead of soft deleted. Defaults to 'false' for soft delete. |
| `appSourceId` | string | Yes | The ID of the application source to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Custom Application Source Version

**Slug:** `DATAROBOT_DELETE_CUSTOM_APPLICATION_SOURCES_VERSIONS`

Tool to delete a custom application source version if it is still mutable. Use when you need to remove a specific version of a custom application source by its ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `appSourceId` | string | Yes | The ID of the application source. |
| `appSourceVersionId` | string | Yes | The ID of the application source version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Custom Job

**Slug:** `DATAROBOT_DELETE_CUSTOM_JOB`

Tool to delete a DataRobot custom job. Use when you need to permanently remove a custom job by its ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `custom_job_id` | string | Yes | ID of the custom job to delete. Obtain from DATAROBOT_LIST_CUSTOM_JOBS. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Custom Model

**Slug:** `DATAROBOT_DELETE_CUSTOM_MODEL`

Tool to delete a custom model in DataRobot. Use when you need to permanently remove a custom model by its unique ID. Cannot delete models that are currently deployed.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `customModelId` | string | Yes | The unique identifier of the custom model to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Dataset

**Slug:** `DATAROBOT_DELETE_DATASET`

Tool to delete a dataset from DataRobot. Use when you need to permanently remove a dataset by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The unique identifier of the dataset to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Dataset Definition

**Slug:** `DATAROBOT_DELETE_DATASET_DEFINITIONS`

Tool to soft delete a dataset definition by ID. Use when you need to remove a dataset definition without permanently deleting the underlying data.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetDefinitionId` | string | Yes | The ID of the dataset definition to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Chunk Definition

**Slug:** `DATAROBOT_DELETE_DATASET_DEFINITIONS_CHUNK_DEFINITIONS`

Tool to soft delete a chunk definition by ID. Use when you need to remove a chunk definition from a dataset definition.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `chunkDefinitionId` | string | Yes | The ID of the chunk definition to delete. |
| `datasetDefinitionId` | string | Yes | The ID of the dataset definition. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Dataset Featurelist

**Slug:** `DATAROBOT_DELETE_DATASET_FEATURELIST`

Tool to delete a dataset featurelist. Use when you need to permanently remove a featurelist from a dataset by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The unique identifier of the dataset. |
| `featurelistId` | string | Yes | The unique identifier of the featurelist to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Dataset Refresh Job

**Slug:** `DATAROBOT_DELETE_DATASETS_REFRESH_JOBS`

Tool to delete a DataRobot dataset refresh job. Use when you need to remove a scheduled dataset refresh job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `jobId` | string | Yes | ID of the user-scheduled dataset refresh job. |
| `datasetId` | string | Yes | The dataset associated with the scheduled refresh job. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Dataset Relationship

**Slug:** `DATAROBOT_DELETE_DATASETS_RELATIONSHIPS`

Tool to delete a dataset relationship. Use when you need to permanently remove a relationship between datasets by their unique IDs.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The unique identifier of the dataset. |
| `datasetRelationshipId` | string | Yes | The unique identifier of the dataset relationship to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Dataset Version

**Slug:** `DATAROBOT_DELETE_DATASETS_VERSIONS`

Tool to delete a specific version of a dataset from DataRobot's catalog. Use when you need to permanently remove a dataset version that is no longer needed.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataset_id` | string | Yes | The ID of the dataset entry. |
| `dataset_version_id` | string | Yes | The ID of the dataset version to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
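
Because this tool addresses a version nested under its parent dataset, the resource path takes both IDs. A minimal sketch, assuming a conventional nested-path layout:

```python
def dataset_version_delete_path(dataset_id: str, dataset_version_id: str) -> str:
    """Relative path for deleting one version while leaving the dataset entry intact."""
    if not dataset_id or not dataset_version_id:
        raise ValueError("both dataset_id and dataset_version_id are required")
    return f"/datasets/{dataset_id}/versions/{dataset_version_id}/"

path = dataset_version_delete_path("ds001", "v042")  # hypothetical IDs
```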

### Delete Data Slices

**Slug:** `DATAROBOT_DELETE_DATA_SLICES`

Tool to delete multiple data slices in bulk. Use when you need to remove one or more data slices by their IDs. Supports deletion of up to 20 data slices in a single request.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `ids` | array | Yes | List of data slice IDs to delete. Must provide between 1 and 20 data slice IDs (24-character hex strings). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
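
The 1-20 item limit and 24-character hex format above can be validated client-side before the request is sent. A minimal sketch; the JSON body shape (`{"ids": [...]}`) mirrors the input parameter name but is otherwise an assumption:

```python
import json
import re

HEX24 = re.compile(r"^[0-9a-f]{24}$")

def data_slices_delete_payload(ids: list[str]) -> str:
    """Enforce the documented constraints (1-20 IDs, 24-char hex) and build the body."""
    if not 1 <= len(ids) <= 20:
        raise ValueError("must provide between 1 and 20 data slice IDs")
    bad = [i for i in ids if not HEX24.match(i)]
    if bad:
        raise ValueError(f"not 24-character hex strings: {bad}")
    return json.dumps({"ids": ids})

payload = data_slices_delete_payload(["650123456789abcdef012345"])
```

Failing fast on malformed IDs avoids a partial bulk delete against the live API.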

### Delete Data Slice

**Slug:** `DATAROBOT_DELETE_DATA_SLICES_BY_ID`

Tool to delete a data slice. Use when you need to permanently remove a data slice by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataSliceId` | string | Yes | The unique identifier of the data slice to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Deployment

**Slug:** `DATAROBOT_DELETE_DEPLOYMENT`

Tool to delete a DataRobot deployment. Use when you need to permanently remove a deployment by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the DataRobot deployment to delete. |
| `ignoreManagementAgent` | string ("true" | "True" | "false" | "False") | No | If true/True, do not wait for the management agent to delete the deployment first. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Actuals Data Export

**Slug:** `DATAROBOT_DELETE_DEPLOYMENTS_ACTUALS_DATA_EXPORT`

Tool to delete an actuals data export job from a deployment. Use when you need to remove an export by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `exportId` | string | Yes | ID of the actuals data export job. |
| `deploymentId` | string | Yes | ID of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Deployment Custom Metric

**Slug:** `DATAROBOT_DELETE_DEPLOYMENTS_CUSTOM_METRIC`

Tool to delete a custom metric from a deployment. Use when you need to permanently remove a custom metric by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `customMetricId` | string | Yes | Unique identifier of the custom metric to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Deployment Monitoring Batch

**Slug:** `DATAROBOT_DELETE_DEPLOYMENTS_MONITORING_BATCHES`

Tool to delete a monitoring batch from a deployment. Use when you need to permanently remove a monitoring batch by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `monitoringBatchId` | string | Yes | ID of the monitoring batch. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Deployment Retraining Policy

**Slug:** `DATAROBOT_DELETE_DEPLOYMENTS_RETRAINING_POLICIES`

Tool to delete a retraining policy from a deployment. Use when you need to permanently remove a retraining policy by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `retrainingPolicyId` | string | Yes | ID of the retraining policy. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Entity Notification Channel

**Slug:** `DATAROBOT_DELETE_ENTITY_NOTIFICATION_CHANNEL`

Tool to delete an entity notification channel. Use when you need to remove a notification channel from a deployment or custom job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `channel_id` | string | Yes | The ID of the entity notification channel to delete. |
| `related_entity_id` | string | Yes | The ID of the related entity (deployment ID or custom job ID). |
| `related_entity_type` | string ("deployment" | "customjob") | Yes | Type of related entity (deployment or customjob). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Entity Notification Policy

**Slug:** `DATAROBOT_DELETE_ENTITY_NOTIFICATION_POLICY`

Tool to delete an entity notification policy. Use when you need to remove a notification policy for a deployment or custom job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `policyId` | string | Yes | The unique identifier of the notification policy to delete. |
| `relatedEntityId` | string | Yes | The unique identifier of the related entity (e.g., deployment ID or custom job ID). |
| `relatedEntityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "customjob" | "Customjob" | "CUSTOMJOB") | Yes | Type of related entity (deployment or customjob, case-insensitive variants supported). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Entity Tag

**Slug:** `DATAROBOT_DELETE_ENTITY_TAGS`

Tool to delete an entity tag. Use when you need to remove an entity tag by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityTagId` | string | Yes | The ID of the entity tag to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete External Data Source

**Slug:** `DATAROBOT_DELETE_EXTERNAL_DATA_SOURCE`

Tool to delete an external data source. Use when you need to permanently remove a data source by its unique ID. Note: deletion will fail if the data source is in use by one or more datasets.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataSourceId` | string | Yes | The ID of the external data source to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete External Data Store

**Slug:** `DATAROBOT_DELETE_EXTERNAL_DATA_STORE`

Tool to delete an external data store. Use when you need to permanently remove a data store by its unique ID. Note: deletion will fail if the data store is in use by one or more data sources.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataStoreId` | string | Yes | ID of the data store. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete External OAuth Provider

**Slug:** `DATAROBOT_DELETE_EXTERNAL_O_AUTH_PROVIDERS`

Tool to delete an external OAuth provider from DataRobot. Use when removing OAuth integrations for external services. This is an asynchronous operation that returns a job location URL for tracking deletion progress.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `providerId` | string | Yes | The unique identifier of the OAuth provider to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Files

**Slug:** `DATAROBOT_DELETE_FILES`

Tool to delete a file from DataRobot. Use when you need to permanently remove a file by its catalog ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `catalogId` | string | Yes | The catalog item ID of the file to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Files from Catalog

**Slug:** `DATAROBOT_DELETE_FILES_ALL_FILES`

Tool to delete files or folders from a DataRobot catalog item. Use when you need to remove specific files or folders by their paths. Folder paths must end with a slash '/'. A maximum of 1000 paths can be deleted in a single request.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `paths` | array | Yes | File and folder paths to delete. Folder paths should end with slash '/'. Minimum 1 path, maximum 1000 paths. |
| `catalogId` | string | Yes | The catalog item ID from which to delete files or folders. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
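
The trailing-slash convention for folders and the 1000-path cap can likewise be enforced before calling the tool. A sketch, assuming the request body simply wraps the `paths` array:

```python
import json

def files_delete_payload(paths: list[str]) -> str:
    """Build the body; callers mark folders with a trailing '/' as the schema requires."""
    if not 1 <= len(paths) <= 1000:
        raise ValueError("must provide between 1 and 1000 paths")
    return json.dumps({"paths": paths})

def as_folder(path: str) -> str:
    """Normalize a folder path so it carries the required trailing slash."""
    return path if path.endswith("/") else path + "/"

payload = files_delete_payload(["notes.txt", as_folder("archive")])
```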

### Delete GenAI Cost Metric Configuration

**Slug:** `DATAROBOT_DELETE_GENAI_COST_METRIC_CONFIGURATION`

Tool to delete a GenAI cost metric configuration. Use when you need to remove a cost metric configuration by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `costMetricConfigurationId` | string | Yes | The identifier of the cost metric configuration to remove. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete GenAI Custom Model LLM Validation

**Slug:** `DATAROBOT_DELETE_GENAI_CUSTOM_MODEL_LLM_VALIDATION`

Tool to delete a custom model LLM validation from DataRobot. Use when you need to remove a specific LLM validation by its ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `validation_id` | string | Yes | The identifier of the custom model LLM validation to remove. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete GenAI Custom Model Vector Database Validations

**Slug:** `DATAROBOT_DELETE_GENAI_CUSTOM_MODEL_VECTOR_DB_VALIDATIONS`

Tool to delete a custom model vector database validation in DataRobot. Use when you need to remove a validation test for a custom model's vector database compatibility.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The unique identifier of the custom model vector database validation to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete GenAI LLM Blueprints

**Slug:** `DATAROBOT_DELETE_GENAI_LLM_BLUEPRINTS`

Tool to delete a GenAI LLM blueprint in DataRobot. Use when you need to remove an LLM blueprint configuration by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The unique identifier of the LLM blueprint to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete GenAI LLM Test Configurations

**Slug:** `DATAROBOT_DELETE_GENAI_LLM_TEST_CONFIGURATIONS`

Tool to delete an LLM test configuration in DataRobot. Use when you need to remove a test configuration for LLM robustness testing.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The unique identifier of the LLM test configuration to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete GenAI OOTB Metric Configuration

**Slug:** `DATAROBOT_DELETE_GENAI_OOTB_METRIC_CONFIGURATION`

Tool to delete a GenAI out-of-the-box (OOTB) metric configuration. Use when you need to remove an OOTB metric configuration by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The identifier of the OOTB metric configuration to remove. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete GenAI Playground

**Slug:** `DATAROBOT_DELETE_GENAI_PLAYGROUND`

Tool to delete a GenAI playground. Use when you need to permanently remove a playground by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The unique identifier of the GenAI playground to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Delete Multiple Groups

**Slug:** `DATAROBOT_DELETE_GROUPS`

Tool to delete multiple user groups by their IDs in a single request. Use when you need to remove several groups at once (limit 100 per request).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `groups` | array | Yes | List of user groups to delete; must contain between 1 and 100 items. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Remove Users from Group

**Slug:** `DATAROBOT_DELETE_GROUPS_USERS`

Tool to remove one or more users from a DataRobot user group by groupId. Use when you need to revoke group membership for existing users. Limit 100 users per request.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `users` | array | Yes | List of users to remove; must contain between 1 and 100 items. |
| `groupId` | string | Yes | The identifier of the user group from which users will be removed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
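
The `users` item schema is not spelled out above, so the sketch below assumes each entry is an object with a `username` field; the helper name and payload shape are illustrative, not part of the API. It shows enforcing the 1–100 item limit before invoking the tool:

```python
# Hypothetical input builder for DATAROBOT_DELETE_GROUPS_USERS.
# Assumption: each users[] item is {"username": ...}; adjust to the real schema.
def build_remove_users_payload(group_id, usernames):
    """Validate the 1-100 user limit and assemble the tool input."""
    if not 1 <= len(usernames) <= 100:
        raise ValueError("users must contain between 1 and 100 items")
    return {
        "groupId": group_id,
        "users": [{"username": name} for name in usernames],
    }
```

For more than 100 removals, split the username list and call the tool once per batch.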

### Delete Modeling Featurelist

**Slug:** `DATAROBOT_DELETE_MODELING_FEATURELIST`

Tool to delete a specified modeling featurelist from a DataRobot project. Use when you need to remove a modeling featurelist by its ID. Supports dry-run mode to preview deletion impact and automatic deletion of dependencies.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dryRun` | string ("false" | "False" | "true" | "True") | No | If "true"/"True", preview the deletion impact without deleting anything. |
| `projectId` | string | Yes | The project ID. |
| `featurelistId` | string | Yes | The featurelist ID. |
| `deleteDependencies` | string ("false" | "False" | "true" | "True") | No | If "true"/"True", automatically delete dependent entities along with the featurelist. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Notebook Cell

**Slug:** `DATAROBOT_DELETE_NOTEBOOK_CELL`

Tool to delete a specific cell from a DataRobot notebook. Use when you need to permanently remove a cell from a notebook by its unique ID. Use after confirming the cell is no longer needed.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `cellId` | string | Yes | The unique identifier of the cell to delete from the notebook. Must be a valid MongoDB ObjectId (24-character hexadecimal string). |
| `notebookId` | string | Yes | The unique identifier of the notebook containing the cell to delete. Must be a valid MongoDB ObjectId (24-character hexadecimal string). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
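
Both IDs must be 24-character hexadecimal MongoDB ObjectIds, so a cheap pre-flight check avoids a round trip on malformed input. The helper below is an illustrative sketch, not part of the API:

```python
import re

# Illustrative pre-flight validation for DATAROBOT_DELETE_NOTEBOOK_CELL:
# both notebookId and cellId must be 24-character hex ObjectIds.
OBJECT_ID_RE = re.compile(r"^[0-9a-fA-F]{24}$")

def validate_delete_cell_input(notebook_id, cell_id):
    for name, value in (("notebookId", notebook_id), ("cellId", cell_id)):
        if not OBJECT_ID_RE.match(value):
            raise ValueError(f"{name} must be a 24-character hex ObjectId")
    return {"notebookId": notebook_id, "cellId": cell_id}
```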

### Delete Notebook Environment Variable

**Slug:** `DATAROBOT_DELETE_NOTEBOOK_ENVIRONMENT_VARIABLES_BY_ID`

Tool to delete a notebook environment variable by ID. Use when you need to remove an environment variable from a specific notebook.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `envVarId` | string | Yes | The ID of the environment variable to delete. |
| `notebookId` | string | Yes | The ID of the notebook containing the environment variable. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Notebook Execution Environment Port

**Slug:** `DATAROBOT_DELETE_NOTEBOOK_EXECUTION_ENVIRONMENTS_PORTS`

Tool to delete an exposed port from a notebook execution environment in DataRobot. Use when you need to remove a specific port that was previously exposed for a notebook.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `portId` | string | Yes | The unique identifier of the exposed port to delete. |
| `notebookId` | string | Yes | The unique identifier of the notebook execution environment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Notebook Job

**Slug:** `DATAROBOT_DELETE_NOTEBOOK_JOBS`

Tool to delete a notebook job in DataRobot. Use when you need to permanently remove a scheduled or manual notebook job by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID of the notebook job to delete. Must be a valid 24-character hex ObjectId. Obtain from DATAROBOT_LIST_NOTEBOOK_JOBS or DATAROBOT_CREATE_NOTEBOOK_JOBS_MANUAL_RUN. |
| `useCaseId` | string | Yes | The ID of the use case this notebook job is associated with. Must be a valid 24-character hex ObjectId. Required for authorization. Obtain from DATAROBOT_LIST_USE_CASES. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
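
Since both `id` and `useCaseId` are usually obtained from list calls, a common pattern is to pick the job out of a listing result and assemble the delete input in one step. The response shape (`id`/`name` fields) below is an assumption for illustration:

```python
# Sketch: selecting a job from a hypothetical DATAROBOT_LIST_NOTEBOOK_JOBS
# response and building the DATAROBOT_DELETE_NOTEBOOK_JOBS input.
def build_delete_job_input(jobs, use_case_id, job_name):
    match = next((j for j in jobs if j.get("name") == job_name), None)
    if match is None:
        raise LookupError(f"no notebook job named {job_name!r}")
    return {"id": match["id"], "useCaseId": use_case_id}
```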

### Delete Notebook Revision

**Slug:** `DATAROBOT_DELETE_NOTEBOOK_REVISION`

Tool to delete a specific notebook revision by its ID. Use when you need to permanently remove a notebook revision.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `notebookId` | string | Yes | The unique identifier of the notebook containing the revision to delete. |
| `revisionId` | string | Yes | The unique identifier of the specific revision to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Notebook

**Slug:** `DATAROBOT_DELETE_NOTEBOOKS`

Tool to delete a notebook in DataRobot. Use when you need to permanently remove a notebook by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID of the notebook to delete. Must be a valid 24-character hex ObjectId (MongoDB ObjectId format). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Notification Channel Template

**Slug:** `DATAROBOT_DELETE_NOTIFICATION_CHANNEL_TEMPLATE`

Tool to delete a notification channel template. Use when you need to permanently remove a notification channel template by its ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `channelId` | string | Yes | The id of the notification channel template to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete OpenTelemetry Metric Configuration

**Slug:** `DATAROBOT_DELETE_OTEL_METRICS_CONFIG`

Tool to delete an OpenTelemetry metric configuration for a specified entity. Use when you need to remove a metric configuration by its ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityId` | string | Yes | ID of the entity to which the metric belongs. |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs. Must be one of: deployment, use_case, experiment_container, custom_application, workload, or workload_deployment. |
| `otelMetricId` | string | Yes | The ID of the OpenTelemetry metric configuration to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
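
Because `entityType` is a closed enum, rejecting unknown values locally gives a clearer error than a failed API call. A minimal sketch (helper name is illustrative):

```python
# Illustrative guard for DATAROBOT_DELETE_OTEL_METRICS_CONFIG: entityType must
# be one of the six allowed values listed above.
VALID_ENTITY_TYPES = {
    "deployment", "use_case", "experiment_container",
    "custom_application", "workload", "workload_deployment",
}

def build_delete_otel_metric_input(entity_id, entity_type, otel_metric_id):
    if entity_type not in VALID_ENTITY_TYPES:
        raise ValueError(f"entityType must be one of {sorted(VALID_ENTITY_TYPES)}")
    return {
        "entityId": entity_id,
        "entityType": entity_type,
        "otelMetricId": otel_metric_id,
    }
```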

### Delete Payoff Matrix

**Slug:** `DATAROBOT_DELETE_PAYOFF_MATRIX`

Tool to delete a payoff matrix from a DataRobot project. Use when you need to permanently remove a payoff matrix by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The unique identifier of the DataRobot project containing the payoff matrix. |
| `payoffMatrixId` | string | Yes | ObjectId of the payoff matrix to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Prediction Explanations Initialization

**Slug:** `DATAROBOT_DELETE_PREDICTION_EXPLANATIONS_INITIALIZATION`

Tool to delete an existing prediction explanations initialization for a model. Use when you need to remove prediction explanation configuration from a specific model in a project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The unique identifier of the model. |
| `projectId` | string | Yes | The unique identifier of the DataRobot project. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Project

**Slug:** `DATAROBOT_DELETE_PROJECT`

Tool to delete a DataRobot project. Use when you need to permanently remove a project by its unique ID. Use after confirming the project is no longer needed.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The unique identifier of the DataRobot project to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Project Model

**Slug:** `DATAROBOT_DELETE_PROJECT_MODEL`

Tool to delete a model from a DataRobot project's leaderboard. Use when you need to permanently remove a model by its ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The ID of the model to delete from the project leaderboard. Obtain from DATAROBOT_LIST_MODELS. |
| `project_id` | string | Yes | The ID of the DataRobot project containing the model to delete. Obtain from DATAROBOT_LIST_PROJECTS. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Project Model Job

**Slug:** `DATAROBOT_DELETE_PROJECT_MODEL_JOB`

Tool to cancel a modeling job for a DataRobot project. Use when you need to stop a queued or running model job by its ID. The job must not have already completed.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `job_id` | string | Yes | The ID of the model job to cancel. This is the modeling job identifier that you want to cancel. Obtain from listing model jobs for the project. |
| `project_id` | string | Yes | The ID of the DataRobot project containing the model job to cancel. Obtain from DATAROBOT_LIST_PROJECTS. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
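
A cancellation can fail simply because the job already finished, so callers typically inspect the standard `data`/`error`/`successful` envelope rather than treating every failure as fatal. The sketch below assumes a generic `execute_tool` callable and matches on the error text, both of which are illustrative:

```python
# Sketch of handling the output envelope when cancelling via
# DATAROBOT_DELETE_PROJECT_MODEL_JOB. `execute_tool` is a hypothetical
# stand-in for whatever client actually invokes the action.
def cancel_model_job(execute_tool, project_id, job_id):
    result = execute_tool(
        "DATAROBOT_DELETE_PROJECT_MODEL_JOB",
        {"project_id": project_id, "job_id": job_id},
    )
    if result["successful"]:
        return True
    # A completed job cannot be cancelled; treat that as a no-op.
    if "complete" in (result.get("error") or "").lower():
        return False
    raise RuntimeError(result.get("error") or "cancellation failed")
```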

### Delete Feature List

**Slug:** `DATAROBOT_DELETE_PROJECTS_FEATURELISTS`

Tool to delete a specified featurelist from a DataRobot project. Use when you need to remove a featurelist by its ID. Supports dry-run mode to preview deletion impact and automatic deletion of dependencies.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dryRun` | string ("false" | "False" | "true" | "True") | No | If "true"/"True", preview the deletion impact without deleting anything. |
| `projectId` | string | Yes | The project ID. |
| `featurelistId` | string | Yes | The featurelist ID. |
| `deleteDependencies` | string ("false" | "False" | "true" | "True") | No | If "true"/"True", automatically delete dependent entities along with the featurelist. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
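
The dry-run option naturally supports a two-step flow: preview the impact first, then repeat the same call without `dryRun` to actually delete. A minimal sketch, assuming a hypothetical `execute_tool` client callable:

```python
# Sketch of the preview-then-delete flow for DATAROBOT_DELETE_PROJECTS_FEATURELISTS.
def delete_featurelist_with_preview(execute_tool, project_id, featurelist_id,
                                    delete_dependencies=False):
    base = {
        "projectId": project_id,
        "featurelistId": featurelist_id,
        "deleteDependencies": "true" if delete_dependencies else "false",
    }
    # Step 1: dry run to preview what would be removed.
    preview = execute_tool("DATAROBOT_DELETE_PROJECTS_FEATURELISTS",
                           {**base, "dryRun": "true"})
    if not preview["successful"]:
        raise RuntimeError(preview.get("error") or "dry run failed")
    # Step 2: the real deletion.
    return execute_tool("DATAROBOT_DELETE_PROJECTS_FEATURELISTS",
                        {**base, "dryRun": "false"})
```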

### Delete Prediction Dataset

**Slug:** `DATAROBOT_DELETE_PROJECTS_PREDICTION_DATASETS`

Tool to delete a prediction dataset that was uploaded for a DataRobot project. Use when you need to permanently remove a prediction dataset by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The dataset ID to delete. |
| `projectId` | string | Yes | The project ID that owns the dataset. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Quota

**Slug:** `DATAROBOT_DELETE_QUOTA`

Tool to delete a quota by its ID. Use when you need to permanently remove a quota from the system.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `quotaId` | string | Yes | Specific quota ID to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Recipe

**Slug:** `DATAROBOT_DELETE_RECIPE`

Tool to delete a wrangling recipe. Use when you need to permanently remove a recipe by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `recipeId` | string | Yes | The unique identifier of the wrangling recipe to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Registered Model

**Slug:** `DATAROBOT_DELETE_REGISTERED_MODELS`

Tool to archive a registered model in DataRobot. Use when you need to remove a registered model by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `registeredModelId` | string | Yes | Unique identifier of the registered model to archive. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Secure Configuration

**Slug:** `DATAROBOT_DELETE_SECURE_CONFIGS`

Tool to delete a secure configuration and its values. Use when you need to permanently remove a secure configuration by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `secureConfigId` | string | Yes | The id of the secure configuration to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Spark Sessions

**Slug:** `DATAROBOT_DELETE_SPARK_SESSIONS`

Tool to stop a DataRobot Spark wrangling session of a given instance size. Use when you need to terminate an active Spark session to free up resources.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `size` | string ("small" | "medium" | "large") | No | The Spark instance size to stop. Options: small, medium, or large. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Status

**Slug:** `DATAROBOT_DELETE_STATUS`

Tool to delete a task status object. Use when you need to remove a status entry by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `statusId` | string | Yes | The ID of the status object to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Use Case

**Slug:** `DATAROBOT_DELETE_USE_CASE`

Tool to delete a DataRobot use case. Use when you need to permanently remove a use case by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `useCaseId` | string | Yes | The ID of the use case to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Use Case Reference

**Slug:** `DATAROBOT_DELETE_USE_CASE_REFERENCE`

Tool to remove a related entity from a DataRobot use case. Use when you need to unlink a resource (dataset, project, deployment, etc.) from a use case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityId` | string | Yes | The primary id of the entity to remove from the use case. |
| `useCaseId` | string | Yes | The ID of the use case from which the entity will be removed. |
| `deleteResource` | boolean | No | If True, delete the linked resource itself; if False, only remove the association with the use case. |
| `referenceCollectionType` | string ("projects" | "datasets" | "files" | "notebooks" | "applications" | "recipes" | "customModelVersions" | "registeredModelVersions" | "deployments" | "customApplications" | "customJobs") | Yes | The reference collection type (e.g., datasets, projects, deployments). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
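
The `deleteResource` flag draws an important distinction: `False` only unlinks the entity from the use case, while `True` also deletes the resource itself. The builder below is an illustrative sketch that validates the collection enum and defaults to the safer unlink-only behavior:

```python
# Hypothetical input builder for DATAROBOT_DELETE_USE_CASE_REFERENCE.
VALID_COLLECTIONS = {
    "projects", "datasets", "files", "notebooks", "applications", "recipes",
    "customModelVersions", "registeredModelVersions", "deployments",
    "customApplications", "customJobs",
}

def build_unlink_input(use_case_id, entity_id, collection, delete_resource=False):
    if collection not in VALID_COLLECTIONS:
        raise ValueError(f"unknown referenceCollectionType: {collection}")
    return {
        "useCaseId": use_case_id,
        "entityId": entity_id,
        "referenceCollectionType": collection,
        # False = unlink only; True = also delete the underlying resource.
        "deleteResource": delete_resource,
    }
```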

### Delete User Blueprint

**Slug:** `DATAROBOT_DELETE_USER_BLUEPRINT`

Tool to delete a user-owned blueprint in DataRobot. Use when you need to permanently remove a user blueprint by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `userBlueprintId` | string | Yes | The ID of the user-owned blueprint to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete User Blueprints

**Slug:** `DATAROBOT_DELETE_USER_BLUEPRINTS`

Tool to delete one or more user blueprints by their IDs. Use when you need to remove custom blueprints from DataRobot. Returns lists of successfully and unsuccessfully deleted blueprint IDs.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `userBlueprintIds` | array | Yes | List of user blueprint IDs to delete. Each ID must be a valid MongoDB ObjectId (24-character hexadecimal string). At least one ID is required. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete User Group

**Slug:** `DATAROBOT_DELETE_USER_GROUP`

Tool to delete a user group by its ID. Use after confirming the group is no longer needed.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `group_id` | string | Yes | ID of the user group to delete |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete User Notification

**Slug:** `DATAROBOT_DELETE_USER_NOTIFICATION`

Tool to delete a user notification by ID. Use when you need to permanently remove a specific notification from the user's notification list.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `userNotificationId` | string | Yes | Unique identifier of the notification. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete All User Notifications

**Slug:** `DATAROBOT_DELETE_USER_NOTIFICATIONS`

Tool to delete all user notifications in DataRobot. Use when you need to clear all notifications for the authenticated user.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Delete Value Tracker

**Slug:** `DATAROBOT_DELETE_VALUE_TRACKER`

Tool to delete a DataRobot value tracker. Use when you need to permanently remove a value tracker by its unique ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `valueTrackerId` | string | Yes | The unique identifier of the value tracker to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Detect External Data Store UDFs

**Slug:** `DATAROBOT_DETECT_EXT_DS_STANDARD_USER_DEF_FUNCTIONS`

Tool to start the job that detects standard user-defined functions for an external data store. Use when you need to detect rolling_median or rolling_most_frequent functions in a database schema.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `force` | boolean | Yes | Forces detection to be submitted even if a cache of detected standard user-defined functions for the given parameters is already present. |
| `schema` | string | Yes | The schema to create or detect user-defined functions in. |
| `dataStoreId` | string | Yes | ID of the external data store to detect functions in. |
| `credentialId` | string | No | ID of the set of credentials to use for authenticating to the data store. Use DATAROBOT_LIST_CREDENTIALS to find available credentials. |
| `functionType` | string ("rolling_median" | "rolling_most_frequent") | Yes | Standard user-defined function type to detect. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
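
Because `credentialId` is optional while the other parameters are required, a small builder that omits the credential when absent keeps the request clean. The helper below is an illustrative sketch:

```python
# Sketch of assembling the DATAROBOT_DETECT_EXT_DS_STANDARD_USER_DEF_FUNCTIONS
# input; credentialId is included only when supplied.
def build_udf_detection_input(data_store_id, schema, function_type,
                              force=False, credential_id=None):
    if function_type not in ("rolling_median", "rolling_most_frequent"):
        raise ValueError("functionType must be rolling_median or rolling_most_frequent")
    payload = {
        "dataStoreId": data_store_id,
        "schema": schema,
        "functionType": function_type,
        "force": force,
    }
    if credential_id is not None:
        payload["credentialId"] = credential_id
    return payload
```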

### Download Custom Model

**Slug:** `DATAROBOT_DOWNLOAD_CUSTOM_MODEL`

Tool to download the latest custom model version content from DataRobot. Use when you need to retrieve a custom model archive.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `pps` | string ("false" | "False" | "true" | "True") | No | Download the model version from the PPS tab. If 'true' or 'True', the model archive includes a dependencies install script; if 'false' or 'False', the script is not included. |
| `customModelId` | string | Yes | The ID of the custom model to download. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Download Custom Model Version

**Slug:** `DATAROBOT_DOWNLOAD_CUSTOM_MODEL_VERSION`

Tool to download custom model version content from DataRobot as a file archive. Use when you need to retrieve the full model version package including code and dependencies.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `pps` | string ("false" | "False" | "true" | "True") | No | If 'true' or 'True', the downloaded archive includes the PPS dependencies install script; if 'false' or 'False', it does not. |
| `custom_model_id` | string | Yes | The ID of the custom model. |
| `custom_model_version_id` | string | Yes | The ID of the custom model version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Download File

**Slug:** `DATAROBOT_DOWNLOAD_FILE`

Tool to download file data from a DataRobot catalog item by streaming it. Use when you need to retrieve the actual file content from the catalog.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `fileName` | string | No | The name of a file to download from an unstructured dataset. If not specified, it will download either the original archive file or, if there is only one file, it will download that. |
| `catalogId` | string | Yes | The catalog item ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Download Scoring Code

**Slug:** `DATAROBOT_DOWNLOAD_SCORING_CODE`

Tool to download scoring code for a DataRobot deployment. Use after deployment is active.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sourceCode` | boolean | No | Whether to return source code (not executable) instead of a compiled JAR. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `includeAgent` | boolean | No | Include DataRobot tracking agent in the package. Cannot be true when requesting source code. |
| `includePredictionIntervals` | boolean | No | Include prediction intervals in the downloaded package. |
| `includePredictionExplanations` | boolean | No | Include prediction explanations in the downloaded package. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error, if any, that occurred during execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
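
Per the parameter notes, `includeAgent` cannot be combined with `sourceCode`, so the constraint is worth checking before the call. An illustrative input builder:

```python
# Pre-flight check for DATAROBOT_DOWNLOAD_SCORING_CODE: includeAgent cannot be
# true when requesting source code.
def build_scoring_code_input(deployment_id, source_code=False, include_agent=False,
                             include_intervals=False, include_explanations=False):
    if source_code and include_agent:
        raise ValueError("includeAgent cannot be true when requesting source code")
    return {
        "deploymentId": deployment_id,
        "sourceCode": source_code,
        "includeAgent": include_agent,
        "includePredictionIntervals": include_intervals,
        "includePredictionExplanations": include_explanations,
    }
```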

### Evaluate Entitlements

**Slug:** `DATAROBOT_EVALUATE_ENTITLEMENTS`

Tool to evaluate which entitlements are enabled for the authenticated user's DataRobot account. Use when checking feature availability or permissions before attempting operations that require specific entitlements.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entitlements` | array | Yes | Array of entitlements to evaluate (maximum 100 items). Each entitlement should specify a name to check. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
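
Since each entitlement specifies a name and the array is capped at 100 items, the request body can be assembled and bounds-checked locally. A sketch, assuming a hypothetical helper `build_entitlements_payload` and the `{"name": ...}` item shape described above:

```python
def build_entitlements_payload(names):
    """Build the entitlement-evaluation request body, enforcing the
    documented 1-100 item range."""
    if not 1 <= len(names) <= 100:
        raise ValueError("entitlements must contain between 1 and 100 items")
    return {"entitlements": [{"name": n} for n in names]}
```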

### Export Tenant Usage

**Slug:** `DATAROBOT_EXPORT_TENANT_USAGE`

Export tenant resource usage data for billing and cost analysis. Retrieves detailed usage records including CPU, GPU, and LLM workloads for a specified tenant and date range. Use the Get Account Info action first to obtain the required tenantId. Note: Start date cannot be before 2025-07-01.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | Yes | End date for usage export in YYYY-MM-DD format. Must be on or after the start date. |
| `start` | string | Yes | Start date for usage export in YYYY-MM-DD format. Note: Start date cannot be before 2025-07-01. |
| `userId` | string | No | Optional user ID to filter usage by a specific user. Obtain user IDs from the Get Account Info or List Organization Users actions. |
| `tenantId` | string | Yes | The UUID of the tenant to export usage for. Obtain this from the Get Account Info action (tenantId field). |
| `workloadCategory` | string ("all" | "gpuUsage" | "llmUsage" | "cpuUsage") | No | Optional workload category to filter usage. Allowed values: 'all' (all categories), 'gpuUsage' (GPU workloads), 'llmUsage' (LLM workloads), 'cpuUsage' (CPU workloads). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
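
The date-range rules (start no earlier than 2025-07-01, end on or after start) and the `workloadCategory` enum can be validated before calling the tool. A sketch, assuming a hypothetical helper `build_usage_export_params`:

```python
from datetime import date

EARLIEST_START = date(2025, 7, 1)  # documented lower bound for the start date

def build_usage_export_params(tenant_id, start, end,
                              user_id=None, workload_category=None):
    """Validate the date range and assemble parameters for a tenant
    usage export request."""
    start_d = date.fromisoformat(start)
    end_d = date.fromisoformat(end)
    if start_d < EARLIEST_START:
        raise ValueError("start date cannot be before 2025-07-01")
    if end_d < start_d:
        raise ValueError("end date must be on or after the start date")
    if workload_category is not None and workload_category not in (
            "all", "gpuUsage", "llmUsage", "cpuUsage"):
        raise ValueError(f"unknown workloadCategory: {workload_category}")
    params = {"tenantId": tenant_id, "start": start, "end": end}
    if user_id:
        params["userId"] = user_id
    if workload_category:
        params["workloadCategory"] = workload_category
    return params
```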

### Finalize Batch Predictions CSV Upload

**Slug:** `DATAROBOT_FINALIZE_BATCH_PREDICTIONS_CSV_UPLOAD`

Tool to finalize a multipart CSV upload for batch predictions. Use after uploading all CSV parts via PUT requests to submit the job to the scoring queue. Only works for jobs created with multipart=true.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `predictionJobId` | string | Yes | ID of the Batch Prediction job to finalize the multipart upload for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
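
Before finalizing, each uploaded part should be a valid chunk of the CSV; splitting on row boundaries keeps every part parseable. A row-preserving chunking sketch (hypothetical helper, not part of the tool):

```python
def split_csv_for_multipart(csv_text, max_part_bytes):
    """Split CSV text into parts of at most max_part_bytes, keeping whole
    lines together so each uploaded part contains complete rows."""
    parts, current, size = [], [], 0
    for line in csv_text.splitlines(keepends=True):
        if current and size + len(line) > max_part_bytes:
            parts.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        parts.append("".join(current))
    return parts
```

Each part would then be uploaded in order via the multipart PUT endpoint before this finalize call submits the job to the scoring queue.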

### Get Access Role

**Slug:** `DATAROBOT_GET_ACCESS_ROLE`

Tool to retrieve details for a specific Access Role by ID. Use when you need confirmation of role permissions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `role_id` | string | Yes | ID of the Access Role to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Account Info

**Slug:** `DATAROBOT_GET_ACCOUNT_INFO`

Retrieves account information for the currently authenticated user, including user ID (uid), email, name, tenant ID, and organization ID (orgId). No parameters required. Use this action to get the current user's orgId for listing organization users or tenantId for tenant-level operations.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Accuracy Metrics Config

**Slug:** `DATAROBOT_GET_ACCURACY_METRICS_CONFIG`

Tool to retrieve which accuracy metrics are displayed and their order for a deployment. Use when you need the configured metrics order for a deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment to retrieve accuracy metrics configuration for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Accuracy Over Time

**Slug:** `DATAROBOT_GET_ACCURACY_OVER_TIME`

Tool to retrieve baseline and accuracy metric values over time buckets for a deployment. Use when analyzing model performance trends; call after confirming the deployment is live.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of time range (RFC3339, top of the hour). Defaults to the next top of the hour if not provided. |
| `start` | string | No | Start of time range (RFC3339, top of the hour). Defaults to 7 days before end if not provided. |
| `bucketSize` | string | No | Duration of each bucket in ISO 8601 duration format. Examples: 'PT1H' (1 hour), 'P1D' (1 day), 'P7D' (7 days). Must not exceed total period; auto-computed if omitted. |
| `deploymentId` | string | Yes | Unique identifier of the deployment (24-character alphanumeric ID). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
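
The defaults described above (end at the next top of the hour, start 7 days earlier, both RFC3339) can be reproduced locally when you want explicit timestamps. A sketch, assuming a hypothetical helper `default_accuracy_window`:

```python
from datetime import datetime, timedelta, timezone

def default_accuracy_window(now=None):
    """Compute the documented defaults: end is the next top of the hour,
    start is 7 days before end, both formatted as RFC3339 timestamps."""
    now = now or datetime.now(timezone.utc)
    end = now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    start = end - timedelta(days=7)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return start.strftime(fmt), end.strftime(fmt)
```

Bucket sizes such as `PT1H` or `P1D` are ISO 8601 durations and must not exceed the start-to-end span.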

### Get Batch Job

**Slug:** `DATAROBOT_GET_BATCH_JOB`

Tool to retrieve a DataRobot batch job by ID. Use when you need details about a specific batch prediction or monitoring job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `batchJobId` | string | Yes | ID of the Batch job to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Batch Prediction Job Definition

**Slug:** `DATAROBOT_GET_BATCH_PREDICTION_JOB_DEFINITION`

Tool to retrieve a Batch Prediction job definition by ID. Use when you need to inspect configuration or check scheduling status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `jobDefinitionId` | string | Yes | ID of the Batch Prediction job definition to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Portable Batch Prediction Job Definition

**Slug:** `DATAROBOT_GET_BATCH_PREDICTION_JOB_DEFINITION_PORTABLE`

Tool to retrieve a portable batch prediction job definition snippet. Use when you need the configuration to run portable batch predictions outside DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `jobDefinitionId` | string | Yes | ID of the Batch Prediction job definition to retrieve as portable snippet. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Batch Predictions

**Slug:** `DATAROBOT_GET_BATCH_PREDICTIONS`

Tool to retrieve a Batch Prediction job by ID. Use when you need to check the status or details of a batch prediction job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `predictionJobId` | string | Yes | ID of the Batch Prediction job |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Calendar

**Slug:** `DATAROBOT_GET_CALENDARS`

Tool to retrieve information about a calendar by ID. Use when you need calendar details including events and format.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `calendar_id` | string | Yes | The ID of the calendar to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Catalog Item

**Slug:** `DATAROBOT_GET_CATALOG_ITEM`

Tool to retrieve catalog item details by ID. Use when you need information about a specific catalog item including its status, type, and metadata.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `catalogId` | string | Yes | Catalog item ID to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Change Request

**Slug:** `DATAROBOT_GET_CHANGE_REQUEST`

Tool to retrieve a DataRobot change request by ID. Use when you need details about a pending or completed change request for a deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `changeRequestId` | string | Yes | ID of the Change Request to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Compliance Doc Template

**Slug:** `DATAROBOT_GET_COMPLIANCE_DOC_TEMPLATE`

Tool to retrieve a compliance documentation template by ID. Use when you need to view template details, structure, or sections before generating compliance documents.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `templateId` | string | Yes | The ID of a model compliance document template accessible by the user |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Credentials

**Slug:** `DATAROBOT_GET_CREDENTIALS`

Tool to retrieve a credentials entity by ID. Use when you need the details of a specific credential.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `credentialId` | string | Yes | Credentials entity ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Credentials Associations

**Slug:** `DATAROBOT_GET_CREDENTIALS_ASSOCIATIONS`

Tool to list credentials associated with a specific object. Use when you need to retrieve credentials linked to a data connection or batch prediction job definition.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. The default may change without notice. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `orderBy` | string ("isDefault" | "-isDefault") | No | Sort order for credentials associations list. |
| `associationId` | string | Yes | The compound ID of the associated object, in the form `<object_type>:<object_id>`, where `<object_id>` is the ID of the object and `<object_type>` is a value from the CredentialMappingResourceTypes enum: `dataconnection` or `batch_prediction_job_definition`. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
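
The compound `associationId` format described above is easy to get wrong by hand. A sketch that composes and validates it, assuming a hypothetical helper `make_association_id`:

```python
# Object types from the CredentialMappingResourceTypes enum cited above.
VALID_OBJECT_TYPES = {"dataconnection", "batch_prediction_job_definition"}

def make_association_id(object_type, object_id):
    """Compose the compound associationId '<object_type>:<object_id>',
    validating the object type against the documented enum."""
    if object_type not in VALID_OBJECT_TYPES:
        raise ValueError(f"unsupported object type: {object_type}")
    return f"{object_type}:{object_id}"
```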

### Get Custom Application Source

**Slug:** `DATAROBOT_GET_CUSTOM_APPLICATION_SOURCE`

Tool to retrieve a custom application source by ID. Use when you need to get details about a specific custom application source.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `appSourceId` | string | Yes | The ID of the custom application source to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Custom Job

**Slug:** `DATAROBOT_GET_CUSTOM_JOB`

Tool to retrieve a custom job by ID. Use when you need details about a specific DataRobot custom job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `customJobId` | string | Yes | ID of the custom job. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Custom Job Item

**Slug:** `DATAROBOT_GET_CUSTOM_JOB_ITEM`

Tool to retrieve custom job file content by custom job ID and item ID. Use when you need to access the contents of a specific file associated with a DataRobot custom job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `itemId` | string | Yes | ID of the file item. |
| `customJobId` | string | Yes | ID of the custom job. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Custom Model

**Slug:** `DATAROBOT_GET_CUSTOM_MODELS`

Tool to retrieve a DataRobot custom model by ID. Use when you need custom model metadata and version details before deployment or testing.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `customModelId` | string | Yes | The ID of the custom model to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Custom Model Version

**Slug:** `DATAROBOT_GET_CUSTOM_MODEL_VERSION`

Tool to retrieve a specific custom model version in DataRobot. Use when you need details about a particular version of a custom model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `customModelId` | string | Yes | The ID of the custom model. |
| `customModelVersionId` | string | Yes | The ID of the custom model version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Custom Template

**Slug:** `DATAROBOT_GET_CUSTOM_TEMPLATE`

Tool to retrieve a single custom template by ID from DataRobot. Use when you need to get detailed information about a specific custom template.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `customTemplateId` | string | Yes | The unique identifier of the custom template to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Custom Templates Files

**Slug:** `DATAROBOT_GET_CUSTOM_TEMPLATES_FILES`

Tool to retrieve a single custom template file by its ID. Use when you need to access the content and metadata of a specific file within a custom template.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `fileId` | string | Yes | The ID of the file. |
| `customTemplateId` | string | Yes | The ID of the custom template. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Data Engine Workspace State

**Slug:** `DATAROBOT_GET_DATA_ENGINE_WORKSPACE_STATE`

Tool to retrieve a data engine workspace state by ID. Use when you need details about a specific data engine query execution.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `workspaceStateId` | string | Yes | ID of the data engine workspace state to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Dataset

**Slug:** `DATAROBOT_GET_DATASET`

Tool to retrieve a dataset by ID from DataRobot's catalog. Use when you need detailed metadata about a specific dataset.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The ID of the dataset to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Dataset Definition

**Slug:** `DATAROBOT_GET_DATASET_DEFINITION`

Tool to retrieve a dataset definition by ID. Use when you need to inspect dataset schema, size, or metadata before using it in modeling or predictions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `version` | integer | No | The version of the dataset definition information to retrieve. If not specified, returns the latest version. |
| `datasetDefinitionId` | string | Yes | The ID of the dataset definition to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Dataset Featurelist

**Slug:** `DATAROBOT_GET_DATASET_FEATURELIST`

Tool to retrieve a specific featurelist from a dataset. Use when you need to get details of a dataset featurelist including its features and metadata.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The ID of the dataset. |
| `featurelistId` | string | Yes | The ID of the featurelist. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Dataset File

**Slug:** `DATAROBOT_GET_DATASET_FILE`

Tool to download the original dataset file from DataRobot. Use when you need to retrieve the raw data for a dataset. Note: dataset must have dataPersisted=true and be a snapshot to be downloadable.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The ID of the dataset to download. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Dataset Feature Histogram

**Slug:** `DATAROBOT_GET_DATASETS_FEATURE_HISTOGRAMS`

Tool to retrieve histogram data for a specific feature in a dataset. Use when you need to analyze the distribution of values for a feature in the DataRobot dataset catalog.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `key` | string | No | Required only for summarized categorical features: the name of the top-50 key for which the plot should be retrieved. |
| `binLimit` | integer | No | Maximum number of bins in the returned plot. |
| `usePlot2` | string | No | Use frequent values plot data instead of histogram for supported feature types. |
| `datasetId` | string | Yes | The ID of the dataset entry to retrieve. |
| `featureName` | string | Yes | The name of the feature to get histogram for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
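
Since `key` applies only to summarized categorical features, the optional query parameters can be assembled with a conservative guard. A sketch under that assumption, using a hypothetical helper `histogram_query_params`:

```python
def histogram_query_params(bin_limit=None, key=None, use_plot2=None,
                           summarized_categorical=False):
    """Assemble the optional histogram query parameters; 'key' is only
    meaningful for summarized categorical features, so reject it otherwise."""
    if key is not None and not summarized_categorical:
        raise ValueError("'key' applies only to summarized categorical features")
    params = {}
    if bin_limit is not None:
        params["binLimit"] = bin_limit
    if key is not None:
        params["key"] = key
    if use_plot2 is not None:
        params["usePlot2"] = use_plot2
    return params
```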

### Get Dataset Feature Transform

**Slug:** `DATAROBOT_GET_DATASETS_FEATURE_TRANSFORMS`

Tool to retrieve a feature transform for a specific dataset feature. Use when you need details about how a feature was transformed in DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The dataset to select feature from. |
| `featureName` | string | Yes | The name of the feature. Note that DataRobot renames some features, so the name may differ from the one in your original data. Non-ASCII feature names should be UTF-8-encoded (before URL-quoting). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Dataset Refresh Job

**Slug:** `DATAROBOT_GET_DATASETS_REFRESH_JOB`

Tool to retrieve a scheduled dataset refresh job by ID. Use when you need details about a specific dataset refresh schedule configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `job_id` | string | Yes | The ID of the scheduled dataset refresh job to retrieve. |
| `dataset_id` | string | Yes | The ID of the dataset associated with the scheduled refresh job. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Dataset Version

**Slug:** `DATAROBOT_GET_DATASETS_VERSIONS`

Tool to retrieve detailed information about a specific dataset version. Use when you need to inspect dataset metadata, schema, and processing state for a particular version.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The ID of the dataset entry. |
| `datasetVersionId` | string | Yes | The ID of the dataset version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Dataset Version Feature Histogram

**Slug:** `DATAROBOT_GET_DATASETS_VERSIONS_FEATURE_HISTOGRAMS`

Tool to retrieve dataset feature histogram from DataRobot. Use when you need to visualize feature value distributions or understand feature characteristics.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `key` | string | No | Required only for summarized categorical features: the name of the top-50 key for which the plot should be retrieved. |
| `binLimit` | integer | No | Maximum number of bins in the returned plot. |
| `usePlot2` | string | No | Use frequent values plot data instead of histogram for supported feature types. |
| `datasetId` | string | Yes | The ID of the dataset entry to retrieve. |
| `featureName` | string | Yes | The name of the feature. |
| `datasetVersionId` | string | Yes | The ID of the dataset version to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Dataset Version Featurelist

**Slug:** `DATAROBOT_GET_DATASETS_VERSIONS_FEATURELISTS`

Tool to retrieve a specific featurelist from a dataset version. Use when you need to get details of a featurelist associated with a specific version of a dataset.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The ID of the dataset to retrieve the featurelist from. |
| `featurelistId` | string | Yes | The ID of the featurelist. |
| `datasetVersionId` | string | Yes | The ID of the dataset version to retrieve the featurelist from. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Data Slice

**Slug:** `DATAROBOT_GET_DATA_SLICE`

Tool to retrieve a Data Slice by ID. Use when you need to inspect a data slice configuration or filters for a project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data_slice_id` | string | Yes | ID of the data slice to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Datetime Partitioning

**Slug:** `DATAROBOT_GET_DATETIME_PARTITIONING`

Tool to retrieve datetime partitioning configuration for a project. Use when you need to inspect a project's time-series settings before modeling.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The project ID for which to retrieve datetime partitioning configuration. Only projects with datetime partitioning configured (cvMethod='datetime') will return data. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Deployment

**Slug:** `DATAROBOT_GET_DEPLOYMENT`

Tool to retrieve a deployment by ID. Use after creating or updating a deployment to fetch its full metadata and status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Deployment Accuracy

**Slug:** `DATAROBOT_GET_DEPLOYMENT_ACCURACY`

Tool to retrieve accuracy metrics for a deployment over a time period. Use when you need to analyze model performance trends or drift for a specific deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | Period end; RFC3339 timestamp aligned to the top of the hour; defaults to the next top of the hour. |
| `start` | string | No | Period start; RFC3339 timestamp aligned to the top of the hour; defaults to 7 days before end. |
| `metric` | string | No | Metric name; required if requesting multiple models. |
| `batchId` | string | No | Batch ID to filter metrics. |
| `modelId` | array | No | ID(s) of model(s) to retrieve metrics for. |
| `targetClass` | string | No | Target class filter. |
| `deploymentId` | string | Yes | Unique identifier of the DataRobot deployment. |
| `segmentValue` | string | No | Segment attribute value. |
| `baselineModelId` | string | No | Model ID to use as baseline for comparison. Must be one of the IDs specified in modelId. |
| `segmentAttribute` | string | No | Segment attribute name. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
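
The accuracy parameters are coupled: `metric` becomes required once more than one `modelId` is requested, `baselineModelId` must be one of the requested `modelId` values, and `start`/`end` must be top-of-hour timestamps. A pre-flight validator sketching those rules (the function name and shape are illustrative, not part of the tool):

```python
from datetime import datetime, timezone


def build_accuracy_query(deployment_id, model_ids=None, metric=None,
                         start=None, end=None, baseline_model_id=None):
    """Assemble inputs for DATAROBOT_GET_DEPLOYMENT_ACCURACY, enforcing
    the documented parameter constraints before any request is sent."""
    model_ids = model_ids or []
    if len(model_ids) > 1 and metric is None:
        raise ValueError("metric is required when requesting multiple models")
    if baseline_model_id is not None and baseline_model_id not in model_ids:
        raise ValueError("baselineModelId must be one of the IDs in modelId")
    for label, ts in (("start", start), ("end", end)):
        # Top-of-hour means no minutes, seconds, or fractional seconds.
        if ts is not None and (ts.minute or ts.second or ts.microsecond):
            raise ValueError(f"{label} must be a top-of-hour timestamp")
    query = {"deploymentId": deployment_id}
    if model_ids:
        query["modelId"] = model_ids
    if metric:
        query["metric"] = metric
    if start:
        query["start"] = start.isoformat()
    if end:
        query["end"] = end.isoformat()
    if baseline_model_id:
        query["baselineModelId"] = baseline_model_id
    return query
```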

### Get Deployment Capabilities

**Slug:** `DATAROBOT_GET_DEPLOYMENT_CAPABILITIES`

Tool to retrieve the capabilities for a deployment. Use after creating or loading a deployment to check supported features.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment to retrieve capabilities for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Deployment Champion Model Package

**Slug:** `DATAROBOT_GET_DEPLOYMENT_CHAMPION_MODEL_PACKAGE`

Tool to retrieve the champion model package for a deployment. Use when you need detailed information about the model package currently deployed as champion.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Deployment Features

**Slug:** `DATAROBOT_GET_DEPLOYMENT_FEATURES`

Tool to retrieve features in the universe dataset associated with a deployment. Use after deployment creation to explore its feature set.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Number of features to return; defaults to 0 (all features). |
| `offset` | integer | No | Number of features to skip; defaults to 0. |
| `search` | string | No | Case-insensitive search against names of the deployment’s features. |
| `orderBy` | string ("name" | "-name" | "importance" | "-importance") | No | Sort order to apply to the list of features. Allowed values: name, -name, importance, -importance. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `forSegmentedAnalysis` | boolean | No | When true, return only features usable for segmented analysis. |
| `includeNonPredictionFeatures` | boolean | No | When true, return all raw features in the universe dataset; when false, only raw features used for predictions. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
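
Because `limit` defaults to 0 (all features), paging is optional; for deployments with very large feature sets you can still walk the list with `limit`/`offset`. A sketch under the assumption that some wrapper `fetch_page(limit, offset)` executes this tool and returns the page's feature list (the wrapper is hypothetical; adapt it to your client):

```python
def iter_deployment_features(fetch_page, page_size=100):
    """Iterate over all features of a deployment by paging with
    limit/offset, as supported by DATAROBOT_GET_DEPLOYMENT_FEATURES.

    ``fetch_page(limit, offset)`` must return one page of features;
    iteration stops when a short page signals the end of the list.
    """
    offset = 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        yield from page
        if len(page) < page_size:
            return
        offset += page_size
```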

### Get Deployment Actuals Data Export

**Slug:** `DATAROBOT_GET_DEPLOYMENTS_ACTUALS_DATA_EXPORT`

Tool to retrieve a single actuals data export for a deployment. Use when you need to check the status or results of an actuals data export job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `exportId` | string | Yes | ID of the actuals data export job. |
| `deploymentId` | string | Yes | ID of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
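
Export jobs are asynchronous, so this tool is typically called in a poll loop until the export reaches a terminal state. A hedged sketch (the status values `'SUCCEEDED'` and `'FAILED'` and the `get_export` wrapper are assumptions, not guaranteed by the API; check the actual status strings your deployment returns):

```python
import time


def wait_for_export(get_export, timeout_s=300, poll_s=5):
    """Poll an export job (e.g. via GET_DEPLOYMENTS_ACTUALS_DATA_EXPORT)
    until it reaches a terminal status, then return the export record.

    ``get_export()`` stands in for one call to the tool and must return
    the export record as a dict with a 'status' field.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        export = get_export()
        status = export.get("status")
        if status in ("SUCCEEDED", "FAILED"):
            return export
        if time.monotonic() >= deadline:
            raise TimeoutError(f"export still {status!r} after {timeout_s}s")
        time.sleep(poll_s)
```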

### Get Deployment Custom Metric

**Slug:** `DATAROBOT_GET_DEPLOYMENTS_CUSTOM_METRIC`

Tool to retrieve metadata for a single custom metric on a deployment. Use when you need detailed information about a specific custom metric's configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | ID of the deployment |
| `customMetricId` | string | Yes | ID of the custom metric |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Deployment Settings Checklist

**Slug:** `DATAROBOT_GET_DEPLOYMENT_SETTINGS_CHECKLIST`

Tool to return a checklist of deployment settings and their configuration state. Use when you need to verify which settings are set, partial, or not set for a given deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Deployment Monitoring Batch

**Slug:** `DATAROBOT_GET_DEPLOYMENTS_MONITORING_BATCHES`

Tool to retrieve a monitoring batch in a deployment. Use when you need to check the status or details of a specific monitoring batch.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment |
| `monitoringBatchId` | string | Yes | ID of the monitoring batch |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Deployment Prediction Data Export

**Slug:** `DATAROBOT_GET_DEPLOYMENTS_PREDICTION_DATA_EXPORTS`

Tool to retrieve a single prediction data export for a deployment. Use when you need to check the status or results of a prediction data export job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `exportId` | string | Yes | Unique identifier of the export. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Deployment Training Data Export

**Slug:** `DATAROBOT_GET_DEPLOYMENTS_TRAINING_DATA_EXPORTS`

Tool to retrieve a single training data export for a deployment. Use when you need to get details of a training data export by ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `exportId` | string | Yes | Unique identifier of the export. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Entity Notification Channel By ID

**Slug:** `DATAROBOT_GET_ENTITY_NOTIFICATION_CHANNEL_BY_ID`

Tool to retrieve an entity notification channel by ID for a specific deployment or custom job. Use when you need to get details about a notification channel configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `channelId` | string | Yes | The ID of the entity notification channel to retrieve. Obtain from DATAROBOT_LIST_ENTITY_NOTIFICATION_CHANNELS. |
| `relatedEntityId` | string | Yes | The ID of the related entity (deployment ID or custom job ID). Obtain from DATAROBOT_LIST_DEPLOYMENTS or DATAROBOT_LIST_CUSTOM_JOBS. |
| `relatedEntityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "customjob" | "Customjob" | "CUSTOMJOB") | Yes | Type of related entity (deployment or customjob). Use deployment for deployment-related notifications or customjob for custom job notifications. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
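
`relatedEntityType` accepts several casings but only two logical values, so normalizing caller input up front avoids scattering case variants through your code. A small helper (illustrative only):

```python
def normalize_related_entity_type(value: str) -> str:
    """Map any accepted casing ('Deployment', 'CUSTOMJOB', ...) onto the
    canonical lowercase form used by the other notification tools."""
    canonical = value.lower()
    if canonical not in ("deployment", "customjob"):
        raise ValueError(f"unsupported relatedEntityType: {value!r}")
    return canonical
```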

### Get Entity Notification Channels

**Slug:** `DATAROBOT_GET_ENTITY_NOTIFICATION_CHANNELS`

Tool to list notification channels related to a specific entity. Use when retrieving notification channels for a deployment or custom job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of notification channels to return (1-1000). |
| `offset` | integer | No | Number of notification channels to skip for pagination. |
| `namePart` | string | No | Filter channels whose names contain this substring (case-sensitive). |
| `relatedEntityId` | string | Yes | The ID of the related entity. |
| `relatedEntityType` | string ("deployment" | "customjob") | Yes | Type of related entity (deployment or customjob). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Entity Notification Policies

**Slug:** `DATAROBOT_GET_ENTITY_NOTIFICATION_POLICIES`

Tool to list entity notification policies for deployments or custom jobs. Use when retrieving notification configurations for a specific entity.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many notification policies to return. |
| `offset` | integer | No | How many notification policies to skip. |
| `namePart` | string | No | Only return notification policies whose names contain the given substring. |
| `channelId` | string | No | Return policies with this channel. |
| `eventGroup` | string ("secure_config.all" | "dataset.all" | "file.all" | "comment.all" | "invite_job.all" | "deployment_prediction_explanations_computation.all" | "model_deployments.critical_health" | "model_deployments.critical_frequent_health_change" | "model_deployments.frequent_health_change" | "model_deployments.health" | "model_deployments.retraining_policy" | "inference_endpoints.health" | "model_deployments.management_agent" | "model_deployments.management_agent_health" | "prediction_request.all" | "challenger_management.all" | "challenger_replay.all" | "model_deployments.all" | "project.all" | "perma_delete_project.all" | "users_delete.all" | "applications.all" | "model_version.stage_transitions" | "model_version.all" | "use_case.all" | "batch_predictions.all" | "change_requests.all" | "custom_job_run.all" | "custom_job_run.unsuccessful" | "insights_computation.all" | "notebook_schedule.all" | "monitoring.all") | No | Return policies with this event group. |
| `channelScope` | string ("organization" | "entity" | "template") | No | Scope of the channel. |
| `relatedEntityId` | string | Yes | The ID of the related entity. |
| `relatedEntityType` | string ("deployment" | "customjob") | Yes | Type of related entity. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Entity Notification Policy by ID

**Slug:** `DATAROBOT_GET_ENTITY_NOTIFICATION_POLICY_BY_ID`

Tool to retrieve an entity notification policy by ID. Use when you need to fetch details about a specific notification policy for a deployment or custom job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `policy_id` | string | Yes | The ID of the notification policy to retrieve. |
| `related_entity_id` | string | Yes | The ID of the related entity (deployment or custom job). |
| `related_entity_type` | string ("deployment" | "customjob") | Yes | Type of related entity: 'deployment' for deployments or 'customjob' for custom jobs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Entity Notification Policy Templates

**Slug:** `DATAROBOT_GET_ENTITY_NOTIFICATION_POLICY_TEMPLATES`

Tool to list entity notification policy templates for a specific entity type. Use when retrieving notification policy templates for deployments or custom jobs.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many notification policy templates to return. |
| `offset` | integer | No | How many notification policy templates to skip. |
| `namePart` | string | No | Only return notification policy templates whose names contain the given substring. |
| `channelId` | string | No | Return policies with this channel. |
| `eventGroup` | string ("secure_config.all" | "dataset.all" | "file.all" | "comment.all" | "invite_job.all" | "deployment_prediction_explanations_computation.all" | "model_deployments.critical_health" | "model_deployments.critical_frequent_health_change" | "model_deployments.frequent_health_change" | "model_deployments.health" | "model_deployments.retraining_policy" | "inference_endpoints.health" | "model_deployments.management_agent" | "model_deployments.management_agent_health" | "prediction_request.all" | "challenger_management.all" | "challenger_replay.all" | "model_deployments.all" | "project.all" | "perma_delete_project.all" | "users_delete.all" | "applications.all" | "model_version.stage_transitions" | "model_version.all" | "use_case.all" | "batch_predictions.all" | "change_requests.all" | "custom_job_run.all" | "custom_job_run.unsuccessful" | "insights_computation.all" | "notebook_schedule.all" | "monitoring.all") | No | Event group filter for notification policy templates. |
| `relatedEntityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "customjob" | "Customjob" | "CUSTOMJOB") | Yes | Type of related entity (deployment or customjob, case-insensitive). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Entity Notification Policy Template by ID

**Slug:** `DATAROBOT_GET_ENTITY_NOTIFICATION_POLICY_TEMPLATES_BY_ID`

Tool to retrieve a specific entity notification policy template by ID. Use when you need to fetch details about a specific notification policy template for a deployment or custom job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `policyId` | string | Yes | The ID of the notification policy template. |
| `relatedEntityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "customjob" | "Customjob" | "CUSTOMJOB") | Yes | Type of related entity (deployment or customjob, case-insensitive). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Execution Environment

**Slug:** `DATAROBOT_GET_EXECUTION_ENVIRONMENT`

Tool to retrieve details about a specific execution environment by its ID. Use when you need to check environment configuration, versions, and deployment status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `environmentId` | string | Yes | The ID of the execution environment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Execution Environment Version

**Slug:** `DATAROBOT_GET_EXECUTION_ENVIRONMENTS_VERSIONS`

Tool to retrieve a specific execution environment version by ID. Use when you need details about a particular environment version's build status, Docker context, and configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `environmentId` | string | Yes | Execution environment ID. |
| `environmentVersionId` | string | Yes | Execution environment version ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get External Data Driver

**Slug:** `DATAROBOT_GET_EXTERNAL_DATA_DRIVERS`

Tool to retrieve external data driver details by driver ID. Use when you need driver metadata, configuration, or authentication types.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `driverId` | string | Yes | Driver ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get External Data Source

**Slug:** `DATAROBOT_GET_EXTERNAL_DATA_SOURCE`

Tool to retrieve external data source details by ID. Use when you need to inspect data source configuration or metadata.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataSourceId` | string | Yes | The ID of the Data Source. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get External Data Store

**Slug:** `DATAROBOT_GET_EXTERNAL_DATA_STORES`

Tool to retrieve external data store details by ID. Use when you need information about a specific data store configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataStoreId` | string | Yes | ID of the data store to retrieve |
| `substituteUrlParameters` | string ("false" | "False" | "true" | "True") | No | Whether to substitute URL parameters in the returned data store configuration; a string boolean, accepted in either casing. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
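
Note that `substituteUrlParameters` is a string enum rather than a JSON boolean, so a caller holding a Python `bool` must serialize it into one of the accepted values. A tiny helper for that:

```python
def substitute_url_parameters_flag(enabled: bool) -> str:
    """Serialize a Python bool into the string form accepted by the
    substituteUrlParameters query parameter ('true' or 'false')."""
    return "true" if enabled else "false"
```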

### Get External Driver Configurations

**Slug:** `DATAROBOT_GET_EXTERNAL_DRIVER_CONFIGURATIONS`

Tool to retrieve external driver configuration details by ID. Use when you need driver configuration metadata including JDBC settings, field schemas, and authentication types.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `configurationId` | string | Yes | Driver configuration ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get External OAuth Provider

**Slug:** `DATAROBOT_GET_EXTERNAL_O_AUTH_PROVIDER`

Tool to retrieve an external OAuth provider by ID. Use when you need the provider's configuration and connection details.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `providerId` | string | Yes | The unique identifier of the external OAuth provider to retrieve. Obtain this from the List External OAuth Providers action. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Feature List

**Slug:** `DATAROBOT_GET_FEATURELIST`

Tool to retrieve a specific feature list by ID. Use when you need detailed information about a particular feature list in a DataRobot project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | Unique identifier of the DataRobot project. Obtain from DATAROBOT_LIST_PROJECTS. |
| `featurelistId` | string | Yes | Unique identifier of the feature list to retrieve. Obtain from DATAROBOT_LIST_FEATURELISTS. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Frozen Model

**Slug:** `DATAROBOT_GET_FROZEN_MODEL`

Tool to retrieve a frozen model from a DataRobot project. Use when you need details about a specific frozen model's configuration and performance metrics.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The model ID |
| `project_id` | string | Yes | The project ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get GenAI Chat

**Slug:** `DATAROBOT_GET_GENAI_CHAT`

Tool to retrieve a GenAI chat by ID. Use when you need to fetch chat details including status, associated LLM blueprint, and prompt count.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The chat ID to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get GenAI Chat Prompt

**Slug:** `DATAROBOT_GET_GENAI_CHAT_PROMPT`

Tool to retrieve a GenAI chat prompt by ID. Use when you need to fetch details about a specific chat prompt interaction.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The unique identifier for the chat prompt to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get GenAI Comparison Chat

**Slug:** `DATAROBOT_GET_GENAI_COMPARISON_CHATS`

Tool to retrieve a GenAI comparison chat by ID. Use when you need to fetch details about a specific comparison chat in DataRobot's GenAI playground.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID of the comparison chat to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get GenAI Custom Model LLM Validations

**Slug:** `DATAROBOT_GET_GENAI_CUSTOM_MODEL_LLM_VALIDATIONS`

Tool to retrieve a GenAI custom model LLM validation by ID. Use when you need validation details, status, or configuration for a custom model deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `validation_id` | string | Yes | The ID of the custom model LLM validation to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get GenAI LLM

**Slug:** `DATAROBOT_GET_GENAI_LLM`

Tool to retrieve details for a specific GenAI LLM by its ID. Use when you need to get configuration, settings, or metadata for a particular language model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `llm_id` | string | Yes | The unique identifier for the LLM resource to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get GenAI LLM Test Suite

**Slug:** `DATAROBOT_GET_GENAI_LLM_TEST_SUITE`

Tool to retrieve a GenAI LLM test suite by ID. Use when you need details about a specific LLM test suite configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The identifier of the LLM test suite to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get GenAI OOTB Metric Configuration

**Slug:** `DATAROBOT_GET_GENAI_OOTB_METRIC_CONFIGURATION`

Tool to retrieve a GenAI OOTB metric configuration by ID. Use when you need details about a specific out-of-the-box metric configuration for GenAI playgrounds.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | ID of the OOTB metric configuration to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get GenAI Playground

**Slug:** `DATAROBOT_GET_GENAI_PLAYGROUNDS`

Tool to retrieve a DataRobot GenAI playground by ID. Use when you need playground details for GenAI operations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The unique identifier of the playground to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Guard Configuration

**Slug:** `DATAROBOT_GET_GUARD_CONFIGURATION`

Tool to retrieve a DataRobot guard configuration by ID. Use when you need to inspect or verify guard settings for custom models, playgrounds, or other entities.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `configId` | string | Yes | ID of the guard configuration to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Guard Template

**Slug:** `DATAROBOT_GET_GUARD_TEMPLATES`

Tool to retrieve information about a guard template by ID. Use when you need details about a specific guardrail template.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `templateId` | string | Yes | ID of the guard template to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Data Quality Report for Dataset Version Feature

**Slug:** `DATAROBOT_GET_INSIGHTS_DATA_QUALITY_REPORT_DATASET_VERSIONS`

Tool to retrieve data quality report for a feature of a dataset version. Use when you need to assess data quality issues for a specific feature.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityId` | string | Yes | The ID of the entity (dataset version). |
| `featureName` | string | Yes | The name of the feature that the report is retrieved for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Data Quality Summary for Dataset

**Slug:** `DATAROBOT_GET_INSIGHTS_DATA_QUALITY_SUMMARY_DATASETS`

Tool to retrieve data quality summary for a dataset. Use when you need to assess data quality issues, check for anomalies, or review data quality check results before modeling.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityId` | string | Yes | The ID of the entity (dataset, dataset version, or project) |
| `featureListId` | string | No | The ID of the feature list to provide the summary for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
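Since the `data` output field is a string, quality results typically need to be parsed as JSON before inspection. The sketch below extracts the checks that reported issues; the payload shape (a `checks` list with `check` and `issueCount` fields) is a hypothetical illustration, not the documented response schema.

```python
# Sketch of post-processing a data quality summary payload.
# The field names used here are assumptions for illustration only.
import json

def failing_checks(data: str) -> list[str]:
    """Return the names of quality checks that reported at least one issue."""
    payload = json.loads(data)
    return [c["check"] for c in payload.get("checks", []) if c.get("issueCount", 0) > 0]

sample = json.dumps({
    "checks": [
        {"check": "outliers", "issueCount": 3},
        {"check": "missing_values", "issueCount": 0},
    ]
})
print(failing_checks(sample))  # ['outliers']
```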

### Get Data Quality Summary for Dataset Version

**Slug:** `DATAROBOT_GET_INSIGHTS_DATA_QUALITY_SUMMARY_DATASET_VERSIONS`

Tool to retrieve data quality summary for a dataset version. Use when you need to check data quality issues, warnings, or recommendations for a specific dataset version.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityId` | string | Yes | The ID of the dataset version entity. |
| `featureListId` | string | No | The ID of the feature list to provide the summary for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Model Feature Effects Insights

**Slug:** `DATAROBOT_GET_INSIGHTS_FEATURE_EFFECTS_MODELS`

Tool to retrieve feature effects insights for a DataRobot model. Use when analyzing feature importance and impact on model predictions for a specific data source (validation, training, or backtest).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of items to return per page. Maximum value is 10. |
| `offset` | integer | No | The number of items to skip before starting to collect the result set. |
| `source` | string ("validation" | "training" | "backtest_0" | "backtest_1" | "backtest_2" | "backtest_3" | "backtest_4" | "backtest_5" | "backtest_6" | "backtest_7" | "backtest_8" | "backtest_9" | "backtest_10" | "backtest_11" | "backtest_12" | "backtest_13" | "backtest_14" | "backtest_15" | "backtest_16" | "backtest_17" | "backtest_18" | "backtest_19" | "backtest_20" | "holdout" | "backtest_0_training" | "backtest_1_training" | "backtest_2_training" | "backtest_3_training" | "backtest_4_training" | "backtest_5_training" | "backtest_6_training" | "backtest_7_training" | "backtest_8_training" | "backtest_9_training" | "backtest_10_training" | "backtest_11_training" | "backtest_12_training" | "backtest_13_training" | "backtest_14_training" | "backtest_15_training" | "backtest_16_training" | "backtest_17_training" | "backtest_18_training" | "backtest_19_training" | "backtest_20_training" | "holdout_training") | Yes | The subset of data used to compute the insight (e.g., validation, training, backtest_0, holdout). |
| `entityId` | string | Yes | The ID of the model to retrieve feature effects for. |
| `dataSliceId` | string | No | ID of the data slice to filter feature effects. If not specified, returns feature effects for all slices. |
| `unslicedOnly` | string ("false" | "False" | "true" | "True") | No | Filter for unsliced insights only. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Feature Impact Insights

**Slug:** `DATAROBOT_GET_INSIGHTS_FEATURE_IMPACT_MODELS`

Tool to retrieve feature impact insights for a DataRobot model. Use when you need to understand which features are most important in model predictions. Supports filtering by data slice, source partition, and pagination for large result sets.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of items to return per page. |
| `offset` | integer | No | The number of items to skip before starting to collect the result set. |
| `source` | string ("training" | "backtest_1_training" | "backtest_2_training" | "backtest_3_training" | "backtest_4_training" | "backtest_5_training" | "backtest_6_training" | "backtest_7_training" | "backtest_8_training" | "backtest_9_training" | "backtest_10_training" | "backtest_11_training" | "backtest_12_training" | "backtest_13_training" | "backtest_14_training" | "backtest_15_training" | "backtest_16_training" | "backtest_17_training" | "backtest_18_training" | "backtest_19_training" | "backtest_20_training" | "holdout_training") | No | The subset of data used to compute the insight. |
| `entityId` | string | Yes | The ID of the model to retrieve feature impact insights for. |
| `dataSliceId` | string | No | ID of the data slice to filter feature impact insights. If not specified, returns insights for all slices. |
| `unslicedOnly` | string ("false" | "False" | "true" | "True") | No | Return only insights without a data_slice_id. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
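The `limit`/`offset` parameters support paging through large result sets. A generic paging loop looks like the sketch below, where `fetch_page` stands in for the tool call; here it is stubbed over an in-memory list so only the paging logic is demonstrated.

```python
# Sketch of paging through insights using limit/offset parameters.
# `fetch_page` is a stub standing in for the actual tool invocation.
def collect_all(fetch_page, limit=5):
    """Accumulate items across pages until a short page signals the end."""
    items, offset = [], 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        items.extend(page)
        if len(page) < limit:
            return items
        offset += limit

rows = [{"featureName": f"f{i}"} for i in range(12)]
stub = lambda limit, offset: rows[offset:offset + limit]
print(len(collect_all(stub)))  # 12
```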

### Get Lift Chart Insights

**Slug:** `DATAROBOT_GET_INSIGHTS_LIFT_CHART_MODELS`

Tool to retrieve Lift chart insights for a DataRobot model. Use when you need to analyze model performance across different population segments, comparing predicted vs actual outcomes. Supports filtering by data slice, source partition, and pagination for large result sets.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of items to return per page. |
| `offset` | integer | No | The number of items to skip before starting to collect the result set. |
| `source` | string ("validation" | "crossValidation" | "holdout" | "externalTestSet" | "backtest_2" | "backtest_3" | "backtest_4" | "backtest_5" | "backtest_6" | "backtest_7" | "backtest_8" | "backtest_9" | "backtest_10" | "backtest_11" | "backtest_12" | "backtest_13" | "backtest_14" | "backtest_15" | "backtest_16" | "backtest_17" | "backtest_18" | "backtest_19" | "backtest_20") | No | The subset of data used to compute the insight. |
| `entityId` | string | Yes | The ID of the model to retrieve Lift chart insights for. |
| `dataSliceId` | string | No | ID of the data slice to filter Lift chart insights. If not specified, returns insights for all slices. |
| `unslicedOnly` | string ("false" | "False" | "true" | "True") | No | Return only insights without a data_slice_id. |
| `externalDatasetId` | string | No | The ID of the external dataset to filter insights by. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
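Lift chart data compares mean actual and predicted outcomes per population bin. As a hedged illustration, the sketch below derives each bin's lift (its actual rate relative to the overall rate); the bin field names are assumptions, not the documented schema.

```python
# Sketch of computing per-bin lift from Lift chart bins.
# Field names ("actual", "binWeight") are illustrative assumptions.
def lift_per_bin(bins):
    """Lift = bin actual rate divided by the weighted overall rate."""
    total_actual = sum(b["actual"] * b["binWeight"] for b in bins)
    total_weight = sum(b["binWeight"] for b in bins)
    overall = total_actual / total_weight
    return [b["actual"] / overall for b in bins]

bins = [
    {"actual": 0.75, "binWeight": 100},  # high-response segment
    {"actual": 0.25, "binWeight": 100},  # low-response segment
]
print(lift_per_bin(bins))  # [1.5, 0.5]
```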

### Get ROC Curve Insights for Model

**Slug:** `DATAROBOT_GET_INSIGHTS_ROC_CURVE_MODELS`

Tool to retrieve paginated ROC curve insights for a specific model. Use when you need to analyze model performance using ROC curves for different data sources or slices.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of items to return per page. |
| `offset` | integer | No | The number of items to skip before starting to collect the result set (for pagination). |
| `source` | string ("validation" | "crossValidation" | "holdout" | "externalTestSet" | "backtest_2" | "backtest_3" | "backtest_4" | "backtest_5" | "backtest_6" | "backtest_7" | "backtest_8" | "backtest_9" | "backtest_10" | "backtest_11" | "backtest_12" | "backtest_13" | "backtest_14" | "backtest_15" | "backtest_16" | "backtest_17" | "backtest_18" | "backtest_19" | "backtest_20") | No | The subset of data used to compute the ROC curve insight. |
| `entityId` | string | Yes | The ID of the model to retrieve ROC curve insights for. |
| `dataSliceId` | string | No | ID of the data slice to filter insights by. If not specified, returns insights for all slices. |
| `unslicedOnly` | string ("false" | "False" | "true" | "True") | No | Return only insights without a data_slice_id. |
| `externalDatasetId` | string | No | The ID of the external dataset to filter insights by. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
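A common follow-up to retrieving ROC curve points is computing AUC with the trapezoid rule. The sketch below assumes each point carries `falsePositiveRate` and `truePositiveRate` fields; treat those names as illustrative rather than the documented schema.

```python
# Sketch of computing AUC from ROC curve points via the trapezoid rule.
# Point field names are assumptions for illustration only.
def auc_from_roc(points):
    pts = sorted((p["falsePositiveRate"], p["truePositiveRate"]) for p in points)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2  # trapezoid between adjacent points
    return area

roc = [
    {"falsePositiveRate": 0.0, "truePositiveRate": 0.0},
    {"falsePositiveRate": 0.5, "truePositiveRate": 1.0},
    {"falsePositiveRate": 1.0, "truePositiveRate": 1.0},
]
print(auc_from_roc(roc))  # 0.75
```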

### Get SHAP Distributions Insights

**Slug:** `DATAROBOT_GET_INSIGHTS_SHAP_DISTRIBUTIONS_MODELS`

Tool to retrieve SHAP (SHapley Additive exPlanations) distribution insights for a DataRobot model. Use when you need to understand how feature values contribute to model predictions across the dataset. SHAP distributions show how features impact predictions for different data subsets, helping explain model behavior. Supports filtering by data slice, source partition, external datasets, and pagination for large result sets.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of items to return per page. |
| `accept` | string ("application/json" | "text/csv") | No | Requested MIME type for the returned data. |
| `offset` | integer | No | The number of items to skip before starting to collect the result set. |
| `source` | string ("backtest_0" | "backtest_0_training" | "backtest_1" | "backtest_1_training" | "backtest_2" | "backtest_2_training" | "backtest_3" | "backtest_3_training" | "backtest_4" | "backtest_4_training" | "backtest_5" | "backtest_5_training" | "backtest_6" | "backtest_6_training" | "backtest_7" | "backtest_7_training" | "backtest_8" | "backtest_8_training" | "backtest_9" | "backtest_9_training" | "backtest_10" | "backtest_10_training" | "backtest_11" | "backtest_11_training" | "backtest_12" | "backtest_12_training" | "backtest_13" | "backtest_13_training" | "backtest_14" | "backtest_14_training" | "backtest_15" | "backtest_15_training" | "backtest_16" | "backtest_16_training" | "backtest_17" | "backtest_17_training" | "backtest_18" | "backtest_18_training" | "backtest_19" | "backtest_19_training" | "backtest_20" | "backtest_20_training" | "externalTestSet" | "holdout" | "holdout_training" | "training" | "validation") | No | The subset of data used to compute the insight. |
| `entityId` | string | Yes | The ID of the model to retrieve SHAP distributions insights for. |
| `seriesId` | string | No | The series ID used to filter records. Required for multiseries time-series projects to identify which series to analyze. |
| `dataSliceId` | string | No | ID of the data slice to filter SHAP distributions insights. If not specified, returns insights for all slices. |
| `quickCompute` | boolean | No | When enabled (default true), limits the rows used from the selected subset (training sample or slice) for faster computation. Disable for complete but slower computation. |
| `unslicedOnly` | string ("false" | "False" | "true" | "True") | No | Return only insights without a data_slice_id. |
| `featuresOrderBy` | string ("featureImpact" | "-featureImpact" | "featureName" | "-featureName") | No | Order SHAP distributions by the specified field value. |
| `forecastDistance` | integer | No | The forecast distance used to retrieve insight data. For time-series projects, specifies how many time units ahead to analyze. |
| `externalDatasetId` | string | No | The ID of the external dataset to filter SHAP distributions insights. |
| `featureFilterName` | string | No | The name of a specific feature to return. Use when analyzing a particular feature's SHAP distribution. |
| `featureFilterCount` | integer | No | The maximum number of features to return in the SHAP distributions. Use to focus on top N features. |
| `predictionFilterRowCount` | integer | No | The maximum number of distribution rows to return. Limits the number of prediction rows included in the response. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
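With this many optional parameters, only the ones actually set should be sent so that server-side defaults (such as `quickCompute`) still apply. A minimal sketch of assembling the request parameters, using names from the table above:

```python
# Sketch of assembling query parameters for a SHAP distributions request:
# unset (None) values are dropped so server-side defaults remain in effect.
def build_params(**kwargs):
    return {k: v for k, v in kwargs.items() if v is not None}

params = build_params(
    entityId="64f0c0ffee64f0c0ffee64f0",
    source="validation",
    limit=10,
    dataSliceId=None,               # not set -> omitted from the request
    featuresOrderBy="-featureImpact",
)
print(sorted(params))  # ['entityId', 'featuresOrderBy', 'limit', 'source']
```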

### Get SHAP Impact Insights for Model

**Slug:** `DATAROBOT_GET_INSIGHTS_SHAP_IMPACT_MODELS`

Tool to retrieve paginated SHAP Impact insights for a specific model. Use when analyzing feature importance and impact on model predictions for a given model ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of items to return. Use this to control page size for pagination. |
| `offset` | integer | No | The number of items to skip before starting to collect the result set. Use this with limit for pagination. |
| `source` | string ("backtest_0" | "backtest_0Training" | "backtest_1" | "backtest_10" | "backtest_10Training" | "backtest_11" | "backtest_11Training" | "backtest_12" | "backtest_12Training" | "backtest_13" | "backtest_13Training" | "backtest_14" | "backtest_14Training" | "backtest_15" | "backtest_15Training" | "backtest_16" | "backtest_16Training" | "backtest_17" | "backtest_17Training" | "backtest_18" | "backtest_18Training" | "backtest_19" | "backtest_19Training" | "backtest_1Training" | "backtest_2" | "backtest_20" | "backtest_20Training" | "backtest_2Training" | "backtest_3" | "backtest_3Training" | "backtest_4" | "backtest_4Training" | "backtest_5" | "backtest_5Training" | "backtest_6" | "backtest_6Training" | "backtest_7" | "backtest_7Training" | "backtest_8" | "backtest_8Training" | "backtest_9" | "backtest_9Training" | "externalTestSet" | "holdout" | "holdoutTraining" | "training" | "validation") | No | Subset of data used to compute the insight. |
| `entityId` | string | Yes | The ID of the model to retrieve SHAP Impact insights for. |
| `dataSliceId` | string | No | ID of the data slice to filter insights. If not specified, returns insights for all slices. |
| `unslicedOnly` | string ("false" | "False" | "true" | "True") | No | Whether to return only insights without a data_slice_id. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get SHAP Preview Insights for Model

**Slug:** `DATAROBOT_GET_INSIGHTS_SHAP_PREVIEW_MODELS`

Tool to retrieve SHAP Preview insights for a DataRobot model. Use when analyzing feature importance and SHAP values for model predictions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of items to return. |
| `Accept` | string ("application/json" | "text/csv") | No | Requested MIME type for the returned data. |
| `offset` | integer | No | The number of items to skip before starting to collect the result set. |
| `source` | string ("backtest_0" | "backtest_0_training" | "backtest_1" | "backtest_10" | "backtest_10_training" | "backtest_11" | "backtest_11_training" | "backtest_12" | "backtest_12_training" | "backtest_13" | "backtest_13_training" | "backtest_14" | "backtest_14_training" | "backtest_15" | "backtest_15_training" | "backtest_16" | "backtest_16_training" | "backtest_17" | "backtest_17_training" | "backtest_18" | "backtest_18_training" | "backtest_19" | "backtest_19_training" | "backtest_1_training" | "backtest_2" | "backtest_20" | "backtest_20_training" | "backtest_2_training" | "backtest_3" | "backtest_3_training" | "backtest_4" | "backtest_4_training" | "backtest_5" | "backtest_5_training" | "backtest_6" | "backtest_6_training" | "backtest_7" | "backtest_7_training" | "backtest_8" | "backtest_8_training" | "backtest_9" | "backtest_9_training" | "externalTestSet" | "holdout" | "holdout_training" | "training" | "validation") | No | Data source for SHAP preview insights. |
| `entityId` | string | Yes | The ID of the model to retrieve SHAP Preview insights for. |
| `seriesId` | string | No | The series ID used to filter records (for multiseries projects). |
| `dataSliceId` | string | No | ID of the data slice to filter insights. |
| `quickCompute` | boolean | No | When enabled (the default), limits the rows used from the selected source subset for faster computation. When disabled, all rows are used. |
| `unslicedOnly` | string ("false" | "False" | "true" | "True") | No | Whether to return only insights without a data slice ID. |
| `forecastDistance` | integer | No | The forecast distance used to retrieve insight data. |
| `externalDatasetId` | string | No | The ID of the external dataset to filter by. |
| `featureFilterName` | string | No | The names of specific features to return for each preview. |
| `featureFilterCount` | integer | No | The maximum number of features to return for each preview. |
| `predictionFilterOperator` | string ("eq" | "in" | "<" | ">" | "between" | "notBetween") | No | Operator to apply to filtered predictions. |
| `predictionFilterRowCount` | integer | No | The maximum number of preview rows to return. |
| `predictionFilterPercentiles` | integer | No | The number of percentile intervals to select from the total number of rows. This field supersedes predictionFilterRowCount if both are present. |
| `predictionFilterOperandFirst` | number | No | The first operand to apply to filtered predictions. |
| `predictionFilterOperandSecond` | number | No | The second operand to apply to filtered predictions. Required when using 'between' or 'notBetween' operators. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
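As noted above, `predictionFilterOperandSecond` is required for the `between` and `notBetween` operators. A small pre-flight check along these lines (a sketch, not part of the tool itself) can catch the mistake before the call is made:

```python
# Sketch of validating SHAP preview prediction filter parameters.
RANGE_OPERATORS = {"between", "notBetween"}

def validate_prediction_filter(operator, first=None, second=None):
    """Raise if a range operator is missing its second operand."""
    if operator in RANGE_OPERATORS and second is None:
        raise ValueError(f"'{operator}' requires predictionFilterOperandSecond")
    return {
        "predictionFilterOperator": operator,
        "predictionFilterOperandFirst": first,
        "predictionFilterOperandSecond": second,
    }

print(validate_prediction_filter(">", first=0.5)["predictionFilterOperator"])  # >
```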

### Get MLOps Compute Bundle

**Slug:** `DATAROBOT_GET_MLOPS_COMPUTE_BUNDLE`

Tool to retrieve a specific MLOps compute bundle by ID. Use when you need to check resource specifications (CPU, memory, GPU) for deployments or custom models.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `resource_request_bundle_id` | string | Yes | ID of the compute bundle to retrieve (e.g., 'cpu.nano') |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Model Number of Iterations Trained

**Slug:** `DATAROBOT_GET_MODEL_NUM_ITERATIONS_TRAINED`

Tool to retrieve the number of iterations trained for a DataRobot model. Use when you need early stopping information for a specific model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | ID of the model to retrieve iteration count for |
| `project_id` | string | Yes | ID of the DataRobot project containing the model |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Model Package

**Slug:** `DATAROBOT_GET_MODEL_PACKAGE`

Tool to retrieve a model package by ID. Use when you need detailed information about a specific model package including capabilities, datasets, target info, and deployment status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelPackageId` | string | Yes | ID of the model package to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Model Word Cloud

**Slug:** `DATAROBOT_GET_MODEL_WORD_CLOUD`

Tool to retrieve word cloud data for a DataRobot text-based model. Returns the most important ngrams (words/phrases) with their coefficients, frequencies, and counts. Use when analyzing feature importance for NLP models or understanding which text features drive predictions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The unique identifier of the model within the project |
| `project_id` | string | Yes | The unique identifier of the DataRobot project |
| `exclude_stop_words` | string ("false" | "False" | "true" | "True") | No | Whether to exclude stop words from the word cloud results. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
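Since the word cloud pairs each ngram with a coefficient (direction and strength of its effect) and a frequency, a typical follow-up is ranking ngrams by absolute coefficient. The field names in this sketch are assumed from the description above, not a documented schema.

```python
# Sketch of ranking word cloud ngrams by absolute coefficient.
# Field names ("ngram", "coefficient", "frequency") are assumptions.
def top_ngrams(ngrams, n=3):
    """Return the n ngrams with the strongest effect, in either direction."""
    ranked = sorted(ngrams, key=lambda w: abs(w["coefficient"]), reverse=True)
    return [w["ngram"] for w in ranked[:n]]

sample = [
    {"ngram": "refund", "coefficient": -0.9, "frequency": 0.02},
    {"ngram": "great", "coefficient": 0.7, "frequency": 0.05},
    {"ngram": "the", "coefficient": 0.01, "frequency": 0.30},
]
print(top_ngrams(sample, n=2))  # ['refund', 'great']
```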

### Get Notebook

**Slug:** `DATAROBOT_GET_NOTEBOOK`

Tool to retrieve a specific DataRobot notebook by ID. Use when you need detailed information about a specific notebook.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID of the notebook to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Notebook Code Snippet

**Slug:** `DATAROBOT_GET_NOTEBOOK_CODE_SNIPPET`

Tool to retrieve a notebook code snippet by ID. Use when you need to fetch details of a specific code snippet from DataRobot notebooks.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `snippet_id` | string | Yes | ID of the notebook code snippet to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Notebook Environment Variables

**Slug:** `DATAROBOT_GET_NOTEBOOK_ENVIRONMENT_VARIABLES`

Tool to retrieve notebook environment variables by ID. Use when you need to view configured environment variables for a specific notebook.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The notebook ID whose environment variables to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Notebook Execution Environment

**Slug:** `DATAROBOT_GET_NOTEBOOK_EXECUTION_ENVIRONMENT`

Tool to retrieve a notebook execution environment by ID. Use when you need details about a specific execution environment configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID of the notebook execution environment to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Notebook Execution Status

**Slug:** `DATAROBOT_GET_NOTEBOOK_EXECUTION_STATUS`

Tool to retrieve the execution status of a DataRobot notebook. Use when you need to check the current execution state of a notebook, including running status and queued cells.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `notebook_id` | string | Yes | The notebook ID (24-character hex string ObjectId format). Note: Only certain notebook types support execution status - notebooks that don't support this will return 'Operation not supported for notebook type.' |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
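Checking execution status usually means polling until the notebook leaves a running state. The sketch below shows that loop with `get_status` as a stub standing in for the tool call; the status values (`running`, `queued`, `completed`) are illustrative assumptions.

```python
# Sketch of polling notebook execution status until it settles.
# `get_status` is a stub for the actual tool call; statuses are assumed.
def wait_until_done(get_status, poll=lambda: None, max_polls=10):
    for _ in range(max_polls):
        status = get_status()
        if status not in ("running", "queued"):
            return status
        poll()  # e.g. time.sleep(5) between checks in real use
    raise TimeoutError("notebook still running after max_polls checks")

states = iter(["queued", "running", "completed"])
print(wait_until_done(lambda: next(states)))  # completed
```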

### Get Notebook Job

**Slug:** `DATAROBOT_GET_NOTEBOOK_JOBS`

Tool to retrieve a DataRobot notebook job by ID. Use when you need to inspect the configuration, schedule, or status of a specific notebook job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID of the notebook job to retrieve. Must be a valid 24-character hex ObjectId. |
| `useCaseId` | string | Yes | The ID of the use case associated with the notebook job. Required for authorization. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Notebook Revision by ID

**Slug:** `DATAROBOT_GET_NOTEBOOK_REVISIONS_BY_ID`

Tool to retrieve a specific notebook revision by notebook ID and revision ID. Use when you need details about a particular saved version of a notebook.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `notebookId` | string | Yes | Valid ObjectId (12-byte input or 24-character hex string) for the notebook |
| `revisionId` | string | Yes | Valid ObjectId (12-byte input or 24-character hex string) for the revision |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
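Several of the notebook tools require a valid ObjectId, i.e. a 24-character hex string. A quick client-side check like this sketch avoids a round trip on malformed IDs:

```python
# Sketch of validating the 24-character hex ObjectId format expected by
# the notebook and revision ID parameters.
import string

def is_object_id(value: str) -> bool:
    return len(value) == 24 and all(c in string.hexdigits for c in value)

print(is_object_id("64f0c0ffee64f0c0ffee64f0"))  # True
print(is_object_id("not-an-object-id"))          # False
```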

### Get Notification Channel Template

**Slug:** `DATAROBOT_GET_NOTIFICATION_CHANNEL_TEMPLATES`

Tool to retrieve a specific notification channel template by ID. Use when you need to fetch details about a notification channel template configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `channelId` | string | Yes | The ID of the notification channel template to retrieve. Obtain from DATAROBOT_LIST_NOTIFICATION_CHANNEL_TEMPLATES. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Notification Webhook Channel Tests

**Slug:** `DATAROBOT_GET_NOTIFICATION_WEBHOOK_CHANNEL_TESTS`

Tool to retrieve the status of a notification webhook channel test. Use when checking test results after creating a webhook channel test.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `notificationId` | string | Yes | The identifier of the notification webhook channel test to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get OpenTelemetry Metric Configuration

**Slug:** `DATAROBOT_GET_OTEL_METRICS_CONFIGS`

Tool to retrieve an OpenTelemetry metric configuration for a specific entity. Use when you need details about OTel metrics for deployments, use cases, or other DataRobot entities.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityId` | string | Yes | ID of the entity to which the metric belongs. |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs. |
| `otelMetricId` | string | Yes | The ID of the OpenTelemetry metric configuration. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get OTel Metrics Values Over Time Segments

**Slug:** `DATAROBOT_GET_OTEL_METRICS_VALUES_OVER_TIME_SEGMENTS`

Tool to get OpenTelemetry metric values grouped by segments attribute. Use when analyzing metrics across different segment values.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `endTime` | string | No | The end time of the metric list. |
| `entityId` | string | Yes | ID of the entity to which the metric belongs. |
| `otelName` | string | No | The OTel key of the metric. |
| `startTime` | string | No | The start time of the metric list. |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs. |
| `resolution` | string ("PT1M" | "PT5M" | "PT1H" | "P1D" | "P7D") | No | Time period resolution for metric values. |
| `aggregation` | string | No | The aggregation method used for metric display. |
| `segmentLimit` | integer | No | The maximum number of segment values to return when segmentValue is not provided. |
| `segmentValue` | string | No | The values for grouping metrics by segment. |
| `segmentAttribute` | string | Yes | Name of the attribute by which to group results. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
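The optional parameters combine timestamps with an ISO 8601 duration for `resolution`. A sketch of a hypothetical helper that assembles the query dict, dropping unset optionals and rejecting resolutions outside the allowed set:

```python
# Allowed ISO 8601 resolutions, per the parameter table above.
ALLOWED_RESOLUTIONS = {"PT1M", "PT5M", "PT1H", "P1D", "P7D"}

def build_segment_query(entity_id, entity_type, segment_attribute,
                        start_time=None, end_time=None,
                        resolution=None, segment_limit=None):
    """Assemble parameters for the over-time-segments tool (hypothetical helper)."""
    if resolution is not None and resolution not in ALLOWED_RESOLUTIONS:
        raise ValueError(f"unsupported resolution: {resolution!r}")
    params = {
        "entityId": entity_id,
        "entityType": entity_type,
        "segmentAttribute": segment_attribute,
    }
    optional = {"startTime": start_time, "endTime": end_time,
                "resolution": resolution, "segmentLimit": segment_limit}
    params.update({k: v for k, v in optional.items() if v is not None})
    return params

# Hypothetical entity ID; times are RFC3339 strings.
q = build_segment_query("68a1b2c3d4e5f60718293a4b", "deployment", "model_version",
                        start_time="2023-09-01T00:00:00Z", resolution="PT1H")
```

Omitting an optional key entirely, rather than sending `null`, keeps the request unambiguous.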

### Get OTel Metrics Values by Segments

**Slug:** `DATAROBOT_GET_OTEL_METRICS_VALUES_SEGMENTS`

Tool to retrieve OpenTelemetry metric values grouped by a segment attribute over a time period. Use when analyzing metrics segmented by attributes like model version or deployment ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `endTime` | string | No | The end time of the metric period in RFC3339 format (e.g., 2023-09-08T00:00:00Z). |
| `entityId` | string | Yes | ID of the entity to which the metric belongs. |
| `otelName` | string | No | The OTel key/name of the specific metric to retrieve. If not provided, all available metrics are returned. |
| `startTime` | string | No | The start time of the metric period in RFC3339 format (e.g., 2023-09-01T00:00:00Z). |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs (e.g., deployment, use_case). |
| `aggregation` | string | No | The aggregation method used for metric display (e.g., avg, sum, count, min, max). |
| `segmentLimit` | integer | No | The maximum number of segment values to return when segmentValue is not provided. |
| `segmentValue` | string | No | The specific values for grouping metrics by segment. If not provided, top segments are returned. |
| `segmentAttribute` | string | Yes | Name of the attribute by which to group results (e.g., model_version, deployment_id). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get OpenTelemetry Traces

**Slug:** `DATAROBOT_GET_OTEL_TRACES`

Tool to retrieve OpenTelemetry traces for monitoring and debugging. Use when you need to inspect trace details for deployments, use cases, or other entities.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `traceId` | string | Yes | OTel Trace ID (32 character hexadecimal string). |
| `entityId` | string | Yes | ID of the entity to which the trace belongs. |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the trace belongs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Project

**Slug:** `DATAROBOT_GET_PROJECT`

Tool to retrieve a DataRobot project by ID. Use when you need project metadata before further operations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `project_id` | string | Yes | ID of the DataRobot project to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Project Access Control

**Slug:** `DATAROBOT_GET_PROJECT_ACCESS_CONTROL`

Tool to list users with their roles on a project. Use after assigning permissions or when auditing project access. Example prompt: "List access control entries for project 5f6a7b8c9d0e1f2a3b4c5d6e, first page of 20."

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of items to return per page. Defaults to 20. |
| `offset` | integer | No | Number of items to skip for pagination. Defaults to 0 (start from beginning). |
| `userId` | string | No | Filter results to a specific user ID. |
| `username` | string | No | Filter results to a specific username. |
| `projectId` | string | Yes | Unique identifier of the project to retrieve access control for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
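With `limit` defaulting to 20 and `offset` to 0, collecting every access-control entry means paging until a short page comes back. A sketch under the assumption that `fetch_page` is any callable wrapping this tool (here simulated with an in-memory list):

```python
def collect_all(fetch_page, limit=20):
    """Page through limit/offset results until a short page signals the end."""
    items, offset = [], 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        items.extend(page)
        if len(page) < limit:
            return items
        offset += limit

# In-memory stand-in for the access-control endpoint (hypothetical records).
records = [{"userId": f"u{i}", "role": "USER"} for i in range(45)]
fake_fetch = lambda limit, offset: records[offset:offset + limit]
```

With 45 records and the default page size of 20, this makes three calls (20, 20, 5 items).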

### Get Project Blueprint

**Slug:** `DATAROBOT_GET_PROJECT_BLUEPRINT`

Tool to retrieve a blueprint by its ID. Use when you need blueprint metadata and model details for a specific project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `project_id` | string | Yes | The ID of the project containing the blueprint. |
| `blueprint_id` | string | Yes | The ID of the blueprint to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Project Blueprint JSON

**Slug:** `DATAROBOT_GET_PROJECT_BLUEPRINT_JSON`

Tool to retrieve the JSON representation of a DataRobot blueprint. Use when you need the detailed blueprint structure with task configurations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `project_id` | string | Yes | The ID of the project containing the blueprint. |
| `blueprint_id` | string | Yes | The ID of the blueprint to retrieve JSON for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Project Job

**Slug:** `DATAROBOT_GET_PROJECT_JOB`

Tool to retrieve details for an in-progress project job. Use after starting a project job to monitor its status before completion.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `job_id` | string | Yes | The ID of the job to retrieve details for. |
| `project_id` | string | Yes | The ID of the project containing the job to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
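Since this tool reports on in-progress jobs, a caller typically polls it until the job leaves its running states. A minimal polling sketch; the status strings are illustrative assumptions, not a confirmed list from the API:

```python
import time

# Illustrative in-progress statuses; the real set may differ.
IN_PROGRESS = ("queue", "inprogress")

def poll_until_done(get_status, interval=1.0, max_polls=60):
    """Call get_status repeatedly until it reports a terminal state."""
    for _ in range(max_polls):
        status = get_status()
        if status not in IN_PROGRESS:
            return status
        time.sleep(interval)
    raise TimeoutError("job did not finish within max_polls")

# Simulated status sequence standing in for repeated Get Project Job calls.
states = iter(["queue", "inprogress", "inprogress", "COMPLETED"])
final = poll_until_done(lambda: next(states), interval=0)
```

Bounding the loop with `max_polls` avoids hanging forever on a stuck job.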

### Get Project Multicategorical Invalid Format File

**Slug:** `DATAROBOT_GET_PROJECT_MULTICATEGORICAL_INVALID_FORMAT_FILE`

Tool to retrieve a file listing format errors in potential multicategorical features for a DataRobot project. Use when you need to inspect formatting issues in multicategorical features.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `project_id` | string | Yes | The ID of the project to retrieve the multicategorical invalid format file from. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Batch Type Transform Features Result

**Slug:** `DATAROBOT_GET_PROJECTS_BATCH_TYPE_TRANSFORM_FEATURES_RESULT`

Tool to retrieve the result of a batch variable type transformation. Use when you need to check the status and outcome of a feature type transformation job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `job_id` | integer | Yes | ID of the batch variable type transformation job. |
| `project_id` | string | Yes | The project containing transformed features. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Projects Duplicate Images

**Slug:** `DATAROBOT_GET_PROJECTS_DUPLICATE_IMAGES`

Tool to get a list of duplicate images containing the number of occurrences of each image. Use when analyzing Visual AI projects for duplicate images.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. |
| `column` | string | Yes | Column used to filter the list of duplicate images returned. |
| `offset` | integer | No | This many results will be skipped. |
| `projectId` | string | Yes | The project ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Project Feature Histograms

**Slug:** `DATAROBOT_GET_PROJECTS_FEATURE_HISTOGRAMS`

Tool to retrieve feature histogram data for a specific feature in a DataRobot project. Use when you need to visualize feature distributions or analyze feature value patterns.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `key` | string | No | Name of one of the top 50 keys for which the plot should be retrieved. Only required for summarized categorical features. |
| `binLimit` | integer | No | Maximum number of bins in the returned histogram plot. |
| `projectId` | string | Yes | ID of the DataRobot project. |
| `featureName` | string | Yes | The name of the feature. Note: DataRobot renames some features, so the feature name may not be the one from your original data. You can use List Features action to check the actual feature names. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Project Feature

**Slug:** `DATAROBOT_GET_PROJECTS_FEATURES`

Tool to retrieve detailed information about a specific feature in a DataRobot project. Use when you need to inspect feature properties, statistics, or metadata for a given feature name.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The ID of the DataRobot project |
| `featureName` | string | Yes | The name of the feature. Note: DataRobot renames some features, so the feature name may not be the one from your original data. You can use list_featurelists to check the feature name. For non-ASCII feature names, the feature name should be utf-8-encoded (before URL-quoting). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
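For non-ASCII feature names, the description above calls for UTF-8 encoding followed by URL-quoting. Python's standard library handles both in one step, since `urllib.parse.quote` encodes to UTF-8 by default before percent-encoding:

```python
from urllib.parse import quote

def encode_feature_name(name):
    """UTF-8 encode and percent-encode a feature name for use in a URL path."""
    return quote(name, safe="")

# A hypothetical Japanese feature name and one containing a space.
print(encode_feature_name("金額"))   # -> %E9%87%91%E9%A1%8D
print(encode_feature_name("a b"))    # -> a%20b
```

Passing `safe=""` also escapes `/`, which matters when the name is embedded in a path segment.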

### Get Modeling Featurelist

**Slug:** `DATAROBOT_GET_PROJECTS_MODELING_FEATURELISTS`

Tool to retrieve a single modeling featurelist by ID. Use when you need details about a specific featurelist within a DataRobot project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The project ID. |
| `featurelistId` | string | Yes | The featurelist ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Project Modeling Feature

**Slug:** `DATAROBOT_GET_PROJECTS_MODELING_FEATURES`

Tool to retrieve detailed information about a specific modeling feature in a DataRobot project. Use when you need to inspect modeling feature properties, statistics, importance, or metadata for a given feature name.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The ID of the DataRobot project |
| `featureName` | string | Yes | The name of the feature. Note: DataRobot renames some features, so the feature name may not be the one from your original data. You can use list features endpoint to check the feature name. For non-ASCII feature names, the feature name should be utf-8-encoded (before URL-quoting). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Project Model

**Slug:** `DATAROBOT_GET_PROJECTS_MODELS`

Tool to retrieve a model from a DataRobot project. Use when you need details about a specific model's configuration, performance metrics, and training settings.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The model ID |
| `project_id` | string | Yes | The project ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Lift Chart

**Slug:** `DATAROBOT_GET_PROJECTS_MODELS_LIFT_CHART`

Tool to retrieve lift chart data from a single source for a project model. Use when you need to analyze lift chart performance metrics for a specific data source like validation, crossValidation, holdout, or backtests.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `source` | string ("validation" | "crossValidation" | "holdout" | "backtest_2" | "backtest_3" | "backtest_4" | "backtest_5" | "backtest_6" | "backtest_7" | "backtest_8" | "backtest_9" | "backtest_10" | "backtest_11" | "backtest_12" | "backtest_13" | "backtest_14" | "backtest_15" | "backtest_16" | "backtest_17" | "backtest_18" | "backtest_19" | "backtest_20") | Yes | Source of the data (e.g., validation, crossValidation, holdout, or backtest_N) |
| `modelId` | string | Yes | The model ID |
| `projectId` | string | Yes | The project ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
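The `source` enum mixes three fixed names with numbered backtests, and note the numbering starts at `backtest_2`. A small validation sketch so a caller can reject an out-of-range source before issuing the request:

```python
VALID_STATIC_SOURCES = {"validation", "crossValidation", "holdout"}

def lift_chart_source(source):
    """Validate a lift-chart source name, including backtest_2 .. backtest_20."""
    if source in VALID_STATIC_SOURCES:
        return source
    if source.startswith("backtest_"):
        suffix = source[len("backtest_"):]
        if suffix.isdigit() and 2 <= int(suffix) <= 20:
            return source
    raise ValueError(f"invalid source: {source!r}")
```

The same enum applies to the ROC curve tool below, so the check can be shared.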

### Get Model ROC Curve

**Slug:** `DATAROBOT_GET_PROJECTS_MODELS_ROC_CURVE`

Tool to retrieve ROC curve data for a specific model from a single data source. Use when analyzing binary classification model performance across different thresholds.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `source` | string ("validation" | "crossValidation" | "holdout" | "backtest_2" | "backtest_3" | "backtest_4" | "backtest_5" | "backtest_6" | "backtest_7" | "backtest_8" | "backtest_9" | "backtest_10" | "backtest_11" | "backtest_12" | "backtest_13" | "backtest_14" | "backtest_15" | "backtest_16" | "backtest_17" | "backtest_18" | "backtest_19" | "backtest_20") | Yes | Source of the data to retrieve ROC curve from (e.g., validation, holdout, crossValidation, or backtest partitions). |
| `modelId` | string | Yes | ID of the model to retrieve ROC curve data for. |
| `projectId` | string | Yes | ID of the DataRobot project containing the model. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Projects Prediction Datasets

**Slug:** `DATAROBOT_GET_PROJECTS_PREDICTION_DATASETS`

Tool to get metadata of a specific prediction dataset in a DataRobot project. Use when you need to retrieve details about a dataset uploaded for prediction.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The dataset ID to query for |
| `projectId` | string | Yes | The project ID that owns the prediction dataset |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Project Rating Table Model

**Slug:** `DATAROBOT_GET_PROJECTS_RATING_TABLE_MODELS`

Tool to retrieve a rating table model from a DataRobot project. Use when you need details about a specific rating table model's configuration, performance metrics, and training settings.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The model to retrieve |
| `project_id` | string | Yes | The project to retrieve the model from |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Project Rating Table

**Slug:** `DATAROBOT_GET_PROJECTS_RATING_TABLES`

Tool to retrieve rating table information from a DataRobot project. Use when you need details about a specific rating table including validation status and associated models.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `project_id` | string | Yes | The project that owns this rating table |
| `rating_table_id` | string | Yes | The rating table ID to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Project Training Predictions

**Slug:** `DATAROBOT_GET_PROJECTS_TRAINING_PREDICTIONS`

Tool to retrieve training predictions for a project. Use when you need to access prediction results from model training for analysis or validation purposes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned |
| `offset` | integer | No | This many results will be skipped |
| `projectId` | string | Yes | Project ID to retrieve training predictions for |
| `predictionId` | string | Yes | Prediction ID to retrieve training predictions for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Quotas

**Slug:** `DATAROBOT_GET_QUOTAS`

Tool to retrieve a specific quota by ID. Use when you need details about resource quotas and policies.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `quotaId` | string | Yes | Unique identifier of the quota to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Quota Template

**Slug:** `DATAROBOT_GET_QUOTA_TEMPLATE`

Tool to retrieve a specific quota template by ID. Use when you need details of a particular quota template.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `quotaTemplateId` | string | Yes | Specific quota template ID to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Recipe Operation

**Slug:** `DATAROBOT_GET_RECIPE_OPERATION`

Tool to retrieve details of a specific operation from a wrangling recipe. Use when inspecting individual data transformation steps.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `recipe_id` | string | Yes | ID of the wrangling recipe to retrieve the operation from. |
| `operation_index` | integer | Yes | Zero-based index of the operation within the recipe. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Recipe

**Slug:** `DATAROBOT_GET_RECIPES`

Tool to retrieve a DataRobot wrangling recipe by ID. Use when you need details about a specific recipe.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `recipeId` | string | Yes | The ID of the recipe to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Recommended Settings

**Slug:** `DATAROBOT_GET_RECOMMENDED_SETTINGS`

Tool to retrieve configured recommended settings for an entity type. Use when you need to get recommended configuration settings for deployments or other entities.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityType` | string ("deployment" | "Deployment" | "DEPLOYMENT") | Yes | Type of the entity to get the recommended settings for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Registered Model

**Slug:** `DATAROBOT_GET_REGISTERED_MODEL`

Tool to retrieve a registered model by ID. Use when you need to fetch metadata about a specific registered model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `registeredModelId` | string | Yes | ID of the registered model to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Registered Model Version

**Slug:** `DATAROBOT_GET_REGISTERED_MODEL_VERSION`

Tool to retrieve a specific version of a registered model. Use when you need detailed information about a registered model version including its capabilities, datasets, target info, and deployment status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `versionId` | string | Yes | ID of the registered model's version. |
| `registeredModelId` | string | Yes | ID of the registered model. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Secure Config

**Slug:** `DATAROBOT_GET_SECURE_CONFIG`

Tool to retrieve a secure configuration by ID. Use when you need to inspect secure configuration metadata before performing operations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `secure_config_id` | string | Yes | The ID of the secure configuration to retrieve (32-36 character identifier). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Secure Config Schema

**Slug:** `DATAROBOT_GET_SECURE_CONFIG_SCHEMA`

Tool to retrieve a secure configuration schema by ID. Use when you need the schema definition for a secure config.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `secure_config_schema_id` | string | Yes | The ID of the secure configuration schema to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get Tenant Active Users

**Slug:** `DATAROBOT_GET_TENANT_ACTIVE_USERS`

Retrieve active users in a tenant over a date range. Returns a list of users who were active during the specified period. Use when an admin needs to audit user activity or generate usage reports. Note: The start date must be on or after 2025-07-01.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | Yes | Inclusive end date for the period (YYYY-MM-DD). Must be on or after the start date. |
| `start` | string | Yes | Inclusive start date for the period (YYYY-MM-DD). Must be on or after 2025-07-01. |
| `tenantId` | string | Yes | The ID of the tenant to retrieve active users for. Can be obtained from the Get Account Info action (tenant.id field). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
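
Since the two dates are inclusive and the start date has a documented floor, it can help to validate them before calling the tool. A minimal sketch, assuming the parameter names from the table above; `active_users_params` is a hypothetical helper, not part of the DataRobot API:

```python
from datetime import date

# Documented minimum for the start date of this tool.
MIN_START = date(2025, 7, 1)

def active_users_params(tenant_id: str, start: date, end: date) -> dict:
    """Build the query parameters for Get Tenant Active Users."""
    if start < MIN_START:
        raise ValueError("start must be on or after 2025-07-01")
    if end < start:
        raise ValueError("end must be on or after start")
    return {
        "tenantId": tenant_id,
        "start": start.isoformat(),  # inclusive, YYYY-MM-DD
        "end": end.isoformat(),      # inclusive, YYYY-MM-DD
    }
```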

### Get Use Case

**Slug:** `DATAROBOT_GET_USE_CASES`

Tool to retrieve a DataRobot use case by ID. Use when you need to get details about a specific use case including its metadata, associated resources, and member information.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `useCaseId` | string | Yes | The ID of the use case to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Get Use Cases Datasets

**Slug:** `DATAROBOT_GET_USE_CASES_DATASETS`

Tool to get dataset details in the scope of a use case. Use when you need detailed information about a specific dataset within a use case context.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The ID of the dataset |
| `useCaseId` | string | Yes | The ID linking the use case with the entity type |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Get User Group

**Slug:** `DATAROBOT_GET_USER_GROUP`

Tool to retrieve a user group by its ID. Use when you need the group's properties and permissions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `groupId` | string | Yes | The unique identifier of the user group to retrieve. Obtain this from the List User Groups action. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Get Value Tracker

**Slug:** `DATAROBOT_GET_VALUE_TRACKER`

Tool to retrieve a value tracker by ID. Use when you need to get details about a specific value tracker.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `valueTrackerId` | string | Yes | The id of the value tracker to retrieve |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Get Value Tracker Value Templates

**Slug:** `DATAROBOT_GET_VALUE_TRACKER_VALUE_TEMPLATES`

Tool to retrieve a value tracker value template by its type. Use when you need to understand the schema and parameters for classification or regression value tracking.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `template_type` | string ("classification" | "regression") | Yes | The type of value tracker template to retrieve: classification or regression |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Initialize Model Compliance Documentation

**Slug:** `DATAROBOT_INITIALIZE_MODEL_COMPLIANCE_DOCS`

Tool to initialize compliance documentation pre-processing for a model or model package. Use when you need to prepare a model for compliance document generation. The initialization is asynchronous. Poll the returned location URL to check status and wait for completion before generating compliance documents.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityId` | string | Yes | The ID of the model or model package the document corresponds to. This is a 24-character hex string representing the model ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
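
Because `entityId` must be a 24-character hex string, validating it locally avoids a round trip on bad input. A minimal sketch; `compliance_init_payload` is a hypothetical helper name, and the polling behavior described in its docstring is the asynchronous flow documented above:

```python
import re

HEX24 = re.compile(r"^[0-9a-f]{24}$")

def compliance_init_payload(entity_id: str) -> dict:
    """Validate the model ID and build the request body.

    After the initialization call returns, poll the returned location
    URL until the job reports completion before generating compliance
    documents (the operation is asynchronous).
    """
    if not HEX24.match(entity_id):
        raise ValueError("entityId must be a 24-character hex string")
    return {"entityId": entity_id}
```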

### Invite Users

**Slug:** `DATAROBOT_INVITE_USERS`

Tool to invite multiple users by email to join the DataRobot platform. Use when you need to send invitation emails to new users. The API returns a 202 status with a Location header to poll for job status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `orgId` | string | No | Organization ID to invite users to. If not specified, users are invited to the default organization. |
| `emails` | array | Yes | List of email addresses to invite (1-20 emails). Each must be a valid email format. |
| `language` | string ("ar_001" | "de_DE" | "en" | "es_419" | "fr" | "ja" | "ko" | "pt_BR" | "test" | "uk_UA") | No | Language options for invitation emails. |
| `seatType` | string ("Non-Builder User") | No | Seat type for invited users. |
| `resources` | array | No | List of resources (projects, datasets, etc.) to share with invited users (1-20 resources). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
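
The 1-20 email limit is easy to enforce before submitting. A minimal sketch of assembling the request body, assuming the field names from the table above; `invite_users_payload` is a hypothetical helper:

```python
def invite_users_payload(emails, org_id=None, language="en"):
    """Build the Invite Users request body (1-20 email addresses)."""
    if not 1 <= len(emails) <= 20:
        raise ValueError("emails must contain between 1 and 20 addresses")
    body = {"emails": list(emails), "language": language}
    if org_id is not None:
        body["orgId"] = org_id  # omit to invite to the default organization
    return body
```

The API responds with a 202 and a Location header; poll that URL to track the invitation job.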

### Link Entity to Use Case

**Slug:** `DATAROBOT_LINK_ENTITY_TO_USE_CASE`

Tool to link a single entity to a DataRobot use case. Use when you need to associate a project, dataset, notebook, deployment, or other resource with a use case for organizational purposes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityId` | string | Yes | The ID of the entity to link to the use case (e.g., dataset ID, project ID, notebook ID). |
| `workflow` | string ("migration" | "creation" | "move" | "unspecified") | No | The workflow that is attaching this entity. Used for analytics only, does not affect the operation. Options: migration, creation, move, or unspecified (default). |
| `useCaseId` | string | Yes | The ID of the use case to link the entity to. Use LIST_USE_CASES or GET_USE_CASES to find available use case IDs. |
| `referenceCollectionType` | string ("projects" | "datasets" | "files" | "notebooks" | "applications" | "recipes" | "customModelVersions" | "registeredModelVersions" | "deployments" | "customApplications" | "customJobs") | Yes | The type of entity to link. Specifies the reference collection type (projects, datasets, files, notebooks, etc.). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
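
A minimal sketch of a Link Entity to Use Case request body. The collection types are copied from the `referenceCollectionType` enum in the table above; `link_entity_payload` is a hypothetical helper:

```python
COLLECTION_TYPES = {
    "projects", "datasets", "files", "notebooks", "applications",
    "recipes", "customModelVersions", "registeredModelVersions",
    "deployments", "customApplications", "customJobs",
}

def link_entity_payload(use_case_id, entity_id, collection_type,
                        workflow="unspecified"):
    if collection_type not in COLLECTION_TYPES:
        raise ValueError(f"unknown referenceCollectionType: {collection_type}")
    return {
        "useCaseId": use_case_id,
        "entityId": entity_id,
        "referenceCollectionType": collection_type,
        "workflow": workflow,  # analytics only; does not affect the link
    }
```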

### Link Notebooks to Use Case (Bulk)

**Slug:** `DATAROBOT_LINK_NOTEBOOKS_BULK_TO_USE_CASE`

Tool to bulk link multiple DataRobot notebooks to a use case. Use when you need to associate multiple notebooks with a specific use case in DataRobot Workbench. All notebook IDs and the use case ID must be valid 24-character hexadecimal ObjectIds.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `useCaseId` | string | Yes | The use case ID (24-character hex ObjectId) to link the notebooks to. Must be a valid MongoDB ObjectId format (24 hexadecimal characters). |
| `notebookIds` | array | Yes | Array of notebook IDs (24-character hex ObjectIds) to link to the use case. Each ID must be a valid MongoDB ObjectId format (24 hexadecimal characters). Must contain at least one notebook ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Access Roles

**Slug:** `DATAROBOT_LIST_ACCESS_ROLES`

Tool to list access roles. Use when you need to retrieve available access roles, with optional global filtering.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return; must be at least 1. |
| `offset` | integer | No | Number of results to skip; must be non-negative. |
| `globalRoles` | string ("included" | "excluded" | "only") | No | Whether to include global roles: 'included' (default, returns both global and org-specific roles), 'excluded' (only org-specific roles), or 'only' (only global roles). |
| `organizationId` | string | No | Restrict roles to those usable by this organization; ignored when globalRoles='only'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
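
This tool, like most List tools in this section, pages with `offset` and `limit`. A generic paging sketch; `fetch_page` is a hypothetical stand-in for the actual tool call and should return a list of up to `limit` items:

```python
def paginate(fetch_page, limit=100):
    """Collect all items by advancing offset until a short page arrives."""
    offset = 0
    items = []
    while True:
        page = fetch_page(offset=offset, limit=limit)
        items.extend(page)
        if len(page) < limit:  # a short page means we reached the end
            break
        offset += limit
    return items
```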

### List Access Role Users

**Slug:** `DATAROBOT_LIST_ACCESS_ROLE_USERS`

Tool to list users assigned to an Access Role. Use when you need to fetch, by role ID, all users who hold the role directly, via a group, or via their organization.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of users to return in this call. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `roleId` | string | Yes | Identifier of the Access Role to list users for. |
| `namePart` | string | No | Filter where name or username partially matches. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Applications

**Slug:** `DATAROBOT_LIST_APPLICATIONS`

Tool to list DataRobot applications created by the authenticated user. Use when retrieving a paginated list of applications.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `lid` | string | No | Filter by application ID. |
| `limit` | integer | No | Maximum number of results to return. If 0, all results are returned. |
| `offset` | integer | No | Number of results to skip (for pagination). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Application Templates

**Slug:** `DATAROBOT_LIST_APPLICATION_TEMPLATES`

Tool to list application templates the user has access to. Use when browsing available application templates in DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip (for pagination). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Application Templates Media

**Slug:** `DATAROBOT_LIST_APPLICATION_TEMPLATES_MEDIA`

Tool to retrieve an application template image from DataRobot. Use when you need to download the media/image associated with a specific application template.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `applicationTemplateId` | string | Yes | The ID of the application template. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Approval Policy Match

**Slug:** `DATAROBOT_LIST_APPROVAL_POLICY_MATCH`

Tool to find approval policy ID matching the query. Use when determining which approval policy applies to a specific entity type and action.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `action` | string ("create" | "Create" | "CREATE" | "update" | "Update" | "UPDATE" | "delete" | "Delete" | "DELETE") | Yes | Policy action to search for (create, update, or delete). |
| `fieldName` | string | No | Optional name of the entity field to filter policies by. |
| `entityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "deploymentModel" | "DeploymentModel" | "DEPLOYMENT_MODEL" | "deploymentConfig" | "DeploymentConfig" | "DEPLOYMENT_CONFIG" | "deploymentStatus" | "DeploymentStatus" | "DEPLOYMENT_STATUS" | "deploymentMonitoringData" | "DeploymentMonitoringData" | "DEPLOYMENT_MONITORING_DATA") | Yes | Type of the entity to search approval policies for. |
| `fieldValue` | string | No | Optional value of the entity field to filter policies by. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Approval Policy Triggers

**Slug:** `DATAROBOT_LIST_APPROVAL_POLICY_TRIGGERS`

Tool to get a list of available approval policy triggers. Use when you need to view all possible triggers that can be configured for approval policies.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Automated Document Options

**Slug:** `DATAROBOT_LIST_AUTOMATED_DOCUMENT_OPTIONS`

Tool to list all available automated document types and locales. Use when determining which document types can be generated for compliance documentation.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Automated Documents

**Slug:** `DATAROBOT_LIST_AUTOMATED_DOCUMENTS`

Tool to list all automated documents in DataRobot. Use when retrieving compliance documents, model reports, or other generated documentation.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityId` | string | No | Filter documents by entity ID (e.g., project ID, deployment ID, model ID). |
| `outputFormat` | string | No | Filter documents by output format (e.g., DOCX, HTML). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Batch Jobs

**Slug:** `DATAROBOT_LIST_BATCH_JOBS`

Tool to list DataRobot batch jobs. Use when you need to retrieve batch job records filtered by status or source.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of items to return (1-1000) |
| `offset` | integer | No | Number of items to skip for pagination |
| `source` | array | No | Filter by job source. Prefix with '-' to exclude a source. Valid values include: other, api_client_r, api_client_python, integration_snippet_requests, integration_snippet_api_client, integration_snippet_standalone_python, integration_snippet_standalone_powershell, ui_make_prediction, ui_leaderboard_make_prediction, ui_uxr_make_prediction, integration_job, scheduled_from_job_definition, manual_run_from_job_definition, model_package_insights, portable_batch_predictions, data_pipelines, challenger_replay |
| `status` | array | No | Filter by job status. Can specify multiple statuses to include jobs matching any of them. Valid values: INITIALIZING, RUNNING, COMPLETED, ABORTED, FAILED |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Batch Monitoring Job Definitions

**Slug:** `DATAROBOT_LIST_BATCH_MONITORING_JOB_DEFINITIONS`

Tool to list DataRobot batch monitoring job definitions. Use when you need to retrieve scheduled batch monitoring job configurations with optional filtering by name or deployment ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of items to return (1-10000) |
| `offset` | integer | No | Number of items to skip for pagination |
| `searchName` | string | No | Filter by name. Returns definitions with names containing this string (case-insensitive search) |
| `deploymentId` | string | No | Filter to include only job definitions for a specific deployment ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Batch Prediction Job Definitions

**Slug:** `DATAROBOT_LIST_BATCH_PREDICTION_JOB_DEFINITIONS`

Tool to list batch prediction job definitions. Use when retrieving scheduled or manual batch prediction configurations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return |
| `offset` | integer | No | Number of results to skip for pagination |
| `searchName` | string | No | Filter by name (case-insensitive partial match) |
| `deploymentId` | string | No | Filter by deployment ID to get definitions for a specific deployment |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Batch Predictions

**Slug:** `DATAROBOT_LIST_BATCH_PREDICTIONS`

Tool to list DataRobot batch prediction jobs. Use when you need to retrieve batch prediction job records filtered by status, source, deployment, or other criteria.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `jobId` | string | No | Filter to only the job with this specific ID |
| `limit` | integer | No | Maximum number of results to return |
| `offset` | integer | No | Number of results to skip for pagination |
| `source` | array | No | Filter by job source. Prefix values with a dash (-) to exclude those sources. Repeat the parameter for filtering on multiple sources. |
| `status` | array | No | Filter by job status. Repeat the parameter for filtering on multiple statuses. Can specify multiple statuses to include jobs matching any of them. |
| `allJobs` | boolean | No | [DEPRECATED - replaced with RBAC permission model] - No effect |
| `modelId` | string | No | ID of the leaderboard model used by the job to process the predictions dataset |
| `orderBy` | string ("created" | "-created" | "status" | "-status") | No | Sort order for batch prediction list. |
| `hostname` | string | No | Filter to only jobs for this particular prediction instance hostname |
| `intakeType` | string | No | Filter to only jobs with this particular intake type |
| `outputType` | string | No | Filter to only jobs with this particular output type |
| `cutoffHours` | integer | No | Only list jobs created at most this many hours ago |
| `endDateTime` | string | No | ISO-formatted datetime of the latest time the job was added (inclusive). For example "2008-08-24T12:00:00Z". |
| `deploymentId` | string | No | Filter to only jobs for this particular deployment |
| `startDateTime` | string | No | ISO-formatted datetime of the earliest time the job was added (inclusive). For example "2008-08-24T12:00:00Z". If set, cutoffHours is ignored. |
| `batchPredictionJobDefinitionId` | string | No | Filter to only jobs for this particular definition |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
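
A minimal sketch of query parameters for this tool, illustrating the conventions above: a leading dash on a `source` value excludes it, repeating `status` matches any listed status, and a `-` prefix on `orderBy` sorts descending. The deployment ID is a hypothetical placeholder:

```python
params = {
    "deploymentId": "65f0c0ffee65f0c0ffee6501",  # hypothetical 24-char hex ID
    "status": ["RUNNING", "COMPLETED"],           # match either status
    "source": ["-ui_make_prediction"],            # dash prefix excludes a source
    "orderBy": "-created",                        # newest first
    "limit": 50,
}
```

List values are sent as repeated query parameters, which is how most HTTP clients encode a sequence.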

### List Calendar Country Codes

**Slug:** `DATAROBOT_LIST_CALENDAR_COUNTRY_CODES`

Tool to retrieve the list of allowed country codes for preloaded calendar generation. Use when you need to find which country codes are supported for calendar features in DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return; must be at least 1. Default is 100. |
| `offset` | integer | No | Number of results to skip; must be non-negative. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Calendars

**Slug:** `DATAROBOT_LIST_CALENDARS`

Tool to list all available calendars for a user in DataRobot. Use when you need to browse or filter calendars for time-series modeling.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Optional (default: 0), at most this many results will be returned. If 0, all results will be returned. |
| `offset` | integer | No | Optional (default: 0), this many results will be skipped. |
| `projectId` | string | No | Optional, if provided will filter returned calendars to those being used in the specified project. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Calendars Access Control

**Slug:** `DATAROBOT_LIST_CALENDARS_ACCESS_CONTROL`

Tool to list users with their roles on a calendar. Use when you need to retrieve access control information for a specific calendar.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Optional (default: 0), at most this many results will be returned. If 0, all results will be returned. |
| `offset` | integer | No | Optional (default: 0), this many results will be skipped. |
| `userId` | string | No | Optional, only return the access control information for a user with this user ID. Should not be specified if username is specified. |
| `username` | string | No | Optional, only return the access control information for a user with this username. Should not be specified if userId is specified. |
| `calendarId` | string | Yes | The ID of the calendar to retrieve access control information for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
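
Since `userId` and `username` are mutually exclusive, a small guard clarifies the contract. A minimal sketch; `calendar_acl_params` is a hypothetical helper, not part of the DataRobot API:

```python
def calendar_acl_params(calendar_id, user_id=None, username=None):
    """Build the access-control query for a calendar."""
    if user_id is not None and username is not None:
        raise ValueError("specify userId or username, not both")
    params = {"calendarId": calendar_id}
    if user_id is not None:
        params["userId"] = user_id
    if username is not None:
        params["username"] = username
    return params
```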

### List Catalog Items

**Slug:** `DATAROBOT_LIST_CATALOG_ITEMS`

Tool to list all catalog items accessible by the user. Use when you need to browse or search available datasets, user blueprints, and files in the DataRobot catalog.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tag` | string | No | Filter results to display only items with the specified catalog item tags, in lower case, with no spaces. |
| `type` | string ("dataset" | "snapshot_dataset" | "remote_dataset" | "user_blueprint" | "files") | No | Filter results by catalog type. The 'dataset' option matches both 'snapshot_dataset' and 'remote_dataset'. |
| `limit` | integer | No | Sets the maximum number of results returned. Enter 0 to specify no limit. |
| `offset` | integer | No | Specifies the number of results to skip for pagination. |
| `orderBy` | string ("originalName" | "-originalName" | "catalogName" | "-catalogName" | "description" | "-description" | "created" | "-created" | "relevance" | "-relevance") | No | The attribute sort order applied to the returned catalog list: 'catalogName', 'originalName', 'description', 'created', or 'relevance'. For all options other than 'relevance', prefix the attribute name with a dash to sort in descending order. e.g., orderBy='-catalogName'. Defaults to '-created'. |
| `category` | string | No | Category type(s) used for filtering. Searches are case sensitive and support '&' and 'OR' operators. |
| `useCache` | string ("false" | "False" | "true" | "True") | No | Sets whether to use the cache, for Mongo search only. |
| `searchFor` | string | No | A value to search for in the dataset's name, description, tags, column names, categories, and latest errors. The search is case insensitive. If no value is provided, or if the empty string is used, or if the string contains only whitespace, no filtering occurs. Partial matching is performed on the dataset name and description fields; all other fields require an exact match. |
| `accessType` | string ("owner" | "shared" | "any" | "created") | No | Access type used to filter returned results. Valid options are 'owner', 'shared', 'created', and 'any' (the default): 'owner' items are owned by the requester, 'shared' items have been shared with the requester, 'created' items have been created by the requester, and 'any' items matches all. |
| `ownerUserId` | string | No | Filter results to display only those owned by user(s) identified by the specified UID. |
| `filterFailed` | string ("false" | "False" | "true" | "True") | No | Sets whether to exclude from the search results all catalog items that failed during import. If True, invalid catalog items will be excluded; default is False. |
| `ownerUsername` | string | No | Filter results to display only those owned by user(s) identified by the specified username. |
| `datasourceType` | string | No | Data source types used for filtering. |
| `initialCacheSize` | integer | No | The initial cache size, for Mongo search only. |
| `isUxrPreviewable` | boolean | No | Filter results to items with catalogType 'snapshot_dataset' or 'remote_dataset' and data_origin in ['snowflake', 'bigquery-v1'] |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
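
A minimal sketch of query parameters for this tool, showing the quirks noted in the table above: `type='dataset'` matches both snapshot and remote datasets, a dash prefix on `orderBy` sorts descending, and `filterFailed` is a string-typed boolean. The search term is a placeholder:

```python
params = {
    "type": "dataset",       # matches snapshot_dataset and remote_dataset
    "orderBy": "-created",   # descending by creation time (the default)
    "searchFor": "churn",    # hypothetical term; case-insensitive search
    "filterFailed": "true",  # string boolean: exclude items that failed import
    "limit": 25,
}
```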

### List Change Requests

**Slug:** `DATAROBOT_LIST_CHANGE_REQUESTS`

Tool to list change requests in DataRobot. Use when you need to retrieve change requests for deployments or other entities, optionally filtering by status, owner, or entity.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. The default may change without notice. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `status` | string | No | Filter change requests by their current status (e.g., 'pending', 'approved', 'rejected'). |
| `orderBy` | string ("createdAt" | "-createdAt" | "processedAt" | "-processedAt" | "updatedAt" | "-updatedAt") | No | Sort order for change requests. |
| `entityId` | string | No | ID of the entity to filter change requests by. Use to get change requests for a specific deployment or entity. |
| `entityType` | string ("deployment" | "Deployment" | "DEPLOYMENT") | Yes | Type of the entity to filter requests by. Required parameter. |
| `myRequests` | string ("false" | "False" | "true" | "True") | No | If 'true', return only change requests created by the authenticated user. The API expects the string 'true'/'false', not a boolean. |
| `showApproved` | string ("false" | "False" | "true" | "True") | No | If 'true', include approved change requests in the results. The API expects the string 'true'/'false', not a boolean. |
| `showCancelled` | string ("false" | "False" | "true" | "True") | No | If 'true', include cancelled change requests in the results. The API expects the string 'true'/'false', not a boolean. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
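The input schema above mixes conventions: `entityType` is a required enum, while `myRequests` and the `show*` flags are booleans encoded as strings. A minimal Python sketch of assembling these inputs before execution; the helper name `build_change_request_query` is ours, not part of any DataRobot or Composio SDK:

```python
def build_change_request_query(entity_type, entity_id=None, my_requests=None,
                               limit=None, offset=None):
    """Assemble query parameters for DATAROBOT_LIST_CHANGE_REQUESTS.

    entityType is the only required field; boolean-like filters must be
    sent as the strings 'true'/'false', not as JSON booleans.
    """
    if entity_type.lower() != "deployment":
        raise ValueError("entityType must be a variant of 'deployment'")
    params = {"entityType": entity_type}
    if entity_id is not None:
        params["entityId"] = entity_id
    if my_requests is not None:
        # Coerce a Python bool to the lowercase string the API expects.
        params["myRequests"] = "true" if my_requests else "false"
    if limit is not None:
        params["limit"] = limit
    if offset is not None:
        params["offset"] = offset
    return params

params = build_change_request_query("deployment", my_requests=True, limit=50)
```

The resulting dict would then be passed as the tool's input; note that `myRequests` comes out as the string `"true"`, not `True`.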

### List Code Snippets

**Slug:** `DATAROBOT_LIST_CODE_SNIPPETS`

Tool to retrieve available code snippets from DataRobot. Use when you need to get code examples for models, predictions, or workloads in various languages.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `filters` | string | No | Optional comma-separated list of sub filters to limit the returned notebooks. |
| `language` | string ("curl" | "powershell" | "python" | "qlik") | Yes | The selected language the generated snippet or notebook should be written in (curl, powershell, python, or qlik). |
| `templateType` | string ("model" | "prediction" | "workload") | Yes | The selected template type the generated snippet or notebook should be for (model, prediction, or workload). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
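Both required inputs here are closed enums, and `filters` is a single comma-separated string rather than an array. A hedged sketch of validating the inputs client-side before calling the tool; the helper and the filter values are illustrative, not from the DataRobot API:

```python
# Allowed enum values, copied from the tool's input schema above.
VALID_LANGUAGES = {"curl", "powershell", "python", "qlik"}
VALID_TEMPLATE_TYPES = {"model", "prediction", "workload"}

def build_code_snippet_request(language, template_type, filters=None):
    """Assemble inputs for DATAROBOT_LIST_CODE_SNIPPETS, rejecting
    values outside the documented enums before any API call is made."""
    if language not in VALID_LANGUAGES:
        raise ValueError(f"language must be one of {sorted(VALID_LANGUAGES)}")
    if template_type not in VALID_TEMPLATE_TYPES:
        raise ValueError(f"templateType must be one of {sorted(VALID_TEMPLATE_TYPES)}")
    params = {"language": language, "templateType": template_type}
    if filters:
        # 'filters' is one comma-separated string, not an array.
        params["filters"] = ",".join(filters)
    return params

req = build_code_snippet_request("python", "prediction", filters=["a", "b"])
```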

### List Compliance Doc Templates

**Slug:** `DATAROBOT_LIST_COMPLIANCE_DOC_TEMPLATES`

Tool to list compliance documentation templates in DataRobot. Use when you need to browse or filter available templates for compliance documentation.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. The default may change without notice. |
| `labels` | string | No | Names of labels to filter by. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `orderBy` | string ("id" | "-id") | No | Sort order for the compliance doc template list. |
| `namePart` | string | No | When present, only return templates whose names contain the given substring. |
| `projectType` | string ("autoMl" | "textGeneration" | "timeSeries") | No | Type of project templates to search for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Compliance Doc Templates Default

**Slug:** `DATAROBOT_LIST_COMPLIANCE_DOC_TEMPLATES_DEFAULT`

Tool to retrieve the default compliance documentation template from DataRobot. Use when you need to get the template structure for creating compliance documentation for AutoML projects, time series projects, or text generation models.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `type` | string ("normal" | "textGeneration" | "timeSeries") | No | Specifies the type of the default template to retrieve. The 'normal' template is applicable for all AutoML projects that are not time series. The 'timeSeries' template is only applicable to time series projects. The 'textGeneration' template is only applicable to text generation registry models. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Credentials

**Slug:** `DATAROBOT_LIST_CREDENTIALS`

Tool to list all available credentials. Use when you need to retrieve credentials accessible to the authenticated user.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `types` | array | No | Include only credentials of the specified type; repeat to filter multiple types. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `orderBy` | string ("creationDate" | "-creationDate") | No | Sort order; defaults to creationDate descending. Allowed values: creationDate, -creationDate. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
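Like most listing tools in this reference, this one pages with `limit`/`offset`. A generic drain loop can be sketched as follows; `fetch_page` is a stand-in for executing DATAROBOT_LIST_CREDENTIALS (or any other `limit`/`offset` tool here) and is not a real SDK function:

```python
def list_all(fetch_page, page_size=100):
    """Drain a limit/offset-paginated listing endpoint.

    fetch_page(limit, offset) must return one page of records as a list;
    a short page signals that the listing is exhausted.
    """
    records, offset = [], 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        records.extend(page)
        if len(page) < page_size:
            break
        offset += page_size
    return records
```

With a fake in-memory backend of 250 records and `page_size=100`, the loop issues three fetches (offsets 0, 100, 200) and returns all 250 records.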

### List Credentials Associations

**Slug:** `DATAROBOT_LIST_CREDENTIALS_ASSOCIATIONS`

Tool to list all objects associated with specific credentials. Use when you need to find which deployments, data sources, or other objects are using a particular credential.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. The default may change without notice. |
| `types` | string | No | Include only credentials of the specified type. Repeat the parameter to filter on multiple types. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `orderBy` | string ("creationDate" | "-creationDate") | No | Order by options for credentials associations. |
| `credentialId` | string | Yes | Credentials entity ID to retrieve associations for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Applications

**Slug:** `DATAROBOT_LIST_CUSTOM_APPLICATIONS`

Tool to list custom applications created by the authenticated user. Use when retrieving a paginated list of custom applications from DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Search custom applications by name. |
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `orderBy` | string ("name" | "-name" | "createdAt" | "-createdAt" | "updatedAt" | "-updatedAt" | "bundleSize" | "-bundleSize" | "replicas" | "-replicas") | No | Sort order options for custom applications. |
| `requireSource` | boolean | No | Whether to only fetch apps created from a custom application source. |
| `includeSourceLabels` | boolean | No | Whether to include the name of the application source and the label of the source version. |
| `customApplicationSourceId` | string | No | Filter custom applications created only from specific sources. To find apps not linked to a custom application source, use the value 'null'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Application Sources

**Slug:** `DATAROBOT_LIST_CUSTOM_APPLICATION_SOURCES`

Tool to list custom application sources created by the authenticated user. Use when retrieving a paginated list of custom application sources for managing application deployments.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Filter custom application sources by name (partial match). |
| `limit` | integer | No | Maximum number of results to return (1-100). |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `orderBy` | string ("name" | "-name" | "createdAt" | "-createdAt" | "updatedAt" | "-updatedAt") | No | Sort order for custom application sources list. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Application Sources Versions

**Slug:** `DATAROBOT_LIST_CUSTOM_APPLICATION_SOURCES_VERSIONS`

Tool to list custom application source versions of a specified application source. Use when you need to retrieve paginated versions of a custom application source.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. |
| `offset` | integer | No | This many results will be skipped. |
| `appSourceId` | string | Yes | The ID of the application source. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Job Limits

**Slug:** `DATAROBOT_LIST_CUSTOM_JOB_LIMITS`

Tool to retrieve custom job limits from DataRobot. Use when you need to check parallel job run limits or timeout settings.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Jobs

**Slug:** `DATAROBOT_LIST_CUSTOM_JOBS`

Tool to list custom jobs in DataRobot. Use when you need to browse or filter available custom jobs, optionally filtering by running status, search term, or job type.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (1-1000). |
| `offset` | integer | No | Number of results to skip for pagination. |
| `search` | string | No | Filter to include only custom jobs whose name or description contains this string (case-sensitive). |
| `jobType` | array | No | List of job types to filter by. Include multiple types to match any of them. |
| `onlyRunning` | string ("false" | "False" | "true" | "True") | No | If 'true', show only running custom jobs. The API expects the string 'true'/'false', not a boolean. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
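This tool mixes a true array input (`jobType`) with a string-encoded boolean (`onlyRunning`), so the two must be serialized differently. A minimal sketch, with a hypothetical helper name and an invented job-type value purely for illustration:

```python
def build_custom_jobs_query(job_types=None, only_running=None, search=None,
                            limit=None, offset=None):
    """Assemble inputs for DATAROBOT_LIST_CUSTOM_JOBS.

    jobType is a real JSON array (matches any listed type), while
    onlyRunning must be serialized as the string 'true'/'false'.
    """
    params = {}
    if job_types:
        params["jobType"] = list(job_types)   # array input, not comma-joined
    if only_running is not None:
        params["onlyRunning"] = "true" if only_running else "false"
    if search:
        params["search"] = search             # case-sensitive substring match
    if limit is not None:
        params["limit"] = limit               # schema allows 1-1000
    if offset is not None:
        params["offset"] = offset
    return params

# 'retraining' is a made-up job type used only to show the shape.
q = build_custom_jobs_query(job_types=["retraining"], only_running=True)
```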

### List Custom Jobs Custom Metrics

**Slug:** `DATAROBOT_LIST_CUSTOM_JOBS_CUSTOM_METRICS`

Tool to list all custom metrics associated with a custom job in DataRobot. Use when you need to browse or retrieve custom metrics linked to a specific custom job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. |
| `offset` | integer | No | This many results will be skipped. |
| `customJobId` | string | Yes | ID of the custom job. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Job Runs

**Slug:** `DATAROBOT_LIST_CUSTOM_JOBS_RUNS`

Tool to list custom job runs for a specific custom job in DataRobot. Use when you need to browse execution history of a custom job, optionally filtering by scheduled job ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (1-1000). |
| `offset` | integer | No | Number of results to skip for pagination. |
| `customJobId` | string | Yes | ID of the custom job to list runs for. |
| `scheduledJobId` | string | No | If supplied, only include custom job runs that are scheduled with this scheduled job ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Jobs Shared Roles

**Slug:** `DATAROBOT_LIST_CUSTOM_JOBS_SHARED_ROLES`

Tool to get the access control list for a custom job. Use when you need to view who has access to a specific custom job and their roles.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | No | Only return roles for a user, group or organization with this identifier. |
| `name` | string | No | Only return roles for a user, group or organization with this name. |
| `limit` | integer | No | At most this many results are returned per page. Defaults to 10. |
| `offset` | integer | No | This many results will be skipped for pagination. Defaults to 0. |
| `customJobId` | string | Yes | ID of the custom job. |
| `shareRecipientType` | string ("user" | "group" | "organization") | No | List access controls for recipients with this type. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Model Deployments

**Slug:** `DATAROBOT_LIST_CUSTOM_MODEL_DEPLOYMENTS`

Tool to list custom model deployments in DataRobot. Use when retrieving custom model deployments sorted by creation time.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (1-1000). |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `customModelIds` | string | No | Comma-separated list of custom model IDs to filter deployments. |
| `environmentIds` | string | No | Comma-separated list of execution environment IDs to filter deployments. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
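Note that `customModelIds` and `environmentIds` are single comma-separated strings, unlike the array-typed inputs on some other tools in this reference. A small sketch of joining ID lists into that form; the helper name and the placeholder IDs are invented for illustration:

```python
def build_deployment_filters(custom_model_ids=(), environment_ids=()):
    """Assemble filter inputs for DATAROBOT_LIST_CUSTOM_MODEL_DEPLOYMENTS.

    Both filters are comma-separated strings, so a Python list of IDs
    must be joined before being sent.
    """
    params = {}
    if custom_model_ids:
        params["customModelIds"] = ",".join(custom_model_ids)
    if environment_ids:
        params["environmentIds"] = ",".join(environment_ids)
    return params

# Placeholder IDs; real values come from the custom models listing.
f = build_deployment_filters(custom_model_ids=["id1", "id2"])
```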

### List Custom Model Deployments Logs

**Slug:** `DATAROBOT_LIST_CUSTOM_MODEL_DEPLOYMENTS_LOGS`

Tool to retrieve custom model deployment logs. Use when debugging or monitoring custom model deployments.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | The ID of the custom model deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Model Limits

**Slug:** `DATAROBOT_LIST_CUSTOM_MODEL_LIMITS`

Tool to get custom model resource limits in DataRobot. Use when you need to check memory, replica, and testing constraints for custom models.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Models

**Slug:** `DATAROBOT_LIST_CUSTOM_MODELS`

Tool to list custom models in DataRobot. Use when retrieving paginated custom models from DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. |
| `offset` | integer | No | This many results will be skipped. |
| `orderBy` | string ("created" | "-created" | "updated" | "-updated") | No | Sort order for custom models. |
| `tagKeys` | string | No | List of tag keys to filter by. |
| `searchFor` | string | No | Case-insensitive string to search for in a custom model's name, description, and language. If not specified, all custom models are returned. |
| `tagValues` | string | No | List of tag values to filter by. |
| `isDeployed` | string ("false" | "False" | "true" | "True") | No | Filter by deployment status. The API expects the string 'true'/'false', not a boolean. |
| `targetType` | string ("Binary" | "Regression" | "Multiclass" | "Anomaly" | "Transform" | "TextGeneration" | "GeoPoint" | "Unstructured" | "VectorDatabase" | "AgenticWorkflow" | "MCP") | No | Target type of the custom model. |
| `customModelType` | string ("training" | "inference") | No | Type of custom model. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
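The `orderBy` enums throughout this reference follow one convention: the bare field name sorts ascending and a leading '-' sorts descending. A tiny helper can make that explicit for this tool; the function name is ours, and the allowed set is copied from the enum above:

```python
def order_by(field, descending=False):
    """Build an orderBy token for DATAROBOT_LIST_CUSTOM_MODELS.

    Per the documented enum, only 'created' and 'updated' are sortable;
    a '-' prefix selects descending order.
    """
    allowed = {"created", "updated"}
    if field not in allowed:
        raise ValueError(f"orderBy field must be one of {sorted(allowed)}")
    return f"-{field}" if descending else field
```

The same pattern applies to the larger `orderBy` enums elsewhere (e.g. the featurelists and features-details tools), with a wider `allowed` set.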

### List Custom Models Access Control

**Slug:** `DATAROBOT_LIST_CUSTOM_MODELS_ACCESS_CONTROL`

Tool to list users with their roles on a custom model. Use when you need to retrieve access control information for a specific custom model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. Defaults to 1000. |
| `offset` | integer | No | This many results will be skipped. Defaults to 0. |
| `customModelId` | string | Yes | The ID of the custom model to retrieve access control information for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Models Versions

**Slug:** `DATAROBOT_LIST_CUSTOM_MODELS_VERSIONS`

Tool to list custom model versions in DataRobot. Use when retrieving all versions of a specific custom model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (1-1000). |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `customModelId` | string | Yes | The ID of the custom model. |
| `mainBranchCommitSha` | string | No | Specifies the commit SHA-1 in GitHub repository from the main branch that corresponds to a given custom model version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Model Tests

**Slug:** `DATAROBOT_LIST_CUSTOM_MODEL_TESTS`

Tool to list custom model tests for a specific custom model in DataRobot. Use when you need to retrieve testing history and results for a custom model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (1-1000). |
| `offset` | integer | No | Number of results to skip for pagination. |
| `replicas` | integer | No | A fixed number of replicas that will be set for the given custom-model. |
| `requiresHa` | boolean | No | Require all custom model replicas to be deployed on different Kubernetes nodes for predictions fault tolerance. |
| `customModelId` | string | Yes | ID of the custom model to retrieve testing history for. |
| `desiredMemory` | integer | No | The amount of memory that is expected to be allocated by the custom model. This setting is incompatible with setting the resourceBundleId. |
| `maximumMemory` | integer | No | The maximum memory that may be allocated by the custom model. If exceeded, the custom model will be killed. This setting is incompatible with setting the resourceBundleId. |
| `resourceBundleId` | string | No | A single identifier that represents a bundle of resources: Memory, CPU, GPU, etc. A list of available bundles can be obtained via the resource bundles endpoint. |
| `networkEgressPolicy` | string ("NONE" | "PUBLIC") | No | Network egress policy options for custom model tests. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Templates

**Slug:** `DATAROBOT_LIST_CUSTOM_TEMPLATES`

Tool to retrieve a list of custom templates from DataRobot. Use when you need to browse, filter, or search available custom templates for applications.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tag` | string | No | Only return custom templates with a matching tag. |
| `limit` | integer | No | Maximum number of results to return (1-100). |
| `offset` | integer | No | Number of results to skip for pagination. |
| `search` | string | No | Only return custom templates whose name or description contain this text. |
| `orderBy` | string ("name" | "-name" | "createdAt" | "-createdAt" | "templateType" | "-templateType" | "templateSubType" | "-templateSubType") | No | Sort order options for custom templates. |
| `category` | string | No | Only return custom templates with this category (use case). |
| `publisher` | string | No | Only return custom templates with this publisher. |
| `showHidden` | boolean | No | Set to true to include hidden templates that are not visible in the UI. |
| `templateType` | string | No | Only return custom templates of this type. |
| `templateSubType` | string | No | Only return custom templates of this sub-type. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Custom Training Blueprints

**Slug:** `DATAROBOT_LIST_CUSTOM_TRAINING_BLUEPRINTS`

Tool to list custom training blueprints in DataRobot. Use when you need to retrieve blueprints for custom model training, optionally filtered by model ID or target types.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (max 1000). |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `reverse` | string ("false" | "False" | "true" | "True") | No | List blueprints in reverse order. Accepts 'true', 'True', 'false', or 'False'. |
| `targetTypes` | array | No | Filter by custom model target types. Provide a list of target types to filter by. |
| `customModelId` | string | No | Filter blueprints for a specific custom model ID. If not provided, returns all blueprints. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Dataset Definitions

**Slug:** `DATAROBOT_LIST_DATASET_DEFINITIONS`

Tool to list all dataset definitions for the user. Use when you need to browse or retrieve available dataset definitions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (1-100). |
| `offset` | integer | No | Number of results to skip (for pagination). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Dataset Definition Versions

**Slug:** `DATAROBOT_LIST_DATASET_DEFINITIONS_VERSIONS`

Tool to list all dataset definition versions for a given dataset definition. Use when you need to view version history or select a specific version of a dataset definition.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (at most 100). |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `datasetDefinitionId` | string | Yes | The ID of the dataset definition. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Dataset Permissions

**Slug:** `DATAROBOT_LIST_DATASET_PERMISSIONS`

Tool to retrieve permissions for a specific dataset in DataRobot. Use when you need to check what operations the current user can perform on a dataset.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataset_id` | string | Yes | The ID of the dataset to retrieve permissions for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Datasets

**Slug:** `DATAROBOT_LIST_DATASETS`

Tool to list all datasets in the DataRobot global catalog. Use when you need to browse or filter available datasets before modeling or prediction.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `orderBy` | string | No | Sort order for returned datasets. Use 'created' for ascending or '-created' for descending order by creation date. Only 'created' field is supported for sorting. |
| `category` | string | No | Filter datasets by intended-use category. Common values: 'PREDICTION', 'TRAINING'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Datasets Access Control

**Slug:** `DATAROBOT_LIST_DATASETS_ACCESS_CONTROL`

Tool to list users with their roles on a dataset. Use when you need to retrieve access control information for a specific dataset.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. |
| `offset` | integer | No | This many results will be skipped. |
| `userId` | string | No | Only return the access control information for a user with this user ID. |
| `username` | string | No | Only return the access control information for a user with this username. |
| `datasetId` | string | Yes | The ID of the dataset to retrieve access control information for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Dataset All Features Details

**Slug:** `DATAROBOT_LIST_DATASETS_ALL_FEATURES_DETAILS`

Tool to retrieve detailed information about all features in a DataRobot dataset. Use when you need feature statistics, types, and metadata for analysis or model preparation.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change and a maximum limit may be imposed without notice. |
| `offset` | integer | No | This many results will be skipped for pagination. |
| `orderBy` | string ("featureType" | "name" | "id" | "unique" | "missing" | "stddev" | "mean" | "median" | "min" | "max" | "dataQualityIssues" | "-featureType" | "-name" | "-id" | "-unique" | "-missing" | "-stddev" | "-mean" | "-median" | "-min" | "-max" | "-dataQualityIssues") | No | How the features should be ordered. Use negative prefix (e.g., '-name') for descending order. |
| `datasetId` | string | Yes | The ID of the dataset to retrieve features for. |
| `searchFor` | string | No | A value to search for in the feature name. The search is case insensitive. If no value is provided, the value is an empty string, or the string contains only whitespace, no filtering occurs. |
| `includePlot` | string ("false" | "False" | "true" | "True") | No | Boolean value for including plot data. |
| `featurelistId` | string | No | ID of a featurelist. If specified, only returns features that are present in the specified featurelist. |
| `includeDataQuality` | string ("false" | "False" | "true" | "True") | No | Boolean value for including data quality information. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
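
The string-typed boolean flags (`includePlot`, `includeDataQuality`) and the `-` prefix for descending `orderBy` are easy to get wrong when composing this tool's input. A minimal sketch of a payload builder — the helper name and the dataset ID are illustrative, not part of the tool:

```python
def features_details_params(dataset_id, order_by=None, search_for=None,
                            include_plot=False, include_data_quality=False,
                            limit=None, offset=None):
    """Build an input payload for DATAROBOT_LIST_DATASETS_ALL_FEATURES_DETAILS.

    includePlot and includeDataQuality are string-typed booleans
    ("true"/"false"), not JSON booleans, so Python bools are converted.
    """
    params = {"datasetId": dataset_id}
    if order_by is not None:
        params["orderBy"] = order_by  # e.g. "-missing" sorts descending
    if search_for is not None:
        params["searchFor"] = search_for
    if include_plot:
        params["includePlot"] = "true"
    if include_data_quality:
        params["includeDataQuality"] = "true"
    if limit is not None:
        params["limit"] = limit
    if offset is not None:
        params["offset"] = offset
    return params


payload = features_details_params("ds_123", order_by="-missing",
                                  include_plot=True, limit=50)
```

Sorting by `-missing` as above surfaces the features with the most missing values first, which pairs well with `includeDataQuality` when triaging a dataset.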

### List Dataset Featurelists

**Slug:** `DATAROBOT_LIST_DATASETS_FEATURELISTS`

Tool to retrieve featurelists associated with a specific dataset in DataRobot. Use when you need to browse or filter featurelists for a dataset.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return per page. |
| `offset` | integer | No | Number of results to skip before returning results. |
| `orderBy` | string ("name" | "description" | "featuresNumber" | "creationDate" | "userCreated" | "-name" | "-description" | "-featuresNumber" | "-creationDate" | "-userCreated") | No | Field to order featurelists by. Prefix with '-' for descending order. Options: name, description, featuresNumber, creationDate, userCreated. |
| `datasetId` | string | Yes | The ID of the dataset to retrieve featurelists for. |
| `searchFor` | string | No | Search term to filter featurelists by name (case-insensitive). If empty or whitespace-only, no filtering is applied. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Dataset Feature Transforms

**Slug:** `DATAROBOT_LIST_DATASETS_FEATURE_TRANSFORMS`

Tool to list all feature transforms applied to a dataset. Use when you need to understand the transformations applied to dataset features before modeling.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of feature transforms to return per page. |
| `offset` | integer | No | Number of feature transforms to skip for pagination. |
| `datasetId` | string | Yes | Unique identifier of the dataset to list feature transforms for. Obtain from DATAROBOT_LIST_DATASETS. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Dataset Projects

**Slug:** `DATAROBOT_LIST_DATASETS_PROJECTS`

Tool to list all projects associated with a specific dataset. Use when you need to find which projects are using a particular dataset.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of items to return per page. |
| `offset` | integer | No | Number of items to skip for pagination. |
| `datasetId` | string | Yes | The ID of the dataset to retrieve projects for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Dataset Refresh Jobs

**Slug:** `DATAROBOT_LIST_DATASETS_REFRESH_JOBS`

Tool to list scheduled refresh jobs for a specific dataset. Use when you need to view or manage dataset refresh schedules.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of refresh jobs to return. |
| `offset` | integer | No | Number of refresh jobs to skip before returning results. |
| `dataset_id` | string | Yes | The ID of the dataset to retrieve refresh jobs for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Dataset Refresh Job Execution Results

**Slug:** `DATAROBOT_LIST_DATASETS_REFRESH_JOBS_EXECUTION_RESULTS`

Tool to list execution results of a dataset refresh job. Use when you need to view the history and status of refresh job executions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results returned. The default may change and a maximum limit may be imposed without notice. |
| `job_id` | string | Yes | ID of the user-scheduled dataset refresh job. |
| `offset` | integer | No | Number of results that will be skipped. |
| `dataset_id` | string | Yes | The dataset associated with the scheduled refresh job. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Dataset Relationships

**Slug:** `DATAROBOT_LIST_DATASETS_RELATIONSHIPS`

Tool to list related datasets for a specific dataset. Use when you need to discover relationships between datasets in DataRobot's catalog.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. |
| `offset` | integer | No | This many results will be skipped. |
| `datasetId` | string | Yes | The ID of the dataset to list relationships for. |
| `linkedDatasetId` | string | No | If provided, only relationships between datasetId and linkedDatasetId are returned. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Datasets Shared Roles

**Slug:** `DATAROBOT_LIST_DATASETS_SHARED_ROLES`

Tool to list shared roles for a dataset. Use when you need to view who has access to a dataset and their permission levels.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | No | Only return the access control information for an organization, group, or user with this ID. |
| `name` | string | No | Only return the access control information for an organization, group, or user with this name. |
| `limit` | integer | No | At most this many results are returned per page. Defaults to 100. |
| `offset` | integer | No | This many results will be skipped for pagination. Defaults to 0. |
| `datasetId` | string | Yes | The ID of the dataset. |
| `shareRecipientType` | string ("user" | "group" | "organization") | No | Describes the type of share recipient. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
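
All of the list tools in this catalog share the same `limit`/`offset` pagination contract (this tool defaults to `limit=100`, `offset=0`). A generic pagination sketch, with a simulated backend standing in for a real tool call — the helper and `fetch_page` callable are illustrative:

```python
def iter_all(fetch_page, page_size=100):
    """Generic limit/offset pagination for list tools such as
    DATAROBOT_LIST_DATASETS_SHARED_ROLES.

    fetch_page(limit, offset) must return one page of results as a list;
    a short (or empty) page signals the end of the collection.
    """
    offset = 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        yield from page
        if len(page) < page_size:
            return
        offset += page_size


# Simulated backend: 250 records served in slices, as a real call would be.
records = [{"id": i} for i in range(250)]

def fake_fetch(limit, offset):
    return records[offset:offset + limit]

collected = list(iter_all(fake_fetch, page_size=100))
```

With 250 records and a page size of 100, the generator issues three fetches (100, 100, 50) and stops on the short page.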

### List Dataset Versions

**Slug:** `DATAROBOT_LIST_DATASETS_VERSIONS`

Tool to list all versions of a specific dataset in DataRobot. Use when you need to browse or filter different versions of a dataset before selecting one for modeling or prediction.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. |
| `offset` | integer | No | This many results will be skipped for pagination. |
| `orderBy` | string ("created" | "-created") | No | Sorting order for dataset versions. |
| `category` | string ("TRAINING" | "PREDICTION" | "SAMPLE") | No | Dataset category indicating intended use. |
| `datasetId` | string | Yes | The ID of the dataset to list versions for. |
| `filterFailed` | string ("false" | "False" | "true" | "True") | No | String-encoded boolean; whether to exclude failed dataset versions from results. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Dataset Version All Features Details

**Slug:** `DATAROBOT_LIST_DATASETS_VERSIONS_ALL_FEATURES_DETAILS`

Tool to retrieve detailed information about all features in a specific DataRobot dataset version. Use when you need feature statistics, types, and metadata for a particular version of a dataset.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change and a maximum limit may be imposed without notice. |
| `offset` | integer | No | This many results will be skipped for pagination. |
| `orderBy` | string ("featureType" | "name" | "id" | "unique" | "missing" | "stddev" | "mean" | "median" | "min" | "max" | "dataQualityIssues" | "-featureType" | "-name" | "-id" | "-unique" | "-missing" | "-stddev" | "-mean" | "-median" | "-min" | "-max" | "-dataQualityIssues") | No | How the features should be ordered. Use negative prefix (e.g., '-name') for descending order. |
| `datasetId` | string | Yes | The ID of the dataset entry. |
| `searchFor` | string | No | A value to search for in the feature name; the search is case-insensitive. If no value is provided, or the value is empty or whitespace-only, no filtering occurs. |
| `includePlot` | string ("false" | "False" | "true" | "True") | No | String-encoded boolean; whether to include plot data. |
| `featurelistId` | string | No | ID of a featurelist. If specified, only returns features that are present in the specified featurelist. |
| `datasetVersionId` | string | Yes | The ID of the dataset version. |
| `includeDataQuality` | string ("false" | "False" | "true" | "True") | No | String-encoded boolean; whether to include data quality information. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Dataset Version Featurelists

**Slug:** `DATAROBOT_LIST_DATASETS_VERSIONS_FEATURELISTS`

Tool to retrieve feature lists for a specific dataset version in DataRobot. Use when you need to list, search, or filter feature lists associated with a particular version of a dataset.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of feature lists to return per page. Default is 100. |
| `offset` | integer | No | Number of feature lists to skip for pagination. Default is 0. |
| `orderBy` | string ("name" | "description" | "featuresNumber" | "creationDate" | "userCreated" | "-name" | "-description" | "-featuresNumber" | "-creationDate" | "-userCreated") | No | Field to sort feature lists by. Prefix with '-' for descending order. Options: name, description, featuresNumber, creationDate, userCreated. |
| `datasetId` | string | Yes | Unique identifier of the dataset. Obtain from DATAROBOT_LIST_DATASETS. |
| `searchFor` | string | No | Filter feature lists by name using case-insensitive substring search. If empty or contains only whitespace, no filtering is applied. |
| `datasetVersionId` | string | Yes | Unique identifier of the dataset version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Download Dataset Version File

**Slug:** `DATAROBOT_LIST_DATASETS_VERSIONS_FILE`

Tool to download original dataset data from a specific dataset version. Use when you need to retrieve the raw file for a dataset version.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataset_id` | string | Yes | The ID of the dataset entry. |
| `dataset_version_id` | string | Yes | The ID of the dataset version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Dataset Version Projects

**Slug:** `DATAROBOT_LIST_DATASET_VERSION_PROJECTS`

Tool to list all projects that use a specific dataset version. Use when you need to find which projects are using a particular version of a dataset.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of items to return in the response. |
| `offset` | integer | No | Number of items to skip before returning results (for pagination). |
| `datasetId` | string | Yes | The ID of the dataset entry. |
| `datasetVersionId` | string | Yes | The ID of the dataset version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Data Slices Slice Sizes

**Slug:** `DATAROBOT_LIST_DATA_SLICES_SLICE_SIZES`

Tool to retrieve the number of rows available after applying a data slice to a specified dataset subset. Use when you need to check the size of a data slice for a specific source (e.g., crossValidation, training, holdout).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `source` | string ("backtest_0" | "backtest_0_training" | "backtest_1" | "backtest_10" | "backtest_10_training" | "backtest_11" | "backtest_11_training" | "backtest_12" | "backtest_12_training" | "backtest_13" | "backtest_13_training" | "backtest_14" | "backtest_14_training" | "backtest_15" | "backtest_15_training" | "backtest_16" | "backtest_16_training" | "backtest_17" | "backtest_17_training" | "backtest_18" | "backtest_18_training" | "backtest_19" | "backtest_19_training" | "backtest_1_training" | "backtest_2" | "backtest_20" | "backtest_20_training" | "backtest_2_training" | "backtest_3" | "backtest_3_training" | "backtest_4" | "backtest_4_training" | "backtest_5" | "backtest_5_training" | "backtest_6" | "backtest_6_training" | "backtest_7" | "backtest_7_training" | "backtest_8" | "backtest_8_training" | "backtest_9" | "backtest_9_training" | "crossValidation" | "externalTestSet" | "holdout" | "holdout_training" | "training" | "validation" | "vectorDatabase") | Yes | The source of data to use to calculate the size. Use 'externalTestSet' with externalDatasetId parameter, or 'training' with modelId parameter. |
| `modelId` | string | No | The model ID whose training dataset should be sliced. Use this parameter only when the source is 'training'. |
| `projectId` | string | Yes | The project ID. |
| `dataSliceId` | string | Yes | ID of the data slice. |
| `externalDatasetId` | string | No | The external dataset ID to use when calculating the size of a slice. Use this parameter only when the source is 'externalTestSet'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
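
The `source` parameter carries two conditional rules: `'training'` requires `modelId`, and `'externalTestSet'` requires `externalDatasetId`. A sketch of client-side validation for those rules — the helper name and IDs below are illustrative:

```python
def slice_size_params(project_id, data_slice_id, source,
                      model_id=None, external_dataset_id=None):
    """Build an input payload for DATAROBOT_LIST_DATA_SLICES_SLICE_SIZES,
    enforcing the documented conditional parameters before the call."""
    if source == "training" and model_id is None:
        raise ValueError("source 'training' requires modelId")
    if source == "externalTestSet" and external_dataset_id is None:
        raise ValueError("source 'externalTestSet' requires externalDatasetId")
    params = {"projectId": project_id, "dataSliceId": data_slice_id,
              "source": source}
    if model_id is not None:
        params["modelId"] = model_id
    if external_dataset_id is not None:
        params["externalDatasetId"] = external_dataset_id
    return params


holdout = slice_size_params("proj_1", "slice_1", "holdout")
training = slice_size_params("proj_1", "slice_1", "training", model_id="m_1")
```

Failing fast locally is cheaper than waiting on a server-side 422 for a missing conditional parameter.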

### List Data Sources

**Slug:** `DATAROBOT_LIST_DATA_SOURCES`

Tool to list all available data sources. Use when retrieving the catalog of data connections.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `type` | string ("all" | "databases" | "dr-connector-v1" | "dr-database-v1" | "jdbc") | No | Optional filter by data source type. Allowed values: `all`, `databases`, `dr-connector-v1`, `dr-database-v1`, `jdbc`. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deleted Custom Jobs

**Slug:** `DATAROBOT_LIST_DELETED_CUSTOM_JOBS`

Tool to list all deleted custom jobs in DataRobot. Use when you need to retrieve information about custom jobs that have been deleted from the system.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (1-1000). |
| `offset` | integer | No | Number of results to skip for pagination. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deployment Challengers

**Slug:** `DATAROBOT_LIST_DEPLOYMENT_CHALLENGERS`

Tool to list challenger models for a deployment. Use when retrieving challenger models that can replace the current champion model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deployment Runtime Parameters

**Slug:** `DATAROBOT_LIST_DEPLOYMENT_RUNTIME_PARAMETERS`

Tool to list runtime parameters for a deployment. Use when retrieving deployment runtime parameter configurations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `orderBy` | string ("createdAt" | "-createdAt" | "name" | "-name") | No | The sort order to apply to the runtime parameters list. Prefix the attribute name with a dash to sort in descending order. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deployments

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS`

Tool to list deployments a user can view. Use when retrieving paginated deployments from DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `role` | string ("OWNER" | "USER") | No | Filter by user role on the deployment. |
| `limit` | integer | No | Number of deployments to return (1-100). |
| `offset` | integer | No | Number of deployments to skip. |
| `search` | string | No | Case-insensitive match on label and description. |
| `status` | array | No | Filter by deployment status. |
| `orderBy` | string | No | Sort order. Defaults to lastPredictionTimestamp desc. Allowed values: label, -label, serviceHealth, -serviceHealth, modelHealth, -modelHealth, accuracyHealth, -accuracyHealth, recentPredictions, -recentPredictions, lastPredictionTimestamp, -lastPredictionTimestamp, currentModelDeployedTimestamp, -currentModelDeployedTimestamp, createdAtTimestamp, -createdAtTimestamp, importance, -importance, fairnessHealth, -fairnessHealth, customMetricsHealth, -customMetricsHealth, actualsTimelinessHealth, -actualsTimelinessHealth, predictionsTimelinessHealth, -predictionsTimelinessHealth |
| `tagKeys` | array | No | List of tag keys; OR match across provided values. |
| `createdBy` | string | No | Filter by creator user ID. |
| `tagValues` | array | No | List of tag values; OR match across provided values. |
| `importance` | array | No | Filter by deployment importance levels. |
| `createdByMe` | boolean | No | Filter to deployments created by current user. |
| `modelHealth` | array | No | Filter by model health. Allowed: failing, not_started, passing, unavailable, unknown, warning. |
| `serviceHealth` | array | No | Filter by service health. Allowed: failing, not_started, passing, unavailable, unknown, warning. |
| `accuracyHealth` | array | No | Filter by accuracy health. Allowed: failing, not_started, passing, unavailable, unknown, warning. |
| `buildEnvironmentType` | array | No | Filter by current model build environment type. |
| `championModelTargetType` | string | No | Filter by champion target type. |
| `executionEnvironmentType` | array | No | Filter by execution environment type. |
| `defaultPredictionServerId` | array | No | Filter by default prediction server ID. |
| `lastPredictionTimestampEnd` | string | No | Include deployments with predictions before timestamp. |
| `lastPredictionTimestampStart` | string | No | Include deployments with predictions on/after timestamp. |
| `champion_model_execution_type` | string ("custom_inference_model" | "external" | "dedicated") | No | Filter by champion model execution type. |
| `predictionEnvironmentPlatform` | array | No | Filter by prediction environment platform. |
| `predictionUsageDailyAvgLessThan` | integer | No | Avg daily predictions over last week less than value. |
| `predictionUsageDailyAvgGreaterThan` | integer | No | Avg daily predictions over last week greater than value. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
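
With this many filters, it helps to validate the constrained ones (the 1-100 `limit` range, the health enums) before calling the tool. A minimal query builder covering a few of the filters above — the helper name is illustrative, and only a subset of parameters is shown:

```python
ALLOWED_HEALTH = {"failing", "not_started", "passing", "unavailable",
                  "unknown", "warning"}

def deployments_query(limit=20, offset=0, order_by=None,
                      search=None, service_health=None, created_by_me=None):
    """Build a partial input payload for DATAROBOT_LIST_DEPLOYMENTS,
    checking the documented limit range and serviceHealth enum."""
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    if service_health is not None:
        bad = set(service_health) - ALLOWED_HEALTH
        if bad:
            raise ValueError(f"invalid serviceHealth values: {sorted(bad)}")
    query = {"limit": limit, "offset": offset}
    if order_by is not None:
        query["orderBy"] = order_by  # e.g. "-recentPredictions"
    if search is not None:
        query["search"] = search
    if service_health is not None:
        query["serviceHealth"] = list(service_health)
    if created_by_me is not None:
        query["createdByMe"] = created_by_me
    return query


q = deployments_query(limit=50, order_by="-recentPredictions",
                      service_health=["failing", "warning"])
```

The same pattern extends to `modelHealth` and `accuracyHealth`, which accept the same six health values.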

### List Deployments Accuracy Over Batch

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_ACCURACY_OVER_BATCH`

Tool to retrieve accuracy metrics over batches for a deployment. Use when analyzing batch-level model accuracy trends or comparing performance across batches.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `metric` | string ("AUC" | "Accuracy" | "Balanced Accuracy" | "F1" | "FPR" | "FVE Binomial" | "FVE Gamma" | "FVE Poisson" | "FVE Tweedie" | "Gamma Deviance" | "Gini Norm" | "Kolmogorov-Smirnov" | "LogLoss" | "MAE" | "MAPE" | "MCC" | "NPV" | "PPV" | "Poisson Deviance" | "R Squared" | "RMSE" | "RMSLE" | "Rate@Top10%" | "Rate@Top5%" | "TNR" | "TPR" | "Tweedie Deviance") | No | The accuracy metric to retrieve. |
| `batchId` | string | No | The id of the batch for which metrics are being retrieved. |
| `modelId` | string | No | The id of the model for which metrics are being retrieved. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `segmentAttribute` | string | No | The name of the segment on which segment analysis is being performed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deployments Actuals Data Exports

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_ACTUALS_DATA_EXPORTS`

Tool to retrieve a list of asynchronous actuals data exports for a deployment. Use when you need to monitor or retrieve actuals data export status for a specific deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this number of objects to retrieve. |
| `offset` | integer | No | Number of objects to skip for pagination. |
| `status` | string ("CANCELLED" | "CREATED" | "FAILED" | "SCHEDULED" | "SUCCEEDED" | "None") | No | Status of an actuals data export. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deployments Batch Service Stats

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_BATCH_SERVICE_STATS`

Tool to retrieve service health metrics for a deployment's batch predictions. Use when analyzing batch service performance, execution times, or error rates.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `batchId` | string | No | The id of the batch for which metrics are being retrieved. |
| `modelId` | string | No | The id of the model for which metrics are being retrieved. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. Required if segmentAttribute is specified. |
| `segmentAttribute` | string ("DataRobot-Consumer" | "DataRobot-Remote-IP" | "DataRobot-Host-IP") | No | Segment attribute for segment analysis. |
| `responseTimeQuantile` | number | No | Quantile for responseTime metric (0.0 to 1.0). Default is 0.5 (median). |
| `executionTimeQuantile` | number | No | Quantile for executionTime metric (0.0 to 1.0). Default is 0.5 (median). |
| `slowRequestsThreshold` | integer | No | Threshold in milliseconds for slowRequests metric. Default is 1000ms. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
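
Two constraints above are worth checking client-side: the quantile parameters must lie in [0.0, 1.0], and `segmentValue` is required whenever `segmentAttribute` is given. A sketch of that validation — the helper name and deployment ID are illustrative:

```python
def batch_service_stats_params(deployment_id, response_time_quantile=0.5,
                               execution_time_quantile=0.5,
                               slow_requests_threshold=1000,
                               segment_attribute=None, segment_value=None):
    """Build an input payload for DATAROBOT_LIST_DEPLOYMENTS_BATCH_SERVICE_STATS,
    enforcing the quantile range and the segment pairing rule."""
    for name, q in (("responseTimeQuantile", response_time_quantile),
                    ("executionTimeQuantile", execution_time_quantile)):
        if not 0.0 <= q <= 1.0:
            raise ValueError(f"{name} must be in [0.0, 1.0]")
    if segment_attribute is not None and segment_value is None:
        raise ValueError("segmentValue is required when segmentAttribute is set")
    params = {"deploymentId": deployment_id,
              "responseTimeQuantile": response_time_quantile,
              "executionTimeQuantile": execution_time_quantile,
              "slowRequestsThreshold": slow_requests_threshold}
    if segment_attribute is not None:
        params["segmentAttribute"] = segment_attribute
        params["segmentValue"] = segment_value
    return params


p95 = batch_service_stats_params("dep_1", response_time_quantile=0.95)
```

Raising the quantile to 0.95 as above reports tail latency rather than the default median.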

### List Deployments Challenger Replay Settings

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_CHALLENGER_REPLAY_SETTINGS`

Tool to retrieve challenger replay settings for a deployment. Use when checking scheduled replay configuration for challenger models.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deployment Custom Metrics

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_CUSTOM_METRICS`

Tool to retrieve a list of custom metrics for a deployment. Use when you need to view all custom metrics configured for a specific deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this number of objects to retrieve. |
| `offset` | integer | No | Number of objects to skip for pagination. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deployments Custom Metrics Batch Summary

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_CUSTOM_METRICS_BATCH_SUMMARY`

Tool to retrieve the summary of deployment batch custom metric. Use when you need to get aggregated custom metric data for a specific deployment batch.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to retrieve monitoring stats, defaults to the next top of the hour from now. |
| `start` | string | No | Start of the period to retrieve monitoring stats; defaults to 7 days before the end of the period. |
| `batchId` | string | No | The id of the batch for which metrics are being retrieved. |
| `modelId` | string | No | The model ID of related champion/challenger to retrieve custom metric values for. |
| `deploymentId` | string | Yes | ID of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `customMetricId` | string | Yes | ID of the custom metric. |
| `modelPackageId` | string | No | The model package ID of related champion/challenger to retrieve custom metric values for. |
| `segmentAttribute` | string | No | The name of the segment on which segment analysis is being performed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deployment Custom Metrics Summary

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_CUSTOM_METRICS_SUMMARY`

Tool to retrieve the summary of a deployment custom metric. Use when you need to view aggregated statistics and performance metrics for a specific custom metric within a deployment over a time period.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to retrieve monitoring stats, defaults to the next top of the hour from now. |
| `start` | string | No | Start of the period to retrieve monitoring stats, defaults to 7 days ago from the end of the period. |
| `batchId` | string | No | The id of the batch for which metrics are being retrieved. |
| `modelId` | string | No | The model ID of related champion/challenger to retrieve custom metric values for. |
| `deploymentId` | string | Yes | ID of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `customMetricId` | string | Yes | ID of the custom metric. |
| `modelPackageId` | string | No | The model package ID of related champion/challenger to retrieve custom metric values for. |
| `segmentAttribute` | string | No | The name of the segment on which segment analysis is being performed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployments Custom Metrics Summary

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_CUSTOM_METRICS_SUMMARY_BY_ID`

Tool to retrieve the bulk summary of deployment custom metrics. Use when you need to get an overview of all custom metric values for a deployment over a specific time period.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to retrieve monitoring stats, defaults to the next top of the hour from now. |
| `start` | string | No | Start of the period to retrieve monitoring stats, defaults to 7 days ago from the end of the period. |
| `batchId` | string | No | The id of the batch for which metrics are being retrieved. |
| `modelId` | string | No | The model ID of related champion/challenger to retrieve custom metric values for. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `modelPackageId` | string | No | The model package ID of related champion/challenger to retrieve custom metric values for. |
| `segmentAttribute` | string | No | The name of the segment on which segment analysis is being performed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployment Custom Metrics Values Over Batch

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_CUSTOM_METRICS_VALUES_OVER_BATCH`

Tool to retrieve custom metric values over batch for a deployment. Use when you need to analyze custom metric performance across different batches for a specific deployment and custom metric.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `batchId` | string | No | The id of the batch for which metrics are being retrieved. |
| `modelId` | string | No | The model ID of related champion/challenger to retrieve batch custom metric values for. |
| `deploymentId` | string | Yes | ID of the deployment. |
| `segmentValue` | string | No | The value of the `segmentAttribute` to segment on. |
| `customMetricId` | string | Yes | ID of the custom metric. |
| `modelPackageId` | string | No | The model package ID of related champion/challenger to retrieve batch custom metric values for. |
| `segmentAttribute` | string | No | The name of the segment on which segment analysis is being performed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployment Custom Metrics Values Over Time

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_CUSTOM_METRICS_VALUES_OVER_TIME`

Tool to retrieve custom metric values over time for a deployment. Use when analyzing custom metric trends or monitoring custom metrics across time periods.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to retrieve monitoring stats, defaults to the next top of the hour from now. |
| `start` | string | No | Start of the period to retrieve monitoring stats, defaults to 7 days ago from the end of the period. |
| `modelId` | string | No | The model ID of related champion/challenger to retrieve custom metric values for. |
| `bucketSize` | string | No | Time duration of a bucket; defaults to seven days. |
| `deploymentId` | string | Yes | ID of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `customMetricId` | string | Yes | ID of the custom metric. |
| `modelPackageId` | string | No | The model package ID of related champion/challenger to retrieve custom metric values for. |
| `segmentAttribute` | string | No | The name of the segment on which segment analysis is being performed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
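
The server performs the bucketing, but the default seven-day `bucketSize` partitions the window predictably. A sketch of enumerating the bucket start times a given window should produce, purely for illustration:

```python
from datetime import datetime, timedelta

def bucket_starts(start, end, bucket=timedelta(days=7)):
    """List the bucket start times covering [start, end), mirroring the
    documented seven-day default `bucketSize`."""
    starts = []
    t = start
    while t < end:
        starts.append(t)
        t += bucket
    return starts
```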

### List Deployment Segment Attributes

**Slug:** `DATAROBOT_LIST_DEPLOYMENT_SEGMENT_ATTRIBUTES`

Tool to retrieve segment attributes for a deployment based on monitoring type. Use when you need to analyze deployment segmentation for service health, data drift, accuracy, or other monitoring metrics.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `monitoringType` | string ("serviceHealth" | "dataDrift" | "accuracy" | "humility" | "customMetrics" | "geospatial") | No | The monitoring type for which segment attributes are being retrieved. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployments Feature Drift

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_FEATURE_DRIFT`

Tool to retrieve feature drift scores for a deployment over a time period. Use when you need to analyze feature drift metrics to monitor data quality and distribution changes in production.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to retrieve monitoring data, defaults to the next top of the hour. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `limit` | integer | No | The number of features to return, defaults to 200. |
| `start` | string | No | Start of the period to retrieve monitoring data, defaults to 7 days ago from the end of the period. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `metric` | string ("psi" | "kl_divergence" | "dissimilarity" | "hellinger" | "js_divergence") | No | Metric used to calculate drift score. |
| `offset` | integer | No | The number of features to skip, defaults to 0. |
| `batchId` | string | No | The id of the batch for which metrics are being retrieved. |
| `modelId` | string | No | ID of the model in the deployment. If not set, defaults to the deployment current model. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `segmentAttribute` | string | No | The name of a segment attribute used for segment analysis. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
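
The `limit`/`offset` pair here (and on the other list tools in this section) follows standard offset pagination. A generic sketch of draining such an endpoint; `fetch_page` is a stand-in for whatever client call actually issues the request:

```python
def fetch_all(fetch_page, limit=200):
    """Collect every row from a limit/offset endpoint. A page shorter
    than `limit` signals the last page."""
    rows, offset = [], 0
    while True:
        page = fetch_page(limit, offset)
        rows.extend(page)
        if len(page) < limit:
            return rows
        offset += limit
```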

### List Deployments Feature Drift Over Batch

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_FEATURE_DRIFT_OVER_BATCH`

Tool to retrieve drift-over-batch information for features of a deployment. Use when analyzing feature drift across batches to monitor data quality.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `batchId` | string | No | The id of the batch for which metrics are being retrieved. |
| `modelId` | string | No | The id of the model for which metrics are being retrieved. |
| `driftMetric` | string ("psi" | "kl_divergence" | "dissimilarity" | "hellinger" | "js_divergence") | No | The metric used to calculate data drift scores. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `featureNames` | string | Yes | Comma-separated list of feature names, limited to two per request. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `segmentAttribute` | string | No | The name of the segment on which segment analysis is being performed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
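
Because `featureNames` accepts at most two features per request, querying more features means issuing several calls. A small sketch of splitting a feature list into valid comma-separated `featureNames` values; the helper name is illustrative:

```python
def feature_name_params(features, per_request=2):
    """Split a feature list into comma-separated `featureNames` values
    that respect the two-features-per-request limit."""
    return [",".join(features[i:i + per_request])
            for i in range(0, len(features), per_request)]
```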

### List Deployment Health Settings

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_HEALTH_SETTINGS`

Tool to retrieve deployment health settings. Use when you need to check the configuration for health monitoring of a deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployments Health Settings Defaults

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_HEALTH_SETTINGS_DEFAULTS`

Tool to retrieve default deployment health settings for a deployment. Use when you need to understand the default health monitoring configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployments Humility Stats

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_HUMILITY_STATS`

Tool to retrieve humility stats for a deployment. Use when you need to analyze humility metrics and rule violations for a specific deployment over a time period.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to retrieve monitoring data, defaults to the next top of the hour. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `start` | string | No | Start of the period to retrieve monitoring data, defaults to 7 days ago from the end of the period. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `modelId` | string | No | The id of the model for which metrics are being retrieved. |
| `bucketSize` | string | No | The time duration of a bucket. Must be a multiple of one hour and cannot be longer than the total length of the period. If not set, a default value will be calculated based on the start and end time. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `segmentAttribute` | string ("DataRobot-Consumer" | "DataRobot-Remote-IP" | "DataRobot-Host-IP") | No | Segment attribute for segment analysis. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
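
The `bucketSize` constraints (a multiple of one hour, no longer than the period) are easy to check client-side before calling the tool. A minimal validation sketch, assuming you express the bucket as a `timedelta`:

```python
from datetime import timedelta

def validate_bucket_size(bucket, start, end):
    """Enforce the documented constraints: a positive multiple of one
    hour that does not exceed the start-to-end period."""
    seconds = bucket.total_seconds()
    if seconds <= 0 or seconds % 3600:
        raise ValueError("bucketSize must be a positive multiple of one hour")
    if bucket > end - start:
        raise ValueError("bucketSize cannot exceed the requested period")
    return bucket
```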

### List Deployments Humility Stats Over Time

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_HUMILITY_STATS_OVER_TIME`

Tool to retrieve humility statistics over time for a deployment. Use when monitoring humility rule performance and trends across time periods.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to retrieve monitoring data, defaults to the next top of the hour. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `start` | string | No | Start of the period to retrieve monitoring data, defaults to 7 days ago from the end of the period. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `modelId` | string | No | The id of the model for which metrics are being retrieved. |
| `bucketSize` | string | No | The time duration of a bucket. Needs to be multiple of one hour. Can not be longer than the total length of the period. If not set, a default value will be calculated based on the start and end time. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `segmentAttribute` | string ("DataRobot-Consumer" | "DataRobot-Remote-IP" | "DataRobot-Host-IP") | No | Segment attribute for segment analysis. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployments Model History

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_MODEL_HISTORY`

Tool to retrieve champion model history for a deployment. Use when examining model changes and deployment history over time.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (1-100). |
| `offset` | integer | No | Number of results to skip. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployment Monitoring Batches

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_MONITORING_BATCHES`

Tool to list monitoring batches for a deployment in DataRobot. Use when you need to retrieve paginated monitoring batches with optional filtering by creator, search term, creation time, or prediction timestamps.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `search` | string | No | Search by matching batch name in a case-insensitive manner or exact match of batch ID. |
| `orderBy` | string ("name" | "-name" | "createdAt" | "-createdAt" | "earliestPredictionTimestamp" | "-earliestPredictionTimestamp" | "latestPredictionTimestamp" | "-latestPredictionTimestamp") | No | Order of the returning batches. |
| `createdBy` | string | No | ID of the user who created a batch. |
| `endBefore` | string | No | Filter for batches with an end time before the given time. |
| `startAfter` | string | No | Filter for batches with a start time after the given time. |
| `createdAfter` | string | No | Filter for batches created after the given time. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `createdBefore` | string | No | Filter for batches created before the given time. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
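
The `orderBy` values follow a common convention: a bare field name sorts ascending, a leading `-` sorts descending. A small sketch of validating a value against the documented enum before sending it; the helper is illustrative:

```python
# Sortable fields from the orderBy enum; a leading '-' flips the order.
ORDER_FIELDS = {"name", "createdAt",
                "earliestPredictionTimestamp", "latestPredictionTimestamp"}

def check_order_by(value):
    """Validate `orderBy` against the documented enum; a single leading
    '-' requests descending order."""
    field = value[1:] if value.startswith("-") else value
    if field not in ORDER_FIELDS:
        raise ValueError("unsupported orderBy: %r" % value)
    return value
```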

### List Deployment Monitoring Batch Models

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_MONITORING_BATCHES_MODELS`

Tool to list information about models that have data in a monitoring batch. Use when you need to view which models contributed predictions to a specific batch.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `modelId` | string | No | ID of the model associated with a batch. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `monitoringBatchId` | string | Yes | ID of the monitoring batch. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Get Deployment Monitoring Batch Limits

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_MONITORING_BATCH_LIMITS`

Tool to retrieve the limits related to monitoring batches for a deployment. Use when you need to check constraints on batch size and prediction counts for monitoring operations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Prediction Data Exports

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_PREDICTION_DATA_EXPORTS`

Tool to list prediction data exports for a deployment. Use when retrieving paginated prediction data exports from DataRobot for observability and data exploration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `batch` | string ("false" | "False" | "true" | "True") | No | Filter for batch vs real-time exports. |
| `limit` | integer | No | Specifies the number of rows to return after the offset. |
| `offset` | integer | No | Specifies the number of rows to skip before starting to return rows from the query. |
| `status` | string ("CANCELLED" | "CREATED" | "FAILED" | "SCHEDULED" | "SUCCEEDED" | "WARNING") | No | Status of prediction data export processing. |
| `modelId` | string | No | ID of the model used for the prediction data export. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployments Prediction Results

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_PREDICTION_RESULTS`

Tool to retrieve prediction results for a deployment. Use when you need to analyze prediction outputs and compare them with actual values.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period for which prediction results are being retrieved. |
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `start` | string | No | Start of the period for which prediction results are being retrieved. |
| `offset` | integer | No | Number of results to skip. |
| `batchId` | string | No | The id of the batch for which prediction results are being retrieved. |
| `modelId` | string | No | The id of the model for which prediction results are being retrieved. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `actualsPresent` | boolean | No | Filters prediction results to those with actuals present (true) or with actuals missing (false). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployment Predictions Over Batch

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_PREDICTIONS_OVER_BATCH`

Tool to retrieve prediction metadata over batches for a deployment. Use when analyzing batch prediction performance and trends.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `batchId` | string | No | The id of the batch for which metrics are being retrieved. |
| `modelId` | string | No | The id of the model for which metrics are being retrieved. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `segmentAttribute` | string | No | The name of the segment on which segment analysis is being performed. |
| `includePercentiles` | string | No | Include percentiles in the response; only applicable to deployments with binary classification, location, or regression targets. Allowed values: false, False, true, True. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployment Predictions Over Time

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_PREDICTIONS_OVER_TIME`

Tool to retrieve metrics about predictions over time for a deployment. Use when analyzing prediction patterns, distribution trends, or monitoring prediction characteristics across time periods for a specific deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to retrieve monitoring data, defaults to the next top of the hour. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `start` | string | No | Start of the period to retrieve monitoring data, defaults to 7 days ago from the end of the period. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `modelId` | string | No | The ID of the model for which metrics are being retrieved. |
| `bucketSize` | string ("PT1H" | "P1D" | "P7D" | "P1M") | No | Time duration of prediction buckets. |
| `targetClass` | string | No | Target class to filter out results. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `segmentAttribute` | string | No | The name of the segment on which segment analysis is being performed. |
| `includePercentiles` | string ("false" | "False" | "true" | "True") | No | Include percentiles in the response. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
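
The `bucketSize` enum here uses ISO 8601 durations. A sketch of mapping those values to approximate spans and estimating how many buckets a window yields; `P1M` is a calendar month on the server and is only approximated below:

```python
from datetime import timedelta

# Approximate spans for the documented bucketSize values; "P1M" is a
# calendar month server-side, approximated here as 30 days.
BUCKET_SIZES = {
    "PT1H": timedelta(hours=1),
    "P1D": timedelta(days=1),
    "P7D": timedelta(days=7),
    "P1M": timedelta(days=30),
}

def expected_bucket_count(start, end, bucket_size="P7D"):
    """Rough number of buckets a window should produce (ceiling division)."""
    span = int(BUCKET_SIZES[bucket_size].total_seconds())
    total = int((end - start).total_seconds())
    return -(-total // span)
```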

### List Predictions vs Actuals Over Batch

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_PREDICTIONS_VS_ACTUALS_OVER_BATCH`

Tool to retrieve metrics about predictions and actuals over a specific set of batches. Use when you need to analyze mean predicted and actual values, or predicted and actual class distributions, for a deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `batchId` | string | No | The id of the batch for which metrics are being retrieved. |
| `modelId` | string | No | The id of the model for which metrics are being retrieved. |
| `targetClass` | string | No | Target class to filter out results. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `segmentAttribute` | string | No | The name of the segment on which segment analysis is being performed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployments Predictions Vs Actuals Over Time

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_PREDICTIONS_VS_ACTUALS_OVER_TIME`

Tool to retrieve predictions vs actuals over time for a deployment. Use when analyzing prediction quality and comparing predicted values with actual outcomes across time periods.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to retrieve monitoring data, defaults to the next top of the hour. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: ``2019-08-01T00:00:00Z``. |
| `start` | string | No | Start of the period to retrieve monitoring data, defaults to 7 days ago from the end of the period. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: ``2019-08-01T00:00:00Z``. |
| `modelId` | string | No | The ID of the model for which metrics are being retrieved. |
| `bucketSize` | string ("PT1H" | "P1D" | "P7D" | "P1M") | No | Time duration options for buckets. |
| `targetClass` | string | No | Target class to filter out results. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the `segmentAttribute` to segment on. |
| `segmentAttribute` | string | No | The name of the segment on which segment analysis is being performed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
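
Once the predicted and actual bucket means are retrieved, a simple accuracy summary can be computed locally. A sketch of a mean absolute error over buckets; the `predicted`/`actual` keys are illustrative, not the endpoint's exact response fields:

```python
def mean_abs_error(buckets):
    """Mean absolute difference between predicted and actual bucket
    means, skipping buckets whose actuals are missing."""
    diffs = [abs(b["predicted"] - b["actual"])
             for b in buckets
             if b.get("actual") is not None]
    return sum(diffs) / len(diffs) if diffs else None
```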

### List Deployment Quota Consumers

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_QUOTA_CONSUMERS`

Tool to retrieve deployment quota consumers. Use when querying resource consumption or quota allocation for a specific deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployment Retraining Policies

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_RETRAINING_POLICIES`

Tool to list retraining policies for a deployment. Use when retrieving policies that automate model retraining based on triggers.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployment Retraining Settings

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_RETRAINING_SETTINGS`

Tool to fetch deployment retraining settings. Use when you need to retrieve the retraining configuration for a specific deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### List Deployments Segment Values

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_SEGMENT_VALUES`

Tool to retrieve deployment segment values for monitoring and analytics. Use when filtering or analyzing deployment predictions by segment attributes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `search` | string | No | The search query to filter the list of segment values. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentAttribute` | string | No | The name of the segment attribute whose values should be retrieved. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deployments Service Stats

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_SERVICE_STATS`

Tool to retrieve service health metrics for a deployment. Use when analyzing deployment service performance, request rates, error rates, or execution times.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to retrieve monitoring data, defaults to the next top of the hour. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `start` | string | No | Start of the period to retrieve monitoring data, defaults to 7 days ago from the end of the period. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `modelId` | string | No | The id of the model for which metrics are being retrieved. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `segmentAttribute` | string ("DataRobot-Consumer" | "DataRobot-Remote-IP" | "DataRobot-Host-IP") | No | Segment attribute for segment analysis. |
| `responseTimeQuantile` | number | No | Quantile for responseTime metric (0.0 to 1.0). Default is 0.5 (median). |
| `executionTimeQuantile` | number | No | Quantile for executionTime metric (0.0 to 1.0). Default is 0.5 (median). |
| `slowRequestsThreshold` | integer | No | Threshold in milliseconds for slowRequests metric. Default is 1000ms. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
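
Both `start` and `end` must be top-of-the-hour RFC3339 strings. An illustrative stdlib helper (not part of any SDK) that builds a window matching the documented defaults, where `end` is the next top of the hour and `start` is seven days earlier:

```python
from datetime import datetime, timedelta, timezone

def top_of_hour_window(days_back=7, now=None):
    """Return (start, end) RFC3339 strings aligned to the top of the hour,
    e.g. ("2019-07-25T11:00:00Z", "2019-08-01T11:00:00Z")."""
    now = now or datetime.now(timezone.utc)
    # Truncate to the current hour, then step forward one hour.
    end = now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    start = end - timedelta(days=days_back)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return start.strftime(fmt), end.strftime(fmt)
```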

### List Deployment Service Stats Over Batch

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_SERVICE_STATS_OVER_BATCH`

Tool to retrieve service health metrics over batch for a deployment. Use when analyzing service performance across different batches.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `metric` | string ("totalPredictions" | "totalRequests" | "slowRequests" | "executionTime" | "responseTime" | "userErrorRate" | "serverErrorRate" | "numConsumers" | "cacheHitRatio") | No | Service health metric types. |
| `batchId` | string | No | The id of the batch for which metrics are being retrieved. |
| `modelId` | string | No | The id of the model for which metrics are being retrieved. |
| `quantile` | number | No | Quantile for the executionTime and responseTime metrics. |
| `threshold` | integer | No | Threshold for the slowRequests metric. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `segmentAttribute` | string ("DataRobot-Consumer" | "DataRobot-Remote-IP" | "DataRobot-Host-IP") | No | Segment attribute for segment analysis. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deployments Service Stats Over Time

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_SERVICE_STATS_OVER_TIME`

Tool to retrieve service health metrics over time for a deployment. Use when analyzing service performance trends and patterns across time periods.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to retrieve monitoring data, defaults to the next top of the hour. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `start` | string | No | Start of the period to retrieve monitoring data, defaults to 7 days ago from the end of the period. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `metric` | string ("totalPredictions" | "totalRequests" | "slowRequests" | "executionTime" | "responseTime" | "userErrorRate" | "serverErrorRate" | "numConsumers" | "cacheHitRatio" | "medianLoad" | "peakLoad") | No | Service health metric types. |
| `modelId` | string | No | The ID of the model for which metrics are being retrieved. |
| `quantile` | number | No | A quantile for resulting data, used if metric is executionTime or responseTime, defaults to 0.5. |
| `threshold` | integer | No | A threshold for filtering results, used if metric is slowRequests; defaults to 1000. |
| `bucketSize` | string | No | The time duration of a bucket. Must be a multiple of one hour and cannot be longer than the total length of the period. If not set, a default value is calculated based on the start and end time. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `segmentAttribute` | string ("DataRobot-Consumer" | "DataRobot-Remote-IP" | "DataRobot-Host-IP") | No | Segment attribute for segment analysis. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
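
The `bucketSize` constraints can be checked before calling. This sketch works on `timedelta` values and leaves conversion to the API's duration-string format (not specified here) to the caller:

```python
from datetime import timedelta

def validate_bucket_size(bucket, period):
    """Enforce the documented bucketSize constraints: a positive whole
    multiple of one hour that does not exceed the start..end period."""
    seconds = bucket.total_seconds()
    if seconds <= 0 or seconds % 3600 != 0:
        raise ValueError("bucketSize must be a positive multiple of one hour")
    if bucket > period:
        raise ValueError("bucketSize cannot be longer than the period")
    return bucket
```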

### List Deployment Settings

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_SETTINGS`

Tool to retrieve deployment settings. Use when you need to check configuration settings for data drift, predictions, accuracy, and other deployment features.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deployments Shared Roles

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_SHARED_ROLES`

Tool to get a model deployment's access control list. Use when you need to view who has access to a deployment and their permission levels.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | No | Only return roles for a user, group or organization with this identifier. |
| `name` | string | No | Only return roles for a user, group or organization with this name. |
| `limit` | integer | No | At most this many results are returned per page. Defaults to 10. |
| `offset` | integer | No | This many results will be skipped for pagination. Defaults to 0. |
| `deploymentId` | string | Yes | The ID of the deployment. |
| `shareRecipientType` | string ("user" | "group" | "organization") | No | Describes the type of share recipient. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
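
An illustrative helper for assembling this tool's request parameters, sending only the filters the caller set and enforcing the `shareRecipientType` enum (the function name and defaults are this example's, not the API's):

```python
def shared_roles_params(deployment_id, *, id=None, name=None, limit=10,
                        offset=0, share_recipient_type=None):
    """Build the query parameters, keeping only filters that were set."""
    if share_recipient_type not in (None, "user", "group", "organization"):
        raise ValueError("shareRecipientType must be user, group, or organization")
    params = {
        "deploymentId": deployment_id,
        "limit": limit,
        "offset": offset,
        "id": id,
        "name": name,
        "shareRecipientType": share_recipient_type,
    }
    # Unset optional filters are dropped rather than sent as null.
    return {k: v for k, v in params.items() if v is not None}
```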

### List Deployments Target Drift

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_TARGET_DRIFT`

Tool to retrieve target drift for a deployment over a specified time period. Use when analyzing target distribution changes between training and prediction data.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `end` | string | No | End of the period to retrieve monitoring data, defaults to the next top of the hour. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `start` | string | No | Start of the period to retrieve monitoring data, defaults to 7 days ago from the end of the period. Note: this field only accepts top of the hour RFC3339 datetime strings, for example: 2019-08-01T00:00:00Z. |
| `metric` | string ("psi" | "kl_divergence" | "dissimilarity" | "hellinger" | "js_divergence") | No | Metrics used to calculate drift score. |
| `batchId` | string | No | The id of the batch for which metrics are being retrieved. |
| `modelId` | string | No | An ID of the model in the deployment. If not set, defaults to the deployment current model. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `segmentValue` | string | No | The value of the segmentAttribute to segment on. |
| `segmentAttribute` | string | No | The name of a segment attribute used for segment analysis. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
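
The drift scores themselves are computed server-side; for orientation, the `psi` metric is the standard Population Stability Index, which can be sketched over two pre-binned distributions:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index over pre-binned distributions.
    `expected` and `actual` are per-bin proportions that each sum to 1;
    eps guards against empty bins."""
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        score += (a - e) * math.log(a / e)
    return score
```

Identical training and prediction distributions score near zero; the score grows as the target distribution shifts.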

### List Training Data Exports

**Slug:** `DATAROBOT_LIST_DEPLOYMENTS_TRAINING_DATA_EXPORTS`

Tool to list training data exports for a deployment. Use when retrieving paginated training data exports from DataRobot for observability and data exploration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Specifies the number of rows to return after the offset. |
| `offset` | integer | No | Specifies the number of rows to skip before starting to return rows from the query. |
| `modelId` | string | No | Id of model used for training data export. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Deployments Model Secondary Dataset Configuration History

**Slug:** `DATAROBOT_LIST_DEPLOY_MODEL_SECONDARY_DS_CONFIG_HISTORY`

Tool to list the secondary datasets configuration history for a deployment. Use when tracking changes to secondary dataset configurations over time.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Number of items to return, defaults to 100 if not provided. |
| `offset` | integer | No | Number of items to skip. Defaults to 0 if not provided. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Entitlement Set Leases

**Slug:** `DATAROBOT_LIST_ENTITLEMENT_SET_LEASES`

Tool to retrieve entitlement set leases from DataRobot. Use when you need to list or filter entitlement set leases by entitlement set ID, tenant ID, or status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Pagination limit (max 100). |
| `offset` | integer | No | Pagination offset. |
| `status` | string | No | Status to filter leases by (e.g., ACTIVE, EXPIRED). |
| `tenantId` | string | No | UUID of the tenant to filter leases by. |
| `entitlementSetId` | string | No | UUID of the entitlement set to filter leases by. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Entity Notification Policy Templates Related Policies

**Slug:** `DATAROBOT_LIST_ENTITY_NOTIFY_POLICY_TPL_RELATED_POLICIES`

Tool to retrieve all policies created from a notification policy template. Use when you need to view policies associated with a specific template that are visible to the user.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results to return. |
| `offset` | integer | No | How many results to skip (for pagination). |
| `policyId` | string | Yes | The ID of the notification policy template. |
| `relatedEntityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "customjob" | "Customjob" | "CUSTOMJOB") | Yes | Type of related entity (deployment or custom job). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Entity Notification Policy Templates Shared Roles

**Slug:** `DATAROBOT_LIST_ENTITY_NOTIFY_POLICY_TPL_SHARED_ROLES`

Tool to list shared roles for an entity notification policy template. Use when retrieving the access control list for notification policy templates.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | No | Filter roles for a user, group, or organization with this identifier. |
| `name` | string | No | Filter roles for a user, group, or organization with this name. |
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `policyId` | string | Yes | The ID of the notification policy template. |
| `relatedEntityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "customjob" | "Customjob" | "CUSTOMJOB") | Yes | Type of related entity (deployment or customjob). |
| `shareRecipientType` | string ("user" | "group" | "organization") | No | Type of share recipient for filtering access controls. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Entity Tags

**Slug:** `DATAROBOT_LIST_ENTITY_TAGS`

Tool to retrieve a list of entity tags from DataRobot. Use when you need to list or search for entity tags.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of records to return in the range from 1 to 100. Default 100. |
| `offset` | integer | No | The number of records to skip over. Default 0. |
| `search` | string | No | Returns only Entity Tags with names that match the given string. |
| `orderBy` | string ("id" | "-id" | "name" | "-name" | "entityType" | "-entityType") | No | The order in which to sort and return Entity Tags. Valid values: id, -id, name, -name, entityType, -entityType (prefix with - for descending order). |
| `entityType` | string ("experiment_container") | No | Entity type for filtering entity tags. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
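
A small sketch of the `orderBy` convention, where a leading `-` selects descending order (helper and constant names are illustrative):

```python
VALID_ORDER_FIELDS = {"id", "name", "entityType"}

def parse_order_by(order_by):
    """Split an orderBy value such as "-name" into (field, descending),
    rejecting fields this endpoint does not sort on."""
    descending = order_by.startswith("-")
    field = order_by[1:] if descending else order_by
    if field not in VALID_ORDER_FIELDS:
        raise ValueError(f"unsupported orderBy field: {field}")
    return field, descending
```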

### List Event Logs

**Slug:** `DATAROBOT_LIST_EVENT_LOGS`

Tool to retrieve audit log records from DataRobot. Use when you need to track user actions, administrative events, or system activities for compliance and monitoring.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `event` | string ("ADLS OAuth Failed" | "ADLS OAuth Token Obtained" | "ADLS OAuth Token Renewal Succeeded" | "ADLS OAuth User Login Started" | "ADLS OAuth User Login Succeeded" | "API Key Created" | "API Key Deleted" | "API Key Updated" | "AZURE OAuth Failed" | "AZURE OAuth Token Obtained" | "AZURE OAuth Token Renewal Succeeded" | "AZURE OAuth User Login Started" | "AZURE OAuth User Login Succeeded" | "Abort Autopilot" | "Access granted to a resource for a subject entity" | "Access granted to a resource referenced by an experiment container" | "Access request created" | "Access revoked to a resource for a subject entity" | "Access revoked to a resource referenced by an experiment container" | "Activate account" | "Activated On First Login" | "Actuals Uploaded" | "Add Model" | "Add New Dataset For Predictions" | "Add SAML configuration" | "Advanced Tuning Requested" | "App Config Changed" | "App Template Cloned" | "App Template Created" | "App Template Deleted" | "App Template Media Deleted" | "App Template Media Uploaded" | "App Template Updated" | "Approval Workflow Policy Action" | "Approval Workflow Policy Created" | "Approval Workflow Policy Deleted" | "Approval Workflow Policy Updated" | "Approve account" | "Association ID Set" | "Automated Application Access Revoked from the Group" | "Automated Application Access Revoked from the Organization" | "Automated Application Access Revoked from the User" | "Automated Application Created" | "Automated Application Deleted" | "Automated Application Domain Prefix Changed" | "Automated Application Duplicated" | "Automated Application Shared" | "Automated Application Shared with Group" | "Automated Application Shared with Organization" | "Automated Application Upgraded" | "Automated Demo Application Created" | "Automated Document Created" | "Automated Document Deleted" | "Automated Document Downloaded" | "Automated Document Previewed" | "Automated Document Requested" | "Automatic Time Series Task Plan Requested" | "Available Forecast Points Computation Job Started" | "Base Image Built" | "Batch Monitoring Disabled" | "Batch Monitoring Enabled" | "Batch Prediction Created from Dataset" | "Batch prediction job aborted" | "Batch prediction job completed" | "Batch prediction job created" | "Batch prediction job failed" | "Batch prediction job started" | "Bias And Fairness Cross Class Calculated" | "Bias And Fairness Insights Calculated" | "Bias And Fairness Per Class Calculated" | "Bias and Fairness monitoring settings updated." | "Bias and Fairness protected features specified." | "Blending Models Limit Exceeded" | "Branded Theme Created" | "Branded Theme Deleted" | "Branded Theme Updated" | "Bulk Datasets Deleted" | "Bulk Datasets Tags Appended" | "CCM Balancer Terminated" | "CCM CLUSTER Reprovisioned" | "CCM Cluster Created" | "CCM Cluster Terminated" | "CCM Resource Group Created" | "Calculation of prediction intervals is requested" | "Challenger Insight Generation Started" | "Challenger Model Created" | "Challenger Model Deleted" | "Challenger Model Promoted" | "Challenger Models Disabled" | "Challenger Models Enabled" | "Change Request Cancelled" | "Change Request Created" | "Change Request Reopened" | "Change Request Resolved" | "Change Request Review Added" | "Change Request Review Requested" | "Change Request Updated" | "Change password" | "Clustering Cluster Names Updated" | "Code Snippet Created" | "Codespace Created" | "Codespace Deleted" | "Codespace Metadata Edited" | "Codespace Session Started" | "Codespace Session Stopped" | "Comment Created" | "Comment Deleted" | "Comment Updated" | "Completed Feature Discovery Secondary Datasets" | "Completed Feature Discovery for Primary Dataset" | "Completed Relationship Quality Assessment" | "Compliance Doc Deleted" | "Compliance Doc Downloaded" | "Compliance Doc Generated" | "Compliance Doc Previewed" | "Compute Cluster Added" | "Compute Cluster Deleted" | "Compute Cluster Updated" | "Compute External Insights" | "Compute Reason Codes" | "Create Memory Agent" | "Create Memory Event" | "Create Memory Session" | "Create account" | "Created dataset from Data Engine workspace" | "Created dataset version from Data Engine workspace" | "Credential Created" | "Credential Deleted" | "Credential Updated" | "Credential Values Retrieved Based On OAuth Configuration ID" | "Custom Application Access Revoked from the Group" | "Custom Application Access Revoked from the Organization" | "Custom Application Access Revoked from the User" | "Custom Application Created" | "Custom Application Deleted" | "Custom Application Failed to Start" | "Custom Application Managed Image Created" | "Custom Application Published" | "Custom Application Renamed" | "Custom Application Shared with Group" | "Custom Application Shared with Organization" | "Custom Application Shared with User" | "Custom Application Source Access Revoked from the Group" | "Custom Application Source Access Revoked from the Organization" | "Custom Application Source Access Revoked from the User" | "Custom Application Source Shared with Group" | "Custom Application Source Shared with Organization" | "Custom Application Source Shared with User" | "Custom Application Started" | "Custom Application Stopped" | "Custom Application Visited" | "Custom Application Visited by Guest" | "Custom Job Run Executed" | "Custom Metric Bulk Upload Succeeded" | "Custom Metric Creation Succeeded" | "Custom Metric Dataset Upload Succeeded" | "Custom Metric JSON Upload Succeeded" | "Custom Model Conversion Failed" | "Custom Model Conversion Files Uploaded" | "Custom Model Conversion Succeeded" | "Custom Model Updated from Codespace" | "Custom Model Version Uploaded to Codespace" | "Custom RBAC Access Role Created" | "Custom RBAC Access Role Deleted" | "Custom RBAC Access Role Updated" | "Custom Registered Model Created" | "Custom Registered Model Version Added" | "Custom Task Deploy" | "Custom Task Fit" | "Custom inference model added" | "Custom inference model assign training data request received" | "Custom inference model updated" | "Custom inference model version assign training data request received" | "Custom inference model version created from remote repository content" | "Custom model item added" | "Custom model item created from template" | "Custom task added" | "Custom task updated" | "Custom task version added" | "Data Connection Created" | "Data Connection Deleted" | "Data Connection Tested" | "Data Connection Updated" | "Data Matching ANN Index Profile Built" | "Data Matching Query Requested" | "Data Sample Queried For Wrangling" | "Data Sampled for Chunk Definition" | "Data Source is created" | "Data Sources Permadelete Executed" | "Data Sources Permadelete Failed" | "Data Sources Permadelete Submitted" | "Data Store Config Request Submitted" | "Data Stores Permadelete Executed" | "Data Stores Permadelete Failed" | "Data Stores Permadelete Submitted" | "Data engine query generator created" | "Data engine query generator deleted" | "Data engine workspace created" | "Data engine workspace deleted" | "Data engine workspace state previewed" | "Data engine workspace updated" | "Dataset Categories Modified" | "Dataset Column Aliases Modified" | "Dataset Created" | "Dataset Deleted" | "Dataset Description Modified" | "Dataset Download" | "Dataset Materialized" | "Dataset Name Modified" | "Dataset Reloaded" | "Dataset Shared" | "Dataset Sharing Removed" | "Dataset Tags Modified" | "Dataset Undeleted" | "Dataset Upload" | "Dataset Upload is Completed" | "Dataset Version Created from Recipe" | "Dataset Version Deleted" | "Dataset Version Undeleted" | "Dataset featurelist created" | "Dataset featurelist deleted" | "Dataset featurelist updated" | "Dataset for predictions with actual value column processed" | "Dataset refresh job created" | "Dataset refresh job deleted" | "Dataset refresh job updated" | "Dataset relationship created" | "Dataset relationship updated" | "Dataset transform created" | "Datasets Permadelete Executed" | "Datasets Permadelete Failed" | "Datasets Permadelete Submitted" | "Deactivate Account" | "Decision Flow Created" | "Decision Flow Model Package Created" | "Decision Flow Test Downloaded" | "Decision Flow Version Created" | "Decision Flow Version Deleted" | "Default value for Do-Not-Derive is changed" | "Delete SAML configuration" | "Deny account" | "Deploy Model To Hadoop" | "Deployment Activated" | "Deployment Actuals Export Created" | "Deployment Added" | "Deployment Deactivated" | "Deployment Deleted" | "Deployment Humility Rule Added" | "Deployment Humility Rule Deleted" | "Deployment Humility Rule Submitted" | "Deployment Humility Rule Updated" | "Deployment Humility Setting Updated" | "Deployment Monitoring Batch Created" | "Deployment Monitoring Timeliness Setting Changed" | "Deployment Permanently Erased" | "Deployment Predictions Data Permanently Erased" | "Deployment Processing Limit Interval Changed" | "Deployment Statistics Reset" | "Deployment prediction export created" | "Deployment prediction warning setting updated" | "Deployment training data export created" | "Detected Data Quality: Disguised Missing Values" | "Detected Data Quality: Excess Zero" | "Detected Data Quality: Imputation Leakage" | "Detected Data Quality: Inconsistent Gaps" | "Detected Data Quality: Inliers" | "Detected Data Quality: Lagged Features" | "Detected Data Quality: Leading or Trailing Series" | "Detected Data Quality: Missing Documents" | "Detected Data Quality: Missing Images" | "Detected Data Quality: Multicategorical Invalid Format" | "Detected Data Quality: New Series in Recent Data" | "Detected Data Quality: Outliers" | "Detected Data Quality: Quantile Target Sparsity" | "Detected Data Quality: Quantile Target Zero Inflation" | "Detected Data Quality: Target Leakage" | "Detected Data Quality: Target had infrequent negative values" | "Do-Not-Derive is used" | "Documentation Request" | "Download All Charts" | "Download Chart" | "Download Codegen" | "Download Codegen From Deployment" | "Download Deployment Chart" | "Download Model" | "Download Model Package" | "Download Model Package From Deployment" | "Download Predictions" | "Empty Catalog Item Created" | "Empty Cluster Status Created" | "Entitlement Definition Created" | "Entitlement Definition Deleted" | "Entitlement Definition Updated" | "Entitlement Set Created" | "Entitlement Set Deleted" | "Entitlement Set Lease Created" | "Entitlement Set Lease Deleted" | "Entitlement Set Lease Updated" | "Entitlement Set Updated" | "Entitlement Set Updated Entitlements" | "Entity Notification Channel Created" | "Entity Notification Channel Deleted" | "Entity Notification Channel Updated" | "Entity Notification Policy Created" | "Entity Notification Policy Deleted" | "Entity Notification Policy Updated" | "Entity Tag Created" | "Entity Tag Deleted" | "Entity Tag Updated" | "Entity notification channel created" | "Ephemeral Session Started" | "Ephemeral Session Stopped" | "Experiment Container Created" | "Experiment Container Dataset Registered" | "Experiment Container Dataset Unregistered" | "Experiment Container Deleted " | "Experiment Container Entity Linked" | "Experiment Container Entity Migrated" | "Experiment Container Entity Moved" | "Experiment Container Entity Unlinked" | "Experiment Container Reference To Catalog Dataset Removed" | "Experiment Container Reference To Catalog Dataset Version Removed" | "Experiment Container Updated" | "External Predictions Configured" | "External Registered Model Created" | "External Registered Model Version Added" | "FEAR Predict Job Started" | "FaaS Function Created" | "FaaS Function Deleted" | "FaaS Function Perma Deleted" | "FaaS Function Updated" | "Failed Decision Flow Test" | "Feature Discovery Relationship Quality Assessment Inputs Metrics" | "Feature Discovery Relationship Quality Assessment Warnings Metrics" | "Feature Drift Settings Changed" | "Feature Over Geo Computed" | "File Deleted" | "File Download" | "File Permadelete Executed" | "File Permadelete Failed" | "File Permadelete Submitted" | "File Shared" | "File Sharing Removed" | "File Undeleted" | "File Upload" | "File Upload is Completed" | "Finish Autopilot" | "First Login After DR Account Migration" | "GenAI Agent Chat Completion Requested" | "GenAI Chat Created" | "GenAI Chat Deleted" | "GenAI Chat Prompt Created" | "GenAI Chat Prompt Deleted" | "GenAI Chat Prompt Updated" | "GenAI Chat Updated" | "GenAI Comparison Chat Created" | "GenAI Comparison Chat Deleted" | "GenAI Comparison Chat Updated" | "GenAI Comparison Prompt Created" | "GenAI Comparison Prompt Deleted" | "GenAI Comparison Prompt Updated" | "GenAI Cost Metric Configuration Created" | "GenAI Cost Metric Configuration Deleted" | "GenAI Cost Metric Configuration Updated" | "GenAI Evaluation Dataset Configuration Created" | "GenAI Evaluation Dataset Configuration Deleted" | "GenAI Evaluation Dataset Configuration Updated" | "GenAI External Vector Database Updated" | "GenAI Insights Upserted" | "GenAI LLM Blueprint Created" | "GenAI LLM Blueprint Created from Chat Prompt" | "GenAI LLM Blueprint Created from LLM Blueprint" | "GenAI LLM Blueprint Deleted" | "GenAI LLM Blueprint Sent to Model Workshop" | "GenAI LLM Blueprint Updated" | "GenAI LLM Test Configuration Created" | "GenAI LLM Test Configuration Deleted" | "GenAI LLM Test Configuration Updated" | "GenAI LLM Test Result Created" | "GenAI LLM Test Result Deleted" | "GenAI LLM Test Result Updated" | "GenAI LLM Test Suite Created" | "GenAI LLM Test Suite Deleted" | "GenAI LLM Test Suite Updated" | "GenAI Metrics Transferred to Model Workshop" | "GenAI Moderation Config Saved" | "GenAI Moderation Model Deployed" | "GenAI Playground Created" | "GenAI Playground Deleted" | "GenAI Playground Trace Exported" | "GenAI Playground Updated" | "GenAI Prompt Template Created" | "GenAI Prompt Template Deleted" | "GenAI Prompt Template Version Created" | "GenAI Vector Database Created" | "GenAI Vector Database Deleted" | "GenAI Vector Database Downloaded" | "GenAI Vector Database Exported" | "GenAI Vector Database Updated" | "General Feedback Submitted" | "Generic Custom Job Created" | "Generic Custom Job Manual Run Created" | "Generic Custom Job Scheduled Run Created" | "Geometry Over Geo Computed" | "Geospatial Feature Transform Created" | "Geospatial Primary Location Column Selected" | "Global SAML Configuration Added" | "Global SAML Configuration Deleted" | "Global SAML Configuration Updated" | "Group Members Updated" | "Group created" | "Group deleted" | "Group updated" | "Hosted Custom Metric Custom Job Created" | "Hosted Custom Metric Deployment Connection Created" | "Incremental Learning Model Created" | "Interaction Feature Created" | "Interaction Feature Deployment Created" | "Invitation Accepted" | "Invitation sent" | "Job definition created" | "Job definition updated" | "Login Fail" | "Login Succeeded Via Global SAML SSO" | "Login Success Via SAML SSO" | "Login Successful" | "Logout" | "MLOPS Integrations Deployment Launched" | "MLOPS Integrations Deployment Model Replaced" | "MLOPS Integrations Deployment Stopped" | "MLOPS Integrations Prediction Environment Created" | "MLOPS Integrations Prediction Environment Deleted" | "MLOps Installer Download Request Received" | "Managed Image Built" | "Memory Events List Requested" | "Memory Session Delete Requested" | "Memory Sessions List Requested" | "Model Deployment Access Revoked" | "Model Deployment Shared" | "Model Insights Deleted" | "Model Insights Job Submitted" | "Models Starred" | "Multi-Factor Auth Disable" | "Multi-Factor Auth Enable" | "Multilabel Labelwise ROC With Missing TPR Or FPR Requested" | "Native Registered Model Created" | "Native Registered Model Version Added" | "Network Policy Created" | "Network Policy Deleted" | "Network Policy Updated" | "No predictors are left because of Do-Not-Derive" | "Non Existent Value Tracker Attachment Removed" | "Notebook Conversion to Codespace Complete" | "Notebook Conversion to Codespace Initiated" | "Notebook Created" | "Notebook Deleted" | "Notebook Environment Variable Deleted" | "Notebook Environment Variable Edited" | "Notebook Environment Variables Created" | "Notebook Environment Variables Deleted" | "Notebook Metadata Edited" | "Notebook Revision Created" | "Notebook Revision Deleted" | "Notebook Revision Restored" | "Notebook Schedule Created" | "Notebook Schedule Deleted" | "Notebook Schedule Disabled" | "Notebook Schedule Enabled" | "Notebook Schedule Launched" | "Notebook Session Ports Created" | "Notebook Session Ports Deleted" | "Notebook Session Ports Updated" | "Notebook Session Started" | "Notebook Session Stopped" | "Notification Channel Deleted" | "Notification Channel Template Created" | "Notification Channel Template Deleted" | "Notification Channel Template Updated" | "Notification Custom Job Created" | "Notification Policy Created" | "Notification Policy Created From Template" | "Notification Policy Deleted" | "Notification Policy Template Created" | "Notification Policy Template Deleted" | "Notification Policy Template Updated" | "Notification Policy Updated" | "Notification channel created" | "Notification channel deleted" | "Notification channel updated" | "Notification policy created" | "Notification policy deleted" | "Notification policy updated" | "Number of bias mitigation jobs on Autopilot stage." | "OAuth Provider Access Token Generated" | "OAuth Provider Authorization Created" | "OAuth Provider Authorization Revoked" | "OCR Job Resource Completed" | "OCR Job Resource Created" | "OCR Job Resource Started" | "OIDC Configuration Created" | "OIDC Configuration Deleted" | "OIDC Configuration Updated" | "Online Conformal PI Calculation Requested" | "Organization Perma-Deletion Completed" | "Organization Perma-Deletion Failed" | "Organization Perma-Deletion Marked" | "Organization Perma-Deletion Requested" | "Organization Perma-Deletion Started" | "Organization Perma-Deletion Unmarked" | "Organization created" | "Organization deleted" | "Organization updated" | "Organizations Perma-Deletion Requested" | "PPS Docker Image Download Request Received" | "Period accuracy file validation failed" | "Period accuracy file validation successful" | "Period accuracy insight computed" | "Pipeline downsampling build and run started." | "Pipeline downsampling run failed to start." | "Predictions by Forecast Date Settings Updated" | "Prime Downloaded" | "Prime Run" | "Project Access Revoked from the Group" | "Project Access Revoked from the Organization" | "Project Access Revoked from the User" | "Project Autopilot Configured" | "Project Cloned" | "Project Created" | "Project Created from Dataset" | "Project Created from Project Export File" | "Project Created from Wrangled Dataset" | "Project Deleted" | "Project Description Updated" | "Project Exported as Project Export File" | "Project Options Retrieved" | "Project Options Updated" | "Project Permadelete Executed" | "Project Permadelete Failed" | "Project Permadelete Submitted" | "Project Renamed" | "Project Restored" | "Project Shared" | "Project Shared with Group" | "Project Shared with Organization" | "Project Target Selected" | "Published Recipe Data Uploaded" | "Rate limit user group changed" | "Recipe Access Revoked from Group" | "Recipe Access Revoked from Organization" | "Recipe Access Revoked from User" | "Recipe 
Created" | "Recipe Deleted" | "Recipe Operations Added" | "Recipe Published" | "Recipe Shared" | "Recipe Shared with Group" | "Recipe Shared with Organization" | "Recipe metadata updated" | "Registered Model Shared" | "Registered Model Updated" | "Registered Model Version Stage Transitioned" | "Registered Model Version Updated" | "Remote Repository Registered" | "Replaced Model" | "Request External Insights" | "Request External Insights - All Datasets" | "Request Model Insights" | "Restart Autopilot" | "Restore Reduced Features" | "Retraining Custom Job Created" | "Retraining Policy Cancelled" | "Retraining Policy Created" | "Retraining Policy Deleted" | "Retraining Policy Failed" | "Retraining Policy Started" | "Retraining Policy Succeeded" | "RuleFit Code Downloaded" | "SHAP Impact Computed" | "SHAP Matrix Computed" | "SHAP Predictions Explanations Computed" | "SHAP Predictions Explanations Preview Computed" | "SHAP Training Predictions Explanations Computed" | "Secure Configuration Created" | "Secure Configuration Deleted" | "Secure Configuration Shared" | "Secure Configuration Sharing Removed" | "Secure Configuration Values Updated" | "Segment Analysis Enabled" | "Segment Attributes Specified" | "Select Model Metric" | "ServiceUser Created" | "ServiceUser Deleted" | "ServiceUser Impersonated Token Requested" | "ServiceUser Token Requested" | "ServiceUser Updated" | "Start Autopilot" | "Successful Decision Flow Test" | "Successful Login using OIDC flow" | "Successful Login using OIDC token exchange" | "Successful Login via Google Idp" | "Target is set as Do-Not-Derive" | "Tenant Created" | "Tenant Encryption Key generated and managed by DataRobot KMS" | "Tenant Encryption Key rotated" | "Tenant Encryption Key scheduled for deletion" | "Tenant Encryption Key was set for the tenant" | "Tenant Perma-Deletion Completed" | "Tenant Perma-Deletion Failed" | "Tenant Perma-Deletion Requested" | "Tenant Perma-Deletion Started" | "Tenant Updated" | "Text prediction 
explanations computed" | "Tracing Dependency Graph Requested" | "Tracing List Requested" | "Tracing Span Histogram Requested" | "Train Model" | "Trial Account Provisioning Completed" | "Trial Account Provisioning Failed" | "Trial Account Provisioning Started" | "Unsupervised Mode Started" | "Update SAML configuration" | "Update account" | "User Agreement Accepted" | "User Agreement Declined" | "User Append Columns Download With Predictions" | "User Blueprint Added To Repository" | "User Blueprint Created" | "User Blueprint Deleted" | "User Blueprint Deleted In Bulk" | "User Blueprint Description Modified" | "User Blueprint Name Modified" | "User Blueprint Retrieved" | "User Blueprint Tags Modified" | "User Blueprint Tasks Retrieved" | "User Blueprint Updated" | "User Blueprint Validated" | "User Blueprints Listed" | "User Provisioned From JWT" | "Users Perma-Deletion Canceled" | "Users Perma-Deletion Canceling" | "Users Perma-Deletion Completed" | "Users Perma-Deletion Failed" | "Users Perma-Deletion Preview Building Canceled" | "Users Perma-Deletion Preview Building Canceling" | "Users Perma-Deletion Preview Building Completed" | "Users Perma-Deletion Preview Building Failed" | "Users Perma-Deletion Preview Building Started" | "Users Perma-Deletion Preview Building Submitted" | "Users Perma-Deletion Started" | "Users Perma-Deletion Submitted" | "Value Tracker Attachment Added" | "Value Tracker Attachment Removed" | "Value Tracker Created" | "Value Tracker Stage Changed" | "Value Tracker Updated" | "Workspace scheduled batch processing job created" | "Workspace scheduled batch processing job deleted" | "Workspace scheduled batch processing job updated" | "aiAPI Portal Login") | No | Enum for audit log event types. |
| `order` | string ("asc" | "desc") | No | Enum for sort order. |
| `orgId` | string | No | The organization to select log records for. |
| `offset` | integer | No | This many results will be skipped. Defaults to 0. |
| `userId` | string | No | The user to select log records for. |
| `projectId` | string | No | The project to select log records for. |
| `maxTimestamp` | string | No | The upper bound for timestamps. E.g. '2016-12-13T11:12:13.141516Z'. |
| `minTimestamp` | string | No | The lower bound for timestamps. E.g. '2016-12-13T11:12:13.141516Z'. |
| `auditReportType` | string ("APP_USAGE" | "ADMIN_USAGE") | No | Enum for audit report type. |
| `includeIdentifyingFields` | string ("false" | "False" | "true" | "True") | No | Enum for includeIdentifyingFields parameter. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
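
The audit-log listing parameters above (ISO 8601 timestamp bounds, `order`, `offset`) lend themselves to a small payload builder. The sketch below is illustrative and not part of any DataRobot client; the helper name and defaults are assumptions.

```python
from datetime import datetime, timezone


def audit_log_query(min_ts=None, max_ts=None, order="desc", offset=0, org_id=None):
    """Build an input payload for the audit-log listing tool.

    Timestamps are aware datetime objects, serialized to the ISO 8601
    form the API expects (e.g. '2016-12-13T11:12:13.141516Z').
    """
    def iso(dt):
        return dt.astimezone(timezone.utc).isoformat().replace("+00:00", "Z")

    if order not in ("asc", "desc"):
        raise ValueError("order must be 'asc' or 'desc'")
    payload = {"order": order, "offset": offset}
    if min_ts is not None:
        payload["minTimestamp"] = iso(min_ts)
    if max_ts is not None:
        payload["maxTimestamp"] = iso(max_ts)
    if org_id is not None:
        payload["orgId"] = org_id
    return payload
```

Pass the resulting dict as the tool's input; omitted optional filters are simply left out of the payload.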

### List Event Logs Events

**Slug:** `DATAROBOT_LIST_EVENT_LOGS_EVENTS`

Tool to retrieve all available events from DataRobot event logs. Use when you need to list event labels for filtering or querying event logs. Note: this API is deprecated.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Event Logs Prediction Usage

**Slug:** `DATAROBOT_LIST_EVENT_LOGS_PREDICTION_USAGE`

Tool to retrieve prediction usage event logs from DataRobot. Use when you need to track prediction activity within a specified time range (max 24 hours).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `order` | string ("asc" | "desc") | No | Sort order for prediction usage rows. |
| `offset` | integer | No | This many results will be skipped. Defaults to 0. |
| `userId` | string | No | The user to retrieve prediction usage for. |
| `projectId` | string | No | The project to retrieve prediction usage for. |
| `maxTimestamp` | string | Yes | The upper bound for timestamps. Time range should not exceed 24 hours. ISO 8601 format (e.g., '2016-12-13T11:12:13.141516Z'). |
| `minTimestamp` | string | Yes | The lower bound for timestamps. ISO 8601 format (e.g., '2016-12-13T11:12:13.141516Z'). |
| `includeIdentifyingFields` | string ("false" | "False" | "true" | "True") | No | Whether to include identifying information. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
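
Because this tool rejects time ranges longer than 24 hours, it is worth validating the window client-side before calling it. A minimal sketch (the helper name is an assumption):

```python
from datetime import datetime, timedelta, timezone

MAX_WINDOW = timedelta(hours=24)


def prediction_usage_window(min_ts: datetime, max_ts: datetime) -> dict:
    """Validate and serialize the required time range for the
    prediction-usage log tool; the API limits the range to 24 hours."""
    if max_ts <= min_ts:
        raise ValueError("maxTimestamp must be after minTimestamp")
    if max_ts - min_ts > MAX_WINDOW:
        raise ValueError("time range must not exceed 24 hours")

    def iso(dt):
        return dt.astimezone(timezone.utc).isoformat().replace("+00:00", "Z")

    return {"minTimestamp": iso(min_ts), "maxTimestamp": iso(max_ts)}
```

To cover a longer period, call the tool repeatedly with consecutive 24-hour windows.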

### List Execution Environments

**Slug:** `DATAROBOT_LIST_EXECUTION_ENVIRONMENTS`

Tool to list execution environments in DataRobot. Use when retrieving available execution environments for custom models, notebooks, or other use cases.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (1-1000). |
| `offset` | integer | No | Number of results to skip for pagination. |
| `isPublic` | boolean | No | If set, only return execution environments matching this public/private setting. |
| `useCases` | string ("customModel" | "notebook" | "gpu" | "customApplication" | "sparkApplication" | "customJob") | No | Execution environment use case types. |
| `searchFor` | string | No | String to search for in execution environment description and label. Search is case insensitive. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
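
Like most listing tools in this catalog, this one pages with `limit`/`offset`. A generic drain loop can be written once and reused; `fetch_page` here is a caller-supplied function (an assumption, not part of any SDK) that invokes the tool and returns one page of items.

```python
def paginate(fetch_page, limit=100):
    """Drain an offset-paginated listing endpoint.

    `fetch_page(limit, offset)` must return the list of items for that
    page; a short page signals the end of the results.
    """
    offset = 0
    items = []
    while True:
        page = fetch_page(limit, offset)
        items.extend(page)
        if len(page) < limit:
            return items
        offset += limit
```

The same loop applies to the version, data-store, and shared-role listings below.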

### List Execution Environments Versions Build Log

**Slug:** `DATAROBOT_LIST_EXECUTION_ENVIRONMENTS_VERSIONS_BUILD_LOG`

Tool to download execution environment build log. Use when you need to retrieve build logs for a specific execution environment version.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `environment_id` | string | Yes | Execution environment Id. |
| `environment_version_id` | string | Yes | Execution environment version Id. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Download Execution Environment Version

**Slug:** `DATAROBOT_LIST_EXECUTION_ENVIRONMENTS_VERSIONS_DOWNLOAD`

Tool to download execution environment version files from DataRobot. Downloads either a Docker image tarball or Docker context depending on the imageFile parameter. Use when you need to retrieve the built environment for deployment or inspection.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `imageFile` | string ("false" | "False" | "true" | "True") | No | If true, the built Docker image is downloaded as a tar archive; otherwise, the Docker context is returned. |
| `environmentId` | string | Yes | The ID of the execution environment. |
| `environmentVersionId` | string | Yes | The ID of the environment version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Execution Environment Versions

**Slug:** `DATAROBOT_LIST_EXECUTION_ENVIRONMENT_VERSIONS`

Tool to list all versions of an execution environment in DataRobot. Use when you need to browse available versions of a specific execution environment, optionally filtered by build status or search criteria.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (1-1000). |
| `offset` | integer | No | Number of results to skip for pagination. |
| `search` | string | No | Case-insensitive search string to filter by version ID, image ID, label, or description. |
| `buildStatus` | string ("submitted" | "processing" | "failed" | "success" | "aborted") | No | Build status of execution environment version. |
| `environmentId` | string | Yes | The ID of the execution environment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
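
A common follow-up to this listing is picking the newest version whose build finished with `buildStatus` of `success`. The sketch below assumes each returned item carries a `buildStatus` and an ISO 8601 `created` field; those field names are assumptions about the response shape, not confirmed here.

```python
def latest_successful_version(versions):
    """Pick the most recently created version whose build succeeded.

    ISO 8601 timestamps sort correctly as strings, so `max` over the
    'created' field yields the newest version. Returns None when no
    version has buildStatus == 'success'.
    """
    ok = [v for v in versions if v.get("buildStatus") == "success"]
    return max(ok, key=lambda v: v["created"]) if ok else None
```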

### List External Data Store Standard User-Defined Functions

**Slug:** `DATAROBOT_LIST_EXT_DS_STANDARD_USER_DEF_FUNCTIONS`

Tool to retrieve detected standard user-defined functions for a given external data store. Use when you need to list available standard UDFs for a specific data store, credentials, function type, and schema.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `schema` | string | Yes | The schema to create or detect user-defined functions in. |
| `dataStoreId` | string | Yes | ID of the external data store. |
| `credentialId` | string | No | ID of the set of credentials to use instead of username and password. |
| `functionName` | string | No | Standard user-defined function name to filter results by. |
| `functionType` | string ("rolling_median" | "rolling_most_frequent") | Yes | Standard user-defined function type to retrieve. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List External Data Driver Configuration

**Slug:** `DATAROBOT_LIST_EXTERNAL_DATA_DRIVER_CONFIGURATION`

Tool to retrieve external data driver configuration details by driver ID. Use when you need to understand JDBC connection requirements for a specific driver.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `driverId` | string | Yes | ID of the external data driver to retrieve configuration for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List External Data Drivers

**Slug:** `DATAROBOT_LIST_EXTERNAL_DATA_DRIVERS`

Tool to list all available external data drivers in DataRobot. Use when retrieving the catalog of data drivers for data connectivity.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `type` | string ("all" | "dr-connector-v1" | "dr-database-v1" | "jdbc") | No | Driver type filter. Either 'jdbc', 'dr-database-v1', 'dr-connector-v1', or 'all'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List External Data Sources Access Control

**Slug:** `DATAROBOT_LIST_EXTERNAL_DATA_SOURCES_ACCESS_CONTROL`

Tool to list users with their roles on an external data source. Use when you need to retrieve access control information for a specific data source.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. |
| `offset` | integer | No | This many results will be skipped. |
| `userId` | string | No | Optional, only return the access control information for a user with this user ID. |
| `username` | string | No | Optional, only return the access control information for a user with this username. |
| `dataSourceId` | string | Yes | The ID of the external data source to retrieve access control information for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List External Data Sources Permissions

**Slug:** `DATAROBOT_LIST_EXTERNAL_DATA_SOURCES_PERMISSIONS`

Tool to list permissions for the current user on an external data source. Use when you need to check what actions the user can perform on a specific data source.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataSourceId` | string | Yes | The ID of the Data Source. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List External Data Sources Shared Roles

**Slug:** `DATAROBOT_LIST_EXTERNAL_DATA_SOURCES_SHARED_ROLES`

Tool to get an external data source's access control list. Use when you need to view who has access to a data source and their permission levels.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | No | Only return roles for a user, group, or organization with this identifier. |
| `name` | string | No | Only return roles for a user, group, or organization with this name. |
| `limit` | integer | No | At most this many results are returned per page. Defaults to 10. |
| `offset` | integer | No | This many results will be skipped for pagination. Defaults to 0. |
| `dataSourceId` | string | Yes | The ID of the external data source. |
| `shareRecipientType` | string ("user" | "group" | "organization") | No | Describes the type of share recipient. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List External Data Stores

**Slug:** `DATAROBOT_LIST_EXTERNAL_DATA_STORES`

Tool to list external data stores in DataRobot. Use when you need to browse available data stores, filter by type, database type, connector type, or search by name.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Search for data stores whose canonicalName matches or contains the specified name. The search is case insensitive. |
| `type` | string ("all" | "databases" | "dr-connector-v1" | "dr-database-v1" | "jdbc") | No | Filter for data store types. |
| `limit` | integer | No | Maximum number of results to return; defaults to 100. |
| `offset` | integer | No | Number of results to skip; defaults to 0. |
| `dataType` | string ("all" | "structured" | "unstructured") | No | Filter for data types supported by data stores. |
| `showHidden` | string ("false" | "False" | "true" | "True") | No | Options for showing hidden OAuth fields. |
| `databaseType` | string | No | Includes only data stores of the specified database type. For JDBC-based data stores, the databaseType is the string between the first and second colons of the JDBC URL; for example, a Snowflake JDBC URL begins with jdbc:snowflake://, so the databaseType is snowflake. If the value is empty or contains only whitespace, no filtering occurs. |
| `connectorType` | string | No | Includes only data stores of the specified connector type. |
| `substituteUrlParameters` | string ("false" | "False" | "true" | "True") | No | Options for substituting URL parameters. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
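
The `databaseType` rule above (the substring between the first and second colons of a JDBC URL) can be captured in a small helper when you only have connection URLs on hand. This is an illustrative sketch, not part of any DataRobot client:

```python
def jdbc_database_type(jdbc_url: str) -> str:
    """Derive the `databaseType` filter value from a JDBC URL.

    E.g. 'jdbc:snowflake://acct.snowflakecomputing.com' -> 'snowflake'.
    """
    parts = jdbc_url.split(":")
    if len(parts) < 3 or parts[0] != "jdbc":
        raise ValueError(f"not a JDBC url: {jdbc_url!r}")
    return parts[1]
```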

### List External Data Store Credentials

**Slug:** `DATAROBOT_LIST_EXTERNAL_DATA_STORES_CREDENTIALS`

Tool to list credentials associated with a specified external data store. Use when you need to retrieve all credentials linked to a data store.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `types` | string | No | Includes only credentials of the specified type. Repeat the parameter to filter on multiple types. |
| `offset` | integer | No | Number of results to skip. |
| `orderBy` | string ("creationDate" | "-creationDate") | No | Enum for ordering credentials. |
| `dataStoreId` | string | Yes | ID of the data store. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List External Data Stores Permissions

**Slug:** `DATAROBOT_LIST_EXTERNAL_DATA_STORES_PERMISSIONS`

Tool to retrieve permissions for an external data store. Use when you need to check what actions a user can perform on a specific data store.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataStoreId` | string | Yes | ID of the external data store to retrieve permissions for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List External Data Stores Shared Roles

**Slug:** `DATAROBOT_LIST_EXTERNAL_DATA_STORES_SHARED_ROLES`

Tool to list access control entries (shared roles) for an external data store. Use when you need to retrieve who has access to a specific data store and their roles.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | No | Filter results to roles for a user, group, or organization with this identifier. |
| `name` | string | No | Filter results to roles for a user, group, or organization with this name. |
| `limit` | integer | No | Maximum number of results to return per page. Defaults to 10. |
| `offset` | integer | No | Number of results to skip for pagination. Defaults to 0. |
| `dataStoreId` | string | Yes | Unique identifier of the external data store. |
| `shareRecipientType` | string ("user" | "group" | "organization") | No | Type of recipient for the shared role. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List External Driver Configurations

**Slug:** `DATAROBOT_LIST_EXTERNAL_DRIVER_CONFIGURATIONS`

Tool to list available external driver configurations in DataRobot. Use when you need to retrieve driver configurations for data connectivity, optionally filtered by type or visibility.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `type` | string ("all" | "dr-connector-v1" | "dr-database-v1" | "jdbc") | No | Enum for driver configuration types. |
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `showHidden` | string ("false" | "False" | "true" | "True") | No | Options for showing hidden configurations. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List External OAuth Providers

**Slug:** `DATAROBOT_LIST_EXTERNAL_O_AUTH_PROVIDERS`

Tool to list external OAuth providers configured in DataRobot. Use when retrieving available OAuth integrations for external services like GitHub, GitLab, Bitbucket, Google, Box, Microsoft, Jira, or Confluence.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `ids` | array | No | Filter by provider IDs. Multiple IDs can be specified. |
| `host` | array | No | Filter by host URL. Multiple hosts can be specified. |
| `types` | array | No | Filter by provider types. Multiple types can be specified. |
| `orderBy` | string ("createdAt" | "-createdAt") | No | Sort results by creation date. Use 'createdAt' for ascending or '-createdAt' for descending order. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Feature Association Featurelists

**Slug:** `DATAROBOT_LIST_FEATURE_ASSOCIATION_FEATURELISTS`

Tool to list all featurelists with feature association matrix availability flags for a project. Use when you need to check which featurelists have feature association matrices available.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | Unique identifier of the DataRobot project. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Feature Lists

**Slug:** `DATAROBOT_LIST_FEATURE_LISTS`

Tool to list all feature lists for a project. Use when you need to retrieve and filter feature lists associated with a DataRobot project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sortBy` | string ("name" | "description" | "features" | "numModels" | "created" | "isUserCreated" | "-name" | "-description" | "-features" | "-numModels" | "-created" | "-isUserCreated") | No | Property to sort feature lists by. Allowed values: name, description, features, numModels, created, isUserCreated. Prefix with '-' for descending order. |
| `projectId` | string | Yes | Unique identifier of the DataRobot project to list feature lists for. Obtain from DATAROBOT_LIST_PROJECTS. |
| `searchFor` | string | No | Substring to filter feature list names by partial match. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List All Files for Catalog Item

**Slug:** `DATAROBOT_LIST_FILES_ALL_FILES`

Tool to list all files associated with a catalog item in DataRobot. Use when you need to browse or retrieve files from a specific catalog entry, with support for pagination and filtering by file type or path prefix.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `prefix` | string | No | If specified, will only return files with paths that start with the given folder prefix. Must end with '/'. |
| `fileType` | string | No | If specified, will only return files that match the specified type(s). |
| `catalogId` | string | Yes | The catalog item ID. |
| `recursive` | string ("false" | "False" | "true" | "True") | No | Whether to list all files recursively. If a prefix is specified, whether to list all files under that prefix recursively. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Chat Prompts

**Slug:** `DATAROBOT_LIST_GENAI_CHAT_PROMPTS`

Tool to list GenAI chat prompts in DataRobot. Use when retrieving chat prompts associated with playgrounds, LLM blueprints, or chat sessions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Retrieve only the specified number of values (for pagination). |
| `chatId` | string | No | Only retrieve the chat prompts associated with this chat ID. |
| `offset` | integer | No | Skip the specified number of values (for pagination). |
| `playgroundId` | string | No | Only retrieve the chat prompts associated with this playground ID. |
| `llmBlueprintId` | string | No | Only retrieve the chat prompts associated with this LLM blueprint ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Chats

**Slug:** `DATAROBOT_LIST_GENAI_CHATS`

Tool to list GenAI chats available to the user. Use when retrieving paginated chats from DataRobot GenAI.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sort` | string | No | The property to sort chats by. Prefix with dash (-) for descending order. Default is creation time descending. |
| `limit` | integer | No | Maximum number of chats to return (for pagination). |
| `offset` | integer | No | Number of chats to skip (for pagination). |
| `llm_blueprint` | string | No | Return only chats associated with the given LLM blueprint ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Comparison Prompts

**Slug:** `DATAROBOT_LIST_GENAI_COMPARISON_PROMPTS`

Tool to list GenAI comparison prompts filtered by comparison chat ID or LLM blueprint IDs. Use when retrieving comparison prompts for analysis or testing.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `llmBlueprintIds` | array | No | Filter comparison prompts by LLM blueprint IDs. Exactly one of comparison_chat_id or llm_blueprint_ids must be supplied. |
| `comparisonChatId` | string | No | Filter comparison prompts by comparison chat ID. Exactly one of comparison_chat_id or llm_blueprint_ids must be supplied. |
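
The exactly-one-of constraint above can be enforced before the call is made. A hedged sketch (helper name is illustrative, not a DataRobot API):

```python
from typing import Optional, Sequence

# Illustrative guard for the "exactly one of" constraint described above:
# rejects both-set and neither-set, then returns the single filter.
def build_comparison_prompt_filter(comparison_chat_id: Optional[str] = None,
                                   llm_blueprint_ids: Optional[Sequence[str]] = None) -> dict:
    if (comparison_chat_id is None) == (llm_blueprint_ids is None):
        raise ValueError("supply exactly one of comparisonChatId or llmBlueprintIds")
    if comparison_chat_id is not None:
        return {"comparisonChatId": comparison_chat_id}
    return {"llmBlueprintIds": list(llm_blueprint_ids)}
```

The same pattern applies to other tools in this section with mutually exclusive filters, such as `useCaseId`/`playgroundId` on the supported-insights listing.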

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Custom Model Embedding Validations

**Slug:** `DATAROBOT_LIST_GENAI_CUSTOM_MODEL_EMBEDDING_VALIDATIONS`

Tool to list GenAI custom model embedding validations. Use when retrieving paginated custom model embedding validation records from DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sort` | string | No | Sort option supporting 'name', 'deploymentName', 'userName', or 'creationDate'. Prefix with '-' for descending order (e.g., '-creationDate' for newest first). |
| `limit` | integer | No | Pagination limit for result set size - maximum number of records to return. |
| `offset` | integer | No | Pagination offset - number of records to skip. |
| `search` | string | No | Search parameter to match validation names. |
| `modelId` | string | No | Filter by model ID. |
| `useCaseId` | string | No | Filter by associated use case identifiers. |
| `deploymentId` | string | No | Filter by deployment ID. |
| `completedOnly` | boolean | No | Boolean flag to show only finished validations (true) or all validations (false). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Custom Model Vector Database Validations

**Slug:** `DATAROBOT_LIST_GENAI_CUSTOM_MODEL_VECTOR_DB_VALIDATIONS`

Tool to list custom model vector database validations for GenAI use cases. Use when retrieving validations for custom models used as vector databases in RAG workflows.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sort` | string | No | Sort results by field name. Allowed values: name, deploymentName, userName, creationDate. Prefix with '-' for descending order (e.g., '-creationDate'). |
| `limit` | integer | No | Maximum number of records to retrieve. |
| `offset` | integer | No | Number of records to skip for pagination. |
| `search` | string | No | Filter by matching search query against validation fields. |
| `modelId` | string | No | Filter by model ID. |
| `useCaseId` | string | No | Filter by associated use case IDs. |
| `deploymentId` | string | No | Filter by deployment ID. |
| `playgroundId` | string | No | Filter by playground ID. |
| `completedOnly` | boolean | No | Return only completed validations. Default is false. |
| `promptColumnName` | string | No | Filter by prompt input column name. |
| `targetColumnName` | string | No | Filter by prediction output column name. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Evaluation Dataset Configurations

**Slug:** `DATAROBOT_LIST_GENAI_EVALUATION_DATASET_CONFIGURATIONS`

Tool to list GenAI evaluation dataset configurations. Use when you need to retrieve evaluation dataset configurations for a specific use case and playground.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sort` | string | No | Apply sorting; valid options include name, creationUserId, creationDate, datasetId, userName, datasetName, promptColumnName, responseColumnName. |
| `limit` | integer | No | Retrieve only the specified number of values. |
| `offset` | integer | No | Skip the specified number of values for pagination. |
| `search` | string | No | Only retrieve the evaluation dataset configurations matching the search query. |
| `useCaseId` | string | Yes | Only retrieve the evaluation dataset configurations associated with this use case ID. |
| `playgroundId` | string | Yes | Only retrieve the evaluation dataset configuration associated with this playground ID. |
| `completedOnly` | boolean | No | If true, retrieve only configurations where the dataset has completed status (default: false). |
| `correctnessEnabledOnly` | boolean | No | If true, retrieve only configurations with correctness enabled (default: false). |
| `evaluationDatasetConfigurationId` | string | No | Filter by specific configuration ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI LLM Blueprints

**Slug:** `DATAROBOT_LIST_GENAI_LLM_BLUEPRINTS`

Tool to list LLM blueprints for building generative AI applications with various large language models. Use when you need to browse available LLM blueprints for GenAI deployments.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sort` | string | No | Apply this sort order to the results. |
| `limit` | integer | No | Retrieve only the specified number of values. |
| `offset` | integer | No | Skip the specified number of values. |
| `search` | string | No | Only retrieve the LLM blueprints matching the search query. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI LLMs

**Slug:** `DATAROBOT_LIST_GENAI_LLMS`

Tool to list all available GenAI LLMs in DataRobot. Use when you need to retrieve available language models for GenAI applications.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI LLM Test Configurations Supported Insights

**Slug:** `DATAROBOT_LIST_GENAI_LLM_TEST_CONFIG_SUPPORTED_INSIGHTS`

Tool to list supported insights for LLM test configurations in DataRobot. Use when retrieving available insight types for a use case or playground.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `useCaseId` | string | No | Filter by Use Case ID. Exactly one of useCaseId or playgroundId must be specified. |
| `playgroundId` | string | No | Filter by Playground ID. Exactly one of useCaseId or playgroundId must be specified. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI LLM Test Configurations

**Slug:** `DATAROBOT_LIST_GENAI_LLM_TEST_CONFIGURATIONS`

Tool to list GenAI LLM test configurations. Use when retrieving paginated LLM test configurations from DataRobot AI Robustness Tests.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sort` | string | No | Order results by specified fields. Prefix with dash for descending order. |
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `search` | string | No | Filter configurations by name or content matching the search query. |
| `useCaseId` | string | No | Filter configurations by associated use case ID. |
| `playgroundId` | string | No | Filter configurations by associated playground ID. |
| `completedOnly` | boolean | No | Filter to return only completed configurations. |
| `llmTestConfigurationId` | string | No | Retrieve specific configuration by ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI LLM Test Configurations OOTB Datasets

**Slug:** `DATAROBOT_LIST_GENAI_LLM_TEST_CONFIGURATIONS_OOTB_DATASETS`

Tool to list out-of-the-box (OOTB) datasets for GenAI LLM test configurations. Use when you need to retrieve available datasets for LLM compliance and robustness testing.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI LLM Test Results

**Slug:** `DATAROBOT_LIST_GENAI_LLM_TEST_RESULTS`

Tool to list GenAI LLM test results filtered by use case and playground. Use when retrieving test results for LLM robustness evaluations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sort` | string | No | Supports sorting by name, creationUserId, creationDate; prefix with dash for descending order (e.g., '-creationDate'). |
| `limit` | integer | No | Retrieve only the specified number of values. |
| `offset` | integer | No | Skip the specified number of values for pagination. |
| `search` | string | No | Only retrieve the LLM test results matching the search query. |
| `useCaseId` | string | Yes | Only retrieve the LLM test results associated with this use case ID. |
| `playgroundId` | string | Yes | Only retrieve the LLM test results associated with this playground ID. |
| `llmTestResultId` | string | No | Only retrieve the LLM test result with this ID. |
| `llmTestConfigurationId` | string | No | Filters results by associated test configuration ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI LLM Test Suites

**Slug:** `DATAROBOT_LIST_GENAI_LLM_TEST_SUITES`

Tool to list GenAI LLM test suites in DataRobot. Use when retrieving AI robustness test suites for LLM models.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sort` | string | No | Apply sorting to results. Use field name for ascending or -field for descending (e.g., 'name' or '-createdAt'). |
| `limit` | integer | No | Retrieve only a specified number of records for pagination. |
| `offset` | integer | No | Skip a specified number of records for pagination. |
| `search` | string | No | Filter results using search queries. Searches across test suite names and descriptions. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Playgrounds

**Slug:** `DATAROBOT_LIST_GENAI_PLAYGROUNDS`

Tool to list all GenAI playgrounds accessible by the user. Use when you need to browse or filter available playgrounds for LLM blueprint development.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sort` | string | No | Property to sort playgrounds by. |
| `search` | string | No | Substring to filter playgrounds by; playgrounds whose names contain the string are returned. |
| `useCase` | string | No | Filter playgrounds to those associated with a specific Use Case ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Playgrounds OOTB Metric Configurations

**Slug:** `DATAROBOT_LIST_GENAI_PLAYGROUNDS_OOTB_METRIC_CONFIGURATIONS`

Tool to list OOTB metric configurations for a GenAI playground. Use when you need to retrieve all out-of-the-box metric configurations associated with a specific playground.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The playground identifier |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Playgrounds Supported Insights

**Slug:** `DATAROBOT_LIST_GENAI_PLAYGROUNDS_SUPPORTED_INSIGHTS`

Tool to list supported insights for a GenAI playground. Use when you need to retrieve all available insight configurations for a specific playground.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The playground identifier for which to retrieve supported insights |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Playground Traces

**Slug:** `DATAROBOT_LIST_GENAI_PLAYGROUNDS_TRACE`

Tool to list all prompt traces for a GenAI playground. Use when you need to retrieve execution history and tracing data from playground prompts.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The playground ID to retrieve traces from. |
| `limit` | integer | No | Maximum number of results to return; must be at least 1. |
| `offset` | integer | No | Number of results to skip for pagination; must be non-negative. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Sidecar Model Metric Validations

**Slug:** `DATAROBOT_LIST_GENAI_SIDECAR_MODEL_METRIC_VALIDATIONS`

Tool to list GenAI sidecar model metric validations from DataRobot. Use when retrieving available metric validation configurations for generative AI models.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of items to return. |
| `offset` | integer | No | Number of results to skip for pagination. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI User Limits LLM API Calls

**Slug:** `DATAROBOT_LIST_GENAI_USER_LIMITS_LLM_API_CALLS`

Tool to retrieve the count of LLM API calls made by the authenticated user. Use when you need to check how many LLM API requests the user has made that count towards their usage limits.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI User Limits Vector Databases

**Slug:** `DATAROBOT_LIST_GENAI_USER_LIMITS_VECTOR_DATABASES`

Tool to retrieve the number of vector databases the user has created that count toward the usage limit. Use when checking GenAI user limits for vector database resources.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Vector Databases

**Slug:** `DATAROBOT_LIST_GENAI_VECTOR_DATABASES`

Tool to list all GenAI vector databases used for RAG (Retrieval Augmented Generation) applications. Use when you need to retrieve or filter vector databases.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sort` | string | No | Applies sorting; valid options include name and creationDate. |
| `limit` | integer | No | Retrieve only the specified number of values. |
| `offset` | integer | No | Skip the specified number of values. |
| `search` | string | No | Filters results matching a search query. |
| `useCaseId` | string | No | Filters by associated use case ID. |
| `playgroundId` | string | No | Filters by associated playground ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Vector Databases Supported Retrieval Settings

**Slug:** `DATAROBOT_LIST_GENAI_VECTOR_DBS_SUPPORTED_RETRIEVAL_SETS`

Tool to list all supported retrieval settings for GenAI vector databases. Returns configuration options for retrieval parameters including retriever names, retrieval modes, and document retrieval limits.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List GenAI Vector Databases Supported Text Chunkings

**Slug:** `DATAROBOT_LIST_GENAI_VECTOR_DBS_SUPPORTED_TEXT_CHUNKINGS`

Tool to list all supported text chunking configurations for GenAI vector databases. Returns recommended chunking parameters for each supported embedding model including recursive and semantic chunking methods.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Guard Configurations Prediction Environments In Use

**Slug:** `DATAROBOT_LIST_GUARD_CONFIG_PRED_ENVS_IN_USE`

Tool to show prediction environments in use for moderation by a specific custom model version. Use when you need to identify which prediction environments are using a particular guard configuration custom model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned per page. |
| `offset` | integer | No | This many results will be skipped for pagination. |
| `customModelVersionId` | string | Yes | Show prediction environment information for this custom model version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Guard Configurations

**Slug:** `DATAROBOT_LIST_GUARD_CONFIGURATIONS`

Tool to list guard configurations for a specific entity in DataRobot. Use when you need to retrieve guard configurations associated with a custom model, custom model version, or playground.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. |
| `offset` | integer | No | This many results will be skipped (for pagination). |
| `entityId` | string | Yes | Filter guard configurations by the given entity ID. |
| `entityType` | string ("customModel" | "customModelVersion" | "playground") | Yes | Entity type of the given entity ID (customModel, customModelVersion, or playground). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Guard Templates

**Slug:** `DATAROBOT_LIST_GUARD_TEMPLATES`

Tool to list guard templates in DataRobot. Use when retrieving available guardrails templates for LLM deployments.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Search for templates by name. |
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `isAgentic` | string ("false" | "False" | "true" | "True") | No | Filter for agentic guard templates; boolean passed as a string. |
| `forPlayground` | string ("false" | "False" | "true" | "True") | No | Filter for templates available for playground use; boolean passed as a string. |
| `forProduction` | string ("false" | "False" | "true" | "True") | No | Filter for templates available for production use; boolean passed as a string. |
| `includeAgentic` | string ("false" | "False" | "true" | "True") | No | Include agentic templates in the results; boolean passed as a string. |
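
Since the four boolean flags above are string-typed query parameters, a serializer can keep callers in plain Python booleans. A minimal, illustrative sketch (helper name is ours, not a DataRobot API):

```python
# Illustrative serializer for the string-typed boolean flags above
# (isAgentic, forPlayground, forProduction, includeAgentic).
def guard_template_params(name=None, **flags) -> dict:
    allowed = {"isAgentic", "forPlayground", "forProduction", "includeAgentic"}
    params = {}
    if name is not None:
        params["name"] = name
    for key, value in flags.items():
        if key not in allowed:
            raise ValueError(f"unknown flag: {key!r}")
        # The API accepts "true"/"True"/"false"/"False"; emit lowercase.
        params[key] = "true" if bool(value) else "false"
    return params
```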

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Image Augmentation Lists

**Slug:** `DATAROBOT_LIST_IMAGE_AUGMENTATION_LISTS`

Tool to list image augmentation lists for a DataRobot project. Use when retrieving augmentation lists for image-based projects.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. To specify no limit, use 0. The default may change without notice. |
| `offset` | integer | No | This many results will be skipped (for pagination). |
| `projectId` | string | Yes | Project ID to retrieve augmentation lists from. |
| `featureName` | string | No | Name of the image feature that the augmentation list is associated with. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List MLOps Compute Bundles

**Slug:** `DATAROBOT_LIST_MLOPS_COMPUTE_BUNDLES`

Tool to list resource bundles for MLOps compute. Use when retrieving available compute bundles for custom models, jobs, or applications.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip. |
| `entityId` | string | No | Identifier used to return recommended resource bundles. Must be used together with entityType parameter. |
| `useCases` | string ("customApplication" | "customJob" | "customModel" | "modelingMachineWorker" | "predictionAPI" | "sapAICore" | "sparkApplication") | No | Filter bundles by their intended use case. |
| `entityType` | string ("customModelTemplate") | No | Type of entity for which to get recommended bundles. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Model Features

**Slug:** `DATAROBOT_LIST_MODEL_FEATURES`

Tool to retrieve the list of features used in a specific DataRobot model. Use when you need to understand which features a model is using for predictions or analysis.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The unique identifier of the model to retrieve features for. |
| `project_id` | string | Yes | The unique identifier of the DataRobot project containing the model. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Model Package Features

**Slug:** `DATAROBOT_LIST_MODEL_PACKAGE_FEATURES`

Tool to retrieve features in a model package. Use after model package creation to explore its feature set.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Number of features to return; defaults to 50. |
| `offset` | integer | No | Number of features to skip; defaults to 0. |
| `search` | string | No | Case-insensitive search against names of the deployment's features. |
| `orderBy` | string ("name" | "-name" | "importance" | "-importance") | No | Sort order to apply to the list of features. Allowed values: name, -name, importance, -importance. |
| `modelPackageId` | string | Yes | ID of the model package. |
| `forSegmentedAnalysis` | string ("false" | "False" | "true" | "True") | No | When True, return only features usable for segmented analysis. |
| `includeNonPredictionFeatures` | string ("false" | "False" | "true" | "True") | No | When True, return all raw features in the universe dataset; when False, only raw features used for predictions. |
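
The documented defaults (`limit` 50, `offset` 0) and the four allowed `orderBy` values can be captured in a small payload builder. As with the other sketches in this section, the helper is illustrative rather than part of any DataRobot SDK:

```python
from typing import Optional

# Illustrative payload builder: validates `orderBy` against the four
# allowed values and applies the documented defaults (limit 50, offset 0).
def model_package_features_params(model_package_id: str,
                                  order_by: Optional[str] = None,
                                  search: Optional[str] = None,
                                  limit: int = 50,
                                  offset: int = 0) -> dict:
    allowed = {"name", "-name", "importance", "-importance"}
    if order_by is not None and order_by not in allowed:
        raise ValueError(f"orderBy must be one of {sorted(allowed)}")
    params = {"modelPackageId": model_package_id,
              "limit": limit, "offset": offset}
    if order_by is not None:
        params["orderBy"] = order_by
    if search is not None:
        params["search"] = search
    return params
```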

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Model Packages

**Slug:** `DATAROBOT_LIST_MODEL_PACKAGES`

Tool to list model packages. Use when you need to search or page through model packages.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `search` | string | No | Term to search in package name, model name, or description. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Model Package Capabilities

**Slug:** `DATAROBOT_LIST_MODEL_PACKAGES_CAPABILITIES`

Tool to retrieve capabilities of a model package. Use after creating or loading a model package to check supported features.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelPackageId` | string | Yes | ID of the model package. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Model Package Model Logs

**Slug:** `DATAROBOT_LIST_MODEL_PACKAGES_MODEL_LOGS`

Tool to list model logs for a specific model package. Use when you need to retrieve log entries generated during model package operations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results will be returned |
| `offset` | integer | No | Number of results that will be skipped |
| `modelPackageId` | string | Yes | ID of the model package to retrieve logs for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Model Packages Shared Roles

**Slug:** `DATAROBOT_LIST_MODEL_PACKAGES_SHARED_ROLES`

Tool to get a model package's access control list. Use when you need to retrieve shared roles for a specific model package.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | No | Filter results to only return roles for a user, group, or organization with this identifier. |
| `name` | string | No | Filter results to only return roles for a user, group, or organization with this name. |
| `limit` | integer | No | Maximum number of results to return per page. Defaults to 10. |
| `offset` | integer | No | Number of results to skip for pagination. Defaults to 0. |
| `modelPackageId` | string | Yes | ID of the model package to retrieve shared roles for. |
| `shareRecipientType` | string ("user" | "group" | "organization") | No | Type of the share recipient. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
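A minimal sketch of preparing this tool's filters, validating `shareRecipientType` against the allowed values from the table above before sending anything (the helper name is hypothetical):

```python
# Sketch: validate the recipient filter before querying shared roles.
# `shared_roles_query` is a hypothetical helper, not a DataRobot SDK call.
ALLOWED_RECIPIENTS = {"user", "group", "organization"}

def shared_roles_query(model_package_id, share_recipient_type=None,
                       limit=10, offset=0):
    params = {"limit": limit, "offset": offset}
    if share_recipient_type is not None:
        if share_recipient_type not in ALLOWED_RECIPIENTS:
            raise ValueError(f"unknown recipient type: {share_recipient_type}")
        params["shareRecipientType"] = share_recipient_type
    return params

params = shared_roles_query("66f0c0ffee0123456789abcd",
                            share_recipient_type="group")
```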

### List Model Records

**Slug:** `DATAROBOT_LIST_MODEL_RECORDS`

Retrieve a paginated list of trained model records from a DataRobot project. Returns model metadata including model type, metrics, training info, and configuration. Use this to explore models built during AutoML or manually submitted models. Supports filtering by characteristics, search terms, labels, blueprints, model families, and sorting by metrics.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of model records to return per page. Defaults to 100. |
| `labels` | array | No | Filter by user-applied labels |
| `offset` | integer | No | Number of records to skip for pagination. Use with limit for paging through results. Defaults to 0. |
| `families` | array | No | Filter by model families |
| `projectId` | string | Yes | The unique identifier of the DataRobot project to retrieve models from. |
| `blueprints` | string | No | Filter by comma-separated blueprint IDs |
| `searchTerm` | string | No | Case-insensitive substring search in descriptions |
| `withMetric` | string | No | Only include scores for this metric |
| `featurelists` | string | No | Filter by comma-separated featurelist names |
| `sortByMetric` | string | No | Metric name to sort results by |
| `characteristics` | array | No | Filter by model characteristics |
| `sortByPartition` | string ("backtesting" | "crossValidation" | "validation" | "holdout") | No | Partition to use for sorting by metric |
| `trainingFilters` | string | No | Filter by training length/type |
| `numberOfClusters` | string | No | Filter by number of clusters (unsupervised) |
| `showInSampleScores` | boolean | No | Include in-sample scores where available |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
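The `limit`/`offset` pair above follows the usual paging pattern: request pages until one comes back shorter than `limit`. A sketch, with `fetch_page` standing in for the actual tool call (it is a hypothetical stand-in, demonstrated here against an in-memory list):

```python
# Sketch: page through model records with limit/offset.
def paginate(fetch_page, limit=100):
    """Yield every record across pages until a short page signals the end."""
    offset = 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        yield from page
        if len(page) < limit:      # last page reached
            break
        offset += limit

# Demo with an in-memory stand-in for the API.
records = [{"modelId": str(i)} for i in range(250)]
fake_fetch = lambda limit, offset: records[offset:offset + limit]
all_records = list(paginate(fake_fetch, limit=100))
```

With 250 records and `limit=100`, this issues three requests (offsets 0, 100, 200) and stops on the 50-item final page.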

### List Model Supported Capabilities

**Slug:** `DATAROBOT_LIST_MODEL_SUPPORTED_CAPABILITIES`

Tool to retrieve supported capabilities for a model. Use after training a model to check which insights and features are available.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | ID of the model |
| `projectId` | string | Yes | ID of the DataRobot project |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Moderation Supported LLMs

**Slug:** `DATAROBOT_LIST_MODERATION_SUPPORTED_LLMS`

Tool to list supported LLMs for moderation in DataRobot. Use when retrieving available LLM models for moderation purposes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip (for pagination). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Multilabel Insights Pairwise Manual Selections

**Slug:** `DATAROBOT_LIST_MULTILABEL_INSIGHTS_PAIRWISE_MANUAL_SELS`

Tool to retrieve all manually selected label lists for pairwise multilabel insights analysis. Use when analyzing multilabel features to get user-defined label combinations for pairwise comparison.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `multilabelInsightsKey` | string | Yes | Key for multilabel insights, unique per project, feature, and EDA stage. The most recent key can be retrieved via GET /api/v2/projects/{projectId}/features/ or GET /api/v2/projects/{projectId}/features/{featureName}/ |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebook Code Snippets

**Slug:** `DATAROBOT_LIST_NOTEBOOK_CODE_SNIPPETS`

Tool to retrieve all available notebook code snippets from DataRobot. Use when you need to discover or browse code snippets for notebooks.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (pagination). |
| `offset` | integer | No | Number of results to skip (pagination offset). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebook Code Snippets Tags

**Slug:** `DATAROBOT_LIST_NOTEBOOK_CODE_SNIPPETS_TAGS`

Tool to retrieve available tags for notebook code snippets from DataRobot. Use when you need to discover or filter code snippet tags.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebook Execution Environment Notebooks

**Slug:** `DATAROBOT_LIST_NOTEBOOK_EXECUTION_ENVIRONMENT_NOTEBOOKS`

Tool to list notebooks that use a specific execution environment in DataRobot. Use when you need to see which notebooks are using a particular execution environment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The execution environment ID (24-character hexadecimal string in MongoDB ObjectId format). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebook Execution Environments

**Slug:** `DATAROBOT_LIST_NOTEBOOK_EXECUTION_ENVIRONMENTS`

Tool to list all notebook execution environments in DataRobot. Use when you need to browse available execution environments for notebooks.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return per page. |
| `offset` | integer | No | Number of items to skip before starting to return results. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebook Execution Environments Machines

**Slug:** `DATAROBOT_LIST_NOTEBOOK_EXECUTION_ENVIRONMENTS_MACHINES`

Tool to list available machine types for notebook execution environments. Use when you need to retrieve machine specifications for notebooks.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebook Execution Environment Ports

**Slug:** `DATAROBOT_LIST_NOTEBOOK_EXECUTION_ENVIRONMENTS_PORTS`

Tool to list exposed ports for a notebook execution environment. Use when you need to retrieve port information for a specific notebook environment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | Notebook execution environment ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebook Execution Environment Versions

**Slug:** `DATAROBOT_LIST_NOTEBOOK_EXECUTION_ENVIRONMENTS_VERSIONS`

Tool to list all versions of a notebook execution environment. Use when retrieving version history for a specific notebook execution environment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The execution environment ID. |
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip for pagination. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebook Jobs Run History

**Slug:** `DATAROBOT_LIST_NOTEBOOK_JOBS_RUN_HISTORY`

Tool to list notebook job run history in DataRobot. Use when you need to retrieve historical execution records of scheduled or manually triggered notebook jobs.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of items to return (1-1000) |
| `offset` | integer | No | Number of items to skip for pagination |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebooks

**Slug:** `DATAROBOT_LIST_NOTEBOOKS`

Tool to list Jupyter notebooks in DataRobot workspace. Use when you need to browse or filter available notebooks for data exploration and model development.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of items to return. Maximum is 1000. |
| `offset` | integer | No | Number of items to skip for pagination. |
| `projectId` | string | No | Filter notebooks by project ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebook Cells

**Slug:** `DATAROBOT_LIST_NOTEBOOKS_CELLS`

Tool to retrieve all cells from a DataRobot notebook. Use when you need to inspect or analyze the contents of a specific notebook.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The notebook ID to retrieve cells from |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebook Sessions Terminals

**Slug:** `DATAROBOT_LIST_NOTEBOOK_SESSIONS_TERMINALS`

Tool to list all terminals in a DataRobot notebook session. Use when you need to retrieve terminal sessions for a running notebook.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The notebook session ID (24-character hex ObjectId). Must be a running notebook session. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebooks Filter Options

**Slug:** `DATAROBOT_LIST_NOTEBOOKS_FILTER_OPTIONS`

Tool to retrieve available filter options for notebooks, including tags and owners. Use when you need to get valid filter values before filtering notebooks.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notebooks Shared Roles

**Slug:** `DATAROBOT_LIST_NOTEBOOKS_SHARED_ROLES`

Tool to get access control lists for multiple notebooks. Use when you need to retrieve shared roles for several notebooks at once.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `notebookIds` | array | Yes | List of notebook IDs to retrieve shared roles for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
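Since the underlying API is commonly queried with a single comma-separated `notebookIds` value, the array might be serialized like this before the request (how the tool serializes the array is an assumption):

```python
# Sketch: join a list of notebook IDs into one comma-separated query value.
def shared_roles_params(notebook_ids):
    if not notebook_ids:
        raise ValueError("at least one notebook ID is required")
    return {"notebookIds": ",".join(notebook_ids)}

params = shared_roles_params(["64a1f00000000000000000aa",
                              "64a1f00000000000000000ab"])
```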

### List Notification Channel Templates

**Slug:** `DATAROBOT_LIST_NOTIFICATION_CHANNEL_TEMPLATES`

Tool to list notification channel templates in DataRobot. Use when retrieving available notification channel templates for setting up alerts and notifications.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of channel templates to return. |
| `offset` | integer | No | Number of channel templates to skip for pagination. |
| `namePart` | string | No | Filter to return only channel templates whose names contain the given substring. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notification Channel Templates Shared Roles

**Slug:** `DATAROBOT_LIST_NOTIFICATION_CHANNEL_TEMPLATES_SHARED_ROLES`

Tool to get a notification channel template's access control list. Use when you need to view who has access to a notification channel template and their permission levels.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | No | Only return roles for a user, group or organization with this identifier. |
| `name` | string | No | Only return roles for a user, group or organization with this name. |
| `limit` | integer | No | At most this many results are returned per page. Defaults to 10. |
| `offset` | integer | No | This many results will be skipped for pagination. Defaults to 0. |
| `channelId` | string | Yes | The ID of the notification channel template. |
| `shareRecipientType` | string ("user" | "group" | "organization") | No | Describes the type of share recipient. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notification Events

**Slug:** `DATAROBOT_LIST_NOTIFICATION_EVENTS`

Tool to list notification event types and groups available for notification policies. Use when creating or updating notification policies to see available event types.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `relatedEntityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "customjob" | "Customjob" | "CUSTOMJOB") | No | Filter events by the type of related entity. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notification Channel Templates Policy Templates

**Slug:** `DATAROBOT_LIST_NOTIFY_CHANNEL_TPL_POLICY_TEMPLATES`

Tool to retrieve list of all policy templates using a specific notification channel template. Use when you need to identify which policies are configured to use a particular channel.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many policy templates to return. Maximum is 1000. |
| `offset` | integer | No | How many policy templates to skip for pagination. |
| `channelId` | string | Yes | The id of the notification channel template. Obtain from DATAROBOT_LIST_NOTIFICATION_CHANNEL_TEMPLATES or DATAROBOT_GET_NOTIFICATION_CHANNEL_TEMPLATES. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Notification Channel Templates Related Policies

**Slug:** `DATAROBOT_LIST_NOTIFY_CHANNEL_TPL_RELATED_POLICIES`

Tool to retrieve all policies created from a notification channel template. Use when you need to view policies associated with a specific channel template that are visible to the user.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many policies to return. |
| `offset` | integer | No | How many policies to skip (for pagination). |
| `channelId` | string | Yes | The ID of the notification channel template. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List OCR Job Resources

**Slug:** `DATAROBOT_LIST_OCR_JOB_RESOURCES`

Tool to retrieve user's OCR job resources from DataRobot. Use when you need to list and browse OCR job resources with pagination support.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The max number of results to return. |
| `offset` | integer | No | The number of results to skip (for pagination). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Organization Users

**Slug:** `DATAROBOT_LIST_ORGANIZATION_USERS`

Tool to list memberships (users) in an organization. Use when you need to page through or filter users by ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `ids` | array | No | Optional list of user IDs to filter the results. Only users with these IDs will be returned. |
| `limit` | integer | No | Maximum number of users to return per page. Defaults to 100. Use 0 to return all users. |
| `offset` | integer | No | Number of users to skip before starting to collect the result set. Defaults to 0. |
| `organizationId` | string | Yes | The unique identifier of the organization to list users for. Use DATAROBOT_GET_ACCOUNT_INFO to get your organization ID (orgId field). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
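Note the special case above: `limit=0` asks for every user at once. A small sketch of choosing the pagination parameters accordingly (the helper name is hypothetical):

```python
# Sketch: build pagination params for listing organization users.
# limit=0 requests all users in one response, per the parameter table.
def org_users_params(ids=None, limit=100, offset=0, fetch_all=False):
    params = {"limit": 0} if fetch_all else {"limit": limit, "offset": offset}
    if ids:
        params["ids"] = ids
    return params

params_all = org_users_params(fetch_all=True)
params_page = org_users_params(limit=50, offset=100)
```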

### List OpenTelemetry Logs

**Slug:** `DATAROBOT_LIST_OTEL_LOGS`

Tool to retrieve OpenTelemetry logs for a specified entity. Use when debugging deployments or investigating issues with custom applications, workloads, or other entities.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `level` | string ("debug" | "info" | "warn" | "warning" | "error" | "critical") | No | The minimum log level of logs to include. |
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `spanId` | string | No | The OTel span ID logs must be associated with (if any). |
| `endTime` | string | No | The end time of the log list. |
| `traceId` | string | No | The OTel trace ID logs must be associated with (if any). |
| `entityId` | string | Yes | ID of the entity to which the logs belong. |
| `excludes` | string | No | Values that must not appear in a returned log entry. |
| `includes` | string | No | Strings that must appear in a returned log entry. |
| `startTime` | string | No | The start time of the log list. |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the logs belong. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
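A sketch of assembling the log query, keeping only the filters that are actually set. ISO 8601 timestamps are an assumption consistent with the other OTel tools in this reference; the example entity ID is made up:

```python
# Sketch: build OTel log query params, defaulting to the last hour.
from datetime import datetime, timedelta, timezone

def otel_log_params(entity_id, entity_type, level=None, minutes_back=60):
    end = datetime.now(timezone.utc)
    start = end - timedelta(minutes=minutes_back)
    params = {
        "entityId": entity_id,
        "entityType": entity_type,
        "startTime": start.isoformat(),
        "endTime": end.isoformat(),
    }
    if level is not None:
        params["level"] = level    # e.g. "error" to skip info/debug noise
    return params

params = otel_log_params("66aa00000000000000000001", "deployment",
                         level="error")
```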

### List OpenTelemetry Metrics Autocollected Values

**Slug:** `DATAROBOT_LIST_OTEL_METRICS_AUTOCOLLECTED_VALUES`

Tool to get aggregated values of OpenTelemetry metrics that DataRobot automatically collects for a specified entity. Use when monitoring deployments, use cases, or workloads with auto-collected metrics.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `endTime` | string | No | The end time of the metric list (ISO 8601 format recommended). |
| `entityId` | string | Yes | ID of the entity to which the metric belongs. |
| `startTime` | string | No | The start time of the metric list (ISO 8601 format recommended). |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List OpenTelemetry Metrics Configs

**Slug:** `DATAROBOT_LIST_OTEL_METRICS_CONFIGS`

Tool to list OpenTelemetry metric configurations for a specified entity. Use when you need to retrieve metric collection settings for deployments, use cases, or other DataRobot entities.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (1-1000). The default may change without notice |
| `offset` | integer | No | Number of results to skip for pagination |
| `entityId` | string | Yes | ID of the entity to which the metric belongs |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs. Valid values: deployment, use_case, experiment_container, custom_application, workload, workload_deployment |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List OpenTelemetry Metrics Pod Info

**Slug:** `DATAROBOT_LIST_OTEL_METRICS_POD_INFO`

Tool to list pods and containers found in OpenTelemetry metrics of the specified entity. Use when retrieving pod and container information from deployment, use case, or workload monitoring data.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `endTime` | string | No | The end time of the metric list (ISO 8601 format). |
| `entityId` | string | Yes | ID of the entity to which the metric belongs. |
| `startTime` | string | No | The start time of the metric list (ISO 8601 format). |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List OpenTelemetry Metrics Summary

**Slug:** `DATAROBOT_LIST_OTEL_METRICS_SUMMARY`

Tool to list reported OpenTelemetry metrics of the specified entity. Use when retrieving available OTEL metrics for deployments, use cases, or other DataRobot entities.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `search` | string | No | Only show reported metrics whose name contains this string. |
| `entityId` | string | Yes | ID of the entity to which the metric belongs. |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs. |
| `metricType` | string | No | Only show reported metrics of this type. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List OpenTelemetry Metrics Value Over Time

**Slug:** `DATAROBOT_LIST_OTEL_METRICS_VALUE_OVER_TIME`

Tool to get a single OpenTelemetry metric value of the specified entity over time. Use when analyzing container resource usage, performance metrics, or custom telemetry for deployments, use cases, or other DataRobot entities.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `units` | string | No | The unit of measurement for the metric |
| `endTime` | string | No | The end time of the metric list in ISO 8601 format |
| `entityId` | string | Yes | ID of the entity to which the metric belongs |
| `otelName` | string | Yes | The OTel key of the metric (e.g., container_usageNanoCores) |
| `startTime` | string | No | The start time of the metric list in ISO 8601 format |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs (deployment, use_case, etc.) |
| `percentile` | number | No | The metric percentile for the percentile aggregation of histograms |
| `resolution` | string ("PT1M" | "PT5M" | "PT1H" | "P1D" | "P7D") | No | Period resolution for metric values. |
| `aggregation` | string ("sum" | "average" | "min" | "max" | "cardinality" | "percentiles" | "histogram") | Yes | The aggregation method used for metric display |
| `displayName` | string | No | The display name of the metric |
| `bucketInterval` | number | No | Bucket size used for histogram aggregation |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
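
The aggregation and percentile constraints above can be checked before invoking the tool. The helper below is an illustrative sketch, not part of the DataRobot or Composio API; the parameter names come from the table, while the rule that `percentile` accompanies the `percentiles` aggregation is an assumption based on its description.

```python
def build_metric_query(entity_id, entity_type, otel_name, aggregation,
                       percentile=None, resolution=None):
    """Assemble an input payload for DATAROBOT_LIST_OTEL_METRICS_VALUE_OVER_TIME."""
    allowed = {"sum", "average", "min", "max", "cardinality", "percentiles", "histogram"}
    if aggregation not in allowed:
        raise ValueError(f"unsupported aggregation: {aggregation}")
    # Assumption: percentile is only meaningful for the 'percentiles' aggregation.
    if percentile is not None and aggregation != "percentiles":
        raise ValueError("percentile requires the 'percentiles' aggregation")
    payload = {
        "entityId": entity_id,
        "entityType": entity_type,
        "otelName": otel_name,
        "aggregation": aggregation,
        "percentile": percentile,
        "resolution": resolution,
    }
    # Drop optional keys that were not provided.
    return {k: v for k, v in payload.items() if v is not None}
```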

### List OpenTelemetry Metrics Values

**Slug:** `DATAROBOT_LIST_OTEL_METRICS_VALUES`

Tool to get OpenTelemetry metrics values for a specific entity over a single time period. Use when retrieving OTel performance metrics for deployments, use cases, or other entities.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `endTime` | string | No | End time of the metric period (ISO 8601 format). |
| `entityId` | string | Yes | ID of the entity to retrieve metrics for. |
| `startTime` | string | No | Start time of the metric period (ISO 8601 format). |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity (deployment, use_case, etc.). |
| `histogramBuckets` | string ("false" | "False" | "true" | "True") | No | Return histograms as buckets instead of percentile values. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List OpenTelemetry Metrics Values Over Time

**Slug:** `DATAROBOT_LIST_OTEL_METRICS_VALUES_OVER_TIME`

Tool to retrieve OpenTelemetry configured metrics values for a specific entity over time. Use when monitoring or analyzing entity performance metrics over time periods.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `endTime` | string | No | The end time of the metric list (ISO 8601 format). If not provided, defaults to current time. |
| `entityId` | string | Yes | ID of the entity to which the metric belongs. |
| `startTime` | string | No | The start time of the metric list (ISO 8601 format). If not provided, defaults to a system-determined start time. |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs (e.g., deployment, use_case, custom_application). |
| `resolution` | string ("PT1M" | "PT5M" | "PT1H" | "P1D" | "P7D") | No | Period for values of the metric list. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List OpenTelemetry Traces

**Slug:** `DATAROBOT_LIST_OTEL_TRACES`

Tool to list OpenTelemetry traces for a specified entity (deployment, use case, etc.). Use when retrieving observability data to debug or monitor AI applications.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `endTime` | string | No | The end time of the trace (ISO 8601 format or timestamp). |
| `entityId` | string | Yes | ID of the entity to which the trace belongs. |
| `startTime` | string | No | The start time of the trace (ISO 8601 format or timestamp). |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the trace belongs. |
| `searchKeys` | string | No | A comma-separated list of search keys to filter traces by specific attributes. |
| `maxTraceCost` | integer | No | Maximum cost of the trace. |
| `minTraceCost` | integer | No | Minimum cost of the trace. |
| `searchValues` | string | No | A comma-separated list of search values corresponding to the search keys. |
| `maxSpanDuration` | integer | No | Maximum duration of the span in nanoseconds. |
| `minSpanDuration` | integer | No | Minimum duration of the span in nanoseconds. |
| `minTraceDuration` | integer | No | Minimum duration of the trace in nanoseconds. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
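
Since `searchKeys` and `searchValues` are parallel comma-separated lists, it can help to derive both from a single mapping. This helper is a hypothetical convenience, not part of the tool itself; the comma restriction is an assumption, since an embedded comma would be read as a list separator.

```python
def build_trace_filters(filters):
    """Convert {attribute: value} pairs into the parallel searchKeys/searchValues
    strings accepted by DATAROBOT_LIST_OTEL_TRACES."""
    if not filters:
        return {}
    keys, values = zip(*filters.items())
    # Assumption: commas inside keys or values would corrupt the lists.
    if any("," in s for s in keys + values):
        raise ValueError("keys and values must not contain commas")
    return {"searchKeys": ",".join(keys), "searchValues": ",".join(values)}
```

Note that the duration filters are expressed in nanoseconds, so a 250 ms floor would be passed as `minTraceDuration=250_000_000`.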

### List Overall Moderation Configuration

**Slug:** `DATAROBOT_LIST_OVERALL_MODERATION_CONFIGURATION`

Tool to get overall moderation configuration for an entity. Use when you need to retrieve moderation settings for custom models, custom model versions, or playgrounds.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityId` | string | Yes | Retrieve overall moderation configuration for the given entity ID. |
| `entityType` | string ("customModel" | "customModelVersion" | "playground") | Yes | Entity type of the given entity ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Pinned Use Cases

**Slug:** `DATAROBOT_LIST_PINNED_USECASES`

Tool to list all pinned use cases in DataRobot. Use when you need to retrieve the user's pinned use cases (up to 8).

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Prediction Environments

**Slug:** `DATAROBOT_LIST_PREDICTION_ENVIRONMENTS`

Lists all available prediction environments. Use this to find an environment ID for deploying models. Returns environment details including platform type, supported model formats, and management status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (default is 100). |
| `offset` | integer | No | Number of results to skip for pagination. |
| `platform` | string ("aws" | "gcp" | "azure" | "onPremise" | "datarobot" | "datarobotServerless" | "openShift" | "other" | "snowflake" | "sapAiCore") | No | Filter environments by platform type (e.g., 'datarobotServerless', 'aws'). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
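
A query string for this listing can be composed from the documented parameters. The sketch below only builds the parameters; the platform whitelist mirrors the enum above, and actually sending the request (endpoint, auth) is outside its scope.

```python
from urllib.parse import urlencode

PLATFORMS = {"aws", "gcp", "azure", "onPremise", "datarobot", "datarobotServerless",
             "openShift", "other", "snowflake", "sapAiCore"}

def environments_query(platform=None, limit=100, offset=0):
    """Build the query string for a prediction-environments listing call."""
    if platform is not None and platform not in PLATFORMS:
        raise ValueError(f"unknown platform: {platform}")
    params = {"limit": limit, "offset": offset}
    if platform:
        params["platform"] = platform
    return urlencode(params)
```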

### List Prediction Servers

**Slug:** `DATAROBOT_LIST_PREDICTION_SERVERS`

Tool to list prediction servers available to the user. Use after authenticating to retrieve real-time and batch scoring endpoints.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return |
| `offset` | integer | No | Number of results to skip (for pagination) |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project Images

**Slug:** `DATAROBOT_LIST_PROJECT_IMAGES`

Tool to retrieve image metadata for a DataRobot project. Use when you need to list images from a project with image data, optionally filtering by column or target values.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `column` | string | No | Name of the column to query. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `projectId` | string | Yes | The ID of the DataRobot project. |
| `targetValue` | string | No | For classification projects - when specified, only images corresponding to this target value will be returned. Mutually exclusive with targetBinStart/targetBinEnd. |
| `targetBinEnd` | string | No | For regression projects - when specified, only images corresponding to the target values below this will be returned. Mutually exclusive with targetValue. Must be specified with targetBinStart. |
| `targetBinStart` | string | No | For regression projects - when specified, only images corresponding to the target values above this will be returned. Mutually exclusive with targetValue. Must be specified with targetBinEnd. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
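
The mutual-exclusion rules for the target filters can be validated up front. This is an illustrative helper based only on the constraints stated in the table; the function name is hypothetical.

```python
def image_filter_params(target_value=None, target_bin_start=None, target_bin_end=None):
    """Validate and assemble the mutually exclusive target filters for
    DATAROBOT_LIST_PROJECT_IMAGES."""
    if target_value is not None and (target_bin_start is not None
                                     or target_bin_end is not None):
        raise ValueError("targetValue is mutually exclusive with "
                         "targetBinStart/targetBinEnd")
    if (target_bin_start is None) != (target_bin_end is None):
        raise ValueError("targetBinStart and targetBinEnd must be given together")
    params = {}
    if target_value is not None:
        params["targetValue"] = target_value
    if target_bin_start is not None:
        params["targetBinStart"] = target_bin_start
        params["targetBinEnd"] = target_bin_end
    return params
```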

### List Project Jobs

**Slug:** `DATAROBOT_LIST_PROJECT_JOBS`

Tool to list all jobs for a given DataRobot project. Use when you need to inspect or monitor the status of jobs within a project, optionally filtering by status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `status` | string ("queue" | "inprogress" | "error") | No | Optional filter to return only jobs with this status. Allowed values: queue, inprogress, error. |
| `project_id` | string | Yes | The ID of the project to list jobs for. Get this ID from the list_projects action. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
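
Because jobs move through `queue` and `inprogress` before finishing, this listing is a natural polling target. The loop below is a sketch: `list_jobs` stands in for however the tool is executed in your client, and the status names come from the table above.

```python
import time

def wait_for_jobs(list_jobs, project_id, poll_seconds=1.0, max_polls=30):
    """Poll a project's job list until no job is queued or in progress.

    `list_jobs(project_id)` is a placeholder for executing
    DATAROBOT_LIST_PROJECT_JOBS and returning its job records."""
    for _ in range(max_polls):
        active = [j for j in list_jobs(project_id)
                  if j.get("status") in ("queue", "inprogress")]
        if not active:
            return True
        time.sleep(poll_seconds)
    return False
```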

### List Project Models

**Slug:** `DATAROBOT_LIST_PROJECT_MODELS`

Tool to list models for a DataRobot project. Use when retrieving models from a specific project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | If specified, filters for models with a model type matching `name`. |
| `orderBy` | string ("metric" | "-metric" | "samplePct" | "-samplePct") | No | Sorting options for models. |
| `isStarred` | string ("false" | "False" | "true" | "True") | No | Boolean string values for isStarred filter. |
| `projectId` | string | Yes | The project ID. |
| `samplePct` | number | No | If specified, filters for models with a matching sample percentage. |
| `withMetric` | string | No | If specified, the returned models will only have scores for this metric. If not, all metrics will be included. |
| `showInSampleScores` | boolean | No | If true, returns metric scores for models trained into validation/holdout for projects that do not have stacked predictions. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
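
The `orderBy` values encode direction with a leading `-`, so a small helper can keep callers from hand-building the string. This is a hypothetical convenience, grounded only in the enum shown above.

```python
def order_by_param(field, descending=False):
    """Compose an orderBy value; a leading '-' requests descending order."""
    if field not in ("metric", "samplePct"):
        raise ValueError("orderBy supports only 'metric' or 'samplePct'")
    return f"-{field}" if descending else field
```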

### List Projects

**Slug:** `DATAROBOT_LIST_PROJECTS`

Tool to list all available DataRobot projects. Use when retrieving a catalog of projects to select from.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `orderBy` | string ("projectName" | "-projectName") | No | Sort order, either 'projectName' or '-projectName'. |
| `projectId` | string | No | Filter by project ID (exact match). |
| `projectName` | string | No | Filter by project name (exact match). |
| `featureDiscovery` | string ("false" | "False" | "true" | "True") | No | Filter to return only Feature Discovery projects: true/false. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
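
With `limit` and `offset`, the full catalog can be collected page by page. The loop below is a sketch; `fetch_page` stands in for executing DATAROBOT_LIST_PROJECTS with the given pagination parameters, and the stopping rule assumes the server returns fewer than `limit` items only on the last page.

```python
def fetch_all_projects(fetch_page, limit=50):
    """Collect every project by advancing offset until a short page arrives.

    `fetch_page(limit=..., offset=...)` is a placeholder for executing
    DATAROBOT_LIST_PROJECTS and returning the page's project records."""
    projects, offset = [], 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        projects.extend(page)
        if len(page) < limit:
            return projects
        offset += limit
```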

### List Bias Mitigated Models for Project

**Slug:** `DATAROBOT_LIST_PROJECTS_BIAS_MITIGATED_MODELS`

Tool to list bias-mitigated models for a DataRobot project. Use when retrieving models that have been created with bias mitigation techniques applied.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `projectId` | string | Yes | The project ID to retrieve bias-mitigated models for. |
| `parentModelId` | string | No | Retrieve mitigated models for the specified parent model ID. If not specified, retrieves all mitigated models for the project. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Bias Mitigation Feature Info

**Slug:** `DATAROBOT_LIST_PROJECTS_BIAS_MITIGATION_FEATURE_INFO`

Tool to get bias mitigation data quality information for a given project and feature. Use when analyzing protected features for fairness and bias issues.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `project_id` | string | Yes | The project ID |
| `feature_name` | string | Yes | Name of feature for mitigation info |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Bias vs Accuracy Insights

**Slug:** `DATAROBOT_LIST_PROJECTS_BIAS_VS_ACCURACY_INSIGHTS`

Tool to list bias vs accuracy insights for a DataRobot project. Use when evaluating model fairness and accuracy trade-offs.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The project ID to retrieve bias vs accuracy insights for. |
| `accuracyMetric` | string ("AUC" | "Weighted AUC" | "Area Under PR Curve" | "Weighted Area Under PR Curve" | "Kolmogorov-Smirnov" | "Weighted Kolmogorov-Smirnov" | "FVE Binomial" | "Weighted FVE Binomial" | "Gini Norm" | "Weighted Gini Norm" | "LogLoss" | "Weighted LogLoss" | "Max MCC" | "Weighted Max MCC" | "Rate@Top5%" | "Weighted Rate@Top5%" | "Rate@Top10%" | "Weighted Rate@Top10%" | "Rate@TopTenth%" | "RMSE" | "Weighted RMSE") | No | Supported accuracy metrics for bias vs accuracy insights. |
| `fairnessMetric` | string | No | The fairness metric used to calculate the fairness scores. |
| `protectedFeature` | string | No | Name of the protected feature to analyze for bias. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project Blender Models

**Slug:** `DATAROBOT_LIST_PROJECTS_BLENDER_MODELS`

Tool to list all blender models in a DataRobot project. Use when you need to retrieve blender (ensemble) models that combine multiple sub-models.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. If 0, all results are returned. |
| `offset` | integer | No | Number of results to skip for pagination. This many results will be skipped. |
| `projectId` | string | Yes | The unique identifier of the DataRobot project to retrieve blender models from |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project Blueprints

**Slug:** `DATAROBOT_LIST_PROJECTS_BLUEPRINTS`

Tool to list all blueprints available in a DataRobot project. Use when you need to explore available modeling blueprints for a project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The project ID to list blueprints for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Blueprint Chart

**Slug:** `DATAROBOT_LIST_PROJECTS_BLUEPRINTS_BLUEPRINT_CHART`

Tool to retrieve a blueprint chart by blueprint ID. Use when you need to visualize the structure and flow of a DataRobot blueprint.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | ID of the DataRobot project containing the blueprint. |
| `blueprintId` | string | Yes | ID of the blueprint to retrieve the chart for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Blueprints Blueprint Docs

**Slug:** `DATAROBOT_LIST_PROJECTS_BLUEPRINTS_BLUEPRINT_DOCS`

Tool to retrieve blueprint tasks documentation. Use when you need detailed information about the tasks, parameters, and references in a blueprint.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `project_id` | string | Yes | The ID of the project containing the blueprint |
| `blueprint_id` | string | Yes | The ID of the blueprint to retrieve documentation for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Combined Models

**Slug:** `DATAROBOT_LIST_PROJECTS_COMBINED_MODELS`

Tool to retrieve all existing combined models for a DataRobot project. Use when you need to list combined models from segmented modeling projects.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `projectId` | string | Yes | The project ID to retrieve combined models for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Data Slices

**Slug:** `DATAROBOT_LIST_PROJECTS_DATA_SLICES`

Tool to list paginated data slices for a specific DataRobot project. Use when you need to browse or filter data slices within a project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of items to return per page. |
| `offset` | integer | No | The number of items to skip before starting to collect the result set. |
| `projectId` | string | Yes | The project ID to list data slices for. |
| `searchQuery` | string | No | Search query to filter data slices by name or other criteria. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Datetime Models

**Slug:** `DATAROBOT_LIST_PROJECTS_DATETIME_MODELS`

Tool to list datetime partitioned models in a DataRobot project. Use when you need to retrieve models from time series projects with datetime partitioning.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. The default may change without notice. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `projectId` | string | Yes | The project ID to list datetime models from. |
| `bulkOperationId` | string | No | The ID of the bulk model operation. If specified, only models submitted in scope of this operation will be shown. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Document Text Extraction Samples

**Slug:** `DATAROBOT_LIST_PROJECTS_DOCUMENT_TEXT_EXTRACTION_SAMPLES`

Tool to list metadata on all computed document text extraction samples in a DataRobot project across all models. Use when you need to retrieve document text extraction sample information for a specific project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. The default may change without notice. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `project_id` | string | Yes | Project ID to list document text extraction samples for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project Document Thumbnails

**Slug:** `DATAROBOT_LIST_PROJECTS_DOCUMENT_THUMBNAILS`

Tool to list document thumbnail metadata for a DataRobot project. Use when retrieving document page information for features with document data.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `projectId` | string | Yes | Unique identifier of the DataRobot project. Obtain from DATAROBOT_LIST_PROJECTS. |
| `featureName` | string | No | Name of the document feature to filter thumbnails by. |
| `targetValue` | string | No | For classification projects, returns only document pages corresponding to this target value. Mutually exclusive with targetBinStart and targetBinEnd. |
| `targetBinEnd` | string | No | For regression projects, returns only document pages corresponding to target values below this value. Mutually exclusive with targetValue. Must be specified with targetBinStart. |
| `targetBinStart` | string | No | For regression projects, returns only document pages corresponding to target values above this value. Mutually exclusive with targetValue. Must be specified with targetBinEnd. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Datetime Models Accuracy Over Time Plots

**Slug:** `DATAROBOT_LIST_PROJECTS_DT_MODELS_ACCURACY_OVER_TIME_PLOTS`

Tool to retrieve metadata for Accuracy over Time insights for datetime models. Use when analyzing time series model accuracy trends across backtests and holdout sets.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The model ID. |
| `seriesId` | string | No | The name of the series to retrieve. Only available for time series multiseries projects. If not provided, metadata for the average plot over the first 1000 series will be retrieved. |
| `projectId` | string | Yes | The project ID. |
| `forecastDistance` | integer | No | Forecast distance to retrieve the data for. If not specified, the first forecast distance for this project will be used. Forecast distance specifies the number of time steps between the predicted point and the origin point. Only available for time series supervised projects. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects External Scores

**Slug:** `DATAROBOT_LIST_PROJECTS_EXTERNAL_SCORES`

Tool to list external scores on prediction datasets for a DataRobot project. Use when retrieving scoring results for predictions made on external datasets.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. The default may change without notice. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `modelId` | string | No | If provided, returns scores for model with matching modelId. |
| `datasetId` | string | No | If provided, returns scores for dataset with matching datasetId. |
| `projectId` | string | Yes | The project ID to list external scores for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Feature Association Matrix

**Slug:** `DATAROBOT_LIST_PROJECTS_FEATURE_ASSOCIATION_MATRIX`

Tool to retrieve pairwise feature association statistics for a DataRobot project. Use when you need to analyze feature correlations or associations between features in a project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `type` | string ("association" | "correlation") | No | The type of dependence for the data. Must be either 'association' or 'correlation'. |
| `metric` | string ("mutualInfo" | "cramersV" | "spearman" | "pearson" | "tau") | No | The name of a metric to get pairwise data for. Must be one of mutualInfo, cramersV, spearman, pearson, or tau. |
| `projectId` | string | Yes | The project ID to retrieve feature association matrix for. |
| `featurelistId` | string | No | The featurelist to look up FAM data for. By default, the 'Informative Features' or 'Timeseries Informative Features' list is used, depending on the project type. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
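
The `type` and `metric` enums can be checked before the call. The helper below is illustrative only; it does not attempt to pair metrics with dependence types, since the table does not specify which combinations are valid.

```python
DEPENDENCE_TYPES = ("association", "correlation")
FAM_METRICS = ("mutualInfo", "cramersV", "spearman", "pearson", "tau")

def fam_params(project_id, dep_type=None, metric=None, featurelist_id=None):
    """Assemble inputs for DATAROBOT_LIST_PROJECTS_FEATURE_ASSOCIATION_MATRIX."""
    if dep_type is not None and dep_type not in DEPENDENCE_TYPES:
        raise ValueError("type must be 'association' or 'correlation'")
    if metric is not None and metric not in FAM_METRICS:
        raise ValueError(f"unsupported metric: {metric}")
    params = {"projectId": project_id}
    if dep_type is not None:
        params["type"] = dep_type
    if metric is not None:
        params["metric"] = metric
    if featurelist_id is not None:
        params["featurelistId"] = featurelist_id
    return params
```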

### List Projects Feature Association Matrix Details

**Slug:** `DATAROBOT_LIST_PROJECTS_FEATURE_ASSOCIATION_MATRIX_DETAILS`

Tool to retrieve feature association matrix details between two features in a DataRobot project. Use when you need to analyze the relationship and association between a pair of features for visualization.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `feature1` | string | Yes | The name of the first feature to analyze. |
| `feature2` | string | Yes | The name of the second feature to analyze. |
| `projectId` | string | Yes | The project ID. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT. |
| `featurelistId` | string | No | The feature list to look up FAM data for. By default, the 'Informative Features' or 'Timeseries Informative Features' list is used, depending on the project type. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project Features

**Slug:** `DATAROBOT_LIST_PROJECTS_FEATURES`

Tool to list all features in a DataRobot project. Use when you need to retrieve feature details, statistics, and metadata for analysis or model preparation.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sortBy` | string ("name" | "id" | "importance" | "featureType" | "uniqueCount" | "naCount" | "mean" | "stdDev" | "median" | "min" | "max" | "-name" | "-id" | "-importance" | "-featureType" | "-uniqueCount" | "-naCount" | "-mean" | "-stdDev" | "-median" | "-min" | "-max") | No | Sort order options for project features. |
| `projectId` | string | Yes | The project ID. |
| `searchFor` | string | No | Limit results by specific features. Performs a substring search for the term you provide in feature names. |
| `featurelistId` | string | No | Filter features by a specific featurelist ID. |
| `forSegmentedAnalysis` | string ("false" | "False" | "true" | "True") | No | String-encoded boolean filter for segmented analysis ('true' or 'false'). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
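
The `sortBy` values above follow a common convention: a leading `-` requests descending order. A small illustrative helper (names are hypothetical) that applies the same convention locally to a list of feature records:

```python
# Sketch of the sortBy convention used by DATAROBOT_LIST_PROJECTS_FEATURES:
# "importance" sorts ascending, "-importance" sorts descending.

def parse_sort_by(sort_by: str) -> tuple[str, bool]:
    """Split a sortBy value into (field, descending)."""
    if sort_by.startswith("-"):
        return sort_by[1:], True
    return sort_by, False

def sort_features(features: list[dict], sort_by: str) -> list[dict]:
    """Sort feature records the same way the API would."""
    field, descending = parse_sort_by(sort_by)
    return sorted(features, key=lambda f: f[field], reverse=descending)

feats = [{"name": "age", "importance": 0.42},
         {"name": "income", "importance": 0.77}]
print(sort_features(feats, "-importance")[0]["name"])  # income
```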

### List Project Feature Frequent Values

**Slug:** `DATAROBOT_LIST_PROJECTS_FEATURES_FREQUENT_VALUES`

Tool to retrieve frequent values information for a feature in a DataRobot project. Use when analyzing feature distributions or data quality for a specific feature.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | ID of the DataRobot project containing the feature. |
| `featureName` | string | Yes | Name of the feature to retrieve frequent values for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project Features Metrics

**Slug:** `DATAROBOT_LIST_PROJECTS_FEATURES_METRICS`

Tool to retrieve available metrics for a specific feature in a DataRobot project. Use when you need to check which metrics are compatible with a given feature as a target.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The project ID. |
| `featureName` | string | Yes | The name of the feature to check. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Feature Multiseries Properties

**Slug:** `DATAROBOT_LIST_PROJECTS_FEATURES_MULTISERIES_PROPERTIES`

Tool to retrieve potential multiseries ID columns to use with a particular datetime partition column. Use when configuring time series projects with multiseries functionality.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The unique identifier of the DataRobot project. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_CREATE_PROJECT. |
| `featureName` | string | Yes | The name of the feature to be used as the datetime partition column. This should be a datetime-type feature in the project. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Frozen Models

**Slug:** `DATAROBOT_LIST_PROJECTS_FROZEN_MODELS`

Tool to list all frozen models from a DataRobot project. Use when you need to retrieve frozen model records from a specific project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. If 0, all results are returned. |
| `offset` | integer | No | This many results will be skipped (for pagination) |
| `projectId` | string | Yes | The unique identifier of the project |
| `withMetric` | string | No | If specified, the returned models will only have scores for this metric. If not, all metrics will be included |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
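
The `limit`/`offset` pair above is the standard offset pagination used by most list tools in this catalog. A minimal sketch of the paging loop, with `fetch_page` standing in for an actual tool call (the fake backend below exists only to demonstrate the loop):

```python
# Offset-based pagination sketch: advance offset by page_size until a
# short (or empty) page signals the end of the result set.

def paginate(fetch_page, page_size: int = 100):
    """Yield every record from a limit/offset-paginated endpoint."""
    offset = 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        yield from page
        if len(page) < page_size:
            break
        offset += page_size

# Fake backend with 5 records, simulating the real tool's response list.
RECORDS = [{"id": i} for i in range(5)]

def fake_fetch(limit, offset):
    return RECORDS[offset:offset + limit]

print(list(paginate(fake_fetch, page_size=2)))
```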

### List Project Image Activation Maps

**Slug:** `DATAROBOT_LIST_PROJECTS_IMAGE_ACTIVATION_MAPS`

Tool to list all image activation maps for a DataRobot project. Use when you need to retrieve activation map records for visual AI models.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of items to return |
| `offset` | integer | No | The number of items to skip over for pagination |
| `projectId` | string | Yes | The project ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project Image Embeddings

**Slug:** `DATAROBOT_LIST_PROJECTS_IMAGE_EMBEDDINGS`

Tool to list all image embeddings for a DataRobot project. Use when you need to retrieve image embeddings generated for a project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of items to return. |
| `offset` | integer | No | The number of items to skip over. |
| `projectId` | string | Yes | The project ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project Modeling Featurelists

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELING_FEATURELISTS`

Tool to list all modeling featurelists from a DataRobot project. Use when you need to retrieve featurelists available for modeling in a specific project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. If 0, all results. |
| `offset` | integer | No | This many results will be skipped. |
| `sortBy` | string ("name" | "description" | "features" | "numModels" | "created" | "isUserCreated" | "-name" | "-description" | "-features" | "-numModels" | "-created" | "-isUserCreated") | No | Sort order options for modeling featurelists. |
| `projectId` | string | Yes | The project ID |
| `searchFor` | string | No | Limit results by specific featurelists. Performs a substring search for the term you provide in featurelist names. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Modeling Features

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELING_FEATURES`

Tool to list all modeling features for a DataRobot project. Use when you need to retrieve feature metadata, statistics, and importance scores for model training.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. If 0, returns all results. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `sortBy` | string ("name" | "id" | "importance" | "featureType" | "uniqueCount" | "naCount" | "mean" | "stdDev" | "median" | "min" | "max" | "-name" | "-id" | "-importance" | "-featureType" | "-uniqueCount" | "-naCount" | "-mean" | "-stdDev" | "-median" | "-min" | "-max") | No | Property to sort features by. Use negative prefix (e.g., '-name') for descending order. |
| `projectId` | string | Yes | Unique identifier of the DataRobot project. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT. |
| `searchFor` | string | No | Substring to filter feature names by partial match. |
| `featurelistId` | string | No | Filter features by a specific featurelist ID. Obtain from DATAROBOT_LIST_FEATURELISTS. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Model Jobs

**Slug:** `DATAROBOT_LIST_PROJECTS_MODEL_JOBS`

Tool to list modeling jobs for a given DataRobot project. Use when you need to inspect or monitor the status of model training jobs within a project, optionally filtering by status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `status` | string ("queue" | "inprogress" | "error") | No | Filter returned model jobs by status. |
| `project_id` | string | Yes | The ID of the project to list modeling jobs for. Get this ID from the list_projects action. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
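
When monitoring training, it is often useful to bucket the returned jobs by the three status values above. An illustrative helper (the job-record shape is assumed for the example):

```python
# Partition model-job records by status. Records with an unrecognized
# status are ignored rather than raising, since the API may evolve.

VALID_STATUSES = {"queue", "inprogress", "error"}

def jobs_by_status(jobs: list[dict]) -> dict[str, list[dict]]:
    """Group job records into queue / inprogress / error buckets."""
    buckets = {s: [] for s in VALID_STATUSES}
    for job in jobs:
        status = job.get("status")
        if status in buckets:
            buckets[status].append(job)
    return buckets

sample = [{"id": "1", "status": "queue"},
          {"id": "2", "status": "inprogress"},
          {"id": "3", "status": "error"},
          {"id": "4", "status": "queue"}]
print({s: len(v) for s, v in jobs_by_status(sample).items()})
```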

### List Projects Models Advanced Tuning Parameters

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_ADVANCED_TUNING_PARAMETERS`

Tool to retrieve information about all advanced tuning parameters available for a specified model. Use when you need to understand what parameters can be tuned for a model before creating an advanced tuned version.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The model ID |
| `project_id` | string | Yes | The project ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Model Blueprint Chart

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_BLUEPRINT_CHART`

Tool to retrieve a reduced model blueprint chart by model ID. Use when you need to visualize the structure of a trained model's blueprint.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | ID of the model to retrieve the blueprint chart for. |
| `projectId` | string | Yes | ID of the DataRobot project containing the model. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Models Blueprint Docs

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_BLUEPRINT_DOCS`

Tool to retrieve task documentation for a reduced model blueprint. Use when you need detailed information about the tasks, parameters, and references in a model's blueprint.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The ID of the model to retrieve blueprint documentation for |
| `project_id` | string | Yes | The ID of the project containing the model |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Models Cross Class Accuracy Scores

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_CROSS_CLASS_ACCURACY_SCORES`

Tool to list cross-class accuracy scores for a specific model in a project. Use when analyzing per-class accuracy metrics for bias and fairness evaluation.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Number of items to return, defaults to 100 if not provided. |
| `offset` | integer | No | Number of items to skip. Defaults to 0 if not provided. |
| `modelId` | string | Yes | The model ID. |
| `projectId` | string | Yes | The project ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Models Cross Validation Scores

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_CROSS_VALIDATION_SCORES`

Tool to retrieve cross-validation scores for each partition in a DataRobot model. Use when evaluating model performance across different cross-validation folds.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `metric` | string | No | Set to the name of a metric to only return results for that metric (e.g., 'FVE Gamma', 'AUC', 'RMSE'). |
| `modelId` | string | Yes | The unique identifier of the model. |
| `partition` | number | No | Set to a partition value such as 1.0 or 2.0 to return results only for that partition. |
| `projectId` | string | Yes | The unique identifier of the DataRobot project. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Models Data Disparity Insights

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_DATA_DISPARITY_INSIGHTS`

Tool to retrieve Cross Class Data Disparity insights for a DataRobot model. Use when analyzing data disparity between two classes for a protected feature.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Number of items to return, defaults to 100 if not provided. |
| `offset` | integer | No | Number of items to skip. Defaults to 0 if not provided. |
| `feature` | string | Yes | Feature for which insight is computed. |
| `modelId` | string | Yes | The model ID. |
| `projectId` | string | Yes | The project ID. |
| `className1` | string | Yes | One of the compared classes. |
| `className2` | string | Yes | Another compared class. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Lift Charts for Model Datasets

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_DATASET_LIFT_CHARTS`

Tool to retrieve lift chart data computed on prediction datasets for a project model. Use when you need to analyze lift chart performance metrics for model predictions across different datasets.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `modelId` | string | Yes | The model ID |
| `datasetId` | string | No | If provided, only the lift chart for the dataset with the matching datasetId is returned. |
| `projectId` | string | Yes | The project ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Model Dataset ROC Curves

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_DATASET_ROC_CURVES`

Tool to retrieve ROC curve data for a model's prediction datasets. Use when analyzing model performance via ROC curves. NOTE: This endpoint is deprecated; DataRobot recommends using /api/v2/insights/rocCurve/models/{entityId}/ instead.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. Defaults to 100. |
| `offset` | integer | No | Number of results to skip for pagination. Defaults to 0. |
| `modelId` | string | Yes | The unique identifier of the model. |
| `datasetId` | string | No | If provided, only returns ROC curve for the dataset with matching datasetId. |
| `projectId` | string | Yes | The unique identifier of the DataRobot project. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
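
A typical use of ROC curve data is choosing a classification threshold. The point shape below (`threshold`, `truePositiveRate`, `falsePositiveRate`) is assumed for illustration; this sketch picks the threshold maximizing Youden's J statistic (TPR − FPR):

```python
# Pick the ROC point maximizing Youden's J (TPR - FPR), a common
# heuristic for a balanced classification threshold.

def best_threshold(roc_points: list[dict]) -> float:
    """Return the threshold of the point with maximum TPR - FPR."""
    best = max(roc_points,
               key=lambda p: p["truePositiveRate"] - p["falsePositiveRate"])
    return best["threshold"]

points = [
    {"threshold": 0.2, "truePositiveRate": 0.95, "falsePositiveRate": 0.60},
    {"threshold": 0.5, "truePositiveRate": 0.85, "falsePositiveRate": 0.20},
    {"threshold": 0.8, "truePositiveRate": 0.50, "falsePositiveRate": 0.05},
]
print(best_threshold(points))  # 0.5
```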

### List Feature Effects

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_FEATURE_EFFECTS`

Tool to retrieve Feature Effects for a DataRobot model. Feature Effects show how each feature impacts predictions, including partial dependence and predicted vs actual relationships. Use when analyzing feature behavior and model interpretability.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `source` | string ("training" | "validation" | "holdout") | No | Data source for feature effects computation. Defaults to 'training'. |
| `model_id` | string | Yes | The unique identifier of the model |
| `project_id` | string | Yes | The unique identifier of the DataRobot project |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Model Feature Effects Metadata

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_FEATURE_EFFECTS_METADATA`

Tool to retrieve Feature Effects metadata for a model. Use when you need to check the status and available sources for Feature Effects computation.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The model ID for which to retrieve Feature Effects metadata. |
| `projectId` | string | Yes | The project ID containing the model. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Model Feature Impact

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_FEATURE_IMPACT`

Tool to retrieve feature impact scores for features in a DataRobot model. Use when you need to understand which features have the most impact on model predictions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The model ID |
| `backtest` | string | No | The backtest value used for Feature Impact computation. Applicable to datetime-aware models. |
| `projectId` | string | Yes | The project ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
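
Feature impact is typically reported both raw and normalized so the top feature scores 1.0. The row shape below (`featureName`, `impactUnnormalized`) is assumed for illustration; the sketch renormalizes raw scores locally:

```python
# Renormalize raw feature-impact scores so the strongest feature is 1.0,
# mirroring the normalized-impact convention.

def normalize_impact(rows: list[dict]) -> list[dict]:
    """Scale impactUnnormalized values relative to the maximum."""
    top = max(r["impactUnnormalized"] for r in rows)
    return [{"featureName": r["featureName"],
             "impactNormalized": r["impactUnnormalized"] / top}
            for r in rows]

rows = [{"featureName": "income", "impactUnnormalized": 8.0},
        {"featureName": "age", "impactUnnormalized": 2.0}]
print(normalize_impact(rows))
```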

### List Grid Search Scores

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_GRID_SEARCH_SCORES`

Tool to retrieve grid search scores for a specific model in a DataRobot project. Use when analyzing hyperparameter tuning results or comparing parameter combinations for model optimization.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. |
| `offset` | integer | No | This many results will be skipped. |
| `source` | string ("validation") | No | Source type for the grid search scores. |
| `modelId` | string | Yes | The model ID. |
| `projectId` | string | Yes | The project ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Model Lift Charts

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_LIFT_CHART`

Tool to retrieve all available lift charts for a DataRobot model. Use when analyzing model performance to understand how well the model separates predictions. Lift charts show mean actual vs predicted values across bins sorted by prediction value.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The ID of the model for which to retrieve lift charts. Use DATAROBOT_LIST_PROJECTS_MODELS or DATAROBOT_GET_PROJECTS_MODELS to find model IDs. |
| `project_id` | string | Yes | The ID of the DataRobot project containing the model. Use DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT to find project IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
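
Since lift chart bins compare mean actual and predicted values, one quick derived metric is a weighted calibration error across bins. The bin shape (`actual`, `predicted`, `binWeight`) is assumed for illustration:

```python
# Weighted mean absolute difference between actual and predicted bin
# means -- a quick calibration sanity check from lift chart data.

def calibration_error(bins: list[dict]) -> float:
    """Return the binWeight-weighted mean |actual - predicted|."""
    total_weight = sum(b["binWeight"] for b in bins)
    return sum(abs(b["actual"] - b["predicted"]) * b["binWeight"]
               for b in bins) / total_weight

bins = [{"actual": 0.10, "predicted": 0.12, "binWeight": 50},
        {"actual": 0.80, "predicted": 0.75, "binWeight": 50}]
print(round(calibration_error(bins), 4))  # 0.035
```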

### Get Model Missing Values Report

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_MISSING_REPORT`

Tool to retrieve a summary of how a model's subtasks handle missing values. Use when analyzing model preprocessing behavior for features with missing data.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | ID of the model to retrieve the missing values report for |
| `project_id` | string | Yes | ID of the DataRobot project containing the model |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Prime Rulesets

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_PRIME_RULESETS`

Tool to list DataRobot Prime rulesets that approximate a specific model. Use when you need to retrieve interpretable rule-based approximations of a trained model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The model to find approximating rulesets for |
| `projectId` | string | Yes | The project the model belongs to |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Model ROC Curves

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_ROC_CURVES`

Tool to retrieve all available ROC curves for a binary classification model. Use when you need to analyze model performance across all data sources (validation, holdout, cross-validation).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | ID of the model to retrieve ROC curves for. |
| `projectId` | string | Yes | ID of the DataRobot project containing the model. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Retrieve Model Scoring Code

**Slug:** `DATAROBOT_LIST_PROJECTS_MODELS_SCORING_CODE`

Tool to retrieve scoring code JAR file for a specific DataRobot model. Use when you need to download the model's scoring code for local execution.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | Yes | The model to use |
| `projectId` | string | Yes | The project that created the model |
| `sourceCode` | string ("false" | "False" | "true" | "True") | No | If set to "true", the downloaded JAR file will contain only the source code and will not be executable. Defaults to "false". |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Multicategorical Invalid Format

**Slug:** `DATAROBOT_LIST_PROJECTS_MULTICATEGORICAL_INVALID_FORMAT`

Tool to retrieve multicategorical data quality log for a DataRobot project. Use when you need to check for multicategorical feature format errors.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The ID of the project this request is associated with. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Multiseries Names

**Slug:** `DATAROBOT_LIST_PROJECTS_MULTISERIES_NAMES`

Tool to list the series names of a multiseries project. Use when you need to retrieve series names from a multiseries time series project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `projectId` | string | Yes | The project ID |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Optimized Datetime Partitionings

**Slug:** `DATAROBOT_LIST_PROJECTS_OPTIMIZED_DATETIME_PARTITIONINGS`

Tool to list all created optimized datetime partitioning configurations for a project. Use when you need to retrieve the datetime partitioning options that have been generated for time-series modeling.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. Maximum value is 20. |
| `offset` | integer | No | This many results will be skipped. |
| `projectId` | string | Yes | The project ID to retrieve optimized datetime partitionings for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Payoff Matrices

**Slug:** `DATAROBOT_LIST_PROJECTS_PAYOFF_MATRICES`

Tool to list all payoff matrices for a DataRobot project. Use when retrieving payoff matrices to evaluate model cost-benefit trade-offs.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Number of payoff matrices to return. |
| `offset` | integer | No | Number of payoff matrices to skip for pagination. |
| `projectId` | string | Yes | The project ID to list payoff matrices for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project Prediction Datasets

**Slug:** `DATAROBOT_LIST_PROJECTS_PREDICTION_DATASETS`

Tool to list prediction datasets uploaded to a DataRobot project. Use when you need to browse or retrieve prediction datasets for batch predictions or accuracy tracking.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. If 0, all results are returned. Default is 0. |
| `offset` | integer | No | Number of results to skip (for pagination). Default is 0. |
| `projectId` | string | Yes | The project ID to query for prediction datasets. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
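
Nearly every list tool in this catalog follows the same `offset`/`limit` pagination contract (here, `limit` 0 returns everything; other tools cap the page size instead). Below is a minimal paginator sketch, assuming a generic `fetch(offset, limit)` callable that wraps whichever list tool you are draining — the helper and callable names are ours, not part of the API:

```python
from typing import Callable, Iterator, List


def paginate(fetch: Callable[[int, int], List[dict]],
             page_size: int = 100) -> Iterator[dict]:
    """Yield every item from an offset/limit-paginated list tool.

    `fetch(offset, limit)` must return one page of results; iteration
    stops when a page comes back shorter than `page_size`.
    """
    offset = 0
    while True:
        page = fetch(offset, page_size)
        yield from page
        if len(page) < page_size:
            return
        offset += page_size
```

The same loop works for any tool here whose inputs include `offset` and `limit`; only the wrapped call changes.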

### List Project Prediction Explanations Records

**Slug:** `DATAROBOT_LIST_PROJECTS_PREDICTION_EXPLANATIONS_RECORDS`

Tool to list prediction explanations records for a DataRobot project. Use when you need to retrieve prediction explanations that have been computed for models in a project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return |
| `offset` | integer | No | Number of results to skip for pagination |
| `modelId` | string | No | If specified, only prediction explanations records computed for this model will be returned. |
| `projectId` | string | Yes | The unique identifier of the DataRobot project. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project Predictions

**Slug:** `DATAROBOT_LIST_PROJECTS_PREDICTIONS`

Tool to list prediction records for a DataRobot project. Use when retrieving batch prediction metadata for a specific project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. Use 0 for no limit |
| `offset` | integer | No | Number of results to skip for pagination |
| `modelId` | string | No | Filter predictions by model ID |
| `datasetId` | string | No | Filter predictions by dataset ID used to create them |
| `projectId` | string | Yes | The ID of the project to retrieve predictions for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project Predict Jobs

**Slug:** `DATAROBOT_LIST_PROJECTS_PREDICT_JOBS`

Tool to list all prediction jobs for a given DataRobot project. Use when you need to inspect or monitor prediction job statuses within a project, optionally filtering by status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `status` | string ("queue" | "inprogress" | "error") | No | If specified, only prediction jobs with this status are returned. |
| `project_id` | string | Yes | The ID of the project to list prediction jobs for. Get this ID from the list_projects action. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
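
The `status` filter accepts only the three values listed above. A small payload-building sketch for this tool — the `predict_jobs_params` helper is ours; pass its result as the tool's input:

```python
from typing import Optional

# Status values documented for DATAROBOT_LIST_PROJECTS_PREDICT_JOBS.
VALID_PREDICT_JOB_STATUSES = {"queue", "inprogress", "error"}


def predict_jobs_params(project_id: str,
                        status: Optional[str] = None) -> dict:
    """Build the input payload, validating the optional status filter
    against the documented enum before the tool is invoked."""
    if status is not None and status not in VALID_PREDICT_JOB_STATUSES:
        raise ValueError(
            f"status must be one of {sorted(VALID_PREDICT_JOB_STATUSES)}")
    params = {"project_id": project_id}
    if status is not None:
        params["status"] = status
    return params
```

Validating the enum client-side gives a clearer error than a rejected API call.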

### List Projects Prime Files

**Slug:** `DATAROBOT_LIST_PROJECTS_PRIME_FILES`

Tool to list Prime files available in a DataRobot project. Use when you need to retrieve exportable Prime model code files.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. Use 0 to specify no limit. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `modelId` | string | No | If specified, only Prime files with code used in the specified Prime model will be returned. |
| `projectId` | string | Yes | The project ID to list available Prime files for. |
| `parentModelId` | string | No | If specified, only Prime files approximating the specified parent model will be returned. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Prime Models

**Slug:** `DATAROBOT_LIST_PROJECTS_PRIME_MODELS`

Tool to list all Prime models in a DataRobot project. Use when you need to retrieve Prime models, which are interpretable rule-based approximations of complex models.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. Default is 100. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `projectId` | string | Yes | Unique identifier of the DataRobot project to list Prime models for. Obtain from DATAROBOT_LIST_PROJECTS or DATAROBOT_GET_PROJECT. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Rating Table Models

**Slug:** `DATAROBOT_LIST_PROJECTS_RATING_TABLE_MODELS`

Tool to list rating table models for a DataRobot project. Use when you need to retrieve all rating table models associated with a project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | If specified, filters for models with a model type matching 'name'. |
| `orderBy` | string ("metric" | "-metric" | "samplePct" | "-samplePct") | No | Sort order options for rating table models. |
| `isStarred` | string ("false" | "False" | "true" | "True") | No | If specified, filter for starred ("true") or non-starred ("false") models. |
| `projectId` | string | Yes | The unique identifier of the DataRobot project to list models from. |
| `samplePct` | number | No | If specified, filters for models with a matching sample percentage. |
| `withMetric` | string | No | If specified, the returned models will only have scores for this metric. If not, all metrics will be included. |
| `showInSampleScores` | boolean | No | If true, metric scores are returned for models trained into validation/holdout, for projects that do not have stacked predictions. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Rating Tables

**Slug:** `DATAROBOT_LIST_PROJECTS_RATING_TABLES`

Tool to list rating tables for a DataRobot project. Use when you need to retrieve all rating tables associated with a specific project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. Use 0 for no limit. Default: 0 (no limit). |
| `offset` | integer | No | Number of results to skip (for pagination). Default: 0. |
| `modelId` | string | No | If specified, only rating tables with this modelId will be returned. |
| `projectId` | string | Yes | The unique identifier of the DataRobot project to list rating tables from. |
| `parentModelId` | string | No | If specified, only rating tables with this parentModelId will be returned. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Retrieve Rating Table File

**Slug:** `DATAROBOT_LIST_PROJECTS_RATING_TABLES_FILE`

Tool to retrieve a rating table file from a DataRobot project. Use when you need to download the source CSV file for a specific rating table.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The project that owns this data |
| `ratingTableId` | string | Yes | The rating table ID to retrieve the source file from |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
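
The `data` field of this tool's output carries the rating table's source CSV as text. A minimal parsing sketch, assuming the payload is plain CSV with a header row (`parse_rating_table_csv` is our name, not part of the API):

```python
import csv
import io
from typing import Dict, List


def parse_rating_table_csv(data: str) -> List[Dict[str, str]]:
    """Turn the CSV text from the `data` field into a list of row
    dicts keyed by the header row; all values stay as strings."""
    return list(csv.DictReader(io.StringIO(data)))
```

From here the rows can be fed to any tabular tooling; convert numeric columns explicitly, since `csv` does not infer types.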

### List Project Recommended Models

**Slug:** `DATAROBOT_LIST_PROJECTS_RECOMMENDED_MODELS`

Tool to list recommended models for a DataRobot project. Use when you need to retrieve the models that DataRobot recommends for deployment or further analysis based on accuracy and performance characteristics.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | Yes | The project ID to retrieve recommended models for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project RuleFit Files

**Slug:** `DATAROBOT_LIST_PROJECTS_RULE_FIT_FILES`

Tool to list RuleFit code files for a DataRobot project. Use when you need to retrieve RuleFit code files, optionally filtered by a specific RuleFit model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of files to return. |
| `offset` | integer | No | Number of files to skip for pagination. |
| `modelId` | string | No | If specified, only RuleFit code files used in the specified RuleFit model will be returned; otherwise all applicable RuleFit files will be returned. |
| `projectId` | string | Yes | Unique identifier of the DataRobot project to list RuleFit files for. Obtain from DATAROBOT_LIST_PROJECTS. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project Secondary Datasets Configurations

**Slug:** `DATAROBOT_LIST_PROJECTS_SECONDARY_DATASETS_CONFIGURATIONS`

Tool to list all secondary dataset configurations for a DataRobot project. Use when you need to retrieve configurations for secondary datasets used in modeling.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return |
| `offset` | integer | No | Number of results to skip for pagination |
| `modelId` | string | No | Filter by ID of the model |
| `projectId` | string | Yes | The project ID |
| `featurelistId` | string | No | Filter by feature list ID of the model |
| `includeDeleted` | string ("false" | "False" | "true" | "True") | No | If "true", include deleted configurations in the results. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Project SHAP Matrices

**Slug:** `DATAROBOT_LIST_PROJECTS_SHAP_MATRICES`

Tool to list SHAP matrix records for a DataRobot project. Use when you need to retrieve SHAP explanations for models in a project. Note: This is a deprecated API endpoint.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. Used for pagination. |
| `project_id` | string | Yes | The ID of the project to list SHAP matrices for. Get this ID from the list_projects or get_project action. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Projects Training Predictions

**Slug:** `DATAROBOT_LIST_PROJECTS_TRAINING_PREDICTIONS`

Tool to list training prediction jobs for a specific DataRobot project. Use when you need to retrieve training predictions that have been generated for models within a project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return |
| `offset` | integer | No | Number of results to skip for pagination |
| `projectId` | string | Yes | Project ID to retrieve training predictions for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Quotas

**Slug:** `DATAROBOT_LIST_QUOTAS`

Tool to list all quotas configured in DataRobot. Use when retrieving quota configurations for resources or users.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `resourceId` | string | No | Resource ID for which quota is configured. |
| `resourceType` | string | No | Resource Type for which quota is configured. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Quota Templates

**Slug:** `DATAROBOT_LIST_QUOTA_TEMPLATES`

Tool to list quota templates in DataRobot. Use when retrieving available quota templates for resource management.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return (1-100). |
| `offset` | integer | No | Number of results to skip for pagination. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Recipe Inputs

**Slug:** `DATAROBOT_LIST_RECIPE_INPUTS`

Tool to list the inputs of a recipe. Use when you need to retrieve the inputs configured for a specific recipe.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `recipe_id` | string | Yes | The ID of the recipe. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Recipe Insights

**Slug:** `DATAROBOT_LIST_RECIPES_INSIGHTS`

Tool to retrieve recipe insights for a specific recipe. Use when analyzing feature characteristics and statistics for a data wrangling recipe.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change and a maximum limit may be imposed without notice. |
| `offset` | integer | No | This many results will be skipped for pagination. |
| `recipeId` | string | Yes | The ID of the recipe to retrieve insights for. |
| `numberOfOperationsToUse` | integer | No | Number of operations, counted from the start of the recipe, to apply before computing insights. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Recipes Preview

**Slug:** `DATAROBOT_LIST_RECIPES_PREVIEW`

Tool to retrieve a wrangling recipe preview. Use when you need to see sample data output from a recipe.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `recipeId` | string | Yes | The ID of the recipe. |
| `numberOfOperationsToUse` | integer | No | Number of operations, counted from the start of the recipe, to apply before generating the preview. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Recommended Settings Choices

**Slug:** `DATAROBOT_LIST_RECOMMENDED_SETTINGS_CHOICES`

Tool to retrieve available setting choices list for an entity type. Use when you need to discover what recommended settings are available before configuring them.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityType` | string ("deployment" | "Deployment" | "DEPLOYMENT") | Yes | Type of the entity to retrieve the recommended settings choices for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Registered Models

**Slug:** `DATAROBOT_LIST_REGISTERED_MODELS`

Tool to list registered models from DataRobot. Use when you need to search, filter, or page through registered models in the model registry.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `stage` | string ("Registered" | "Development" | "Staging" | "Production" | "Archived") | No | Filter to only return models that have versions in the specified stage. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `search` | string | No | Search term to filter registered models by name. |
| `sortKey` | string ("createdAt" | "modifiedAt" | "name") | No | Key to order results by. Options: createdAt, modifiedAt, name. Defaults to modifiedAt. |
| `tagKeys` | string | No | List of tag keys to filter by. Returns registered models matching any of the tag keys. |
| `imported` | boolean | No | Return registered models that contain either imported (true) or non-imported (false) versions. |
| `isGlobal` | boolean | No | Return only global (accessible to all users) or local (accessible only to owner and shared users) registered models. |
| `createdBy` | string | No | Filter by email of the user that created the registered model. |
| `modelKind` | string | No | Return models that contain versions matching a specific format. |
| `tagValues` | string | No | List of tag values to filter by. Returns registered models matching any of the tag values. |
| `tagFilters` | string | No | Comma separated tag pairs (e.g., key1=value1,key2=value2). Only exactly matching registered models are returned. Overrides tagKeys and tagValues if specified. |
| `targetName` | string | No | Filter by the name of the target. |
| `targetType` | string | No | Filter by the type of target(s). |
| `buildStatus` | string ("inProgress" | "complete" | "failed") | No | Filter to only return models with versions having the specified build status. |
| `forChallenger` | boolean | No | Used with compatibleWithModelPackageId to request similar registered models that can be used as challenger models. |
| `sortDirection` | string ("asc" | "desc") | No | Sort direction. Options: asc, desc. Defaults to desc. |
| `createdAtEndTs` | string | No | Filter for registered models created before this timestamp (ISO 8601 format). Defaults to current time. |
| `modifiedAtEndTs` | string | No | Filter for registered models modified before this timestamp (ISO 8601 format). Defaults to current time. |
| `createdAtStartTs` | string | No | Filter for registered models created on or after this timestamp (ISO 8601 format). |
| `modifiedAtStartTs` | string | No | Filter for registered models modified on or after this timestamp (ISO 8601 format). |
| `predictionThreshold` | number | No | Return registered models containing versions matching the prediction threshold for binary classification models. |
| `predictionEnvironmentId` | string | No | Filter registered models by what is supported by the prediction environment. |
| `compatibleWithModelPackageId` | string | No | Return registered models with versions compatible with given model package ID. Matches target.name, target.type, target.classNames, modelKind.isTimeSeries, and modelKind.isMultiseries. |
| `compatibleWithLeaderboardModelId` | string | No | Limit results to registered models containing versions for the leaderboard model with the specified ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
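
`tagFilters` expects exact pairs in the `key1=value1,key2=value2` form and, when present, overrides `tagKeys` and `tagValues`. A small serializer sketch (the helper name is ours):

```python
from typing import Dict


def tag_filters(tags: Dict[str, str]) -> str:
    """Serialize tag pairs into the comma-separated tagFilters format
    (key1=value1,key2=value2); only exact matches are returned by
    the API."""
    return ",".join(f"{key}={value}" for key, value in tags.items())
```

Pass the result as the `tagFilters` input, e.g. `tag_filters({"team": "risk"})` for a single pair.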

### List Registered Model Deployments

**Slug:** `DATAROBOT_LIST_REGISTERED_MODELS_DEPLOYMENTS`

Tool to list deployments associated with a registered model. Use when retrieving deployments for a specific registered model with pagination and filtering support.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `search` | string | No | Filter deployments with name matching this search term. |
| `sortKey` | string ("createdAt" | "label") | No | Key to order results by. Options: createdAt, label. |
| `sortDirection` | string ("asc" | "desc") | No | Sort direction. Options: asc, desc. |
| `registered_model_id` | string | Yes | ID of the registered model to list deployments for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Registered Models Shared Roles

**Slug:** `DATAROBOT_LIST_REGISTERED_MODELS_SHARED_ROLES`

Tool to get a registered model's access control list. Use when you need to view who has access to a specific registered model and their assigned roles.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | No | Filter results to a user, group, or organization with this identifier. |
| `name` | string | No | Filter results to a user, group, or organization with this name. |
| `limit` | integer | No | Maximum number of results to return per page. Defaults to 10. |
| `offset` | integer | No | Number of results to skip for pagination. Defaults to 0 (start from beginning). |
| `registeredModelId` | string | Yes | ID of the registered model to retrieve shared roles for. |
| `shareRecipientType` | string ("user" | "group" | "organization") | No | Type of recipient for shared access. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Registered Model Version Deployments

**Slug:** `DATAROBOT_LIST_REGISTERED_MODELS_VERSIONS_DEPLOYMENTS`

Tool to list all deployments associated with a registered model version. Use when retrieving deployment information for a specific model version.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. |
| `offset` | integer | No | This many results will be skipped (for pagination). |
| `search` | string | No | Filter deployments with name matching search term. |
| `sortKey` | string ("createdAt" | "label") | No | Key to order results by. Options: createdAt or label. |
| `versionId` | string | Yes | ID of the registered model's version. |
| `sortDirection` | string ("asc" | "desc") | No | Sort direction. Options: asc or desc. |
| `registeredModelId` | string | Yes | ID of the registered model. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Registered Model Versions

**Slug:** `DATAROBOT_LIST_REGISTERED_MODEL_VERSIONS`

Tool to list versions of a registered model. Use when you need to view or search through versions of a specific registered model in DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. |
| `stage` | string ("Registered" | "Development" | "Staging" | "Production" | "Archived") | No | Stage filter options for registered model versions. |
| `offset` | integer | No | This many results will be skipped. |
| `search` | string | No | A term to search for in version name, model name, or description. |
| `sortKey` | string ("version" | "modelType" | "status" | "createdAt" | "updatedAt") | No | Sort key options for listing registered model versions. |
| `imported` | boolean | No | If specified, return either imported (true) or non-imported (false) versions (model packages). |
| `createdBy` | string | No | Email of the user that created registered model version to filter by. |
| `modelKind` | string | No | Return versions that match a specific format. |
| `useCaseId` | string | No | If specified, filter versions by use-case id. |
| `targetName` | string | No | Name of the target to filter by. |
| `targetType` | string | No | Type of the target to filter by. |
| `buildStatus` | string ("inProgress" | "complete" | "failed") | No | Build status filter options. |
| `forChallenger` | boolean | No | Used with compatibleWithModelPackageId to request similar versions that can serve as challenger models. For external model packages, similar DataRobot and custom model packages are returned instead of similar external ones. |
| `sortDirection` | string ("asc" | "desc") | No | Sort direction options. |
| `registeredModelId` | string | Yes | ID of the registered model. |
| `predictionThreshold` | number | No | Return versions with the specified prediction threshold used for binary classification models. |
| `predictionEnvironmentId` | string | No | Can be used to filter versions (model packages) by what is supported by the prediction environment. |
| `compatibleWithModelPackageId` | string | No | Return versions compatible with given model package ID. If used, will only return versions that match target.name, target.type, target.classNames (for classification models), modelKind.isTimeSeries, and modelKind.isMultiseries of the specified model package. |
| `compatibleWithLeaderboardModelId` | string | No | If specified, limit results to versions (model packages) of the leaderboard model with the specified ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Scheduled Jobs

**Slug:** `DATAROBOT_LIST_SCHEDULED_JOBS`

Tool to list scheduled deployment batch prediction jobs a user can view. Use when retrieving paginated scheduled jobs from DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of scheduled jobs to return (max 100). Defaults to 20. |
| `offset` | integer | No | The number of scheduled jobs to skip. Defaults to 0. |
| `search` | string | No | Case-insensitive search against scheduled job name or type name. |
| `typeId` | string ("datasetRefresh") | No | Enum for scheduled job type ID. |
| `orderBy` | string | No | The order to sort the scheduled jobs. Defaults to order by last successful run timestamp in descending order. |
| `queryByUser` | string ("createdBy" | "updatedBy") | No | Enum for user field to filter with. |
| `deploymentId` | string | No | Filter by the prediction integration deployment ID. Ignored for non-prediction-integration type IDs. |
| `filterEnabled` | string ("true" | "True" | "false" | "False") | No | Enum for filter enabled values. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
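
The `limit`/`offset` pair supports straightforward pagination. A generic sketch (the `fetch` callback and its return shape are assumptions; wire it to whatever executes the tool and extracts the job list from `data`):

```python
# Yield scheduled jobs page by page until a short page signals the end.
def paginate(fetch, limit=20):
    offset = 0
    while True:
        page = fetch(limit=limit, offset=offset)
        yield from page
        if len(page) < limit:
            break
        offset += limit
```

Usage: `jobs = list(paginate(my_fetch, limit=100))`.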

### List Seat License Allocations

**Slug:** `DATAROBOT_LIST_SEAT_LICENSE_ALLOCATIONS`

Tool to list seat license allocations. Use when you need to retrieve seat license allocation information with optional filtering by IDs, organization, or subjects.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `ids` | string | No | Comma-separated list of seat license allocation IDs to filter by. |
| `limit` | integer | No | Maximum number of results to return. The default may change without notice. |
| `orgId` | string | No | Filter by the ID of the organization the seat licenses have been allocated to. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `subjectIds` | string | No | Comma-separated list of subject IDs that should be part of the seat license allocations. |
| `seatLicenseIds` | string | No | Comma-separated list of seat license IDs to filter by. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
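
The `ids`, `subjectIds`, and `seatLicenseIds` filters expect comma-separated strings, so lists need joining before they are sent. A small sketch (parameter names are the ones in the table; the helper itself is illustrative):

```python
# Join a list of IDs into the comma-separated form these filters expect;
# returns None so an empty list is omitted rather than sent as "".
def csv_param(values):
    return ",".join(values) if values else None

params = {k: v for k, v in {
    "ids": csv_param(["alloc-1", "alloc-2"]),
    "subjectIds": csv_param([]),
    "orgId": "org-123",
}.items() if v is not None}
```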

### List Secure Configurations

**Slug:** `DATAROBOT_LIST_SECURE_CONFIGS`

Tool to retrieve a list of secure configurations in DataRobot. Use when you need to browse or filter available secure configurations for credentials management.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Filter for a specific secure configuration by exact name match. |
| `limit` | integer | No | Maximum number of results to return (1-100). Defaults to 100. |
| `offset` | integer | No | Number of results to skip (for pagination). Defaults to 0. |
| `orderBy` | string ("name" | "-name" | "createdAt" | "-createdAt") | No | Sort order options for secure configurations. |
| `schemas` | string | No | Comma-separated list of schema names to filter on. Example: 'aws,azure,gcp'. |
| `namePart` | string | No | Filter for secure configurations containing this substring in the name. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Secure Config Schemas

**Slug:** `DATAROBOT_LIST_SECURE_CONFIG_SCHEMAS`

Tool to retrieve a list of secure configuration schemas. Use when you need to list available secure configuration schemas with optional filtering by name.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Filter for a specific secure configuration schema name |
| `limit` | integer | No | Maximum number of results to return (1-100) |
| `offset` | integer | No | Number of results to skip for pagination |
| `orderBy` | string ("name" | "-name") | No | Sort order for secure configuration schemas. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Secure Config Values

**Slug:** `DATAROBOT_LIST_SECURE_CONFIGS_VALUES`

Tool to retrieve a list of values for a secure configuration. Use when you need to fetch the key-value pairs associated with a specific secure configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `secureConfigId` | string | Yes | The ID of the secure configuration to retrieve values for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Status Jobs

**Slug:** `DATAROBOT_LIST_STATUS`

Tool to list currently running async status jobs in DataRobot. Use when you need to monitor or inspect the status of asynchronous tasks across the system.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. If 0, all results are returned. |
| `offset` | integer | No | This many results will be skipped. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
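
This endpoint is useful for polling until async work settles. A sketch with an injectable clock and sleep for testability; the per-job `status` field name is an assumption about the payload, not taken from this reference:

```python
import time

# Poll a job-listing callable until nothing is still running, or time out.
def wait_until_idle(list_jobs, poll=5.0, timeout=300.0,
                    clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + timeout
    while clock() < deadline:
        running = [j for j in list_jobs() if j.get("status") == "RUNNING"]
        if not running:
            return True
        sleep(poll)
    return False
```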

### List Tenants Resource Categories

**Slug:** `DATAROBOT_LIST_TENANTS_RESOURCE_CATEGORIES`

Tool to retrieve available resource categories for a specific tenant. Use when you need to understand what resource types are available for a tenant.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `tenantId` | string | Yes | The tenant ID to get resource categories for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Tenant Usage Resources Categories

**Slug:** `DATAROBOT_LIST_TENANT_USAGE_RESOURCES_CATEGORIES`

Tool to get available resource categories for tenant usage. Use when retrieving the list of resource categories that can be used for filtering or analyzing tenant usage data.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Usage Data Exports Supported Events

**Slug:** `DATAROBOT_LIST_USAGE_DATA_EXPORTS_SUPPORTED_EVENTS`

Tool to list supported audit events for usage data export filtering. Use when you need to discover available event types before creating or filtering usage data exports.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Case Custom Applications

**Slug:** `DATAROBOT_LIST_USE_CASE_CUSTOM_APPLICATIONS`

Tool to list custom applications referenced by a DataRobot use case. Use when you need to retrieve custom applications associated with a specific use case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of records to return in the range from 1 to 100. Default 10. |
| `offset` | integer | No | The number of records to skip over. Default 0. |
| `useCaseId` | string | Yes | The ID of the Use Case to query for custom applications. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Case Datasets

**Slug:** `DATAROBOT_LIST_USE_CASE_DATASETS`

Tool to get a list of datasets associated with a DataRobot Use Case. Use when you need to view or filter datasets within a specific Use Case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return per page. Default is 100. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `search` | string | No | Only return datasets with names that match the given search string. |
| `orderBy` | string ("-columnCount" | "-createdAt" | "-createdBy" | "-dataSourceType" | "-datasetSize" | "-datasetSourceType" | "-lastActivity" | "-modifiedAt" | "-modifiedBy" | "-name" | "-rowCount" | "columnCount" | "createdAt" | "createdBy" | "dataSourceType" | "datasetSize" | "datasetSourceType" | "lastActivity" | "modifiedAt" | "modifiedBy" | "name" | "rowCount") | No | Sort order options for use case datasets. |
| `useCaseId` | string | Yes | The ID of the Use Case to retrieve datasets for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Case Deployments

**Slug:** `DATAROBOT_LIST_USE_CASE_DEPLOYMENTS`

Tool to get deployments associated with a use case. Use when retrieving paginated deployments linked to a specific DataRobot use case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `search` | string | No | Only return deployments in the use case with names that match the given string. |
| `orderBy` | string ("createdAt" | "-createdAt" | "createdBy" | "-createdBy" | "lastActivity" | "-lastActivity" | "name" | "-name" | "updatedAt" | "-updatedAt" | "updatedBy" | "-updatedBy") | No | The order to sort the use case deployments. |
| `useCaseId` | string | Yes | The ID of the use case. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Case Notebooks By ID

**Slug:** `DATAROBOT_LIST_USE_CASE_NOTEBOOKS_BY_ID`

Tool to get a list of notebooks associated with a specific Use Case by ID. Use when you need to retrieve notebooks for a particular use case in DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of records to return in the range from 1 to 100. Default 100. |
| `offset` | integer | No | The number of records to skip over. Default 0. |
| `useCaseId` | string | Yes | The ID of the Use Case to retrieve notebooks for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Case Registered Models

**Slug:** `DATAROBOT_LIST_USE_CASE_REGISTERED_MODELS`

Tool to get registered models associated with a use case. Use when retrieving paginated registered models linked to a specific DataRobot use case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of records to return in the range from 1 to 100. |
| `offset` | integer | No | The number of records to skip over. |
| `useCaseId` | string | Yes | The ID of the use case. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Cases

**Slug:** `DATAROBOT_LIST_USE_CASES`

Tool to retrieve a list of Use Cases from DataRobot. Use when you need to browse or filter available Use Cases to select one for further operations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of records to return in the range from 1 to 100. Default 100. |
| `stage` | string | No | Only return Use Cases in the given stage. |
| `offset` | integer | No | The number of records to skip over. Default 0. |
| `search` | string | No | Returns only Use Cases with names that match the given string. |
| `orderBy` | string ("applicationsCount" | "createdAt" | "createdBy" | "customApplicationsCount" | "datasetsCount" | "description" | "filesCount" | "id" | "name" | "notebooksCount" | "playgroundsCount" | "potentialValue" | "projectsCount" | "riskLevel" | "stage" | "updatedAt" | "updatedBy" | "vectorDatabasesCount" | "-applicationsCount" | "-createdAt" | "-createdBy" | "-customApplicationsCount" | "-datasetsCount" | "-description" | "-filesCount" | "-id" | "-name" | "-notebooksCount" | "-playgroundsCount" | "-potentialValue" | "-projectsCount" | "-riskLevel" | "-stage" | "-updatedAt" | "-updatedBy" | "-vectorDatabasesCount") | No | Sort order options for Use Cases. |
| `entityId` | string | No | The id of the entity type that is linked with the Use Case. |
| `createdBy` | string | No | Filter Use Cases to return only those created by the selected user. |
| `projectId` | string | No | Only return Use Cases associated with the given project ID. |
| `riskLevel` | string | No | Only return Use Cases associated with the given risk level. |
| `entityType` | string ("project" | "dataset" | "file" | "notebook" | "application" | "recipe" | "playground" | "vectorDatabase" | "customModelVersion" | "registeredModelVersion" | "deployment" | "customApplication" | "customJob") | No | Entity types that can be linked to a Use Case. |
| `usecaseType` | string ("all" | "general" | "walkthrough") | No | Use Case type filter options. |
| `applicationId` | string | No | Only return Use Cases associated with the given application. |
| `showOrgUseCases` | boolean | No | Defines if the Use Cases available on Organization level should be shown. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
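
`orderBy` values follow the common convention of a leading `-` for descending order. A sketch composing the value (the allowed-field set below is an illustrative subset of the enum in the table):

```python
# Compose an orderBy value; a '-' prefix selects descending order.
def order_by(field, descending=False):
    allowed = {"name", "createdAt", "updatedAt", "stage", "riskLevel",
               "projectsCount"}
    if field not in allowed:
        raise ValueError(f"unsupported sort field: {field}")
    return f"-{field}" if descending else field

params = {"orderBy": order_by("updatedAt", descending=True), "limit": 50}
```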

### List Use Cases All Resources

**Slug:** `DATAROBOT_LIST_USE_CASES_ALL_RESOURCES`

Tool to get a list of references associated with all Use Cases. Use when retrieving all resources (projects, datasets, files, etc.) linked to any Use Case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sort` | string ("entityType" | "lastActivity" | "name" | "updatedAt" | "updatedBy" | "-entityType" | "-lastActivity" | "-name" | "-updatedAt" | "-updatedBy") | No | Sort order options for Use Case references. |
| `limit` | integer | No | Maximum number of results to return. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `orderBy` | string ("entityType" | "lastActivity" | "name" | "updatedAt" | "updatedBy" | "-entityType" | "-lastActivity" | "-name" | "-updatedAt" | "-updatedBy") | No | Sort order options for Use Case references. |
| `recipeStatus` | string | No | Recipe status used for filtering recipes. |
| `daysSinceLastActivity` | integer | No | Only retrieve resources that had activity within the specified number of days. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Cases Applications

**Slug:** `DATAROBOT_LIST_USE_CASES_APPLICATIONS`

Tool to list applications associated with a DataRobot Use Case. Use when retrieving a paginated list of applications for a specific Use Case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip (for pagination). |
| `search` | string | No | Only return applications with names that match the given string. |
| `orderBy` | string ("applicationTemplateType" | "-applicationTemplateType" | "createdAt" | "-createdAt" | "lastActivity" | "-lastActivity" | "name" | "-name" | "source" | "-source" | "updatedAt" | "-updatedAt" | "userId" | "-userId") | No | Sort order options for applications. |
| `useCaseId` | string | Yes | The ID of the Use Case to retrieve applications for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Case Data

**Slug:** `DATAROBOT_LIST_USE_CASES_DATA`

Tool to retrieve a list of datasets and recipes from a specific DataRobot Use Case. Use when you need to browse or filter available datasets and recipes within a use case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `search` | string | No | Only return datasets or recipes in the use case with names that match the given string. |
| `orderBy` | string ("name" | "-name" | "description" | "-description" | "createdBy" | "-createdBy" | "modifiedAt" | "-modifiedAt" | "dataType" | "-dataType" | "dataSourceType" | "-dataSourceType" | "rowCount" | "-rowCount" | "columnCount" | "-columnCount" | "datasetSize" | "-datasetSize") | No | Sorting order which will be applied to data list. Prefix with '-' for descending order. |
| `dataType` | string | No | Data types used for filtering. |
| `useCaseId` | string | Yes | The ID of the use case. |
| `recipeStatus` | string | No | Recipe status used for filtering recipes. |
| `creatorUserId` | string | No | Filter results to display only those created by user(s) identified by the specified ID. |
| `dataSourceType` | string | No | The driver class type of the recipe wrangling engine. |
| `creatorUsername` | string | No | Filter results to display only those created by user(s) identified by the specified username. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Cases Files

**Slug:** `DATAROBOT_LIST_USE_CASES_FILES`

Tool to list catalog files associated with a specific Use Case. Use when retrieving files from a DataRobot Use Case for further analysis or processing.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. The default may change without notice. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `search` | string | No | Only return files in the Use Case with names that match the given string. |
| `orderBy` | string ("createdAt" | "-createdAt" | "createdBy" | "-createdBy" | "dataSourceType" | "-dataSourceType" | "fileSourceType" | "-fileSourceType" | "lastActivity" | "-lastActivity" | "modifiedAt" | "-modifiedAt" | "modifiedBy" | "-modifiedBy" | "name" | "-name" | "numFiles" | "-numFiles") | No | The order to sort the Use Case files. |
| `useCaseId` | string | Yes | The ID of the Use Case to retrieve files from. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Get Use Case Filter Metadata

**Slug:** `DATAROBOT_LIST_USE_CASES_FILTER_METADATA`

Tool to retrieve filtering metadata for a DataRobot Use Case. Use when you need to understand available metrics, model families, and sample sizes for filtering models within a Use Case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `useCaseId` | string | Yes | The ID of the use case to retrieve filter metadata for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Case Models for Comparison

**Slug:** `DATAROBOT_LIST_USE_CASES_MODELS_FOR_COMPARISON`

Tool to get models from projects in a Use Case for comparison. Use when you need to compare models across multiple projects within a Use Case, filter by metrics, or find top-performing models.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `orderBy` | string ("-createdAt" | "createdAt") | No | Sort order for models by project creation date. |
| `samplePct` | string | No | Filter to models trained at the specified sample size percentage(s). |
| `useCaseId` | string | Yes | The ID of the use case. |
| `modelFamily` | string | No | Filter to models that match the specified model family/families. |
| `targetFeature` | string | No | Filter to models from projects built using specified target feature(s). |
| `numberTopModels` | integer | No | Limit results to this many top-scoring models; the default is 1. A value of 0 means no top-scoring models will be returned. |
| `scoringCodeOnly` | boolean | No | Whether to include only models that can be converted to scorable Java code. |
| `binarySortMetric` | string ("AUC" | "Weighted AUC" | "Area Under PR Curve" | "Weighted Area Under PR Curve" | "Kolmogorov-Smirnov" | "Weighted Kolmogorov-Smirnov" | "FVE Binomial" | "Weighted FVE Binomial" | "Gini Norm" | "Weighted Gini Norm" | "LogLoss" | "Weighted LogLoss" | "Max MCC" | "Weighted Max MCC" | "Rate@Top5%" | "Weighted Rate@Top5%" | "Rate@Top10%" | "Weighted Rate@Top10%" | "Rate@TopTenth%" | "RMSE" | "Weighted RMSE" | "F1 Score" | "Weighted F1 Score" | "Precision" | "Weighted Precision" | "Recall" | "Weighted Recall") | No | Binary Classification sort metric options. |
| `trainingDatasetId` | string | No | Filter to models from projects built using specified training dataset ID(s). |
| `binarySortPartition` | string ("validation" | "holdout" | "crossValidation") | No | Partition type for binary classification metric scores. |
| `regressionSortMetric` | string ("FVE Poisson" | "Weighted FVE Poisson" | "FVE Gamma" | "Weighted FVE Gamma" | "FVE Tweedie" | "Weighted FVE Tweedie" | "Gamma Deviance" | "Weighted Gamma Deviance" | "Gini Norm" | "Weighted Gini Norm" | "MAE" | "Weighted MAE" | "MAPE" | "Weighted MAPE" | "SMAPE" | "Weighted SMAPE" | "Poisson Deviance" | "Weighted Poisson Deviance" | "RMSLE" | "RMSE" | "Weighted RMSLE" | "Weighted RMSE" | "R Squared" | "Weighted R Squared" | "Tweedie Deviance" | "Weighted Tweedie Deviance") | No | Regression sort metric options. |
| `includeAllStarredModels` | boolean | No | Whether to include all starred models in filtering output. This means starred models will be included in addition to top-scoring models. |
| `regressionSortPartition` | string ("validation" | "holdout" | "crossValidation") | No | Partition type for regression metric scores. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
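
For binary classification projects, the sort metric and partition travel together. A sketch assembling the request parameters (the values come from the enums above; the defaults chosen here are illustrative, not the tool's):

```python
# Assemble parameters to fetch the top-N binary-classification models
# per project within a use case.
def comparison_params(use_case_id, metric="LogLoss",
                      partition="crossValidation", top_n=3,
                      scoring_code_only=False):
    return {
        "useCaseId": use_case_id,
        "binarySortMetric": metric,
        "binarySortPartition": partition,
        "numberTopModels": top_n,
        "scoringCodeOnly": scoring_code_only,
    }

params = comparison_params("USE_CASE_ID", metric="AUC", partition="holdout")
```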

### List Use Cases Notebooks

**Slug:** `DATAROBOT_LIST_USE_CASES_NOTEBOOKS`

Tool to get a list of notebooks from all Use Cases. Use when you need to retrieve or browse notebooks across all use cases in DataRobot.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of records to return in the range from 1 to 100. Default 100. |
| `offset` | integer | No | The number of records to skip over. Default 0. |
| `includeName` | boolean | No | Include use case name in the response. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Cases Playgrounds

**Slug:** `DATAROBOT_LIST_USE_CASES_PLAYGROUNDS`

Tool to list playgrounds associated with a Use Case. Use when retrieving playground information for a specific use case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of records to return in the range from 1 to 100. |
| `offset` | integer | No | The number of records to skip over. |
| `useCaseId` | string | Yes | The ID of the Use Case to retrieve playgrounds for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Case Projects

**Slug:** `DATAROBOT_LIST_USE_CASES_PROJECTS`

Tool to get a list of projects associated with a use case. Use when retrieving paginated projects linked to a specific DataRobot use case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `search` | string | No | Returns only projects with names that match the given string. |
| `orderBy` | string ("createdAt" | "-createdAt" | "createdBy" | "-createdBy" | "dataset" | "-dataset" | "featureCount" | "-featureCount" | "fullName" | "-fullName" | "lastActivity" | "-lastActivity" | "models" | "-models" | "name" | "-name" | "projectId" | "-projectId" | "rowCount" | "-rowCount" | "target" | "-target" | "targetType" | "-targetType" | "timeAware" | "-timeAware" | "updatedAt" | "-updatedAt" | "updatedBy" | "-updatedBy") | No | The order to sort the use case projects. |
| `useCaseId` | string | Yes | The ID of the use case. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Case Resources

**Slug:** `DATAROBOT_LIST_USE_CASES_RESOURCES`

Tool to get a list of the references associated with a DataRobot use case. Use when retrieving resources linked to a specific use case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return. |
| `offset` | integer | No | Number of results to skip for pagination. |
| `orderBy` | string ("entityType" | "-entityType" | "lastActivity" | "-lastActivity" | "name" | "-name" | "updatedAt" | "-updatedAt" | "updatedBy" | "-updatedBy") | No | Sort order options for use case resources. |
| `useCaseId` | string | Yes | The ID of the use case to retrieve resources for. |
| `recipeStatus` | string | No | Recipe status used for filtering recipes. |
| `daysSinceLastActivity` | integer | No | Only retrieve resources that had activity within the specified number of days. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Cases Shared Roles

**Slug:** `DATAROBOT_LIST_USE_CASES_SHARED_ROLES`

Tool to get a use case's access control list. Use when you need to see who has access to a specific use case and their roles.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | No | Optional user ID to filter access control information for a specific user. |
| `limit` | integer | No | Maximum number of records to return per page. Defaults to 100. |
| `offset` | integer | No | Number of records to skip for pagination. Defaults to 0. |
| `useCaseId` | string | Yes | Unique identifier of the use case to retrieve access control for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Case Vector Databases

**Slug:** `DATAROBOT_LIST_USE_CASES_VECTOR_DATABASES`

Tool to retrieve a list of vector databases associated with a DataRobot Use Case. Use when you need to browse or filter vector databases linked to a specific use case.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | At most this many results are returned. The default may change without notice. |
| `offset` | integer | No | Number of results to skip. |
| `useCaseId` | string | Yes | The ID of the Use Case to retrieve vector databases for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Use Cases With Shortened Info

**Slug:** `DATAROBOT_LIST_USE_CASES_WITH_SHORTENED_INFO`

Tool to retrieve a list of Use Cases with abbreviated content from DataRobot. Use when you need to quickly browse Use Cases without full metadata.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sort` | string ("applicationsCount" | "createdAt" | "createdBy" | "customApplicationsCount" | "datasetsCount" | "description" | "filesCount" | "id" | "name" | "notebooksCount" | "playgroundsCount" | "potentialValue" | "projectsCount" | "riskLevel" | "stage" | "updatedAt" | "updatedBy" | "vectorDatabasesCount" | "-applicationsCount" | "-createdAt" | "-createdBy" | "-customApplicationsCount" | "-datasetsCount" | "-description" | "-filesCount" | "-id" | "-name" | "-notebooksCount" | "-playgroundsCount" | "-potentialValue" | "-projectsCount" | "-riskLevel" | "-stage" | "-updatedAt" | "-updatedBy" | "-vectorDatabasesCount") | No | Sort order options for Use Cases (deprecated; use `orderBy` instead). |
| `limit` | integer | No | The number of records to return in the range from 1 to 100. Default 100. |
| `stage` | string | No | Only return Use Cases in the given stage. |
| `offset` | integer | No | The number of records to skip over. Default 0. |
| `search` | string | No | Returns only Use Cases with names that match the given string. |
| `orderBy` | string ("applicationsCount" | "createdAt" | "createdBy" | "customApplicationsCount" | "datasetsCount" | "description" | "filesCount" | "id" | "name" | "notebooksCount" | "playgroundsCount" | "potentialValue" | "projectsCount" | "riskLevel" | "stage" | "updatedAt" | "updatedBy" | "vectorDatabasesCount" | "-applicationsCount" | "-createdAt" | "-createdBy" | "-customApplicationsCount" | "-datasetsCount" | "-description" | "-filesCount" | "-id" | "-name" | "-notebooksCount" | "-playgroundsCount" | "-potentialValue" | "-projectsCount" | "-riskLevel" | "-stage" | "-updatedAt" | "-updatedBy" | "-vectorDatabasesCount") | No | Order by options for Use Cases. |
| `entityId` | string | No | The id of the entity type that is linked with the Use Case. |
| `createdBy` | string | No | Filter Use Cases to return only those created by the selected user. |
| `projectId` | string | No | Only return Use Cases associated with the given project ID. |
| `riskLevel` | string | No | Only return Use Cases associated with the given risk level. |
| `entityType` | string ("project" | "dataset" | "file" | "notebook" | "application" | "recipe" | "playground" | "vectorDatabase" | "customModelVersion" | "registeredModelVersion" | "deployment" | "customApplication" | "customJob") | No | Entity types that can be linked to a Use Case. |
| `usecaseType` | string ("all" | "general" | "walkthrough") | No | Use Case type filter options. |
| `applicationId` | string | No | Only return Use Cases associated with the given application ID. |
| `showOrgUseCases` | boolean | No | Whether to include Use Cases available at the organization level. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
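
Because `sort` is deprecated in favor of `orderBy`, a small validating builder can keep requests on the supported path. A sketch under the assumption (from the enum above) that prefixing a field with `-` selects descending order:

```python
# Allowed orderBy fields, copied from the parameter table above.
SORT_FIELDS = {
    "applicationsCount", "createdAt", "createdBy", "customApplicationsCount",
    "datasetsCount", "description", "filesCount", "id", "name",
    "notebooksCount", "playgroundsCount", "potentialValue", "projectsCount",
    "riskLevel", "stage", "updatedAt", "updatedBy", "vectorDatabasesCount",
}

def list_use_cases_payload(order_by: str = "createdAt", descending: bool = True,
                           limit: int = 100, offset: int = 0) -> dict:
    """Build a DATAROBOT_LIST_USE_CASES_WITH_SHORTENED_INFO payload."""
    field = order_by.lstrip("-")
    if field not in SORT_FIELDS:
        raise ValueError(f"unsupported orderBy field: {field}")
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    return {
        "orderBy": ("-" if descending else "") + field,
        "limit": limit,
        "offset": offset,
    }
```

Validating client-side avoids a round trip that would otherwise fail on an unknown sort field.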

### List User Blueprints

**Slug:** `DATAROBOT_LIST_USER_BLUEPRINTS`

Tool to list user blueprints from DataRobot. Use when retrieving custom blueprints created by users for model building.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The max number of results to return (1-100). |
| `offset` | integer | No | The number of results to skip (for pagination). |
| `projectId` | string | No | The ID of the project; filters blueprints by their original project ID. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List User Blueprints Input Types

**Slug:** `DATAROBOT_LIST_USER_BLUEPRINTS_INPUT_TYPES`

Tool to retrieve available input types for user blueprints. Use when you need to understand what input types are supported for custom blueprint creation.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List User Blueprints Tasks

**Slug:** `DATAROBOT_LIST_USER_BLUEPRINTS_TASKS`

Tool to retrieve tasks for blueprint construction in DataRobot. Use when you need to view available tasks for creating or modifying blueprints, optionally filtered by project, blueprint, or user blueprint ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `projectId` | string | No | The project ID to use for task retrieval. Filters tasks relevant to a specific project. |
| `blueprintId` | string | No | The blueprint ID to use for task retrieval. Filters tasks relevant to a specific blueprint. |
| `userBlueprintId` | string | No | The user blueprint ID to use for task retrieval. Filters tasks relevant to a specific user blueprint. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List User Groups

**Slug:** `DATAROBOT_LIST_USER_GROUPS`

Tool to list user groups. Use when you need to retrieve DataRobot user groups with optional filtering and pagination.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return; must be at least 1. |
| `orgId` | string | No | Filter by organization ID; use 'unowned' for groups without an org. |
| `offset` | integer | No | Number of results to skip; must be non-negative. |
| `userId` | string | No | Filter groups by membership of this user ID. |
| `orderBy` | string ("name" | "-name" | "orgName" | "-orgName" | "accessRoleName" | "-accessRoleName") | No | Sort order of the results. |
| `namePart` | string | No | Substring to filter group names. |
| `excludeUserMembership` | boolean | No | Exclude groups containing the specified user when using namePart filter. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List User Notifications

**Slug:** `DATAROBOT_LIST_USER_NOTIFICATIONS`

Tool to list user notifications in DataRobot. Use when you need to retrieve a paginated list of notifications for the authenticated user, optionally filtering by read/unread status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of records to return (1-1000). |
| `isRead` | boolean | No | When provided, returns only read (true) or unread (false) notifications. If omitted, returns all notifications regardless of read status. |
| `offset` | integer | No | The number of records to skip over for pagination. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Users in Group

**Slug:** `DATAROBOT_LIST_USERS_IN_GROUP`

Lists users in a specific DataRobot user group. Returns paginated user membership details including username, status, and organization. Use the List User Groups action first to obtain a valid groupId. Supports filtering by name, active status, and admin status, with sorting options.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of results to return per page (1-100). Defaults to 25. |
| `offset` | integer | No | Number of results to skip for pagination. Defaults to 0 (start from beginning). |
| `groupId` | string | Yes | Identifier of the user group to list members for. Obtain valid groupIds from the List User Groups action. |
| `isAdmin` | boolean | No | Filter by admin status: true for admins, false for non-admins. |
| `orderBy` | string ("username" | "-username" | "userGroup" | "-userGroup" | "lastName" | "-lastName" | "firstName" | "-firstName" | "status" | "-status" | "expirationDate" | "-expirationDate") | No | Field to sort by; prefix with '-' for descending. |
| `isActive` | boolean | No | Filter by account activation status: true for active, false for inactive. |
| `namePart` | string | No | Only include users whose username contains this substring. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Value Trackers

**Slug:** `DATAROBOT_LIST_VALUE_TRACKERS`

Tool to list Value Trackers that the requesting user has access to in DataRobot. Use when you need to retrieve a catalog of Value Trackers for review or selection.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of records to return in the range from 1 to 100. Default 100. |
| `stage` | string ("ideation" | "queued" | "dataPrepAndModeling" | "validatingAndDeploying" | "inProduction" | "retired" | "onHold") | No | Value Tracker stage filter options. |
| `offset` | integer | No | The number of records to skip over. Default 0. |
| `orderBy` | string ("businessImpact" | "-businessImpact" | "potentialValue" | "-potentialValue" | "realizedValue" | "-realizedValue" | "feasibility" | "-feasibility" | "stage" | "-stage") | No | Sort order options for Value Trackers. |
| `namePart` | string | No | Only return Value Trackers with names that match the given string. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Value Trackers Activities

**Slug:** `DATAROBOT_LIST_VALUE_TRACKERS_ACTIVITIES`

Tool to retrieve the activities of a value tracker. Use when you need to view the history of changes and events for a specific value tracker.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | The number of records to return in the range from 1 to 100. Default 100. |
| `offset` | integer | No | The number of records to skip over. Default 0. |
| `valueTrackerId` | string | Yes | The id of the value tracker. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Value Tracker Attachments

**Slug:** `DATAROBOT_LIST_VALUE_TRACKERS_ATTACHMENTS`

Tool to list resources attached to a DataRobot value tracker. Use when you need to retrieve datasets, deployments, models, or other resources associated with a specific value tracker.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `type` | string ("dataset" | "modelingProject" | "deployment" | "customModel" | "modelPackage" | "application") | No | Type of value tracker attachment. |
| `limit` | integer | No | Maximum number of results to return |
| `offset` | integer | No | Number of results to skip for pagination |
| `valueTrackerId` | string | Yes | The ID of the value tracker to retrieve attachments for |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Value Trackers Shared Roles

**Slug:** `DATAROBOT_LIST_VALUE_TRACKERS_SHARED_ROLES`

Tool to get a value tracker's access control list. Use when you need to view who has access to a value tracker and their permission levels.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | No | Only return roles for a user, group or organization with this identifier. |
| `name` | string | No | Only return roles for a user, group or organization with this name. |
| `limit` | integer | No | At most this many results are returned per page. Defaults to 10. |
| `offset` | integer | No | This many results will be skipped for pagination. Defaults to 0. |
| `valueTrackerId` | string | Yes | The ID of the value tracker. |
| `shareRecipientType` | string ("user" | "group" | "organization") | No | Describes the type of share recipient. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Value Tracker Value Templates

**Slug:** `DATAROBOT_LIST_VALUE_TRACKER_VALUE_TEMPLATES`

Tool to list available value tracker templates in DataRobot. Use when you need to discover available value tracker templates for classification or regression models.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Value Tracker Value Templates Calculation

**Slug:** `DATAROBOT_LIST_VALUE_TRACKER_VALUE_TEMPLATES_CALCULATION`

Tool to calculate the value of a template with the given template parameters. Use when you need to estimate the value and savings of a DataRobot model based on accuracy improvements and decision costs.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `targetValue` | number | No | Target value. Required when templateType is 'regression'. |
| `templateType` | string ("classification" | "regression") | Yes | The value template type to calculate. Must be either 'classification' or 'regression'. |
| `decisionsCount` | integer | Yes | Estimated number of decisions per year. |
| `accuracyImprovement` | number | Yes | Accuracy improvement as a decimal value (e.g., 0.15 for 15% improvement). |
| `incorrectDecisionCost` | number | No | Estimated cost of an individual incorrect decision. Required when templateType is 'classification'. |
| `incorrectDecisionsCount` | integer | No | Estimated number of incorrect decisions per year. Required when templateType is 'classification'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
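
The conditional requirements above (classification needs the incorrect-decision fields, regression needs `targetValue`) are easy to enforce before calling the tool. A hypothetical helper, not part of any DataRobot SDK:

```python
def calculation_payload(template_type: str, decisions_count: int,
                        accuracy_improvement: float, **extra) -> dict:
    """Build a VALUE_TEMPLATES_CALCULATION payload, checking the
    conditionally required fields documented in the table above."""
    if template_type == "classification":
        for field in ("incorrectDecisionCost", "incorrectDecisionsCount"):
            if field not in extra:
                raise ValueError(f"{field} is required for classification templates")
    elif template_type == "regression":
        if "targetValue" not in extra:
            raise ValueError("targetValue is required for regression templates")
    else:
        raise ValueError("templateType must be 'classification' or 'regression'")
    payload = {
        "templateType": template_type,
        "decisionsCount": decisions_count,
        "accuracyImprovement": accuracy_improvement,  # e.g. 0.15 for 15%
    }
    payload.update(extra)
    return payload
```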

### List VCS Git Providers

**Slug:** `DATAROBOT_LIST_VCS_GIT_PROVIDERS`

Tool to list all VCS Git providers configured in DataRobot. Use when you need to view available Git provider integrations.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Version

**Slug:** `DATAROBOT_LIST_VERSION`

Tool to retrieve DataRobot API version information. Use when you need to check the API version or verify compatibility.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### List Wrangling Recipes

**Slug:** `DATAROBOT_LIST_WRANGLING_RECIPES`

Tool to list all available wrangling recipes. Use when fetching paginated recipes for data wrangling.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `limit` | integer | No | Maximum number of recipes to return. |
| `offset` | integer | No | Number of recipes to skip for pagination. |
| `search` | string | No | Return recipes with names containing this string. |
| `status` | string ("draft" | "preview" | "published") | No | Filter by recipe publication status. |
| `dialect` | string ("snowflake" | "bigquery" | "databricks" | "spark" | "postgres" | "spark-feature-discovery") | No | SQL dialect for the recipe. |
| `orderBy` | string ("recipeId" | "-recipeId" | "name" | "-name" | "description" | "-description" | "dialect" | "-dialect" | "status" | "-status" | "recipeType" | "-recipeType" | "createdAt" | "-createdAt" | "createdBy" | "-createdBy" | "updatedAt" | "-updatedAt" | "updatedBy" | "-updatedBy") | No | Attribute to sort by; prefix with '-' for descending. |
| `recipeType` | string ("SQL" | "WRANGLING" | "FEATURE_DISCOVERY") | No | Filter by recipe workflow type. |
| `creatorUserId` | string | No | Only recipes created by this user ID. |
| `creatorUsername` | string | No | Only recipes created by this username. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Move Notebook Filesystem Object

**Slug:** `DATAROBOT_MOVE_NOTEBOOK_FILESYSTEM_OBJECT`

Tool to move a file or directory within a DataRobot notebook session filesystem. Use when you need to relocate files or directories to a different path within the same notebook session.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The notebook session ID (24-character hex string ObjectId). Can use notebook ID from DATAROBOT_LIST_NOTEBOOKS. |
| `source` | string | Yes | The source file or directory path to move |
| `destination` | string | Yes | The destination path where the file/directory should be moved to |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Pause or Unpause Autopilot

**Slug:** `DATAROBOT_PAUSE_OR_UNPAUSE_AUTOPILOT`

Tool to pause or unpause Autopilot for a project. Use when you need to stop or resume automated modeling jobs after confirming the project ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `command` | string ("start" | "stop") | Yes | Command to control Autopilot: 'start' to unpause and run queued jobs; 'stop' to pause Autopilot. |
| `project_id` | string | Yes | The ID of the project for which to pause or unpause Autopilot. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Restore Notebook Revision

**Slug:** `DATAROBOT_RESTORE_NOTEBOOK_REVISION`

Tool to restore a DataRobot notebook to a specific revision. Use when you need to revert a notebook to a previous state identified by its revision ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `notebookId` | string | Yes | The ID of the notebook to restore. Must be a valid 24-character hex ObjectId. |
| `revisionId` | string | Yes | The ID of the revision to restore from. Must be a valid 24-character hex ObjectId. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Start DataRobot Autopilot

**Slug:** `DATAROBOT_START_MODELING`

Tool to start the data modeling (Autopilot) process for a DataRobot project. Use after uploading data and configuring the project to initiate modeling.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `settings` | object | No | Modeling (Autopilot) settings. Any valid fields from the 'Aim' schema are allowed. Only commonly used fields are defined here; any additional valid fields are passed through as provided. |
| `project_id` | string | Yes | Identifier of the DataRobot project to start modeling on. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
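
A minimal sketch of a start-modeling payload. The `settings` fields shown (`target`, `mode`) are illustrative examples of 'Aim'-schema fields, not a guaranteed or exhaustive list; consult the DataRobot Aim schema for the fields your project needs:

```python
def start_modeling_payload(project_id: str, target: str, mode: str = "quick") -> dict:
    """Build a DATAROBOT_START_MODELING payload (Aim fields are assumptions)."""
    if not project_id:
        raise ValueError("project_id is required")
    return {
        "project_id": project_id,
        "settings": {
            "target": target,  # column to predict (assumed Aim field)
            "mode": mode,      # Autopilot mode (assumed Aim field)
        },
    }

payload = start_modeling_payload("<projectId>", target="churn")
```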

### Start OCR Job Resource

**Slug:** `DATAROBOT_START_OCR_JOB_RESOURCE`

Start an OCR job resource for optical character recognition processing. Use after creating an OCR job resource to initiate the actual OCR processing.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `jobResourceId` | string | Yes | OCR job resource ID (24-character hex string). The job resource must be created before starting. Obtain from OCR job resource creation endpoints. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Stop Notebook Runtime

**Slug:** `DATAROBOT_STOP_NOTEBOOK_RUNTIME`

Tool to stop a running DataRobot notebook runtime. Use when you need to terminate an active notebook session to free up resources or end work.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID of the notebook runtime to stop. Obtain from DATAROBOT_LIST_USE_CASE_NOTEBOOKS_BY_ID or other notebook listing actions. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Test External Data Store Connection

**Slug:** `DATAROBOT_TEST_EXTERNAL_DATA_STORES_CONNECTION`

Tool to test an external data store connection in DataRobot. Use when you need to validate that a data store is properly configured and accessible with the provided credentials.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user` | string | No | Username for data store authentication. Alternative to credentialId or credentialData. |
| `password` | string | No | Password for data store authentication. Alternative to credentialId or credentialData. |
| `dataStoreId` | string | Yes | ID of the data store to test connection for. |
| `useKerberos` | boolean | No | Whether to use Kerberos for data store authentication. |
| `credentialId` | string | No | ID of the stored credentials to use for testing the connection. If not provided, credentialData, user, or password may be used instead. |
| `credentialData` | object | No | Credential data object for testing the connection. Must include 'credentialType' field. Use this instead of credentialId to test with inline credentials. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Access Role

**Slug:** `DATAROBOT_UPDATE_ACCESS_ROLE`

Tool to update a custom Access Role. Use when you need to change the name or permissions of an existing custom role.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | New role name; must be unique within the organization. |
| `role_id` | string | Yes | ID of the Access Role to update. |
| `permissions` | object | No | Mapping of entity codes to permission flags; include only flags to update. Use entity codes as keys, e.g., {'PROJECT': {'read': True, 'write': True}}. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
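
The `permissions` mapping only needs to carry the flags you want to change. An illustrative payload (the entity codes and flag names follow the example in the table above; `<roleId>` is a placeholder):

```python
# Sketch of an DATAROBOT_UPDATE_ACCESS_ROLE input: only the listed flags
# are updated, everything else on the role is left untouched.
payload = {
    "role_id": "<roleId>",
    "name": "Project Editors",
    "permissions": {
        "PROJECT": {"read": True, "write": True},  # grant read + write
        "DEPLOYMENT": {"read": True},              # grant read only
    },
}
```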

### Update Accuracy Metrics Config

**Slug:** `DATAROBOT_UPDATE_ACCURACY_METRICS_CONFIG`

Tool to update which accuracy metrics are returned by the accuracy endpoint for a deployment. Use after deployment is live to customize returned metrics.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | array | Yes | List of accuracy metrics to include; must contain between 1 and 15 items. Allowed values: AUC, Accuracy, Balanced Accuracy, F1, FPR, FVE Binomial, FVE Gamma, FVE Multinomial, FVE Poisson, FVE Tweedie, Gamma Deviance, Gini Norm, Kolmogorov-Smirnov, LogLoss, MAE, MAPE, MCC, NPV, PPV, Poisson Deviance, R Squared, RMSE, RMSLE, Rate@Top10%, Rate@Top5%, TNR, TPR, Tweedie Deviance, WGS84 MAE, WGS84 RMSE |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
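
The 1-to-15-item bound and the fixed metric vocabulary above can both be checked before sending. A hypothetical client-side validator:

```python
# Metric names copied from the `data` description above.
ALLOWED_METRICS = {
    "AUC", "Accuracy", "Balanced Accuracy", "F1", "FPR", "FVE Binomial",
    "FVE Gamma", "FVE Multinomial", "FVE Poisson", "FVE Tweedie",
    "Gamma Deviance", "Gini Norm", "Kolmogorov-Smirnov", "LogLoss", "MAE",
    "MAPE", "MCC", "NPV", "PPV", "Poisson Deviance", "R Squared", "RMSE",
    "RMSLE", "Rate@Top10%", "Rate@Top5%", "TNR", "TPR", "Tweedie Deviance",
    "WGS84 MAE", "WGS84 RMSE",
}

def accuracy_config_payload(deployment_id: str, metrics: list[str]) -> dict:
    """Build an UPDATE_ACCURACY_METRICS_CONFIG payload, enforcing the
    documented 1-15 item count and allowed metric names."""
    if not 1 <= len(metrics) <= 15:
        raise ValueError("between 1 and 15 metrics are required")
    unknown = set(metrics) - ALLOWED_METRICS
    if unknown:
        raise ValueError(f"unsupported metrics: {sorted(unknown)}")
    return {"deploymentId": deployment_id, "data": list(metrics)}
```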

### Upload CSV Data to Batch Job

**Slug:** `DATAROBOT_UPDATE_BATCH_JOBS_CSV_UPLOAD`

Tool to stream CSV data to a DataRobot batch job. Use when you need to upload CSV data for batch predictions. The batch job must have been created with localFile intake settings before using this endpoint.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `csvData` | string | Yes | CSV data as a string. Must include header row with feature names. Each subsequent row contains feature values for scoring. |
| `batchJobId` | string | Yes | ID of the batch job to upload CSV data to |
| `partNumber` | integer | No | The number of which CSV part is being uploaded when using multipart upload. Defaults to 0 for single-part uploads. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
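
When the CSV is large, `partNumber` lets you send it in pieces. A sketch that splits on row boundaries, under the assumption that parts are concatenated server-side in part order (so only part 0 carries the header row):

```python
def multipart_payloads(batch_job_id: str, feature_names: list[str],
                       rows: list[list], rows_per_part: int) -> list[dict]:
    """Build one UPDATE_BATCH_JOBS_CSV_UPLOAD payload per CSV part.
    Assumes parts are concatenated in partNumber order, so the header
    row is emitted only in part 0."""
    chunks = [rows[i:i + rows_per_part] for i in range(0, len(rows), rows_per_part)]
    payloads = []
    for n, chunk in enumerate(chunks):
        lines = [",".join(feature_names)] if n == 0 else []
        lines += [",".join(str(v) for v in row) for row in chunk]
        payloads.append({
            "batchJobId": batch_job_id,
            "csvData": "\n".join(lines),
            "partNumber": n,
        })
    return payloads
```

Splitting on row boundaries (rather than at a fixed byte offset) avoids tearing a record across two parts.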

### Update Batch Monitoring Job Definition

**Slug:** `DATAROBOT_UPDATE_BATCH_MONITORING_JOB_DEFINITION`

Tool to update a Batch Monitoring job definition. Use when you need to modify settings like name, schedule, or monitoring configuration for an existing batch monitoring job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | A human-readable name for the definition, must be unique across organisations. |
| `enabled` | boolean | No | If this job definition is enabled as a scheduled job. Optional if no schedule is supplied. |
| `modelId` | string | No | ID of the leaderboard model used by the job to process the predictions dataset. |
| `schedule` | object | No | Cron-like schedule configuration |
| `chunkSize` | string | No | Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes. |
| `csvSettings` | object | No | CSV configuration settings |
| `abortOnError` | boolean | No | Should this job abort if too many errors are encountered. |
| `batchJobType` | string ("monitoring" | "prediction") | No | Type of batch job |
| `deploymentId` | string | No | ID of the deployment used by the job to process the predictions dataset. |
| `thresholdLow` | number | No | Compute explanations for predictions below this threshold. |
| `numConcurrent` | integer | No | Number of simultaneous requests to run against the prediction instance. |
| `pinnedModelId` | string | No | Specify a model ID used for scoring. |
| `thresholdHigh` | number | No | Compute explanations for predictions above this threshold. |
| `intakeSettings` | object | No | The intake option configured for this job. |
| `modelPackageId` | string | No | ID of the model package from the registry used by the job to process the predictions dataset. |
| `outputSettings` | object | No | The output option configured for this job. |
| `jobDefinitionId` | string | Yes | ID of the Batch Monitoring job definition to update. |
| `maxExplanations` | integer | No | Number of explanations requested. Will be ordered by strength. |
| `monitoringColumns` | object | No | Column names mapping for monitoring |
| `skipDriftTracking` | boolean | No | Skip drift tracking for this job. |
| `passthroughColumns` | array | No | Pass through columns from the original dataset. |
| `predictionInstance` | object | No | Override the default prediction instance from the deployment |
| `timeseriesSettings` | object | No | Time Series settings to include if this job is a Time Series job. |
| `predictionThreshold` | number | No | Threshold is the point that sets the class boundary for a predicted value. The model classifies an observation below the threshold as FALSE, and an observation above the threshold as TRUE. In other words, DataRobot automatically assigns the positive class label to any prediction exceeding the threshold. This value can be set between 0.0 and 1.0. |
| `columnNamesRemapping` | string | No | Remap (rename or remove columns from) the output from this job. |
| `explanationAlgorithm` | string ("shap" | "xemp") | No | Algorithm for calculating prediction explanations |
| `includeProbabilities` | boolean | No | Include probabilities for all classes. |
| `explanationClassNames` | array | No | Sets a list of selected class names for which corresponding explanations are returned in each row. This setting is mutually exclusive with numTopClasses. If neither parameter is specified, the default setting is numTopClasses=1. |
| `monitoringAggregation` | object | No | Aggregation policy for monitoring jobs |
| `monitoringBatchPrefix` | string | No | Name of the batch to create with this job. |
| `passthroughColumnsSet` | string | No | Pass through all columns from the original dataset. |
| `includePredictionStatus` | boolean | No | Include prediction status column in the output. |
| `explanationNumTopClasses` | integer | No | Sets the number of most probable (top predicted) classes for which corresponding explanations are returned in each row. This setting is mutually exclusive with classNames. If neither parameter is specified, the default setting is numTopClasses=1. |
| `monitoringOutputSettings` | object | No | Output settings for monitoring jobs |
| `predictionWarningEnabled` | boolean | No | Enable prediction warnings. |
| `secondaryDatasetsConfigId` | string | No | Configuration id for secondary datasets to use when making a prediction. |
| `includeProbabilitiesClasses` | array | No | Include only probabilities for these specific class names. |
| `disableRowLevelErrorHandling` | boolean | No | Skip row by row error handling. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
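The `schedule` field is a cron-like object; a sketch of a partial update that renames the definition and sets a daily schedule (the schedule key names follow DataRobot's cron-like convention and should be verified against your API version):

```python
def build_monitoring_update(job_definition_id, name=None, schedule=None):
    """Collect only the fields being changed; jobDefinitionId is the
    only required input for this tool."""
    payload = {"jobDefinitionId": job_definition_id}
    if name is not None:
        payload["name"] = name
    if schedule is not None:
        payload["enabled"] = True  # a schedule only fires when enabled
        payload["schedule"] = schedule
    return payload

daily_noon = {  # cron-like: every day at 12:00
    "minute": [0],
    "hour": [12],
    "dayOfMonth": ["*"],
    "month": ["*"],
    "dayOfWeek": ["*"],
}
payload = build_monitoring_update(
    "abc123",  # hypothetical definition ID
    name="daily-drift-check",
    schedule=daily_noon,
)
```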

### Update Batch Prediction Job Definition

**Slug:** `DATAROBOT_UPDATE_BATCH_PREDICTION_JOB_DEFINITION`

Tool to update an existing Batch Prediction job definition. Use when you need to modify job parameters, scheduling, or configuration for batch scoring operations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | A human-readable name for the definition; must be unique across organisations. |
| `enabled` | boolean | No | If this job definition is enabled as a scheduled job. Optional if no schedule is supplied. |
| `modelId` | string | No | ID of the leaderboard model used by the job to process the predictions dataset. |
| `schedule` | object | No | Cron-like schedule configuration |
| `chunkSize` | string | No | Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes. |
| `csvSettings` | object | No | CSV configuration settings |
| `abortOnError` | boolean | No | Should this job abort if too many errors are encountered. |
| `batchJobType` | string ("monitoring" | "prediction") | No | Batch job type. |
| `deploymentId` | string | No | ID of the deployment used by the job to process the predictions dataset. |
| `thresholdLow` | number | No | Compute explanations for predictions below this threshold. |
| `numConcurrent` | integer | No | Number of simultaneous requests to run against the prediction instance. |
| `pinnedModelId` | string | No | Specify a model ID used for scoring. |
| `thresholdHigh` | number | No | Compute explanations for predictions above this threshold. |
| `intakeSettings` | object | No | The intake option configured for this job. |
| `modelPackageId` | string | No | ID of the model package from the registry used by the job to process the predictions dataset. |
| `outputSettings` | object | No | The output option configured for this job. |
| `jobDefinitionId` | string | Yes | ID of the Batch Prediction job definition to update. |
| `maxExplanations` | integer | No | Number of explanations requested. Will be ordered by strength. |
| `skipDriftTracking` | boolean | No | Skip drift tracking for this job. |
| `passthroughColumns` | array | No | Pass through columns from the original dataset (max 100 columns). |
| `predictionInstance` | object | No | Override the default prediction instance from the deployment |
| `timeseriesSettings` | object | No | Time Series settings to include if this job is a Time Series job. |
| `predictionThreshold` | number | No | Threshold is the point that sets the class boundary for a predicted value. The model classifies an observation below the threshold as FALSE, and an observation above the threshold as TRUE. In other words, DataRobot automatically assigns the positive class label to any prediction exceeding the threshold. This value can be set between 0.0 and 1.0. |
| `columnNamesRemapping` | string | No | Remap (rename or remove columns from) the output from this job. |
| `explanationAlgorithm` | string ("shap" | "xemp") | No | Which algorithm will be used to calculate prediction explanations. |
| `includeProbabilities` | boolean | No | Include probabilities for all classes. |
| `explanationClassNames` | array | No | Sets a list of selected class names for which corresponding explanations are returned in each row (1-100 items). This setting is mutually exclusive with numTopClasses. If neither parameter is specified, the default setting is numTopClasses=1. |
| `monitoringBatchPrefix` | string | No | Name of the batch to create with this job. |
| `passthroughColumnsSet` | string ("all") | No | Pass through all columns from the original dataset. |
| `includePredictionStatus` | boolean | No | Include prediction status column in the output. |
| `explanationNumTopClasses` | integer | No | Sets the number of most probable (top predicted) classes for which corresponding explanations are returned in each row. This setting is mutually exclusive with classNames. If neither parameter is specified, the default setting is numTopClasses=1. |
| `predictionWarningEnabled` | boolean | No | Enable prediction warnings. |
| `secondaryDatasetsConfigId` | string | No | Configuration id for secondary datasets to use when making a prediction. |
| `includeProbabilitiesClasses` | array | No | Include only probabilities for these specific class names (max 100). |
| `disableRowLevelErrorHandling` | boolean | No | Skip row by row error handling. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
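Sending both `explanationClassNames` and `explanationNumTopClasses` is an error, since the two are mutually exclusive; a small guard that builds only the explanation-related subset of the update payload:

```python
def explanation_settings(algorithm, class_names=None, num_top_classes=None,
                         max_explanations=None):
    """Build the explanation-related fields, enforcing the documented
    mutual exclusivity of class names vs. top-N classes."""
    if class_names is not None and num_top_classes is not None:
        raise ValueError(
            "explanationClassNames and explanationNumTopClasses are mutually exclusive"
        )
    settings = {"explanationAlgorithm": algorithm}  # "shap" or "xemp"
    if class_names is not None:
        settings["explanationClassNames"] = class_names
    if num_top_classes is not None:
        settings["explanationNumTopClasses"] = num_top_classes
    if max_explanations is not None:
        settings["maxExplanations"] = max_explanations
    return settings
```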

### Update Batch Predictions

**Slug:** `DATAROBOT_UPDATE_BATCH_PREDICTIONS`

Tool to update a Batch Prediction job in DataRobot. Use when you need to hide/unhide a job or update job status information.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `logs` | array | No | The job log. List of log lines from the job log. |
| `hidden` | boolean | No | Hides or unhides the job from the job list |
| `status` | string ("INITIALIZING" | "RUNNING" | "COMPLETED" | "ABORTED" | "FAILED") | No | The current job status |
| `aborted` | string | No | Time when job abortion happened |
| `started` | string | No | Time when job scoring began |
| `completed` | string | No | Time when job completed scoring |
| `failed_rows` | integer | No | Number of rows that have failed scoring |
| `scored_rows` | integer | No | Number of rows that have been used in prediction computation |
| `skipped_rows` | integer | No | Number of rows skipped during scoring. Non-zero only for time series predictions, when the provided dataset contains more historical rows than required. |
| `job_intake_size` | integer | No | Number of bytes in the intake dataset for this job |
| `job_output_size` | integer | No | Number of bytes in the output dataset for this job |
| `prediction_job_id` | string | Yes | ID of the Batch Prediction job to update |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
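The status fields pair naturally with timestamps; a minimal sketch of reporting a completed job (assuming UTC ISO-8601 strings are accepted for the time fields):

```python
from datetime import datetime, timezone

def mark_completed(prediction_job_id, scored_rows, failed_rows=0):
    """Report a job as COMPLETED with row counts and a completion time."""
    return {
        "prediction_job_id": prediction_job_id,
        "status": "COMPLETED",
        "completed": datetime.now(timezone.utc).isoformat(),
        "scored_rows": scored_rows,
        "failed_rows": failed_rows,
    }

update = mark_completed("job789", scored_rows=1000)  # hypothetical job ID
```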

### Upload Batch Prediction CSV Part

**Slug:** `DATAROBOT_UPDATE_BATCH_PREDICTIONS_CSV_UPLOAD_PART`

Tool to upload CSV data in multiple parts for batch predictions. Use when you need to submit prediction input data for an existing batch prediction job in chunks.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `csvData` | string | Yes | CSV data to upload as a string. Include header row with column names in the first line, followed by data rows with feature values. |
| `partNumber` | integer | Yes | The part number of the CSV part being uploaded when using multipart upload |
| `predictionJobId` | string | Yes | ID of the Batch Prediction job |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
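Since each part's `csvData` repeats the header row per the description above, splitting a dataset for multipart upload can be sketched as:

```python
def split_into_parts(header, rows, rows_per_part):
    """Yield (partNumber, csvData) pairs, repeating the header row in
    each part as the csvData field requires."""
    head = ",".join(header)
    for part_number, start in enumerate(range(0, len(rows), rows_per_part)):
        chunk = rows[start:start + rows_per_part]
        body = "\n".join(",".join(map(str, r)) for r in chunk)
        yield part_number, f"{head}\n{body}\n"

parts = list(split_into_parts(["id", "amount"], [[1, 10], [2, 20], [3, 30]],
                              rows_per_part=2))
```

Each `(partNumber, csvData)` pair then maps directly onto one call of this tool, alongside the same `predictionJobId`.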

### Update Calendar

**Slug:** `DATAROBOT_UPDATE_CALENDAR`

Tool to update a calendar's name in DataRobot. Use when you need to rename an existing calendar.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The new name to assign to the calendar. |
| `calendarId` | string | Yes | The ID of the calendar to update. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Calendars Access Control

**Slug:** `DATAROBOT_UPDATE_CALENDARS_ACCESS_CONTROL`

Tool to update access control for a calendar. Use when you need to grant, modify, or revoke user access to a calendar. Allows setting roles (OWNER, EDITOR, CONSUMER) for up to 100 users at once.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `users` | array | Yes | List of users and their roles to update access for this calendar. Maximum 100 users. |
| `calendarId` | string | Yes | The ID of the calendar to update access control for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
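A sketch of building the `users` array with the documented role names and the 100-user cap; the per-user object shape (`username` plus `role`) is an assumption to verify against the API:

```python
VALID_ROLES = {"OWNER", "EDITOR", "CONSUMER"}

def build_access_update(calendar_id, grants):
    """grants: iterable of (username, role) pairs, capped at the
    documented 100-user limit per request."""
    users = []
    for username, role in grants:
        if role not in VALID_ROLES:
            raise ValueError(f"unknown role: {role}")
        users.append({"username": username, "role": role})
    if len(users) > 100:
        raise ValueError("maximum 100 users per request")
    return {"calendarId": calendar_id, "users": users}

payload = build_access_update("cal1", [  # hypothetical calendar ID
    ("ana@example.com", "EDITOR"),
    ("bo@example.com", "CONSUMER"),
])
```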

### Update Catalog Item

**Slug:** `DATAROBOT_UPDATE_CATALOG_ITEM`

Tool to update a catalog item's name, description, or tags. Use when you need to modify metadata for an existing DataRobot catalog item.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | New catalog item name. |
| `tags` | array | No | New catalog item tags. Tags must be lower case, without spaces, and cannot include -$.,{}"#' special characters. |
| `catalogId` | string | Yes | Catalog item ID to update. |
| `description` | string | No | New catalog item description. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Comment

**Slug:** `DATAROBOT_UPDATE_COMMENT`

Tool to update an existing comment in DataRobot. Use when you need to modify the content of a comment or update mentioned users.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `content` | string | Yes | Updated content of the comment; 10,000 characters max |
| `mentions` | array | No | A list of user IDs mentioned in the content. Maximum 100 user IDs allowed. |
| `comment_id` | string | Yes | The ID of the comment to update |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Compliance Doc Templates

**Slug:** `DATAROBOT_UPDATE_COMPLIANCE_DOC_TEMPLATES`

Tool to update an existing model compliance documentation template in DataRobot. Use when you need to modify the name, description, labels, project type, or sections of a compliance template.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | New name for the template. Must be unique among templates created by the user. |
| `labels` | array | No | Names of the labels to assign to the template. |
| `sections` | array | No | List of section objects representing the document structure. Supports nested sub-sections (max 5 levels deep, 500 total sections). Each section must include 'title' and 'type' at minimum. |
| `templateId` | string | Yes | The ID of the model compliance document template to update. |
| `description` | string | No | New description for the template. |
| `projectType` | string ("autoMl" | "textGeneration" | "timeSeries") | No | The project type of the template. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Compliance Doc Template Shared Roles

**Slug:** `DATAROBOT_UPDATE_COMPLIANCE_DOC_TEMPLATES_SHARED_ROLES`

Tool to update shared roles for a compliance document template. Use when you need to grant, modify, or revoke access to a compliance doc template. Allows setting roles (OWNER, EDITOR, CONSUMER) for users, groups, or organizations. Maximum 100 entries per request.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roles` | array | Yes | Array of access control objects defining who has access and their roles. Maximum 100 entries. Each entry can grant access to a user (requires username), or to a group/organization (requires id). |
| `operation` | string | Yes | The operation to perform. Must be 'updateRoles'. |
| `templateId` | string | Yes | The ID of the compliance document template to update shared roles for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Credentials

**Slug:** `DATAROBOT_UPDATE_CREDENTIALS`

Tool to update existing DataRobot credentials. Use when you need to modify credential fields such as name, description, or credential-specific parameters (tokens, keys, etc.).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Name of credentials. |
| `user` | string | No | Username to update for this set of credentials (applicable for credentialType 'basic' and 'snowflake_key_pair_user_account' only). |
| `token` | string | No | OAUTH token (applicable for credentialType 'oauth' only). |
| `gcpKey` | object | No | Google Cloud Platform service account key. |
| `authUrl` | string | No | The URL used for SAP OAuth authentication. |
| `apiToken` | string | No | API token. |
| `clientId` | string | No | OAUTH client ID (applicable for credentialType 'snowflake_oauth_user_account', 'adls_gen2_oauth', 'sap_oauth_account', 'azure_service_principal' and 'azure_oauth'). |
| `configId` | string | No | ID of secure configuration credentials to share by admin. Alternative to googleConfigId (deprecated). |
| `password` | string | No | Password to update for this set of credentials (applicable for credentialType 'basic' only). |
| `passphrase` | string | No | Optional passphrase to encrypt private key (applicable for credentialType 'snowflake_key_pair_user_account' only). |
| `description` | string | No | Description of credentials. If omitted and name is provided, clears any previous description. |
| `oauthScopes` | array | No | External OAUTH scopes (applicable for Snowflake External OAUTH connections, credentialType 'snowflake_oauth_user_account', 'adls_gen2_oauth', and 'azure_oauth'). |
| `publicKeyId` | string | No | Box public key identifier (applicable for credentialType 'box_jwt' only). |
| `sapAiApiUrl` | string | No | The URL used for SAP AI API service. |
| `clientSecret` | string | No | OAUTH client secret (applicable for credentialType 'snowflake_oauth_user_account', 'adls_gen2_oauth', 'sap_oauth_account', 'azure_service_principal' and 'azure_oauth'). |
| `credentialId` | string | Yes | Unique identifier of the credential to update. |
| `enterpriseId` | string | No | Box enterprise identifier (applicable for credentialType 'box_jwt' only). |
| `refreshToken` | string | No | OAUTH refresh token (applicable for credentialType 'oauth' only). |
| `azureTenantId` | string | No | Tenant ID of the Azure AD service principal (applicable for credentialType 'azure_service_principal' only). |
| `oauthConfigId` | string | No | ID of snowflake OAuth configurations shared by admin. |
| `privateKeyStr` | string | No | Private key for key pair authentication (applicable for credentialType 'snowflake_key_pair_user_account' only). |
| `awsAccessKeyId` | string | No | AWS access key ID (applicable for credentialType 's3' only). |
| `googleConfigId` | string | No | ID of Google configurations shared by admin (deprecated). Please use configId instead. |
| `oauthIssuerUrl` | string | No | Snowflake External IDP issuer URL (applicable for Snowflake External OAUTH connections only). |
| `awsSessionToken` | string | No | The AWS session token (applicable for credentialType 's3' only). |
| `oauthIssuerType` | string ("azure" | "okta" | "snowflake") | No | Snowflake IDP issuer type (applicable for credentialType 'snowflake_oauth_user_account' only). |
| `authenticationId` | string | No | Authorized external OAuth provider identifier. |
| `awsSecretAccessKey` | string | No | The AWS secret access key (applicable for credentialType 's3' only). |
| `snowflakeAccountName` | string | No | Snowflake account name (applicable for 'snowflake_oauth_user_account' only). |
| `azureConnectionString` | string | No | Azure connection string (applicable for credentialType 'azure' only). |
| `databricksAccessToken` | string | No | Databricks personal access token. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
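Most fields here apply to a single credentialType; a sketch that rotates S3 keys by sending only the s3-applicable fields from the table above, plus the required `credentialId`:

```python
def rotate_s3_credentials(credential_id, access_key_id, secret_access_key,
                          session_token=None):
    """Build an update payload containing only the fields marked as
    applicable for credentialType 's3'."""
    payload = {
        "credentialId": credential_id,
        "awsAccessKeyId": access_key_id,
        "awsSecretAccessKey": secret_access_key,
    }
    if session_token is not None:
        payload["awsSessionToken"] = session_token
    return payload

payload = rotate_s3_credentials(
    "cred42",            # hypothetical credential ID
    "AKIA...",           # truncated example key ID
    "new-secret-value",
)
```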

### Update Credentials Associations

**Slug:** `DATAROBOT_UPDATE_CREDENTIALS_ASSOCIATIONS`

Tool to add or remove objects associated with credentials. Use when you need to associate data connections with stored credentials or remove existing associations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `credentialId` | string | Yes | ID of the credentials to update associations for. |
| `credentialsToAdd` | array | No | Objects to associate with the credentials. Cannot be used simultaneously with credentialsToRemove. |
| `credentialsToRemove` | array | No | List of object IDs to disassociate from the credentials. Cannot be used simultaneously with credentialsToAdd. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Credentials Associations by ID

**Slug:** `DATAROBOT_UPDATE_CREDENTIALS_ASSOCIATIONS_BY_ID`

Tool to set default credentials for a data connection or batch prediction job. Use when you need to update the default status of a credential association.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `isDefault` | boolean | No | Whether this credential's association with the given object is the default for the given session user. |
| `credentialId` | string | Yes | Credentials entity ID. |
| `associationId` | string | Yes | The compound ID of the data connection. Format: <object_type>:<object_id> where object_id is the ID of the data connection and object_type is either 'dataconnection' or 'batch_prediction_job_definition'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
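The compound `associationId` can be built with a small helper that rejects unknown object types:

```python
VALID_ASSOCIATION_TYPES = {"dataconnection", "batch_prediction_job_definition"}

def association_id(object_type, object_id):
    """Build the <object_type>:<object_id> compound ID described above."""
    if object_type not in VALID_ASSOCIATION_TYPES:
        raise ValueError(
            f"object_type must be one of {sorted(VALID_ASSOCIATION_TYPES)}"
        )
    return f"{object_type}:{object_id}"
```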

### Update Custom Application Source

**Slug:** `DATAROBOT_UPDATE_CUSTOM_APPLICATION_SOURCE`

Tool to update a custom application source's name. Use when you need to rename an existing custom application source.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The new name for the custom application source. |
| `appSourceId` | string | Yes | The ID of the custom application source to update. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Custom Application Source Version

**Slug:** `DATAROBOT_UPDATE_CUSTOM_APPLICATION_SOURCES_VERSIONS`

Tool to update a custom application source version with new label, base environment, or files. Use when you need to modify an existing version's metadata or file configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `label` | string | No | The new label for the custom application source version. |
| `filePath` | string | No | The local path of the file being uploaded. Can be a single path or an array of paths. See the 'file' field explanation for more details. |
| `appSourceId` | string | Yes | The ID of the custom application source. |
| `filesToDelete` | string | No | The IDs of the files to be deleted. Can be a single ID or an array of IDs. |
| `baseEnvironmentId` | string | No | The base environment ID to use with this source version. |
| `appSourceVersionId` | string | Yes | The ID of the custom application source version to update. |
| `baseEnvironmentVersionId` | string | No | The base environment version ID to use with this source version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Custom Job

**Slug:** `DATAROBOT_UPDATE_CUSTOM_JOB`

Tool to update an existing DataRobot custom job. Use when you need to modify custom job properties like name, description, environment, or runtime parameters.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Name of the custom job (max 255 characters). |
| `resources` | object | No | Custom job resources configuration for Kubernetes cluster |
| `entryPoint` | string | No | The ID of the entry point file to use for execution. |
| `customJobId` | string | Yes | ID of the custom job to update. |
| `description` | string | No | The description of the custom job (max 10,000 characters). |
| `environmentId` | string | No | The ID of the execution environment to use for this custom job. |
| `environmentVersionId` | string | No | The ID of the execution environment version to use for this custom job. If not provided, the latest execution environment version will be used. |
| `runtimeParameterValues` | string | No | JSON string to inject runtime parameter values. The fieldName must match a fieldName listed in the runtimeParameterDefinitions section of the metadata.yaml file. This list will be merged with any existing runtime values set from the prior version. Specify a null value to unset specific parameters and fall back to the defaultValue from the definition. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
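`runtimeParameterValues` is a JSON string rather than an object; a sketch that serializes `{fieldName: value}` pairs, with `None` serialized as JSON null to unset a parameter (the `fieldName`/`value` entry shape is an assumption to check against your metadata.yaml definitions):

```python
import json

def runtime_parameter_values(values):
    """Serialize {fieldName: value} into the JSON string the tool
    expects; a null value falls back to the definition's defaultValue
    per the description above."""
    return json.dumps([{"fieldName": k, "value": v} for k, v in values.items()])

payload = {
    "customJobId": "job123",  # hypothetical custom job ID
    "runtimeParameterValues": runtime_parameter_values(
        {"THRESHOLD": 0.7, "DEBUG_MODE": None}  # None unsets DEBUG_MODE
    ),
}
```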

### Update Custom Job Shared Roles

**Slug:** `DATAROBOT_UPDATE_CUSTOM_JOBS_SHARED_ROLES`

Tool to update shared roles for a custom job. Use when you need to grant, modify, or revoke access to a custom job. Allows setting roles (OWNER, EDITOR, CONSUMER) for users, groups, or organizations. Maximum 100 entries per request.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roles` | array | Yes | Array of access control objects defining who has access and their roles. Maximum 100 entries. Each entry can grant access to a user (requires username), or to a group/organization (requires id). |
| `operation` | string | Yes | The operation to perform. Must be 'updateRoles'. |
| `customJobId` | string | Yes | The ID of the custom job to update shared roles for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Custom Model

**Slug:** `DATAROBOT_UPDATE_CUSTOM_MODEL`

Tool to update a DataRobot custom model. Use when you need to modify properties of an existing custom model such as description, memory settings, class labels, or other configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The user-friendly name for the model. |
| `language` | string | No | Programming language name in which model is written. |
| `replicas` | integer | No | A fixed number of replicas that will be set for the given custom-model. |
| `requiresHa` | boolean | No | Require all custom model replicas to be deployed on different Kubernetes nodes for predictions fault tolerance. |
| `targetName` | string | No | The name of the target for labeling predictions. Required for model type 'inference'. Specifying this value for a model type 'training' will result in an error. |
| `classLabels` | array | No | The class labels for multiclass classification. Required for multiclass inference models. If using one of the DataRobot base environments and your model produces an ndarray of unlabeled class probabilities, the order of the labels should match the order of the predicted output. |
| `description` | string | No | The user-friendly description of the model. |
| `customModelId` | string | Yes | The ID of the custom model to update. |
| `desiredMemory` | integer | No | The amount of memory that is expected to be allocated by the custom model in bytes. |
| `maximumMemory` | integer | No | The maximum memory that might be allocated by the custom-model in bytes. If exceeded, the custom-model will be killed. |
| `gitModelVersion` | object | No | Git-related attributes for a custom model version. |
| `negativeClassLabel` | string | No | The negative class label for custom models that support binary classification. If specified, `positiveClassLabel` must also be specified. Default value is "0". |
| `positiveClassLabel` | string | No | The positive class label for custom models that support binary classification. If specified, `negativeClassLabel` must also be specified. Default value is "1". |
| `networkEgressPolicy` | string ("NONE" | "DR_API_ACCESS" | "PUBLIC") | No | Network egress policy options for custom models. |
| `predictionThreshold` | number | No | The prediction threshold which will be used for binary classification custom model. |
| `isTrainingDataForVersionsPermanentlyEnabled` | boolean | No | Indicates that training data assignment is now permanently at the version level only for the custom model. Once enabled, this cannot be disabled. Training data assignment on the model level will be permanently disabled for this particular model. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
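
The parameter table above carries two coupling rules worth enforcing before calling the tool: `positiveClassLabel` and `negativeClassLabel` must be supplied together. A minimal sketch of a payload builder, assuming the helper name and placeholder IDs are hypothetical:

```python
def build_update_custom_model_input(custom_model_id,
                                    positive_class_label=None,
                                    negative_class_label=None,
                                    **extra):
    # The two binary class labels must be specified together,
    # per the parameter table above.
    if (positive_class_label is None) != (negative_class_label is None):
        raise ValueError("positiveClassLabel and negativeClassLabel "
                         "must be specified together")
    payload = {"customModelId": custom_model_id, **extra}
    if positive_class_label is not None:
        payload["positiveClassLabel"] = positive_class_label
        payload["negativeClassLabel"] = negative_class_label
    return payload

payload = build_update_custom_model_input(
    "<customModelId>",               # placeholder ID
    positive_class_label="1",
    negative_class_label="0",
    name="fraud-detector-v2",        # illustrative values
    predictionThreshold=0.35,
)
```

The resulting dict maps directly onto the input parameters of `DATAROBOT_UPDATE_CUSTOM_MODEL`.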

### Update Custom Model Access Control

**Slug:** `DATAROBOT_UPDATE_CUSTOM_MODELS_ACCESS_CONTROL`

Tool to update access control for a custom model. Use when you need to grant, modify, or revoke user access to a custom model. Allows setting roles (OWNER, EDITOR, CONSUMER) and share permissions for up to 100 users at once.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | array | Yes | List of sharing roles to update. Maximum 100 users. |
| `customModelId` | string | Yes | The ID of the custom model to update access control for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
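
The role names and the 100-user cap come from the tool description above; the per-entry keys (`username`, `role`) are an assumed shape for the `data` array. A sketch that validates both before building the request:

```python
VALID_ROLES = {"OWNER", "EDITOR", "CONSUMER"}

def build_access_control_input(custom_model_id, grants):
    # grants: (username, role) pairs; the tool accepts at most
    # 100 users per request.
    if not 1 <= len(grants) <= 100:
        raise ValueError("supply between 1 and 100 grants per request")
    data = []
    for username, role in grants:
        if role not in VALID_ROLES:
            raise ValueError(f"unknown role: {role}")
        data.append({"username": username, "role": role})
    return {"customModelId": custom_model_id, "data": data}

payload = build_access_control_input(
    "<customModelId>",
    [("ana@example.com", "EDITOR"), ("raj@example.com", "CONSUMER")],
)
```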

### Update Custom Model Version

**Slug:** `DATAROBOT_UPDATE_CUSTOM_MODELS_VERSIONS`

Tool to update custom model version files and configuration in DataRobot. Use when you need to create a new version of a custom model by uploading new files or updating settings.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | object | No | A file with code for a custom task or a custom model. For each file supplied, you must have a corresponding "filePath" supplied that shows the relative location of the file. If the supplied file already exists at the supplied filePath, the old file is replaced by the new file. |
| `filePath` | string | No | The local path of the file being uploaded. See the "file" field explanation for more details. Required if "file" is provided. |
| `replicas` | integer | No | A fixed number of replicas that will be set for the given custom model. |
| `requiresHa` | boolean | No | Require all custom model replicas to be deployed on different Kubernetes nodes for predictions fault tolerance. |
| `holdoutData` | string | No | Holdout data configuration may be supplied for version. This functionality has to be explicitly enabled for the current model. |
| `trainingData` | string | No | Training data configuration may be supplied for version. This functionality has to be explicitly enabled for the current model. |
| `customModelId` | string | Yes | The ID of the custom model to update. |
| `desiredMemory` | integer | No | The amount of memory in bytes that is expected to be allocated by the custom model. This setting is incompatible with setting the resourceBundleId. |
| `filesToDelete` | array | No | The IDs of the files to be deleted. |
| `isMajorUpdate` | boolean | Yes | If set to true, a new major version will be created; otherwise a minor version will be created. |
| `maximumMemory` | integer | No | The maximum memory in bytes that might be allocated by the custom model. If exceeded, the custom model will be killed. This setting is incompatible with setting the resourceBundleId. |
| `gitModelVersion` | object | No | Git-related attributes associated with a custom model version. |
| `requiredMetadata` | string | No | Additional parameters required by the execution environment. The required keys are defined by the fieldNames in the base environment's requiredMetadataKeys. Once set, they cannot be changed. If you need to change them, make a new version. |
| `resourceBundleId` | string | No | A single identifier that represents a bundle of resources: Memory, CPU, GPU, etc. A list of available bundles can be obtained via the resource bundles endpoint. |
| `baseEnvironmentId` | string | No | The base environment to use with this model version. At least one of "baseEnvironmentId" and "baseEnvironmentVersionId" must be provided. If both are specified, the version must belong to the environment. |
| `networkEgressPolicy` | string | No | Network egress policy. Must be one of: "NONE" or "PUBLIC". |
| `requiredMetadataValues` | string | No | Additional parameters required by the execution environment. The required fieldNames are defined by the fieldNames in the base environment's requiredMetadataKeys. Field names and values are exposed as environment variables when running the custom model. Example: "required_metadata_values": [{"field_name": "hi", "value": "there"}] |
| `keepTrainingHoldoutData` | boolean | No | If the version should inherit training and holdout data from the previous version. Defaults to true. This field is only applicable if the model has training data for versions enabled. Otherwise the field value will be ignored. |
| `baseEnvironmentVersionId` | string | No | The base environment version ID to use with this model version. At least one of "baseEnvironmentId" and "baseEnvironmentVersionId" must be provided. If both are specified, the version must belong to the environment. If not specified: in the case where the previous model versions exist, the value from the latest model version is inherited, otherwise, the latest successfully built version of the environment specified in "baseEnvironmentId" is used. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
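
Per the table above, every uploaded `file` needs a matching relative `filePath`. A minimal sketch (helper name, file names, and ID placeholders are hypothetical) that keeps the two lists aligned by construction:

```python
def build_version_update_input(custom_model_id, is_major_update,
                               files=(), files_to_delete=()):
    # files: (filePath, content) pairs -- every uploaded file needs a
    # matching relative filePath, per the parameter table above.
    payload = {"customModelId": custom_model_id,
               "isMajorUpdate": is_major_update}
    if files:
        payload["filePath"] = [path for path, _ in files]
        payload["file"] = [content for _, content in files]
    if files_to_delete:
        payload["filesToDelete"] = list(files_to_delete)
    return payload

payload = build_version_update_input(
    "<customModelId>",
    is_major_update=False,           # minor version bump
    files=[("custom.py", b"def score(): ..."),
           ("helpers/util.py", b"PI = 3.14159")],
    files_to_delete=["<oldFileId>"],
)
```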

### Update Custom Model Version by ID

**Slug:** `DATAROBOT_UPDATE_CUSTOM_MODELS_VERSIONS_BY_ID`

Tool to update a custom model version in DataRobot. Use when you need to modify the description, git attributes, or metadata of an existing custom model version.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `description` | string | No | New description for the custom model version (max 10,000 characters). |
| `customModelId` | string | Yes | The ID of the custom model. |
| `gitModelVersion` | object | No | Git related attributes for custom model version update. |
| `requiredMetadata` | object | No | Additional parameters required by the execution environment. Keys are defined by the base environment's requiredMetadataKeys. |
| `customModelVersionId` | string | Yes | The ID of the custom model version to update. |
| `requiredMetadataValues` | array | No | Additional parameters required by the execution environment as a list of field name-value pairs (max 100 items). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Custom Models Versions With Training Data

**Slug:** `DATAROBOT_UPDATE_CUSTOM_MODELS_VERSIONS_WITH_TRAINING_DATA`

Tool to add or replace training and holdout data for a custom model version. Use when you need to update the datasets associated with a custom model for training or validation purposes. The operation is asynchronous: poll the returned location URL to check completion status.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `holdoutData` | object | No | Holdout data configuration for the custom model version |
| `trainingData` | object | Yes | Training data configuration. This field is required to update the model version. |
| `customModelId` | string | Yes | The ID of the custom model to update. |
| `gitModelVersion` | object | No | Git-related attributes associated with a custom model version |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
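
Since this operation is asynchronous, the caller polls the returned location URL until it resolves. A generic polling sketch; the status values (`COMPLETED`, `ERROR`, `RUNNING`) and the `fetch` callable are assumptions, not confirmed API details:

```python
import time

def poll_until_done(fetch, location_url, interval_s=1.0, max_tries=10):
    # `fetch` is any callable that returns the status document for the
    # location URL (e.g. a thin wrapper around an authenticated GET).
    for _ in range(max_tries):
        status = fetch(location_url)
        if status.get("status") in {"COMPLETED", "ERROR"}:
            return status
        time.sleep(interval_s)
    raise TimeoutError("training data update did not finish in time")

# Stubbed fetch for illustration: reports RUNNING once, then COMPLETED.
calls = {"n": 0}
def fake_fetch(url):
    calls["n"] += 1
    return {"status": "RUNNING" if calls["n"] < 2 else "COMPLETED"}

result = poll_until_done(fake_fetch, "<location-url>", interval_s=0.0)
```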

### Update Custom Task Version

**Slug:** `DATAROBOT_UPDATE_CUSTOM_TASKS_VERSIONS`

Tool to update a DataRobot custom task version by uploading new files and/or updating configuration. Creates a new version (major or minor) of an existing custom task with updated code files. Use when you need to modify custom task code, change the base environment, or update task configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | string | No | File(s) with code for the custom task. Each file must have a corresponding filePath entry showing its relative location. If a file at the same filePath already exists, it will be replaced. Can be a single file or a list for multiple files. |
| `filePath` | string | No | The relative path(s) of the file(s) being uploaded. Must correspond to each file in the 'file' field. Examples: 'main.py', 'helpers/helper.py'. If uploading multiple files, provide a list with the same length as the file list. |
| `customTaskId` | string | Yes | The ID of the custom task to update. |
| `filesToDelete` | string | No | The ID(s) of file(s) to be deleted from the custom task version. Can be a single ID or a list. |
| `isMajorUpdate` | string ("true" | "True" | "false" | "False") | Yes | If true, creates a new major version; if false, creates a minor version. |
| `maximumMemory` | integer | No | [DEPRECATED] Maximum memory in bytes that may be allocated by the custom task. If exceeded, the task will be killed. |
| `requiredMetadata` | string | No | Additional parameters required by the execution environment (JSON string). Required keys are defined by the base environment's requiredMetadataKeys. Cannot be changed once set. |
| `baseEnvironmentId` | string | Yes | The base environment to use with this custom task version. |
| `outboundNetworkPolicy` | string ("ISOLATED" | "PUBLIC") | No | Outbound network policy options |
| `requiredMetadataValues` | string | No | Additional parameters required by the execution environment (JSON string). Field names are defined by base environment's requiredMetadataKeys. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
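
Note that `requiredMetadataValues` is passed here as a JSON string rather than an array. A sketch of serializing it (the `field_name`/`value` pair shape follows the example shown for the custom model version tool above; the specific field name is hypothetical):

```python
import json

# The required keys come from the base environment's
# requiredMetadataKeys; MODEL_STAGE is a hypothetical example.
metadata_values = [{"field_name": "MODEL_STAGE", "value": "staging"}]

payload = {
    "customTaskId": "<customTaskId>",
    "baseEnvironmentId": "<baseEnvironmentId>",
    "isMajorUpdate": "true",                      # string enum per the table
    "requiredMetadataValues": json.dumps(metadata_values),
}
```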

### Update Custom Task Version by ID

**Slug:** `DATAROBOT_UPDATE_CUSTOM_TASK_VERSION`

Tool to update a custom task version in DataRobot. Use when you need to modify the description or metadata of an existing custom task version.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `description` | string | No | New description for the custom task version. |
| `customTaskId` | string | Yes | The ID of the custom task to update. |
| `requiredMetadata` | object | No | Additional parameters required by the execution environment. The required keys are defined by the fieldNames in the base environment's requiredMetadataKeys. |
| `customTaskVersionId` | string | Yes | The ID of the custom task version to update. |
| `requiredMetadataValues` | array | No | Additional parameters required by the execution environment as an array of field-value pairs. The required fieldNames are defined by the fieldNames in the base environment's requiredMetadataKeys. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Dataset Definitions Chunk Definitions

**Slug:** `DATAROBOT_UPDATE_DATASET_DEFINITIONS_CHUNK_DEFINITIONS`

Tool to update a chunk definition in a dataset definition. Use when you need to modify chunk definition properties such as name, partition columns, or validation dates.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `updates` | object | Yes | Fields to be updated in the chunk definition. |
| `operations` | object | No | Operations to perform when updating the chunk definition. |
| `chunkDefinitionId` | string | Yes | The ID of the chunk definition to update. |
| `datasetDefinitionId` | string | Yes | The ID of the dataset definition. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Dataset Featurelist

**Slug:** `DATAROBOT_UPDATE_DATASET_FEATURELIST`

Tool to update a dataset featurelist's name or description. Use when you need to modify the metadata of an existing dataset featurelist.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The new name of the featurelist. |
| `datasetId` | string | Yes | The ID of the dataset. |
| `description` | string | No | The new description of the featurelist. |
| `featurelistId` | string | Yes | The ID of the featurelist. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Dataset Relationship

**Slug:** `DATAROBOT_UPDATE_DATASET_RELATIONSHIP`

Tool to update an existing dataset relationship in DataRobot. Use when you need to modify the linked dataset or features used in a relationship between two datasets.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | ID of the source dataset containing the relationship. |
| `linkedFeatures` | array | No | List of feature names from the linked dataset to use in the relationship. Must contain at least 1 feature if provided. |
| `sourceFeatures` | array | No | List of feature names from the source dataset to use in the relationship. Must contain at least 1 feature if provided. |
| `linkedDatasetId` | string | No | ID of the linked dataset to connect with. If provided, updates the relationship to reference this dataset. |
| `datasetRelationshipId` | string | Yes | ID of the dataset relationship to update. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Datasets (Bulk Action)

**Slug:** `DATAROBOT_UPDATE_DATASETS`

Tool to execute bulk actions on multiple datasets. Supports tagging, deleting, and updating role-based access (sharing/unsharing) for datasets. Use when you need to perform the same action on multiple datasets at once.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `payload` | string | Yes | Payload indicating which action to run and with what parameters. Must be one of: TagPayload (for tagging), UpdateRolesPayload (for sharing/unsharing), or DeletePayload (for deletion). |
| `datasetIds` | array | Yes | List of dataset IDs to execute the bulk action on. Use DATAROBOT_LIST_DATASETS to find available dataset IDs. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
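
The exact TagPayload / UpdateRolesPayload / DeletePayload schemas are defined by the DataRobot API; the payload shape below is illustrative only. A sketch of assembling one bulk request:

```python
def build_bulk_action_input(dataset_ids, payload):
    # One payload is applied to every dataset in the list.
    if not dataset_ids:
        raise ValueError("datasetIds must not be empty")
    return {"datasetIds": list(dataset_ids), "payload": payload}

tag_request = build_bulk_action_input(
    ["<datasetId1>", "<datasetId2>"],
    {"type": "TagPayload", "tags": ["reviewed", "q3"]},  # hypothetical shape
)
```

Dataset IDs can be discovered first with `DATAROBOT_LIST_DATASETS`, as the table notes.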

### Update Dataset Access Control

**Slug:** `DATAROBOT_UPDATE_DATASETS_ACCESS_CONTROL`

Tool to update access control for a dataset. Use when you need to grant, modify, or revoke user access to a dataset. Allows setting roles (OWNER, EDITOR, CONSUMER) and permissions (canShare, canUseData) for multiple users at once.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | array | Yes | Array of DatasetAccessControl objects specifying users and their access levels. Must contain at least one entry. |
| `datasetId` | string | Yes | The ID of the dataset to update access control for. |
| `applyGrantToLinkedObjects` | boolean | No | If true, users being granted access to the entity are also granted read access to any linked objects, such as DataSources and DataStores, that may be used by this entity. Default is false. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Dataset by ID

**Slug:** `DATAROBOT_UPDATE_DATASETS_BY_ID`

Tool to update a dataset's name or categories in DataRobot's global catalog. Use when you need to modify dataset metadata such as its name or intended-use categories.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The new name of the dataset. |
| `datasetId` | string | Yes | The ID of the dataset to update. |
| `categories` | string | No | An array of strings describing the intended use of the dataset. If any categories were previously specified for the dataset, they will be overwritten. Can be a single string or a list of category values. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Recover Deleted Dataset

**Slug:** `DATAROBOT_UPDATE_DATASETS_DELETED`

Tool to recover a deleted dataset in DataRobot. Use when you need to restore a dataset that was previously marked as deleted.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The unique identifier of the dataset to recover. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Dataset Refresh Job

**Slug:** `DATAROBOT_UPDATE_DATASETS_REFRESH_JOBS`

Tool to update a dataset refresh job configuration. Use when you need to modify the schedule, name, enabled status, or other settings for an existing dataset refresh job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Scheduled job name. |
| `jobId` | string | Yes | ID of the user scheduled dataset refresh job to update. |
| `enabled` | boolean | No | Boolean for whether the scheduled job is active (true) or inactive (false). |
| `schedule` | object | No | Schedule configuration for dataset refresh jobs |
| `datasetId` | string | Yes | The dataset ID associated with the scheduled refresh job. |
| `categories` | string | No | An array of strings describing the intended use of the dataset. The supported options are 'TRAINING' and 'PREDICTION'. |
| `credentials` | string | No | A JSON string describing the credentials for the data engine queries to use when refreshing. |
| `useKerberos` | boolean | No | If true, the Kerberos authentication system is used in conjunction with a credential ID. |
| `credentialId` | string | No | The ID of the set of credentials to use to run the scheduled job when the Kerberos authentication service is utilized. Required when useKerberos is true. |
| `scheduleReferenceDate` | string | No | The UTC reference date in RFC-3339 format of when the schedule starts from. This value is returned in /api/v2/datasets/(datasetId)/refreshJobs/(jobId)/ to help build a more intuitive schedule picker. Required when schedule is being updated. The default is the current time. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
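
A sketch of a full update request. The cron-style field names inside `schedule` are assumed from DataRobot's scheduling convention and are not confirmed by the table above; `scheduleReferenceDate` is rendered in RFC-3339 form (UTC):

```python
from datetime import datetime, timezone

payload = {
    "jobId": "<jobId>",
    "datasetId": "<datasetId>",
    "enabled": True,
    "schedule": {                 # assumed field names
        "minute": [0],
        "hour": [6],
        "dayOfMonth": ["*"],
        "month": ["*"],
        "dayOfWeek": [1],         # e.g. Mondays
    },
    # Required when schedule is being updated; defaults to "now".
    "scheduleReferenceDate": datetime.now(timezone.utc)
        .isoformat(timespec="seconds"),
}
```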

### Update Dataset Shared Roles

**Slug:** `DATAROBOT_UPDATE_DATASETS_SHARED_ROLES`

Tool to modify dataset shared roles in DataRobot. Use when you need to update permissions for users, groups, or organizations on a dataset. Updates roles by adding, modifying, or removing access permissions. Ensure at least one OWNER remains after updates.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roles` | array | Yes | Array of role update requests. Maximum 100 items. Each item can specify a recipient by 'id' (for existing users/groups/orgs) or by 'name' (for new users only). Use 'id' when updating permissions for known entities, 'name' when granting access to new users. |
| `operation` | string | Yes | The operation to perform. Must be 'updateRoles'. |
| `dataset_id` | string | Yes | The ID of the dataset to update shared roles for. |
| `applyGrantToLinkedObjects` | boolean | No | If true, grants read access to linked objects (DataSources, DataStores) for users being granted access. Ignored if no linked objects exist. Will not lower existing permissions. May require additional permissions. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
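
Each role entry identifies its recipient by exactly one of `id` (existing user/group/org) or `name` (new user), per the table above. A sketch that enforces that rule and the keep-one-OWNER requirement (entry shape beyond those keys is an assumption):

```python
def role_entry(role, *, id=None, name=None):
    # Exactly one of `id` (existing user/group/org) or `name`
    # (new user) identifies the recipient.
    if (id is None) == (name is None):
        raise ValueError("specify exactly one of id or name")
    entry = {"role": role}
    if id is not None:
        entry["id"] = id
    else:
        entry["name"] = name
    return entry

roles = [
    role_entry("OWNER", id="<userId>"),
    role_entry("CONSUMER", name="new.user@example.com"),
]
# At least one OWNER must remain after the update.
assert any(r["role"] == "OWNER" for r in roles)

request = {
    "dataset_id": "<datasetId>",
    "operation": "updateRoles",
    "roles": roles,
}
```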

### Recover Deleted Dataset Version

**Slug:** `DATAROBOT_UPDATE_DATASETS_VERSIONS_DELETED`

Tool to recover a deleted dataset version in DataRobot. Use when you need to restore a specific version of a dataset that was previously marked as deleted.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | Yes | The unique identifier of the dataset. |
| `datasetVersionId` | string | Yes | The unique identifier of the dataset version to recover. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Data Stage Part

**Slug:** `DATAROBOT_UPDATE_DATA_STAGES_PARTS`

Tool to upload a part file to a DataRobot data stage. Use when uploading large datasets in multiple parts. The uploaded part is verified with a checksum and size returned in the response.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | object | Yes | The part file to upload to the data stage. |
| `partNumber` | integer | Yes | The part number associated with this part. Must be a positive integer. |
| `dataStageId` | string | Yes | The ID of the data stage where the part will be uploaded. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
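
For multi-part uploads, each call carries one chunk and a positive 1-indexed `partNumber`. A sketch of splitting a payload into parts; computing MD5 locally lets you compare against the checksum returned in the response (that the checksum is MD5 is an assumption):

```python
import hashlib

def iter_parts(data: bytes, part_size: int):
    # Yield 1-indexed parts with a local MD5 for post-upload
    # verification against the response checksum.
    for offset in range(0, len(data), part_size):
        chunk = data[offset:offset + part_size]
        yield {
            "partNumber": offset // part_size + 1,
            "file": chunk,
            "md5": hashlib.md5(chunk).hexdigest(),
            "size": len(chunk),
        }

parts = list(iter_parts(b"x" * 10_000, part_size=4_096))
```

Each yielded dict supplies the `file` and `partNumber` inputs for one call against the same `dataStageId`.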

### Update Deployment

**Slug:** `DATAROBOT_UPDATE_DEPLOYMENT`

Tool to update an existing DataRobot deployment's metadata. Use when you need to modify a deployment's label, description, or importance level.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `label` | string | No | Human-readable name for the deployment (max 512 characters). If null, removes the existing label. |
| `importance` | string ("CRITICAL" | "HIGH" | "MODERATE" | "LOW") | No | Importance level of the deployment. |
| `description` | string | No | Description for the deployment (max 10,000 characters). If null, removes the existing description. |
| `deploymentId` | string | Yes | Unique identifier of the deployment to update. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Challenger Replay Settings

**Slug:** `DATAROBOT_UPDATE_DEPLOYMENTS_CHALLENGER_REPLAY_SETTINGS`

Tool to update challenger replay settings for a deployment. Use to enable/disable scheduled replay and configure the schedule.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `enabled` | boolean | No | Identifies whether scheduled replay is enabled. |
| `schedule` | object | No | Scheduling configuration for challenger replay jobs. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Deployment Custom Metric

**Slug:** `DATAROBOT_UPDATE_DEPLOYMENTS_CUSTOM_METRICS`

Tool to update settings for a deployment's custom metric. Use to modify metric description, directionality, units, or other configuration settings.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Name of the custom metric |
| `type` | string ("average" | "categorical" | "gauge" | "sum") | No | Type (and aggregation behavior) of the metric |
| `batch` | object | No | Source of the custom metric batch ID when reading values from a columnar dataset such as a file |
| `units` | string | No | Units (Y-axis label) of the given custom metric |
| `value` | object | No | Source of the custom metric value when reading values from a columnar dataset such as a file |
| `timestamp` | object | No | Timestamp source for the custom metric when reading values from a columnar dataset such as a file. By default, pd.to_datetime formatting behaviour is replicated |
| `categories` | array | No | Category values. Required for categorical custom metrics (max 25 items) |
| `description` | string | No | A description of the custom metric |
| `sampleCount` | object | No | Points to a weight column if users provide pre-aggregated metric values. Used with columnar datasets |
| `deploymentId` | string | Yes | ID of the deployment |
| `baselineValues` | array | No | Baseline values (max 5 items) |
| `customMetricId` | string | Yes | ID of the custom metric to update |
| `directionality` | string ("higherIsBetter" | "lowerIsBetter") | No | Directionality of the custom metric |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
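
The table above carries several limits worth checking client-side: categorical metrics need `categories` (max 25 items), `baselineValues` is capped at 5, and `directionality` is a two-value enum. A sketch (helper name and the baseline entry shape are hypothetical):

```python
def build_custom_metric_update(deployment_id, custom_metric_id, **settings):
    # Enforce the documented limits before sending the update.
    if settings.get("type") == "categorical" and not settings.get("categories"):
        raise ValueError("categorical metrics require categories")
    if len(settings.get("categories") or []) > 25:
        raise ValueError("at most 25 categories")
    if len(settings.get("baselineValues") or []) > 5:
        raise ValueError("at most 5 baseline values")
    if settings.get("directionality") not in (None, "higherIsBetter",
                                              "lowerIsBetter"):
        raise ValueError("invalid directionality")
    return {"deploymentId": deployment_id,
            "customMetricId": custom_metric_id, **settings}

payload = build_custom_metric_update(
    "<deploymentId>", "<customMetricId>",
    type="gauge",
    units="USD",
    directionality="lowerIsBetter",
    baselineValues=[{"value": 120.0}],   # hypothetical entry shape
)
```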

### Update Deployment Settings

**Slug:** `DATAROBOT_UPDATE_DEPLOYMENT_SETTINGS`

Tool to update deployment settings for a DataRobot deployment. Use when you need to configure settings like predictions data collection, feature drift, target drift, bias and fairness, humility, prediction intervals, and other deployment-level configurations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `humility` | object | No | Humility setting for the deployment. |
| `targetDrift` | object | No | Target drift setting for the deployment. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `featureDrift` | object | No | Feature drift setting for the deployment. |
| `associationId` | object | No | Association ID settings for the deployment. |
| `biasAndFairness` | object | No | Bias and fairness setting for the deployment. |
| `segmentAnalysis` | object | No | Segment analysis setting for the deployment. |
| `automaticActuals` | object | No | Automatic actuals setting for the deployment. |
| `challengerModels` | object | No | Challenger models setting for the deployment. |
| `processingLimits` | object | No | Processing limits setting for the deployment. |
| `predictionWarning` | object | No | Prediction warning setting for the deployment. |
| `predictionIntervals` | object | No | Prediction intervals setting for the deployment. |
| `predictionsByForecastDate` | object | No | Forecast date setting for the deployment. |
| `predictionsDataCollection` | object | No | Predictions data collection setting for the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
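
The settings above are submitted as one nested body, and only the sections you include are changed. A minimal sketch of assembling such a body in Python (the section names mirror the parameter table; the helper function itself is hypothetical, not part of any DataRobot client):

```python
# Sketch: assemble an UPDATE_DEPLOYMENT_SETTINGS body, omitting unset
# sections so the update only touches the settings it names.
from typing import Any, Optional


def build_settings_payload(
    target_drift: Optional[dict] = None,
    feature_drift: Optional[dict] = None,
    predictions_data_collection: Optional[dict] = None,
    challenger_models: Optional[dict] = None,
) -> dict:
    candidate: dict[str, Any] = {
        "targetDrift": target_drift,
        "featureDrift": feature_drift,
        "predictionsDataCollection": predictions_data_collection,
        "challengerModels": challenger_models,
    }
    # Drop sections that were not supplied; a partial update should
    # mention only the settings it intends to change.
    return {k: v for k, v in candidate.items() if v is not None}


# The {"enabled": True} shape is an assumption for illustration.
payload = build_settings_payload(
    target_drift={"enabled": True},
    predictions_data_collection={"enabled": True},
)
```

The same pattern extends to the remaining optional sections (`humility`, `biasAndFairness`, and so on): add a keyword argument per section and let the filter drop whatever is unset.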

### Update Deployment Health Settings

**Slug:** `DATAROBOT_UPDATE_DEPLOYMENTS_HEALTH_SETTINGS`

Tool to update health settings for a DataRobot deployment. Configure monitoring thresholds for accuracy, data drift, service health, fairness, and timeliness. Use after deployment is active to customize health monitoring behavior.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `service` | object | No | Service health status settings. |
| `accuracy` | object | No | Accuracy health status settings. |
| `fairness` | object | No | Fairness health status settings. |
| `dataDrift` | object | No | Data drift health status settings. |
| `customMetrics` | object | No | Custom metrics health status settings. |
| `deployment_id` | string | Yes | Unique identifier of the deployment. |
| `actualsTimeliness` | object | No | Timeliness health settings for actuals. |
| `predictionsTimeliness` | object | No | Timeliness health settings for predictions. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Deployment Model

**Slug:** `DATAROBOT_UPDATE_DEPLOYMENTS_MODEL`

Tool to replace a deployment's champion model with a different model or model package. Use when updating the active model in a deployment due to accuracy improvements, data drift, or performance needs.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `reason` | string ("ACCURACY" | "DATA_DRIFT" | "ERRORS" | "SCHEDULED_REFRESH" | "SCORING_SPEED" | "DEPRECATION" | "OTHER") | Yes | Reason for the model replacement: ACCURACY, DATA_DRIFT, ERRORS, SCHEDULED_REFRESH, SCORING_SPEED, DEPRECATION, or OTHER. |
| `modelId` | string | No | ID of the model used to replace deployment's champion model. Required if modelPackageId is not provided. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `modelPackageId` | string | No | ID of the model package used to replace deployment's champion model. Required if modelId is not provided. |
| `runtimeParameterValues` | string | No | Inject values into a model at runtime. The fieldName must match a fieldName defined in the model package. This list is merged with any existing runtime values set through the deployed model package. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
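
Because `modelId` and `modelPackageId` are each required only when the other is absent, a local guard can reject malformed requests before submission. A sketch (treating the two as mutually exclusive is a conservative reading of the table, not a documented API rule):

```python
# Sketch: validate UPDATE_DEPLOYMENTS_MODEL inputs locally.
VALID_REASONS = {
    "ACCURACY", "DATA_DRIFT", "ERRORS", "SCHEDULED_REFRESH",
    "SCORING_SPEED", "DEPRECATION", "OTHER",
}


def build_model_replacement(reason, model_id=None, model_package_id=None):
    if reason not in VALID_REASONS:
        raise ValueError(f"invalid reason: {reason!r}")
    # Exactly one of the two identifiers: "neither" is clearly invalid
    # per the table; rejecting "both" is an assumption made here.
    if (model_id is None) == (model_package_id is None):
        raise ValueError("provide exactly one of modelId or modelPackageId")
    body = {"reason": reason}
    if model_id is not None:
        body["modelId"] = model_id
    else:
        body["modelPackageId"] = model_package_id
    return body
```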

### Update Deployment Monitoring Batch

**Slug:** `DATAROBOT_UPDATE_DEPLOYMENTS_MONITORING_BATCHES`

Tool to update a monitoring batch in a deployment. Use when you need to modify batch properties like name, description, lock status, or external context URL.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `isLocked` | boolean | No | Whether predictions can be added to the batch. |
| `batchName` | string | No | Name of the monitoring batch. |
| `description` | string | No | Description of the monitoring batch. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `monitoringBatchId` | string | Yes | ID of the monitoring batch. |
| `externalContextUrl` | string | No | External URL associated with the batch. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Deployments Retraining Settings

**Slug:** `DATAROBOT_UPDATE_DEPLOYMENTS_RETRAINING_SETTINGS`

Tool to update deployment retraining settings. Use when configuring automatic retraining for a deployment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `datasetId` | string | No | ID of the retraining dataset. |
| `credentialId` | string | No | ID of the credential used to refresh retraining dataset. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |
| `retrainingUserId` | string | No | ID of the retraining user. |
| `predictionEnvironmentId` | string | No | ID of the prediction environment to associate with the challengers created by retraining policies. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Deployment Shared Roles

**Slug:** `DATAROBOT_UPDATE_DEPLOYMENTS_SHARED_ROLES`

Tool to modify deployment shared roles in DataRobot. Use when you need to update permissions for users, groups, or organizations on a deployment. Updates roles by adding, modifying, or removing access permissions. Ensure at least one OWNER remains after updates.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roles` | array | Yes | Array of role update requests. Maximum 100 items. Each item can specify a recipient by 'id' (for existing users/groups/orgs) or by 'username' (for users). Use 'id' when updating permissions for known entities, 'username' when granting access by email. |
| `operation` | string | Yes | The operation to perform. Must be 'updateRoles'. |
| `deployment_id` | string | Yes | The ID of the deployment to update shared roles for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
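
Each entry in `roles` names its recipient by `id` or by `username`, never both, and the array is capped at 100 items. A sketch of building and sanity-checking the body (the example ID, email, and role names such as `USER` are hypothetical placeholders; the table only guarantees that an OWNER role exists):

```python
# Sketch: build the updateRoles body for UPDATE_DEPLOYMENTS_SHARED_ROLES
# and enforce the documented constraints locally before sending.
def build_shared_roles_body(roles):
    if not 1 <= len(roles) <= 100:
        raise ValueError("roles must contain between 1 and 100 items")
    for entry in roles:
        # A recipient is addressed by 'id' or by 'username', not both.
        if ("id" in entry) == ("username" in entry):
            raise ValueError("each entry needs 'id' or 'username', not both")
    return {"operation": "updateRoles", "roles": roles}


body = build_shared_roles_body([
    {"username": "analyst@example.com", "role": "USER"},     # grant by email
    {"id": "64a1f0c2d3e4f5a6b7c8d9e0", "role": "OWNER"},     # update by id
])
```

Remember the caveat in the description: the resulting role set must still leave at least one OWNER on the deployment, which only the server can verify.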

### Update Deployment Status

**Slug:** `DATAROBOT_UPDATE_DEPLOYMENT_STATUS`

Tool to update deployment status. Use when you need to activate or deactivate a deployment. Returns 202 when the job is submitted.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `status` | string ("active" | "inactive") | Yes | Status that deployment should transition to: active or inactive. |
| `deploymentId` | string | Yes | Unique identifier of the deployment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Entity Notification Channel

**Slug:** `DATAROBOT_UPDATE_ENTITY_NOTIFICATION_CHANNELS`

Tool to update an entity notification channel for a deployment or custom job. Use when you need to modify the configuration of an existing notification channel, such as changing the name, language preference, or channel-specific settings like email addresses, payload URLs, or custom headers.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The name of the notification channel. |
| `channel_id` | string | Yes | The ID of the entity notification channel to update. |
| `dr_entities` | array | No | The IDs and names of DataRobot Users, Groups, or Custom Jobs associated with DataRobotUser, DataRobotGroup, or DataRobotCustomJob channel types. Required for these channel types; must have 1-100 items. |
| `payload_url` | string | No | The payload URL of the notification channel. Required for Webhook, Slack, and MSTeams channel types. |
| `channel_type` | string ("DataRobotCustomJob" | "DataRobotGroup" | "DataRobotUser" | "Database" | "Email" | "InApp" | "InsightsComputations" | "MSTeams" | "Slack" | "Webhook") | No | Types of notification channels available. |
| `content_type` | string ("application/json" | "application/x-www-form-urlencoded") | No | Content types for notification messages. |
| `secret_token` | string | No | Secret token to be used for the notification channel. Used for Webhook authentication. |
| `validate_ssl` | boolean | No | Whether to validate SSL certificates for the notification channel. Applies to Webhook, Slack, and MSTeams channel types. |
| `email_address` | string | No | The email address to be used in the notification channel. Required for Email channel type. |
| `language_code` | string ("en" | "es_419" | "fr" | "ja" | "ko" | "ptBR") | No | Language codes for notification preferences. |
| `custom_headers` | array | No | Custom headers and their values to be sent in the notification channel. Maximum 100 items. |
| `related_entity_id` | string | Yes | The ID of the related entity (deployment or custom job). |
| `verification_code` | string | No | Required if the channel type is Email and the email address needs verification. |
| `related_entity_type` | string ("deployment" | "customjob") | Yes | Type of related entity (deployment or customjob). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
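
Several fields above are required only for particular channel types (`payload_url` for Webhook/Slack/MSTeams, `email_address` for Email, `dr_entities` for the DataRobot* types). A small local check derived from those descriptions can catch an incomplete update before it is sent; the mapping below restates the table and is a convenience sketch, not an official client:

```python
# Sketch: per-channel-type required fields for
# UPDATE_ENTITY_NOTIFICATION_CHANNELS, taken from the parameter table.
REQUIRED_BY_CHANNEL_TYPE = {
    "Email": ["email_address"],
    "Webhook": ["payload_url"],
    "Slack": ["payload_url"],
    "MSTeams": ["payload_url"],
    "DataRobotUser": ["dr_entities"],
    "DataRobotGroup": ["dr_entities"],
    "DataRobotCustomJob": ["dr_entities"],
}


def missing_fields(channel_type, update):
    """Return the conditionally required fields absent from `update`."""
    required = REQUIRED_BY_CHANNEL_TYPE.get(channel_type, [])
    return [f for f in required if not update.get(f)]


update = {"name": "Ops alerts", "channel_type": "Webhook"}
problems = missing_fields("Webhook", update)  # payload_url is absent here
```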

### Update Entity Notification Policies

**Slug:** `DATAROBOT_UPDATE_ENTITY_NOTIFICATION_POLICIES`

Tool to update entity notification policy. Use when you need to modify notification settings for deployments or custom jobs.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The name of the notification policy. |
| `active` | boolean | No | Defines if the notification policy is active or not. |
| `policyId` | string | Yes | The id of the notification policy. |
| `channelId` | string | No | The id of the notification channel to be used to send the notification. |
| `eventType` | string | No | The type of the event that triggers the notification. |
| `eventGroup` | string | No | The group of events that triggers the notification. |
| `channelScope` | string ("organization" | "Organization" | "ORGANIZATION" | "entity" | "Entity" | "ENTITY" | "template" | "Template" | "TEMPLATE") | No | Scope of the channel. |
| `relatedEntityId` | string | Yes | The id of related entity. |
| `maximalFrequency` | string | No | Maximal frequency between policy runs, as an ISO 8601 duration string. |
| `relatedEntityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "customjob" | "Customjob" | "CUSTOMJOB") | Yes | Type of related entity (deployment or customjob). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Entity Notification Policy Template

**Slug:** `DATAROBOT_UPDATE_ENTITY_NOTIFICATION_POLICY_TEMPLATE`

Tool to update an entity notification policy template in DataRobot. Use when you need to modify notification settings for existing templates. At least one optional field must be provided for update.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The updated name of the notification policy template (max 100 characters). |
| `active` | boolean | No | Defines if the notification policy is active or not. |
| `policyId` | string | Yes | The ID of the notification policy template to update. |
| `channelId` | string | No | The ID of the notification channel to be used to send the notification. |
| `eventType` | string ("secure_config.created" | "secure_config.deleted" | "secure_config.shared" | "dataset.created" | "dataset.registered" | "dataset.deleted" | "datasets.deleted" | "datasetrelationship.created" | "dataset.shared" | "datasets.shared" | "file.created" | "file.registered" | "file.deleted" | "file.shared" | "comment.created" | "comment.updated" | "invite_job.completed" | "misc.asset_access_request" | "misc.webhook_connection_test" | "misc.webhook_resend" | "misc.email_verification" | "monitoring.spooler_channel_base" | "monitoring.spooler_channel_red" | "monitoring.spooler_channel_green" | "monitoring.external_model_nan_predictions" | "management.deploymentInfo" | "model_deployments.None" | "model_deployments.deployment_sharing" | "model_deployments.model_replacement" | "prediction_request.None" | "prediction_request.failed" | "model_deployments.humility_rule" | "model_deployments.model_replacement_lifecycle" | "model_deployments.model_replacement_started" | "model_deployments.model_replacement_succeeded" | "model_deployments.model_replacement_failed" | "model_deployments.model_replacement_validation_warning" | "model_deployments.deployment_creation" | "model_deployments.deployment_deletion" | "model_deployments.service_health_yellow_from_green" | "model_deployments.service_health_yellow_from_red" | "model_deployments.service_health_red" | "model_deployments.data_drift_yellow_from_green" | "model_deployments.data_drift_yellow_from_red" | "model_deployments.data_drift_red" | "model_deployments.accuracy_yellow_from_green" | "model_deployments.accuracy_yellow_from_red" | "model_deployments.accuracy_red" | "model_deployments.health.fairness_health.green_to_yellow" | "model_deployments.health.fairness_health.red_to_yellow" | "model_deployments.health.fairness_health.red" | "model_deployments.health.custom_metrics_health.green_to_yellow" | "model_deployments.health.custom_metrics_health.red_to_yellow" | "model_deployments.health.custom_metrics_health.red" | "model_deployments.health.base.green" | "model_deployments.service_health_green" | "model_deployments.data_drift_green" | "model_deployments.accuracy_green" | "model_deployments.health.fairness_health.green" | "model_deployments.health.custom_metrics_health.green" | "model_deployments.retraining_policy_run_started" | "model_deployments.retraining_policy_run_succeeded" | "model_deployments.retraining_policy_run_failed" | "model_deployments.challenger_scoring_success" | "model_deployments.challenger_scoring_data_warning" | "model_deployments.challenger_scoring_failure" | "model_deployments.challenger_scoring_started" | "model_deployments.challenger_model_validation_warning" | "model_deployments.challenger_model_created" | "model_deployments.challenger_model_deleted" | "model_deployments.actuals_upload_failed" | "model_deployments.actuals_upload_warning" | "model_deployments.training_data_baseline_calculation_started" | "model_deployments.training_data_baseline_calculation_completed" | "model_deployments.training_data_baseline_failed" | "model_deployments.custom_model_deployment_creation_started" | "model_deployments.custom_model_deployment_creation_completed" | "model_deployments.custom_model_deployment_creation_failed" | "model_deployments.deployment_prediction_explanations_preview_job_submitted" | "model_deployments.deployment_prediction_explanations_preview_job_completed" | "model_deployments.deployment_prediction_explanations_preview_job_failed" | "model_deployments.custom_model_deployment_activated" | "model_deployments.custom_model_deployment_activation_failed" | "model_deployments.custom_model_deployment_deactivated" | "model_deployments.custom_model_deployment_deactivation_failed" | "model_deployments.prediction_processing_rate_limit_reached" | "model_deployments.prediction_data_processing_rate_limit_reached" | "model_deployments.prediction_data_processing_rate_limit_warning" | "model_deployments.actuals_processing_rate_limit_reached" | "model_deployments.actuals_processing_rate_limit_warning" | "model_deployments.deployment_monitoring_data_cleared" | "model_deployments.deployment_launch_started" | "model_deployments.deployment_launch_succeeded" | "model_deployments.deployment_launch_failed" | "model_deployments.deployment_shutdown_started" | "model_deployments.deployment_shutdown_succeeded" | "model_deployments.deployment_shutdown_failed" | "model_deployments.endpoint_update_started" | "model_deployments.endpoint_update_succeeded" | "model_deployments.endpoint_update_failed" | "model_deployments.management_agent_service_health_green" | "model_deployments.management_agent_service_health_yellow" | "model_deployments.management_agent_service_health_red" | "model_deployments.management_agent_service_health_unknown" | "model_deployments.predictions_missing_association_id" | "model_deployments.prediction_result_rows_cleand_up" | "model_deployments.batch_deleted" | "model_deployments.batch_creation_limit_reached" | "model_deployments.batch_creation_limit_exceeded" | "model_deployments.batch_not_found" | "model_deployments.predictions_encountered_for_locked_batch" | "model_deployments.predictions_encountered_for_deleted_batch" | "model_deployments.scheduled_report_generated" | "model_deployments.predictions_timeliness_health_red" | "model_deployments.actuals_timeliness_health_red" | "model_deployments.service_health_still_red" | "model_deployments.data_drift_still_red" | "model_deployments.accuracy_still_red" | "model_deployments.health.fairness_health.still_red" | "model_deployments.health.custom_metrics_health.still_red" | "model_deployments.predictions_timeliness_health_still_red" | "model_deployments.actuals_timeliness_health_still_red" | "model_deployments.service_health_still_yellow" | "model_deployments.data_drift_still_yellow" | "model_deployments.accuracy_still_yellow" | "model_deployments.health.fairness_health.still_yellow" | "model_deployments.health.custom_metrics_health.still_yellow" | "model_deployments.prediction_payload_parsing_failure" | "model_deployments.deployment_inference_server_creation_started" | "model_deployments.deployment_inference_server_creation_failed" | "model_deployments.deployment_inference_server_creation_completed" | "model_deployments.deployment_inference_server_deletion" | "model_deployments.deployment_inference_server_idle_stopped" | "model_deployments.deployment_inference_server_maintenance_started" | "entity_notification_policy_template.shared" | "notification_channel_template.shared" | "project.created" | "project.deleted" | "project.shared" | "autopilot.complete" | "autopilot.started" | "autostart.failure" | "perma_delete_project.success" | "perma_delete_project.failure" | "users_delete.preview_started" | "users_delete.preview_completed" | "users_delete.preview_failed" | "users_delete.started" | "users_delete.completed" | "users_delete.failed" | "application.created" | "application.shared" | "model_version.added" | "batch_predictions.success" | "batch_predictions.failed" | "batch_predictions.scheduler.auto_disabled" | "change_request.cancelled" | "change_request.created" | "change_request.deployment_approval_requested" | "change_request.resolved" | "change_request.proposed_changes_updated" | "change_request.pending" | "change_request.commenting_review_added" | "change_request.approving_review_added" | "change_request.changes_requesting_review_added" | "custom_job_run.success" | "custom_job_run.failed" | "custom_job_run.interrupted" | "custom_job_run.cancelled" | "monitoring.rate_limit_enforced" | "notebook_schedule.created" | "notebook_schedule.failure" | "notebook_schedule.completed") | No | Specific event types that trigger notifications. |
| `eventGroup` | string ("secure_config.all" | "dataset.all" | "file.all" | "comment.all" | "invite_job.all" | "deployment_prediction_explanations_computation.all" | "model_deployments.critical_health" | "model_deployments.critical_frequent_health_change" | "model_deployments.frequent_health_change" | "model_deployments.health" | "model_deployments.retraining_policy" | "inference_endpoints.health" | "model_deployments.management_agent" | "model_deployments.management_agent_health" | "prediction_request.all" | "challenger_management.all" | "challenger_replay.all" | "model_deployments.all" | "project.all" | "perma_delete_project.all" | "users_delete.all" | "applications.all" | "model_version.stage_transitions" | "model_version.all" | "use_case.all" | "batch_predictions.all" | "change_requests.all" | "custom_job_run.all" | "custom_job_run.unsuccessful" | "insights_computation.all" | "notebook_schedule.all" | "monitoring.all") | No | Event groups that trigger notifications. |
| `maximalFrequency` | string | No | Maximal frequency between policy runs, as an ISO 8601 duration string (e.g., 'PT1H' for 1 hour, 'P1D' for 1 day). Use null to remove the frequency limitation. |
| `relatedEntityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "customjob" | "Customjob" | "CUSTOMJOB") | Yes | Type of related entity that this template applies to (deployment or customjob, case-insensitive). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
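
Since `maximalFrequency` is an ISO 8601 duration string, a lightweight local check can catch malformed values like `"1H"` before submission. The sketch below accepts a simple day/time subset of the format ('PT1H', 'P1D', 'P1DT12H'); it is a client-side guard under that assumption, not DataRobot's own validation, and does not cover year/month/week designators:

```python
# Sketch: sanity-check a maximalFrequency value locally.
import re

# P<n>D optionally followed by T<n>H<n>M<n>S; the lookahead rejects a
# bare "P" or "PT" with no components.
_DURATION = re.compile(r"^P(?=\d|T\d)(\d+D)?(T(\d+H)?(\d+M)?(\d+S)?)?$")


def is_valid_frequency(value: str) -> bool:
    """Return True if `value` looks like a supported ISO 8601 duration."""
    return bool(_DURATION.match(value))
```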

### Update Notification Policy Template Shared Roles

**Slug:** `DATAROBOT_UPDATE_ENTITY_NOTIFY_POLICY_TPL_SHARED_ROLES`

Tool to update entity notification policy template shared roles in DataRobot. Use when you need to modify permissions for users, groups, or organizations on a notification policy template. Updates roles by adding, modifying, or removing access permissions. Ensure at least one OWNER remains after updates to avoid leaving the template without an owner.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roles` | array | Yes | Array of role update requests. Maximum 100 items. Each item can specify a recipient by 'id' (for existing users/groups/orgs) or by 'username' (for users). Use 'id' when updating permissions for known entities, 'username' when granting access by email. |
| `policyId` | string | Yes | The ID of the notification policy template to update shared roles for. |
| `operation` | string | Yes | The operation to perform. Must be 'updateRoles'. |
| `relatedEntityType` | string ("deployment" | "Deployment" | "DEPLOYMENT" | "customjob" | "Customjob" | "CUSTOMJOB") | Yes | Type of related entity (deployment or customjob). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Entity Tag

**Slug:** `DATAROBOT_UPDATE_ENTITY_TAGS`

Tool to update an entity tag's name in DataRobot. Use when you need to rename an existing entity tag.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The new name for the entity tag. Must be 100 characters or less. |
| `entityTagId` | string | Yes | The ID of the entity tag to update. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Execution Environment

**Slug:** `DATAROBOT_UPDATE_EXECUTION_ENVIRONMENT`

Tool to update a DataRobot execution environment. Use when you need to modify properties like description, name, programming language, or use cases of an existing environment.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | New execution environment name. |
| `isPublic` | boolean | No | If the environment is public. |
| `useCases` | array | No | The list of use cases supported by the environment. |
| `description` | string | No | The new description of the environment. |
| `environmentId` | string | Yes | The ID of the execution environment to update. |
| `programmingLanguage` | string ("python" | "r" | "java" | "julia" | "legacy" | "other") | No | The new programming language of the environment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update External Data Source

**Slug:** `DATAROBOT_UPDATE_EXTERNAL_DATA_SOURCE`

Tool to update an external data source's canonical name or configuration parameters. Use when modifying properties of an existing data source connection.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `params` | string | No | Data source configuration parameters to update. Provide the appropriate structure based on your data source type (table-based JDBC, query-based JDBC, or filesystem). |
| `dataSourceId` | string | Yes | The ID of the external data source to update. |
| `canonicalName` | string | No | The data source's canonical name. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update External Data Sources Access Control

**Slug:** `DATAROBOT_UPDATE_EXTERNAL_DATA_SOURCES_ACCESS_CONTROL`

Tool to update access control roles for an external data source. Use when you need to grant, modify, or revoke user access permissions on a data source.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | array | Yes | List of sharing roles to update. Maximum 100 entries per request. |
| `dataSourceId` | string | Yes | The ID of the external data source to update access control for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update External Data Source Shared Roles

**Slug:** `DATAROBOT_UPDATE_EXTERNAL_DATA_SOURCES_SHARED_ROLES`

Tool to modify external data source shared roles in DataRobot. Use when you need to update permissions for users, groups, or organizations on an external data source. Updates roles by adding, modifying, or removing access permissions. Ensure at least one OWNER remains after updates.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roles` | array | Yes | Array of role update requests. Maximum 100 items. Each item can specify a recipient by 'id' (for existing users/groups/orgs) or by 'username' (for users). Use 'id' when updating permissions for known entities, 'username' when granting access by email. |
| `operation` | string | Yes | The operation to perform. Must be 'updateRoles'. |
| `dataSourceId` | string | Yes | The ID of the external data source to update shared roles for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update External Data Store

**Slug:** `DATAROBOT_UPDATE_EXTERNAL_DATA_STORE`

Tool to update an external data store configuration. Use when you need to modify the name or parameters of an existing data store connection.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `params` | object | No | Data store configuration. Can contain JDBC parameters (driverId, jdbcFields, jdbcUrl) or connector parameters (fields). |
| `dataStoreId` | string | Yes | ID of the external data store to update. |
| `canonicalName` | string | No | The user-friendly name of the data store. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
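
Both fields are optional, but an update that supplies neither is pointless, and a JDBC-backed store's `params` typically carries a `driverId` plus connection details. A sketch of assembling the body (the driver ID, URL, and field names inside `params` are hypothetical placeholders for illustration):

```python
# Sketch: assemble an UPDATE_EXTERNAL_DATA_STORE body, requiring at
# least one of the two optional fields.
def build_data_store_update(canonical_name=None, params=None):
    body = {}
    if canonical_name is not None:
        body["canonicalName"] = canonical_name
    if params is not None:
        body["params"] = params
    if not body:
        raise ValueError("provide canonicalName and/or params")
    return body


body = build_data_store_update(
    canonical_name="Warehouse (prod)",
    params={
        "driverId": "5b4752844bf542000175dbea",  # hypothetical driver id
        "jdbcUrl": "jdbc:postgresql://db.example.com:5432/analytics",
    },
)
```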

### Update External Data Store Access Control

**Slug:** `DATAROBOT_UPDATE_EXTERNAL_DATA_STORES_ACCESS_CONTROL`

Tool to update access control settings for an external data store. Use when you need to grant, modify, or revoke user access to a data store. Note: The request must not leave the data store without an owner.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | array | Yes | List of access control entries to update. Maximum 100 entries. Each entry specifies a user and their role. |
| `dataStoreId` | string | Yes | ID of the external data store to update access control for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update External Data Store Shared Roles

**Slug:** `DATAROBOT_UPDATE_EXTERNAL_DATA_STORES_SHARED_ROLES`

Tool to modify external data store shared roles in DataRobot. Use when you need to update permissions for users, groups, or organizations on an external data store. Updates roles by adding, modifying, or removing access permissions. Ensure at least one OWNER remains after updates.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roles` | array | Yes | Array of role update requests. Maximum 100 items. Each item can specify a recipient by 'id' (for existing users/groups/orgs) or by 'username' (for users). Use 'id' when updating permissions for known entities, 'username' when granting access by email. |
| `operation` | string | Yes | The operation to perform. Must be 'updateRoles'. |
| `dataStoreId` | string | Yes | The ID of the external data store to update shared roles for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
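
The shared-roles payload can be sanity-checked the same way. This is a minimal sketch: the fixed `'updateRoles'` operation and the 100-item limit come from the table above, while rejecting entries that carry both `'id'` and `'username'` (or neither) is this sketch's reading of the parameter description, not a documented server rule.

```python
def build_shared_roles_update(data_store_id: str, roles: list) -> dict:
    """Build the payload for DATAROBOT_UPDATE_EXTERNAL_DATA_STORES_SHARED_ROLES.

    'operation' must be 'updateRoles'. Each role entry addresses the
    recipient either by 'id' (known entity) or 'username' (grant by email).
    """
    if not 1 <= len(roles) <= 100:
        raise ValueError("roles must contain between 1 and 100 items")
    for role in roles:
        # Exactly one addressing key per entry (sketch-level assumption).
        if ("id" in role) == ("username" in role):
            raise ValueError("each role entry needs exactly one of 'id' or 'username'")
    return {"dataStoreId": data_store_id, "operation": "updateRoles", "roles": roles}
```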

### Update External OAuth Provider

**Slug:** `DATAROBOT_UPDATE_EXTERNAL_OAUTH_PROVIDER`

Tool to update an external OAuth provider configuration. Use when you need to modify name, client secret, consent settings, or status of an existing OAuth provider.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Updated provider name. |
| `status` | string ("active" | "inactive" | "disabled" | "deleted") | No | Provider status: active, inactive, disabled, or deleted. |
| `providerId` | string | Yes | OAuth Provider ID to update. |
| `skipConsent` | boolean | No | Whether to bypass the consent screen during OAuth flow. |
| `clientSecret` | string | No | Updated client secret credential for the OAuth provider. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Rename File or Folder in Catalog

**Slug:** `DATAROBOT_UPDATE_FILES_ALL_FILES`

Tool to rename a file or folder within a DataRobot catalog item. Use when you need to change the path or name of a file or folder in the files catalog. Supports conflict resolution strategies like rename, replace, skip, or error.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `toPath` | string | Yes | The new path for the file or folder. Folder paths should end with '/'. |
| `fromPath` | string | Yes | The file or folder path to rename. Folder paths should end with '/'. |
| `catalogId` | string | Yes | The catalog item ID containing the file or folder to rename. |
| `overwrite` | string ("rename" | "Rename" | "RENAME" | "replace" | "Replace" | "REPLACE" | "skip" | "Skip" | "SKIP" | "error" | "Error" | "ERROR") | No | How to deal with a name conflict with an existing file or folder with the same name. RENAME (default): rename the file or folder using '<filename> (n).ext' or '<folder> (n)' pattern. REPLACE: prefer the renamed file or folder. SKIP: prefer the existing file or folder. ERROR: return 'HTTP 409 Conflict' response in case of a naming conflict. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
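
Because the `overwrite` enum accepts each strategy in three casings, a caller can normalize before sending. A minimal sketch, with a hypothetical helper name; the four strategies and the trailing-`/` convention for folders come from the parameter table above.

```python
VALID_OVERWRITE = {"rename", "replace", "skip", "error"}

def build_rename_request(catalog_id: str, from_path: str, to_path: str,
                         overwrite: str = "rename") -> dict:
    """Build the payload for DATAROBOT_UPDATE_FILES_ALL_FILES.

    Normalizes the conflict strategy to uppercase (any of the three
    accepted casings would work). Folder paths should end with '/'.
    """
    if overwrite.lower() not in VALID_OVERWRITE:
        raise ValueError(f"unsupported overwrite strategy: {overwrite!r}")
    return {
        "catalogId": catalog_id,
        "fromPath": from_path,
        "toPath": to_path,
        "overwrite": overwrite.upper(),
    }
```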

### Recover Deleted File

**Slug:** `DATAROBOT_UPDATE_FILES_DELETED`

Tool to recover a deleted file from DataRobot. Use when you need to restore a previously deleted file by its catalog item ID.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `catalogId` | string | Yes | The catalog item ID of the deleted file to recover. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update GenAI Comparison Chat

**Slug:** `DATAROBOT_UPDATE_GENAI_COMPARISON_CHAT`

Tool to update a GenAI comparison chat name. Use when you need to rename an existing comparison chat for better organization.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The comparison chat identifier (24-character hex string). |
| `name` | string | Yes | The new name for the comparison chat. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update GenAI Custom Model LLM Validations

**Slug:** `DATAROBOT_UPDATE_GENAI_CUSTOM_MODEL_LLM_VALIDATIONS`

Tool to update a GenAI custom model LLM validation. Use when you need to modify validation settings, rename, or change associated model/deployment IDs. At least one optional field must be provided for the update to succeed.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID of the custom model LLM validation to edit |
| `name` | string | No | Renames the validation (1-5000 characters) |
| `modelId` | string | No | Changes the associated model ID |
| `chatModelId` | string | No | Model ID for OpenAI chat completion API calls |
| `deploymentId` | string | No | Changes the associated deployment ID |
| `promptColumnName` | string | No | Column name for prompt text input |
| `targetColumnName` | string | No | Column name for prediction output |
| `predictionTimeout` | integer | No | Sets timeout in seconds (1-600 range) |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
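
The "at least one optional field" rule and the range constraints can be enforced before the call. A sketch with a hypothetical helper; the field names, the 1-600 second timeout range, and the 1-5000 character name limit are taken from the table above.

```python
ALLOWED_LLM_VALIDATION_FIELDS = {
    "name", "modelId", "chatModelId", "deploymentId",
    "promptColumnName", "targetColumnName", "predictionTimeout",
}

def build_llm_validation_update(validation_id: str, **fields) -> dict:
    """Build the payload for DATAROBOT_UPDATE_GENAI_CUSTOM_MODEL_LLM_VALIDATIONS.

    Requires at least one optional field; checks predictionTimeout
    (1-600 seconds) and name length (1-5000 characters).
    """
    unknown = set(fields) - ALLOWED_LLM_VALIDATION_FIELDS
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    if not fields:
        raise ValueError("provide at least one optional field")
    timeout = fields.get("predictionTimeout")
    if timeout is not None and not 1 <= timeout <= 600:
        raise ValueError("predictionTimeout must be 1-600 seconds")
    name = fields.get("name")
    if name is not None and not 1 <= len(name) <= 5000:
        raise ValueError("name must be 1-5000 characters")
    return {"id": validation_id, **fields}
```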

### Update GenAI Custom Model Vector Database Validations

**Slug:** `DATAROBOT_UPDATE_GENAI_CUSTOM_MODEL_VECTOR_DB_VALIDATIONS`

Tool to update a GenAI custom model vector database validation. Use when you need to modify validation settings, rename, or change associated model/deployment IDs. At least one optional field must be provided for the update to succeed.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID of the custom model vector database validation to edit |
| `name` | string | No | If specified, renames the custom model validation to this value |
| `modelId` | string | No | Changes the associated model ID |
| `chatModelId` | string | No | Model ID for OpenAI chat completion API calls |
| `deploymentId` | string | No | Changes the associated deployment ID |
| `promptColumnName` | string | No | Changes the prompt input column name |
| `targetColumnName` | string | No | Changes the prediction output column name |
| `predictionTimeout` | integer | No | Sets timeout in seconds (1-600 range) |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update GenAI Playground

**Slug:** `DATAROBOT_UPDATE_GENAI_PLAYGROUND`

Tool to update a GenAI playground. Use when you need to modify the name or description of an existing playground.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The playground ID to update. |
| `name` | string | No | The updated name for the playground. |
| `description` | string | No | The updated description for the playground. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update User Group

**Slug:** `DATAROBOT_UPDATE_GROUPS`

Tool to update a user group by its ID. Use when you need to modify group properties like name, description, email, access role, or permissions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The new name for the user group (max length 100). |
| `email` | string | No | The email address for this user group. |
| `orgId` | string | No | The identifier of the organization to assign the user group to. |
| `groupId` | string | Yes | The unique identifier of the user group to update. |
| `description` | string | No | The description of this user group (max length 1000). |
| `accessRoleId` | string | No | The identifier of the access role to assign to the group. |
| `accountPermissions` | object | No | Account permissions to set for this user group. Each key is a permission name mapped to a boolean value. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
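
The length limits and the boolean shape of `accountPermissions` lend themselves to a local pre-check. A sketch under stated assumptions: the helper name and the `"CAN_SHARE"` permission key are hypothetical; the 100/1000 character limits and the name-to-boolean mapping come from the table above.

```python
def build_group_update(group_id: str, name=None, description=None,
                       account_permissions=None) -> dict:
    """Build the payload for DATAROBOT_UPDATE_GROUPS, enforcing the
    documented length limits (name <= 100, description <= 1000) and the
    boolean values of the accountPermissions mapping.
    """
    payload = {"groupId": group_id}
    if name is not None:
        if len(name) > 100:
            raise ValueError("name must be at most 100 characters")
        payload["name"] = name
    if description is not None:
        if len(description) > 1000:
            raise ValueError("description must be at most 1000 characters")
        payload["description"] = description
    if account_permissions is not None:
        if not all(isinstance(v, bool) for v in account_permissions.values()):
            raise ValueError("accountPermissions values must be booleans")
        payload["accountPermissions"] = account_permissions
    return payload
```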

### Update Guard Configuration

**Slug:** `DATAROBOT_UPDATE_GUARD_CONFIGURATIONS`

Tool to update a DataRobot guard configuration. Use when you need to modify guard settings such as name, description, intervention rules, or LLM configurations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Guard configuration name |
| `llmType` | string ("openAi" | "azureOpenAi" | "google" | "amazon" | "datarobot" | "nim") | No | Type of LLM used by the guard. |
| `awsModel` | string ("amazon-titan" | "anthropic-claude-2" | "anthropic-claude-3-haiku" | "anthropic-claude-3-sonnet" | "anthropic-claude-3-opus" | "anthropic-claude-3.5-sonnet-v1" | "anthropic-claude-3.5-sonnet-v2" | "amazon-nova-lite" | "amazon-nova-micro" | "amazon-nova-pro") | No | AWS model types for guard configurations. |
| `nemoInfo` | object | No | Configuration info for NeMo guards. |
| `awsRegion` | string | No | AWS model region |
| `config_id` | string | Yes | ID of the guard configuration to update |
| `modelInfo` | object | No | Configuration info for guards using deployed models. |
| `awsAccount` | string | No | ID of user credential containing an AWS account |
| `description` | string | No | Guard configuration description |
| `googleModel` | string ("chat-bison" | "google-gemini-1.5-flash" | "google-gemini-1.5-pro") | No | Google model types for guard configurations. |
| `deploymentId` | string | No | ID of deployed model, for model guards |
| `googleRegion` | string | No | Google model region |
| `intervention` | object | No | Intervention configuration for the guard. |
| `openaiApiKey` | string | No | Deprecated; use `openaiCredential` instead |
| `openaiApiBase` | string | No | Azure OpenAI API Base URL |
| `allowedActions` | array | No | The actions this guard is allowed to take |
| `openaiCredential` | string | No | ID of user credential containing an OpenAI token |
| `llmGatewayModelId` | string | No | LLM Gateway model ID to use as judge |
| `nemoEvaluatorInfo` | object | No | Configuration info for NeMo Evaluator guards. |
| `openaiDeploymentId` | string | No | OpenAI Deployment ID |
| `googleServiceAccount` | string | No | ID of user credential containing a Google service account |
| `additionalGuardConfig` | object | No | Additional configuration for the guard. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Image Augmentation Lists

**Slug:** `DATAROBOT_UPDATE_IMAGE_AUGMENTATION_LISTS`

Tool to update an existing image augmentation list in DataRobot. Use when you need to modify augmentation settings for image data preprocessing.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The name of the image augmentation list |
| `in_use` | boolean | No | Deprecated; ignored by the API |
| `project_id` | string | No | Deprecated; ignored by the API. To move an augmentation list to another project, create a new list in the other project and delete the list in this project. |
| `feature_name` | string | No | The name of the image feature containing the data to be augmented |
| `initial_list` | boolean | No | Whether this list will be used during autopilot to perform image augmentation |
| `augmentation_id` | string | Yes | The ID of the augmentation list to update |
| `transformations` | array | No | List of transformations to possibly apply to each image |
| `number_of_new_images` | integer | No | Number of new rows to add for each existing row |
| `transformation_probability` | number | No | Probability that each enabled transformation will be applied to an image. This does not apply to Horizontal or Vertical Flip, which are always applied with 50% probability. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Modeling Featurelist

**Slug:** `DATAROBOT_UPDATE_MODELING_FEATURELIST`

Tool to update a modeling featurelist's name or description. Use when you need to modify an existing featurelist's metadata.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The new name for the featurelist. Must be unique within the project. |
| `project_id` | string | Yes | The ID of the DataRobot project containing the featurelist. Obtain from DATAROBOT_LIST_PROJECTS. |
| `description` | string | No | The new description for the featurelist. |
| `featurelist_id` | string | Yes | The ID of the featurelist to update. Obtain from DATAROBOT_LIST_FEATURELISTS. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Notebook

**Slug:** `DATAROBOT_UPDATE_NOTEBOOK`

Tool to update a DataRobot notebook's name or description. Use when you need to modify notebook metadata.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The notebook ID (24-character hex string ObjectId) |
| `name` | string | No | Updated name for the notebook |
| `description` | string | No | Updated description for the notebook |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Notebook Execution Environment

**Slug:** `DATAROBOT_UPDATE_NOTEBOOK_EXECUTION_ENVIRONMENT`

Tool to update a notebook execution environment configuration. Use when you need to change environment settings, compute resources, or inactivity timeout for a notebook.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The notebook execution environment ID to update |
| `language` | string | No | The programming language of the environment (e.g., 'python', 'r') |
| `machineId` | string | No | The machine ID specifying compute resources |
| `timeToLive` | integer | No | Inactivity timeout limit in minutes. Minimum: 3, Maximum: 525600 (365 days) |
| `machineSlug` | string | No | The machine slug (alternative to machineId) |
| `environmentId` | string | No | The execution environment ID to assign to the notebook |
| `environmentSlug` | string | No | The execution environment slug (alternative to environmentId) |
| `languageVersion` | string | No | The programming language version |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
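
A caller can check the timeout range and the either/or machine selectors locally. A minimal sketch: the 3-525600 minute range comes from the table above, while rejecting `machineId` and `machineSlug` together is this sketch's reading of "alternative to machineId", not a documented server rule.

```python
def build_environment_update(env_id: str, time_to_live=None,
                             machine_id=None, machine_slug=None) -> dict:
    """Build the payload for DATAROBOT_UPDATE_NOTEBOOK_EXECUTION_ENVIRONMENT.

    timeToLive is an inactivity timeout in minutes (3 to 525600, i.e.
    365 days). machineId and machineSlug are alternatives, so this sketch
    refuses to send both.
    """
    if machine_id is not None and machine_slug is not None:
        raise ValueError("pass machineId or machineSlug, not both")
    payload = {"id": env_id}
    if time_to_live is not None:
        if not 3 <= time_to_live <= 525600:
            raise ValueError("timeToLive must be between 3 and 525600 minutes")
        payload["timeToLive"] = time_to_live
    if machine_id is not None:
        payload["machineId"] = machine_id
    if machine_slug is not None:
        payload["machineSlug"] = machine_slug
    return payload
```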

### Update Notebook Job

**Slug:** `DATAROBOT_UPDATE_NOTEBOOK_JOBS`

Tool to update an existing notebook job in DataRobot. Use when you need to modify the schedule, enable/disable a job, change the use case association, or update parameters for an existing scheduled notebook job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The unique ID of the notebook job to update. Obtain from notebook job listing or creation. |
| `enabled` | boolean | No | Whether the scheduled notebook job is enabled. Set to false to disable the job from running on schedule. |
| `schedule` | object | No | Cron-like schedule configuration for notebook job execution |
| `useCaseId` | string | No | The ID of the use case (project) this notebook is associated with. Obtain from DATAROBOT_LIST_PROJECTS or project creation. |
| `parameters` | array | No | List of environment variables to pass to the notebook execution. Each parameter has a 'name' (alphanumeric + underscores) and 'value'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
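
The "alphanumeric + underscores" rule for parameter names maps directly to a regex pre-check. A sketch with hypothetical names throughout (`RUN_MODE` is just an example variable); the name restriction and the `{'name': ..., 'value': ...}` shape come from the table above.

```python
import re

# Matches the documented name charset: alphanumerics and underscores only.
PARAM_NAME = re.compile(r"^[A-Za-z0-9_]+$")

def build_notebook_job_update(job_id: str, enabled=None, parameters=None) -> dict:
    """Build the payload for DATAROBOT_UPDATE_NOTEBOOK_JOBS."""
    payload = {"id": job_id}
    if enabled is not None:
        payload["enabled"] = enabled
    if parameters is not None:
        for param in parameters:
            if not PARAM_NAME.match(param["name"]):
                raise ValueError(f"invalid parameter name: {param['name']!r}")
        payload["parameters"] = parameters
    return payload
```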

### Update Notebook Cell Output

**Slug:** `DATAROBOT_UPDATE_NOTEBOOKS_CELLS_OUTPUT`

Tool to update the output of a specific cell in a DataRobot notebook. Use when you need to update cell output data with an MD5 validation hash.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `md5` | string | Yes | The MD5 hash of the cell content for validation |
| `cell_id` | string | Yes | The unique identifier of the cell to update |
| `notebook_id` | string | Yes | The unique identifier of the notebook containing the cell |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Notebook State

**Slug:** `DATAROBOT_UPDATE_NOTEBOOK_STATE`

Tool to update notebook state and retrieve current cell information. Use when you need to refresh the notebook's execution state and get details about all cells. Note: This operation is not supported for Codespace-type notebooks.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The notebook ID (24-character hex string). Note: Codespace-type notebooks do not support this operation. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Notification Channel Template

**Slug:** `DATAROBOT_UPDATE_NOTIFICATION_CHANNEL_TEMPLATE`

Tool to update a notification channel template in DataRobot. Use when you need to modify an existing notification channel template's configuration, such as changing its name, language, or channel-specific settings.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | The name of the notification channel. |
| `channelId` | string | Yes | The ID of the notification channel template to update. |
| `drEntities` | array | No | The IDs of the DataRobot Users, Groups, or Custom Jobs associated with the DataRobotUser, DataRobotGroup, or DataRobotCustomJob channel types (1-100 items). |
| `payloadUrl` | string | No | The payload URL of the notification channel. |
| `channelType` | string ("DataRobotCustomJob" | "DataRobotGroup" | "DataRobotUser" | "Database" | "Email" | "InApp" | "InsightsComputations" | "MSTeams" | "Slack" | "Webhook") | No | The type of the notification channel. |
| `contentType` | string ("application/json" | "application/x-www-form-urlencoded") | No | The content type of the messages of the notification channel. |
| `secretToken` | string | No | Secret token to be used for the notification channel. |
| `validateSsl` | boolean | No | Whether to validate SSL certificates for the notification channel. |
| `emailAddress` | string | No | The email address to be used in the notification channel. |
| `languageCode` | string ("en" | "es_419" | "fr" | "ja" | "ko" | "ptBR") | No | The preferred language code. |
| `customHeaders` | array | No | Custom headers and their values to be sent in the notification channel (maximum 100 items). |
| `verificationCode` | string | No | Required if the channel type is Email. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Notification Channel Template Shared Roles

**Slug:** `DATAROBOT_UPDATE_NOTIFICATION_CHANNEL_TEMPLATES_SHARED_ROLES`

Tool to update notification channel template shared roles in DataRobot. Use when you need to modify permissions for users, groups, or organizations on a notification channel template. Updates roles by adding, modifying, or removing access permissions. Ensure at least one OWNER remains after updates to avoid leaving the template without an owner.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roles` | array | Yes | Array of role update requests. Maximum 100 items. Each item can specify a recipient by 'id' (for existing users/groups/orgs) or by 'username' (for users). Use 'id' when updating permissions for known entities, 'username' when granting access by email. |
| `channelId` | string | Yes | The ID of the notification channel template to update shared roles for. |
| `operation` | string | Yes | The operation to perform. Must be 'updateRoles'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update OpenTelemetry Metrics Configurations

**Slug:** `DATAROBOT_UPDATE_OTEL_METRICS_CONFIGS`

Tool to set all OpenTelemetry metric configurations for a specified entity. Use when you need to configure which OTel metrics are collected and how they are aggregated for deployments, use cases, or other entities. This replaces all existing configurations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `values` | array | Yes | List of OpenTelemetry metric configurations to set. Replaces all existing configurations. Maximum 50 items. |
| `entityId` | string | Yes | ID of the entity to which the metrics belong (24-character hex string). |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metrics belong. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
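
Because this call replaces all existing configurations, it is worth validating the full payload before sending. A sketch with a hypothetical helper; the 24-character hex ID format, the entity type enum, and the 50-item limit all come from the table above, and the example metric name is illustrative only.

```python
import re

HEX_24 = re.compile(r"^[0-9a-fA-F]{24}$")
OTEL_ENTITY_TYPES = {
    "deployment", "use_case", "experiment_container",
    "custom_application", "workload", "workload_deployment",
}

def build_otel_metrics_config(entity_id: str, entity_type: str, values: list) -> dict:
    """Build the payload for DATAROBOT_UPDATE_OTEL_METRICS_CONFIGS.

    'values' carries the complete desired set of configurations (max 50
    items); anything not listed is removed, since this call replaces all
    existing configurations for the entity.
    """
    if not HEX_24.match(entity_id):
        raise ValueError("entityId must be a 24-character hex string")
    if entity_type not in OTEL_ENTITY_TYPES:
        raise ValueError(f"unknown entityType: {entity_type!r}")
    if len(values) > 50:
        raise ValueError("values must contain at most 50 items")
    return {"entityId": entity_id, "entityType": entity_type, "values": values}
```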

### Update OTEL Metrics Config By ID

**Slug:** `DATAROBOT_UPDATE_OTEL_METRICS_CONFIGS_BY_ID`

Tool to update an OpenTelemetry metric configuration for a specified entity. Use when you need to modify display name, aggregation method, or other settings of an existing OTEL metric.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `unit` | string ("bytes" | "nanocores" | "percentage") | No | Unit of measurement for metrics. |
| `enabled` | boolean | No | Whether the OTel metric is enabled. |
| `entityId` | string | Yes | ID of the entity to which the metric belongs. |
| `otelName` | string | No | The OTel key of the metric. |
| `entityType` | string ("deployment" | "use_case" | "experiment_container" | "custom_application" | "workload" | "workload_deployment") | Yes | Type of the entity to which the metric belongs. |
| `percentile` | number | No | The metric percentile for the percentile aggregation of histograms. Must be between 0 and 1. |
| `aggregation` | string ("sum" | "average" | "min" | "max" | "cardinality" | "percentiles" | "histogram") | No | Aggregation method for metric display. |
| `displayName` | string | No | The display name of the metric. |
| `otelMetricId` | string | Yes | The ID of the OpenTelemetry metric configuration to update. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Overall Moderation Configuration

**Slug:** `DATAROBOT_UPDATE_OVERALL_MODERATION_CONFIGURATION`

Tool to update overall moderation configuration for a custom model, custom model version, or playground. Use when you need to configure timeout behavior and NeMo Evaluator settings for guardrails.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityId` | string | Yes | ID of custom model, custom model version, or playground for this configuration. |
| `entityType` | string ("customModel" | "customModelVersion" | "playground") | Yes | Type of associated entity (customModel, customModelVersion, or playground). |
| `timeoutSec` | integer | Yes | Timeout value in seconds for any guard. Must be a non-negative integer. |
| `timeoutAction` | string ("block" | "score") | Yes | Action to take if timeout occurs: 'block' (block the request) or 'score' (allow scoring to continue). |
| `nemoEvaluatorDeploymentId` | string | No | ID of NeMo Evaluator deployment to use for all NeMo Evaluator guards. Set to null to remove. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Update Pinned Use Cases

**Slug:** `DATAROBOT_UPDATE_PINNED_USECASES`

Tool to add or remove pinned use cases in DataRobot. Use when you need to pin or unpin use cases for quick access. Accepts 1-8 use case IDs and an operation type (add/remove).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `operation` | string ("add" | "remove") | Yes | Operation to perform: 'add' to pin use cases or 'remove' to unpin them. |
| `pinnedUseCasesIds` | array | Yes | List of use case IDs to pin or unpin. Must contain 1-8 IDs (24-character hex strings). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
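
The 1-8 ID limit and the 24-character hex format can both be checked client-side. A minimal sketch, with a hypothetical helper name; the constraints come from the parameter table above.

```python
import re

HEX_24 = re.compile(r"^[0-9a-fA-F]{24}$")

def build_pinned_use_cases_update(operation: str, use_case_ids: list) -> dict:
    """Build the payload for DATAROBOT_UPDATE_PINNED_USECASES.

    Accepts 1-8 use case IDs (24-character hex strings) and an
    operation of 'add' or 'remove'.
    """
    if operation not in {"add", "remove"}:
        raise ValueError("operation must be 'add' or 'remove'")
    if not 1 <= len(use_case_ids) <= 8:
        raise ValueError("pinnedUseCasesIds must contain 1-8 IDs")
    for uc_id in use_case_ids:
        if not HEX_24.match(uc_id):
            raise ValueError(f"not a 24-character hex ID: {uc_id!r}")
    return {"operation": operation, "pinnedUseCasesIds": use_case_ids}
```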

### Update Project

**Slug:** `DATAROBOT_UPDATE_PROJECT`

Tool to update a DataRobot project's name, description, or worker settings, or to unlock its holdout. Use when you need to modify project metadata or resource allocation after project creation.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `project_id` | string | Yes | ID of the DataRobot project to update |
| `projectName` | string | No | New name for the project (max 100 characters) |
| `workerCount` | integer | No | Desired number of workers for modeling. Must not exceed available workers. Use 0 for no workers, or -1 to request maximum available |
| `gpuWorkerCount` | integer | No | Desired number of GPU workers for modeling. Must not exceed available GPU workers. Use 0 for no GPU workers, or -1 to request maximum available |
| `holdoutUnlocked` | string ("True") | No | Set to "True" to unlock the holdout (the holdout can be unlocked but not re-locked) |
| `projectDescription` | string | No | New description for the project (max 500 characters) |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
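
The worker-count sentinels (0 for no workers, -1 for the maximum available) and the length limits can be captured in a small builder. A sketch under stated assumptions: the helper name is hypothetical; the sentinel semantics and the 100/500 character limits come from the table above.

```python
def build_project_update(project_id: str, project_name=None,
                         worker_count=None, project_description=None) -> dict:
    """Build the payload for DATAROBOT_UPDATE_PROJECT.

    workerCount uses sentinel values: 0 releases all workers and -1
    requests the maximum available; anything below -1 is rejected.
    """
    payload = {"project_id": project_id}
    if project_name is not None:
        if len(project_name) > 100:
            raise ValueError("projectName must be at most 100 characters")
        payload["projectName"] = project_name
    if worker_count is not None:
        if worker_count < -1:
            raise ValueError("workerCount must be -1, 0, or a positive count")
        payload["workerCount"] = worker_count
    if project_description is not None:
        if len(project_description) > 500:
            raise ValueError("projectDescription must be at most 500 characters")
        payload["projectDescription"] = project_description
    return payload
```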

### Update Project Access Control

**Slug:** `DATAROBOT_UPDATE_PROJECT_ACCESS_CONTROL`

Tool to update access control settings for a DataRobot project. Use when you need to grant, modify, or revoke user access to a project. Allows setting roles (OWNER, USER, OBSERVER) for multiple users at once. Note: The request must leave at least one OWNER for the project.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | array | Yes | List of users and their roles to set on the project. Maximum 100 entries per request. Setting role to null revokes access. |
| `projectId` | string | Yes | Unique identifier of the project to update access control for. |
| `sendNotification` | boolean | No | Whether to send email notifications to users about the access changes. Defaults to true. |
| `includeFeatureDiscoveryEntities` | boolean | No | Whether to share all related entities with the specified users. When true, shares feature discovery entities associated with the project. Defaults to false. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

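Since a `null` role revokes access and the request is capped at 100 entries, it can help to validate the role map before calling the tool. A minimal sketch — the per-entry shape (`username`/`role` keys) follows the role descriptions above but is otherwise an assumption:

```python
def build_access_control_request(project_id, user_roles,
                                 send_notification=True):
    """Build the DATAROBOT_UPDATE_PROJECT_ACCESS_CONTROL input (sketch).

    `user_roles` maps a username to "OWNER", "USER", "OBSERVER",
    or None to revoke that user's access.
    """
    if not 1 <= len(user_roles) <= 100:
        raise ValueError("between 1 and 100 entries per request")
    for role in user_roles.values():
        if role not in ("OWNER", "USER", "OBSERVER", None):
            raise ValueError("unknown role: %r" % role)
    return {
        "projectId": project_id,
        "data": [{"username": u, "role": r} for u, r in user_roles.items()],
        "sendNotification": send_notification,
    }
```

The at-least-one-OWNER rule cannot be verified client-side without the project's full access list; the server rejects requests that would leave the project ownerless.
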
### Update Feature List

**Slug:** `DATAROBOT_UPDATE_PROJECTS_FEATURELISTS`

Tool to update an existing feature list in a DataRobot project. Use when you need to rename or update the description of a feature list.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | New name for the feature list. Must be unique within the project. |
| `projectId` | string | Yes | Unique identifier of the DataRobot project. Obtain from DATAROBOT_LIST_PROJECTS. |
| `description` | string | No | New description for the feature list. |
| `featurelistId` | string | Yes | Unique identifier of the feature list to update. Obtain from DATAROBOT_LIST_FEATURELISTS. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Projects Models

**Slug:** `DATAROBOT_UPDATE_PROJECTS_MODELS`

Tool to update a model's attributes in a DataRobot project. Use when you need to star/unstar a model or change the prediction threshold for binary classification.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The model ID to update. |
| `is_starred` | boolean | No | Mark model either as starred or unstarred. |
| `project_id` | string | Yes | The project ID containing the model. |
| `prediction_threshold` | number | No | Threshold used for binary classification in predictions. Default value is 0.5. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

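A small payload builder makes the optional fields explicit. The `[0, 1]` bound on the threshold is an assumption (reasonable for a classification probability threshold, but not stated in the table); everything else mirrors the parameters above:

```python
def build_model_update(project_id, model_id,
                       is_starred=None, prediction_threshold=None):
    """Payload sketch for DATAROBOT_UPDATE_PROJECTS_MODELS (illustrative)."""
    if prediction_threshold is not None and not 0.0 <= prediction_threshold <= 1.0:
        raise ValueError("prediction_threshold must lie in [0, 1]")
    body = {"project_id": project_id, "model_id": model_id}
    if is_starred is not None:
        body["is_starred"] = is_starred
    if prediction_threshold is not None:
        body["prediction_threshold"] = prediction_threshold  # default 0.5
    return body
```
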
### Update Payoff Matrix

**Slug:** `DATAROBOT_UPDATE_PROJECTS_PAYOFF_MATRICES`

Tool to update a payoff matrix for a DataRobot project. Use when you need to modify payoff values or the name of an existing payoff matrix for profit curve calculations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | New name for the payoff matrix. |
| `project_id` | string | Yes | The project ID |
| `payoff_matrix_id` | string | Yes | ObjectId of the payoff matrix. |
| `true_negative_value` | number | Yes | True negative value to use for profit curve calculation. |
| `true_positive_value` | number | Yes | True positive value to use for profit curve calculation. |
| `false_negative_value` | number | Yes | False negative value to use for profit curve calculation. |
| `false_positive_value` | number | Yes | False positive value to use for profit curve calculation. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

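The four payoff values drive the profit curve: each cell of the confusion matrix is weighted by its payoff and summed. A sketch of that arithmetic (the helper and its key names are illustrative, not a DataRobot API):

```python
def total_payoff(confusion, payoff):
    """Total payoff of a binary classifier at one decision threshold.

    `confusion` holds counts ("tp", "tn", "fp", "fn"); `payoff` holds
    the four monetary values from the payoff matrix above.
    """
    return (confusion["tp"] * payoff["true_positive_value"]
            + confusion["tn"] * payoff["true_negative_value"]
            + confusion["fp"] * payoff["false_positive_value"]
            + confusion["fn"] * payoff["false_negative_value"])
```

For example, with counts tp=10, tn=83, fp=5, fn=2 and payoffs of 100, 1, -20, and -50 respectively, the total payoff is 1000 + 83 - 100 - 100 = 883.
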
### Update Quotas

**Slug:** `DATAROBOT_UPDATE_QUOTAS`

Tool to update quota configuration for a DataRobot deployment resource. Use when you need to modify capacity, rules, policies, or saturation threshold of an existing quota.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `quotaId` | string | Yes | Unique identifier of the quota to update |
| `capacity` | object | No | Resource capacity configuration. |
| `policies` | array | No | List of quota policies to overwrite quota per specific consumer (max 10,000 items) |
| `defaultRules` | array | No | List of default quota rules (max 100 items) |
| `saturationThreshold` | number | No | Saturation threshold to enable guaranteed quotas - value between 0.0 and 1.0 |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

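The documented limits (threshold in [0.0, 1.0], at most 100 default rules, at most 10,000 policies) are easy to enforce before the call. An illustrative helper, assuming nothing beyond the parameter table:

```python
def build_quota_update(quota_id, capacity=None, default_rules=None,
                       policies=None, saturation_threshold=None):
    """Input sketch for DATAROBOT_UPDATE_QUOTAS (illustrative helper)."""
    if saturation_threshold is not None and not 0.0 <= saturation_threshold <= 1.0:
        raise ValueError("saturationThreshold must be between 0.0 and 1.0")
    if default_rules is not None and len(default_rules) > 100:
        raise ValueError("at most 100 default rules")
    if policies is not None and len(policies) > 10_000:
        raise ValueError("at most 10,000 policies")
    body = {"quotaId": quota_id}
    if capacity is not None:
        body["capacity"] = capacity
    if default_rules is not None:
        body["defaultRules"] = default_rules
    if policies is not None:
        body["policies"] = policies
    if saturation_threshold is not None:
        body["saturationThreshold"] = saturation_threshold
    return body
```
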
### Update Wrangling Recipe

**Slug:** `DATAROBOT_UPDATE_RECIPE`

Tool to update a DataRobot wrangling recipe. Use when modifying recipe name, description, type, or SQL query.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `sql` | string | No | Recipe SQL query. |
| `name` | string | No | New recipe name. |
| `recipeId` | string | Yes | The ID of the recipe to update. |
| `recipeType` | string ("sql" | "Sql" | "SQL" | "wrangling" | "Wrangling" | "WRANGLING" | "featureDiscovery" | "FeatureDiscovery" | "FEATURE_DISCOVERY" | "featureDiscoveryPrivatePreview" | "FeatureDiscoveryPrivatePreview" | "FEATURE_DISCOVERY_PRIVATE_PREVIEW") | No | The recipe workflow type. |
| `description` | string | No | New recipe description. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Recipe Downsampling

**Slug:** `DATAROBOT_UPDATE_RECIPES_DOWNSAMPLING`

Tool to update the downsampling configuration in a DataRobot recipe. Use when you need to modify downsampling settings for data transformation. Note: Published recipes cannot be modified; such requests are rejected with status 422.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `recipeId` | string | Yes | The ID of the recipe to update. |
| `downsampling` | object | Yes | The downsampling transformation step to apply to the recipe. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Recipe Inputs

**Slug:** `DATAROBOT_UPDATE_RECIPES_INPUTS`

Tool to update the input data sources for a DataRobot recipe. Use when you need to change the data source, dataset, or sampling configuration for a recipe. This operation implicitly restarts the initial sampling job.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `inputs` | array | Yes | List of data sources and their sampling configurations. Each input should contain 'inputType' field with either datasource or dataset configuration. |
| `recipeId` | string | Yes | ID of the recipe to update inputs for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Recipe Operations

**Slug:** `DATAROBOT_UPDATE_RECIPES_OPERATIONS`

Tool to update the operations in a DataRobot recipe. Use when modifying data transformation directives for wrangling recipes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `force` | boolean | No | If true, operations are stored even if they contain errors. |
| `recipeId` | string | Yes | The ID of the recipe to update operations for. |
| `operations` | array | Yes | List of directives to run for the recipe. Maximum 1000 operations allowed. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

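The 1000-operation cap and the `force` semantics (store even with errors, useful for saving work in progress) can be captured in a small builder. The directive shape in the usage example is illustrative — consult the recipe operations schema for real directives:

```python
def build_operations_update(recipe_id, operations, force=False):
    """Input sketch for DATAROBOT_UPDATE_RECIPES_OPERATIONS."""
    if len(operations) > 1000:
        raise ValueError("a recipe accepts at most 1000 operations")
    return {"recipeId": recipe_id, "operations": operations, "force": force}


# Hypothetical directive, for illustration only:
request = build_operations_update(
    "64a1b2c3d4e5f60718293a4b",
    [{"directive": "dropColumns", "arguments": {"columns": ["row_id"]}}],
)
```
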
### Update Recipe Settings

**Slug:** `DATAROBOT_UPDATE_RECIPES_SETTINGS`

Tool to update recipe settings that are reusable at the modeling stage. Use when modifying Spark instance size, target feature, prediction point, or Feature Discovery settings.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `target` | string | No | The feature to use as the target at the modeling stage. |
| `recipeId` | string | Yes | The ID of the recipe. |
| `weightsFeature` | string | No | The weights feature. |
| `predictionPoint` | string | No | The date column to be used as the prediction point for time-based feature engineering. |
| `sparkInstanceSize` | string ("small" | "medium" | "large") | No | Enum for Spark instance size options |
| `relationshipsConfigurationId` | string | No | [Deprecated] No effect. The relationships configuration ID field is immutable. |
| `featureDiscoverySupervisedFeatureReduction` | boolean | No | Run supervised feature reduction for Feature Discovery. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Recommended Settings

**Slug:** `DATAROBOT_UPDATE_RECOMMENDED_SETTINGS`

Tool to update recommended settings for a DataRobot entity (currently only deployments). Use when you need to configure deployment settings checklist items.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | array | Yes | List of recommended settings to update. Must contain between 1 and 20 settings. |
| `entityType` | string ("deployment" | "Deployment" | "DEPLOYMENT") | Yes | Type of the entity to update recommended settings for. Currently only 'deployment' is supported. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Registered Model

**Slug:** `DATAROBOT_UPDATE_REGISTERED_MODEL`

Tool to update a registered model in DataRobot. Use when you need to modify the name, description, or visibility settings of an existing registered model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Name of the registered model. |
| `isGlobal` | boolean | No | Make registered model global (accessible to all users in the organization) or local (accessible only to the owner and the users with whom it has been explicitly shared). |
| `description` | string | No | Description of the registered model. |
| `registeredModelId` | string | Yes | ID of the registered model to update. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Registered Model Shared Roles

**Slug:** `DATAROBOT_UPDATE_REGISTERED_MODELS_SHARED_ROLES`

Tool to modify registered model shared roles in DataRobot. Use when you need to update permissions for users, groups, or organizations on a registered model. Updates roles by adding, modifying, or removing access permissions. Ensure at least one OWNER remains after updates.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roles` | array | Yes | Array of role update requests. Maximum 100 items. Each item can specify a recipient by 'id' (for existing users/groups/orgs) or by 'username' (for users). Use 'id' when updating permissions for known entities, 'username' when granting access by email. |
| `operation` | string | Yes | The operation to perform. Must be 'updateRoles'. |
| `registeredModelId` | string | Yes | The ID of the registered model to update shared roles for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

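Each role entry identifies its recipient by exactly one of `id` (existing user, group, or org) or `username` (grant by email), per the `roles` description above. A sketch that enforces this and the documented limits (entry key names beyond those two are assumptions):

```python
def share_role_entry(role, *, id=None, username=None):
    """One entry of the `roles` array for the shared-roles tools."""
    if (id is None) == (username is None):
        raise ValueError("specify exactly one of id or username")
    entry = {"role": role}
    if id is not None:
        entry["id"] = id
    else:
        entry["username"] = username
    return entry


def build_shared_roles_request(registered_model_id, entries):
    """Input sketch for DATAROBOT_UPDATE_REGISTERED_MODELS_SHARED_ROLES."""
    if not 1 <= len(entries) <= 100:
        raise ValueError("between 1 and 100 role entries per request")
    return {
        "registeredModelId": registered_model_id,
        "operation": "updateRoles",  # the only accepted value
        "roles": entries,
    }
```

As with project access control, the server requires that at least one OWNER remain after the update.
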
### Update Secure Configuration

**Slug:** `DATAROBOT_UPDATE_SECURE_CONFIG`

Tool to update a secure configuration. Use when you need to modify the name, schema, or values of an existing secure configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Name of the secure configuration. |
| `values` | array | No | Values to associate with the secure configuration (max 100 items). |
| `schemaName` | string | No | Name of the schema used for validating the secure configuration. |
| `secureConfigId` | string | Yes | ID of the secure configuration to update. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Secure Config Shared Roles

**Slug:** `DATAROBOT_UPDATE_SECURE_CONFIGS_SHARED_ROLES`

Tool to share a secure configuration with users, groups, or organizations. Use when you need to update permissions for a secure configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roles` | array | Yes | Array of role update requests. Maximum 100 items. Each item can specify a recipient by 'id' (for existing users/groups/orgs) or by 'username' (for users). Use 'id' when updating permissions for known entities, 'username' when granting access by email. |
| `secure_config_id` | string | Yes | The ID of the secure configuration to update shared roles for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Use Case

**Slug:** `DATAROBOT_UPDATE_USE_CASE`

Tool to update an existing DataRobot use case's metadata. Use when you need to modify a use case's name or description.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Name of the use case (max 100 characters). If null, removes the existing name. |
| `useCaseId` | string | Yes | Unique identifier of the use case to update (24-character hex string). |
| `description` | string | No | Description of the use case providing context about its purpose and goals. If null, removes the existing description. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

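Because an explicit `null` removes the existing name or description, "leave unchanged" and "clear" must be distinguished when building the payload — omitting a field is not the same as sending `None`. A sentinel handles this cleanly (the helper is illustrative):

```python
# Sentinel distinguishing "leave unchanged" from an explicit null,
# since sending null removes the existing name or description.
_UNSET = object()


def build_use_case_update(use_case_id, name=_UNSET, description=_UNSET):
    """Input sketch for DATAROBOT_UPDATE_USE_CASE (illustrative helper)."""
    if name is not _UNSET and name is not None and len(name) > 100:
        raise ValueError("name must be at most 100 characters")
    body = {"useCaseId": use_case_id}
    if name is not _UNSET:
        body["name"] = name  # None here removes the current name
    if description is not _UNSET:
        body["description"] = description
    return body
```
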
### Move Entity Between Use Cases

**Slug:** `DATAROBOT_UPDATE_USE_CASES_BY_ID`

Tool to move an entity (project, dataset, notebook, etc.) from one use case to another. Use when you need to reorganize entities across use cases or reassign entity ownership.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entityId` | string | Yes | The ID of the specific entity to move to the destination use case. |
| `useCaseId` | string | Yes | The ID of the destination use case where the entity will be moved. |
| `includeDataset` | boolean | No | Include dataset migration when a project is migrated. Only applicable when referenceCollectionType is 'projects'. |
| `referenceCollectionType` | string ("projects" | "datasets" | "files" | "notebooks" | "applications" | "recipes" | "customModelVersions" | "registeredModelVersions" | "deployments" | "customApplications" | "customJobs") | Yes | The type of entity being moved (projects, datasets, notebooks, etc.). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

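Two of the constraints above — the closed set of `referenceCollectionType` values and `includeDataset` being meaningful only for project moves — can be validated before the call. An illustrative sketch:

```python
VALID_COLLECTIONS = {
    "projects", "datasets", "files", "notebooks", "applications",
    "recipes", "customModelVersions", "registeredModelVersions",
    "deployments", "customApplications", "customJobs",
}


def build_move_request(use_case_id, entity_id, collection_type,
                       include_dataset=None):
    """Input sketch for DATAROBOT_UPDATE_USE_CASES_BY_ID."""
    if collection_type not in VALID_COLLECTIONS:
        raise ValueError("unknown referenceCollectionType: %r" % collection_type)
    if include_dataset is not None and collection_type != "projects":
        raise ValueError("includeDataset only applies when moving projects")
    body = {
        "useCaseId": use_case_id,
        "entityId": entity_id,
        "referenceCollectionType": collection_type,
    }
    if include_dataset is not None:
        body["includeDataset"] = include_dataset
    return body
```
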
### Update Use Case Shared Roles

**Slug:** `DATAROBOT_UPDATE_USE_CASE_SHARED_ROLES`

Tool to update a use case's access control list. Use when you need to change who has access to a use case and their roles. Supports sharing with users (by username), groups (by name), or any entity (by ID).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roles` | array | Yes | List of sharing role objects. Each object specifies a user, group, or organization and their assigned role. Use username for user-based sharing, name for group-based sharing, or id for ID-based sharing. |
| `operation` | string ("updateRoles") | No | The name of the action being taken. Only 'updateRoles' is supported. |
| `useCaseId` | string | Yes | The ID of the use case to update. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update User Notification

**Slug:** `DATAROBOT_UPDATE_USER_NOTIFICATION_BY_ID`

Tool to mark a DataRobot user notification as read. Use when you need to update the status of a notification by its unique identifier.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `userNotificationId` | string | Yes | Unique identifier of the notification. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update User Notifications

**Slug:** `DATAROBOT_UPDATE_USER_NOTIFICATIONS`

Tool to mark all user notifications as read. Use when you need to mark all pending notifications for the authenticated user as read. This operation affects all unread notifications at once.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Value Tracker

**Slug:** `DATAROBOT_UPDATE_VALUE_TRACKER`

Tool to update a DataRobot value tracker. Use when you need to modify value tracker properties like name, stage, business impact, or monetary values.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | No | Name of the value tracker |
| `notes` | string | No | User notes |
| `owner` | object | No | DataRobot user information for value tracker owner |
| `stage` | string ("ideation" | "queued" | "dataPrepAndModeling" | "validatingAndDeploying" | "inProduction" | "retired" | "onHold") | No | Current stage of value tracker lifecycle |
| `description` | string | No | Value tracker description |
| `feasibility` | integer | No | Assessment of how the value tracker can be accomplished across multiple dimensions (1-5 scale) |
| `targetDates` | array | No | Array of target date objects for reaching specific stages |
| `realizedValue` | object | No | Monetary value with currency and optional details |
| `businessImpact` | integer | No | Expected effects on overall business operations (1-5 scale) |
| `potentialValue` | object | No | Monetary value with currency and optional details |
| `valueTrackerId` | string | Yes | The ID of the value tracker to update |
| `predictionTargets` | array | No | Array of prediction target names |
| `potentialValueTemplate` | object | No | Template type and parameter information for potential value calculation |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Update Value Tracker Shared Roles

**Slug:** `DATAROBOT_UPDATE_VALUE_TRACKER_SHARED_ROLES`

Tool to update shared roles for a DataRobot value tracker. Use when you need to share a value tracker with users, groups, or organizations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `roles` | array | Yes | A list of sharing role assignments to apply to the value tracker. |
| `operation` | string | Yes | The operation to perform. Only 'updateRoles' is supported. |
| `valueTrackerId` | string | Yes | The ID of the value tracker to update shared roles for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Validate Deployment Model Replacement

**Slug:** `DATAROBOT_VALIDATE_DEPLOYMENT_MODEL_REPLACEMENT`

Tool to validate whether a model can replace the current champion model in a deployment. Returns detailed validation checks for model compatibility. Use before replacing a deployment model to ensure compatibility.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelId` | string | No | ID of the model to replace the deployment's champion model. Required if modelPackageId is not provided. |
| `deploymentId` | string | Yes | Unique identifier of the deployment to validate model replacement for. |
| `modelPackageId` | string | No | ID of the model package to replace the deployment's champion model. Required if modelId is not provided. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

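The candidate model is identified by `modelId` or `modelPackageId`; the sketch below treats them as mutually exclusive (the table only states that at least one is required, so exclusivity is an assumption):

```python
def build_replacement_validation(deployment_id, model_id=None,
                                 model_package_id=None):
    """Input sketch for DATAROBOT_VALIDATE_DEPLOYMENT_MODEL_REPLACEMENT."""
    if (model_id is None) == (model_package_id is None):
        raise ValueError("provide exactly one of modelId or modelPackageId")
    body = {"deploymentId": deployment_id}
    if model_id is not None:
        body["modelId"] = model_id
    else:
        body["modelPackageId"] = model_package_id
    return body
```
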
### Validate User Blueprint

**Slug:** `DATAROBOT_VALIDATE_USER_BLUEPRINT`

Tool to validate a user-defined blueprint (custom modeling pipeline) in DataRobot. Use when you need to verify that a blueprint structure is valid before training models with it. Returns validation context including any errors, warnings, or informational messages for each task vertex in the blueprint DAG.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `blueprint` | string | Yes | The blueprint representation as a directed acyclic graph (DAG) defining a pipeline of data through tasks and a final estimator. Can be provided as either an array or object structure depending on the blueprint format. |
| `projectId` | string | No | Project ID (24-character hex string) for the currently active project. The user blueprint is validated in the context of this project. Required for project-specific tasks such as column selection tasks. |
| `isInplaceEditor` | boolean | No | Set to true if the request is sent from the in-place user blueprint editor, false otherwise. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

### Verify External Data Store SQL

**Slug:** `DATAROBOT_VERIFY_EXTERNAL_DATA_STORE_SQL`

Tool to verify a SQL query for an external data store. Use when you need to test SQL syntax and retrieve sample results. Returns column names and up to maxRows records if successful.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user` | string | No | Username for data store authentication. Use credentialId instead for better security. |
| `query` | string | Yes | SQL query to verify. |
| `maxRows` | integer | Yes | Maximum number of rows of data to return if the query is successful. Must be between 0 and 999. |
| `password` | string | No | Password for data store authentication. Use credentialId instead for better security. |
| `dataStoreId` | string | Yes | ID of the external data store to verify the SQL query against. |
| `useKerberos` | boolean | No | Whether to use Kerberos for data store authentication. |
| `credentialId` | string | No | ID of the set of credentials to use instead of username and password. Use DATAROBOT_LIST_CREDENTIALS to find available credentials. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |

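The `maxRows` bound and the recommendation to prefer `credentialId` over inline `user`/`password` can be enforced up front. A sketch assuming nothing beyond the parameter table:

```python
def build_sql_verification(data_store_id, query, max_rows,
                           credential_id=None, user=None, password=None,
                           use_kerberos=None):
    """Input sketch for DATAROBOT_VERIFY_EXTERNAL_DATA_STORE_SQL."""
    if not 0 <= max_rows <= 999:
        raise ValueError("maxRows must be between 0 and 999")
    if credential_id is not None and (user is not None or password is not None):
        raise ValueError("pass credentialId instead of user/password, not both")
    body = {"dataStoreId": data_store_id, "query": query, "maxRows": max_rows}
    if credential_id is not None:
        body["credentialId"] = credential_id  # preferred over raw credentials
    if user is not None:
        body["user"] = user
    if password is not None:
        body["password"] = password
    if use_kerberos is not None:
        body["useKerberos"] = use_kerberos
    return body
```
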