
Test OpenRouter Response Format

A testing workflow that validates OpenRouter API response formats by making direct HTTP calls to the OpenRouter chat completions endpoint and analyzing the structure of returned data.

Purpose

No business context provided yet — add a context.md to enrich this documentation.

How It Works

  1. Webhook receives test request - Accepts POST requests with optional model and query parameters
  2. Makes a direct API call to OpenRouter - Sends a chat completion request with a system prompt instructing the model to respond in JSON (an equivalent curl call is sketched after this list)
  3. Extracts and analyzes response structure - Parses the API response to examine content types, tool calls, reasoning, and message structure
  4. Returns analysis results - Provides detailed breakdown of the response format for testing purposes
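
The call made by the HTTP Request node can be reproduced outside n8n for comparison. The sketch below is illustrative only: the workflow's exact system prompt is not documented here, so "Respond in JSON." stands in for whatever JSON-instructing prompt the node actually sends, and the key is read from an OPENROUTER_API_KEY shell variable.

curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5.2",
    "messages": [
      {"role": "system", "content": "Respond in JSON."},
      {"role": "user", "content": "Hello"}
    ]
  }'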

Workflow Diagram

graph TD
    A[Webhook] --> B[Call OpenRouter Raw]
    B --> C[Extract Response]
    C --> D[Respond]

Trigger

Webhook (POST): test-openrouter-format

Accepts POST requests with optional parameters:

  • model: OpenRouter model to test (defaults to 'openai/gpt-5.2')
  • query: Test message to send (defaults to 'Hello')
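
For example, assuming the workflow falls back to the defaults described above when the fields are omitted, a call with an empty JSON body exercises both defaults:

curl -X POST https://your-n8n-instance/webhook/test-openrouter-format \
  -H "Content-Type: application/json" \
  -d '{}'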

Nodes Used

Node Type            Node Name             Purpose
Webhook              Webhook               Receives incoming test requests via HTTP POST
HTTP Request         Call OpenRouter Raw   Makes direct API calls to the OpenRouter chat completions endpoint
Set                  Extract Response      Analyzes and extracts response structure details
Respond to Webhook   Respond               Returns analysis results to the caller

External Services & Credentials Required

OpenRouter API

  • Service: OpenRouter chat completions API
  • Endpoint: https://openrouter.ai/api/v1/chat/completions
  • Authentication: Bearer token (API key)
  • Required Headers:
    • Authorization: Bearer [API_KEY]
    • Content-Type: application/json

⚠️ Security Note: The workflow currently contains a hardcoded API key which should be replaced with a credential reference or environment variable.

Environment Variables

No environment variables are currently used. Consider moving the API key to:

  • n8n credential store
  • Environment variable: OPENROUTER_API_KEY
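
A minimal sketch of the environment-variable approach, assuming n8n is started from the CLI and environment access is permitted in expressions; the key value and expression are placeholders to adapt to your setup:

# Make the key available to the n8n process before it starts
export OPENROUTER_API_KEY="<your-openrouter-key>"
n8n start

# Then, in the HTTP Request node, set the Authorization header value to an
# n8n expression such as:  Bearer {{ $env.OPENROUTER_API_KEY }}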

Data Flow

Input

{
  "model": "openai/gpt-5.2",
  "query": "Hello"
}

Output

{
  "model": "model_name",
  "content_type": "string|object",
  "content_is_array": "true|false",
  "content": "truncated_content_preview",
  "tool_calls": "truncated_tool_calls_json",
  "reasoning": "truncated_reasoning_json",
  "finish_reason": "stop|tool_calls|length",
  "full_message": "truncated_full_message_json"
}
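
The exact expressions used by the Extract Response node are not reproduced here, but the fields above roughly correspond to paths in the standard OpenAI-style completion object that OpenRouter returns. As an illustrative sketch, given a raw response saved to response.json:

jq '{
  model: .model,
  content: .choices[0].message.content,
  tool_calls: .choices[0].message.tool_calls,
  reasoning: .choices[0].message.reasoning,
  finish_reason: .choices[0].finish_reason
}' response.json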

Error Handling

The workflow has minimal error handling:

  • HTTP Request node will fail if OpenRouter API is unreachable or returns errors
  • No retry logic or graceful error responses implemented
  • Failed requests will return n8n's default error response
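
Callers can at least detect failures by checking the HTTP status code of the webhook response. The sketch below assumes the same placeholder URL used in the setup instructions:

status=$(curl -s -o /tmp/openrouter-test.json -w "%{http_code}" \
  -X POST https://your-n8n-instance/webhook/test-openrouter-format \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello"}')
if [ "$status" -ne 200 ]; then
  echo "Workflow call failed with HTTP $status" >&2
fi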

Known Limitations

  • API key is hardcoded in the workflow (security risk)
  • No input validation on webhook parameters
  • Response content is truncated for analysis (may miss important data in large responses)
  • No error handling for malformed API responses
  • Workflow is marked as inactive and appears to be for testing purposes only

Related Workflows

No related workflows identified from the current context.

Setup Instructions

  1. Import the workflow into your n8n instance

  2. Secure the API credentials:

    • Remove the hardcoded API key from the HTTP Request node
    • Create an OpenRouter credential in n8n or use environment variables
    • Update the Authorization header to reference the credential
  3. Configure the webhook:

    • The webhook path is set to test-openrouter-format
    • Ensure your n8n instance can receive external HTTP requests if testing from outside
  4. Test the workflow:

    curl -X POST https://your-n8n-instance/webhook/test-openrouter-format \
      -H "Content-Type: application/json" \
      -d '{"model": "openai/gpt-4", "query": "Test message"}'
    

  5. Activate the workflow when ready for testing

  6. Review security settings:

    • Consider restricting webhook access if needed
    • Ensure API keys are properly secured
    • Monitor usage to avoid unexpected API costs