My Workflow¶
A simple n8n workflow that demonstrates content filtering using AI-powered guardrails to detect and prevent potentially harmful or inappropriate content from being processed.
Purpose¶
No business context provided yet — add a context.md to enrich this documentation.
How It Works¶
This workflow follows a straightforward two-step process:
- Manual Trigger: The workflow starts when a user manually clicks the "Execute workflow" button in the n8n interface
- Content Filtering: Input content is passed through AI-powered guardrails that analyze it for potential jailbreak attempts or harmful content using a configurable threshold of 0.7
The workflow is designed to act as a content safety filter, ensuring that any text or prompts processed meet safety standards before proceeding to downstream operations.
Workflow Diagram¶
graph TD
A["When clicking 'Execute workflow'<br/>(Manual Trigger)"] --> B["Guardrails<br/>(Content Safety Filter)"]
Trigger¶
Manual Trigger: This workflow is triggered manually by clicking the "Execute workflow" button in the n8n interface. This makes it suitable for testing, one-off content validation, or integration into larger workflows that need on-demand content filtering.
Nodes Used¶
| Node Type | Node Name | Purpose |
|---|---|---|
| Manual Trigger | When clicking 'Execute workflow' | Starts the workflow when manually executed |
| Guardrails | Guardrails | Filters content for jailbreak attempts and harmful content with AI analysis |
External Services & Credentials Required¶
The Guardrails node requires: - AI/LLM Service Access: Credentials for the underlying AI service used for content analysis (specific service depends on n8n configuration) - API Keys: Authentication tokens for the AI service being used for guardrail analysis
Note: Specific credential requirements depend on which AI provider is configured in your n8n instance.
Environment Variables¶
No specific environment variables are required for this workflow, though the underlying AI service may require configuration at the n8n instance level.
Data Flow¶
Input: - Content/text to be analyzed (provided when manually executing the workflow) - Can accept any text input that needs safety validation
Processing: - Content is analyzed against jailbreak detection algorithms - Threshold of 0.7 is applied to determine if content passes safety checks - Custom prompts are enabled for more nuanced analysis
Output: - Filtered/validated content that has passed safety checks - Potential rejection or flagging of content that fails guardrail validation
Error Handling¶
This workflow does not implement explicit error handling paths. If the Guardrails node fails or detects prohibited content, the workflow will terminate at that point. Consider adding error handling nodes for production use cases.
Known Limitations¶
- Workflow is currently archived and inactive
- No automated triggering mechanism - requires manual execution
- Limited to single-pass content filtering without retry logic
- Guardrail effectiveness depends on the quality and configuration of the underlying AI model
Related Workflows¶
No related workflows specified in the current context.
Setup Instructions¶
- Import Workflow: Import this workflow JSON into your n8n instance
- Configure Credentials:
- Set up credentials for the AI service used by the Guardrails node
- Ensure your n8n instance has access to the required AI/LLM provider
- Test Configuration:
- Activate the workflow (currently archived)
- Click "Execute workflow" to test with sample content
- Verify that the guardrails are working as expected with your AI provider
- Customize Settings:
- Adjust the jailbreak detection threshold (currently 0.7) based on your requirements
- Modify custom prompts in the Guardrails node if needed
- Integration: Connect this workflow to other workflows or modify the trigger type for automated content filtering
Note: This workflow is currently archived. You'll need to unarchive it before activation and use.