# Audio Report V-test
A minimal workflow for transcribing audio recordings using OpenAI's transcription service. This workflow appears to be in a development or testing phase, containing only the core transcription functionality.
## Purpose
No business context provided yet — add a context.md to enrich this documentation.
Based on the workflow tags (VoiceNote Reports, Content, AI agent), this appears to be part of a larger system for processing voice notes and generating content reports, possibly for educational or coaching purposes.
## How It Works
This workflow consists of a single operation:
- Audio Transcription: Takes an audio file and converts it to text using OpenAI's Whisper transcription service
The workflow is currently archived and inactive, suggesting it's either a test version or has been superseded by a more complete implementation.
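The node's behavior can be sketched outside n8n with a plain API call. The following is a minimal illustration, not the node's actual implementation: the `whisper-1` model name matches OpenAI's transcription API, while the function name and file path are assumptions. Passing the client in as a parameter keeps the sketch easy to stub in tests.

```python
def transcribe_recording(client, audio_path, model="whisper-1"):
    """Send an audio file to an OpenAI-style transcription endpoint
    and return the resulting text."""
    with open(audio_path, "rb") as audio_file:
        # Mirrors the OpenAI Python SDK's audio transcription call
        result = client.audio.transcriptions.create(model=model, file=audio_file)
    return result.text
```

With the official `openai` package this would be invoked as `transcribe_recording(OpenAI(), "note.m4a")`, using credentials equivalent to those the n8n node reads from its credential store.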
## Mermaid Diagram

```mermaid
graph TD
  A[Audio Input] --> B[Transcribe a recording]
  B --> C[Transcribed Text Output]
```
## Trigger

This workflow has no configured trigger (triggerCount: 0). It would need to be:

- Manually executed
- Called by another workflow
- Triggered via webhook (if configured)
## Nodes Used
| Node Type | Node Name | Purpose |
|---|---|---|
| OpenAI (LangChain) | Transcribe a recording | Converts audio files to text using OpenAI's transcription API |
## External Services & Credentials Required

### OpenAI
- Service: OpenAI API for audio transcription
- Credentials Needed: OpenAI API key
- Purpose: Audio-to-text transcription using Whisper model
## Environment Variables
No specific environment variables are configured in this workflow. The OpenAI node would use credentials stored in n8n's credential management system.
## Data Flow

### Input

- Audio file (supported formats: mp3, mp4, mpeg, mpga, m4a, wav, webm)

### Output

- Transcribed text from the audio content

### Data Structure

The workflow expects audio data as input and produces text output. The exact data structure would depend on how the workflow is triggered and how the audio is provided.
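Since the workflow does no input validation itself, a caller could pre-check files against the supported-format list above before invoking it. A small sketch (the set of extensions simply mirrors the Input section; the function name is illustrative):

```python
# Extensions accepted by OpenAI's transcription service, per the Input section
SUPPORTED_FORMATS = {"mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"}

def is_supported_audio(filename):
    """Return True if the filename's extension is a supported audio format."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext in SUPPORTED_FORMATS
```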
## Error Handling

No explicit error handling is configured in this workflow. Any errors would use n8n's default error handling:

- API failures from OpenAI would stop execution
- Invalid audio formats would cause the transcription to fail
- Network issues would result in workflow failure
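If retry logic were added (see Known Limitations below), it would typically wrap the transcription call in exponential backoff. A generic sketch, not part of the workflow; the attempt count and delays are arbitrary choices:

```python
import time

def with_retries(func, attempts=3, base_delay=1.0):
    """Call func(), retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            # Wait 1x, 2x, 4x, ... the base delay between attempts
            time.sleep(base_delay * (2 ** attempt))
```

In n8n itself, a similar effect can be achieved with the node's built-in retry-on-fail settings rather than custom code.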
## Known Limitations
- Workflow is currently archived and inactive
- No trigger configured - requires manual execution or external activation
- Single-node workflow with no additional processing or output formatting
- No error handling or retry logic
- Limited to OpenAI's transcription service capabilities and file size limits
## Related Workflows

Based on the tags, this workflow may be related to:

- Other VoiceNote Reports workflows
- YouTube content processing workflows
- AI agent implementations for content generation
## Setup Instructions

1. **Import the Workflow**
   - Import the workflow JSON into your n8n instance
2. **Configure OpenAI Credentials**
   - Go to Settings > Credentials in n8n
   - Create new OpenAI credentials
   - Add your OpenAI API key
3. **Assign Credentials**
   - Open the "Transcribe a recording" node
   - Select your OpenAI credentials from the dropdown
4. **Configure a Trigger (Required)**
   - Add a trigger node (Webhook, Manual, etc.)
   - Connect it to the transcription node
5. **Test the Workflow**
   - Activate the workflow
   - Provide an audio file through your chosen trigger method
   - Verify the transcription output
6. **Production Setup**
   - Add error handling nodes
   - Configure appropriate triggers for your use case
   - Add output formatting or storage nodes as needed
Note: This workflow is currently archived. You may want to unarchive it and add additional functionality before using it in production.