# Audio-Visual Trainer Analysis Workflow
This workflow provides automated analysis of trainer videos and audio recordings, using AI-powered transcription and performance evaluation to help trainers improve their delivery, engagement, and communication effectiveness.
## Purpose
This workflow serves as an automated trainer assessment system that:

- Analyzes video/audio recordings of training sessions
- Provides detailed feedback on speaking patterns, pace, and engagement
- Generates actionable recommendations for improvement
- Delivers comprehensive performance reports with scoring and grading
## How It Works
1. Receive Analysis Request: A webhook receives a request containing video metadata and callback information
2. Extract Video Data: The payload is parsed to extract the video URL, recording details, and analysis parameters
3. Prepare for Transcription: The video URL is formatted with transcription settings optimized for trainer content
4. Submit to AssemblyAI: The video is sent to AssemblyAI for professional transcription with speaker detection
5. Wait for Processing: A 10-second delay allows initial processing to begin
6. Retrieve Transcript: The completed transcription is fetched with word-level timestamps
7. Analyze Speech Patterns: Custom analysis detects filler words, pauses, speaking pace, and sentence structure
8. Generate AI Insights: OpenAI GPT-4 provides expert analysis of trainer performance and recommendations
9. Calculate Engagement Score: Interactive elements, questions, and audience engagement indicators are measured
10. Compile Final Report: All analysis components are combined into a comprehensive performance report
11. Send Results: The complete analysis is delivered to the callback endpoint
12. Respond to Webhook: A success confirmation is returned to the original requester
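The submit-to-AssemblyAI step amounts to one HTTP call against AssemblyAI's public v2 transcript endpoint. The sketch below builds that request as a plain object (so it can be reused in an n8n Code node or any `fetch`-capable runtime); the specific transcription settings are illustrative assumptions, not necessarily the exact options configured in this workflow's nodes.

```javascript
// Build the AssemblyAI transcription request for a trainer video.
// speaker_labels/punctuate are illustrative defaults (assumptions),
// not confirmed settings from the workflow itself.
function buildTranscriptRequest(videoUrl, apiKey) {
  return {
    url: "https://api.assemblyai.com/v2/transcript",
    options: {
      method: "POST",
      headers: {
        authorization: apiKey, // AssemblyAI expects the raw API key here
        "content-type": "application/json",
      },
      body: JSON.stringify({
        audio_url: videoUrl,   // AssemblyAI accepts audio and video URLs
        speaker_labels: true,  // speaker detection, as described above
        punctuate: true,
      }),
    },
  };
}

// Usage sketch (inside an n8n Code node):
// const { url, options } = buildTranscriptRequest(video, key);
// const job = await fetch(url, options).then(r => r.json()); // keep job.id for polling
```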
## Workflow Diagram

```mermaid
graph TD
    A[Webhook Trigger] --> B[Extract Payload]
    B --> C[Prepare for Transcription]
    C --> D[Submit to AssemblyAI]
    D --> E[Wait for Processing]
    E --> F[Get Transcription Result]
    F --> G[Analyze Speech Patterns]
    G --> H[AI Performance Analysis]
    G --> I[Engagement Analysis]
    H --> J[Compile Final Report]
    I --> J
    J --> K[Send Results to App]
    K --> L[Webhook Response]
    %% Error handling path
    A --> M[Error Handler]
    M --> N[Send Error Callback]
```
## Trigger

Webhook: `/webhook/analyze-trainer-video`
- Methods: POST, GET
- Webhook ID: trainer-video-analysis
- Response Mode: Response node (returns immediate confirmation)
## Nodes Used
| Node Type | Node Name | Purpose |
|---|---|---|
| Webhook | Webhook Trigger | Receives analysis requests via HTTP |
| Code | Extract Payload | Parses incoming data and validates required fields |
| Code | Prepare for Transcription | Formats video URL and transcription settings |
| HTTP Request | Submit to AssemblyAI | Sends video to transcription service |
| Wait | Wait for Processing | Delays execution for initial processing |
| HTTP Request | Get Transcription Result | Retrieves completed transcript |
| Function | Analyze Speech Patterns | Analyzes filler words, pauses, and speaking pace |
| HTTP Request | AI Performance Analysis | Gets expert analysis from OpenAI GPT-4 |
| Function | Engagement Analysis | Calculates audience engagement metrics |
| Function | Compile Final Report | Combines all analysis into structured report |
| HTTP Request | Send Results to App | Delivers results to callback endpoint |
| Respond to Webhook | Webhook Response | Returns confirmation to original request |
| Function | Error Handler | Formats error responses for failed analyses |
| HTTP Request | Send Error Callback | Sends error notifications to callback endpoint |
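A minimal sketch of the kind of logic the "Analyze Speech Patterns" node performs follows. The filler-word list and pace thresholds are assumptions chosen for illustration; the actual node may use different values.

```javascript
// Sketch of speech-pattern metrics computed from transcript text.
// FILLERS and the pace thresholds below are illustrative assumptions.
const FILLERS = ["um", "uh", "like", "you know", "basically", "actually"];

function analyzeSpeechPatterns(transcriptText, durationSeconds) {
  const text = transcriptText.toLowerCase();
  const words = text.split(/\s+/).filter(Boolean);

  // Count filler-word occurrences (multi-word fillers matched on raw text)
  const fillerCounts = {};
  for (const filler of FILLERS) {
    const matches = text.match(new RegExp(`\\b${filler}\\b`, "g")) || [];
    if (matches.length) fillerCounts[filler] = matches.length;
  }

  const wordsPerMinute = Math.round(words.length / (durationSeconds / 60));
  return {
    wordCount: words.length,
    wordsPerMinute,
    fillerCounts,
    pace: wordsPerMinute < 110 ? "slow" : wordsPerMinute > 170 ? "fast" : "comfortable",
  };
}
```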
## External Services & Credentials Required

### AssemblyAI
- Purpose: Professional audio/video transcription with speaker detection
- Credential Type: API Key
- ⚠️ Security Issue: API key is currently hardcoded in workflow
- Required Setup: Create AssemblyAI credential in n8n settings
### OpenAI
- Purpose: AI-powered trainer performance analysis
- Credential Type: OpenAI API credential
- Credential Name: "Waringa"
- Model Used: GPT-4
### Supabase
- Purpose: Callback endpoint for delivering analysis results
- Endpoint: `https://ecwihbiaztxsfouvqzam.supabase.co/functions/v1/video-analysis-callback`
- Authentication: Bearer token (anon)
## Environment Variables

No environment variables are used in this workflow. All configuration is handled through:

- Hardcoded API endpoints
- n8n credential system
- Direct configuration in node parameters
## Data Flow

### Input Format
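The original payload example did not survive export. A plausible shape, inferred from the workflow description (all field names are assumptions, not the confirmed schema):

```json
{
  "videoUrl": "https://example.com/recordings/session-042.mp4",
  "analysisId": "abc-123",
  "recording": {
    "trainerName": "Jane Doe",
    "sessionTitle": "Onboarding Basics",
    "durationSeconds": 1800
  },
  "callbackUrl": "https://ecwihbiaztxsfouvqzam.supabase.co/functions/v1/video-analysis-callback"
}
```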
### Output Format
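The original output example also did not survive export. A sketch of a plausible report shape, assembled from the analysis components described above (field names and values are illustrative assumptions):

```json
{
  "analysisId": "abc-123",
  "status": "completed",
  "transcript": { "text": "…", "wordCount": 4210 },
  "speechPatterns": {
    "wordsPerMinute": 142,
    "fillerCounts": { "um": 12, "like": 7 },
    "longPauses": 3
  },
  "aiAnalysis": {
    "summary": "…",
    "recommendations": ["…"]
  },
  "engagementScore": 78,
  "overallScore": 82,
  "grade": "B",
  "completedAt": "2024-01-01T12:00:00Z"
}
```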
## Error Handling
The workflow includes comprehensive error handling:
- Error Handler Node: Catches processing failures and formats error responses
- Error Callback: Sends failure notifications to the callback endpoint with:
    - Error type and message
    - Timestamp of failure
    - Analysis ID for tracking
    - Null values for all analysis fields
- Graceful Degradation: Failed analyses return structured error responses rather than breaking the workflow
- Retry Logic: Some nodes are configured to retry on failure
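The error payload described above can be sketched as follows; the field names are assumptions, and the point being illustrated is that every analysis field is explicitly null so the callback consumer always receives a stable shape.

```javascript
// Sketch of the structured error response the Error Handler formats.
// Field names are assumptions, not the workflow's confirmed schema.
function formatErrorResponse(analysisId, error) {
  return {
    analysisId,                        // analysis ID for tracking
    status: "failed",
    error: {
      type: error.name || "Error",     // error type
      message: error.message,          // error message
    },
    failedAt: new Date().toISOString(), // timestamp of failure
    // Null out every analysis field rather than omitting them
    transcript: null,
    speechPatterns: null,
    aiAnalysis: null,
    engagementScore: null,
    overallScore: null,
  };
}
```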
## Known Limitations
- Security Vulnerability: AssemblyAI API key is hardcoded in the workflow instead of using n8n credentials
- Fixed Wait Time: 10-second processing delay may not be sufficient for longer videos
- Single Language: Optimized for English-language training content
- Video Size Limits: Dependent on AssemblyAI's file size and duration restrictions
- No Progress Updates: No intermediate status updates during long processing times
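The fixed-wait limitation can be addressed by polling the transcript's status until AssemblyAI reports `completed` or `error`, rather than sleeping for a flat 10 seconds. A sketch with an injected `fetchStatus` function (so the retry loop is testable); the attempt budget and backoff values are assumptions to tune:

```javascript
// Poll a transcript's status until it finishes, instead of a fixed wait.
// `fetchStatus` is injected (e.g. a function wrapping a GET to
// https://api.assemblyai.com/v2/transcript/{id}) so the loop is testable.
async function pollTranscript(fetchStatus, { maxAttempts = 20, baseDelayMs = 5000 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await fetchStatus();
    if (job.status === "completed" || job.status === "error") return job;
    // Linear backoff: 5s, 10s, 15s, ... (an assumption; tune as needed)
    await new Promise(resolve => setTimeout(resolve, baseDelayMs * (attempt + 1)));
  }
  throw new Error("Transcription did not finish within the polling budget");
}
```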
## Related Workflows
No related workflows are documented in the current context.
## Setup Instructions
1. Import Workflow:
    - Copy the workflow JSON
    - Import into your n8n instance
    - Activate the workflow

2. Configure Credentials:
    - Create an AssemblyAI API credential in n8n
    - Set up an OpenAI API credential named "Waringa"
    - Update hardcoded API key references to use credentials

3. Security Hardening:
    - Replace the hardcoded AssemblyAI key (`97a09f8b319948a095f5f753267e7cd6`) with a credential reference
    - Update the "Submit to AssemblyAI" and "Get Transcription Result" nodes
    - Use the `{{ $credentials.assemblyAi.apiKey }}` format

4. Test the Workflow:
    - Send a POST request to the webhook endpoint
    - Include the required payload structure
    - Verify the callback endpoint receives results
    - Monitor execution logs for errors

5. Customize Analysis:
    - Adjust filler word detection in "Analyze Speech Patterns"
    - Modify engagement scoring criteria
    - Update AI analysis prompts for specific training contexts

6. Production Deployment:
    - Configure proper error monitoring
    - Set up logging for analysis tracking
    - Implement rate limiting if needed
    - Consider adding authentication to the webhook endpoint