
Audio-Visual Trainer Analysis Workflow

This workflow provides automated analysis of trainer videos and audio recordings, using AI-powered transcription and performance evaluation to help trainers improve their delivery, engagement, and communication effectiveness.

Purpose


This workflow serves as an automated trainer assessment system that:

  • Analyzes video/audio recordings of training sessions
  • Provides detailed feedback on speaking patterns, pace, and engagement
  • Generates actionable recommendations for improvement
  • Delivers comprehensive performance reports with scoring and grading

How It Works

  1. Receive Analysis Request: A webhook receives a request containing video metadata and callback information
  2. Extract Video Data: The payload is parsed to extract video URL, recording details, and analysis parameters
  3. Prepare for Transcription: Video URL is formatted with transcription settings optimized for trainer content
  4. Submit to AssemblyAI: The video is sent to AssemblyAI for professional transcription with speaker detection
  5. Wait for Processing: A 10-second delay allows initial processing to begin
  6. Retrieve Transcript: The completed transcription is fetched with word-level timestamps
  7. Analyze Speech Patterns: Custom analysis detects filler words, pauses, speaking pace, and sentence structure
  8. Generate AI Insights: OpenAI GPT-4 provides expert analysis of trainer performance and recommendations
  9. Calculate Engagement Score: Interactive elements, questions, and audience engagement indicators are measured
  10. Compile Final Report: All analysis components are combined into a comprehensive performance report
  11. Send Results: The complete analysis is delivered to the callback endpoint
  12. Respond to Webhook: A success confirmation is returned to the original requester
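
The speech-pattern step (7) can be sketched as a small transcript analyzer. This is a minimal illustration, not the workflow's actual Function-node code; the filler-word list, thresholds, and helper name are assumptions:

```javascript
// Hypothetical sketch of the "Analyze Speech Patterns" step.
// Counts filler words and estimates speaking pace from a transcript.
const FILLER_WORDS = ["um", "uh", "like", "you know", "so", "actually"];

function analyzeSpeechPatterns(transcript, durationSeconds) {
  const lower = transcript.toLowerCase();
  const words = lower.split(/\s+/).filter(Boolean);
  const breakdown = {};
  for (const filler of FILLER_WORDS) {
    // Count whole-word occurrences of each filler phrase.
    const re = new RegExp(`\\b${filler.replace(" ", "\\s+")}\\b`, "g");
    const count = (lower.match(re) || []).length;
    if (count > 0) breakdown[filler] = count;
  }
  const totalFillers = Object.values(breakdown).reduce((a, b) => a + b, 0);
  const wordsPerMinute = Math.round(words.length / (durationSeconds / 60));
  return {
    filler_words: {
      total_count: totalFillers,
      percentage: +((100 * totalFillers) / words.length).toFixed(1),
      breakdown,
    },
    pace_analysis: {
      words_per_minute: wordsPerMinute,
      // 120-160 wpm is a commonly cited comfortable speaking range.
      rating: wordsPerMinute >= 120 && wordsPerMinute <= 160 ? "good" : "review",
    },
  };
}
```

The real node also measures pauses from AssemblyAI's word-level timestamps, which this sketch omits.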

Workflow Diagram

```mermaid
graph TD
    A[Webhook Trigger] --> B[Extract Payload]
    B --> C[Prepare for Transcription]
    C --> D[Submit to AssemblyAI]
    D --> E[Wait for Processing]
    E --> F[Get Transcription Result]
    F --> G[Analyze Speech Patterns]
    G --> H[AI Performance Analysis]
    G --> I[Engagement Analysis]
    H --> J[Compile Final Report]
    I --> J
    J --> K[Send Results to App]
    K --> L[Webhook Response]

    %% Error handling path
    A --> M[Error Handler]
    M --> N[Send Error Callback]
```

Trigger

Webhook: /webhook/analyze-trainer-video

  • Methods: POST, GET
  • Webhook ID: trainer-video-analysis
  • Response Mode: Response node (returns immediate confirmation)
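
A caller can invoke the trigger with a plain JSON POST. The sketch below builds such a request; the base URL and helper name are illustrative, not part of the workflow:

```javascript
// Build a request for the analysis webhook (base URL is hypothetical).
function buildAnalysisRequest(baseUrl, payload) {
  return {
    url: `${baseUrl}/webhook/analyze-trainer-video`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  };
}
```

The returned object can be passed directly to `fetch(req.url, req)`; the payload shape is documented under Input Format below.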

Nodes Used

| Node Type | Node Name | Purpose |
| --- | --- | --- |
| Webhook | Webhook Trigger | Receives analysis requests via HTTP |
| Code | Extract Payload | Parses incoming data and validates required fields |
| Code | Prepare for Transcription | Formats video URL and transcription settings |
| HTTP Request | Submit to AssemblyAI | Sends video to transcription service |
| Wait | Wait for Processing | Delays execution for initial processing |
| HTTP Request | Get Transcription Result | Retrieves completed transcript |
| Function | Analyze Speech Patterns | Analyzes filler words, pauses, and speaking pace |
| HTTP Request | AI Performance Analysis | Gets expert analysis from OpenAI GPT-4 |
| Function | Engagement Analysis | Calculates audience engagement metrics |
| Function | Compile Final Report | Combines all analysis into a structured report |
| HTTP Request | Send Results to App | Delivers results to the callback endpoint |
| Respond to Webhook | Webhook Response | Returns confirmation to the original request |
| Function | Error Handler | Formats error responses for failed analyses |
| HTTP Request | Send Error Callback | Sends error notifications to the callback endpoint |

External Services & Credentials Required

AssemblyAI

  • Purpose: Professional audio/video transcription with speaker detection
  • Credential Type: API Key
  • ⚠️ Security Issue: API key is currently hardcoded in workflow
  • Required Setup: Create AssemblyAI credential in n8n settings
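
The submission to AssemblyAI is a POST to its `/v2/transcript` endpoint. A minimal sketch of the request the HTTP Request node sends, assuming the key has been moved into credentials as the setup instructions require:

```javascript
// Sketch of the AssemblyAI transcription request. The API key must come
// from the n8n credential store, never be hardcoded in the workflow JSON.
function buildTranscriptRequest(apiKey, videoUrl) {
  return {
    url: "https://api.assemblyai.com/v2/transcript",
    method: "POST",
    headers: {
      authorization: apiKey,
      "content-type": "application/json",
    },
    body: JSON.stringify({
      audio_url: videoUrl,   // AssemblyAI extracts the audio track from video URLs
      speaker_labels: true,  // enables speaker detection
    }),
  };
}
```

The response includes a transcript `id`, which the "Get Transcription Result" node later uses to fetch the completed transcript.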

OpenAI

  • Purpose: AI-powered trainer performance analysis
  • Credential Type: OpenAI API credential
  • Credential Name: "Waringa"
  • Model Used: GPT-4

Supabase

  • Purpose: Callback endpoint for delivering analysis results
  • Endpoint: https://ecwihbiaztxsfouvqzam.supabase.co/functions/v1/video-analysis-callback
  • Authentication: Bearer token (anon)

Environment Variables

No environment variables are used in this workflow. All configuration is handled through:

  • Hardcoded API endpoints
  • The n8n credential system
  • Direct configuration in node parameters

Data Flow

Input Format

```json
{
  "analysisId": "unique-analysis-id",
  "recordingId": "recording-identifier",
  "callbackUrl": "https://app.example.com/callback",
  "timestamp": "2024-01-01T00:00:00Z",
  "recording": {
    "file_url": "https://storage.example.com/video.mp4",
    "title": "Training Session Title",
    "description": "Session description",
    "duration_seconds": 1800,
    "session_duration_minutes": 30
  }
}
```
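
The "Extract Payload" node validates this structure before the workflow proceeds. A minimal sketch of that validation, using the field names above (the helper name is illustrative):

```javascript
// Hypothetical sketch of the "Extract Payload" validation step.
function extractPayload(body) {
  const required = ["analysisId", "recordingId", "callbackUrl", "recording"];
  for (const field of required) {
    if (body[field] === undefined) {
      throw new Error(`Missing required field: ${field}`);
    }
  }
  if (!body.recording.file_url) {
    throw new Error("Missing required field: recording.file_url");
  }
  // Flatten the fields downstream nodes need.
  return {
    analysisId: body.analysisId,
    recordingId: body.recordingId,
    callbackUrl: body.callbackUrl,
    videoUrl: body.recording.file_url,
    durationSeconds: body.recording.duration_seconds ?? null,
  };
}
```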

Output Format

```json
{
  "analysisId": "unique-analysis-id",
  "recordingId": "recording-identifier",
  "status": "success",
  "filler_words": {
    "total_count": 15,
    "percentage": 3.2,
    "breakdown": {"um": 8, "uh": 4, "like": 3},
    "score": 8,
    "assessment": "good"
  },
  "pauses_analysis": {
    "total_pauses": 12,
    "average_duration": 1.5,
    "assessment": "excellent"
  },
  "pace_analysis": {
    "words_per_minute": 145,
    "rating": "good",
    "assessment": "optimal"
  },
  "engagement_score": {
    "score": 7.2,
    "percentage": "72.0",
    "assessment": "highly_engaging"
  },
  "transcript": "Full transcription text...",
  "recommendations": {
    "priority_actions": ["Action items"],
    "strengths": ["Identified strengths"],
    "improvement_areas": ["Areas to improve"]
  },
  "detailed_analysis": {
    "overall_score": 8,
    "grade": "B",
    "component_scores": {
      "clarity": 8,
      "pace": 8,
      "engagement": 7,
      "overall": 8
    }
  }
}
```
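
The `engagement_score` field comes from the Engagement Analysis node. A sketch of how such a score can be derived from the transcript; the phrase list, weights, and thresholds here are illustrative, not the workflow's actual criteria:

```javascript
// Hypothetical engagement scoring: questions and interactive phrases
// are treated as engagement indicators on a 0-10 scale.
function scoreEngagement(transcript) {
  const lower = transcript.toLowerCase();
  const questions = (transcript.match(/\?/g) || []).length;
  const interactivePhrases = [
    "let's try",
    "what do you think",
    "raise your hand",
    "any questions",
  ];
  const interactions = interactivePhrases.filter((p) => lower.includes(p)).length;
  // Cap the total so long sessions cannot overflow the scale.
  const score = Math.min(10, questions * 0.5 + interactions * 2);
  return {
    score,
    percentage: (score * 10).toFixed(1),
    assessment: score >= 7 ? "highly_engaging" : score >= 4 ? "moderate" : "low",
  };
}
```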

Error Handling

The workflow includes comprehensive error handling:

  1. Error Handler Node: Catches processing failures and formats error responses
  2. Error Callback: Sends failure notifications to the callback endpoint with:
    • Error type and message
    • Timestamp of failure
    • Analysis ID for tracking
    • Null values for all analysis fields
  3. Graceful Degradation: Failed analyses return structured error responses rather than breaking
  4. Retry Logic: Some nodes are configured with retry on failure
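
The error payload described in point 2 mirrors the success schema with null analysis fields. A minimal sketch of the Error Handler's output (the helper name is illustrative):

```javascript
// Sketch of the Error Handler output: same shape as a success response,
// with status "error" and all analysis fields nulled out.
function buildErrorResponse(analysisId, recordingId, err) {
  return {
    analysisId,
    recordingId,
    status: "error",
    error: { type: err.name, message: err.message },
    timestamp: new Date().toISOString(),
    filler_words: null,
    pauses_analysis: null,
    pace_analysis: null,
    engagement_score: null,
    transcript: null,
  };
}
```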

Known Limitations

  1. Security Vulnerability: AssemblyAI API key is hardcoded in the workflow instead of using n8n credentials
  2. Fixed Wait Time: 10-second processing delay may not be sufficient for longer videos
  3. Single Language: Optimized for English-language training content
  4. Video Size Limits: Dependent on AssemblyAI's file size and duration restrictions
  5. No Progress Updates: No intermediate status updates during long processing times
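
Limitation 2 (the fixed 10-second wait) can be addressed by polling the AssemblyAI `GET /v2/transcript/{id}` endpoint until its `status` field settles. A sketch of the decision logic such a polling loop would use (the function name and retry interval are assumptions; the status values are AssemblyAI's):

```javascript
// Decide what to do with one transcript-status response.
// AssemblyAI reports "queued", "processing", "completed", or "error".
function nextPollAction(statusResponse) {
  switch (statusResponse.status) {
    case "completed":
      return { done: true, transcript: statusResponse.text };
    case "error":
      throw new Error(`Transcription failed: ${statusResponse.error}`);
    default: // "queued" or "processing" — check again shortly
      return { done: false, retryAfterMs: 5000 };
  }
}
```

In n8n this maps naturally onto a Wait node looping back to the "Get Transcription Result" node until `done` is true.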

Related Workflows

No related workflows are documented in the current context.

Setup Instructions

  1. Import Workflow:

    • Copy the workflow JSON
    • Import into your n8n instance
    • Activate the workflow
  2. Configure Credentials:

    • Create AssemblyAI API credential in n8n
    • Set up OpenAI API credential named "Waringa"
    • Update hardcoded API key references to use credentials
  3. Security Hardening:

    • Replace hardcoded AssemblyAI key (97a09f8b319948a095f5f753267e7cd6) with credential reference
    • Update "Submit to AssemblyAI" and "Get Transcription Result" nodes
    • Use {{ $credentials.assemblyAi.apiKey }} format
  4. Test the Workflow:

    • Send POST request to webhook endpoint
    • Include required payload structure
    • Verify callback endpoint receives results
    • Monitor execution logs for errors
  5. Customize Analysis:

    • Adjust filler word detection in "Analyze Speech Patterns"
    • Modify engagement scoring criteria
    • Update AI analysis prompts for specific training contexts
  6. Production Deployment:

    • Configure proper error monitoring
    • Set up logging for analysis tracking
    • Implement rate limiting if needed
    • Consider adding authentication to webhook endpoint