PBA Evaluation

This workflow implements an AI-powered teaching assistant bot for Project-Based Assessment (PBA) in Rwandan secondary schools. The bot helps teachers implement PBA confidently by providing step-by-step guidance, project information, rubrics, and practical support through a conversational interface. It combines a comprehensive knowledge base with real-time project data to deliver contextual, accurate responses to teacher inquiries.

Purpose

Based on the workflow implementation, this system serves Rwandan secondary school teachers who need support implementing Project-Based Assessment (PBA) methodology. The bot assists with:

  • Project instructions and guidance for Biology, Chemistry, Physics, and Entrepreneurship
  • Lesson planning and preparation support
  • Group work management strategies
  • Rubrics and grading assistance
  • Time management for PBA implementation
  • Feedback techniques and CAMIS grade upload processes
  • Real-life project connections

The workflow targets Senior 4-6 teachers across three academic terms, providing immediate, contextual support through WhatsApp-style interactions.

How It Works

  1. Dataset Input: The workflow receives evaluation questions from a Google Sheets dataset containing test scenarios for the PBA bot
  2. AI Agent Processing: The core PBA Bot agent processes each question using a comprehensive system prompt that defines its role as a teaching assistant
  3. Knowledge Retrieval: The bot accesses two knowledge sources:
    • Project Database: Queries Airtable to retrieve specific PBA project titles and descriptions based on term, grade, and subject
    • Resource Vectorstore: Searches a PostgreSQL vector database containing PBA teaching resources, FAQs, and guidance materials
  4. Intelligent Response: The AI combines retrieved information with its training to provide step-by-step, practical guidance in WhatsApp-friendly format
  5. Evaluation: The system compares bot responses against expected answers to measure accuracy and effectiveness
  6. Error Handling: If processing fails, the system logs errors, notifies users, and alerts the technical team for rapid resolution
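The retrieve-then-rerank step (step 3) can be sketched as follows. This is an illustrative Python sketch only; `retrieve_context`, `fake_search`, and `fake_rerank` are hypothetical stand-ins for the PostgreSQL vector search and Cohere reranker nodes, not the workflow's actual code.

```python
# Illustrative sketch of the retrieve-then-rerank step.
# All function names below are hypothetical stand-ins for the n8n nodes.

def retrieve_context(question, vector_search, rerank, top_k=4):
    """Search the resource vector store, then keep the best-ranked chunks."""
    candidates = vector_search(question)      # e.g. PostgreSQL vector query
    ranked = rerank(question, candidates)     # e.g. Cohere reranker ordering
    return ranked[:top_k]

# Toy stand-ins to show the shape of the data:
def fake_search(q):
    return ["chunk about rubrics",
            "chunk about CAMIS upload",
            "chunk about lesson plans"]

def fake_rerank(q, chunks):
    # Rank chunks by naive keyword overlap with the question.
    score = lambda c: len(set(q.lower().split()) & set(c.lower().split()))
    return sorted(chunks, key=score, reverse=True)

print(retrieve_context("how do teachers use rubrics",
                       fake_search, fake_rerank, top_k=2))
```

In the real workflow the reranker is a dedicated Cohere node and the vector query runs against the `pba_ai_bot` table; the sketch only shows the data flow between the two steps.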

Mermaid Diagram

```mermaid
graph TD
    A[When fetching a dataset row] --> B[PBA Bot]
    B --> C[Evaluation]
    B --> D[Send user a message about error]

    E[Get project titles and descriptions] --> B
    F[PBA Resources Vectorised] --> B
    G[OpenRouter Chat Model1] --> B
    H[Context memory] --> B
    I[Think] --> B

    J[PBA Resources] --> F
    K[Default Data Loading] --> J
    L[Recursive Character Text Splitter] --> K
    M[Embeddings OpenAI] --> J
    M --> F
    N[Reranker for relevant chunks] --> F

    C --> O[Evaluation1]
    O --> P[Evaluation2]
    Q[OpenRouter Chat Model2] --> P

    D --> R[PBA Bot Error Output Formatting]
    R --> S[Send PBA bot Error to tech team]
    S --> T[Log PBA bot error]
```

Trigger

Evaluation Trigger: The workflow is triggered by fetching rows from a Google Sheets evaluation dataset. Each row contains a test question and expected answer for evaluating the PBA bot's performance.
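The shape of a dataset row can be shown with a small self-contained sketch. A CSV string stands in for the Google Sheet here so the example runs without credentials; the column names follow the Question / Expected Answer layout described above, and the row contents are invented.

```python
# Minimal stand-in for the evaluation trigger: each dataset row pairs a
# test question with its expected answer. A CSV string replaces the
# Google Sheet so the sketch stays self-contained.
import csv
import io

sheet = """Question,Expected Answer
How do I form groups for a Biology project?,Use mixed-ability groups of 4-5 students
Where do I upload final grades?,Upload grades through CAMIS
"""

rows = list(csv.DictReader(io.StringIO(sheet)))
for row in rows:
    print(row["Question"], "->", row["Expected Answer"])
```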

Nodes Used

| Node Type | Node Name | Purpose |
| --- | --- | --- |
| Evaluation Trigger | When fetching a dataset row | Fetches test questions from the Google Sheets dataset |
| LangChain Agent | PBA Bot | Core AI assistant that processes teacher questions |
| Airtable Tool | Get project titles and descriptions | Retrieves PBA project information by term/grade/subject |
| Vector Store (Retrieve) | PBA Resources Vectorised | Searches PBA teaching resources and guidance |
| Vector Store (Insert) | PBA Resources | Stores PBA documents in the vector database |
| Document Loader | Default Data Loading | Loads documents for vectorization |
| Text Splitter | Recursive Character Text Splitter | Splits documents into searchable chunks |
| Embeddings | Embeddings OpenAI | Creates vector embeddings for semantic search |
| Reranker | Reranker for relevant chunks | Ranks search results by relevance |
| Chat Model | OpenRouter Chat Model1 | Primary language model for bot responses |
| Chat Model | OpenRouter Chat Model2 | Secondary model for evaluation scoring |
| Memory | Context memory | Maintains conversation history (currently disabled) |
| Tool | Think | Enables reasoning capabilities |
| Evaluation | Evaluation | Checks if in evaluation mode |
| Evaluation | Evaluation1 | Outputs bot responses for comparison |
| Evaluation | Evaluation2 | Compares actual vs. expected answers |
| Twilio | Send user a message about error | Notifies users of processing errors |
| Code | PBA Bot Error Output Formatting | Formats error details for logging |
| Twilio | Send PBA bot Error to tech team | Alerts the technical team of errors |
| Airtable | Log PBA bot error | Records errors in Airtable |

External Services & Credentials Required

  • Google Sheets: For evaluation dataset storage
    • Credential: googleSheetsOAuth2Api
  • Airtable: For PBA project database and error logging
    • Credential: airtableTokenApi
  • PostgreSQL: For vector storage of PBA resources
    • Credential: postgres (Waringa database)
  • OpenAI: For text embeddings
    • Credential: openAiApi
  • OpenRouter: For chat model access
    • Credential: openRouterApi
  • Cohere: For result reranking
    • Credential: cohereApi
  • Twilio: For WhatsApp messaging
    • Credential: twilioApi

Environment Variables

No specific environment variables are defined in this workflow. All external service connections use stored credentials.

Data Flow

Input:

  • Evaluation questions from the Google Sheets dataset
  • Each row contains: Question, Expected Answer

Processing:

  • Question text is processed by the PBA Bot agent
  • The agent queries Airtable for project information when needed
  • The agent searches the vector database for relevant PBA resources
  • The AI model generates a contextual response

Output:

  • Bot response text
  • Evaluation metrics comparing actual vs. expected answers
  • Error logs (if processing fails)
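The final comparison of actual vs. expected answers can be approximated as below. The real workflow delegates this judgment to an LLM (OpenRouter Chat Model2); the token-overlap score here is only a simple stand-in, and the row contents are invented for illustration.

```python
# Hedged sketch of the evaluation step: comparing the bot's answer against
# the expected answer from the dataset row. The workflow uses an LLM judge;
# token overlap is used here purely as a runnable proxy.

def similarity(actual: str, expected: str) -> float:
    """Fraction of expected-answer tokens that appear in the actual answer."""
    expected_tokens = set(expected.lower().split())
    actual_tokens = set(actual.lower().split())
    if not expected_tokens:
        return 0.0
    return len(expected_tokens & actual_tokens) / len(expected_tokens)

row = {
    "question": "How do I upload grades to CAMIS?",
    "expected": "log in to camis and upload the grades",
}
bot_answer = "First log in to CAMIS then upload the grades for your class."
print(similarity(bot_answer, row["expected"]))
```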

Error Handling

The workflow includes comprehensive error handling:

  1. User Notification: Failed requests trigger an immediate WhatsApp message to the user asking them to try again
  2. Error Processing: A code node categorizes errors (bad request, tool failure, timeout, authentication, rate limiting, etc.)
  3. Technical Alerts: The technical team receives WhatsApp notifications about system errors
  4. Error Logging: All errors are logged to Airtable with detailed information including:
    • User phone number and input
    • Error category and suggested solution
    • Timestamp and technical details
    • Structured error data for analysis
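The error-categorization logic of step 2 might look like the following sketch. The actual n8n code node is not shown in this document, so the category names and matching rules below are assumptions drawn from the list above.

```python
# Illustrative re-creation of the error-categorization step. The needle
# strings and category names are assumptions, not the workflow's code.

def categorize_error(message: str) -> str:
    message = message.lower()
    rules = [
        ("rate limit", "rate_limiting"),
        ("timeout", "timeout"),
        ("unauthorized", "authentication"),
        ("401", "authentication"),
        ("bad request", "bad_request"),
        ("tool", "tool_failure"),
    ]
    for needle, category in rules:
        if needle in message:
            return category
    return "unknown"

print(categorize_error("HTTP 429: rate limit exceeded"))
```

A category string like this, together with the user's phone number, input, and a timestamp, is what the Airtable logging node would then record.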

Known Limitations

Based on the implementation:

  • Context memory is currently disabled, limiting conversation continuity
  • Responses are limited to 600 characters per message
  • The system supports English-language responses only
  • Subject coverage is limited to Biology, Chemistry, Physics, and Entrepreneurship
  • Coverage spans Senior 4-6 grade levels and Terms 1-3 only
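The 600-character-per-message limit means long replies must be split before sending. A minimal sketch, assuming word-boundary splitting (the workflow's actual splitting behavior is not documented here):

```python
# Sketch of splitting a long reply into WhatsApp-sized parts under the
# 600-character limit. Splitting on word boundaries is an assumption.

def split_message(text: str, limit: int = 600) -> list[str]:
    words = text.split()
    parts, current = [], ""
    for word in words:
        candidate = (current + " " + word).strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            parts.append(current)
            current = word
    if current:
        parts.append(current)
    return parts

long_reply = ("step " * 300).strip()   # ~1500 characters
parts = split_message(long_reply)
print([len(p) for p in parts])
```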

Related Workflows

No related workflows are mentioned in the current context.

Setup Instructions

  1. Import Workflow: Import the JSON into your n8n instance

  2. Configure Credentials:

    • Set up Google Sheets OAuth2 API access
    • Configure Airtable Personal Access Token
    • Add PostgreSQL database credentials for vector storage
    • Set up OpenAI API key for embeddings
    • Configure OpenRouter API access
    • Add Cohere API key for reranking
    • Set up Twilio credentials for WhatsApp messaging
  3. Prepare Data Sources:

    • Create evaluation dataset in Google Sheets with Question and Answer columns
    • Set up Airtable base with PBA projects table
    • Initialize PostgreSQL vector database table pba_ai_bot
    • Upload PBA resource documents for vectorization
  4. Configure Database Tables:

    • Airtable: table of PBA project titles and descriptions
    • Airtable: PBA Bot Errors table for error logging
    • PostgreSQL: pba_ai_bot table for vector storage
  5. Test Setup:

    • Run a test evaluation to verify all connections work
    • Check that project data retrieval functions correctly
    • Verify vector search returns relevant results
    • Test error handling pathways
  6. Enable Workflow: Activate the workflow to begin evaluation runs