V4 - WIP - Francis - SalesTrackingAgent¶
An AI-powered business coaching system that provides personalized guidance to young entrepreneurs in Kenya through WhatsApp and SMS. The workflow manages the complete user journey from onboarding through daily profit tracking and credit recovery coaching.
Purpose¶
No business context provided yet — add a context.md to enrich this documentation.
Based on the workflow implementation, this system appears to serve young entrepreneurs in Kenya by:

- Guiding them through a structured onboarding process to understand their business
- Collecting daily sales and cost data to track profit trends
- Providing personalized micro-actions to improve business performance
- Managing credit recovery from customers who owe money
- Escalating welfare concerns to Community Engagement Advisors (CEAs)
How It Works¶
- Session Context Loading: Retrieves user profile, business records, chat history, and credit data from the database
- Onboarding Detection: Checks if the user is in the onboarding phase (new users go through a separate onboarding workflow)
- Context Analysis: Pre-computes business metrics like current day, profit trends, missing data, and session mode
- Intent Routing: Determines whether to use the main sales tracking agent or the engagement agent based on user status and message content
- AI Processing: The appropriate agent processes the user's message using extensive business coaching protocols
- Data Collection: For active users, collects daily sales, costs, and credit information through structured conversations
- Tool Integration: Calls various tools to save data, update user status, and manage the credit module
- Response Generation: Produces contextual responses in Swahili/English mix appropriate for the user's education level
- Logging & Monitoring: Records all interactions and triggers alerts for Community Engagement Advisors when needed
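The intent-routing step above can be sketched as follows. This is an illustrative reconstruction, not the workflow's actual code: the function name, the `status` field, and the keyword heuristic are all assumptions about how routing between the main agent and the engagement agent might work.

```javascript
// Hypothetical sketch of the Intent Router step. Field names and the
// keyword heuristic are illustrative, not taken from the workflow code.
function routeIntent(user, message) {
  // New users are handled by the separate onboarding workflow.
  if (user.status === 'onboarding') return 'onboarding';

  // Active users reporting numbers go to the main sales tracking agent.
  const hasBusinessData =
    /\d/.test(message) || /sales|cost|deni|credit|faida|profit/i.test(message);
  if (user.status === 'active' && hasBusinessData) return 'main_agent';

  // Everything else (greetings, casual chat) goes to the engagement agent.
  return 'engager_agent';
}
```

In the real workflow this decision is made by the `Intent Router` code node followed by the `If - Route Decision` node.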
Workflow Diagram¶
```mermaid
graph TD
    A[When Executed by Another Workflow] --> B[getSessionContext]
    B --> C[getUserRecord]
    C --> D[chatAndBusinessHistory]
    D --> E[computeContext]
    E --> F[isOnboardingPhase?]
    F --> G[isOnboardingPhase? Route]
    G -->|Onboarding| H[Call OnboardingJourneyHandler]
    G -->|Active User| I[buildPromptSections]
    H --> J[shouldUseAI?]
    J -->|Use AI| I
    J -->|Direct Output| K[onboarding_setOutput]
    I --> L[Intent Router]
    L --> M[If - Route Decision]
    M -->|Main Agent| N[AI Agent]
    M -->|Engagement Agent| O[Set Engager Context]
    N --> P[Extract CEA Alert]
    O --> Q[engager_agent]
    P --> R[If1 - Multiple Messages?]
    Q --> S[messages1]
    R -->|Multiple| T[Split Out]
    R -->|Single| U[messages]
    T --> V[If2 - Youth Message?]
    V --> W[messages2]
    U --> X[Insert rows in a table]
    S --> Y[Insert rows in a table2]
    W --> Z[Insert rows in a table3]
    X --> AA[persistState]
    Y --> BB[call_create_summary_subworkflow3]
    Z --> CC[call_create_summary_subworkflow4]
    AA --> DD[ifCeaAlertRequired]
    DD -->|Alert Needed| EE[insertCeaAlert]
    DD -->|No Alert| FF[setOutputField]
    EE --> FF
    BB --> GG[logEngagerAgentToPL]
    CC --> HH[setOutputField3]
    X --> II[call_create_summary_subworkflow1]
    II --> JJ[logMainAgentToPL]
    JJ --> KK[addMainToPLDataset]
    KK --> FF
    GG --> LL[addEngagerToPLDataset]
    LL --> MM[setOutputField2]
    %% Tool connections to AI Agent
    NN[dailySalesDataCollection] -.-> N
    OO[updateUserDataTool] -.-> N
    PP[updateUserStatusTool] -.-> N
    QQ[Think] -.-> N
    RR[messageTemplates] -.-> N
    SS[saveCreditRecordTool] -.-> N
    TT[callBaselineDataCollection] -.-> N
    %% Memory connections
    UU[Postgres Chat Memory] -.-> N
    VV[Engager Chat Memory] -.-> Q
    %% Language Models
    WW[sifa_main_agent_prod] -.-> N
    XX[sifa_engager_agent_prod] -.-> Q
    %% Output Parsers
    YY[Structured Output Parser2] -.-> N
    ZZ[Structured Output Parser7] -.-> Q
```
Trigger¶
The workflow is triggered by another workflow via the "Execute Workflow Trigger" node with these inputs:
- phoneNumber: User's phone number (identifier)
- query: User's message content
- channel: Communication channel (WhatsApp or SMS)
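A parent workflow would pass a payload shaped like the following. The field names match the documented trigger inputs; the values are made up for illustration.

```javascript
// Illustrative input payload passed to the Execute Workflow Trigger.
// Field names are the documented inputs; values are invented examples.
const triggerInput = {
  phoneNumber: '+254700000000', // user identifier
  query: 'Leo nimeuza 1500',    // the user's incoming message
  channel: 'whatsapp',          // or 'sms'
};
```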
Nodes Used¶
| Node Type | Node Name | Purpose |
|---|---|---|
| Execute Workflow Trigger | When Executed by Another Workflow | Receives input from parent workflow |
| Postgres | getSessionContext | Retrieves user context via stored procedure |
| Code | getUserRecord | Unpacks user data from database response |
| Code | chatAndBusinessHistory | Formats context data for AI agents |
| Code | computeContext | Pre-computes business metrics and session state |
| Code | isOnboardingPhase? | Determines if user is in onboarding |
| If | isOnboardingPhase? Route | Routes to onboarding or main flow |
| Execute Workflow | Call OnboardingJourneyHandler | Handles new user onboarding |
| If | shouldUseAI? | Determines if AI processing is needed |
| Code | buildPromptSections | Builds dynamic system prompts |
| Code | Intent Router | Routes messages to appropriate agent |
| If | If - Route Decision | Chooses between main and engagement agent |
| AI Agent | AI Agent | Main sales tracking and coaching agent |
| AI Agent | engager_agent | Handles casual conversation and engagement |
| LangChain Tool | dailySalesDataCollection | Saves daily business data |
| LangChain Tool | updateUserDataTool | Updates user profile information |
| LangChain Tool | updateUserStatusTool | Updates user onboarding status |
| LangChain Tool | Think | Internal reasoning tool for agents |
| LangChain Tool | messageTemplates | Retrieves message templates |
| LangChain Tool | saveCreditRecordTool | Manages credit tracking data |
| LangChain Tool | callBaselineDataCollection | Saves onboarding baseline data |
| Postgres Chat Memory | Postgres Chat Memory | Stores conversation history for main agent |
| Postgres Chat Memory | Engager Chat Memory | Stores conversation history for engagement agent |
| OpenRouter LLM | sifa_main_agent_prod | GPT-5.2 model for main agent |
| OpenRouter LLM | sifa_engager_agent_prod | Claude Sonnet 4.6 for engagement agent |
| Structured Output Parser | Multiple parsers | Ensures consistent JSON output format |
| Postgres | Multiple insert nodes | Log conversations and manage data |
| Set | Multiple set nodes | Format data for output |
| HTTP Request | PromptLayer logging | Logs AI interactions for monitoring |
External Services & Credentials Required¶
Database¶
- Postgres account: Main database connection for user data, business records, chat logs, and system state
AI Services¶
- OpenRouter API:
  - `sifa_main_agent_prod` credential for GPT-5.2 access
  - `sifa_engager_agent_prod` credential for Claude Sonnet 4.6 access
Monitoring¶
- PromptLayer: API key for logging AI interactions (hardcoded: `pl_80a83a0db8150339b213693376a60afb`)
Environment Variables¶
No explicit environment variables are defined in the workflow. All configuration appears to be handled through:

- Database credentials
- API credentials for external services
- Hardcoded values in the workflow nodes
Data Flow¶
Input¶
- phoneNumber: User identifier (string)
- query: User's message (string)
- channel: WhatsApp or SMS (string)
Processing¶
- User context and business data retrieved from database
- Business metrics computed (profit trends, missing data, current stage)
- Message routed to appropriate AI agent based on user status and content
- AI agent processes message using extensive coaching protocols
- Tools called to save data, update status, or retrieve templates as needed
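The metric pre-computation in step 2 can be sketched like this. The function name, record fields, and trend labels are assumptions about the kind of logic the `computeContext` node contains, not its actual implementation.

```javascript
// Illustrative sketch of a metric computeContext might pre-compute:
// a simple profit trend over recent daily records (field names assumed).
function profitTrend(records) {
  const profits = records.map((r) => r.sales - r.costs);
  if (profits.length < 2) return 'insufficient_data';
  const recent = profits[profits.length - 1];
  const previous = profits[profits.length - 2];
  if (recent > previous) return 'improving';
  if (recent < previous) return 'declining';
  return 'flat';
}
```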
Output¶
- output: Final response message to send to user (string)
- Side effects: Database updates, chat logs, CEA alerts, monitoring logs
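For the SMS channel, the workflow's message nodes split long responses into parts (see the `Split Out` node in the diagram and the 160-character limit noted under Known Limitations). The following is a minimal sketch of that kind of splitting, not the workflow's actual implementation.

```javascript
// Hedged sketch: splitting a long response into SMS-sized parts.
// The 160-character limit is documented; this splitting logic is assumed.
function splitForSms(text, limit = 160) {
  const parts = [];
  let rest = text.trim();
  while (rest.length > limit) {
    // Prefer breaking at the last space before the limit to avoid mid-word cuts.
    let cut = rest.lastIndexOf(' ', limit);
    if (cut <= 0) cut = limit;
    parts.push(rest.slice(0, cut).trim());
    rest = rest.slice(cut).trim();
  }
  if (rest) parts.push(rest);
  return parts;
}
```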
Error Handling¶
The workflow includes comprehensive error handling:
- Agent Fallback: If AI agents fail, a fallback message is sent in Swahili
- Tool Failures: Tools are configured to continue on error without breaking the flow
- Error Logging: Failed interactions are logged to an `errorLog` table
- Silent Failures: Database write failures are handled silently to maintain user experience
- CEA Escalation: Welfare concerns and system errors trigger alerts to Community Engagement Advisors
- Retry Logic: Critical nodes have retry configurations with exponential backoff
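The retry pattern described above can be sketched as a small helper. The attempt count and delays here are assumptions; n8n configures retries per node rather than in code, so this is only an illustration of the exponential-backoff behavior.

```javascript
// Minimal sketch of retry with exponential backoff; maxAttempts and
// baseDelayMs are assumed values, not the workflow's actual settings.
async function withRetry(fn, maxAttempts = 3, baseDelayMs = 500) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxAttempts) throw err; // give up; error logging takes over
      const delay = baseDelayMs * 2 ** (attempt - 1); // 500, 1000, 2000, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```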
Known Limitations¶
Based on the implementation:

- Data collection is restricted to evening hours (7 PM - 11:59 PM) for active users
- Sunday data collection is disabled (rest day policy)
- Credit module only tracks money owed BY customers, not debt TO suppliers
- SMS messages are limited to 160 characters and must be split if longer
- Practice data during onboarding is never saved to the database
- Tool failures are handled silently, which may lead to data inconsistencies
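The first two limitations combine into a single gating check on Nairobi local time. This is a sketch under the documented rules (evenings only, no Sundays); the helper name and implementation are illustrative.

```javascript
// Sketch of the data-collection gate described above, evaluated in
// Nairobi local time (EAT, UTC+3). Helper name is illustrative.
function isCollectionWindowOpen(date) {
  const nairobi = new Date(
    date.toLocaleString('en-US', { timeZone: 'Africa/Nairobi' })
  );
  if (nairobi.getDay() === 0) return false; // Sunday: rest day, no collection
  return nairobi.getHours() >= 19;          // evenings only, 7 PM - 11:59 PM
}
```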
Related Workflows¶
The workflow references several sub-workflows:
- V4 - OnboardingJourneyHandler (QVsCitg5rxMQx0Z3): Handles new user onboarding
- dailySalesDataCollection (aOQRTnNvRUCjjwRE): Saves daily business data
- updateUserDataTool (ON18WqpuHwpe5jc6): Updates user profile
- updateUserStatusTool (Ht3MkckUHVE039d0): Updates onboarding status
- messageTemplates (h77nzSL65k3aj1VU): Provides message templates
- saveCreditRecordTool (SmNrxmAoEYSMqQDe): Manages credit tracking
- saveOnboardingBaselineTool (Ro2mHPinIURfchxo): Saves baseline configuration
- create_business_summary (2nTlbf07leuKBdM9): Generates business summaries
Setup Instructions¶
1. Database Setup:
   - Configure PostgreSQL database with required tables
   - Set up stored procedure `get_session_context_v4`
   - Create tables: `v4_youthEntrepreneurs`, `v4_chatLog`, `ceaAlerts`, `errorLog`
2. Credentials Configuration:
   - Add Postgres database credentials
   - Configure OpenRouter API credentials for both GPT-5.2 and Claude Sonnet 4.6
   - Set up PromptLayer API key for monitoring
3. Sub-workflow Dependencies:
   - Import and configure all referenced sub-workflows
   - Ensure proper workflow IDs are updated in tool configurations
4. Environment Configuration:
   - Verify timezone settings for Nairobi (EAT, UTC+3)
   - Configure error workflow (cuHEGQjAfvuGwIOD)
   - Set execution order to v1
5. Testing:
   - Use the pinned test data to verify basic functionality
   - Test both WhatsApp and SMS channel formatting
   - Verify onboarding flow with test users
   - Test error handling scenarios
6. Monitoring Setup:
   - Configure PromptLayer datasets for both agents
   - Set up CEA alert monitoring
   - Verify error logging is working
7. Production Deployment:
   - Update test mode flags in code nodes
   - Verify all database table names are production-ready
   - Test with real user data in staging environment