Phase 09: AI Integration
AI service with RAG for document search, chat tools, and org intelligence
Goal: Build a separate AI service with RAG-powered document search, chat tools for employee lookup, org structure explanation, and time-off balance queries. Integrate with a floating chat widget in the web app.
| Attribute | Value |
|---|---|
| Steps | 139-155 |
| Estimated Time | 10-14 hours |
| Dependencies | Phase 08 complete (Dashboard System) |
| Completion Gate | AI chat widget works, can search documents, explain org structure, check time-off balance. |
Step Timing Estimates
| Step | Task | Est. Time |
|---|---|---|
| 139 | Create AI service (Express app) | 45 min |
| 140 | Setup MongoDB connection | 30 min |
| 141 | Install LangChain and OpenAI | 20 min |
| 142 | Create document chunking service | 40 min |
| 143 | Create embedding service | 35 min |
| 144 | Create vector store service | 45 min |
| 145 | Create RAG query endpoint | 40 min |
| 146 | Create chat tools registry | 35 min |
| 147 | Implement employee_search tool | 40 min |
| 148 | Implement document_search tool | 35 min |
| 149 | Implement org_explain tool | 40 min |
| 150 | Implement timeoff_balance tool | 35 min |
| 151 | Create chat endpoint | 50 min |
| 152 | Create AI chat widget (frontend) | 45 min |
| 153 | Create chat message components | 40 min |
| 154 | Create tool result renderers | 35 min |
| 155 | Integrate chat with dashboard | 30 min |
Phase Context (READ FIRST)
What This Phase Accomplishes
- Separate Express-based AI service on port 3002
- MongoDB for storing document chunks and embeddings
- LangChain + OpenAI for RAG pipeline
- Document chunking with semantic boundaries
- Vector similarity search for document retrieval
- Four chat tools: employee_search, document_search, org_explain, timeoff_balance
- Floating chat widget available on all dashboard pages
- Tool result cards for structured data display
Known Limitations
AI Configuration (MVP)
Current Implementation:
- OpenAI API key configured via the global OPENAI_API_KEY environment variable
- AI features enabled for all tenants when the API key is set
Not included in MVP:
- Per-tenant AI enable/disable toggle in admin settings
- Per-tenant OpenAI API key configuration
- Admin UI for AI settings management
Future Enhancement: Add a tenant-level AI settings panel to the Phase 01 settings UI with an enable toggle and an optional per-tenant API key (encrypted storage).
- No streaming responses - Chat returns complete JSON response (not SSE/streaming). Streaming is a future enhancement.
- Simple zodToJsonSchema - Only handles flat objects with string/number/boolean. For complex nested schemas, use the zod-to-json-schema package.
- Service auth required - The AI service sends an x-service-secret header; the NestJS API needs a ServiceSecretGuard (see Step 147 comments).
What This Phase Does NOT Include
- Voice/speech input - future enhancement
- Image/file uploads in chat - future enhancement
- Multi-turn context beyond the current conversation - Phase 10+
- Fine-tuned models - uses OpenAI API directly
- Real-time collaboration in chat - single user only
- Autonomous agents - user-initiated only
Bluewoo Anti-Pattern Reminder
This phase intentionally has NO:
- Complex agent orchestration - simple tool calling only
- Custom model training - OpenAI API only
- Persistent memory beyond session - stateless per conversation
- Auto-triggered AI actions - user must initiate
If the AI suggests adding any of these, REJECT and continue with the spec.
Architecture Overview
┌─────────────────────────────────────────────────────────────┐
│ Web App (Next.js) │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ AI Chat Widget │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────────┐ │ │
│ │ │ MessageList │ │ InputField │ │ ToolResultCards │ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────────┘ │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
│
│ POST /api/ai/chat
▼
┌─────────────────────────────────────────────────────────────┐
│ NestJS API Gateway │
│ (proxies to AI Service) │
└─────────────────────────────────────────────────────────────┘
│
│ Internal HTTP
▼
┌─────────────────────────────────────────────────────────────┐
│ AI Service (Express:3002) │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐ │
│ │ Chat Handler │ │ Tool Registry│ │ RAG Query Engine │ │
│ └──────────────┘ └──────────────┘ └──────────────────┘ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐ │
│ │ Chunking Svc │ │ Embedding Svc│ │ Vector Store Svc │ │
│ └──────────────┘ └──────────────┘ └──────────────────┘ │
└─────────────────────────────────────────────────────────────┘
        │                                  │
        │                                  │ OpenAI API
        ▼                                  ▼
┌──────────────┐                  ┌──────────────────┐
│   MongoDB    │                  │  OpenAI (GPT-4)  │
│ (Embeddings) │                  │  (Embeddings +   │
│              │                  │   Completions)   │
└──────────────┘                  └──────────────────┘
Step 139: Create AI Service (Express App)
Input
- Phase 08 complete
- Monorepo structure exists at
apps/
Constraints
- MUST be Express 4.x (NOT 5.x, NOT Fastify)
- MUST be in the apps/ai/ folder
- MUST NOT use NestJS - separate Express service
- Port 3002 (to avoid conflict with NestJS on 3001 and Next.js on 3000)
Task
Create apps/ai/ folder structure:
mkdir -p apps/ai/src/{services,routes,tools,types,middleware}
cd apps/ai
Create apps/ai/package.json:
{
"name": "@hrms/ai",
"version": "0.0.1",
"private": true,
"scripts": {
"dev": "tsx watch src/index.ts",
"build": "tsc",
"start": "node dist/index.js",
"lint": "eslint src --ext .ts"
},
"dependencies": {
"express": "^4.18.2",
"cors": "^2.8.5",
"helmet": "^7.1.0",
"dotenv": "^16.3.1",
"zod": "^3.22.4"
},
"devDependencies": {
"@types/express": "^4.17.21",
"@types/cors": "^2.8.17",
"@types/node": "^20.10.0",
"tsx": "^4.6.2",
"typescript": "^5.7.3"
}
}
Create apps/ai/tsconfig.json:
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"lib": ["ES2022"],
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true,
"declaration": true,
"declarationMap": true,
"sourceMap": true
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist"]
}
Create apps/ai/.env.example:
# AI Service Configuration
PORT=3002
NODE_ENV=development
# MongoDB
MONGODB_URI=mongodb://localhost:27017/hrms_ai
# OpenAI
OPENAI_API_KEY=sk-your-key-here
# HRMS API (for tool callbacks)
HRMS_API_URL=http://localhost:3001
HRMS_API_SECRET=shared-secret-for-service-auth
Create apps/ai/src/config.ts:
import { z } from 'zod';
import dotenv from 'dotenv';
dotenv.config();
const envSchema = z.object({
PORT: z.string().default('3002'),
NODE_ENV: z.enum(['development', 'production', 'test']).default('development'),
MONGODB_URI: z.string(),
OPENAI_API_KEY: z.string(),
HRMS_API_URL: z.string().default('http://localhost:3001'),
HRMS_API_SECRET: z.string(),
});
const parsed = envSchema.safeParse(process.env);
if (!parsed.success) {
console.error('❌ Invalid environment variables:', parsed.error.flatten().fieldErrors);
process.exit(1);
}
export const config = parsed.data;
Create apps/ai/src/middleware/error-handler.ts:
import { Request, Response, NextFunction } from 'express';
export class AppError extends Error {
constructor(
public statusCode: number,
public code: string,
message: string,
) {
super(message);
this.name = 'AppError';
}
}
export function errorHandler(
err: Error,
_req: Request,
res: Response,
_next: NextFunction,
): void {
console.error('Error:', err);
if (err instanceof AppError) {
res.status(err.statusCode).json({
data: null,
error: {
code: err.code,
message: err.message,
},
});
return;
}
res.status(500).json({
data: null,
error: {
code: 'INTERNAL_ERROR',
message: 'An unexpected error occurred',
},
});
}
Create apps/ai/src/middleware/auth.ts:
import { Request, Response, NextFunction } from 'express';
import { AppError } from './error-handler';
export interface AuthenticatedRequest extends Request {
tenantId: string;
userId: string;
}
export function authMiddleware(
req: Request,
_res: Response,
next: NextFunction,
): void {
const tenantId = req.headers['x-tenant-id'] as string;
const userId = req.headers['x-user-id'] as string;
if (!tenantId) {
throw new AppError(401, 'UNAUTHORIZED', 'Missing tenant ID');
}
if (!userId) {
throw new AppError(401, 'UNAUTHORIZED', 'Missing user ID');
}
(req as AuthenticatedRequest).tenantId = tenantId;
(req as AuthenticatedRequest).userId = userId;
next();
}
Create apps/ai/src/routes/health.ts:
import { Router } from 'express';
const router = Router();
router.get('/health', (_req, res) => {
res.json({
status: 'ok',
service: 'ai',
timestamp: new Date().toISOString(),
});
});
export const healthRouter = router;
Create apps/ai/src/index.ts:
import express from 'express';
import cors from 'cors';
import helmet from 'helmet';
import { config } from './config';
import { errorHandler } from './middleware/error-handler';
import { healthRouter } from './routes/health';
const app = express();
// Middleware
app.use(helmet());
app.use(cors({
origin: ['http://localhost:3000', 'http://localhost:3001'],
credentials: true,
}));
app.use(express.json({ limit: '10mb' }));
// Routes
app.use(healthRouter);
// Error handling
app.use(errorHandler);
// Start server
const port = parseInt(config.PORT, 10);
app.listen(port, () => {
console.log(`🤖 AI Service running on http://localhost:${port}`);
console.log(` Environment: ${config.NODE_ENV}`);
});
Gate
cd apps/ai
npm install
# Create .env from example (with dummy values for now)
cp .env.example .env
# Edit .env to add: MONGODB_URI=mongodb://localhost:27017/hrms_ai
# Edit .env to add: OPENAI_API_KEY=sk-dummy-for-testing
# Edit .env to add: HRMS_API_SECRET=test-secret
npm run dev &
sleep 3
curl http://localhost:3002/health
# Should return: {"status":"ok","service":"ai","timestamp":"..."}
# Kill the dev server
pkill -f "tsx watch"
Common Errors
| Error | Cause | Fix |
|---|---|---|
| EADDRINUSE | Port 3002 in use | Kill process or change PORT |
| Invalid environment | Missing env vars | Copy .env.example to .env |
Rollback
rm -rf apps/ai
Lock
apps/ai/package.json
apps/ai/tsconfig.json
apps/ai/src/index.ts
apps/ai/src/config.ts
apps/ai/src/middleware/error-handler.ts
apps/ai/src/middleware/auth.ts
Checkpoint
- Express service created in apps/ai/
- Health endpoint returns status ok
- Service runs on port 3002
Step 140: Setup MongoDB Connection
Input
- Step 139 complete
- AI service running
- MongoDB available (local or Atlas)
Constraints
- Use native MongoDB driver (NOT Mongoose)
- Single database: hrms_ai
- Collections: documents, chunks
Prerequisites
Ensure MongoDB is running:
# Local MongoDB with Docker
docker run -d --name hrms-mongo -p 27017:27017 mongo:7
# Or use MongoDB Atlas connection string
Task
Install MongoDB driver:
cd apps/ai
npm install mongodb
# (the mongodb package ships its own TypeScript types; no separate @types package is needed)
Create apps/ai/src/services/mongodb.ts:
import { MongoClient, Db, Collection } from 'mongodb';
import { config } from '../config';
let client: MongoClient | null = null;
let db: Db | null = null;
export interface DocumentRecord {
_id: string;
tenantId: string;
sourceType: 'document' | 'employee' | 'policy';
sourceId: string;
title: string;
content: string;
metadata: Record<string, unknown>;
createdAt: Date;
updatedAt: Date;
}
export interface ChunkRecord {
_id: string;
tenantId: string;
documentId: string;
content: string;
embedding: number[];
chunkIndex: number;
metadata: {
sourceType: string;
sourceId: string;
title: string;
};
createdAt: Date;
}
export async function connectMongo(): Promise<Db> {
if (db) return db;
client = new MongoClient(config.MONGODB_URI);
await client.connect();
db = client.db('hrms_ai');
// Create indexes
const chunks = db.collection<ChunkRecord>('chunks');
await chunks.createIndex({ tenantId: 1, documentId: 1 });
await chunks.createIndex({ tenantId: 1 });
const documents = db.collection<DocumentRecord>('documents');
await documents.createIndex({ tenantId: 1, sourceType: 1, sourceId: 1 }, { unique: true });
console.log('✅ MongoDB connected');
return db;
}
export async function getDb(): Promise<Db> {
if (!db) {
await connectMongo();
}
return db!;
}
export async function getDocumentsCollection(): Promise<Collection<DocumentRecord>> {
const database = await getDb();
return database.collection<DocumentRecord>('documents');
}
export async function getChunksCollection(): Promise<Collection<ChunkRecord>> {
const database = await getDb();
return database.collection<ChunkRecord>('chunks');
}
export async function closeMongo(): Promise<void> {
if (client) {
await client.close();
client = null;
db = null;
}
}
Update apps/ai/src/index.ts to connect on startup:
import express from 'express';
import cors from 'cors';
import helmet from 'helmet';
import { config } from './config';
import { errorHandler } from './middleware/error-handler';
import { healthRouter } from './routes/health';
import { connectMongo } from './services/mongodb';
const app = express();
// Middleware
app.use(helmet());
app.use(cors({
origin: ['http://localhost:3000', 'http://localhost:3001'],
credentials: true,
}));
app.use(express.json({ limit: '10mb' }));
// Routes
app.use(healthRouter);
// Error handling
app.use(errorHandler);
// Start server
async function start(): Promise<void> {
try {
await connectMongo();
const port = parseInt(config.PORT, 10);
app.listen(port, () => {
console.log(`🤖 AI Service running on http://localhost:${port}`);
console.log(` Environment: ${config.NODE_ENV}`);
});
} catch (error) {
console.error('Failed to start server:', error);
process.exit(1);
}
}
start();
Update health route to check MongoDB in apps/ai/src/routes/health.ts:
import { Router } from 'express';
import { getDb } from '../services/mongodb';
const router = Router();
router.get('/health', async (_req, res) => {
try {
const db = await getDb();
await db.command({ ping: 1 });
res.json({
status: 'ok',
service: 'ai',
mongodb: 'connected',
timestamp: new Date().toISOString(),
});
} catch (error) {
res.status(503).json({
status: 'degraded',
service: 'ai',
mongodb: 'disconnected',
timestamp: new Date().toISOString(),
});
}
});
export const healthRouter = router;
Gate
# Ensure MongoDB is running
docker ps | grep hrms-mongo
cd apps/ai
npm run dev &
sleep 5
curl http://localhost:3002/health
# Should return: {"status":"ok","service":"ai","mongodb":"connected",...}
pkill -f "tsx watch"
Common Errors
| Error | Cause | Fix |
|---|---|---|
| ECONNREFUSED | MongoDB not running | Start MongoDB container |
| Authentication failed | Wrong credentials | Check MONGODB_URI |
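closeMongo() is exported above but nothing in this phase calls it. If you want the service to release its MongoDB connection cleanly on shutdown, an optional sketch (not required by any gate): add closeMongo to the existing import from './services/mongodb' in apps/ai/src/index.ts and append:
// Close the MongoDB client when the process is asked to stop
for (const signal of ['SIGINT', 'SIGTERM'] as const) {
  process.once(signal, async () => {
    await closeMongo();
    process.exit(0);
  });
}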
Rollback
# Remove MongoDB service file
rm apps/ai/src/services/mongodb.ts
# Revert index.ts changes
Lock
apps/ai/src/services/mongodb.ts
Checkpoint
- MongoDB driver installed
- Connection service created
- Health endpoint checks MongoDB
- Indexes created on startup
Step 141: Install LangChain and OpenAI
Input
- Step 140 complete
- MongoDB connected
Constraints
- Use LangChain 0.1.x (stable)
- Use OpenAI SDK v4
- text-embedding-3-small for embeddings (cost-effective)
- gpt-4o-mini for chat (good balance of cost/quality)
Task
Install dependencies:
cd apps/ai
npm install openai @langchain/openai @langchain/core langchain
Create apps/ai/src/services/openai.ts:
import OpenAI from 'openai';
import { config } from '../config';
let openaiClient: OpenAI | null = null;
export function getOpenAI(): OpenAI {
if (!openaiClient) {
openaiClient = new OpenAI({
apiKey: config.OPENAI_API_KEY,
});
}
return openaiClient;
}
// Model constants
export const EMBEDDING_MODEL = 'text-embedding-3-small';
export const CHAT_MODEL = 'gpt-4o-mini';
export const EMBEDDING_DIMENSIONS = 1536;
Create apps/ai/src/types/index.ts:
export interface ChatMessage {
role: 'user' | 'assistant' | 'system';
content: string;
}
export interface ChatContext {
tenantId: string;
userId: string;
employeeId?: string;
}
export interface ToolCall {
id: string;
name: string;
arguments: Record<string, unknown>;
}
export interface ToolResult {
toolCallId: string;
name: string;
result: unknown;
}
export interface ChatRequest {
message: string;
conversationHistory?: ChatMessage[];
context: ChatContext;
}
export interface ChatResponse {
message: string;
toolCalls?: ToolCall[];
toolResults?: ToolResult[];
}
Gate
cd apps/ai
npm run build
# Should complete without errors
# Verify imports work
# Write the scratch test inside apps/ai so its imports resolve against this package's node_modules
cat > test-imports.tmp.ts << 'EOF'
import OpenAI from 'openai';
import { ChatOpenAI } from '@langchain/openai';
console.log('Imports OK');
EOF
npx tsx test-imports.tmp.ts
# Should print: Imports OK
rm test-imports.tmp.ts
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Cannot find module | Package not installed | Run npm install |
| Type errors | Version mismatch | Check @langchain versions |
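If you want to confirm the key and model constants actually work before building the RAG pipeline, a throwaway smoke test is enough - a sketch, not part of the spec; it needs a real OPENAI_API_KEY in apps/ai/.env, makes a billable request, and should be saved inside apps/ai (e.g. smoke-openai.tmp.ts) so the relative import resolves:
import { getOpenAI, CHAT_MODEL } from './src/services/openai';

async function smokeTest() {
  const openai = getOpenAI();
  // One tiny completion against gpt-4o-mini to verify the key and model name
  const response = await openai.chat.completions.create({
    model: CHAT_MODEL,
    messages: [{ role: 'user', content: 'Reply with the single word: ok' }],
    max_tokens: 5,
  });
  console.log(response.choices[0].message.content); // expect something like "ok"
}

smokeTest();
Run it with npx tsx smoke-openai.tmp.ts and delete the file afterwards.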
Rollback
npm uninstall openai @langchain/openai @langchain/core langchain
Lock
apps/ai/src/services/openai.ts
apps/ai/src/types/index.ts
Checkpoint
- OpenAI SDK installed
- LangChain packages installed
- OpenAI service created
- Type definitions created
- Build succeeds
Step 142: Create Document Chunking Service
Input
- Step 141 complete
- LangChain installed
Constraints
- Chunk size: 1000 characters
- Chunk overlap: 200 characters
- Semantic chunking by paragraph when possible
- Preserve metadata through chunking
Task
Create apps/ai/src/services/chunking.ts:
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
export interface ChunkInput {
content: string;
metadata: {
documentId: string;
sourceType: 'document' | 'employee' | 'policy';
sourceId: string;
title: string;
};
}
export interface ChunkOutput {
content: string;
chunkIndex: number;
metadata: ChunkInput['metadata'];
}
const splitter = new RecursiveCharacterTextSplitter({
chunkSize: 1000,
chunkOverlap: 200,
separators: ['\n\n', '\n', '. ', ' ', ''],
});
export async function chunkDocument(input: ChunkInput): Promise<ChunkOutput[]> {
const { content, metadata } = input;
if (!content || content.trim().length === 0) {
return [];
}
const chunks = await splitter.splitText(content);
return chunks.map((chunkContent, index) => ({
content: chunkContent,
chunkIndex: index,
metadata,
}));
}
export async function chunkDocuments(inputs: ChunkInput[]): Promise<ChunkOutput[]> {
const results: ChunkOutput[] = [];
for (const input of inputs) {
const chunks = await chunkDocument(input);
results.push(...chunks);
}
return results;
}
Gate
cd apps/ai
# Create test file
cat > test-chunking.tmp.ts << 'EOF'
import { chunkDocument } from './src/services/chunking';
async function test() {
const result = await chunkDocument({
content: 'This is paragraph one.\n\nThis is paragraph two with more content that goes on for a while to test the chunking behavior.\n\nParagraph three.',
metadata: {
documentId: 'doc-1',
sourceType: 'document',
sourceId: 'src-1',
title: 'Test Doc',
},
});
console.log('Chunks created:', result.length);
console.log('First chunk:', result[0]?.content.substring(0, 50));
if (result.length > 0 && result[0].metadata.documentId === 'doc-1') {
console.log('✅ Chunking test passed');
} else {
console.error('❌ Chunking test failed');
process.exit(1);
}
}
test();
EOF
npx tsx test-chunking.tmp.ts
# Should print: ✅ Chunking test passed
rm test-chunking.tmp.ts
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Cannot find module langchain | Wrong import path | Use langchain/text_splitter |
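To see how the 1000-character size and 200-character overlap behave on longer input than the gate uses, a throwaway check helps - a sketch, not part of the spec; save it inside apps/ai (e.g. chunk-length.tmp.ts) and run with npx tsx; the exact chunk count depends on how the splitter packs paragraphs:
import { chunkDocument } from './src/services/chunking';

async function lengthCheck() {
  const paragraph =
    'This sentence is filler text used to pad the paragraph to a realistic length. '.repeat(5);
  const content = Array.from({ length: 8 }, () => paragraph).join('\n\n'); // roughly 3,200 characters

  const chunks = await chunkDocument({
    content,
    metadata: { documentId: 'doc-len', sourceType: 'document', sourceId: 'src-len', title: 'Length Test' },
  });

  // Expect roughly 4-5 chunks: each stays under 1000 characters and
  // adjacent chunks share up to ~200 characters of overlap
  console.log('Chunk count:', chunks.length);
  console.log('Chunk sizes:', chunks.map((c) => c.content.length));
}

lengthCheck();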
Rollback
rm apps/ai/src/services/chunking.ts
Lock
apps/ai/src/services/chunking.ts
Checkpoint
- Chunking service created
- Splits by paragraph, then sentence
- Preserves metadata
- Test passes
Step 143: Create Embedding Service
Input
- Step 142 complete
- OpenAI service available
Constraints
- Use text-embedding-3-small model
- Batch embeddings for efficiency (max 100 per request)
- Handle rate limits gracefully
Task
Create apps/ai/src/services/embedding.ts:
import { getOpenAI, EMBEDDING_MODEL, EMBEDDING_DIMENSIONS } from './openai';
export interface EmbeddingInput {
text: string;
id?: string;
}
export interface EmbeddingOutput {
embedding: number[];
id?: string;
}
const MAX_BATCH_SIZE = 100;
export async function generateEmbedding(text: string): Promise<number[]> {
const openai = getOpenAI();
const response = await openai.embeddings.create({
model: EMBEDDING_MODEL,
input: text,
dimensions: EMBEDDING_DIMENSIONS,
});
return response.data[0].embedding;
}
export async function generateEmbeddings(
inputs: EmbeddingInput[],
): Promise<EmbeddingOutput[]> {
const openai = getOpenAI();
const results: EmbeddingOutput[] = [];
// Process in batches
for (let i = 0; i < inputs.length; i += MAX_BATCH_SIZE) {
const batch = inputs.slice(i, i + MAX_BATCH_SIZE);
const texts = batch.map((input) => input.text);
const response = await openai.embeddings.create({
model: EMBEDDING_MODEL,
input: texts,
dimensions: EMBEDDING_DIMENSIONS,
});
for (let j = 0; j < response.data.length; j++) {
results.push({
embedding: response.data[j].embedding,
id: batch[j].id,
});
}
}
return results;
}
export function cosineSimilarity(a: number[], b: number[]): number {
if (a.length !== b.length) {
throw new Error('Vectors must have same length');
}
let dotProduct = 0;
let normA = 0;
let normB = 0;
for (let i = 0; i < a.length; i++) {
dotProduct += a[i] * b[i];
normA += a[i] * a[i];
normB += b[i] * b[i];
}
return dotProduct / (Math.sqrt(normA) * Math.sqrt(normB));
}
Gate
cd apps/ai
# Test requires valid OPENAI_API_KEY
# If you have a valid key in .env:
cat > test-embedding.tmp.ts << 'EOF'
import { generateEmbedding, cosineSimilarity } from './src/services/embedding';
async function test() {
try {
const embedding = await generateEmbedding('Hello world');
console.log('Embedding length:', embedding.length);
console.log('First 5 values:', embedding.slice(0, 5));
if (embedding.length === 1536) {
console.log('✅ Embedding test passed');
} else {
console.log('⚠️ Unexpected embedding length');
}
} catch (error: any) {
if (error.message?.includes('API key')) {
console.log('⏭️ Skipped: No valid API key');
} else {
throw error;
}
}
}
test();
EOF
npx tsx test-embedding.tmp.ts
rm test-embedding.tmp.ts
# Test cosine similarity (no API needed)
cat > test-cosine.tmp.ts << 'EOF'
import { cosineSimilarity } from './src/services/embedding';
const a = [1, 0, 0];
const b = [1, 0, 0];
const c = [0, 1, 0];
const simAB = cosineSimilarity(a, b);
const simAC = cosineSimilarity(a, c);
console.log('Same vectors:', simAB); // Should be 1
console.log('Orthogonal vectors:', simAC); // Should be 0
if (simAB === 1 && simAC === 0) {
console.log('✅ Cosine similarity test passed');
} else {
console.error('❌ Cosine similarity test failed');
process.exit(1);
}
EOF
npx tsx test-cosine.tmp.ts
rm test-cosine.tmp.ts
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Invalid API key | Wrong/missing key | Check OPENAI_API_KEY in .env |
| Rate limit | Too many requests | Add retry logic or wait |
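The constraints call for handling rate limits gracefully, but the service above throws on the first 429. A minimal retry-with-backoff wrapper that call sites could opt into - a sketch, not part of the spec; it assumes the OpenAI SDK v4 error shape, which exposes an HTTP status property:
export async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      lastError = error;
      // Only retry rate limits and transient server errors; rethrow everything else
      const status = error?.status;
      if (status !== 429 && status !== 500 && status !== 503) throw error;
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
Usage would look like const embedding = await withRetry(() => generateEmbedding(text)); the helper could live in embedding.ts or a small shared util.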
Rollback
rm apps/ai/src/services/embedding.ts
Lock
apps/ai/src/services/embedding.ts
Checkpoint
- Embedding service created
- Batch processing implemented
- Cosine similarity helper added
- Uses text-embedding-3-small
Step 144: Create Vector Store Service
Input
- Step 143 complete
- MongoDB connected
- Embedding service available
Constraints
- Store embeddings in MongoDB chunks collection
- Use cosine similarity for search
- Return top-k results with scores
Task
Create apps/ai/src/services/vector-store.ts:
import { ObjectId } from 'mongodb';
import { getChunksCollection, getDocumentsCollection, ChunkRecord, DocumentRecord } from './mongodb';
import { chunkDocument } from './chunking';
import { generateEmbedding, generateEmbeddings, cosineSimilarity } from './embedding';
export interface IndexDocumentInput {
tenantId: string;
sourceType: 'document' | 'employee' | 'policy';
sourceId: string;
title: string;
content: string;
metadata?: Record<string, unknown>;
}
export interface SearchResult {
documentId: string;
sourceType: string;
sourceId: string;
title: string;
content: string;
score: number;
}
export async function indexDocument(input: IndexDocumentInput): Promise<string> {
const { tenantId, sourceType, sourceId, title, content, metadata = {} } = input;
const documents = await getDocumentsCollection();
const chunks = await getChunksCollection();
// Check for existing document first (MongoDB doesn't allow changing _id)
const existing = await documents.findOne({ tenantId, sourceType, sourceId });
let documentId: string;
if (existing) {
// Update existing document
documentId = existing._id.toString();
await documents.updateOne(
{ _id: existing._id },
{
$set: {
title,
content,
metadata,
updatedAt: new Date(),
},
},
);
} else {
// Insert new document
    const result = await documents.insertOne({
      // Supply a string _id so the insert matches DocumentRecord, whose _id is typed as string
      _id: new ObjectId().toString(),
      tenantId,
sourceType,
sourceId,
title,
content,
metadata,
createdAt: new Date(),
updatedAt: new Date(),
});
documentId = result.insertedId.toString();
}
// Delete old chunks
await chunks.deleteMany({ tenantId, documentId });
// Chunk the document
const docChunks = await chunkDocument({
content,
metadata: { documentId, sourceType, sourceId, title },
});
if (docChunks.length === 0) {
return documentId;
}
// Generate embeddings for all chunks
const embeddings = await generateEmbeddings(
docChunks.map((chunk, i) => ({
text: chunk.content,
id: String(i),
})),
);
// Store chunks with embeddings
const chunkRecords: ChunkRecord[] = docChunks.map((chunk, i) => ({
_id: new ObjectId().toString(),
tenantId,
documentId,
content: chunk.content,
embedding: embeddings[i].embedding,
chunkIndex: chunk.chunkIndex,
metadata: chunk.metadata,
createdAt: new Date(),
}));
await chunks.insertMany(chunkRecords);
return documentId;
}
export async function searchSimilar(
tenantId: string,
query: string,
topK: number = 5,
): Promise<SearchResult[]> {
const chunks = await getChunksCollection();
// Generate query embedding
const queryEmbedding = await generateEmbedding(query);
// Get all chunks for tenant
// NOTE: This is O(n) - acceptable for MVP but for production use
// MongoDB Atlas Vector Search or a dedicated vector database
const allChunks = await chunks.find({ tenantId }).toArray();
if (allChunks.length === 0) {
return [];
}
// Calculate similarities
const scored = allChunks.map((chunk) => ({
chunk,
score: cosineSimilarity(queryEmbedding, chunk.embedding),
}));
// Sort by score and take top-k
scored.sort((a, b) => b.score - a.score);
const topResults = scored.slice(0, topK);
return topResults.map(({ chunk, score }) => ({
documentId: chunk.documentId,
sourceType: chunk.metadata.sourceType,
sourceId: chunk.metadata.sourceId,
title: chunk.metadata.title,
content: chunk.content,
score,
}));
}
export async function deleteDocument(
tenantId: string,
sourceType: string,
sourceId: string,
): Promise<boolean> {
const documents = await getDocumentsCollection();
const chunks = await getChunksCollection();
const doc = await documents.findOne({ tenantId, sourceType, sourceId });
if (!doc) return false;
await chunks.deleteMany({ tenantId, documentId: doc._id });
await documents.deleteOne({ _id: doc._id });
return true;
}
Gate
cd apps/ai
# Start MongoDB if not running
docker start hrms-mongo 2>/dev/null || true
# Test requires valid OPENAI_API_KEY for embeddings
cat > test-vector-store.tmp.ts << 'EOF'
import { connectMongo, closeMongo } from './src/services/mongodb';
import { indexDocument, searchSimilar, deleteDocument } from './src/services/vector-store';
async function test() {
try {
await connectMongo();
// Index a test document
const docId = await indexDocument({
tenantId: 'test-tenant',
sourceType: 'document',
sourceId: 'test-doc-1',
title: 'Employee Handbook',
content: 'All employees must follow the code of conduct. Vacation policy allows 20 days per year.',
});
console.log('Indexed document:', docId);
// Search for similar content
const results = await searchSimilar('test-tenant', 'vacation days', 3);
console.log('Search results:', results.length);
if (results.length > 0) {
console.log('Top result score:', results[0].score.toFixed(3));
console.log('✅ Vector store test passed');
}
// Cleanup
await deleteDocument('test-tenant', 'document', 'test-doc-1');
await closeMongo();
} catch (error: any) {
if (error.message?.includes('API key')) {
console.log('⏭️ Skipped: No valid API key');
} else {
throw error;
}
}
}
test();
EOF
npx tsx test-vector-store.tmp.ts
rm test-vector-store.tmp.ts
Common Errors
| Error | Cause | Fix |
|---|---|---|
| MongoDB not connected | Connection failed | Check MongoDB is running |
| Vectors must have same length | Embedding mismatch | Ensure same model used |
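The NOTE in searchSimilar flags the O(n) scan; if the chunks collection ever moves to MongoDB Atlas, the scan can be replaced with a $vectorSearch aggregation. A sketch, assuming an Atlas Vector Search index (hypothetical name chunks_vector_index) defined on the embedding field with tenantId as a filter field - none of which is set up in this phase:
import { getChunksCollection } from './mongodb';
import { generateEmbedding } from './embedding';
import { SearchResult } from './vector-store';

export async function searchSimilarAtlas(
  tenantId: string,
  query: string,
  topK = 5,
): Promise<SearchResult[]> {
  const chunks = await getChunksCollection();
  const queryEmbedding = await generateEmbedding(query);
  // $vectorSearch must be the first pipeline stage; numCandidates trades recall for latency
  const docs = await chunks
    .aggregate([
      {
        $vectorSearch: {
          index: 'chunks_vector_index',
          path: 'embedding',
          queryVector: queryEmbedding,
          numCandidates: topK * 20,
          limit: topK,
          filter: { tenantId: { $eq: tenantId } },
        },
      },
      { $project: { documentId: 1, content: 1, metadata: 1, score: { $meta: 'vectorSearchScore' } } },
    ])
    .toArray();
  return docs.map((doc) => ({
    documentId: doc.documentId,
    sourceType: doc.metadata.sourceType,
    sourceId: doc.metadata.sourceId,
    title: doc.metadata.title,
    content: doc.content,
    score: doc.score,
  }));
}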
Rollback
rm apps/ai/src/services/vector-store.ts
Lock
apps/ai/src/services/vector-store.ts
Checkpoint
- Vector store service created
- Can index documents with embeddings
- Can search by similarity
- Can delete documents
Step 145: Create RAG Query Endpoint
Input
- Step 144 complete
- Vector store functional
Constraints
- Endpoint: POST /ai/query
- Returns relevant document chunks
- Includes context for chat
Task
Create apps/ai/src/routes/rag.ts:
import { Router } from 'express';
import { z } from 'zod';
import { authMiddleware, AuthenticatedRequest } from '../middleware/auth';
import { searchSimilar, indexDocument } from '../services/vector-store';
import { AppError } from '../middleware/error-handler';
const router = Router();
const querySchema = z.object({
query: z.string().min(1).max(1000),
topK: z.number().min(1).max(20).optional().default(5),
});
const indexSchema = z.object({
sourceType: z.enum(['document', 'employee', 'policy']),
sourceId: z.string(),
title: z.string(),
content: z.string(),
metadata: z.record(z.unknown()).optional(),
});
// Query for similar documents
router.post('/query', authMiddleware, async (req, res, next) => {
try {
const authReq = req as AuthenticatedRequest;
const parsed = querySchema.safeParse(req.body);
if (!parsed.success) {
throw new AppError(400, 'VALIDATION_ERROR', parsed.error.message);
}
const { query, topK } = parsed.data;
const results = await searchSimilar(authReq.tenantId, query, topK);
res.json({
data: {
results,
query,
count: results.length,
},
});
} catch (error) {
next(error);
}
});
// Index a document
router.post('/index', authMiddleware, async (req, res, next) => {
try {
const authReq = req as AuthenticatedRequest;
const parsed = indexSchema.safeParse(req.body);
if (!parsed.success) {
throw new AppError(400, 'VALIDATION_ERROR', parsed.error.message);
}
const documentId = await indexDocument({
tenantId: authReq.tenantId,
...parsed.data,
});
res.json({
data: {
documentId,
indexed: true,
},
});
} catch (error) {
next(error);
}
});
export const ragRouter = router;
Update apps/ai/src/index.ts to add the route:
import express from 'express';
import cors from 'cors';
import helmet from 'helmet';
import { config } from './config';
import { errorHandler } from './middleware/error-handler';
import { healthRouter } from './routes/health';
import { ragRouter } from './routes/rag';
import { connectMongo } from './services/mongodb';
const app = express();
// Middleware
app.use(helmet());
app.use(cors({
origin: ['http://localhost:3000', 'http://localhost:3001'],
credentials: true,
}));
app.use(express.json({ limit: '10mb' }));
// Routes
app.use(healthRouter);
app.use('/ai', ragRouter);
// Error handling
app.use(errorHandler);
// Start server
async function start(): Promise<void> {
try {
await connectMongo();
const port = parseInt(config.PORT, 10);
app.listen(port, () => {
console.log(`🤖 AI Service running on http://localhost:${port}`);
console.log(` Environment: ${config.NODE_ENV}`);
});
} catch (error) {
console.error('Failed to start server:', error);
process.exit(1);
}
}
start();
Gate
cd apps/ai
npm run dev &
sleep 5
# Test query endpoint (should return empty results initially)
curl -X POST http://localhost:3002/ai/query \
-H "Content-Type: application/json" \
-H "x-tenant-id: test-tenant" \
-H "x-user-id: test-user" \
-d '{"query": "vacation policy", "topK": 5}'
# Should return: {"data":{"results":[],"query":"vacation policy","count":0}}
# Test validation
curl -X POST http://localhost:3002/ai/query \
-H "Content-Type: application/json" \
-H "x-tenant-id: test-tenant" \
-H "x-user-id: test-user" \
-d '{"query": ""}'
# Should return validation error
pkill -f "tsx watch"
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Missing tenant ID | No x-tenant-id header | Add header to request |
| 404 | Route not mounted | Check /ai prefix in index.ts |
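With a real OPENAI_API_KEY in place you can exercise both endpoints end to end. A scratch script (hypothetical content and IDs; save it inside apps/ai and run with npx tsx while the dev server is up):
async function ragRoundTrip() {
  const headers = {
    'Content-Type': 'application/json',
    'x-tenant-id': 'test-tenant',
    'x-user-id': 'test-user',
  };

  // Index a small policy document
  await fetch('http://localhost:3002/ai/index', {
    method: 'POST',
    headers,
    body: JSON.stringify({
      sourceType: 'policy',
      sourceId: 'policy-1',
      title: 'Vacation Policy',
      content: 'Employees accrue 20 vacation days per year. Unused days do not roll over.',
    }),
  });

  // Query it back semantically
  const res = await fetch('http://localhost:3002/ai/query', {
    method: 'POST',
    headers,
    body: JSON.stringify({ query: 'how many vacation days do I get?', topK: 3 }),
  });
  console.log(JSON.stringify(await res.json(), null, 2)); // top result should come from the Vacation Policy
}

ragRoundTrip();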
Rollback
rm apps/ai/src/routes/rag.ts
Lock
apps/ai/src/routes/rag.ts
Checkpoint
- RAG query endpoint created
- Index endpoint created
- Auth middleware applied
- Validation working
Step 146: Create Chat Tools Registry
Input
- Step 145 complete
- RAG endpoint available
Constraints
- Tools follow OpenAI function calling format
- Each tool has: name, description, parameters, handler
- Registry allows dynamic tool registration
Task
Create apps/ai/src/tools/registry.ts:
import { z, ZodSchema } from 'zod';
import { ChatContext } from '../types';
export interface ToolDefinition {
name: string;
description: string;
parameters: ZodSchema;
  // 'any' here (rather than 'unknown') lets tools register handlers typed against their own parsed args
  handler: (args: any, context: ChatContext) => Promise<unknown>;
}
export interface OpenAITool {
type: 'function';
function: {
name: string;
description: string;
parameters: {
type: 'object';
properties: Record<string, unknown>;
required: string[];
};
};
}
class ToolRegistry {
private tools: Map<string, ToolDefinition> = new Map();
register(tool: ToolDefinition): void {
this.tools.set(tool.name, tool);
console.log(`📦 Registered tool: ${tool.name}`);
}
get(name: string): ToolDefinition | undefined {
return this.tools.get(name);
}
getAll(): ToolDefinition[] {
return Array.from(this.tools.values());
}
async execute(
name: string,
args: unknown,
context: ChatContext,
): Promise<unknown> {
const tool = this.tools.get(name);
if (!tool) {
throw new Error(`Tool not found: ${name}`);
}
// Validate arguments
const parsed = tool.parameters.safeParse(args);
if (!parsed.success) {
throw new Error(`Invalid arguments for ${name}: ${parsed.error.message}`);
}
return tool.handler(parsed.data, context);
}
toOpenAITools(): OpenAITool[] {
return this.getAll().map((tool) => ({
type: 'function' as const,
function: {
name: tool.name,
description: tool.description,
parameters: this.zodToJsonSchema(tool.parameters),
},
}));
}
private zodToJsonSchema(schema: ZodSchema): {
type: 'object';
properties: Record<string, unknown>;
required: string[];
} {
// Simple conversion for our use case
// In production, use zod-to-json-schema package
const shape = (schema as z.ZodObject<any>).shape;
const properties: Record<string, unknown> = {};
const required: string[] = [];
for (const [key, value] of Object.entries(shape)) {
const zodType = value as z.ZodTypeAny;
properties[key] = this.zodTypeToJsonSchema(zodType);
if (!zodType.isOptional()) {
required.push(key);
}
}
return {
type: 'object',
properties,
required,
};
}
private zodTypeToJsonSchema(zodType: z.ZodTypeAny): unknown {
if (zodType instanceof z.ZodString) {
return { type: 'string', description: zodType.description };
}
if (zodType instanceof z.ZodNumber) {
return { type: 'number', description: zodType.description };
}
if (zodType instanceof z.ZodBoolean) {
return { type: 'boolean', description: zodType.description };
}
    if (zodType instanceof z.ZodOptional) {
      return this.zodTypeToJsonSchema(zodType.unwrap());
    }
    if (zodType instanceof z.ZodDefault) {
      // .default() wraps the inner type, e.g. z.number().optional().default(5) is a ZodDefault
      return this.zodTypeToJsonSchema(zodType.removeDefault());
    }
return { type: 'string' };
}
}
export const toolRegistry = new ToolRegistry();
Create apps/ai/src/tools/index.ts:
export { toolRegistry } from './registry';
export type { ToolDefinition, OpenAITool } from './registry';
Gate
cd apps/ai
cat > test-registry.tmp.ts << 'EOF'
import { z } from 'zod';
import { toolRegistry } from './src/tools/registry';
// Register a test tool
toolRegistry.register({
name: 'test_tool',
description: 'A test tool',
parameters: z.object({
message: z.string().describe('The message to echo'),
}),
handler: async (args: { message: string }) => {
return { echo: args.message };
},
});
// Test tool retrieval
const tool = toolRegistry.get('test_tool');
console.log('Tool found:', !!tool);
// Test OpenAI format
const openaiTools = toolRegistry.toOpenAITools();
console.log('OpenAI tools:', JSON.stringify(openaiTools, null, 2));
// Test execution
async function testExec() {
const result = await toolRegistry.execute(
'test_tool',
{ message: 'Hello' },
{ tenantId: 't1', userId: 'u1' },
);
console.log('Execution result:', result);
console.log('✅ Registry test passed');
}
testExec();
EOF
npx tsx test-registry.tmp.ts
rm test-registry.tmp.ts
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Tool not found | Tool not registered | Call register() first |
| Zod type error | Complex nested types | Simplify or use zod-to-json-schema |
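If tool parameters ever outgrow flat string/number/boolean shapes, the hand-rolled conversion above can be swapped for the zod-to-json-schema package, as the Known Limitations note suggests. A sketch, assuming the package is added as a dependency (it is not installed in this phase):
import { z } from 'zod';
import { zodToJsonSchema } from 'zod-to-json-schema';

const params = z.object({
  query: z.string().describe('Search query'),
  filters: z
    .object({
      department: z.string().optional(),
      location: z.string().optional(),
    })
    .optional(),
});

// Produces a full JSON Schema (type/properties/required) including the nested object,
// which the registry could return in place of its private zodToJsonSchema output
console.log(JSON.stringify(zodToJsonSchema(params), null, 2));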
Rollback
rm -rf apps/ai/src/tools
Lock
apps/ai/src/tools/registry.ts
apps/ai/src/tools/index.ts
Checkpoint
- Tool registry created
- Can register tools
- Can convert to OpenAI format
- Can execute tools with validation
Step 147: Implement employee_search Tool
Input
- Step 146 complete
- Tool registry available
Prerequisites - Required HRMS Endpoints
This phase assumes these endpoints exist from previous phases:
- GET /api/v1/employees?search= (Phase 02 - may need search param added)
- GET /api/v1/org/employees/:id/summary (Phase 03 - may need to be added)
- GET /api/v1/timeoff/balances/:employeeId (Phase 05 - verify exists)
If these endpoints don't exist, the tools will fail gracefully with empty results. Verify or add missing endpoints before testing tool integration.
Constraints
- Searches HRMS API for employees
- Returns basic info: id, name, title, department
- Respects tenant context
Task
Create apps/ai/src/services/hrms-client.ts:
import { config } from '../config';
interface HrmsRequestOptions {
tenantId: string;
method?: 'GET' | 'POST' | 'PATCH' | 'DELETE';
body?: unknown;
}
// NOTE: Service-to-service auth requires a ServiceSecretGuard in NestJS API
// that validates x-service-secret header. Add to @nestjs/api guards:
//
// @Injectable()
// export class ServiceSecretGuard implements CanActivate {
// canActivate(context: ExecutionContext): boolean {
// const request = context.switchToHttp().getRequest();
// const secret = request.headers['x-service-secret'];
// return secret === process.env.SERVICE_SECRET;
// }
// }
//
// Apply to endpoints that AI service calls (employee search, org structure, etc.)
export async function hrmsRequest<T>(
path: string,
options: HrmsRequestOptions,
): Promise<T> {
const { tenantId, method = 'GET', body } = options;
const response = await fetch(`${config.HRMS_API_URL}${path}`, {
method,
headers: {
'Content-Type': 'application/json',
'x-tenant-id': tenantId,
'x-service-secret': config.HRMS_API_SECRET,
},
body: body ? JSON.stringify(body) : undefined,
});
if (!response.ok) {
    // response.json() is typed as unknown under @types/node, so cast before reading fields
    const error = (await response.json().catch(() => ({}))) as { message?: string };
    throw new Error(`HRMS API error: ${response.status} - ${error.message || 'Unknown error'}`);
  }
  const result = (await response.json()) as { data: T };
  return result.data;
}
Create apps/ai/src/tools/employee-search.ts:
import { z } from 'zod';
import { toolRegistry } from './registry';
import { hrmsRequest } from '../services/hrms-client';
import { ChatContext } from '../types';
const parametersSchema = z.object({
query: z.string().describe('Search query for employee name, email, or job title'),
limit: z.number().min(1).max(10).optional().default(5).describe('Maximum results to return'),
});
interface EmployeeSearchResult {
id: string;
firstName: string;
lastName: string;
email: string;
jobTitle: string | null;
department: string | null;
}
interface EmployeesResponse {
items: Array<{
id: string;
firstName: string;
lastName: string;
email: string;
jobTitle: string | null;
departments: Array<{ department: { name: string }; isPrimary: boolean }>;
}>;
}
async function handler(
args: z.infer<typeof parametersSchema>,
context: ChatContext,
): Promise<EmployeeSearchResult[]> {
const { query, limit } = args;
try {
const response = await hrmsRequest<EmployeesResponse>(
`/api/v1/employees?search=${encodeURIComponent(query)}&limit=${limit}`,
{ tenantId: context.tenantId },
);
return response.items.map((emp) => ({
id: emp.id,
firstName: emp.firstName,
lastName: emp.lastName,
email: emp.email,
jobTitle: emp.jobTitle,
department: emp.departments.find((d) => d.isPrimary)?.department.name || null,
}));
} catch (error) {
console.error('Employee search failed:', error);
return [];
}
}
export function registerEmployeeSearchTool(): void {
toolRegistry.register({
name: 'employee_search',
description: 'Search for employees by name, email, or job title. Use this to find people in the organization.',
parameters: parametersSchema,
handler,
});
}
Gate
cd apps/ai
cat > test-employee-tool.tmp.ts << 'EOF'
import { toolRegistry } from './src/tools/registry';
import { registerEmployeeSearchTool } from './src/tools/employee-search';
registerEmployeeSearchTool();
const tool = toolRegistry.get('employee_search');
console.log('Tool registered:', !!tool);
console.log('Tool name:', tool?.name);
console.log('Tool description:', tool?.description);
// Check OpenAI format
const openaiTools = toolRegistry.toOpenAITools();
const employeeTool = openaiTools.find(t => t.function.name === 'employee_search');
console.log('OpenAI format:', JSON.stringify(employeeTool, null, 2));
console.log('✅ Employee search tool test passed');
EOF
npx tsx test-employee-tool.tmp.ts
rm test-employee-tool.tmp.ts
Common Errors
| Error | Cause | Fix |
|---|---|---|
| HRMS API error | API not running | Start NestJS server |
| fetch is not defined | Old Node version | Use Node 20+ |
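The comment block in hrms-client.ts sketches the guard the NestJS API needs before these callbacks are allowed through. Expanded into a standalone file it could look like the following - a sketch only; the path apps/api/src/auth/service-secret.guard.ts and the SERVICE_SECRET variable name are assumptions, and SERVICE_SECRET must match HRMS_API_SECRET in apps/ai/.env:
import { CanActivate, ExecutionContext, Injectable, UnauthorizedException } from '@nestjs/common';

@Injectable()
export class ServiceSecretGuard implements CanActivate {
  canActivate(context: ExecutionContext): boolean {
    const request = context.switchToHttp().getRequest();
    const secret = request.headers['x-service-secret'];
    // Reject callbacks that do not present the shared service secret
    if (!secret || secret !== process.env.SERVICE_SECRET) {
      throw new UnauthorizedException('Invalid service secret');
    }
    return true;
  }
}
Apply it with @UseGuards(ServiceSecretGuard) on the endpoints the AI service calls (employee search, org summary, time-off balances).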
Rollback
rm apps/ai/src/tools/employee-search.ts
rm apps/ai/src/services/hrms-client.ts
Lock
apps/ai/src/tools/employee-search.ts
apps/ai/src/services/hrms-client.ts
Checkpoint
- HRMS client created
- employee_search tool registered
- Converts to OpenAI format correctly
Step 148: Implement document_search Tool
Input
- Step 147 complete
- Vector store available
Constraints
- Uses RAG for semantic search
- Returns document snippets with context
- Scores results by relevance
Task
Create apps/ai/src/tools/document-search.ts:
import { z } from 'zod';
import { toolRegistry } from './registry';
import { searchSimilar } from '../services/vector-store';
import { ChatContext } from '../types';
const parametersSchema = z.object({
query: z.string().describe('Search query for finding relevant documents'),
limit: z.number().min(1).max(10).optional().default(5).describe('Maximum results to return'),
});
interface DocumentSearchResult {
title: string;
content: string;
sourceType: string;
relevanceScore: number;
}
async function handler(
args: z.infer<typeof parametersSchema>,
context: ChatContext,
): Promise<DocumentSearchResult[]> {
const { query, limit } = args;
const results = await searchSimilar(context.tenantId, query, limit);
return results.map((result) => ({
title: result.title,
content: result.content,
sourceType: result.sourceType,
relevanceScore: Math.round(result.score * 100) / 100,
}));
}
export function registerDocumentSearchTool(): void {
toolRegistry.register({
name: 'document_search',
description: 'Search company documents, policies, and knowledge base using semantic search. Use this to find information about company policies, procedures, or any documented knowledge.',
parameters: parametersSchema,
handler,
});
}
Gate
cd apps/ai
cat > test-doc-tool.tmp.ts << 'EOF'
import { toolRegistry } from './src/tools/registry';
import { registerDocumentSearchTool } from './src/tools/document-search';
registerDocumentSearchTool();
const tool = toolRegistry.get('document_search');
console.log('Tool registered:', !!tool);
console.log('Tool name:', tool?.name);
const openaiTools = toolRegistry.toOpenAITools();
const docTool = openaiTools.find(t => t.function.name === 'document_search');
console.log('Has parameters:', !!docTool?.function.parameters);
console.log('✅ Document search tool test passed');
EOF
npx tsx test-doc-tool.tmp.ts
rm test-doc-tool.tmp.ts
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Empty results | No documents indexed | Index documents first |
Rollback
rm apps/ai/src/tools/document-search.ts
Lock
apps/ai/src/tools/document-search.ts
Checkpoint
- document_search tool registered
- Uses vector store for semantic search
- Returns relevance scores
Step 149: Implement org_explain Tool
Input
- Step 148 complete
- HRMS client available
Constraints
- Explains employee's org position
- Shows managers, reports, teams
- Natural language summary
Task
Create apps/ai/src/tools/org-explain.ts:
import { z } from 'zod';
import { toolRegistry } from './registry';
import { hrmsRequest } from '../services/hrms-client';
import { ChatContext } from '../types';
const parametersSchema = z.object({
employeeId: z.string().describe('The employee ID to explain org structure for'),
});
interface OrgSummary {
employee: {
id: string;
name: string;
jobTitle: string | null;
};
primaryManager: {
id: string;
name: string;
jobTitle: string | null;
} | null;
dottedLineManagers: Array<{
id: string;
name: string;
jobTitle: string | null;
}>;
directReports: number;
teams: string[];
departments: string[];
}
interface OrgApiResponse {
employee: {
id: string;
firstName: string;
lastName: string;
jobTitle: string | null;
};
primaryManager: {
id: string;
firstName: string;
lastName: string;
jobTitle: string | null;
} | null;
dottedLineManagers: Array<{
id: string;
firstName: string;
lastName: string;
jobTitle: string | null;
}>;
directReports: Array<unknown>;
teams: Array<{ team: { name: string } }>;
departments: Array<{ department: { name: string } }>;
}
async function handler(
args: z.infer<typeof parametersSchema>,
context: ChatContext,
): Promise<OrgSummary> {
const { employeeId } = args;
const response = await hrmsRequest<OrgApiResponse>(
`/api/v1/org/employees/${employeeId}/summary`,
{ tenantId: context.tenantId },
);
return {
employee: {
id: response.employee.id,
name: `${response.employee.firstName} ${response.employee.lastName}`,
jobTitle: response.employee.jobTitle,
},
primaryManager: response.primaryManager
? {
id: response.primaryManager.id,
name: `${response.primaryManager.firstName} ${response.primaryManager.lastName}`,
jobTitle: response.primaryManager.jobTitle,
}
: null,
dottedLineManagers: response.dottedLineManagers.map((m) => ({
id: m.id,
name: `${m.firstName} ${m.lastName}`,
jobTitle: m.jobTitle,
})),
directReports: response.directReports.length,
teams: response.teams.map((t) => t.team.name),
departments: response.departments.map((d) => d.department.name),
};
}
export function registerOrgExplainTool(): void {
toolRegistry.register({
name: 'org_explain',
description: 'Get detailed org structure information for an employee including their manager, dotted line managers, direct reports, teams, and departments.',
parameters: parametersSchema,
handler,
});
}
Gate
cd apps/ai
cat > test-org-tool.tmp.ts << 'EOF'
import { toolRegistry } from './src/tools/registry';
import { registerOrgExplainTool } from './src/tools/org-explain';
registerOrgExplainTool();
const tool = toolRegistry.get('org_explain');
console.log('Tool registered:', !!tool);
console.log('Tool name:', tool?.name);
console.log('✅ Org explain tool test passed');
EOF
npx tsx test-org-tool.tmp.ts
rm test-org-tool.tmp.ts
Common Errors
| Error | Cause | Fix |
|---|---|---|
| 404 | Employee not found | Verify employee exists |
Rollback
rm apps/ai/src/tools/org-explain.ts
Lock
apps/ai/src/tools/org-explain.ts
Checkpoint
- org_explain tool registered
- Fetches org summary from HRMS
- Returns structured org info
Step 150: Implement timeoff_balance Tool
Input
- Step 149 complete
- HRMS client available
Constraints
- Gets employee's time-off balances
- Shows all policy balances
- Includes used and remaining days
Task
Create apps/ai/src/tools/timeoff-balance.ts:
import { z } from 'zod';
import { toolRegistry } from './registry';
import { hrmsRequest } from '../services/hrms-client';
import { ChatContext } from '../types';
const parametersSchema = z.object({
employeeId: z.string().optional().describe('The employee ID to check balance for. If not provided, returns the current user\'s balance.'),
});
interface TimeOffBalance {
policyName: string;
entitled: number;
used: number;
pending: number;
available: number;
}
// Response shape matches Phase 05 TimeOffBalanceService
interface BalanceApiResponse {
data: Array<{
policyName: string;
entitled: number;
used: number;
pending: number;
available: number;
}>;
}
async function handler(
args: z.infer<typeof parametersSchema>,
context: ChatContext,
): Promise<TimeOffBalance[]> {
// Use provided employeeId or fall back to current user's employee
const employeeId = args.employeeId || context.employeeId;
if (!employeeId) {
throw new Error('No employee ID provided and current user has no linked employee');
}
const response = await hrmsRequest<BalanceApiResponse>(
`/api/v1/timeoff/balances/${employeeId}`,
{ tenantId: context.tenantId },
);
return response.data.map((balance) => ({
policyName: balance.policyName,
entitled: balance.entitled,
used: balance.used,
pending: balance.pending,
available: balance.available,
}));
}
export function registerTimeoffBalanceTool(): void {
toolRegistry.register({
name: 'timeoff_balance',
description: 'Get time-off balance for an employee showing available days, used days, and pending requests for each policy type (vacation, sick leave, etc.).',
parameters: parametersSchema,
handler,
});
}
Gate
cd apps/ai
cat > test-timeoff-tool.tmp.ts << 'EOF'
import { toolRegistry } from './src/tools/registry';
import { registerTimeoffBalanceTool } from './src/tools/timeoff-balance';
registerTimeoffBalanceTool();
const tool = toolRegistry.get('timeoff_balance');
console.log('Tool registered:', !!tool);
console.log('Tool name:', tool?.name);
console.log('✅ Timeoff balance tool test passed');
EOF
npx tsx test-timeoff-tool.tmp.ts
rm test-timeoff-tool.tmp.ts
Common Errors
| Error | Cause | Fix |
|---|---|---|
| No employee ID | Missing context | Ensure employeeId in context |
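Because employeeId is optional, the handler falls back to context.employeeId, which is how questions like "what's my balance?" work without the model knowing the caller's ID. A quick illustration - a sketch that assumes the HRMS API is running and that emp-123 exists in your seed data; save it inside apps/ai and run with npx tsx:
import { toolRegistry } from './src/tools/registry';
import { registerTimeoffBalanceTool } from './src/tools/timeoff-balance';

async function demo() {
  registerTimeoffBalanceTool();
  const balances = await toolRegistry.execute(
    'timeoff_balance',
    {}, // the model supplied no employeeId...
    { tenantId: 'test-tenant', userId: 'test-user', employeeId: 'emp-123' }, // ...so the context value is used
  );
  console.log(balances);
}

demo();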
Rollback
rm apps/ai/src/tools/timeoff-balance.ts
Lock
apps/ai/src/tools/timeoff-balance.ts
Checkpoint
- timeoff_balance tool registered
- Fetches balances from HRMS
- Calculates remaining days
Step 151: Create Chat Endpoint
Input
- Step 150 complete
- All tools registered
Constraints
- POST /ai/chat
- Complete JSON responses (no streaming in the MVP; see Known Limitations)
- Tool calling with auto-execution
- Conversation history support
Task
Create apps/ai/src/services/chat.ts:
import OpenAI from 'openai';
import { getOpenAI, CHAT_MODEL } from './openai';
import { toolRegistry } from '../tools';
import { ChatContext, ChatMessage, ChatRequest, ToolCall, ToolResult } from '../types';
const SYSTEM_PROMPT = `You are an AI assistant for the HRMS (Human Resource Management System). You help employees with:
- Finding information about colleagues (use employee_search)
- Looking up company policies and documents (use document_search)
- Understanding org structure and reporting lines (use org_explain)
- Checking time-off balances (use timeoff_balance)
Always be helpful, professional, and concise. When you use a tool, explain what you found in natural language.
If you cannot find the information, say so clearly.`;
export interface ChatResult {
message: string;
toolCalls: ToolCall[];
toolResults: ToolResult[];
}
export async function processChat(
request: ChatRequest,
): Promise<ChatResult> {
const openai = getOpenAI();
const tools = toolRegistry.toOpenAITools();
// Build messages
const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
{ role: 'system', content: SYSTEM_PROMPT },
...(request.conversationHistory || []).map((msg) => ({
role: msg.role as 'user' | 'assistant',
content: msg.content,
})),
{ role: 'user', content: request.message },
];
const toolCalls: ToolCall[] = [];
const toolResults: ToolResult[] = [];
// First call - might include tool calls
const response = await openai.chat.completions.create({
model: CHAT_MODEL,
messages,
tools: tools.length > 0 ? tools : undefined,
tool_choice: tools.length > 0 ? 'auto' : undefined,
});
const assistantMessage = response.choices[0].message;
// If there are tool calls, execute them
if (assistantMessage.tool_calls && assistantMessage.tool_calls.length > 0) {
// Add assistant message with tool calls to messages
messages.push(assistantMessage);
// Execute each tool call
for (const toolCall of assistantMessage.tool_calls) {
const call: ToolCall = {
id: toolCall.id,
name: toolCall.function.name,
arguments: JSON.parse(toolCall.function.arguments),
};
toolCalls.push(call);
try {
const result = await toolRegistry.execute(
toolCall.function.name,
JSON.parse(toolCall.function.arguments),
request.context,
);
const toolResult: ToolResult = {
toolCallId: toolCall.id,
name: toolCall.function.name,
result,
};
toolResults.push(toolResult);
// Add tool result to messages
messages.push({
role: 'tool',
tool_call_id: toolCall.id,
content: JSON.stringify(result),
});
} catch (error) {
const errorResult: ToolResult = {
toolCallId: toolCall.id,
name: toolCall.function.name,
result: { error: error instanceof Error ? error.message : 'Unknown error' },
};
toolResults.push(errorResult);
messages.push({
role: 'tool',
tool_call_id: toolCall.id,
content: JSON.stringify({ error: 'Tool execution failed' }),
});
}
}
// Get final response after tool execution
const finalResponse = await openai.chat.completions.create({
model: CHAT_MODEL,
messages,
});
return {
message: finalResponse.choices[0].message.content || '',
toolCalls,
toolResults,
};
}
// No tool calls, return direct response
return {
message: assistantMessage.content || '',
toolCalls: [],
toolResults: [],
};
}
Create apps/ai/src/routes/chat.ts:
import { Router } from 'express';
import { z } from 'zod';
import { authMiddleware, AuthenticatedRequest } from '../middleware/auth';
import { processChat } from '../services/chat';
import { AppError } from '../middleware/error-handler';
const router = Router();
const chatSchema = z.object({
message: z.string().min(1).max(4000),
conversationHistory: z
.array(
z.object({
role: z.enum(['user', 'assistant']),
content: z.string(),
}),
)
.optional(),
context: z
.object({
employeeId: z.string().optional(),
})
.optional(),
});
router.post('/chat', authMiddleware, async (req, res, next) => {
try {
const authReq = req as AuthenticatedRequest;
const parsed = chatSchema.safeParse(req.body);
if (!parsed.success) {
throw new AppError(400, 'VALIDATION_ERROR', parsed.error.message);
}
const { message, conversationHistory, context: additionalContext } = parsed.data;
const result = await processChat({
message,
conversationHistory,
context: {
tenantId: authReq.tenantId,
userId: authReq.userId,
employeeId: additionalContext?.employeeId,
},
});
res.json({
data: result,
});
} catch (error) {
next(error);
}
});
export const chatRouter = router;
Create apps/ai/src/tools/init.ts:
import { registerEmployeeSearchTool } from './employee-search';
import { registerDocumentSearchTool } from './document-search';
import { registerOrgExplainTool } from './org-explain';
import { registerTimeoffBalanceTool } from './timeoff-balance';
export function initializeTools(): void {
registerEmployeeSearchTool();
registerDocumentSearchTool();
registerOrgExplainTool();
registerTimeoffBalanceTool();
console.log('✅ All chat tools initialized');
}
Update apps/ai/src/index.ts:
import express from 'express';
import cors from 'cors';
import helmet from 'helmet';
import { config } from './config';
import { errorHandler } from './middleware/error-handler';
import { healthRouter } from './routes/health';
import { ragRouter } from './routes/rag';
import { chatRouter } from './routes/chat';
import { connectMongo } from './services/mongodb';
import { initializeTools } from './tools/init';
const app = express();
// Middleware
app.use(helmet());
app.use(cors({
origin: ['http://localhost:3000', 'http://localhost:3001'],
credentials: true,
}));
app.use(express.json({ limit: '10mb' }));
// Routes
app.use(healthRouter);
app.use('/ai', ragRouter);
app.use('/ai', chatRouter);
// Error handling
app.use(errorHandler);
// Start server
async function start(): Promise<void> {
try {
await connectMongo();
initializeTools();
const port = parseInt(config.PORT, 10);
app.listen(port, () => {
console.log(`🤖 AI Service running on http://localhost:${port}`);
console.log(` Environment: ${config.NODE_ENV}`);
});
} catch (error) {
console.error('Failed to start server:', error);
process.exit(1);
}
}
start();
Gate
cd apps/ai
npm run dev &
sleep 5
# Test chat endpoint
curl -X POST http://localhost:3002/ai/chat \
-H "Content-Type: application/json" \
-H "x-tenant-id: test-tenant" \
-H "x-user-id: test-user" \
-d '{"message": "Hello, what can you help me with?"}'
# Should return a helpful response explaining capabilities
pkill -f "tsx watch"
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Invalid API key | Missing OpenAI key | Set OPENAI_API_KEY in .env |
| Tool not found | Tools not initialized | Call initializeTools() |
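Beyond the simple greeting in the gate, you can exercise tool calling and conversation history in one request. A scratch script (hypothetical IDs; it needs a real OPENAI_API_KEY, the HRMS API running for the tool callback, and the dev server up):
async function chatDemo() {
  const res = await fetch('http://localhost:3002/ai/chat', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-tenant-id': 'test-tenant',
      'x-user-id': 'test-user',
    },
    body: JSON.stringify({
      message: 'How many vacation days do I have left?',
      conversationHistory: [
        { role: 'user', content: 'Hi' },
        { role: 'assistant', content: 'Hello! How can I help you today?' },
      ],
      context: { employeeId: 'emp-123' }, // hypothetical employee id from your seed data
    }),
  });
  const body = (await res.json()) as { data: { message: string; toolCalls: unknown[] } };
  console.log(body.data.message);   // natural-language answer
  console.log(body.data.toolCalls); // should include a timeoff_balance call
}

chatDemo();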
Rollback
rm apps/ai/src/services/chat.ts
rm apps/ai/src/routes/chat.ts
rm apps/ai/src/tools/init.ts
Lock
apps/ai/src/services/chat.ts
apps/ai/src/routes/chat.ts
apps/ai/src/tools/init.ts
Checkpoint
- Chat service created
- Chat route created
- Tools initialized on startup
- Tool execution working
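Note for Step 151: the gate above only sends a bare message, but chatSchema also accepts conversationHistory and an optional context.employeeId. A minimal TypeScript smoke test of the full request shape (runnable with tsx under Node 20+; the header and id values are placeholders):
// Smoke test for the full /ai/chat request body accepted by chatSchema.
async function smokeTestChat(): Promise<void> {
  const response = await fetch('http://localhost:3002/ai/chat', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-tenant-id': 'test-tenant', // placeholder
      'x-user-id': 'test-user', // placeholder
    },
    body: JSON.stringify({
      message: 'Who reports to the head of engineering?',
      conversationHistory: [
        { role: 'user', content: 'Hello' },
        { role: 'assistant', content: 'Hi! How can I help you today?' },
      ],
      context: { employeeId: 'emp_123' }, // placeholder id
    }),
  });
  console.log(response.status, await response.json());
}
smokeTestChat().catch(console.error);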
Step 151.5: Create NestJS AI Proxy Controller
Input
- Step 151 complete
- AI service running on port 3002
Constraints
- Frontend calls NestJS API (port 3001), not AI service directly
- NestJS proxies requests to AI service
- Maintains tenant and user context
Why This Step is Needed
The frontend api.ts talks to NestJS (port 3001). The AI service runs on port 3002.
Without this proxy, the frontend cannot reach the AI service.
Task
Create apps/api/src/ai/ai-proxy.controller.ts:
import { Controller, Post, Body, UseGuards } from '@nestjs/common';
import { TenantGuard } from '../tenant/tenant.guard';
import { TenantId } from '../tenant/tenant.decorator';
import { CurrentUser } from '../auth/current-user.decorator';
interface AuthUser {
id: string;
}
@Controller('api/v1/ai')
@UseGuards(TenantGuard)
export class AiProxyController {
private readonly aiServiceUrl = process.env.AI_SERVICE_URL || 'http://localhost:3002';
@Post('chat')
async proxyChat(
@TenantId() tenantId: string,
@CurrentUser() user: AuthUser,
@Body() body: unknown,
) {
const response = await fetch(`${this.aiServiceUrl}/ai/chat`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-tenant-id': tenantId,
'x-user-id': user.id,
},
body: JSON.stringify(body),
});
if (!response.ok) {
const error = await response.json().catch(() => ({}));
throw new Error(error.message || 'AI service error');
}
return response.json();
}
@Post('query')
async proxyQuery(
@TenantId() tenantId: string,
@CurrentUser() user: AuthUser,
@Body() body: unknown,
) {
const response = await fetch(`${this.aiServiceUrl}/ai/query`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-tenant-id': tenantId,
'x-user-id': user.id,
},
body: JSON.stringify(body),
});
if (!response.ok) {
const error = await response.json().catch(() => ({}));
throw new Error(error.message || 'AI service error');
}
return response.json();
}
}
Create apps/api/src/ai/ai.module.ts:
import { Module } from '@nestjs/common';
import { AiProxyController } from './ai-proxy.controller';
@Module({
controllers: [AiProxyController],
})
export class AiModule {}
Register in apps/api/src/app.module.ts:
import { AiModule } from './ai/ai.module';
@Module({
imports: [
// ... existing modules
AiModule,
],
})
export class AppModule {}
Add to .env:
AI_SERVICE_URL=http://localhost:3002
Gate
cd apps/api
# Verify module exists
cat src/ai/ai.module.ts
# Should show AiModule with AiProxyController
# Verify controller exists
cat src/ai/ai-proxy.controller.ts
# Should show proxy endpoints
# Build check
npm run build
# Should complete without errors
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Cannot find module './ai/ai.module' | Not imported in app.module | Add import |
| fetch is not defined | Old Node version | Use Node 20+ |
Rollback
rm -rf apps/api/src/ai
# Remove AiModule from app.module.ts imports
Lock
apps/api/src/ai/ai-proxy.controller.ts
apps/api/src/ai/ai.module.ts
Checkpoint
- AI proxy controller created
- AI module created
- Registered in app.module.ts
- Build succeeds
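Optional refinement for Step 151.5: throwing a plain Error from proxyChat/proxyQuery surfaces every upstream failure as a 500, even when the AI service returned a 400 or 401. A sketch of a shared helper that preserves the upstream status code (forwardToAiService is a hypothetical name, not part of the spec); both handlers could delegate to it, e.g. return forwardToAiService(this.aiServiceUrl, '/ai/chat', tenantId, user.id, body):
import { HttpException, HttpStatus } from '@nestjs/common';

// Forwards a POST to the AI service and maps upstream errors to the same
// HTTP status instead of collapsing everything into a generic 500.
export async function forwardToAiService(
  aiServiceUrl: string,
  path: string,
  tenantId: string,
  userId: string,
  body: unknown,
): Promise<unknown> {
  const response = await fetch(`${aiServiceUrl}${path}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-tenant-id': tenantId,
      'x-user-id': userId,
    },
    body: JSON.stringify(body),
  });
  if (!response.ok) {
    const error = (await response.json().catch(() => ({}))) as { message?: string };
    throw new HttpException(
      error.message || 'AI service error',
      response.status || HttpStatus.BAD_GATEWAY,
    );
  }
  return response.json();
}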
Step 152: Create AI Chat Widget (Frontend)
Input
- Step 151.5 complete (NestJS AI proxy)
- AI service running on port 3002
Important Note
Steps 152, 153, and 154 create interdependent components. The build may fail until all three steps are complete. This is expected. Complete all three steps before running the final build verification.
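If you would rather keep intermediate builds green, one optional workaround is to commit a throwaway stub for the component Step 153 will create (it is overwritten wholesale in that step); otherwise simply accept the temporary build failure as described above:
// apps/web/features/ai-chat/components/ChatMessage.tsx - TEMPORARY stub,
// replaced by the real component in Step 153.
'use client';
import { ChatMessage } from '../types';

export function ChatMessageItem({ message }: { message: ChatMessage }) {
  return <div className="text-sm">{message.content}</div>;
}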
Constraints
- Floating button in bottom-right
- Opens chat panel on click
- Uses shadcn/ui components
- Available on all dashboard pages
Prerequisites
Ensure the @/features/* path alias exists in apps/web/tsconfig.json:
{
"compilerOptions": {
"paths": {
"@/*": ["./*"],
"@/features/*": ["./features/*"]
}
}
}
Task
Create folder structure:
mkdir -p apps/web/features/ai-chat/components
mkdir -p apps/web/features/ai-chat/hooks
mkdir -p apps/web/features/ai-chat/types
Create apps/web/features/ai-chat/types/index.ts:
export interface ChatMessage {
id: string;
role: 'user' | 'assistant';
content: string;
timestamp: Date;
toolCalls?: ToolCall[];
toolResults?: ToolResult[];
}
export interface ToolCall {
id: string;
name: string;
arguments: Record<string, unknown>;
}
export interface ToolResult {
toolCallId: string;
name: string;
result: unknown;
}
export interface ChatResponse {
data: {
message: string;
toolCalls?: ToolCall[];
toolResults?: ToolResult[];
};
}
Create apps/web/features/ai-chat/hooks/use-chat.ts:
'use client';
import { useState, useCallback } from 'react';
import { ChatMessage, ChatResponse } from '../types';
import { api } from '@/lib/api';
export function useChat() {
const [messages, setMessages] = useState<ChatMessage[]>([]);
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState<string | null>(null);
const sendMessage = useCallback(async (content: string) => {
if (!content.trim()) return;
const userMessage: ChatMessage = {
id: crypto.randomUUID(),
role: 'user',
content: content.trim(),
timestamp: new Date(),
};
setMessages((prev) => [...prev, userMessage]);
setIsLoading(true);
setError(null);
try {
const conversationHistory = messages.map((msg) => ({
role: msg.role,
content: msg.content,
}));
const response = await api.post<ChatResponse['data']>('/ai/chat', {
message: content.trim(),
conversationHistory,
});
const assistantMessage: ChatMessage = {
id: crypto.randomUUID(),
role: 'assistant',
content: response.data.message,
timestamp: new Date(),
toolCalls: response.data.toolCalls,
toolResults: response.data.toolResults,
};
setMessages((prev) => [...prev, assistantMessage]);
} catch (err) {
setError(err instanceof Error ? err.message : 'Failed to send message');
} finally {
setIsLoading(false);
}
}, [messages]);
const clearMessages = useCallback(() => {
setMessages([]);
setError(null);
}, []);
return {
messages,
isLoading,
error,
sendMessage,
clearMessages,
};
}
Create apps/web/features/ai-chat/components/ChatWidget.tsx:
'use client';
import { useState } from 'react';
import { MessageSquare, X, Minimize2 } from 'lucide-react';
import { Button } from '@/components/ui/button';
import { ChatPanel } from './ChatPanel';
export function ChatWidget() {
const [isOpen, setIsOpen] = useState(false);
const [isMinimized, setIsMinimized] = useState(false);
if (!isOpen) {
return (
<Button
onClick={() => setIsOpen(true)}
className="fixed bottom-6 right-6 h-14 w-14 rounded-full shadow-lg z-50"
size="icon"
>
<MessageSquare className="h-6 w-6" />
</Button>
);
}
if (isMinimized) {
return (
<div className="fixed bottom-6 right-6 z-50">
<Button
onClick={() => setIsMinimized(false)}
className="h-14 px-4 rounded-full shadow-lg"
>
<MessageSquare className="h-5 w-5 mr-2" />
AI Assistant
</Button>
</div>
);
}
return (
<div className="fixed bottom-6 right-6 w-96 h-[500px] bg-background rounded-2xl shadow-xl shadow-gray-200/50 z-50 flex flex-col overflow-hidden">
{/* Header */}
<div className="flex items-center justify-between p-6 bg-gradient-to-r from-violet-500 via-purple-500 to-fuchsia-500 text-white">
<div className="flex items-center gap-2">
<MessageSquare className="h-5 w-5" />
<span className="font-semibold">AI Assistant</span>
</div>
<div className="flex gap-1">
<Button
variant="ghost"
size="icon"
onClick={() => setIsMinimized(true)}
className="h-8 w-8"
>
<Minimize2 className="h-4 w-4" />
</Button>
<Button
variant="ghost"
size="icon"
onClick={() => setIsOpen(false)}
className="h-8 w-8"
>
<X className="h-4 w-4" />
</Button>
</div>
</div>
{/* Chat Panel */}
<ChatPanel />
</div>
);
}
Create apps/web/features/ai-chat/components/ChatPanel.tsx:
'use client';
import { useState, useRef, useEffect } from 'react';
import { Send, Loader2 } from 'lucide-react';
import { Button } from '@/components/ui/button';
import { Input } from '@/components/ui/input';
import { ScrollArea } from '@/components/ui/scroll-area';
import { useChat } from '../hooks/use-chat';
import { ChatMessageItem } from './ChatMessage';
export function ChatPanel() {
const [input, setInput] = useState('');
const messagesEndRef = useRef<HTMLDivElement>(null);
const { messages, isLoading, error, sendMessage } = useChat();
useEffect(() => {
messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
}, [messages]);
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (input.trim() && !isLoading) {
sendMessage(input);
setInput('');
}
};
return (
<>
{/* Messages */}
<ScrollArea className="flex-1 p-4">
{messages.length === 0 ? (
<div className="text-center text-muted-foreground py-8">
<p className="mb-2">Hi! I'm your AI assistant.</p>
<p className="text-sm">
Ask me about employees, policies, org structure, or time-off balances.
</p>
</div>
) : (
<div className="space-y-4">
{messages.map((message) => (
<ChatMessageItem key={message.id} message={message} />
))}
{isLoading && (
<div className="flex items-center gap-2 text-muted-foreground">
<Loader2 className="h-4 w-4 animate-spin" />
<span className="text-sm">Thinking...</span>
</div>
)}
<div ref={messagesEndRef} />
</div>
)}
{error && (
<div className="text-destructive text-sm mt-2">
Error: {error}
</div>
)}
</ScrollArea>
{/* Input */}
<form onSubmit={handleSubmit} className="p-4 bg-gray-50">
<div className="flex gap-2">
<Input
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Ask me anything..."
disabled={isLoading}
className="flex-1"
/>
<Button type="submit" size="icon" disabled={isLoading || !input.trim()}>
<Send className="h-4 w-4" />
</Button>
</div>
</form>
</>
);
}
Gate
cd apps/web
# Verify files exist
ls features/ai-chat/components/
# Should show: ChatWidget.tsx, ChatPanel.tsx
ls features/ai-chat/hooks/
# Should show: use-chat.ts
# Build check
npm run build
# Should complete without errors related to ai-chat
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Module not found: @/features | Path alias missing | Add to tsconfig.json |
| Cannot find ScrollArea | shadcn component missing | Install: npx shadcn@latest add scroll-area |
Rollback
rm -rf apps/web/features/ai-chat
Lock
apps/web/features/ai-chat/types/index.ts
apps/web/features/ai-chat/hooks/use-chat.ts
apps/web/features/ai-chat/components/ChatWidget.tsx
apps/web/features/ai-chat/components/ChatPanel.tsx
Checkpoint
- Chat widget component created
- useChat hook created
- Opens/closes/minimizes correctly
- Build succeeds
Step 153: Create Chat Message Components
Input
- Step 152 complete
- ChatPanel exists
Constraints
- User messages aligned right
- Assistant messages aligned left
- Support markdown in responses
- Show tool calls visually
Task
Create apps/web/features/ai-chat/components/ChatMessage.tsx:
'use client';
import { User, Bot } from 'lucide-react';
import { cn } from '@/lib/utils';
import { ChatMessage } from '../types';
import { ToolResultCard } from './ToolResultCard';
interface ChatMessageItemProps {
message: ChatMessage;
}
export function ChatMessageItem({ message }: ChatMessageItemProps) {
const isUser = message.role === 'user';
return (
<div className={cn('flex gap-3', isUser && 'flex-row-reverse')}>
{/* Avatar */}
<div
className={cn(
'flex-shrink-0 w-8 h-8 rounded-full flex items-center justify-center',
isUser ? 'bg-primary text-primary-foreground' : 'bg-muted',
)}
>
{isUser ? <User className="h-4 w-4" /> : <Bot className="h-4 w-4" />}
</div>
{/* Content */}
<div className={cn('flex flex-col gap-2 max-w-[80%]', isUser && 'items-end')}>
{/* Tool Results */}
{message.toolResults && message.toolResults.length > 0 && (
<div className="space-y-2">
{message.toolResults.map((result) => (
<ToolResultCard key={result.toolCallId} result={result} />
))}
</div>
)}
{/* Message Bubble */}
<div
className={cn(
'rounded-2xl px-4 py-3 text-sm',
isUser
? 'bg-gradient-to-br from-violet-500 to-purple-600 text-white'
: 'bg-gray-100 text-foreground',
)}
>
<MessageContent content={message.content} />
</div>
{/* Timestamp */}
<span className="text-xs text-muted-foreground">
{formatTime(message.timestamp)}
</span>
</div>
</div>
);
}
function MessageContent({ content }: { content: string }) {
// Simple markdown-like formatting
// NOTE: For production, use DOMPurify to sanitize HTML or use react-markdown
// Example: dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(line) }}
const formatted = content
.split('\n')
.map((line, i) => {
// Bold
line = line.replace(/\*\*(.*?)\*\*/g, '<strong>$1</strong>');
// Code
line = line.replace(/`(.*?)`/g, '<code class="bg-background/50 px-1 rounded">$1</code>');
return line;
});
return (
<div className="whitespace-pre-wrap">
{formatted.map((line, i) => (
<span key={i} dangerouslySetInnerHTML={{ __html: line + (i < formatted.length - 1 ? '\n' : '') }} />
))}
</div>
);
}
function formatTime(date: Date): string {
return new Intl.DateTimeFormat('en-US', {
hour: 'numeric',
minute: 'numeric',
}).format(date);
}
Gate
cd apps/web
# Verify file exists
cat features/ai-chat/components/ChatMessage.tsx | head -20
# Should show component definition
npm run build
# Should complete without errors
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Missing cn utility | Not imported | Add import from @/lib/utils |
Rollback
rm apps/web/features/ai-chat/components/ChatMessage.tsx
Lock
apps/web/features/ai-chat/components/ChatMessage.tsx
Checkpoint
- ChatMessage component created
- User/assistant styling works
- Markdown formatting works
- Build succeeds
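Following up on the sanitization note inside MessageContent: a minimal hardened variant, assuming the dompurify package (and its type definitions, if needed) is installed, could look like the sketch below. It is a drop-in replacement for MessageContent in ChatMessage.tsx and sanitizes each formatted line before it is injected:
import DOMPurify from 'dompurify';

// Same bold/code formatting as MessageContent, but the generated HTML is
// sanitized before being passed to dangerouslySetInnerHTML.
function SafeMessageContent({ content }: { content: string }) {
  const lines = content.split('\n').map((line) =>
    line
      .replace(/\*\*(.*?)\*\*/g, '<strong>$1</strong>')
      .replace(/`(.*?)`/g, '<code class="bg-background/50 px-1 rounded">$1</code>'),
  );
  return (
    <div className="whitespace-pre-wrap">
      {lines.map((line, i) => (
        <span
          key={i}
          dangerouslySetInnerHTML={{
            __html: DOMPurify.sanitize(line) + (i < lines.length - 1 ? '\n' : ''),
          }}
        />
      ))}
    </div>
  );
}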
Step 154: Create Tool Result Renderers
Input
- Step 153 complete
- ChatMessage exists
Constraints
- Card-based display for tool results
- Different layouts per tool type
- Collapsible for large results
Task
Create apps/web/features/ai-chat/components/ToolResultCard.tsx:
'use client';
import { useState } from 'react';
import { ChevronDown, ChevronUp, User, FileText, Building2, Calendar } from 'lucide-react';
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card';
import { Button } from '@/components/ui/button';
import { Badge } from '@/components/ui/badge';
import { ToolResult } from '../types';
interface ToolResultCardProps {
result: ToolResult;
}
export function ToolResultCard({ result }: ToolResultCardProps) {
const [isExpanded, setIsExpanded] = useState(true);
const getIcon = () => {
switch (result.name) {
case 'employee_search':
return <User className="h-4 w-4" />;
case 'document_search':
return <FileText className="h-4 w-4" />;
case 'org_explain':
return <Building2 className="h-4 w-4" />;
case 'timeoff_balance':
return <Calendar className="h-4 w-4" />;
default:
return null;
}
};
const getTitle = () => {
switch (result.name) {
case 'employee_search':
return 'Employee Search Results';
case 'document_search':
return 'Document Search Results';
case 'org_explain':
return 'Org Structure';
case 'timeoff_balance':
return 'Time-Off Balance';
default:
return result.name;
}
};
return (
<Card className="text-sm">
<CardHeader className="py-2 px-3">
<div className="flex items-center justify-between">
<div className="flex items-center gap-2">
{getIcon()}
<CardTitle className="text-sm font-medium">{getTitle()}</CardTitle>
</div>
<Button
variant="ghost"
size="icon"
className="h-6 w-6"
onClick={() => setIsExpanded(!isExpanded)}
>
{isExpanded ? (
<ChevronUp className="h-4 w-4" />
) : (
<ChevronDown className="h-4 w-4" />
)}
</Button>
</div>
</CardHeader>
{isExpanded && (
<CardContent className="py-2 px-3">
<ToolResultContent name={result.name} result={result.result} />
</CardContent>
)}
</Card>
);
}
function ToolResultContent({ name, result }: { name: string; result: unknown }) {
if (result && typeof result === 'object' && 'error' in result) {
return <div className="text-destructive">Error: {String((result as { error: unknown }).error)}</div>;
}
switch (name) {
case 'employee_search':
return <EmployeeSearchResult data={result as EmployeeSearchData[]} />;
case 'document_search':
return <DocumentSearchResult data={result as DocumentSearchData[]} />;
case 'org_explain':
return <OrgExplainResult data={result as OrgExplainData} />;
case 'timeoff_balance':
return <TimeoffBalanceResult data={result as TimeoffBalanceData[]} />;
default:
return <pre className="text-xs overflow-auto">{JSON.stringify(result, null, 2)}</pre>;
}
}
// Types for tool results
interface EmployeeSearchData {
id: string;
firstName: string;
lastName: string;
jobTitle: string | null;
department: string | null;
}
interface DocumentSearchData {
title: string;
content: string;
relevanceScore: number;
}
interface OrgExplainData {
employee: { name: string; jobTitle: string | null };
primaryManager: { name: string; jobTitle: string | null } | null;
directReports: number;
teams: string[];
departments: string[];
}
interface TimeoffBalanceData {
policyName: string;
totalDays: number;
usedDays: number;
remainingDays: number;
}
function EmployeeSearchResult({ data }: { data: EmployeeSearchData[] }) {
if (!data || data.length === 0) {
return <div className="text-muted-foreground">No employees found</div>;
}
return (
<div className="space-y-2">
{data.map((emp) => (
<div key={emp.id} className="flex items-center justify-between py-1">
<div>
<div className="font-medium">{emp.firstName} {emp.lastName}</div>
<div className="text-xs text-muted-foreground">{emp.jobTitle || 'No title'}</div>
</div>
{emp.department && (
<Badge variant="outline" className="text-xs">{emp.department}</Badge>
)}
</div>
))}
</div>
);
}
function DocumentSearchResult({ data }: { data: DocumentSearchData[] }) {
if (!data || data.length === 0) {
return <div className="text-muted-foreground">No documents found</div>;
}
return (
<div className="space-y-2">
{data.map((doc, i) => (
<div key={i} className="border-b border-gray-100 last:border-0 pb-2 last:pb-0">
<div className="font-medium">{doc.title}</div>
<div className="text-xs text-muted-foreground line-clamp-2">{doc.content}</div>
<Badge variant="secondary" className="text-xs mt-1">
{Math.round(doc.relevanceScore * 100)}% match
</Badge>
</div>
))}
</div>
);
}
function OrgExplainResult({ data }: { data: OrgExplainData }) {
return (
<div className="space-y-2">
<div>
<div className="font-medium">{data.employee.name}</div>
<div className="text-xs text-muted-foreground">{data.employee.jobTitle}</div>
</div>
{data.primaryManager && (
<div>
<div className="text-xs text-muted-foreground">Reports to:</div>
<div className="text-sm">{data.primaryManager.name}</div>
</div>
)}
<div className="flex gap-4 text-xs">
<div>
<span className="text-muted-foreground">Direct reports: </span>
<span className="font-medium">{data.directReports}</span>
</div>
{data.teams.length > 0 && (
<div>
<span className="text-muted-foreground">Teams: </span>
<span className="font-medium">{data.teams.join(', ')}</span>
</div>
)}
</div>
</div>
);
}
function TimeoffBalanceResult({ data }: { data: TimeoffBalanceData[] }) {
if (!data || data.length === 0) {
return <div className="text-muted-foreground">No balance information</div>;
}
return (
<div className="space-y-2">
{data.map((balance, i) => (
<div key={i} className="flex items-center justify-between">
<div className="font-medium">{balance.policyName}</div>
<div className="text-right">
<div className="font-medium">{balance.remainingDays} days left</div>
<div className="text-xs text-muted-foreground">
{balance.usedDays} used of {balance.totalDays}
</div>
</div>
</div>
))}
</div>
);
}
Gate
cd apps/web
# Verify file exists and has all components
grep -c "function.*Result" features/ai-chat/components/ToolResultCard.tsx
# Should return 4 or more
npm run build
# Should complete without errors
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Missing Card | shadcn not installed | npx shadcn@latest add card |
| Missing Badge | shadcn not installed | npx shadcn@latest add badge |
Rollback
rm apps/web/features/ai-chat/components/ToolResultCard.tsx
Lock
apps/web/features/ai-chat/components/ToolResultCard.tsx
Checkpoint
- ToolResultCard created
- All 4 tool renderers implemented
- Collapsible cards working
- Build succeeds
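A note on Step 154: ToolResultContent casts the unknown result straight to the expected shapes. If you want a runtime check before rendering, a guard along these lines could be added to ToolResultCard.tsx and consulted before the cast (shown for employee_search only; extend per tool as needed):
// Runtime guard for the employee_search payload; EmployeeSearchData is the
// interface already declared in ToolResultCard.tsx.
function isEmployeeSearchData(value: unknown): value is EmployeeSearchData[] {
  return (
    Array.isArray(value) &&
    value.every(
      (item) =>
        typeof item === 'object' &&
        item !== null &&
        typeof (item as { id?: unknown }).id === 'string' &&
        typeof (item as { firstName?: unknown }).firstName === 'string',
    )
  );
}
// Example use inside ToolResultContent, before casting:
// if (name === 'employee_search' && !isEmployeeSearchData(result)) {
//   return <pre className="text-xs overflow-auto">{JSON.stringify(result, null, 2)}</pre>;
// }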
Step 155: Integrate Chat with Dashboard
Input
- Step 154 complete
- All chat components ready
Constraints
- Add to dashboard layout (not page-specific)
- Load lazily for performance
- Respect auth state
Task
Create apps/web/features/ai-chat/index.ts:
export { ChatWidget } from './components/ChatWidget';
export { useChat } from './hooks/use-chat';
export type { ChatMessage, ToolCall, ToolResult } from './types';
Create apps/web/features/ai-chat/components/ChatWidgetWrapper.tsx:
'use client';
import dynamic from 'next/dynamic';
const ChatWidget = dynamic(
() => import('./ChatWidget').then((mod) => mod.ChatWidget),
{
ssr: false,
loading: () => null,
},
);
export function ChatWidgetWrapper() {
return <ChatWidget />;
}
Update apps/web/app/dashboard/layout.tsx to include the chat widget:
import { ChatWidgetWrapper } from '@/features/ai-chat/components/ChatWidgetWrapper';
export default function DashboardLayout({
children,
}: {
children: React.ReactNode;
}) {
return (
<div className="min-h-screen">
{/* Your existing dashboard layout */}
{children}
{/* AI Chat Widget */}
<ChatWidgetWrapper />
</div>
);
}
Note: If a DashboardLayout doesn't exist yet, create it following the pattern above. The chat widget should be at the root of the dashboard layout so it appears on all dashboard pages.
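To honor the "respect auth state" constraint, the wrapper can also skip rendering when no user is signed in. A sketch, assuming an auth hook exists from the earlier auth phase (the useAuth import path and isAuthenticated field are assumptions; adjust to whatever your app exposes):
'use client';
import dynamic from 'next/dynamic';
import { useAuth } from '@/features/auth'; // assumption: replace with your actual auth hook

const ChatWidget = dynamic(
  () => import('./ChatWidget').then((mod) => mod.ChatWidget),
  { ssr: false, loading: () => null },
);

export function ChatWidgetWrapper() {
  const { isAuthenticated } = useAuth(); // assumption: shape of your auth state
  if (!isAuthenticated) return null; // hide the widget for signed-out users
  return <ChatWidget />;
}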
Gate
cd apps/web
# Verify exports
grep "export" features/ai-chat/index.ts
# Should show ChatWidget, useChat, types
# Verify wrapper exists
cat features/ai-chat/components/ChatWidgetWrapper.tsx | head -10
# Should show dynamic import
# Build check
npm run build
# Should complete without errors
# Manual test: Start the app and navigate to dashboard
# The chat button should appear in bottom-right corner
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Cannot find module | Export missing | Check index.ts exports |
| Widget not showing | Not in layout | Add to dashboard layout |
| Hydration error | SSR mismatch | Ensure dynamic with ssr: false |
Rollback
# Remove from dashboard layout
# Remove ChatWidgetWrapper.tsx
# Remove index.ts exports
Lock
apps/web/features/ai-chat/index.ts
apps/web/features/ai-chat/components/ChatWidgetWrapper.tsx
apps/web/app/dashboard/layout.tsx (ChatWidget import)
Checkpoint
- Chat widget exports created
- Dynamic import wrapper created
- Integrated into dashboard layout
- Widget appears on all dashboard pages
- Build succeeds
Phase 09 Completion Checklist
All Steps Complete
- Step 139: AI Service Express app
- Step 140: MongoDB connection
- Step 141: LangChain and OpenAI installed
- Step 142: Document chunking service
- Step 143: Embedding service
- Step 144: Vector store service
- Step 145: RAG query endpoint
- Step 146: Chat tools registry
- Step 147: employee_search tool
- Step 148: document_search tool
- Step 149: org_explain tool
- Step 150: timeoff_balance tool
- Step 151: Chat endpoint
- Step 152: AI chat widget
- Step 153: Chat message components
- Step 154: Tool result renderers
- Step 155: Dashboard integration
Phase Gate Verification
# 1. AI Service running
cd apps/ai
npm run dev &
sleep 5
curl http://localhost:3002/health
# Should return status: ok, mongodb: connected
# 2. Chat endpoint works
curl -X POST http://localhost:3002/ai/chat \
-H "Content-Type: application/json" \
-H "x-tenant-id: test" \
-H "x-user-id: test" \
-d '{"message": "Hello"}'
# Should return AI response
# 3. Frontend builds
cd apps/web
npm run build
# Should complete without errors
# 4. Chat widget renders
# Start app and navigate to /dashboard
# Chat button should appear in bottom-right
# Click to open, type message, get response
pkill -f "tsx watch"
Locked Files After Phase 09
All Phase 08 locks, plus:
apps/ai/*
apps/web/features/ai-chat/*
Step 156: Add Natural Language HR Questions (AI-02)
Input
- Step 155 complete
- Chat system working
Constraints
- Answer HR policy questions
- Use RAG with document embeddings
- Return helpful, accurate responses
- ONLY add HR knowledge base integration
Task
1. Create HR Knowledge Base Tool at apps/ai/src/tools/hr-knowledge.ts:
import { MongoClient } from 'mongodb';
import OpenAI from 'openai';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
export const hrKnowledgeTool = {
name: 'hr_knowledge',
description: 'Answer questions about HR policies, procedures, time-off rules, benefits, and company guidelines. Use this when the user asks about company policies or HR-related questions.',
parameters: {
type: 'object',
properties: {
question: {
type: 'string',
description: 'The HR-related question to answer',
},
category: {
type: 'string',
enum: ['time-off', 'benefits', 'policies', 'procedures', 'general'],
description: 'Category of the HR question',
},
},
required: ['question'],
},
async execute(
args: { question: string; category?: string },
context: { tenantId: string; userId: string },
) {
const client = new MongoClient(process.env.MONGODB_URI!);
try {
await client.connect();
const db = client.db('hrms_ai');
// Generate embedding for the question
const embeddingResponse = await openai.embeddings.create({
model: 'text-embedding-3-small',
input: args.question,
});
const queryEmbedding = embeddingResponse.data[0].embedding;
// Search for relevant documents (policies, procedures, etc.)
const pipeline = [
{
$vectorSearch: {
index: 'vector_index',
path: 'embedding',
queryVector: queryEmbedding,
numCandidates: 50,
limit: 5,
filter: {
tenantId: context.tenantId,
type: { $in: ['policy', 'procedure', 'guideline'] },
},
},
},
{
$project: {
content: 1,
title: 1,
type: 1,
score: { $meta: 'vectorSearchScore' },
},
},
];
const results = await db.collection('document_chunks').aggregate(pipeline).toArray();
if (results.length === 0) {
return {
answer: 'I don\'t have specific information about that in the company knowledge base. Please contact HR directly for assistance.',
sources: [],
confidence: 'low',
};
}
// Combine relevant chunks for context
const documentContext = results.map((r) => `[${r.title}]\n${r.content}`).join('\n\n---\n\n');
// Generate answer using GPT with the context
const completion = await openai.chat.completions.create({
model: 'gpt-4-turbo-preview',
messages: [
{
role: 'system',
content: `You are an HR assistant. Answer the user's question based on the provided company documentation.
If the documentation doesn't contain the answer, say so clearly.
Be concise and accurate. Cite which document the information comes from.`,
},
{
role: 'user',
content: `Documentation:\n${documentContext}\n\nQuestion: ${args.question}`,
},
],
max_tokens: 500,
});
const answer = completion.choices[0]?.message?.content || 'Unable to generate answer';
return {
answer,
sources: results.map((r) => ({
title: r.title,
type: r.type,
relevance: Math.round(r.score * 100),
})),
confidence: results[0]?.score > 0.8 ? 'high' : results[0]?.score > 0.6 ? 'medium' : 'low',
};
} finally {
await client.close();
}
},
};
2. Register Tool in apps/ai/src/tools/index.ts:
import { hrKnowledgeTool } from './hr-knowledge';
export const tools = {
// ... existing tools
hr_knowledge: hrKnowledgeTool,
};
export const toolSchemas = Object.values(tools).map((tool) => ({
type: 'function' as const,
function: {
name: tool.name,
description: tool.description,
parameters: tool.parameters,
},
}));
3. Update Chat System Prompt in apps/ai/src/routes/chat.ts:
const systemPrompt = `You are an intelligent HR assistant for the company's HRMS system.
You can help with:
- Looking up employee information
- Searching documents and policies
- Explaining organizational structure
- Checking time-off balances
- Answering HR policy questions
When answering HR policy questions, use the hr_knowledge tool to search the company knowledge base.
Always provide accurate, helpful responses. If you're not sure about something, say so.
Format your responses in a clear, readable way using markdown when appropriate.`;
Gate
# Test HR knowledge query
curl -X POST http://localhost:3002/ai/chat \
-H "Content-Type: application/json" \
-H "x-tenant-id: YOUR_TENANT_ID" \
-H "x-user-id: USER_ID" \
-d '{"message": "What is the vacation policy?"}'
# Should use hr_knowledge tool and return policy information
Checkpoint
- hr_knowledge tool created
- Tool registered in tools index
- System prompt updated
- Can answer HR policy questions
- Type "GATE 156 PASSED" to continue
Step 157: Add Context-Aware Responses (AI-08)
Input
- Step 156 complete
- Tools system working
Constraints
- Include user context in responses
- Remember conversation history
- Personalize based on user role
- ONLY modify chat endpoint
Task
1. Create User Context Service at apps/ai/src/services/user-context.ts:
export interface UserContext {
userId: string;
tenantId: string;
employeeId?: string;
name?: string;
role?: string;
department?: string;
directReports?: number;
pendingApprovals?: number;
}
export async function fetchUserContext(
tenantId: string,
userId: string,
apiUrl: string,
): Promise<UserContext> {
try {
// Fetch user info from main API
const [userResponse, contextResponse] = await Promise.all([
fetch(`${apiUrl}/api/v1/employees/me`, {
headers: {
'x-tenant-id': tenantId,
'x-user-id': userId,
},
}).then((r) => r.json()).catch(() => null),
fetch(`${apiUrl}/api/v1/ai/context`, {
headers: {
'x-tenant-id': tenantId,
'x-user-id': userId,
},
}).then((r) => r.json()).catch(() => null),
]);
return {
userId,
tenantId,
employeeId: userResponse?.data?.id,
name: userResponse?.data?.firstName,
role: userResponse?.data?.systemRole,
department: userResponse?.data?.department?.name,
directReports: contextResponse?.data?.directReportsCount || 0,
pendingApprovals: contextResponse?.data?.pendingApprovals || 0,
};
} catch (error) {
console.error('Failed to fetch user context:', error);
return { userId, tenantId };
}
}
2. Add Context Endpoint to NestJS API at apps/api/src/ai/ai-context.controller.ts (and register AiContextController in the controllers array of apps/api/src/ai/ai.module.ts):
import { Controller, Get, UseGuards } from '@nestjs/common';
import { TenantGuard } from '../common/guards';
import { TenantId, CurrentUser } from '../common/decorators';
import { PrismaService } from '../prisma/prisma.service';
@Controller('api/v1/ai')
@UseGuards(TenantGuard)
export class AiContextController {
constructor(private prisma: PrismaService) {}
@Get('context')
async getContext(
@TenantId() tenantId: string,
@CurrentUser('id') userId: string,
) {
const employee = await this.prisma.employee.findFirst({
where: { userId, tenantId, deletedAt: null },
include: {
orgRelations: {
include: {
department: true,
},
},
},
});
if (!employee) {
return { data: null, error: null };
}
// Count direct reports
const directReportsCount = await this.prisma.employeeOrgRelations.count({
where: { primaryManagerId: employee.id },
});
// Count pending approvals (for managers)
const pendingApprovals = await this.prisma.timeOffRequest.count({
where: {
status: 'PENDING',
employee: {
orgRelations: {
primaryManagerId: employee.id,
},
},
},
});
return {
data: {
directReportsCount,
pendingApprovals,
role: employee.systemRole,
department: employee.orgRelations?.department?.name,
},
error: null,
};
}
}
3. Update Chat Route to Use Context in apps/ai/src/routes/chat.ts:
import { fetchUserContext } from '../services/user-context';
router.post('/chat', async (req, res) => {
const { message, conversationHistory = [] } = req.body;
const tenantId = req.headers['x-tenant-id'] as string;
const userId = req.headers['x-user-id'] as string;
// Fetch user context
const userContext = await fetchUserContext(
tenantId,
userId,
process.env.API_URL || 'http://localhost:3001',
);
// Build contextual system prompt
let contextPrompt = systemPrompt; // start from the Step 156 system prompt, then personalize below
if (userContext.name) {
contextPrompt += `\nYou are speaking with ${userContext.name}`;
if (userContext.role) contextPrompt += ` (${userContext.role})`;
if (userContext.department) contextPrompt += ` from ${userContext.department}`;
contextPrompt += '.';
}
if (userContext.directReports && userContext.directReports > 0) {
contextPrompt += `\nThey manage ${userContext.directReports} direct reports.`;
}
if (userContext.pendingApprovals && userContext.pendingApprovals > 0) {
contextPrompt += `\nThey have ${userContext.pendingApprovals} pending approval(s).`;
}
// Include context in the completion
const completion = await openai.chat.completions.create({
model: 'gpt-4-turbo-preview',
messages: [
{ role: 'system', content: contextPrompt },
...conversationHistory,
{ role: 'user', content: message },
],
tools: toolSchemas,
});
// ... rest of chat handling
});
Gate
# Test context-aware response
# Login as a manager with pending approvals
curl -X POST http://localhost:3002/ai/chat \
-H "Content-Type: application/json" \
-H "x-tenant-id: YOUR_TENANT_ID" \
-H "x-user-id: MANAGER_USER_ID" \
-d '{"message": "What do I need to do today?"}'
# Should mention pending approvals and personalize response
Checkpoint
- User context service created
- Context endpoint added to API
- Chat uses context in prompts
- Responses are personalized
- Type "GATE 157 PASSED" to continue
Step 158: Add Suggested Actions (AI-09)
Input
- Step 157 complete
- Context-aware chat working
Constraints
- Show quick action buttons
- Based on user context and conversation
- Actions trigger real functionality
- ONLY add suggestion system
Task
1. Update Chat Response to Include Suggestions in apps/ai/src/routes/chat.ts:
function generateSuggestions(
userContext: UserContext,
message: string,
response: string,
): string[] {
const suggestions: string[] = [];
// Based on pending approvals
if (userContext.pendingApprovals && userContext.pendingApprovals > 0) {
suggestions.push('Review pending approvals');
}
// Based on conversation content
const lowerResponse = response.toLowerCase();
if (lowerResponse.includes('time-off') || lowerResponse.includes('vacation')) {
suggestions.push('Submit time-off request');
suggestions.push('Check my balance');
}
if (lowerResponse.includes('employee') || lowerResponse.includes('team')) {
suggestions.push('View my team');
suggestions.push('Search employees');
}
if (lowerResponse.includes('document') || lowerResponse.includes('policy')) {
suggestions.push('Browse documents');
}
// Default suggestions if none generated
if (suggestions.length === 0) {
suggestions.push('Ask another question');
if (userContext.role === 'MANAGER' || userContext.role === 'HR_ADMIN') {
suggestions.push('View team dashboard');
}
}
return suggestions.slice(0, 4); // Max 4 suggestions
}
// In chat endpoint response:
return res.json({
data: {
message: assistantMessage,
toolCalls,
toolResults,
suggestions: generateSuggestions(userContext, message, assistantMessage),
},
});
2. Update Chat Types in apps/web/features/ai-chat/types/index.ts. Add suggestions to ChatResponse, and mirror it on ChatMessage so the ChatPanel update in step 4 compiles:
export interface ChatResponse {
  data: {
    message: string;
    toolCalls?: ToolCall[];
    toolResults?: ToolResult[];
    suggestions?: string[]; // Add suggestions
  };
}
export interface ChatMessage {
  // ... existing fields from Step 152
  suggestions?: string[]; // Carried onto the assistant message by use-chat
}
3. Create Suggestions Component at apps/web/features/ai-chat/components/ChatSuggestions.tsx:
'use client';
import { Button } from '@/components/ui/button';
import { useRouter } from 'next/navigation';
interface ChatSuggestionsProps {
suggestions: string[];
onSuggestionClick: (suggestion: string) => void;
}
const actionRoutes: Record<string, string> = {
'Review pending approvals': '/dashboard/time-off/approvals',
'Submit time-off request': '/dashboard/time-off/new',
'Check my balance': '/dashboard/time-off',
'View my team': '/dashboard/org',
'Search employees': '/dashboard/employees',
'Browse documents': '/dashboard/documents',
'View team dashboard': '/dashboard/dashboards',
};
export function ChatSuggestions({
suggestions,
onSuggestionClick,
}: ChatSuggestionsProps) {
const router = useRouter();
if (!suggestions || suggestions.length === 0) return null;
const handleClick = (suggestion: string) => {
const route = actionRoutes[suggestion];
if (route) {
router.push(route);
} else {
onSuggestionClick(suggestion);
}
};
return (
<div className="flex flex-wrap gap-2 p-4 bg-gray-50/50">
{suggestions.map((suggestion, index) => (
<Button
key={index}
variant="outline"
size="sm"
onClick={() => handleClick(suggestion)}
className="text-xs"
>
{suggestion}
</Button>
))}
</div>
);
}
4. Update ChatPanel to Show Suggestions in apps/web/features/ai-chat/components/ChatPanel.tsx:
import { ChatSuggestions } from './ChatSuggestions';
// In the ChatPanel component body, grab the latest message so TypeScript can narrow it:
const lastMessage = messages[messages.length - 1];
// Then render after the messages list:
{lastMessage?.suggestions && lastMessage.suggestions.length > 0 && (
  <ChatSuggestions
    suggestions={lastMessage.suggestions}
    onSuggestionClick={(text) => sendMessage(text)}
  />
)}
Gate
cd apps/web && npm run dev
# Open chat widget
# Ask "What's my time-off balance?"
# Should show suggestion buttons like:
# - "Submit time-off request"
# - "Check my balance"
# Clicking a button should either navigate or send as message
Checkpoint
- Suggestions generated based on context
- Suggestions component created
- Buttons navigate or trigger actions
- Max 4 suggestions shown
- Type "GATE 158 PASSED" to continue
Step 159: Add System Health Dashboard (SYS-02)
Input
- Step 158 complete
- AI service running
Constraints
- Show AI service status
- Display embedding stats
- Admin-only access
- ONLY add health monitoring UI
Task
1. Extend the Health Endpoint in apps/ai/src/routes/health.ts (this replaces the body of the basic /health route created earlier; keep the healthRouter named export so the import in index.ts keeps working):
import { Router } from 'express';
import { MongoClient } from 'mongodb';
const router = Router();
router.get('/health', async (req, res) => {
const client = new MongoClient(process.env.MONGODB_URI!);
let mongoStatus = 'disconnected';
let stats = null;
try {
await client.connect();
await client.db('admin').command({ ping: 1 });
mongoStatus = 'connected';
// Get stats
const db = client.db('hrms_ai');
const embeddingsCount = await db.collection('document_chunks').countDocuments();
const tenantsWithEmbeddings = await db.collection('document_chunks')
.distinct('tenantId');
stats = {
embeddingsCount,
tenantsCount: tenantsWithEmbeddings.length,
lastUpdated: new Date().toISOString(),
};
} catch (error) {
console.error('Health check failed:', error);
} finally {
await client.close();
}
res.json({
status: 'ok',
mongodb: mongoStatus,
openai: process.env.OPENAI_API_KEY ? 'configured' : 'missing',
stats,
uptime: process.uptime(),
});
});
export const healthRouter = router;
2. Create System Health Page at apps/web/app/dashboard/admin/system/page.tsx:
import { Metadata } from 'next';
import { SystemHealthView } from './system-health-view';
export const metadata: Metadata = {
title: 'System Health | Admin',
};
export default function SystemHealthPage() {
return (
<div className="container py-6">
<h1 className="text-2xl font-bold mb-6">System Health</h1>
<SystemHealthView />
</div>
);
}
Create apps/web/app/dashboard/admin/system/system-health-view.tsx:
'use client';
import { useQuery } from '@tanstack/react-query';
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card';
import { Badge } from '@/components/ui/badge';
import { Skeleton } from '@/components/ui/skeleton';
import { CheckCircle, XCircle, AlertCircle, Activity, Database, Brain } from 'lucide-react';
import { api } from '@/lib/api';
interface HealthData {
status: string;
mongodb: string;
openai: string;
stats: {
embeddingsCount: number;
tenantsCount: number;
lastUpdated: string;
} | null;
uptime: number;
}
export function SystemHealthView() {
const { data: health, isLoading, error } = useQuery({
queryKey: ['system-health'],
queryFn: async () => {
const response = await fetch(
`${process.env.NEXT_PUBLIC_AI_URL || 'http://localhost:3002'}/health`
);
return response.json() as Promise<HealthData>;
},
refetchInterval: 30000, // Refresh every 30 seconds
});
if (isLoading) {
return (
<div className="grid gap-4 md:grid-cols-2 lg:grid-cols-3">
{[...Array(3)].map((_, i) => (
<Skeleton key={i} className="h-32" />
))}
</div>
);
}
if (error || !health) {
return (
<Card className="bg-red-50 shadow-lg shadow-red-200/50">
<CardContent className="pt-6">
<div className="flex items-center gap-2 text-red-600">
<XCircle className="h-5 w-5" />
<span>Failed to connect to AI service</span>
</div>
</CardContent>
</Card>
);
}
const StatusIcon = ({ status }: { status: string }) => {
if (status === 'ok' || status === 'connected' || status === 'configured') {
return <CheckCircle className="h-5 w-5 text-green-600" />;
}
if (status === 'missing') {
return <AlertCircle className="h-5 w-5 text-yellow-600" />;
}
return <XCircle className="h-5 w-5 text-red-600" />;
};
const formatUptime = (seconds: number) => {
const hours = Math.floor(seconds / 3600);
const minutes = Math.floor((seconds % 3600) / 60);
return `${hours}h ${minutes}m`;
};
return (
<div className="space-y-6">
{/* Status Cards */}
<div className="grid gap-4 md:grid-cols-2 lg:grid-cols-4">
<Card>
<CardHeader className="flex flex-row items-center justify-between space-y-0 pb-2">
<CardTitle className="text-sm font-medium">AI Service</CardTitle>
<Activity className="h-4 w-4 text-muted-foreground" />
</CardHeader>
<CardContent>
<div className="flex items-center gap-2">
<StatusIcon status={health.status} />
<span className="text-2xl font-bold capitalize">{health.status}</span>
</div>
<p className="text-xs text-muted-foreground mt-1">
Uptime: {formatUptime(health.uptime)}
</p>
</CardContent>
</Card>
<Card>
<CardHeader className="flex flex-row items-center justify-between space-y-0 pb-2">
<CardTitle className="text-sm font-medium">MongoDB</CardTitle>
<Database className="h-4 w-4 text-muted-foreground" />
</CardHeader>
<CardContent>
<div className="flex items-center gap-2">
<StatusIcon status={health.mongodb} />
<span className="text-2xl font-bold capitalize">{health.mongodb}</span>
</div>
</CardContent>
</Card>
<Card>
<CardHeader className="flex flex-row items-center justify-between space-y-0 pb-2">
<CardTitle className="text-sm font-medium">OpenAI API</CardTitle>
<Brain className="h-4 w-4 text-muted-foreground" />
</CardHeader>
<CardContent>
<div className="flex items-center gap-2">
<StatusIcon status={health.openai} />
<span className="text-2xl font-bold capitalize">{health.openai}</span>
</div>
</CardContent>
</Card>
<Card>
<CardHeader className="flex flex-row items-center justify-between space-y-0 pb-2">
<CardTitle className="text-sm font-medium">Embeddings</CardTitle>
<Brain className="h-4 w-4 text-muted-foreground" />
</CardHeader>
<CardContent>
<div className="text-2xl font-bold">
{health.stats?.embeddingsCount.toLocaleString() || 0}
</div>
<p className="text-xs text-muted-foreground mt-1">
Across {health.stats?.tenantsCount || 0} tenants
</p>
</CardContent>
</Card>
</div>
{/* Configuration Guide */}
<Card>
<CardHeader>
<CardTitle>Configuration Status</CardTitle>
</CardHeader>
<CardContent>
<div className="space-y-3">
<div className="flex items-center justify-between">
<span>MongoDB Connection</span>
<Badge variant={health.mongodb === 'connected' ? 'default' : 'destructive'}>
{health.mongodb === 'connected' ? 'Connected' : 'Disconnected'}
</Badge>
</div>
<div className="flex items-center justify-between">
<span>OpenAI API Key</span>
<Badge variant={health.openai === 'configured' ? 'default' : 'secondary'}>
{health.openai === 'configured' ? 'Configured' : 'Not Set'}
</Badge>
</div>
<div className="flex items-center justify-between">
<span>Vector Search Index</span>
<Badge variant={(health.stats?.embeddingsCount || 0) > 0 ? 'default' : 'secondary'}>
{(health.stats?.embeddingsCount || 0) > 0 ? 'Active' : 'Empty'}
</Badge>
</div>
</div>
</CardContent>
</Card>
</div>
);
}
3. Add Navigation (admin only):
// In admin sidebar navigation
{
name: 'System Health',
href: '/dashboard/admin/system',
icon: Activity,
roles: ['SYSTEM_ADMIN'],
}
Gate
cd apps/web && npm run dev
# Navigate to /dashboard/admin/system (as SYSTEM_ADMIN)
# Should show:
# - AI Service status
# - MongoDB connection status
# - OpenAI API status
# - Embedding stats
# Data should refresh every 30 seconds
Checkpoint
- Health endpoint returns all stats
- System health page created
- Shows connection statuses
- Shows embedding counts
- Admin-only access
- Type "GATE 159 PASSED" to continue
Summary
Phase 09 implements the complete AI integration:
| Component | Technology | Purpose |
|---|---|---|
| AI Service | Express 4.x | Separate service for AI features |
| MongoDB | Native driver | Vector embeddings storage |
| LangChain | 0.1.x | RAG pipeline orchestration |
| OpenAI | SDK v4 | Embeddings + Chat completions |
| Tools | Custom registry | employee_search, document_search, org_explain, timeoff_balance, hr_knowledge |
| Frontend | React + shadcn | Floating chat widget with suggestions |
| Admin | System health | Monitoring dashboard |
Total: 21 steps, ~12-16 hours
Phase Completion Checklist (MANDATORY)
BEFORE MOVING TO NEXT PHASE
Complete ALL items before proceeding. Do NOT skip any step.
1. Gate Verification
- All step gates passed
- AI chat widget functional
- RAG for policy Q&A working
- org_explain tool working
- All chat tools responding (employee_search, document_search, timeoff_balance, hr_knowledge)
2. Update PROJECT_STATE.md
- Mark Phase 09 as COMPLETED with timestamp
- Update "Current Phase" to Phase 10
- Add session log entry3. Update WHAT_EXISTS.md
## API Endpoints
- /api/v1/ai/*
## Frontend Components
- AI Chat widget
- Policy Q&A interface
## Established Patterns
- AI service integration pattern
4. Git Tag & Commit
git add PROJECT_STATE.md WHAT_EXISTS.md
git commit -m "chore: complete Phase 09 - AI Integration"
git tag phase-09-ai-integration
Next Phase
After verification, proceed to Phase 10: Platform Admin
Last Updated: 2025-11-30