Bluewoo HRMS

AI Service

AI capabilities, use cases, and architecture

What We're Building

An AI service that helps employees and HR managers through natural language interactions. It understands company policies, answers HR questions, and enhances productivity.

User Use Cases

1. HR Policy Assistant

User goal: Get instant answers to HR questions without searching documents.

  • "What's our vacation policy?"
  • "How do I request parental leave?"
  • "What are the steps for onboarding a new team member?"

The AI searches uploaded policy documents using RAG and provides accurate, company-specific answers.

2. Document Search

User goal: Find relevant information across all HR documents.

  • Upload company handbooks, policies, procedures
  • Search documents using natural language
  • Results respect access controls (private, team, company-wide)

3. Team Update Summaries

User goal: Quickly understand team activity without reading everything.

  • AI summarizes team feed posts
  • Transforms voice/video posts into text summaries
  • Highlights key updates and action items

4. Goal Suggestions

User goal: Get help setting meaningful professional goals.

  • AI suggests goals based on job role and responsibilities
  • Provides OKR-style key results
  • Aligns with company objectives

Technology Decisions

Component  | Choice                  | Why
Service    | Separate Express app    | Independent scaling, AI isolation
LLM        | OpenAI GPT-4            | Best quality responses
Embeddings | text-embedding-ada-002  | 1536 dimensions
Vector DB  | MongoDB Atlas           | Vector search + document storage
Framework  | LangChain               | RAG orchestration
Streaming  | Server-Sent Events      | Real-time responses
Queue      | BullMQ                  | Background document processing

Architecture

Separate Service (Not in NestJS Monolith)

  • Why separate: Different scaling needs, AI is CPU-intensive
  • Communication: REST API calls from HRMS backend
  • Auth: Same JWT tokens as HRMS (shared secret; see the middleware sketch below)
  • Data: MongoDB for vectors, PostgreSQL for metadata
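
For illustration, the shared-token check could look like the sketch below, assuming the HRMS backend issues HS256-signed JWTs and both services read the same JWT_SECRET environment variable (file path, payload fields, and error messages are placeholders, not the final implementation):

```typescript
// src/middleware/auth.ts (illustrative sketch, not the final implementation)
// Assumes the HRMS backend issues HS256-signed JWTs and both services share JWT_SECRET.
import { Request, Response, NextFunction } from "express";
import jwt from "jsonwebtoken";

export interface AuthUser {
  userId: string;
  tenantId: string;
  role: string;
}

export function requireAuth(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : null;
  if (!token) {
    return res.status(401).json({ error: "Missing bearer token" });
  }
  try {
    // Verified with the same secret the HRMS backend uses, so one token works for both services.
    const payload = jwt.verify(token, process.env.JWT_SECRET!) as AuthUser;
    (req as Request & { user: AuthUser }).user = payload;
    next();
  } catch {
    res.status(401).json({ error: "Invalid or expired token" });
  }
}
```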

Project Structure

packages/hrms-ai/
├── src/
│   ├── routes/        # chat, documents
│   ├── services/      # rag, openai, chat
│   ├── workers/       # document-processor
│   └── middleware/    # auth, rate-limit

RAG (Retrieval Augmented Generation)

How It Works

  1. Upload: User uploads document with visibility settings
  2. Process: BullMQ job extracts text, chunks, generates embeddings
  3. Store: Embeddings stored in MongoDB with access metadata
  4. Query: User asks question
  5. Retrieve: Permission-aware vector search finds relevant chunks
  6. Generate: LLM generates an answer from the retrieved context (query flow sketched below)
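
A minimal sketch of the query side (steps 4-6), assuming the OpenAI Node SDK and a MongoDB Atlas vector index; the database, collection, and index names are placeholders:

```typescript
// Illustrative query path: embed the question, run a tenant-scoped $vectorSearch
// in MongoDB Atlas, then ask GPT-4 to answer from the retrieved chunks.
import { MongoClient } from "mongodb";
import OpenAI from "openai";

const openai = new OpenAI();
const mongo = new MongoClient(process.env.MONGODB_URI!);

export async function answerQuestion(question: string, user: { tenantId: string }) {
  await mongo.connect();

  // Embed the question with the same model used when the documents were ingested.
  const embedded = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: question,
  });
  const queryVector = embedded.data[0].embedding;

  // Vector search scoped to the user's tenant; the full visibility filter is
  // shown under "Visibility Levels" below.
  const chunks = await mongo
    .db("hrms_ai")
    .collection("document_chunks")
    .aggregate([
      {
        $vectorSearch: {
          index: "chunk_embeddings",
          path: "embedding",
          queryVector,
          numCandidates: 100,
          limit: 5,
          filter: { tenantId: user.tenantId },
        },
      },
    ])
    .toArray();

  // Generate an answer grounded in the retrieved context only.
  const context = chunks.map((c) => c.text).join("\n---\n");
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: `Answer using only this company context:\n${context}` },
      { role: "user", content: question },
    ],
  });
  return completion.choices[0].message.content;
}
```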

Visibility Levels

Level   | Who Can Access
Private | Only the uploader
Team    | Team members only
Company | All employees in tenant

  • Always filter by tenantId (multi-tenant isolation)
  • Filter by visibility based on the user's permissions (see the filter sketch below)
  • Platform admins can access all documents (within tenant context)
  • No data leakage between tenants
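
A hypothetical helper that turns these rules into a MongoDB pre-filter for the vector search; the field names (visibility, teamId, uploadedBy) are assumptions about the schema, not the final design:

```typescript
// Builds the access filter applied during permission-aware retrieval.
interface RequestUser {
  userId: string;
  tenantId: string;
  teamIds: string[];
  isPlatformAdmin: boolean;
}

export function buildAccessFilter(user: RequestUser) {
  // Tenant isolation is unconditional: even platform admins stay inside the tenant.
  if (user.isPlatformAdmin) {
    return { tenantId: user.tenantId };
  }
  return {
    tenantId: user.tenantId,
    $or: [
      { visibility: "company" },                               // all employees in the tenant
      { visibility: "team", teamId: { $in: user.teamIds } },   // team members only
      { visibility: "private", uploadedBy: user.userId },      // only the uploader
    ],
  };
}
```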

Key Features

Streaming Responses

  • Real-time response streaming via SSE
  • First token < 3 seconds
  • Better UX than waiting for full response
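
A minimal sketch of an SSE chat route, assuming the OpenAI Node SDK with stream: true; the route path and request payload are illustrative, and express.json() is assumed to run upstream:

```typescript
// Streams GPT-4 tokens to the client as they arrive instead of after the full completion.
import express from "express";
import OpenAI from "openai";

const openai = new OpenAI();
export const chatRouter = express.Router();

chatRouter.post("/chat", async (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");
  res.flushHeaders();

  const stream = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: req.body.message }],
    stream: true, // tokens arrive incrementally, so the first one reaches the user quickly
  });

  for await (const chunk of stream) {
    const token = chunk.choices[0]?.delta?.content;
    if (token) res.write(`data: ${JSON.stringify({ token })}\n\n`);
  }
  res.write("data: [DONE]\n\n");
  res.end();
});
```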

Background Processing

  • Document processing in BullMQ workers
  • Doesn't block user requests
  • Handles large documents gracefully
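
A simplified worker sketch using BullMQ; the queue name, job payload, chunk size, collection names, and local Redis connection are placeholders:

```typescript
// Document-processing pipeline: the upload route enqueues a job, the worker
// chunks the text, embeds each chunk, and stores it with access metadata.
import { Queue, Worker } from "bullmq";
import { MongoClient } from "mongodb";
import OpenAI from "openai";

const connection = { host: "localhost", port: 6379 };
const openai = new OpenAI();
const mongo = new MongoClient(process.env.MONGODB_URI!);

// The upload route only enqueues a job, so the HTTP request returns immediately.
export const documentQueue = new Queue("documents", { connection });

// The worker does the heavy lifting off the request path.
new Worker(
  "documents",
  async (job) => {
    const { documentId, tenantId, visibility, text } = job.data;
    await mongo.connect();

    // Naive fixed-size chunking; a real implementation would split on document structure.
    const chunks: string[] = [];
    for (let i = 0; i < text.length; i += 1000) chunks.push(text.slice(i, i + 1000));

    for (const chunk of chunks) {
      const { data } = await openai.embeddings.create({
        model: "text-embedding-ada-002",
        input: chunk,
      });
      // Store the chunk with its embedding and access metadata for permission-aware retrieval.
      await mongo.db("hrms_ai").collection("document_chunks").insertOne({
        documentId,
        tenantId,
        visibility,
        text: chunk,
        embedding: data[0].embedding,
      });
    }
  },
  { connection }
);
```

The upload route would then just call documentQueue.add("process", { documentId, tenantId, visibility, text }) and return.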

Rate Limiting

  • 100 requests per 15 minutes per user
  • Prevents abuse and controls API costs
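
One possible setup with express-rate-limit; keying on the authenticated user and falling back to IP is an assumption, not a confirmed design:

```typescript
// 100 requests per 15 minutes per user.
import rateLimit from "express-rate-limit";
import { Request } from "express";

export const aiRateLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  max: 100,                 // 100 requests per window per key
  standardHeaders: true,    // expose limit state via RateLimit-* headers
  keyGenerator: (req: Request) =>
    (req as Request & { user?: { userId: string } }).user?.userId ?? req.ip ?? "anonymous",
});
```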

Success Metrics

  • AI answers HR questions accurately (based on uploaded docs)
  • Document search returns relevant results
  • Response streaming works (first token < 3 seconds)
  • Permission-aware (zero data leakage)
  • Processing handles standard document sizes

AI Chat Action Execution

Beyond Q&A and document search, the AI Service powers the AI Chat feature that allows users to execute HRMS actions through natural language.

How It Works

  1. User types a command in AI Chat (e.g., "Create an onboarding workflow for Frontend Developers")
  2. AI Service parses intent and maps to available tools
  3. Tool execution requires user confirmation for write operations (see the sketch after this list)
  4. Results displayed in chat widgets with links to created resources
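
A sketch of how the confirmation gate in step 3 could work; the Tool shape, the in-memory pendingConfirmations store, and the widget payloads are illustrative only:

```typescript
// Write operations pause for confirmation; read-only tools run immediately.
interface Tool {
  name: string;
  requiresConfirmation: boolean; // true for write operations
  run: (args: Record<string, unknown>) => Promise<unknown>;
}

const pendingConfirmations = new Map<string, { tool: Tool; args: Record<string, unknown> }>();

export async function handleToolCall(tool: Tool, args: Record<string, unknown>, requestId: string) {
  if (tool.requiresConfirmation) {
    // Write operation: park it and tell the chat UI to render a confirmation widget.
    pendingConfirmations.set(requestId, { tool, args });
    return { type: "confirmation_required", tool: tool.name, args };
  }
  // Read-only tools run immediately and come back as a result widget.
  return { type: "result", data: await tool.run(args) };
}

export async function confirmToolCall(requestId: string) {
  const pending = pendingConfirmations.get(requestId);
  if (!pending) throw new Error("Nothing pending for this request");
  pendingConfirmations.delete(requestId);
  return { type: "result", data: await pending.tool.run(pending.args) };
}
```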

Integration with HRMS Backend

The AI Service calls HRMS backend APIs to execute actions:

User Request → AI Chat → AI Service → HRMS API → Database

Results flow back to the user in AI Chat as widgets (confirmation, progress, results).
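
A minimal sketch of that hop, assuming the AI Service forwards the user's own JWT so the HRMS backend applies its normal role, permission, and tenant checks; HRMS_API_URL and the example path are placeholders:

```typescript
// Thin wrapper around the HRMS backend API, called from tool implementations.
export async function callHrmsApi<T>(
  method: "GET" | "POST" | "PATCH",
  path: string,
  userToken: string,
  body?: unknown
): Promise<T> {
  const res = await fetch(`${process.env.HRMS_API_URL}${path}`, {
    method,
    headers: {
      Authorization: `Bearer ${userToken}`, // the user's token, not a service account
      "Content-Type": "application/json",
    },
    body: body === undefined ? undefined : JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`HRMS API ${method} ${path} failed with ${res.status}`);
  return res.json() as Promise<T>;
}

// Example: create an onboarding workflow on the user's behalf.
// await callHrmsApi("POST", "/workflows/onboarding", userToken, { role: "Frontend Developer" });
```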

Available Action Categories

Category  | Examples
Employee  | Create employee, search, transfer, terminate
Time-Off  | Submit leave, check balance, approve requests
Workflows | Create onboarding, track progress, initiate offboarding
Documents | Search documents, ask questions, get summaries
Dashboard | Get metrics, pending approvals, AI insights

Security

  • All actions respect user's role and permissions
  • Write operations require explicit confirmation
  • Tenant isolation enforced on every action
  • Audit trail for all AI-initiated actions

Full Specification: See AI Chat Specification for complete tool definitions, widget specifications, and implementation details.


Implementation details to be defined during development