Redis Memorystore Setup

Step-by-step guide to setting up Redis on Google Cloud Memorystore

This guide provides detailed instructions for setting up Google Cloud Memorystore for Redis to handle caching and session storage for the HRMS application.

Overview

| Setting | Value |
| --- | --- |
| Service | Google Cloud Memorystore for Redis |
| Purpose | Caching, sessions, job queues |
| Version | Redis 7.0 |
| Tier | Basic (staging) / Standard (production) |

Why Memorystore?

  1. Fully managed: No Redis administration
  2. High availability: Standard tier has automatic failover
  3. VPC integration: Private network access
  4. Sub-millisecond latency: Same region as Cloud Run
  5. Automatic patching: Security updates applied automatically

HRMS Redis Usage

| Use Case | Pattern | TTL |
| --- | --- | --- |
| User sessions | session:{sessionId} | 24h |
| Permission cache | permissions:{userId} | 5min |
| Dashboard stats | dashboard:{tenantId}:{widgetId} | 1min |
| Rate limiting | ratelimit:{ip} | 1min |
| Job queues (BullMQ) | bull:{queueName}:* | - |
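
For example, the fixed-window rate limiter implied by the ratelimit:{ip} pattern needs only an INCR plus a one-minute expiry. A minimal sketch (the limit of 100 requests per window is an assumption):

// Fixed-window rate limiter on the ratelimit:{ip} key pattern (1min window)
import { Redis } from 'ioredis';

const redis = new Redis(process.env.REDIS_URL!);

async function allowRequest(ip: string, limit = 100): Promise<boolean> {
  const key = `ratelimit:${ip}`;
  const count = await redis.incr(key); // atomic per-window counter
  if (count === 1) {
    await redis.expire(key, 60); // start the 1-minute window on first hit
  }
  return count <= limit;
}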

Prerequisites

  • GCP project with billing enabled
  • VPC network configured
  • APIs enabled:
    • redis.googleapis.com
    • vpcaccess.googleapis.com
    • servicenetworking.googleapis.com

Enable them with:

gcloud services enable \
  redis.googleapis.com \
  vpcaccess.googleapis.com \
  servicenetworking.googleapis.com

Step 1: Create VPC Connector

Cloud Run needs a VPC connector to reach Memorystore (which is VPC-only).

export PROJECT_ID="bluewoo-hrms"
export REGION="europe-west6"

# Create VPC connector
gcloud compute networks vpc-access connectors create hrms-connector \
  --region=$REGION \
  --network=default \
  --range=10.8.0.0/28 \
  --min-instances=2 \
  --max-instances=10

# Verify connector
gcloud compute networks vpc-access connectors describe hrms-connector \
  --region=$REGION

Step 2: Create Redis Instance

Staging Instance (Basic Tier)

gcloud redis instances create hrms-cache-staging \
  --size=1 \
  --region=$REGION \
  --redis-version=redis_7_0 \
  --tier=basic \
  --network=default

Production Instance (Standard Tier)

gcloud redis instances create hrms-cache-prod \
  --size=2 \
  --region=$REGION \
  --redis-version=redis_7_0 \
  --tier=standard \
  --network=default \
  --replica-count=1 \
  --read-replicas-mode=READ_REPLICAS_ENABLED

Note: Standard tier provides automatic failover. Read replicas additionally require a minimum instance capacity (5 GB at the time of writing); for a smaller instance, omit the --replica-count and --read-replicas-mode flags.


Step 3: Get Connection Information

# Get Redis host and port
gcloud redis instances describe hrms-cache-staging --region=$REGION \
  --format="value(host)"
# Output: 10.0.0.3

gcloud redis instances describe hrms-cache-staging --region=$REGION \
  --format="value(port)"
# Output: 6379

# Full connection info
gcloud redis instances describe hrms-cache-staging --region=$REGION \
  --format="table(host,port,currentLocationId)"

Step 4: Store Connection String in Secret Manager

# Get host
REDIS_HOST=$(gcloud redis instances describe hrms-cache-staging --region=$REGION --format="value(host)")

# Create secret
echo -n "redis://${REDIS_HOST}:6379" | \
  gcloud secrets create hrms-redis-url \
    --data-file=- \
    --replication-policy="user-managed" \
    --locations="europe-west6"

# For production with auth (if enabled)
# echo -n "redis://:PASSWORD@${REDIS_HOST}:6379" | ...

Step 5: Configure Cloud Run to Use Redis

# Deploy with VPC connector and Redis secret
gcloud run deploy hrms-api \
  --image=... \
  --region=$REGION \
  --vpc-connector=hrms-connector \
  --set-secrets="REDIS_URL=hrms-redis-url:latest"

GitHub Actions

- name: Deploy API
  uses: google-github-actions/deploy-cloudrun@v2
  with:
    service: hrms-api
    region: europe-west6
    image: ${{ env.IMAGE }}
    flags: |
      --vpc-connector=hrms-connector
      --set-secrets=REDIS_URL=hrms-redis-url:latest

Step 6: Test Connection

From a VM in the VPC

Memorystore accepts connections only from within its VPC. Cloud Shell runs outside that network and cannot connect directly, so run these commands from a Compute Engine VM on the same VPC.

# Install redis-cli
sudo apt-get install redis-tools

# Get Redis IP
REDIS_IP=$(gcloud redis instances describe hrms-cache-staging --region=$REGION --format="value(host)")

# Test connection (from a VM in the same VPC)
redis-cli -h $REDIS_IP ping
# Output: PONG

redis-cli -h $REDIS_IP info server

From Application

// Test Redis connection
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL);

async function testRedis() {
  await redis.ping();
  console.log('Redis connected');

  await redis.set('test', 'value');
  const value = await redis.get('test');
  console.log('Test value:', value);

  await redis.del('test');
}

testRedis()
  .catch(console.error)
  .finally(() => redis.quit()); // close the connection so the script exits

Redis Usage Patterns

Session Storage (Auth.js)

// apps/web/auth.ts
// Caveat: @auth/upstash-redis-adapter is written against the @upstash/redis
// REST client; ioredis is not a drop-in replacement for it. Against a plain
// Redis instance like Memorystore, use a direct session store (sketched
// below) or an ioredis-compatible adapter.
import { Redis } from 'ioredis';
import { UpstashRedisAdapter } from '@auth/upstash-redis-adapter';

const redis = new Redis(process.env.REDIS_URL);

export const authOptions = {
  adapter: UpstashRedisAdapter(redis),
  // ... rest of config
};
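
Where the adapter is not an option, sessions can be stored against ioredis directly, following the session:{sessionId} pattern and 24h TTL from the usage table above. A minimal sketch (the SessionData shape is illustrative):

// Direct session storage over ioredis (session:{sessionId}, 24h TTL)
import { Redis } from 'ioredis';

const redis = new Redis(process.env.REDIS_URL!);
const SESSION_TTL = 60 * 60 * 24; // 24 hours, in seconds

interface SessionData {
  userId: string; // illustrative fields
  expiresAt: number;
}

export async function saveSession(id: string, data: SessionData): Promise<void> {
  await redis.setex(`session:${id}`, SESSION_TTL, JSON.stringify(data));
}

export async function getSession(id: string): Promise<SessionData | null> {
  const raw = await redis.get(`session:${id}`);
  return raw ? (JSON.parse(raw) as SessionData) : null;
}

export async function destroySession(id: string): Promise<void> {
  await redis.del(`session:${id}`);
}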

Permission Caching (NestJS)

// apps/api/src/cache/permission-cache.service.ts
import { Injectable } from '@nestjs/common';
import { Redis } from 'ioredis';

@Injectable()
export class PermissionCacheService {
  private redis: Redis;
  private TTL = 300; // 5 minutes

  constructor() {
    this.redis = new Redis(process.env.REDIS_URL);
  }

  async getPermissions(userId: string): Promise<string[] | null> {
    const cached = await this.redis.get(`permissions:${userId}`);
    return cached ? JSON.parse(cached) : null;
  }

  async setPermissions(userId: string, permissions: string[]): Promise<void> {
    await this.redis.setex(
      `permissions:${userId}`,
      this.TTL,
      JSON.stringify(permissions)
    );
  }

  async invalidatePermissions(userId: string): Promise<void> {
    await this.redis.del(`permissions:${userId}`);
  }
}
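
A typical read-through call site for the service above; loadPermissionsFromDb is a hypothetical database helper:

// Read-through: serve from cache, fall back to the database on a miss
async function resolvePermissions(
  cache: PermissionCacheService,
  userId: string,
): Promise<string[]> {
  const cached = await cache.getPermissions(userId);
  if (cached) return cached;

  const permissions = await loadPermissionsFromDb(userId); // hypothetical DB helper
  await cache.setPermissions(userId, permissions);
  return permissions;
}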

Dashboard Cache

// apps/api/src/cache/dashboard-cache.service.ts
import { Injectable } from '@nestjs/common';
import { Redis } from 'ioredis';

@Injectable()
export class DashboardCacheService {
  private redis: Redis;
  private TTL = 60; // 1 minute

  constructor() {
    this.redis = new Redis(process.env.REDIS_URL);
  }

  async getWidgetData(tenantId: string, widgetId: string): Promise<any | null> {
    const key = `dashboard:${tenantId}:${widgetId}`;
    const cached = await this.redis.get(key);
    return cached ? JSON.parse(cached) : null;
  }

  async setWidgetData(tenantId: string, widgetId: string, data: any): Promise<void> {
    const key = `dashboard:${tenantId}:${widgetId}`;
    await this.redis.setex(key, this.TTL, JSON.stringify(data));
  }
}

BullMQ Job Queue

// apps/api/src/queues/email.queue.ts
import { Queue, Worker } from 'bullmq';

// BullMQ takes host/port directly; these point at the same Memorystore
// instance as REDIS_URL.
const connection = {
  host: process.env.REDIS_HOST,
  port: parseInt(process.env.REDIS_PORT || '6379', 10),
};

export const emailQueue = new Queue('email', { connection });

export const emailWorker = new Worker(
  'email',
  async (job) => {
    // Process email job (sendEmail is the application's mail helper)
    await sendEmail(job.data);
  },
  { connection }
);
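
Producer side, enqueuing a job; the job name, payload, and retry settings below are illustrative:

// Enqueue an email job with retries for transient mail-provider failures
await emailQueue.add(
  'send-welcome-email',
  { to: 'user@example.com', template: 'welcome' },
  { attempts: 3, backoff: { type: 'exponential', delay: 1000 } },
);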

Instance Tiers and Pricing

Basic Tier

| Size | Memory | Use Case | Monthly Cost |
| --- | --- | --- | --- |
| 1 GB | 1 GB | Development/staging | ~$35 |
| 2 GB | 2 GB | Small production | ~$70 |
| 5 GB | 5 GB | Medium production | ~$175 |

Standard Tier (HA)

| Size | Memory | Use Case | Monthly Cost |
| --- | --- | --- | --- |
| 1 GB | 1 GB | Production | ~$70 |
| 2 GB | 2 GB | Production | ~$140 |
| 5 GB | 5 GB | High traffic | ~$350 |

Configuration Options

Memory Policy

# Set maxmemory policy
gcloud redis instances update hrms-cache-staging \
  --region=$REGION \
  --update-redis-config=maxmemory-policy=allkeys-lru

Policies:

  • volatile-lru: Evict keys with TTL using LRU
  • allkeys-lru: Evict any key using LRU (recommended)
  • noeviction: Return error when memory full

Maintenance Window

gcloud redis instances update hrms-cache-prod \
  --region=$REGION \
  --maintenance-window-day=SUNDAY \
  --maintenance-window-hour=4

Monitoring

Key Metrics

  1. Memory Usage: Should stay below 80%
  2. Connected Clients: Monitor for connection leaks
  3. Cache Hit Ratio: Should be >90% for effective caching (see the sketch after this list)
  4. Commands/sec: Monitor for unusual spikes
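
Redis does not report the hit ratio as a single number; it can be derived from the keyspace_hits and keyspace_misses counters in INFO stats. A sketch:

// Derive the cache hit ratio from Redis INFO stats counters
import { Redis } from 'ioredis';

async function cacheHitRatio(redis: Redis): Promise<number> {
  const stats = await redis.info('stats');
  const read = (field: string): number =>
    parseInt(stats.match(new RegExp(`${field}:(\\d+)`))?.[1] ?? '0', 10);

  const hits = read('keyspace_hits');
  const misses = read('keyspace_misses');
  return hits + misses === 0 ? 0 : hits / (hits + misses);
}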

View Metrics

# Console: Cloud Console → Memorystore → Redis → Monitoring

# CLI: inspect instance details (time-series metrics live in Cloud Monitoring)
gcloud redis instances describe hrms-cache-staging --region=$REGION

Set Up Alerts

# Create alert for high memory usage
gcloud alpha monitoring policies create \
  --display-name="Redis High Memory" \
  --condition-display-name="Memory > 80%" \
  --condition-filter='resource.type="redis_instance" AND metric.type="redis.googleapis.com/stats/memory/usage_ratio"' \
  --if="> 0.8"

Troubleshooting

Connection Refused

Error: connect ECONNREFUSED 10.0.0.3:6379

Fix:

  1. Verify Cloud Run has VPC connector attached
  2. Check VPC connector is in same network as Redis
  3. Verify Redis instance is in READY state

# Check Redis state
gcloud redis instances describe hrms-cache-staging --region=$REGION \
  --format="value(state)"
# Should be: READY

# Check VPC connector
gcloud compute networks vpc-access connectors describe hrms-connector \
  --region=$REGION

Connection Timeout

Error: connect ETIMEDOUT

Fix:

  1. Firewall rules may be blocking traffic
  2. VPC connector IP range may conflict

# Check firewall rules
gcloud compute firewall-rules list --filter="network:default"

Out of Memory

OOM command not allowed when used memory > 'maxmemory'

Fix:

  1. Increase instance size
  2. Set appropriate TTLs on keys
  3. Review maxmemory-policy

# Scale up
gcloud redis instances update hrms-cache-staging \
  --region=$REGION \
  --size=2

Too Many Connections

Error: max number of clients reached

Fix:

  1. Use connection pooling
  2. Close connections properly
  3. Increase maxclients config

// Reuse one shared client per process instead of creating a client per
// request; ioredis multiplexes commands over a single connection.
const redis = new Redis(process.env.REDIS_URL, {
  maxRetriesPerRequest: 3,
  enableReadyCheck: true,
  lazyConnect: true,
});

Security Best Practices

1. VPC-Only Access

Memorystore is reachable only over private IP from within the VPC. Never expose it to the internet.

2. Use AUTH (Optional)

# Enable AUTH
gcloud redis instances update hrms-cache-staging \
  --region=$REGION \
  --enable-auth

# Retrieve the generated AUTH string for the connection URL (see Step 4)
gcloud redis instances get-auth-string hrms-cache-staging --region=$REGION

3. Encrypt Data

Sensitive data should be encrypted before storing:

import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

// 32-byte key for AES-256, e.g. stored hex-encoded in the environment
const ENCRYPTION_KEY = Buffer.from(process.env.CACHE_ENCRYPTION_KEY!, 'hex');

function encrypt(data: string): string {
  const iv = randomBytes(12); // fresh IV per value
  const cipher = createCipheriv('aes-256-gcm', ENCRYPTION_KEY, iv);
  const encrypted = Buffer.concat([cipher.update(data, 'utf8'), cipher.final()]);
  // Persist IV and GCM auth tag alongside the ciphertext for decryption
  return [iv, cipher.getAuthTag(), encrypted].map((b) => b.toString('hex')).join(':');
}
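
A matching decrypt helper for the iv:tag:ciphertext layout used above:

// Reverse the encrypt() layout: split hex parts, verify the GCM auth tag
function decrypt(payload: string): string {
  const [iv, tag, encrypted] = payload.split(':').map((p) => Buffer.from(p, 'hex'));
  const decipher = createDecipheriv('aes-256-gcm', ENCRYPTION_KEY, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(encrypted), decipher.final()]).toString('utf8');
}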

4. Set Appropriate TTLs

Always set TTL on cached data:

// Bad: No TTL
await redis.set('key', 'value');

// Good: With TTL
await redis.setex('key', 3600, 'value'); // 1 hour

Cost Optimization

  1. Use Basic tier for staging: No HA needed
  2. Right-size instance: Start small, scale up
  3. Set aggressive TTLs: Free memory automatically
  4. Use efficient data structures: Hashes vs strings
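
For item 4, grouping related fields into a hash avoids the per-key overhead of many small string keys; the employee fields below are illustrative:

// Assumes an ioredis client, as elsewhere in this guide
import { Redis } from 'ioredis';
const redis = new Redis(process.env.REDIS_URL!);

// Many small string keys: per-key overhead for every field
await redis.set('employee:42:name', 'Ada');
await redis.set('employee:42:team', 'Platform');

// One hash: a single key, per-field access, lower overhead
await redis.hset('employee:42', { name: 'Ada', team: 'Platform' });
const team = await redis.hget('employee:42', 'team');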