Redis Memorystore Setup
Step-by-step guide to setting up Redis on Google Cloud Memorystore
This guide provides detailed instructions for setting up Google Cloud Memorystore for Redis to handle caching and session storage for the HRMS application.
Overview
| Setting | Value |
|---|---|
| Service | Google Cloud Memorystore for Redis |
| Purpose | Caching, sessions, job queues |
| Version | Redis 7.0 |
| Tier | Basic (staging) / Standard (production) |
Why Memorystore?
- Fully managed: No Redis administration
- High availability: Standard tier has automatic failover
- VPC integration: Private network access
- Sub-millisecond latency: Same region as Cloud Run
- Automatic patching: Security updates applied automatically
HRMS Redis Usage
| Use Case | Pattern | TTL |
|---|---|---|
| User sessions | session:{sessionId} | 24h |
| Permission cache | permissions:{userId} | 5min |
| Dashboard stats | dashboard:{tenantId}:{widgetId} | 1min |
| Rate limiting | ratelimit:{ip} | 1min |
| Job queues (BullMQ) | bull:{queueName}:* | - |
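The rate-limiting pattern in the table above has no dedicated example later in this guide, so here is a minimal fixed-window sketch with ioredis. The 100-requests-per-minute limit, the shared `redis` client, and the `isRateLimited` helper are illustrative assumptions, not project settings:

```typescript
// Minimal fixed-window rate limiter sketch for the ratelimit:{ip} key pattern.
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');
const LIMIT = 100;         // allowed requests per window (assumed value)
const WINDOW_SECONDS = 60; // matches the 1min TTL in the table

export async function isRateLimited(ip: string): Promise<boolean> {
  const key = `ratelimit:${ip}`;
  const count = await redis.incr(key);       // count this request atomically
  if (count === 1) {
    await redis.expire(key, WINDOW_SECONDS); // start the window on the first hit
  }
  return count > LIMIT;
}
```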
Prerequisites
- GCP project with billing enabled
- VPC network configured
- APIs enabled:
  - redis.googleapis.com
  - vpcaccess.googleapis.com
  - servicenetworking.googleapis.com
gcloud services enable \
redis.googleapis.com \
vpcaccess.googleapis.com \
servicenetworking.googleapis.com
Step 1: Create VPC Connector
Cloud Run needs a VPC connector to reach Memorystore (which is VPC-only).
export PROJECT_ID="bluewoo-hrms"
export REGION="europe-west6"
# Create VPC connector
gcloud compute networks vpc-access connectors create hrms-connector \
--region=$REGION \
--network=default \
--range=10.8.0.0/28 \
--min-instances=2 \
--max-instances=10
# Verify connector
gcloud compute networks vpc-access connectors describe hrms-connector \
--region=$REGION
Step 2: Create Redis Instance
Staging Instance (Basic Tier)
gcloud redis instances create hrms-cache-staging \
--size=1 \
--region=$REGION \
--redis-version=redis_7_0 \
--tier=basic \
--network=default
Production Instance (Standard Tier)
gcloud redis instances create hrms-cache-prod \
--size=5 \
--region=$REGION \
--redis-version=redis_7_0 \
--tier=standard_ha \
--network=default \
--replica-count=1 \
--read-replicas-mode=READ_REPLICAS_ENABLED
Note: Standard tier provides automatic failover. Enabling read replicas additionally requires an instance capacity of at least 5 GB, hence --size=5 above.
Step 3: Get Connection Information
# Get Redis host and port
gcloud redis instances describe hrms-cache-staging --region=$REGION \
--format="value(host)"
# Output: 10.0.0.3
gcloud redis instances describe hrms-cache-staging --region=$REGION \
--format="value(port)"
# Output: 6379
# Full connection info
gcloud redis instances describe hrms-cache-staging --region=$REGION \
--format="table(host,port,currentLocationId)"Step 4: Store Connection String in Secret Manager
# Get host
REDIS_HOST=$(gcloud redis instances describe hrms-cache-staging --region=$REGION --format="value(host)")
# Create secret
echo -n "redis://${REDIS_HOST}:6379" | \
gcloud secrets create hrms-redis-url \
--data-file=- \
--replication-policy="user-managed" \
--locations="europe-west6"
# For production with auth (if enabled)
# echo -n "redis://:PASSWORD@${REDIS_HOST}:6379" | ...Step 5: Configure Cloud Run to Use Redis
# Deploy with VPC connector and Redis secret
gcloud run deploy hrms-api \
--image=... \
--region=$REGION \
--vpc-connector=hrms-connector \
--set-secrets="REDIS_URL=hrms-redis-url:latest"GitHub Actions
- name: Deploy API
  uses: google-github-actions/deploy-cloudrun@v2
  with:
    service: hrms-api
    region: europe-west6
    image: ${{ env.IMAGE }}
    flags: |
      --vpc-connector=hrms-connector
      --set-secrets=REDIS_URL=hrms-redis-url:latest
Step 6: Test Connection
From Cloud Shell
# Install redis-cli
sudo apt-get install redis-tools
# Get Redis IP
REDIS_IP=$(gcloud redis instances describe hrms-cache-staging --region=$REGION --format="value(host)")
# Test connection (from Cloud Shell in same VPC)
redis-cli -h $REDIS_IP ping
# Output: PONG
redis-cli -h $REDIS_IP info server
From Application
// Test Redis connection
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL);

async function testRedis() {
  await redis.ping();
  console.log('Redis connected');
  await redis.set('test', 'value');
  const value = await redis.get('test');
  console.log('Test value:', value);
  await redis.del('test');
}

testRedis().finally(() => redis.quit());
Redis Usage Patterns
Session Storage (Auth.js)
// apps/web/auth.ts
import { Redis } from 'ioredis';
import { UpstashRedisAdapter } from '@auth/upstash-redis-adapter';

// Note: @auth/upstash-redis-adapter is written against the @upstash/redis REST client,
// so using it with a plain ioredis connection to Memorystore typically requires a thin
// compatibility wrapper or a custom adapter.
const redis = new Redis(process.env.REDIS_URL);

export const authOptions = {
  adapter: UpstashRedisAdapter(redis),
  // ... rest of config
};
Permission Caching (NestJS)
// apps/api/src/cache/permission-cache.service.ts
import { Injectable } from '@nestjs/common';
import { Redis } from 'ioredis';
@Injectable()
export class PermissionCacheService {
private redis: Redis;
private TTL = 300; // 5 minutes
constructor() {
this.redis = new Redis(process.env.REDIS_URL);
}
async getPermissions(userId: string): Promise<string[] | null> {
const cached = await this.redis.get(`permissions:${userId}`);
return cached ? JSON.parse(cached) : null;
}
async setPermissions(userId: string, permissions: string[]): Promise<void> {
await this.redis.setex(
`permissions:${userId}`,
this.TTL,
JSON.stringify(permissions)
);
}
async invalidatePermissions(userId: string): Promise<void> {
await this.redis.del(`permissions:${userId}`);
}
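  // Read-through helper (a sketch, not part of the original service): returns cached
  // permissions when present, otherwise calls the supplied loader (e.g. a DB query)
  // and repopulates the cache with the standard 5-minute TTL.
  async getOrLoadPermissions(userId: string, load: () => Promise<string[]>): Promise<string[]> {
    const cached = await this.getPermissions(userId);
    if (cached) return cached;
    const permissions = await load();
    await this.setPermissions(userId, permissions);
    return permissions;
  }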
}
Dashboard Cache
// apps/api/src/cache/dashboard-cache.service.ts
import { Injectable } from '@nestjs/common';
import { Redis } from 'ioredis';

@Injectable()
export class DashboardCacheService {
  private redis: Redis;
  private TTL = 60; // 1 minute

  constructor() {
    this.redis = new Redis(process.env.REDIS_URL);
  }

  async getWidgetData(tenantId: string, widgetId: string): Promise<any | null> {
    const key = `dashboard:${tenantId}:${widgetId}`;
    const cached = await this.redis.get(key);
    return cached ? JSON.parse(cached) : null;
  }

  async setWidgetData(tenantId: string, widgetId: string, data: any): Promise<void> {
    const key = `dashboard:${tenantId}:${widgetId}`;
    await this.redis.setex(key, this.TTL, JSON.stringify(data));
  }
}
BullMQ Job Queue
// apps/api/src/queues/email.queue.ts
import { Queue, Worker } from 'bullmq';
const connection = {
host: process.env.REDIS_HOST,
port: parseInt(process.env.REDIS_PORT || '6379'),
};
export const emailQueue = new Queue('email', { connection });
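// Usage sketch (the job name and payload shape are illustrative assumptions):
// await emailQueue.add('send-welcome-email', { to: 'user@example.com', template: 'welcome' });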
export const emailWorker = new Worker(
'email',
async (job) => {
// Process the email job (sendEmail is a placeholder for the application's mail-sending helper)
await sendEmail(job.data);
},
{ connection }
);
Instance Tiers and Pricing
Basic Tier
| Size | Memory | Use Case | Monthly Cost |
|---|---|---|---|
| 1 GB | 1 GB | Development/staging | ~$35 |
| 2 GB | 2 GB | Small production | ~$70 |
| 5 GB | 5 GB | Medium production | ~$175 |
Standard Tier (HA)
| Size | Memory | Use Case | Monthly Cost |
|---|---|---|---|
| 1 GB | 1 GB | Production | ~$70 |
| 2 GB | 2 GB | Production | ~$140 |
| 5 GB | 5 GB | High traffic | ~$350 |
Configuration Options
Memory Policy
# Set maxmemory policy
gcloud redis instances update hrms-cache-staging \
--region=$REGION \
--update-redis-config=maxmemory-policy=allkeys-lru
Policies:
- volatile-lru: Evict keys with TTL using LRU
- allkeys-lru: Evict any key using LRU (recommended)
- noeviction: Return error when memory full
Maintenance Window
gcloud redis instances update hrms-cache-prod \
--region=$REGION \
--maintenance-window-day=SUNDAY \
--maintenance-window-hour=4
Monitoring
Key Metrics
- Memory Usage: Should stay below 80%
- Connected Clients: Monitor for connection leaks
- Cache Hit Ratio: Should be >90% for effective caching
- Commands/sec: Monitor for unusual spikes
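As a rough sketch of how the cache hit ratio above can be checked from the application (assuming a shared ioredis client and that the `cacheHitRatio` helper is not part of the HRMS codebase; field names come from Redis INFO stats):

```typescript
// Compute the cache hit ratio from INFO stats (sketch; assumes REDIS_URL is set)
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

export async function cacheHitRatio(): Promise<number> {
  const stats = await redis.info('stats'); // raw INFO output as text
  const read = (field: string) => Number(stats.match(new RegExp(`${field}:(\\d+)`))?.[1] ?? 0);
  const hits = read('keyspace_hits');
  const misses = read('keyspace_misses');
  return hits + misses === 0 ? 1 : hits / (hits + misses);
}
```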
View Metrics
# Console: Cloud Console → Memorystore → Redis → Monitoring
# CLI: inspect instance details (full metrics live in Cloud Monitoring)
gcloud redis instances describe hrms-cache-staging --region=$REGION
Set Up Alerts
# Create alert for high memory usage
gcloud alpha monitoring policies create \
--display-name="Redis High Memory" \
--condition-display-name="Memory > 80%" \
--condition-filter='resource.type="redis_instance" AND metric.type="redis.googleapis.com/stats/memory/usage_ratio"' \
--condition-threshold-value=0.8 \
--condition-threshold-comparison=COMPARISON_GT
Troubleshooting
Connection Refused
Error: connect ECONNREFUSED 10.0.0.3:6379
Fix:
- Verify Cloud Run has VPC connector attached
- Check VPC connector is in same network as Redis
- Verify Redis instance is in READY state
# Check Redis state
gcloud redis instances describe hrms-cache-staging --region=$REGION \
--format="value(state)"
# Should be: READY
# Check VPC connector
gcloud compute networks vpc-access connectors describe hrms-connector \
--region=$REGION
Connection Timeout
Error: connect ETIMEDOUT
Fix:
- Firewall rules may be blocking traffic
- VPC connector IP range may conflict
# Check firewall rules
gcloud compute firewall-rules list --filter="network:default"
Out of Memory
OOM command not allowed when used memory > 'maxmemory'
Fix:
- Increase instance size
- Set appropriate TTLs on keys
- Review maxmemory-policy
# Scale up
gcloud redis instances update hrms-cache-staging \
--region=$REGION \
--size=2
Too Many Connections
Error: max number of clients reached
Fix:
- Use connection pooling
- Close connections properly
- Increase maxclients config
// Reuse a single shared ioredis client per process instead of opening a new connection per request
import Redis from 'ioredis';

const redis = new Redis({
  host: process.env.REDIS_HOST,
  maxRetriesPerRequest: 3,
  enableReadyCheck: true,
  lazyConnect: true,
});
Security Best Practices
1. VPC-Only Access
Memorystore is VPC-only by default. Never expose to internet.
2. Use AUTH (Optional)
# Enable AUTH
gcloud redis instances update hrms-cache-staging \
--region=$REGION \
--enable-auth
3. Encrypt Data
Sensitive data should be encrypted before storing:
import { createCipheriv, randomBytes } from 'crypto';

// 32-byte key, e.g. stored hex-encoded in Secret Manager
const ENCRYPTION_KEY = Buffer.from(process.env.CACHE_ENCRYPTION_KEY!, 'hex');

function encrypt(data: string): string {
  const iv = randomBytes(12); // unique IV per value
  const cipher = createCipheriv('aes-256-gcm', ENCRYPTION_KEY, iv);
  const encrypted = Buffer.concat([cipher.update(data, 'utf8'), cipher.final()]);
  // Store IV and auth tag alongside the ciphertext so the value can be decrypted later
  return Buffer.concat([iv, cipher.getAuthTag(), encrypted]).toString('hex');
}
4. Set Appropriate TTLs
Always set TTL on cached data:
// Bad: No TTL
await redis.set('key', 'value');
// Good: With TTL
await redis.setex('key', 3600, 'value'); // 1 hour
Cost Optimization
- Use Basic tier for staging: No HA needed
- Right-size instance: Start small, scale up
- Set aggressive TTLs: Free memory automatically
- Use efficient data structures: Hashes vs strings
- Compress large values: Reduce memory usage
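A minimal sketch of the compression tip above, assuming values are JSON-serializable; the cacheCompressed/readCompressed helpers and the default TTL are illustrative, not part of the HRMS codebase:

```typescript
// Gzip large JSON values before caching to reduce Redis memory usage (sketch)
import { gzipSync, gunzipSync } from 'zlib';
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

export async function cacheCompressed(key: string, value: unknown, ttlSeconds = 300): Promise<void> {
  const compressed = gzipSync(JSON.stringify(value)); // Buffer
  await redis.setex(key, ttlSeconds, compressed);     // ioredis accepts Buffer values
}

export async function readCompressed<T>(key: string): Promise<T | null> {
  const buf = await redis.getBuffer(key); // Buffer | null
  return buf ? (JSON.parse(gunzipSync(buf).toString('utf8')) as T) : null;
}
```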