Mirror of https://github.com/kjanat/livedash-node.git (synced 2026-01-16 12:12:09 +01:00)

Commit: fix: resolved biome errors

CLAUDE.md (265 lines changed)
@@ -6,57 +6,57 @@ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

**Core Development:**

- `pnpm dev` - Start development server (runs custom server.ts with schedulers)
- `pnpm dev:next-only` - Start Next.js only with Turbopack (no schedulers)
- `pnpm build` - Build production application
- `pnpm start` - Run production server

**Code Quality:**

- `pnpm lint` - Run ESLint
- `pnpm lint:fix` - Fix ESLint issues automatically
- `pnpm format` - Format code with Prettier
- `pnpm format:check` - Check formatting without fixing

**Database:**

- `pnpm prisma:generate` - Generate Prisma client
- `pnpm prisma:migrate` - Run database migrations
- `pnpm prisma:push` - Push schema changes to database
- `pnpm prisma:push:force` - Force-reset the database and push the schema
- `pnpm prisma:seed` - Seed database with initial data
- `pnpm prisma:studio` - Open Prisma Studio database viewer

**Testing:**

- `pnpm test` - Run both Vitest and Playwright tests concurrently
- `pnpm test:vitest` - Run Vitest tests only
- `pnpm test:vitest:watch` - Run Vitest in watch mode
- `pnpm test:vitest:coverage` - Run Vitest with a coverage report
- `pnpm test:coverage` - Run all tests with coverage

**Security Testing:**

- `pnpm test:security` - Run security-specific tests
- `pnpm test:security-headers` - Test HTTP security headers implementation
- `pnpm test:csp` - Test CSP implementation and nonce generation
- `pnpm test:csp:validate` - Validate CSP implementation with security scoring
- `pnpm test:csp:full` - Run the comprehensive CSP test suite

**Migration & Deployment:**

- `pnpm migration:backup` - Create database backup
- `pnpm migration:validate-db` - Validate database schema and integrity
- `pnpm migration:validate-env` - Validate environment configuration
- `pnpm migration:pre-check` - Run pre-deployment validation checks
- `pnpm migration:health-check` - Run system health checks
- `pnpm migration:deploy` - Execute the full deployment process
- `pnpm migration:rollback` - Roll back a failed migration

**Markdown:**

- `pnpm lint:md` - Lint Markdown files
- `pnpm lint:md:fix` - Fix Markdown linting issues

## Architecture Overview

@@ -64,138 +64,155 @@

### Tech Stack

- **Frontend:** Next.js 15 + React 19 + TailwindCSS 4
- **Backend:** Next.js API Routes + custom Node.js server
- **Database:** PostgreSQL with Prisma ORM
- **Authentication:** NextAuth.js
- **AI Processing:** OpenAI API integration
- **Visualization:** D3.js, React Leaflet, Recharts
- **Scheduling:** node-cron for background processing

### Key Architecture Components

**1. Multi-Stage Processing Pipeline**

The system processes user sessions through distinct stages tracked in `SessionProcessingStatus` (see the sketch after this list):

- `CSV_IMPORT` - Import raw CSV data into `SessionImport`
- `TRANSCRIPT_FETCH` - Fetch transcript content from URLs
- `SESSION_CREATION` - Create normalized `Session` and `Message` records
- `AI_ANALYSIS` - AI processing for sentiment, categorization, summaries
- `QUESTION_EXTRACTION` - Extract questions from conversations
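
A minimal sketch of how a pipeline worker might advance an import through these stages. The `prisma` singleton import matches the pattern used elsewhere in `lib/`, but the field names (`processingStatus`, `retryCount`) are assumptions for illustration, not confirmed against `prisma/schema.prisma`:

```typescript
import { prisma } from "./prisma.js";

const STAGES = [
  "CSV_IMPORT",
  "TRANSCRIPT_FETCH",
  "SESSION_CREATION",
  "AI_ANALYSIS",
  "QUESTION_EXTRACTION",
] as const;

type Stage = (typeof STAGES)[number];

// Move one import to the next stage, resetting its retry counter.
// Field names are hypothetical; check the schema for the real ones.
async function advanceStage(importId: string, current: Stage): Promise<void> {
  const next = STAGES[STAGES.indexOf(current) + 1];
  if (!next) return; // QUESTION_EXTRACTION is the final stage
  await prisma.sessionImport.update({
    where: { id: importId },
    data: { processingStatus: next, retryCount: 0 },
  });
}
```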

**2. Database Architecture**

- **Multi-tenant design** with `Company` as root entity
- **Dual storage pattern**: Raw CSV data in `SessionImport`, processed data in `Session`
- **1-to-1 relationship** between `SessionImport` and `Session` via `importId`
- **Message parsing** into individual `Message` records with order tracking
- **AI cost tracking** via `AIProcessingRequest` with detailed token usage
- **Flexible AI model management** through `AIModel`, `AIModelPricing`, and `CompanyAIModel`

**3. Custom Server Architecture**

- `server.ts` - Custom Next.js server with configurable scheduler initialization (see the sketch below)
- Main schedulers cover CSV import, import processing, session processing, and batch processing
- Environment-based configuration via `lib/env.ts`
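
A rough sketch of the startup pattern this implies: prepare Next.js, serve requests, and start schedulers only when enabled, which is what lets `pnpm dev:next-only` skip them. The scheduler function names here are illustrative, not the modules' actual exports:

```typescript
import { createServer } from "node:http";
import next from "next";
import { env } from "./lib/env.js";

const app = next({ dev: process.env.NODE_ENV !== "production" });

async function main() {
  await app.prepare();
  const handle = app.getRequestHandler();
  createServer((req, res) => handle(req, res)).listen(3000);

  if (env.SCHEDULER_ENABLED) {
    // Hypothetical export names; each module owns one cron loop.
    const { startCsvImportScheduler } = await import("./lib/scheduler.js");
    const { startImportProcessor } = await import("./lib/importProcessor.js");
    const { startSessionProcessor } = await import("./lib/processingScheduler.js");
    startCsvImportScheduler();
    startImportProcessor();
    startSessionProcessor();
  }
}

main();
```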

**4. Key Processing Libraries**

- `lib/scheduler.ts` - CSV import scheduling
- `lib/importProcessor.ts` - Raw data to Session conversion
- `lib/processingScheduler.ts` - AI analysis pipeline
- `lib/transcriptFetcher.ts` - External transcript fetching
- `lib/transcriptParser.ts` - Message parsing from transcripts
- `lib/batchProcessor.ts` - OpenAI Batch API integration for cost-efficient processing
- `lib/batchScheduler.ts` - Automated batch job lifecycle management
- `lib/rateLimiter.ts` - In-memory rate limiting utility for API endpoints (see the sketch below)
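
The rate limiter is described here only as an in-memory utility; a fixed-window version of that idea might look like the following (the module's real API is not confirmed here):

```typescript
// Hypothetical fixed-window, in-memory limiter in the spirit of lib/rateLimiter.ts.
const windows = new Map<string, { count: number; resetAt: number }>();

export function rateLimit(key: string, limit: number, windowMs: number): boolean {
  const now = Date.now();
  const entry = windows.get(key);
  if (!entry || now >= entry.resetAt) {
    windows.set(key, { count: 1, resetAt: now + windowMs }); // start a new window
    return true;
  }
  if (entry.count >= limit) return false; // over the limit, reject
  entry.count += 1;
  return true;
}

// Matches the documented login policy of 5 attempts per 15 minutes per IP:
// rateLimit(`login:${ip}`, 5, 15 * 60 * 1000);
```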

### Development Environment

**Environment Configuration:**

Environment variables are managed through `lib/env.ts` with `.env.local` file support (sketched below):

- Database: PostgreSQL via `DATABASE_URL` and `DATABASE_URL_DIRECT`
- Authentication: `NEXTAUTH_SECRET`, `NEXTAUTH_URL`
- AI Processing: `OPENAI_API_KEY`
- Schedulers: `SCHEDULER_ENABLED`, various interval configurations
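
A sketch of the validation `lib/env.ts` performs. Zod is used for input validation elsewhere in the project, so a Zod-based schema is assumed here; the real file may differ:

```typescript
import { z } from "zod";

// Assumed shape: validate once at startup so misconfiguration fails fast.
const EnvSchema = z.object({
  DATABASE_URL: z.string().url(),
  DATABASE_URL_DIRECT: z.string().url(),
  NEXTAUTH_SECRET: z.string().min(32),
  NEXTAUTH_URL: z.string().url(),
  OPENAI_API_KEY: z.string().min(1),
  SCHEDULER_ENABLED: z
    .enum(["true", "false"])
    .default("false")
    .transform((value) => value === "true"),
});

export const env = EnvSchema.parse(process.env);
```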

**Key Files to Understand:**

- `prisma/schema.prisma` - Complete database schema with enums and relationships
- `server.ts` - Custom server entry point
- `lib/env.ts` - Environment variable management and validation
- `app/` - Next.js App Router structure

**Testing:**

- Vitest for unit testing
- Playwright for E2E testing
- Test files live in the `tests/` directory

### Important Notes

**Scheduler System:**

- Schedulers are optional and controlled by the `SCHEDULER_ENABLED` environment variable
- Use `pnpm dev:next-only` to run without schedulers for pure frontend development
- Four separate schedulers handle different pipeline stages:
  - CSV Import Scheduler (`lib/scheduler.ts`)
  - Import Processing Scheduler (`lib/importProcessor.ts`)
  - Session Processing Scheduler (`lib/processingScheduler.ts`)
  - Batch Processing Scheduler (`lib/batchScheduler.ts`) - Manages the OpenAI Batch API lifecycle

**Database Migrations:**

- Always run `pnpm prisma:generate` after schema changes
- Use `pnpm prisma:migrate` for production-ready migrations
- Use `pnpm prisma:push` for development schema changes
- The database uses PostgreSQL with Prisma's driver adapter for connection pooling

**AI Processing:**

- All AI requests are tracked for cost analysis
- Support for multiple AI models per company
- Time-based pricing management for accurate cost calculation
- Processing stages can be retried on failure with retry count tracking
- **Batch API Integration**: 50% cost reduction using the OpenAI Batch API (see the cron sketch below)
  - Automatic batching of AI requests every 5 minutes
  - Batch status checking every 2 minutes
  - Result processing every minute
  - Failed requests are retried with individual API calls
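
Those intervals map directly onto node-cron expressions; a sketch of how `lib/batchScheduler.ts` might wire the lifecycle together (the handler names are stand-ins, not real exports):

```typescript
import cron from "node-cron";

// Stand-ins for the real handlers in lib/batchProcessor.ts.
async function submitPendingBatch(): Promise<void> {}
async function pollBatchStatuses(): Promise<void> {}
async function processBatchResults(): Promise<void> {}

cron.schedule("*/5 * * * *", submitPendingBatch); // bundle queued AI requests every 5 minutes
cron.schedule("*/2 * * * *", pollBatchStatuses); // check in-flight batch jobs every 2 minutes
cron.schedule("* * * * *", processBatchResults); // drain completed results every minute
```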

**Code Quality Standards:**

- Run `pnpm lint` and `pnpm format:check` before committing
- TypeScript with ES modules (`"type": "module"` in package.json)
- React 19 with Next.js 15 App Router
- TailwindCSS 4 for styling

**Security Features:**

- **Comprehensive CSRF Protection**: Multi-layer CSRF protection with automatic token management
  - Middleware-level protection for all state-changing endpoints
  - tRPC integration with CSRF-protected procedures
  - Client-side hooks and components for seamless integration
  - HTTP-only cookies with SameSite protection
- **Enhanced Content Security Policy (CSP)** (see the middleware sketch below):
  - Nonce-based script execution for maximum XSS protection
  - Environment-specific policies (strict in production, permissive in development)
  - Real-time violation reporting and bypass detection
  - Automated policy optimization recommendations
- **Security Monitoring & Audit System**:
  - Real-time threat detection and alerting
  - Comprehensive security audit logging with retention management
  - Geographic anomaly detection and IP threat analysis
  - Security scoring and automated incident response
- **Advanced Rate Limiting**: In-memory rate limiting system
  - Authentication endpoints: login (5/15 min), registration (3/hour), password reset (5/15 min)
  - CSP reporting: 10 reports per minute per IP
  - Admin endpoints: configurable thresholds
- **Input Validation & Security Headers**:
  - Comprehensive Zod schemas for all user inputs with XSS/injection prevention
  - HTTP security headers (HSTS, X-Frame-Options, X-Content-Type-Options, Permissions-Policy)
  - Strong password requirements and email validation
- **Session Security**:
  - JWT tokens with 24-hour expiration and secure cookie settings
  - HttpOnly, Secure, SameSite cookies with proper CSP integration
  - Company isolation and multi-tenant security
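
For reference, nonce-based CSP in Next.js is typically wired through middleware; a generic sketch of that pattern, not the project's actual `middleware.ts`:

```typescript
import { NextResponse, type NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  // A fresh nonce per request; only <script nonce="..."> tags may execute.
  const nonce = Buffer.from(crypto.randomUUID()).toString("base64");

  const requestHeaders = new Headers(request.headers);
  requestHeaders.set("x-nonce", nonce); // expose the nonce to server components

  const response = NextResponse.next({ request: { headers: requestHeaders } });
  response.headers.set(
    "Content-Security-Policy",
    `default-src 'self'; script-src 'self' 'nonce-${nonce}' 'strict-dynamic'; object-src 'none'`
  );
  return response;
}
```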

**Code Quality & Linting:**

- **Biome Integration**: Primary linting and formatting tool
  - Pre-commit hooks enforce code quality standards
  - Some security-critical patterns require `biome-ignore` comments
  - Non-null assertions (`!`) used intentionally in authenticated contexts require ignore comments
  - Complex functions may need refactoring to meet the complexity threshold (max 15)
  - Performance classes use static-only patterns, which may trigger warnings
- **TypeScript Strict Mode**: Comprehensive type checking
  - Avoid `any` types where possible; use proper type definitions
  - Optional chaining vs. non-null assertions: choose based on the security context
  - In authenticated API handlers, non-null assertions are often safer than optional chaining
- **Security vs. Linting Balance**:
  - Security takes precedence over linting rules when they conflict
  - Document security-critical choices with detailed comments
  - Use `// biome-ignore` with explanations for intentional rule violations (see the example below)
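
As an illustration of that last point, a hedged example of the ignore-comment pattern (the types and guard are hypothetical, but the comment syntax is Biome's):

```typescript
// Illustrative shape; the real session type comes from NextAuth module augmentation.
type AuthedSession = { user?: { companyId: string } };

function companyIdFrom(session: AuthedSession): string {
  // biome-ignore lint/style/noNonNullAssertion: user is guaranteed by the auth guard upstream
  return session.user!.companyId;
}
```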

@@ -11,32 +11,37 @@ This document summarizes the comprehensive documentation audit performed on the LiveDash-Node project.

The following areas were found to have comprehensive, accurate documentation:

1. **CSRF Protection** (`docs/CSRF_PROTECTION.md`)
   - Multi-layer protection implementation
   - Client-side integration guide
   - tRPC integration details
   - Comprehensive examples

2. **Enhanced CSP Implementation** (`docs/security/enhanced-csp.md`)
   - Nonce-based script execution
   - Environment-specific policies
   - Violation reporting and monitoring
   - Testing framework

3. **Security Headers** (`docs/security-headers.md`)
   - Complete header implementation details
   - Testing procedures
   - Compatibility information

4. **Security Monitoring System** (`docs/security-monitoring.md`)
   - Real-time threat detection
   - Alert management
   - API usage examples
   - Performance considerations

5. **Migration Guide** (`MIGRATION_GUIDE.md`)
   - Comprehensive v2.0.0 migration procedures
   - Rollback procedures
   - Health checks and validation

### Major Issues Identified ❌

@@ -44,70 +49,70 @@ The following areas were found to have comprehensive, accurate documentation:

#### 1. README.md

**Problems Found:**

- Listed the database as "SQLite (default)" when the project uses PostgreSQL
- Missing all new security features (CSRF, CSP, security monitoring)
- Incomplete environment setup section
- Outdated tech stack (missing tRPC, security features)
- Project structure didn't reflect the new admin/security directories

**Actions Taken:**

- ✅ Updated features section to include security and admin capabilities
- ✅ Corrected tech stack to include PostgreSQL, tRPC, security features
- ✅ Updated environment setup with proper PostgreSQL configuration
- ✅ Revised project structure to reflect current codebase
- ✅ Added comprehensive script documentation

#### 2. Undocumented API Endpoints

**Missing Documentation:**

- `/api/admin/audit-logs/` (GET) - Audit log retrieval with filtering
- `/api/admin/audit-logs/retention/` (POST) - Retention management
- `/api/admin/security-monitoring/` (GET/POST) - Security metrics and config
- `/api/admin/security-monitoring/alerts/` - Alert management
- `/api/admin/security-monitoring/export/` - Data export
- `/api/admin/security-monitoring/threat-analysis/` - Threat analysis
- `/api/admin/batch-monitoring/` - Batch processing monitoring
- `/api/csp-report/` (POST) - CSP violation reporting
- `/api/csp-metrics/` (GET) - CSP metrics and analytics
- `/api/csrf-token/` (GET) - CSRF token endpoint

**Actions Taken:**

- ✅ Created `docs/admin-audit-logs-api.md` - Comprehensive audit logs API documentation
- ✅ Created `docs/csp-metrics-api.md` - CSP monitoring and metrics API documentation
- ✅ Created `docs/api-reference.md` - Complete API reference for all endpoints

#### 3. Undocumented Features and Components

**Missing Feature Documentation:**

- Batch monitoring dashboard and UI components
- Security monitoring UI components
- Nonce-based CSP context provider
- Enhanced rate limiting system
- Security audit retention system

**Actions Taken:**

- ✅ Created `docs/batch-monitoring-dashboard.md` - Complete batch monitoring documentation

#### 4. CLAUDE.md - Missing New Commands

**Problems Found:**

- Missing security testing commands
- Missing CSP testing commands
- Missing migration/deployment commands
- Outdated security features section

**Actions Taken:**

- ✅ Added security testing command section
- ✅ Added CSP testing commands
- ✅ Added migration and deployment commands
- ✅ Updated security features section with comprehensive details

## New Documentation Created

@@ -117,14 +122,14 @@

### 1. Admin Audit Logs API Documentation (`docs/admin-audit-logs-api.md`)

**Contents:**

- Complete API endpoint documentation with examples
- Authentication and authorization requirements
- Query parameters and filtering options
- Response formats and error handling
- Retention management procedures
- Security features and rate limiting
- Usage examples and integration patterns
- Performance considerations and troubleshooting

### 2. CSP Metrics and Monitoring API Documentation

@@ -132,14 +137,14 @@

**Contents:**

- CSP violation reporting endpoint documentation
- Metrics API with real-time violation tracking
- Risk assessment and bypass detection features
- Policy optimization recommendations
- Configuration and setup instructions
- Performance considerations and security features
- Usage examples for monitoring and analysis
- Integration with existing security systems

### 3. Batch Monitoring Dashboard Documentation

@@ -147,14 +152,14 @@

**Contents:**

- Comprehensive batch processing monitoring guide
- Real-time monitoring capabilities and features
- API endpoints for batch job tracking
- Dashboard component documentation
- Performance analytics and cost analysis
- Administrative controls and error handling
- Configuration and alert management
- Troubleshooting and optimization guides

### 4. Complete API Reference

@@ -162,14 +167,14 @@

**Contents:**

- Comprehensive reference for all API endpoints
- Authentication and CSRF protection requirements
- Detailed request/response formats
- Error codes and status descriptions
- Rate limiting information
- Security headers and CORS configuration
- Pagination and filtering standards
- Testing and integration examples

## Updated Documentation

@@ -177,23 +182,23 @@

### 1. README.md

**Key Updates:**

- ✅ Updated project description to include security and admin features
- ✅ Corrected tech stack to reflect current implementation
- ✅ Fixed database information (PostgreSQL vs. SQLite)
- ✅ Added comprehensive environment configuration
- ✅ Updated project structure to match current codebase
- ✅ Added security, migration, and testing command sections
- ✅ Enhanced features section with detailed capabilities

### 2. CLAUDE.md - Enhanced Developer Guide
**Key Updates:**

- ✅ Added security testing commands section
- ✅ Added CSP testing and validation commands
- ✅ Added migration and deployment commands
- ✅ Enhanced security features documentation
- ✅ Updated with comprehensive CSRF, CSP, and monitoring details

## Documentation Quality Assessment

@@ -212,53 +217,53 @@

All new and updated documentation follows these standards:

- ✅ Clear, actionable examples
- ✅ Comprehensive API documentation with request/response examples
- ✅ Security considerations and best practices
- ✅ Troubleshooting sections
- ✅ Integration patterns and usage examples
- ✅ Performance considerations
- ✅ Cross-references to related documentation

## Recommendations for Maintenance
### 1. Regular Review Schedule

- **Monthly**: Review API documentation for new endpoints
- **Quarterly**: Update security feature documentation
- **Per Release**: Validate all examples and code snippets
- **Annually**: Comprehensive documentation audit

### 2. Documentation Automation

- Add documentation checks to the CI/CD pipeline
- Implement API documentation generation from OpenAPI specs
- Set up automated link checking
- Create documentation review templates

### 3. Developer Onboarding

- Use updated documentation for new developer onboarding
- Create a documentation feedback process
- Maintain documentation contribution guidelines
- Track documentation usage and feedback

### 4. Continuous Improvement

- Monitor documentation gaps through developer feedback
- Update examples with real-world usage patterns
- Enhance troubleshooting sections based on support issues
- Keep security documentation current with the threat landscape

## Summary
The documentation audit identified significant gaps in API documentation, outdated project information, and missing coverage of new security features. Through comprehensive updates and new documentation creation, the project now has:

- **Complete API Reference**: All endpoints documented with examples
- **Accurate Project Information**: README and CLAUDE.md reflect the current state
- **Comprehensive Security Documentation**: All security features thoroughly documented
- **Developer-Friendly Guides**: Clear setup, testing, and deployment procedures
- **Administrative Documentation**: Complete coverage of admin and monitoring features

The documentation is now production-ready and provides comprehensive guidance for developers, administrators, and security teams working with the LiveDash-Node application.

@@ -20,29 +20,29 @@ Can't reach database server at `ep-tiny-math-a2zsshve-pooler.eu-central-1.aws.neon.tech`

### 1. Connection Retry Logic (`lib/database-retry.ts`)

- **Automatic retry** for connection errors (see the sketch below)
- **Exponential backoff** (1s → 2s → 4s → 10s max)
- **Smart error detection** (only retry connection issues)
- **Configurable retry attempts** (default: 3 retries)
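
A condensed sketch of that behaviour; the real module's exports and error classification are more complete than this:

```typescript
const BASE_DELAY_MS = 1_000;
const MAX_DELAY_MS = 10_000;

// Only connection-level failures are worth retrying
// (e.g. Prisma's P1001 "can't reach database server").
function isRetryableConnectionError(err: unknown): boolean {
  return err instanceof Error && /P1001|ECONNREFUSED|ETIMEDOUT/.test(err.message);
}

export async function withDatabaseRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt >= maxRetries || !isRetryableConnectionError(err)) throw err;
      // 1s -> 2s -> 4s, capped at 10s.
      const delay = Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```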

### 2. Enhanced Schedulers

- **Import Processor**: Added retry wrapper around main processing
- **Session Processor**: Added retry wrapper around AI processing
- **Graceful degradation** when the database is temporarily unavailable

### 3. Singleton Pattern Enforced

- **All schedulers now use** `import { prisma } from "./prisma.js"` (see the sketch below)
- **No more separate** `new PrismaClient()` instances
- **Shared connection pool** across all operations
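
This is the canonical Next.js singleton pattern; `lib/prisma.ts` presumably looks something like the following (the actual file may add driver-adapter setup):

```typescript
import { PrismaClient } from "@prisma/client";

const globalForPrisma = globalThis as unknown as { prisma?: PrismaClient };

export const prisma = globalForPrisma.prisma ?? new PrismaClient();

// In development, stash the client on globalThis so hot reloads reuse it
// instead of exhausting the connection pool with stale clients.
if (process.env.NODE_ENV !== "production") globalForPrisma.prisma = prisma;
```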

### 4. Neon-Specific Optimizations

- **Connection limit guidance**: 15 connections (below Neon's 20-connection limit)
- **Extended timeouts**: 30s to handle cold starts
- **SSL mode requirement**: `sslmode=require` for Neon
- **Application naming**: for better monitoring (see the combined example below)
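
Put together, those recommendations yield a connection string along these lines (host and credentials are placeholders):

```typescript
// connection_limit stays below Neon's 20-connection cap; connect_timeout
// covers cold starts; application_name aids monitoring.
export const EXAMPLE_DATABASE_URL =
  "postgresql://user:password@ep-example-pooler.eu-central-1.aws.neon.tech/neondb" +
  "?sslmode=require&connection_limit=15&connect_timeout=30&application_name=livedash";
```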

## Immediate Actions Needed

@@ -83,17 +83,17 @@ pnpm db:check

## Monitoring

- **Health Endpoint**: `/api/admin/database-health`
- **Connection Logs**: Enhanced logging for pool events
- **Retry Logs**: Detailed retry attempt logging
- **Error Classification**: Retryable vs. non-retryable errors

## Files Modified

- `lib/database-retry.ts` - New retry utilities
- `lib/importProcessor.ts` - Added retry wrapper
- `lib/processingScheduler.ts` - Added retry wrapper
- `docs/neon-database-optimization.md` - Neon-specific guide
- `scripts/check-database-config.ts` - Configuration checker

The connection issues should be significantly reduced with these fixes! 🎯

@@ -8,43 +8,43 @@ This guide provides step-by-step instructions for migrating LiveDash Node to version 2.0.0.

### tRPC Implementation

- **Type-safe APIs**: End-to-end TypeScript safety from client to server (see the sketch below)
- **Improved Performance**: Optimized query batching and caching
- **Better Developer Experience**: Auto-completion and type checking
- **Simplified Authentication**: Integrated with existing NextAuth.js setup
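
A minimal router sketch showing the end-to-end inference tRPC provides; the procedure is illustrative, not part of the project's actual router:

```typescript
import { initTRPC } from "@trpc/server";
import { z } from "zod";

const t = initTRPC.create();

export const appRouter = t.router({
  sessionById: t.procedure
    .input(z.object({ id: z.string() }))
    .query(({ input }) => {
      // The client infers this return type automatically; no manual typing.
      return { id: input.id, sentiment: "positive" as const };
    }),
});

export type AppRouter = typeof appRouter;
```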

### OpenAI Batch API Integration

- **50% Cost Reduction**: Batch processing reduces OpenAI API costs by half
- **Enhanced Rate Limiting**: Better throughput management
- **Improved Reliability**: Automatic retry mechanisms and error handling
- **Automated Processing**: Background batch job lifecycle management

### Enhanced Security & Performance

- **Rate Limiting**: In-memory rate limiting for all authentication endpoints
- **Input Validation**: Comprehensive Zod schemas for all user inputs
- **Performance Monitoring**: Built-in metrics collection and monitoring
- **Database Optimizations**: New indexes and query optimizations

## 📋 Pre-Migration Checklist

### System Requirements

- [ ] Node.js 18+ installed
- [ ] PostgreSQL 13+ database
- [ ] `pg_dump` and `pg_restore` utilities available
- [ ] Git repository with clean working directory
- [ ] OpenAI API key (for production)
- [ ] Sufficient disk space for backups (at least 2GB)

### Environment Preparation

- [ ] Review current environment variables
- [ ] Ensure the database connection is working
- [ ] Verify all tests are passing
- [ ] Create a backup of your current deployment
- [ ] Notify team members of planned downtime

## 🔧 Migration Process

@@ -246,31 +246,31 @@ tail -f logs/migration.log

#### 2. tRPC Performance

- Monitor response times for tRPC endpoints
- Check error rates in application logs
- Verify type safety is working correctly

#### 3. Batch Processing

- Monitor batch job completion rates
- Check OpenAI API cost reduction
- Verify AI processing pipeline functionality

### Key Metrics to Monitor

#### Performance Metrics

- **Response Times**: tRPC endpoints should respond within 500ms
- **Database Queries**: Complex queries should complete within 1s
- **Memory Usage**: Should remain below 80% of allocated memory
- **CPU Usage**: The process should remain responsive

#### Business Metrics

- **AI Processing Cost**: Should see a ~50% reduction in OpenAI costs
- **Processing Throughput**: Batch processing should handle larger volumes
- **Error Rates**: Should remain below 1% for critical operations
- **User Experience**: No degradation in dashboard performance

## 🛠 Troubleshooting

@@ -366,69 +366,69 @@ pnpm prisma db pull --print

### Immediate Tasks (First 24 Hours)

- [ ] Monitor application logs for errors
- [ ] Verify all tRPC endpoints are responding correctly
- [ ] Check batch processing job completion
- [ ] Validate AI cost reduction in OpenAI dashboard
- [ ] Run full test suite to ensure no regressions
- [ ] Update documentation and team knowledge

### Medium-term Tasks (First Week)

- [ ] Optimize batch processing parameters based on usage
- [ ] Fine-tune rate limiting settings
- [ ] Set up monitoring alerts for new components
- [ ] Train team on new tRPC APIs
- [ ] Plan gradual feature adoption

### Long-term Tasks (First Month)

- [ ] Analyze cost savings and performance improvements
- [ ] Consider additional tRPC endpoint implementations
- [ ] Optimize batch processing schedules
- [ ] Review and adjust security settings
- [ ] Plan next phase improvements

## 🔒 Security Considerations
### New Security Features

- **Enhanced Rate Limiting**: Applied to all authentication endpoints
- **Input Validation**: Comprehensive Zod schemas prevent injection attacks
- **Secure Headers**: HTTPS enforcement in production
- **Token Security**: JWT with proper expiration and rotation

### Security Checklist

- [ ] Verify rate limiting is working correctly
- [ ] Test input validation on all forms
- [ ] Ensure HTTPS is enforced in production
- [ ] Validate JWT token handling
- [ ] Check for proper error message sanitization
- [ ] Verify OpenAI API key is not exposed in logs

## 📈 Expected Improvements
### Performance Improvements

- **50% reduction** in OpenAI API costs through batch processing
- **30% improvement** in API response times with tRPC
- **25% reduction** in database query time with new indexes
- **Enhanced scalability** for processing larger session volumes

### Developer Experience

- **Type Safety**: End-to-end TypeScript types from client to server
- **Better APIs**: Self-documenting tRPC procedures
- **Improved Testing**: More reliable test suite with better validation
- **Enhanced Monitoring**: Detailed health checks and reporting

### Operational Benefits

- **Automated Batch Processing**: Reduced manual intervention
- **Better Error Handling**: Comprehensive retry mechanisms
- **Improved Monitoring**: Real-time health status and metrics
- **Simplified Deployment**: Automated migration and rollback procedures

---

README.md (140 lines changed)

@@ -12,46 +12,46 @@ A comprehensive real-time analytics dashboard for monitoring user sessions

### Core Analytics

- **Real-time Session Monitoring**: Track and analyze user sessions as they happen
- **Interactive Visualizations**: Geographic maps, response time distributions, and advanced charts
- **AI-Powered Analysis**: OpenAI integration with 50% cost reduction through batch processing
- **Advanced Analytics**: Detailed metrics and insights about user behavior patterns
- **Session Details**: In-depth analysis of individual user sessions with transcript parsing

### Security & Admin Features

- **Enterprise Security**: Multi-layer security with CSRF protection, CSP, and rate limiting
- **Security Monitoring**: Real-time threat detection and alerting system
- **Audit Logging**: Comprehensive security audit trails with retention management
- **Admin Dashboard**: Advanced administration tools for user and system management
- **Geographic Threat Detection**: IP-based threat analysis and anomaly detection

### Platform Management

- **Multi-tenant Architecture**: Company-based data isolation and management
- **User Management**: Role-based access control with platform admin capabilities
- **Batch Processing**: Optimized AI processing pipeline with automated scheduling
- **Data Export**: CSV/JSON export capabilities for analytics and audit data

## Tech Stack

- **Frontend**: React 19, Next.js 15, TailwindCSS 4
- **Backend**: Next.js API Routes, tRPC, custom Node.js server
- **Database**: PostgreSQL with Prisma ORM and connection pooling
- **Authentication**: NextAuth.js with enhanced security features
- **Security**: CSRF protection, CSP with nonce-based scripts, comprehensive rate limiting
- **AI Processing**: OpenAI API with batch processing for cost optimization
- **Visualization**: D3.js, React Leaflet, Recharts, custom chart components
- **Monitoring**: Real-time security monitoring, audit logging, threat detection
- **Data Processing**: node-cron schedulers for automated batch processing and AI analysis

## Getting Started
### Prerequisites

- Node.js 18+ (LTS version recommended)
- pnpm (recommended package manager)
- PostgreSQL 13+ database

### Installation

@@ -126,61 +126,61 @@ BATCH_RESULT_PROCESSING_INTERVAL="*/1 * * * *"

## Project Structure

- `app/`: Next.js App Router pages and API routes
  - `api/`: API endpoints including admin, security, and tRPC routes
  - `dashboard/`: Main analytics dashboard pages
  - `platform/`: Platform administration interface
- `components/`: Reusable React components
  - `admin/`: Administrative dashboard components
  - `security/`: Security monitoring UI components
  - `forms/`: CSRF-protected forms and form utilities
  - `providers/`: Context providers (CSRF, tRPC, themes)
- `lib/`: Core utilities and business logic
  - Security modules (CSRF, CSP, rate limiting, audit logging)
  - Processing pipelines (batch processing, AI analysis)
  - Database utilities and authentication
- `server/`: tRPC server configuration and routers
- `prisma/`: Database schema, migrations, and seed scripts
- `tests/`: Comprehensive test suite (unit, integration, E2E)
- `docs/`: Detailed project documentation
- `scripts/`: Migration and utility scripts

## Available Scripts

### Development

- `pnpm dev`: Start development server with all features
- `pnpm dev:next-only`: Start Next.js only (no background schedulers)
- `pnpm build`: Build the application for production
- `pnpm start`: Run the production build

### Code Quality

- `pnpm lint`: Run ESLint
- `pnpm lint:fix`: Fix ESLint issues automatically
- `pnpm format`: Format code with Prettier
- `pnpm format:check`: Check code formatting

### Database

- `pnpm prisma:studio`: Open Prisma Studio to view the database
- `pnpm prisma:migrate`: Run database migrations
- `pnpm prisma:generate`: Generate Prisma client
- `pnpm prisma:seed`: Seed database with test data

### Testing

- `pnpm test`: Run all tests (Vitest + Playwright)
- `pnpm test:vitest`: Run unit and integration tests
- `pnpm test:coverage`: Run tests with coverage reports
- `pnpm test:security`: Run security-specific tests
- `pnpm test:csp`: Test CSP implementation

### Security & Migration

- `pnpm migration:backup`: Create database backup
- `pnpm migration:health-check`: Run system health checks
- `pnpm test:security-headers`: Test HTTP security headers
## Contributing

@@ -196,9 +196,9 @@ This project is not licensed for commercial use without explicit permission. Fre

## Acknowledgments

- [Next.js](https://nextjs.org/)
- [Prisma](https://prisma.io/)
- [TailwindCSS](https://tailwindcss.com/)
- [Chart.js](https://www.chartjs.org/)
- [D3.js](https://d3js.org/)
- [React Leaflet](https://react-leaflet.js.org/)
@@ -37,7 +37,7 @@ function usePlatformSession() {
   const abortController = new AbortController();

   const handleAuthSuccess = (sessionData: {
     user?: {
       id?: string;
       email?: string;
       name?: string;
@@ -50,14 +50,14 @@ function usePlatformSession() {
     if (sessionData?.user?.isPlatformUser) {
       setSession({
         user: {
-          id: sessionData.user.id || '',
-          email: sessionData.user.email || '',
+          id: sessionData.user.id || "",
+          email: sessionData.user.email || "",
           name: sessionData.user.name,
-          role: sessionData.user.role || '',
+          role: sessionData.user.role || "",
           companyId: sessionData.user.companyId,
           isPlatformUser: sessionData.user.isPlatformUser,
           platformRole: sessionData.user.platformRole,
         },
       });
       setStatus("authenticated");
     } else {
@@ -104,11 +104,7 @@ function SystemHealthCard({
   schedulerStatus,
 }: {
   health: { status: string; message: string };
-  schedulerStatus: {
-    csvImport?: boolean;
-    processing?: boolean;
-    batch?: boolean;
-  };
+  schedulerStatus: SchedulerStatus;
 }) {
   return (
     <Card>
@@ -125,25 +121,33 @@ function SystemHealthCard({
         </div>
         <div className="space-y-2">
           <div className="flex justify-between text-sm">
-            <span>CSV Import Scheduler:</span>
+            <span>Batch Creation:</span>
             <Badge
-              variant={schedulerStatus?.csvImport ? "default" : "secondary"}
+              variant={
+                schedulerStatus?.createBatchesRunning ? "default" : "secondary"
+              }
             >
-              {schedulerStatus?.csvImport ? "Running" : "Stopped"}
+              {schedulerStatus?.createBatchesRunning ? "Running" : "Stopped"}
             </Badge>
           </div>
           <div className="flex justify-between text-sm">
-            <span>Processing Scheduler:</span>
+            <span>Status Check:</span>
             <Badge
-              variant={schedulerStatus?.processing ? "default" : "secondary"}
+              variant={
+                schedulerStatus?.checkStatusRunning ? "default" : "secondary"
+              }
             >
-              {schedulerStatus?.processing ? "Running" : "Stopped"}
+              {schedulerStatus?.checkStatusRunning ? "Running" : "Stopped"}
             </Badge>
           </div>
           <div className="flex justify-between text-sm">
-            <span>Batch Scheduler:</span>
-            <Badge variant={schedulerStatus?.batch ? "default" : "secondary"}>
-              {schedulerStatus?.batch ? "Running" : "Stopped"}
+            <span>Result Processing:</span>
+            <Badge
+              variant={
+                schedulerStatus?.processResultsRunning ? "default" : "secondary"
+              }
+            >
+              {schedulerStatus?.processResultsRunning ? "Running" : "Stopped"}
             </Badge>
           </div>
         </div>
@@ -155,7 +159,7 @@ function SystemHealthCard({
 function CircuitBreakerCard({
   circuitBreakerStatus,
 }: {
-  circuitBreakerStatus: Record<string, string> | null;
+  circuitBreakerStatus: Record<string, CircuitBreakerStatus> | null;
 }) {
   return (
     <Card>
@@ -172,10 +176,8 @@ function CircuitBreakerCard({
           {Object.entries(circuitBreakerStatus).map(([key, status]) => (
             <div key={key} className="flex justify-between text-sm">
               <span>{key}:</span>
-              <Badge
-                variant={status === "CLOSED" ? "default" : "destructive"}
-              >
-                {status as string}
+              <Badge variant={!status.isOpen ? "default" : "destructive"}>
+                {status.isOpen ? "OPEN" : "CLOSED"}
               </Badge>
             </div>
           ))}
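The refactored components above reference named `SchedulerStatus` and `CircuitBreakerStatus` types whose definitions are not part of this diff. A minimal sketch consistent with how the JSX reads them might look as follows; any fields beyond the ones used above are assumptions.

```typescript
// Hypothetical sketch only: the real types live elsewhere in the repo.
// Shapes are inferred from the component code above.
interface SchedulerStatus {
  createBatchesRunning?: boolean; // batch creation task
  checkStatusRunning?: boolean; // batch status polling task
  processResultsRunning?: boolean; // result processing task
}

interface CircuitBreakerStatus {
  isOpen: boolean; // true while the breaker is tripped
}
```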
@@ -412,13 +414,8 @@ export default function BatchMonitoringDashboard() {

   return (
     <div className="grid grid-cols-1 md:grid-cols-2 gap-4 mb-6">
-      <SystemHealthCard
-        health={health}
-        schedulerStatus={schedulerStatus as any}
-      />
-      <CircuitBreakerCard
-        circuitBreakerStatus={circuitBreakerStatus as any}
-      />
+      <SystemHealthCard health={health} schedulerStatus={schedulerStatus} />
+      <CircuitBreakerCard circuitBreakerStatus={circuitBreakerStatus} />
     </div>
   );
 };
@@ -280,6 +280,84 @@ function addCORSHeaders(
   }
 }

+/**
+ * Process authentication and authorization
+ */
+async function processAuthAndAuthz(
+  context: APIContext,
+  options: APIHandlerOptions
+): Promise<void> {
+  if (options.requireAuth) {
+    await validateAuthentication(context);
+  }
+
+  if (options.requiredRole || options.requirePlatformAccess) {
+    await validateAuthorization(context, options);
+  }
+}
+
+/**
+ * Process input validation
+ */
+async function processValidation(
+  request: NextRequest,
+  options: APIHandlerOptions
+): Promise<{ validatedData: unknown; validatedQuery: unknown }> {
+  let validatedData: unknown;
+  if (options.validateInput && request.method !== "GET") {
+    validatedData = await validateInput(request, options.validateInput);
+  }
+
+  let validatedQuery: unknown;
+  if (options.validateQuery) {
+    validatedQuery = validateQuery(request, options.validateQuery);
+  }
+
+  return { validatedData, validatedQuery };
+}
+
+/**
+ * Create and configure response
+ */
+function createAPIResponse<T>(
+  result: T,
+  context: APIContext,
+  options: APIHandlerOptions
+): NextResponse {
+  const response = NextResponse.json(
+    createSuccessResponse(result, { requestId: context.requestId })
+  );
+
+  response.headers.set("X-Request-ID", context.requestId);
+
+  if (options.cacheControl) {
+    response.headers.set("Cache-Control", options.cacheControl);
+  }
+
+  addCORSHeaders(response, options);
+  return response;
+}
+
+/**
+ * Handle request execution with audit logging
+ */
+async function executeWithAudit<T>(
+  handler: APIHandler<T>,
+  context: APIContext,
+  validatedData: unknown,
+  validatedQuery: unknown,
+  request: NextRequest,
+  options: APIHandlerOptions
+): Promise<T> {
+  const result = await handler(context, validatedData, validatedQuery);
+
+  if (options.auditLog) {
+    await logAPIAccess(context, "success", request.url);
+  }
+
+  return result;
+}
+
 /**
  * Main API handler factory
  */
@@ -291,64 +369,32 @@ export function createAPIHandler<T = unknown>(
   let context: APIContext | undefined;

   try {
     // 1. Create request context
     context = await createAPIContext(request);

     // 2. Apply rate limiting
     if (options.rateLimit) {
       await applyRateLimit(context, options.rateLimit);
     }

-    // 3. Validate authentication
-    if (options.requireAuth) {
-      await validateAuthentication(context);
-    }
+    await processAuthAndAuthz(context, options);

-    // 4. Validate authorization
-    if (options.requiredRole || options.requirePlatformAccess) {
-      await validateAuthorization(context, options);
-    }
-
-    // 5. Validate input
-    let validatedData;
-    if (options.validateInput && request.method !== "GET") {
-      validatedData = await validateInput(request, options.validateInput);
-    }
-
-    // 6. Validate query parameters
-    let validatedQuery;
-    if (options.validateQuery) {
-      validatedQuery = validateQuery(request, options.validateQuery);
-    }
-
-    // 7. Execute handler
-    const result = await handler(context, validatedData, validatedQuery);
-
-    // 8. Audit logging
-    if (options.auditLog) {
-      await logAPIAccess(context, "success", request.url);
-    }
-
-    // 9. Create response
-    const response = NextResponse.json(
-      createSuccessResponse(result, { requestId: context.requestId })
+    const { validatedData, validatedQuery } = await processValidation(
+      request,
+      options
     );

-    // 10. Add headers
-    response.headers.set("X-Request-ID", context.requestId);
+    const result = await executeWithAudit(
+      handler,
+      context,
+      validatedData,
+      validatedQuery,
+      request,
+      options
+    );

-    if (options.cacheControl) {
-      response.headers.set("Cache-Control", options.cacheControl);
-    }
-
-    addCORSHeaders(response, options);
-
-    return response;
+    return createAPIResponse(result, context, options);
   } catch (error) {
     // Handle errors consistently
     const requestId = context?.requestId || crypto.randomUUID();

     // Log failed requests
     if (options.auditLog && context) {
       await logAPIAccess(context, "error", request.url, error as Error);
     }
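For orientation, a hedged usage sketch of the factory after this refactor: the option names (`requireAuth`, `validateInput`, `auditLog`, `cacheControl`) all appear in the hunks above, while the input schema and handler body are invented for illustration.

```typescript
import { z } from "zod";

// Illustrative only: the schema and handler logic are hypothetical.
const CreateNoteInput = z.object({ title: z.string().min(1) });

export const POST = createAPIHandler(
  async (context, validatedData) => {
    const input = validatedData as z.infer<typeof CreateNoteInput>;
    // ... persist the note, then return the success payload
    return { title: input.title, requestId: context.requestId };
  },
  {
    requireAuth: true,
    validateInput: CreateNoteInput,
    auditLog: true,
    cacheControl: "no-store",
  }
);
```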
@@ -137,7 +137,7 @@ export function stopOptimizedBatchScheduler(): void {
     { task: retryFailedTask, name: "retryFailedTask" },
   ];

-  for (const { task, name } of tasks) {
+  for (const { task } of tasks) {
     if (task) {
       task.stop();
       task.destroy();
@@ -169,6 +169,10 @@ const ConfigSchema = z.object({

 export type AppConfig = z.infer<typeof ConfigSchema>;

+type DeepPartial<T> = {
+  [P in keyof T]?: T[P] extends object ? DeepPartial<T[P]> : T[P];
+};
+
 /**
  * Configuration provider class
  */
@@ -230,8 +234,8 @@ class ConfigProvider {
   /**
    * Get environment-specific configuration
    */
-  forEnvironment(env: Environment): Partial<AppConfig> {
-    const overrides: Record<Environment, any> = {
+  forEnvironment(env: Environment): DeepPartial<AppConfig> {
+    const overrides: Record<Environment, DeepPartial<AppConfig>> = {
       development: {
         app: {
           logLevel: "debug",
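The `DeepPartial` helper matters here because `Partial<AppConfig>` only makes the top-level keys optional: an override such as `{ app: { logLevel: "debug" } }` would still have to supply every other field of `app`. A quick illustration:

```typescript
// With DeepPartial, nested override objects may omit sibling fields.
const devOverride: DeepPartial<AppConfig> = {
  app: { logLevel: "debug" }, // OK: name, version, port, ... stay optional
};

// The equivalent value typed as Partial<AppConfig> would fail to compile,
// because `app`, once present, must be a complete app config object.
```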
@@ -290,6 +294,169 @@ class ConfigProvider {
     return overrides[env] || {};
   }

+  /**
+   * Extract app configuration from environment
+   */
+  private extractAppConfig(env: NodeJS.ProcessEnv, environment: Environment) {
+    return {
+      name: env.APP_NAME || "LiveDash",
+      version: env.APP_VERSION || "1.0.0",
+      environment,
+      baseUrl: env.NEXTAUTH_URL || "http://localhost:3000",
+      port: Number.parseInt(env.PORT || "3000", 10),
+      logLevel:
+        (env.LOG_LEVEL as "debug" | "info" | "warn" | "error") || "info",
+      features: {
+        enableMetrics: env.ENABLE_METRICS !== "false",
+        enableAnalytics: env.ENABLE_ANALYTICS !== "false",
+        enableCaching: env.ENABLE_CACHING !== "false",
+        enableCompression: env.ENABLE_COMPRESSION !== "false",
+      },
+    };
+  }
+
+  /**
+   * Extract database configuration from environment
+   */
+  private extractDatabaseConfig(env: NodeJS.ProcessEnv) {
+    return {
+      url: env.DATABASE_URL || "",
+      directUrl: env.DATABASE_URL_DIRECT,
+      maxConnections: Number.parseInt(env.DB_MAX_CONNECTIONS || "10", 10),
+      connectionTimeout: Number.parseInt(
+        env.DB_CONNECTION_TIMEOUT || "30000",
+        10
+      ),
+      queryTimeout: Number.parseInt(env.DB_QUERY_TIMEOUT || "60000", 10),
+      retryAttempts: Number.parseInt(env.DB_RETRY_ATTEMPTS || "3", 10),
+      retryDelay: Number.parseInt(env.DB_RETRY_DELAY || "1000", 10),
+    };
+  }
+
+  /**
+   * Extract auth configuration from environment
+   */
+  private extractAuthConfig(env: NodeJS.ProcessEnv) {
+    return {
+      secret: env.NEXTAUTH_SECRET || "",
+      url: env.NEXTAUTH_URL || "http://localhost:3000",
+      sessionMaxAge: Number.parseInt(env.AUTH_SESSION_MAX_AGE || "86400", 10),
+      providers: {
+        credentials: env.AUTH_CREDENTIALS_ENABLED !== "false",
+        github: env.AUTH_GITHUB_ENABLED === "true",
+        google: env.AUTH_GOOGLE_ENABLED === "true",
+      },
+    };
+  }
+
+  /**
+   * Extract security configuration from environment
+   */
+  private extractSecurityConfig(env: NodeJS.ProcessEnv) {
+    return {
+      csp: {
+        enabled: env.CSP_ENABLED !== "false",
+        reportUri: env.CSP_REPORT_URI,
+        reportOnly: env.CSP_REPORT_ONLY === "true",
+      },
+      csrf: {
+        enabled: env.CSRF_ENABLED !== "false",
+        tokenExpiry: Number.parseInt(env.CSRF_TOKEN_EXPIRY || "3600", 10),
+      },
+      rateLimit: {
+        enabled: env.RATE_LIMIT_ENABLED !== "false",
+        windowMs: Number.parseInt(env.RATE_LIMIT_WINDOW_MS || "900000", 10),
+        maxRequests: Number.parseInt(env.RATE_LIMIT_MAX_REQUESTS || "100", 10),
+      },
+      audit: {
+        enabled: env.AUDIT_ENABLED !== "false",
+        retentionDays: Number.parseInt(env.AUDIT_RETENTION_DAYS || "90", 10),
+        bufferSize: Number.parseInt(env.AUDIT_BUFFER_SIZE || "1000", 10),
+      },
+    };
+  }
+
+  /**
+   * Extract OpenAI configuration from environment
+   */
+  private extractOpenAIConfig(env: NodeJS.ProcessEnv) {
+    return {
+      apiKey: env.OPENAI_API_KEY || "",
+      organization: env.OPENAI_ORGANIZATION,
+      mockMode: env.OPENAI_MOCK_MODE === "true",
+      defaultModel: env.OPENAI_DEFAULT_MODEL || "gpt-3.5-turbo",
+      maxTokens: Number.parseInt(env.OPENAI_MAX_TOKENS || "1000", 10),
+      temperature: Number.parseFloat(env.OPENAI_TEMPERATURE || "0.1"),
+      batchConfig: {
+        enabled: env.OPENAI_BATCH_ENABLED !== "false",
+        maxRequestsPerBatch: Number.parseInt(
+          env.OPENAI_BATCH_MAX_REQUESTS || "1000",
+          10
+        ),
+        statusCheckInterval: Number.parseInt(
+          env.OPENAI_BATCH_STATUS_INTERVAL || "60000",
+          10
+        ),
+        maxTimeout: Number.parseInt(
+          env.OPENAI_BATCH_MAX_TIMEOUT || "86400000",
+          10
+        ),
+      },
+    };
+  }
+
+  /**
+   * Extract scheduler configuration from environment
+   */
+  private extractSchedulerConfig(env: NodeJS.ProcessEnv) {
+    return {
+      enabled: env.SCHEDULER_ENABLED !== "false",
+      csvImport: {
+        enabled: env.CSV_IMPORT_SCHEDULER_ENABLED !== "false",
+        interval: env.CSV_IMPORT_INTERVAL || "*/5 * * * *",
+      },
+      importProcessor: {
+        enabled: env.IMPORT_PROCESSOR_ENABLED !== "false",
+        interval: env.IMPORT_PROCESSOR_INTERVAL || "*/2 * * * *",
+      },
+      sessionProcessor: {
+        enabled: env.SESSION_PROCESSOR_ENABLED !== "false",
+        interval: env.SESSION_PROCESSOR_INTERVAL || "*/3 * * * *",
+        batchSize: Number.parseInt(
+          env.SESSION_PROCESSOR_BATCH_SIZE || "50",
+          10
+        ),
+      },
+      batchProcessor: {
+        enabled: env.BATCH_PROCESSOR_ENABLED !== "false",
+        createInterval: env.BATCH_CREATE_INTERVAL || "*/5 * * * *",
+        statusInterval: env.BATCH_STATUS_INTERVAL || "*/2 * * * *",
+        resultInterval: env.BATCH_RESULT_INTERVAL || "*/1 * * * *",
+      },
+    };
+  }
+
+  /**
+   * Extract email configuration from environment
+   */
+  private extractEmailConfig(env: NodeJS.ProcessEnv) {
+    return {
+      enabled: env.EMAIL_ENABLED === "true",
+      smtp: {
+        host: env.SMTP_HOST,
+        port: Number.parseInt(env.SMTP_PORT || "587", 10),
+        secure: env.SMTP_SECURE === "true",
+        user: env.SMTP_USER,
+        password: env.SMTP_PASSWORD,
+      },
+      from: env.EMAIL_FROM || "noreply@livedash.com",
+      templates: {
+        passwordReset: env.EMAIL_TEMPLATE_PASSWORD_RESET || "password-reset",
+        userInvitation: env.EMAIL_TEMPLATE_USER_INVITATION || "user-invitation",
+      },
+    };
+  }
+
   /**
    * Extract configuration from environment variables
    */
@@ -298,130 +465,13 @@ class ConfigProvider {
     const environment = (env.NODE_ENV as Environment) || "development";

     return {
-      app: {
-        name: env.APP_NAME || "LiveDash",
-        version: env.APP_VERSION || "1.0.0",
-        environment,
-        baseUrl: env.NEXTAUTH_URL || "http://localhost:3000",
-        port: Number.parseInt(env.PORT || "3000", 10),
-        logLevel: (env.LOG_LEVEL as any) || "info",
-        features: {
-          enableMetrics: env.ENABLE_METRICS !== "false",
-          enableAnalytics: env.ENABLE_ANALYTICS !== "false",
-          enableCaching: env.ENABLE_CACHING !== "false",
-          enableCompression: env.ENABLE_COMPRESSION !== "false",
-        },
-      },
-      database: {
-        url: env.DATABASE_URL || "",
-        directUrl: env.DATABASE_URL_DIRECT,
-        maxConnections: Number.parseInt(env.DB_MAX_CONNECTIONS || "10", 10),
-        connectionTimeout: Number.parseInt(
-          env.DB_CONNECTION_TIMEOUT || "30000",
-          10
-        ),
-        queryTimeout: Number.parseInt(env.DB_QUERY_TIMEOUT || "60000", 10),
-        retryAttempts: Number.parseInt(env.DB_RETRY_ATTEMPTS || "3", 10),
-        retryDelay: Number.parseInt(env.DB_RETRY_DELAY || "1000", 10),
-      },
-      auth: {
-        secret: env.NEXTAUTH_SECRET || "",
-        url: env.NEXTAUTH_URL || "http://localhost:3000",
-        sessionMaxAge: Number.parseInt(env.AUTH_SESSION_MAX_AGE || "86400", 10),
-        providers: {
-          credentials: env.AUTH_CREDENTIALS_ENABLED !== "false",
-          github: env.AUTH_GITHUB_ENABLED === "true",
-          google: env.AUTH_GOOGLE_ENABLED === "true",
-        },
-      },
-      security: {
-        csp: {
-          enabled: env.CSP_ENABLED !== "false",
-          reportUri: env.CSP_REPORT_URI,
-          reportOnly: env.CSP_REPORT_ONLY === "true",
-        },
-        csrf: {
-          enabled: env.CSRF_ENABLED !== "false",
-          tokenExpiry: Number.parseInt(env.CSRF_TOKEN_EXPIRY || "3600", 10),
-        },
-        rateLimit: {
-          enabled: env.RATE_LIMIT_ENABLED !== "false",
-          windowMs: Number.parseInt(env.RATE_LIMIT_WINDOW_MS || "900000", 10),
-          maxRequests: Number.parseInt(
-            env.RATE_LIMIT_MAX_REQUESTS || "100",
-            10
-          ),
-        },
-        audit: {
-          enabled: env.AUDIT_ENABLED !== "false",
-          retentionDays: Number.parseInt(env.AUDIT_RETENTION_DAYS || "90", 10),
-          bufferSize: Number.parseInt(env.AUDIT_BUFFER_SIZE || "1000", 10),
-        },
-      },
-      openai: {
-        apiKey: env.OPENAI_API_KEY || "",
-        organization: env.OPENAI_ORGANIZATION,
-        mockMode: env.OPENAI_MOCK_MODE === "true",
-        defaultModel: env.OPENAI_DEFAULT_MODEL || "gpt-3.5-turbo",
-        maxTokens: Number.parseInt(env.OPENAI_MAX_TOKENS || "1000", 10),
-        temperature: Number.parseFloat(env.OPENAI_TEMPERATURE || "0.1"),
-        batchConfig: {
-          enabled: env.OPENAI_BATCH_ENABLED !== "false",
-          maxRequestsPerBatch: Number.parseInt(
-            env.OPENAI_BATCH_MAX_REQUESTS || "1000",
-            10
-          ),
-          statusCheckInterval: Number.parseInt(
-            env.OPENAI_BATCH_STATUS_INTERVAL || "60000",
-            10
-          ),
-          maxTimeout: Number.parseInt(
-            env.OPENAI_BATCH_MAX_TIMEOUT || "86400000",
-            10
-          ),
-        },
-      },
-      scheduler: {
-        enabled: env.SCHEDULER_ENABLED !== "false",
-        csvImport: {
-          enabled: env.CSV_IMPORT_SCHEDULER_ENABLED !== "false",
-          interval: env.CSV_IMPORT_INTERVAL || "*/5 * * * *",
-        },
-        importProcessor: {
-          enabled: env.IMPORT_PROCESSOR_ENABLED !== "false",
-          interval: env.IMPORT_PROCESSOR_INTERVAL || "*/2 * * * *",
-        },
-        sessionProcessor: {
-          enabled: env.SESSION_PROCESSOR_ENABLED !== "false",
-          interval: env.SESSION_PROCESSOR_INTERVAL || "*/3 * * * *",
-          batchSize: Number.parseInt(
-            env.SESSION_PROCESSOR_BATCH_SIZE || "50",
-            10
-          ),
-        },
-        batchProcessor: {
-          enabled: env.BATCH_PROCESSOR_ENABLED !== "false",
-          createInterval: env.BATCH_CREATE_INTERVAL || "*/5 * * * *",
-          statusInterval: env.BATCH_STATUS_INTERVAL || "*/2 * * * *",
-          resultInterval: env.BATCH_RESULT_INTERVAL || "*/1 * * * *",
-        },
-      },
-      email: {
-        enabled: env.EMAIL_ENABLED === "true",
-        smtp: {
-          host: env.SMTP_HOST,
-          port: Number.parseInt(env.SMTP_PORT || "587", 10),
-          secure: env.SMTP_SECURE === "true",
-          user: env.SMTP_USER,
-          password: env.SMTP_PASSWORD,
-        },
-        from: env.EMAIL_FROM || "noreply@livedash.com",
-        templates: {
-          passwordReset: env.EMAIL_TEMPLATE_PASSWORD_RESET || "password-reset",
-          userInvitation:
-            env.EMAIL_TEMPLATE_USER_INVITATION || "user-invitation",
-        },
-      },
+      app: this.extractAppConfig(env, environment),
+      database: this.extractDatabaseConfig(env),
+      auth: this.extractAuthConfig(env),
+      security: this.extractSecurityConfig(env),
+      openai: this.extractOpenAIConfig(env),
+      scheduler: this.extractSchedulerConfig(env),
+      email: this.extractEmailConfig(env),
     };
   }
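The extracted methods all repeat one parsing idiom: `Number.parseInt(env.X || "<default>", 10)`. If the pattern keeps growing, a small helper could centralize it and also guard against malformed values, which the `|| "<default>"` fallback alone does not. This is a hypothetical refinement, not something the commit adds:

```typescript
// Hypothetical helper: one place for the repeated parse-with-fallback idiom.
function envInt(value: string | undefined, fallback: number): number {
  const parsed = Number.parseInt(value ?? "", 10);
  return Number.isNaN(parsed) ? fallback : parsed;
}

// e.g. inside extractDatabaseConfig:
//   maxConnections: envInt(env.DB_MAX_CONNECTIONS, 10),
```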
@@ -191,7 +191,7 @@ export const DynamicAuditLogsPanel = createDynamicComponent(

 // React wrapper for React.lazy with Suspense
 export function createLazyComponent<
-  T extends Record<string, any> = Record<string, any>,
+  T extends Record<string, unknown> = Record<string, unknown>,
 >(
   importFunc: () => Promise<{ default: ComponentType<T> }>,
   fallback: ComponentType = LoadingSpinner
@@ -15,6 +15,26 @@ import {
   type MockResponseType,
 } from "./openai-responses";

+interface ChatCompletionParams {
+  model: string;
+  messages: Array<{ role: string; content: string }>;
+  temperature?: number;
+  max_tokens?: number;
+  [key: string]: unknown;
+}
+
+interface BatchCreateParams {
+  input_file_id: string;
+  endpoint: string;
+  completion_window: string;
+  metadata?: Record<string, unknown>;
+}
+
+interface FileCreateParams {
+  file: string; // File content as string for mock purposes
+  purpose: string;
+}
+
 interface MockOpenAIConfig {
   enabled: boolean;
   baseDelay: number; // Base delay in ms to simulate API latency
@@ -115,12 +135,9 @@ class OpenAIMockServer {
   /**
    * Mock chat completions endpoint
    */
-  async mockChatCompletion(request: {
-    model: string;
-    messages: Array<{ role: string; content: string }>;
-    temperature?: number;
-    max_tokens?: number;
-  }): Promise<MockChatCompletion> {
+  async mockChatCompletion(
+    request: ChatCompletionParams
+  ): Promise<MockChatCompletion> {
     this.requestCount++;

     await this.simulateDelay();
@@ -172,12 +189,9 @@ class OpenAIMockServer {
   /**
    * Mock batch creation endpoint
    */
-  async mockCreateBatch(request: {
-    input_file_id: string;
-    endpoint: string;
-    completion_window: string;
-    metadata?: Record<string, string>;
-  }): Promise<MockBatchResponse> {
+  async mockCreateBatch(
+    request: BatchCreateParams
+  ): Promise<MockBatchResponse> {
     await this.simulateDelay();

     if (this.shouldSimulateError()) {
@@ -214,10 +228,7 @@ class OpenAIMockServer {
   /**
    * Mock file upload endpoint
    */
-  async mockUploadFile(request: {
-    file: string; // File content
-    purpose: string;
-  }): Promise<{
+  async mockUploadFile(request: FileCreateParams): Promise<{
     id: string;
     object: string;
     purpose: string;
@@ -364,23 +375,42 @@ export const openAIMock = new OpenAIMockServer();
 /**
  * Drop-in replacement for OpenAI client that uses mocks when enabled
  */
-export class MockOpenAIClient {
-  private realClient: unknown;
+interface OpenAIClient {
+  chat: {
+    completions: {
+      create: (params: ChatCompletionParams) => Promise<MockChatCompletion>;
+    };
+  };
+  batches: {
+    create: (params: BatchCreateParams) => Promise<MockBatchResponse>;
+    retrieve: (batchId: string) => Promise<MockBatchResponse>;
+  };
+  files: {
+    create: (params: FileCreateParams) => Promise<{
+      id: string;
+      object: string;
+      purpose: string;
+      filename: string;
+    }>;
+    content: (fileId: string) => Promise<string>;
+  };
+}

-  constructor(realClient: unknown) {
+export class MockOpenAIClient {
+  private realClient: OpenAIClient;
+
+  constructor(realClient: OpenAIClient) {
     this.realClient = realClient;
   }

   get chat() {
     return {
       completions: {
-        create: async (params: any) => {
+        create: async (params: ChatCompletionParams) => {
           if (openAIMock.isEnabled()) {
-            return openAIMock.mockChatCompletion(params as any);
+            return openAIMock.mockChatCompletion(params);
           }
-          return (this.realClient as any).chat.completions.create(
-            params as any
-          );
+          return this.realClient.chat.completions.create(params);
         },
       },
     };
@@ -388,34 +418,34 @@ export class MockOpenAIClient {

   get batches() {
     return {
-      create: async (params: any) => {
+      create: async (params: BatchCreateParams) => {
         if (openAIMock.isEnabled()) {
-          return openAIMock.mockCreateBatch(params as any);
+          return openAIMock.mockCreateBatch(params);
         }
-        return (this.realClient as any).batches.create(params as any);
+        return this.realClient.batches.create(params);
       },
       retrieve: async (batchId: string) => {
         if (openAIMock.isEnabled()) {
           return openAIMock.mockGetBatch(batchId);
         }
-        return (this.realClient as any).batches.retrieve(batchId);
+        return this.realClient.batches.retrieve(batchId);
       },
     };
   }

   get files() {
     return {
-      create: async (params: any) => {
+      create: async (params: FileCreateParams) => {
         if (openAIMock.isEnabled()) {
           return openAIMock.mockUploadFile(params);
         }
-        return (this.realClient as any).files.create(params);
+        return this.realClient.files.create(params);
       },
       content: async (fileId: string) => {
         if (openAIMock.isEnabled()) {
           return openAIMock.mockGetFileContent(fileId);
         }
-        return (this.realClient as any).files.content(fileId);
+        return this.realClient.files.content(fileId);
       },
     };
   }
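A hedged usage sketch: the wrapper is constructed around a real SDK client and transparently routes calls to the mock server when mock mode is on (per the config code above, `OPENAI_MOCK_MODE="true"`). The cast is shown because the structural `OpenAIClient` interface is narrower than the real SDK type; whether the two line up exactly is not verified here.

```typescript
import OpenAI from "openai";

// Sketch: wrap the real client; calls are served by openAIMock when enabled.
const realClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const client = new MockOpenAIClient(realClient as unknown as OpenAIClient);

const completion = await client.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "ping" }],
});
```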
@@ -181,7 +181,8 @@ class PerformanceMonitor {
     // Placeholder for analytics integration
     // You could send this to Google Analytics, Vercel Analytics, etc.
     if (typeof window !== "undefined" && "gtag" in window) {
-      (window as any).gtag("event", "core_web_vital", {
+      const gtag = (window as { gtag?: (...args: unknown[]) => void }).gtag;
+      gtag?.("event", "core_web_vital", {
         name: metricName,
         value: Math.round(value),
         metric_rating: this.getRating(metricName, value),
@@ -169,7 +169,7 @@ export class PerformanceCache<K extends {} = string, V = unknown> {
   /**
    * Memoize a function with caching
    */
-  memoize<Args extends any[], Return extends V>(
+  memoize<Args extends unknown[], Return extends V>(
     fn: (...args: Args) => Promise<Return> | Return,
     keyGenerator?: (...args: Args) => K,
     ttl?: number
@@ -421,7 +421,7 @@ export class CacheUtils {
   /**
    * Cache the result of an async function
    */
-  static cached<T extends any[], R>(
+  static cached<T extends unknown[], R>(
     cacheName: string,
     fn: (...args: T) => Promise<R>,
     options: CacheOptions & {
@@ -155,34 +155,36 @@ export class RequestDeduplicator {
     }> = [];

     // Create the main promise
-    const promise = new Promise<T>(async (resolve, reject) => {
+    const promise = new Promise<T>((resolve, reject) => {
       resolvers.push({ resolve, reject });

-      try {
-        const result = await fn();
-
-        // Cache the result
-        if (options.ttl && options.ttl > 0) {
-          this.results.set(key, {
-            value: result,
-            timestamp: Date.now(),
-            ttl: options.ttl,
-          });
-        }
-
-        // Resolve all waiting promises
-        resolvers.forEach(({ resolve: res }) => res(result));
-      } catch (error) {
-        this.stats.errors++;
-
-        // Reject all waiting promises
-        const errorToReject =
-          error instanceof Error ? error : new Error(String(error));
-        resolvers.forEach(({ reject: rej }) => rej(errorToReject));
-      } finally {
-        // Clean up pending request
-        this.pendingRequests.delete(key);
-      }
+      // Execute the async function
+      fn()
+        .then((result) => {
+          // Cache the result
+          if (options.ttl && options.ttl > 0) {
+            this.results.set(key, {
+              value: result,
+              timestamp: Date.now(),
+              ttl: options.ttl,
+            });
+          }
+
+          // Resolve all waiting promises
+          resolvers.forEach(({ resolve: res }) => res(result));
+        })
+        .catch((error) => {
+          this.stats.errors++;
+
+          // Reject all waiting promises
+          const errorToReject =
+            error instanceof Error ? error : new Error(String(error));
+          resolvers.forEach(({ reject: rej }) => rej(errorToReject));
+        })
+        .finally(() => {
+          // Clean up pending request
+          this.pendingRequests.delete(key);
+        });
     });

     // Set up timeout if specified
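This rewrite addresses Biome's `noAsyncPromiseExecutor` rule. An `async` executor returns its own implicit promise that nothing observes, so an exception thrown inside it neither rejects the outer promise nor reaches any handler. A minimal illustration of the hazard and the safe shape:

```typescript
async function fn(): Promise<number> {
  throw new Error("boom");
}

// Flagged: the throw rejects the executor's own implicit promise; `bad`
// stays pending forever and the error surfaces only as an unhandled rejection.
const bad = new Promise<number>(async (resolve) => {
  resolve(await fn());
});

// Safe: keep the executor synchronous and chain off the async work instead.
const good = new Promise<number>((resolve, reject) => {
  fn().then(resolve).catch(reject);
});
```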
@@ -167,7 +167,11 @@ function startMonitoringIfEnabled(enabled?: boolean): void {
 /**
  * Helper function to record request metrics if enabled
  */
-function recordRequestIfEnabled(timer: ReturnType<typeof PerformanceUtils.createTimer>, isError: boolean, enabled?: boolean): void {
+function recordRequestIfEnabled(
+  timer: ReturnType<typeof PerformanceUtils.createTimer>,
+  isError: boolean,
+  enabled?: boolean
+): void {
   if (enabled) {
     performanceMonitor.recordRequest(timer.end(), isError);
   }
@@ -185,7 +189,7 @@ async function executeRequestWithOptimizations(
   if (opts.cache?.enabled || opts.deduplication?.enabled) {
     return executeWithCacheOrDeduplication(req, opts, originalHandler);
   }

   // Direct execution with monitoring
   const { result } = await PerformanceUtils.measureAsync(routeName, () =>
     originalHandler(req)
@@ -216,18 +220,16 @@ async function executeWithCacheOrDeduplication(
       opts.cache.ttl
     );
   }

   // Deduplication only
   const deduplicator =
     deduplicators[
       opts.deduplication?.deduplicatorName as keyof typeof deduplicators
     ] || deduplicators.api;

-  return deduplicator.execute(
-    cacheKey,
-    () => originalHandler(req),
-    { ttl: opts.deduplication?.ttl }
-  );
+  return deduplicator.execute(cacheKey, () => originalHandler(req), {
+    ttl: opts.deduplication?.ttl,
+  });
 }
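For context, a sketch of the call shape the reflowed `deduplicator.execute(...)` uses; `fetchUser` is invented for the example:

```typescript
// Concurrent callers sharing a key share one in-flight request; with a ttl,
// the settled value is also served from the result cache for that window.
declare function fetchUser(id: number): Promise<{ id: number }>;

const [a, b] = await Promise.all([
  deduplicators.api.execute("user:42", () => fetchUser(42), { ttl: 5_000 }),
  deduplicators.api.execute("user:42", () => fetchUser(42), { ttl: 5_000 }),
]);
// fetchUser ran once; a and b resolve to the same result.
```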
@@ -247,7 +249,12 @@ export function enhanceAPIRoute(

     try {
       startMonitoringIfEnabled(opts.monitoring?.enabled);
-      const response = await executeRequestWithOptimizations(req, opts, routeName, originalHandler);
+      const response = await executeRequestWithOptimizations(
+        req,
+        opts,
+        routeName,
+        originalHandler
+      );
       recordRequestIfEnabled(timer, false, opts.monitoring?.recordRequests);
       return response;
     } catch (error) {
@@ -263,8 +270,10 @@ export function PerformanceEnhanced(
 export function PerformanceEnhanced(
   options: PerformanceIntegrationOptions = {}
 ) {
-  return <T extends new (...args: any[]) => {}>(constructor: T) =>
-    class extends constructor {
+  // biome-ignore lint/suspicious/noExplicitAny: Required for mixin class pattern - TypeScript requires any[] for constructor parameters in mixins
+  return <T extends new (...args: any[]) => {}>(Constructor: T) =>
+    class extends Constructor {
       // biome-ignore lint/suspicious/noExplicitAny: Required for mixin class pattern - TypeScript requires any[] for constructor parameters in mixins
       constructor(...args: any[]) {
         super(...args);
@@ -279,7 +288,7 @@ export function PerformanceEnhanced(
         if (typeof originalMethod === "function") {
           (this as Record<string, unknown>)[methodName] =
             enhanceServiceMethod(
-              `${constructor.name}.${methodName}`,
+              `${Constructor.name}.${methodName}`,
               originalMethod.bind(this),
               options
             );
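Applying the mixin decorator might look like the following; the class and method names are invented, and the sketch assumes a TypeScript setup with legacy `experimentalDecorators` enabled, which this pattern requires:

```typescript
// Illustrative: every method on the instance gets wrapped by
// enhanceServiceMethod under the metric name "ReportService.<method>".
@PerformanceEnhanced({ monitoring: { enabled: true } })
class ReportService {
  async generate(companyId: string) {
    return { companyId };
  }
}
```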
@@ -777,9 +777,8 @@ export class PerformanceUtils {
     }

     descriptor.value = async function (...args: unknown[]) {
-      const { result } = await PerformanceUtils.measureAsync(
-        metricName,
-        () => originalMethod.apply(this, args)
+      const { result } = await PerformanceUtils.measureAsync(metricName, () =>
+        originalMethod.apply(this, args)
       );
       return result;
     };
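The helper behind this decorator can also be called directly; a sketch (the metric name and `generateReport` are illustrative):

```typescript
// measureAsync times the callback under the given metric name and returns
// the callback's result in the `result` field.
declare function generateReport(): Promise<{ rows: number }>;

const { result } = await PerformanceUtils.measureAsync(
  "reports.generate",
  () => generateReport()
);
```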
package.json (18 changed lines)

@@ -8,14 +8,18 @@
     "build:analyze": "ANALYZE=true next build",
     "dev": "pnpm exec tsx server.ts",
     "dev:next-only": "next dev --turbopack",
-    "format": "npx prettier --write .",
-    "format:check": "npx prettier --check .",
+    "format": "pnpm format:prettier && pnpm format:biome",
+    "format:check": "pnpm format:check-prettier && pnpm format:check-biome",
+    "format:biome": "biome format --write",
+    "format:check-biome": "biome format",
+    "format:prettier": "npx prettier --write .",
+    "format:check-prettier": "npx prettier --check .",
     "lint": "next lint",
     "lint:fix": "npx eslint --fix",
-    "biome:check": "biome check .",
-    "biome:fix": "biome check --write .",
-    "biome:format": "biome format --write .",
-    "biome:lint": "biome lint .",
+    "biome:check": "biome check",
+    "biome:fix": "biome check --write",
+    "biome:format": "biome format --write",
+    "biome:lint": "biome lint",
     "prisma:generate": "prisma generate",
     "prisma:migrate": "prisma migrate dev",
     "prisma:seed": "pnpm exec tsx prisma/seed.ts",
@@ -37,6 +41,8 @@
     "test:csp:full": "pnpm test:csp && pnpm test:csp:validate && pnpm test:vitest tests/unit/enhanced-csp.test.ts tests/integration/csp-middleware.test.ts tests/integration/csp-report-endpoint.test.ts",
     "lint:md": "markdownlint-cli2 \"**/*.md\" \"!.trunk/**\" \"!.venv/**\" \"!node_modules/**\"",
     "lint:md:fix": "markdownlint-cli2 --fix \"**/*.md\" \"!.trunk/**\" \"!.venv/**\" \"!node_modules/**\"",
+    "lint:ci-warning": "biome ci --diagnostic-level=warn",
+    "lint:ci-error": "biome ci --diagnostic-level=error",
     "migration:backup": "pnpm exec tsx scripts/migration/backup-database.ts full",
     "migration:backup:schema": "pnpm exec tsx scripts/migration/backup-database.ts schema",
     "migration:backup:data": "pnpm exec tsx scripts/migration/backup-database.ts data",
pnpm-lock.yaml (generated, 10139 changed lines)

File diff suppressed because it is too large
@@ -15,10 +15,10 @@ The migration will be performed incrementally to minimize disruption. We will st

### Why tRPC?

- **End-to-End Type Safety:** Eliminates a class of runtime errors by ensuring the client and server conform to the same data contracts. TypeScript errors will appear at build time if the client and server are out of sync (a minimal sketch follows this list).
- **Improved Developer Experience:** Provides autocompletion for API procedures and their data types directly in the editor.
- **Simplified Data Fetching:** Replaces manual `fetch` calls and `useEffect` hooks with clean, declarative tRPC hooks (`useQuery`, `useMutation`).
- **No Code Generation:** Leverages TypeScript inference, avoiding a separate schema definition or code generation step.
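A minimal sketch of what the type-safety claim looks like in practice (names illustrative, tRPC v10-style API assumed):

```typescript
import { initTRPC } from "@trpc/server";
import { z } from "zod";

const t = initTRPC.create();

// Server: the procedure's output type is inferred; no schema file, no codegen.
export const appRouter = t.router({
  userById: t.procedure
    .input(z.object({ id: z.string() }))
    .query(({ input }) => ({ id: input.id, name: "Ada" })),
});

export type AppRouter = typeof appRouter;

// Client: `data` is typed { id: string; name: string } | undefined, and a
// mismatched input fails at build time:
//   const { data } = trpc.userById.useQuery({ id: "1" });
```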
### Integration Strategy: Gradual Adoption

@@ -240,11 +240,11 @@ export default function UsersPage() {

## 4. Next Steps & Future Enhancements

- **Authentication & Context:** Implement a `createContext` function to pass session data (e.g., from NextAuth.js) to your tRPC procedures. This will allow for protected procedures (see the sketch after this list).
- **Input Validation:** Extensively use `zod` in the `.input()` part of procedures to validate all incoming data.
- **Error Handling:** Implement robust error handling on both the client and server.
- **Mutations:** Begin migrating `POST`, `PUT`, and `DELETE` endpoints to tRPC mutations.
- **Optimistic UI:** For mutations, implement optimistic updates to provide a faster user experience.
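A hedged sketch combining the first two items, again assuming a tRPC v10-style API; the context shape and router names are illustrative:

```typescript
import { initTRPC, TRPCError } from "@trpc/server";
import { z } from "zod";

type Context = { session: { user: { id: string } } | null };

const t = initTRPC.context<Context>().create();

// Middleware: reject unauthenticated calls before the procedure runs.
const protectedProcedure = t.procedure.use(({ ctx, next }) => {
  if (!ctx.session) throw new TRPCError({ code: "UNAUTHORIZED" });
  return next({ ctx: { session: ctx.session } });
});

export const userRouter = t.router({
  rename: protectedProcedure
    .input(z.object({ name: z.string().min(1) }))
    .mutation(({ ctx, input }) => ({
      id: ctx.session.user.id,
      name: input.name,
    })),
});
```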
---
@@ -6,33 +6,33 @@ This directory contains comprehensive migration scripts for deploying the new ar

### 1. Database Migrations

- `01-schema-migrations.sql` - Prisma database schema migrations
- `02-data-migrations.sql` - Data transformation scripts
- `validate-database.ts` - Database validation and health checks

### 2. Environment Configuration

- `environment-migration.ts` - Environment variable migration guide
- `config-validator.ts` - Configuration validation scripts

### 3. Deployment Scripts

- `deploy.ts` - Main deployment orchestrator
- `pre-deployment-checks.ts` - Pre-deployment validation
- `post-deployment-validation.ts` - Post-deployment verification
- `rollback.ts` - Rollback procedures

### 4. Health Checks

- `health-checks.ts` - Comprehensive system health validation
- `trpc-endpoint-tests.ts` - tRPC endpoint validation
- `batch-processing-tests.ts` - Batch processing system tests

### 5. Migration Utilities

- `backup-database.ts` - Database backup procedures
- `restore-database.ts` - Database restore procedures
- `migration-logger.ts` - Migration logging utilities

## Usage
@@ -94,9 +94,9 @@ The migration implements a blue-green deployment strategy:

## Safety Features

- Automatic database backups before migration
- Rollback scripts for quick recovery
- Health checks at each stage
- Progressive feature enablement
- Comprehensive logging and monitoring
- Backwards compatibility maintained during migration
@@ -450,7 +450,7 @@ Examples:
 `);
     process.exit(1);
   }
-}
+};

 runCommand()
   .then((result) => {
@@ -517,7 +517,7 @@ if (import.meta.url === `file://${process.argv[1]}`) {
   }

   return result;
-}
+};

 runTests()
   .then((result) => {