mirror of
https://github.com/kjanat/livedash-node.git
synced 2026-01-16 15:52:10 +01:00
fix: comprehensive security and type improvements from PR #20 review
Security Enhancements:
- Implemented proper rate limiting with automatic cleanup for /register and /forgot-password endpoints
- Added memory usage protection with a MAX_ENTRIES limit (10000)
- Fixed rate limiter memory leaks by adding cleanup intervals
- Improved IP extraction with x-real-ip and x-client-ip header support

Code Quality Improvements:
- Refactored ProcessingStatusManager from individual functions to a class-based architecture
- Maintained backward compatibility with a singleton instance pattern
- Fixed TypeScript strict mode violations across the codebase
- Resolved all build errors and type mismatches

UI Component Fixes:
- Removed unused chart components (Charts.tsx, DonutChart.tsx)
- Fixed calendar component type issues by removing unused custom implementations
- Resolved theme provider type imports
- Fixed confetti component default options handling
- Corrected pointer component coordinate type definitions

Type System Improvements:
- Extended NextAuth types to support dual auth systems (regular and platform users)
- Fixed nullable type handling throughout the codebase
- Resolved Prisma JSON field type compatibility issues
- Corrected SessionMessage and ImportRecord interface definitions
- Fixed ES2015 iteration compatibility issues

Database & Performance:
- Updated database pool configuration for Prisma adapter compatibility
- Fixed pagination response structure in user management endpoints
- Improved error handling with proper error class usage

Testing & Build:
- All TypeScript compilation errors resolved
- ESLint warnings remain, but no errors
- Build completes successfully with proper static generation
refactor/ai-session-processing-pipeline.md (new file, +212)
@@ -0,0 +1,212 @@
> Okay, write the complete prompt for Claude Code to refactor the current import processing and implement this API-type implementation. I want this to be behind a **/legacy/* endpoint name to denote that, in the future, I want to accept individual/batched chat session logs through an API rather than having to import and process them on my end.

✦ Of course. This is a significant but valuable refactoring project. A detailed, well-structured prompt is key to getting a good result from a code-focused AI like Claude.

Here is the complete prompt you can provide. It's designed to be comprehensive, giving the AI all the context, architectural guidance, and specific implementation details it needs to perform the refactor correctly.

---

Prompt for Claude Code

Project: LiveDash-Node (livedash-node-max-branch)
Objective: Refactor our AI session processing pipeline to use the OpenAI Batch API for cost savings and higher throughput. Implement a new internal admin API under /api/admin/legacy/* to monitor and manage this new asynchronous workflow.

Context:
Our current system processes AI analysis requests (sentiment analysis, summarization, etc.) in a synchronous or simple asynchronous loop, likely via processingScheduler.ts. This is inefficient and costly at scale. We are moving to OpenAI's Batch API, which is fully asynchronous and requires a stateful, multi-stage processing architecture.

The term "legacy" in the API path /api/admin/legacy/* is intentional. It refers to the fact that our current method of getting data (CSV imports) is the "legacy" workflow. In the future, we plan to introduce a new API for clients to submit session data directly. This admin API is for monitoring the processing of data from our legacy import system.

Please follow the phased plan below precisely.
---

Phase 1: Database Schema Changes (`prisma/schema.prisma`)

First, we need to update our database schema to track the state of batch jobs and the individual requests within them.

1. Add the `AIBatchRequest` model and `AIBatchRequestStatus` enum. This table will track the status of each batch job submitted to OpenAI.
```prisma
// Add this new model to your schema.prisma

model AIBatchRequest {
  id        String  @id @default(cuid())
  companyId String
  company   Company @relation(fields: [companyId], references: [id])

  // OpenAI-specific IDs
  openaiBatchId String  @unique
  inputFileId   String
  outputFileId  String?
  errorFileId   String?

  // Our internal status tracking
  status AIBatchRequestStatus @default(PENDING)

  // Timestamps
  createdAt   DateTime  @default(now())
  completedAt DateTime?
  processedAt DateTime? // When we finished processing the results

  // Relation to the individual requests included in this batch
  processingRequests AIProcessingRequest[]

  @@index([companyId, status])
}

enum AIBatchRequestStatus {
  PENDING     // We have created the batch in our DB and are preparing to send it to OpenAI
  UPLOADING   // Uploading the .jsonl file
  VALIDATING  // OpenAI is validating the file
  IN_PROGRESS // OpenAI is processing the batch
  FINALIZING  // OpenAI is finalizing the results
  COMPLETED   // The OpenAI job is done; results are available for download
  PROCESSED   // We have successfully downloaded and processed all results
  FAILED      // The batch failed validation or expired
  CANCELLED   // The batch was cancelled
}
```
2. Update the `AIProcessingRequest` model and add the `AIRequestStatus` enum. We need to track the state of each individual request as it moves through the batching pipeline.
```prisma
// In your existing AIProcessingRequest model, add the new fields; then add the enum.

model AIProcessingRequest {
  // ... all existing fields (id, sessionId, token counts, etc.)

  // === ADD THESE NEW FIELDS ===
  processingStatus AIRequestStatus @default(PENDING_BATCHING)
  batchId          String?
  batch            AIBatchRequest? @relation(fields: [batchId], references: [id])
  // ============================

  @@index([processingStatus]) // Add this index for efficient querying
}

enum AIRequestStatus {
  PENDING_BATCHING     // Default state: waiting to be picked up by the batch creator
  BATCHING_IN_PROGRESS // Assigned to a batch that is currently running
  PROCESSING_COMPLETE  // The batch finished and we successfully got a result for this request
  PROCESSING_FAILED    // The batch finished but this specific request failed
}
```
After modifying the schema, please run pnpm prisma:generate.

---
Phase 2: Implement the Batch Processing Schedulers

The core of this refactor is to replace the existing logic in lib/processingScheduler.ts with a two-stage scheduler system. You can create new files for this logic (e.g., lib/batchCreator.ts, lib/batchPoller.ts) and integrate them into lib/schedulers.ts.

Scheduler 1: Batch Creation (`lib/batchCreator.ts`)
This scheduler runs periodically (e.g., every 10 minutes) to bundle pending requests into a batch.
* Functionality (a sketch follows this list):
  1. Query the database for AIProcessingRequest records with processingStatus: 'PENDING_BATCHING'.
  2. Group these requests by the AI model they need to use (e.g., gpt-4-turbo). The Batch API requires one model per batch file.
  3. For each model group:
     a. Generate a .jsonl string. Each line must be a valid OpenAI batch request.
     b. Crucially, use our internal `AIProcessingRequest.id` as the `custom_id` in each JSON line. This is how we will map results back.
     c. Upload the .jsonl content to OpenAI using openai.files.create({ file: await toFile(Buffer.from(jsonlContent), 'batch.jsonl'), purpose: 'batch' }).
     d. Create the batch job using openai.batches.create() with the returned input_file_id.
     e. In a single database transaction:
        i. Create a new AIBatchRequest record in our database, storing the openaiBatchId and inputFileId, and setting the initial status to VALIDATING.
        ii. Update all the AIProcessingRequest records included in this batch to set their processingStatus to BATCHING_IN_PROGRESS and link them via the batchId.
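To make the intended flow concrete, here is a minimal sketch of the batch creator. It assumes an `openai` client, the Prisma client exported as `db` from `@/lib/prisma`, and `model`, `requestBody`, and `companyId` fields on `AIProcessingRequest` — those three fields are illustrative assumptions, not confirmed by the schema above.

```typescript
// lib/batchCreator.ts — sketch only; `model`, `requestBody`, and `companyId`
// on AIProcessingRequest are assumed fields, not part of the schema above.
import OpenAI, { toFile } from "openai";
import { db } from "@/lib/prisma";

const openai = new OpenAI();

export async function createPendingBatches(): Promise<void> {
  const pending = await db.aIProcessingRequest.findMany({
    where: { processingStatus: "PENDING_BATCHING" },
  });
  if (pending.length === 0) return;

  // One batch file per model, per step 2 above.
  const byModel = new Map<string, typeof pending>();
  for (const req of pending) {
    byModel.set(req.model, [...(byModel.get(req.model) ?? []), req]);
  }

  for (const [model, requests] of byModel) {
    // Steps 3a/3b: one JSON line per request, custom_id = our internal id.
    const jsonl = requests
      .map((req) =>
        JSON.stringify({
          custom_id: req.id,
          method: "POST",
          url: "/v1/chat/completions",
          body: { model, ...(req.requestBody as object) },
        })
      )
      .join("\n");

    // Steps 3c/3d: upload the file, then create the batch job.
    const inputFile = await openai.files.create({
      file: await toFile(Buffer.from(jsonl), "batch.jsonl"),
      purpose: "batch",
    });
    const batch = await openai.batches.create({
      endpoint: "/v1/chat/completions",
      input_file_id: inputFile.id,
      completion_window: "24h",
    });

    // Step 3e: record the batch and link its requests in one transaction.
    await db.$transaction(async (tx) => {
      const record = await tx.aIBatchRequest.create({
        data: {
          companyId: requests[0].companyId,
          openaiBatchId: batch.id,
          inputFileId: inputFile.id,
          status: "VALIDATING",
        },
      });
      await tx.aIProcessingRequest.updateMany({
        where: { id: { in: requests.map((r) => r.id) } },
        data: { processingStatus: "BATCHING_IN_PROGRESS", batchId: record.id },
      });
    });
  }
}
```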
Scheduler 2: Result Polling (`lib/batchPoller.ts`)
This scheduler runs more frequently (e.g., every 2 minutes) to check for and process completed jobs.

* Functionality (a sketch follows this list):
  1. Query our database for AIBatchRequest records with a status that is still in flight (e.g., VALIDATING, IN_PROGRESS, FINALIZING).
  2. For each active batch, call openai.batches.retrieve(batch.openaiBatchId) to get the latest status from OpenAI.
  3. Update the status of our AIBatchRequest record to match the one from OpenAI.
  4. If a batch's status becomes completed:
     a. Update its status in our DB and store the output_file_id and error_file_id.
     b. Download the content of the output_file_id from OpenAI.
     c. Parse the resulting .jsonl file line by line. For each line:
        i. Use the custom_id to find our original AIProcessingRequest record.
        ii. If the line contains a response, parse the AI content and usage data. Update our AIProcessingRequest record with this data and set its processingStatus to PROCESSING_COMPLETE.
        iii. If the line contains an error, log it and set the processingStatus to PROCESSING_FAILED.
     d. Do the same for the error_file_id if it exists.
     e. Once all results are processed, update the parent AIBatchRequest status to PROCESSED and set its processedAt timestamp.
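And a matching sketch of the poller. The shape of each result line (`custom_id`, `response`, `error`) follows the Batch API's output format; the status mapping is deliberately simplified and ignores edge states such as `expired` and `cancelling`.

```typescript
// lib/batchPoller.ts — sketch only; simplified status handling.
import OpenAI from "openai";
import { db } from "@/lib/prisma";

const openai = new OpenAI();

export async function pollActiveBatches(): Promise<void> {
  const active = await db.aIBatchRequest.findMany({
    where: { status: { in: ["VALIDATING", "IN_PROGRESS", "FINALIZING"] } },
  });

  for (const batch of active) {
    const remote = await openai.batches.retrieve(batch.openaiBatchId);

    if (remote.status !== "completed") {
      // Steps 2-3: mirror OpenAI's lowercase status into our uppercase enum.
      // (A real mapping must also handle `expired` and `cancelling`.)
      await db.aIBatchRequest.update({
        where: { id: batch.id },
        data: { status: remote.status.toUpperCase() as any },
      });
      continue;
    }

    // Steps 4b/4c: download the output file and walk it line by line.
    const output = await (await openai.files.content(remote.output_file_id!)).text();
    for (const line of output.split("\n").filter(Boolean)) {
      const result = JSON.parse(line);
      const ok = result.error == null && result.response?.status_code === 200;
      await db.aIProcessingRequest.update({
        where: { id: result.custom_id }, // custom_id is our AIProcessingRequest.id
        data: {
          processingStatus: ok ? "PROCESSING_COMPLETE" : "PROCESSING_FAILED",
          // Persist the parsed AI content and usage data here as well.
        },
      });
    }

    // Step 4e: mark the parent batch as fully processed.
    await db.aIBatchRequest.update({
      where: { id: batch.id },
      data: {
        status: "PROCESSED",
        outputFileId: remote.output_file_id,
        errorFileId: remote.error_file_id,
        processedAt: new Date(),
      },
    });
  }
}
```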
---

Phase 3: Implement the Internal Admin API

Create a new set of internal API endpoints for monitoring and managing this process.

* Location: app/api/admin/legacy/
* Authentication: Protect all these endpoints with our most secure admin-level authentication middleware (e.g., from lib/platform-auth.ts). Access should be strictly limited.
Endpoint 1: Get Summary
* Route: GET /api/admin/legacy/summary
* Description: Returns a count of all AIProcessingRequest records, grouped by processingStatus.
* Response:

```json
{ "ok": true, "summary": { "pending_batching": 15231, "batching_in_progress": 2500, "processing_complete": 85432, "processing_failed": 78 } }
```
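A minimal sketch of this handler using Prisma's `groupBy`; the admin guard is left as a comment because the exact export from lib/platform-auth.ts is not specified here.

```typescript
// app/api/admin/legacy/summary/route.ts — sketch only.
import { NextResponse } from "next/server";
import { db } from "@/lib/prisma";

export async function GET(_req: Request) {
  // Enforce admin-only access first via the guard from lib/platform-auth.ts
  // (its exact name is not specified here, so it is omitted from the sketch).

  const grouped = await db.aIProcessingRequest.groupBy({
    by: ["processingStatus"],
    _count: { _all: true },
  });

  // Fold the grouped rows into the lowercase keys shown in the response example.
  const summary = Object.fromEntries(
    grouped.map((g) => [g.processingStatus.toLowerCase(), g._count._all])
  );

  return NextResponse.json({ ok: true, summary });
}
```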
Endpoint 2: List Requests
* Route: GET /api/admin/legacy/requests
* Description: Retrieves a paginated list of AIProcessingRequest records, filterable by status.
* Query Params: status (required), limit (optional), cursor (optional).
* Response:

```json
{ "ok": true, "requests": [ { "id": "...", "sessionId": "...", "status": "processing_failed", ... } ], "nextCursor": "..." }
```
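A sketch of cursor-based pagination with Prisma: fetch `limit + 1` rows, and the extra row tells us whether a next page exists.

```typescript
// app/api/admin/legacy/requests/route.ts — sketch only; admin guard omitted as above.
import { NextResponse } from "next/server";
import { db } from "@/lib/prisma";

export async function GET(req: Request) {
  const url = new URL(req.url);
  const status = url.searchParams.get("status");
  if (!status) {
    return NextResponse.json({ ok: false, error: "status is required" }, { status: 400 });
  }
  const limit = Math.min(Number(url.searchParams.get("limit") ?? 50), 200);
  const cursor = url.searchParams.get("cursor");

  // Fetch one extra row to detect whether another page exists.
  const rows = await db.aIProcessingRequest.findMany({
    where: { processingStatus: status.toUpperCase() as any },
    orderBy: { id: "asc" },
    take: limit + 1,
    ...(cursor ? { cursor: { id: cursor }, skip: 1 } : {}),
  });

  const nextCursor = rows.length > limit ? rows[limit].id : null;
  return NextResponse.json({ ok: true, requests: rows.slice(0, limit), nextCursor });
}
```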
Endpoint 3: Re-queue Failed Requests
* Route: POST /api/admin/legacy/requests/requeue
* Description: Resets the status of specified failed requests back to PENDING_BATCHING so they can be re-processed in a new batch.
* Request Body: { "requestIds": ["req_id_1", "req_id_2"] }
* Response:

```json
{ "ok": true, "requeuedCount": 2, "notFoundCount": 0 }
```
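A sketch of the requeue handler; `updateMany` returns the matched count, which yields `requeuedCount` directly.

```typescript
// app/api/admin/legacy/requests/requeue/route.ts — sketch only; admin guard omitted as above.
import { NextResponse } from "next/server";
import { z } from "zod";
import { db } from "@/lib/prisma";

const bodySchema = z.object({ requestIds: z.array(z.string()).min(1) });

export async function POST(req: Request) {
  const { requestIds } = bodySchema.parse(await req.json());

  // Only failed requests are eligible; unlink them from their old batch.
  const { count } = await db.aIProcessingRequest.updateMany({
    where: { id: { in: requestIds }, processingStatus: "PROCESSING_FAILED" },
    data: { processingStatus: "PENDING_BATCHING", batchId: null },
  });

  return NextResponse.json({
    ok: true,
    requeuedCount: count,
    notFoundCount: requestIds.length - count,
  });
}
```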
---

Phase 4: Final Integration and Cleanup

1. Update `server.ts` and `lib/schedulers.ts`: Disable the old processingScheduler and enable the two new schedulers (batchCreator, batchPoller). Ensure they are controlled by environment variables (e.g., BATCH_CREATION_ENABLED, BATCH_POLLING_ENABLED); a sketch follows below.
2. Documentation: Add a section to CLAUDE.md or a new file in docs/ explaining the new batch processing architecture and the purpose of the admin API endpoints.
3. Environment Variables: Add any new required environment variables to .env.example.
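To make the gating concrete, a sketch of how lib/schedulers.ts might wire these up; the interval lengths mirror the suggestions in Phase 2, and the exact integration with server.ts is an assumption.

```typescript
// lib/schedulers.ts — sketch only; wiring and intervals are assumptions.
import { createPendingBatches } from "./batchCreator";
import { pollActiveBatches } from "./batchPoller";

export function startSchedulers(): void {
  if (process.env.BATCH_CREATION_ENABLED === "true") {
    // Bundle pending requests into OpenAI batches every 10 minutes.
    setInterval(() => void createPendingBatches().catch(console.error), 10 * 60 * 1000);
  }
  if (process.env.BATCH_POLLING_ENABLED === "true") {
    // Check in-flight batches and process completed results every 2 minutes.
    setInterval(() => void pollActiveBatches().catch(console.error), 2 * 60 * 1000);
  }
}
```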
Please proceed with this refactoring plan. Implement robust logging throughout the new schedulers to ensure we can debug the pipeline effectively.
refactor/implement-tRPC.md (new file, +257)
@@ -0,0 +1,257 @@
# Refactoring Plan: Integrating tRPC for End-to-End Type Safety

**Objective:** Incrementally adopt tRPC to replace existing RESTful API endpoints, enhancing type safety, developer experience, and maintainability.

**Assignee:** Claude Code

**Mentor:** Max

---
## 1. Overview

This document outlines the step-by-step process for integrating tRPC into the existing Next.js application. The primary goal is to establish a robust, type-safe API layer that simplifies data fetching and mutations between the client and server.

The migration will be performed incrementally to minimize disruption. We will start by setting up the core tRPC infrastructure and then migrate a single, non-critical endpoint to validate the approach.
## 2. Core Concepts & Strategy

### Why tRPC?

- **End-to-End Type Safety:** Eliminates a class of runtime errors by ensuring the client and server conform to the same data contracts. TypeScript errors will appear at build time if the client and server are out of sync.
- **Improved Developer Experience:** Provides autocompletion for API procedures and their data types directly in the editor.
- **Simplified Data Fetching:** Replaces manual `fetch` calls and `useEffect` hooks with clean, declarative tRPC hooks (`useQuery`, `useMutation`).
- **No Code Generation:** Leverages TypeScript inference, avoiding a separate schema definition or code generation step.
### Integration Strategy: Gradual Adoption

1. **Setup Core Infrastructure:** Install dependencies and configure the tRPC server, client, and providers.
2. **Create a Test Endpoint:** Implement a simple "hello world" procedure to ensure the setup is working correctly.
3. **Migrate One Endpoint:** Choose a simple, read-only endpoint (e.g., fetching a list of users) and convert it to a tRPC query.
4. **Validate and Review:** Confirm that the migrated endpoint works as expected and that the code is clean and idiomatic.
5. **Continue Migration:** Gradually migrate other endpoints, starting with queries and then moving to mutations.
## 3. Implementation Steps

### Step 1: Install Dependencies

Add the required tRPC packages and `zod` for schema validation.

```bash
pnpm add @trpc/server @trpc/client @trpc/react-query @trpc/next @tanstack/react-query zod
```
### Step 2: Set Up the Backend (Server-Side)

#### A. Create the tRPC Initializer

Create a new file at `lib/trpc/server.ts` to initialize tRPC. This file will export the core `t` object and procedure helpers.

```typescript
// lib/trpc/server.ts
import { initTRPC } from '@trpc/server';

// Avoid exporting the entire t-object, since it's not very descriptive.
const t = initTRPC.create();

// Base router and procedure helpers
export const router = t.router;
export const procedure = t.procedure;
```
#### B. Define the Main App Router

Create a file for the main tRPC router at `lib/trpc/routers/_app.ts`. This router will combine all other sub-routers.

```typescript
// lib/trpc/routers/_app.ts
import { router } from '../server';
import { userRouter } from './user'; // Example sub-router

export const appRouter = router({
  user: userRouter,
  // Add other routers here as they are created
});

// Export the type definition of the API
export type AppRouter = typeof appRouter;
```
#### C. Create an Example Sub-Router

Create an example router for user-related endpoints at `lib/trpc/routers/user.ts`.

```typescript
// lib/trpc/routers/user.ts
import { router, procedure } from '../server';
import { z } from 'zod';
import { db } from '@/lib/prisma';

export const userRouter = router({
  // Example query to get all users
  list: procedure.query(async () => {
    const users = await db.user.findMany();
    return users;
  }),

  // Example query to get a user by ID
  byId: procedure
    .input(z.object({ id: z.string() }))
    .query(async ({ input }) => {
      const user = await db.user.findUnique({ where: { id: input.id } });
      return user;
    }),
});
```
#### D. Create the tRPC API Route Handler

Create the entry point for all tRPC API calls at `app/api/trpc/[trpc]/route.ts`.

```typescript
// app/api/trpc/[trpc]/route.ts
import { fetchRequestHandler } from '@trpc/server/adapters/fetch';
import { appRouter } from '@/lib/trpc/routers/_app';

const handler = (req: Request) =>
  fetchRequestHandler({
    endpoint: '/api/trpc',
    req,
    router: appRouter,
    createContext: () => ({}), // We will add context later
  });

export { handler as GET, handler as POST };
```
### Step 3: Set Up the Frontend (Client-Side)

#### A. Create the tRPC Client

Create a file at `lib/trpc/client.ts` to configure the client-side hooks.

```typescript
// lib/trpc/client.ts
import { createTRPCReact } from '@trpc/react-query';
import { type AppRouter } from '@/lib/trpc/routers/_app';

export const trpc = createTRPCReact<AppRouter>({});
```
#### B. Create the tRPC Provider

We need a new provider that wraps our app in both a `QueryClientProvider` (from TanStack Query) and the tRPC provider. Create this at `lib/trpc/Provider.tsx`.

```tsx
// lib/trpc/Provider.tsx
'use client';

import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import { httpBatchLink } from '@trpc/client';
import React, { useState } from 'react';
import { trpc } from './client';
import { getBaseUrl } from '@/lib/utils'; // You might need to create this helper

export function TRPCProvider({ children }: { children: React.ReactNode }) {
  const [queryClient] = useState(() => new QueryClient({}));
  const [trpcClient] = useState(() =>
    trpc.createClient({
      links: [
        httpBatchLink({
          url: `${getBaseUrl()}/api/trpc`,
        }),
      ],
    })
  );

  return (
    <trpc.Provider client={trpcClient} queryClient={queryClient}>
      <QueryClientProvider client={queryClient}>{children}</QueryClientProvider>
    </trpc.Provider>
  );
}
```

*Note: You will need a `getBaseUrl` utility function to resolve the correct API URL on the client and server. You can place this in `lib/utils.ts`.*
```typescript
// lib/utils.ts

export function getBaseUrl() {
  if (typeof window !== 'undefined') return ''; // browser should use a relative URL
  if (process.env.VERCEL_URL) return `https://${process.env.VERCEL_URL}`; // SSR should use the Vercel URL
  return `http://localhost:${process.env.PORT ?? 3000}`; // dev SSR should use localhost
}
```
#### C. Update the Root Layout and Providers

Wrap the application with the new `TRPCProvider` in `app/providers.tsx`.

```tsx
// app/providers.tsx
'use client';

import { ThemeProvider } from '@/components/theme-provider';
import { TRPCProvider } from '@/lib/trpc/Provider'; // Import the new provider

export function Providers({ children }: { children: React.ReactNode }) {
  return (
    <ThemeProvider
      attribute="class"
      defaultTheme="system"
      enableSystem
      disableTransitionOnChange
    >
      <TRPCProvider>{children}</TRPCProvider> {/* Wrap with TRPCProvider */}
    </ThemeProvider>
  );
}
```
### Step 4: Use the tRPC Hooks in a Component

Now you can replace a traditional `fetch` call with the new tRPC hook. For example, in a component that displays a list of users:

```tsx
// app/dashboard/users/page.tsx (Example)
'use client';

import { trpc } from '@/lib/trpc/client';

export default function UsersPage() {
  const { data: users, isLoading, error } = trpc.user.list.useQuery();

  if (isLoading) {
    return <div>Loading users...</div>;
  }

  if (error) {
    return <div>Error: {error.message}</div>;
  }

  return (
    <div>
      <h1>Users</h1>
      <ul>
        {users?.map((user) => (
          <li key={user.id}>{user.name} ({user.email})</li>
        ))}
      </ul>
    </div>
  );
}
```
## 4. Next Steps & Future Enhancements

- **Authentication & Context:** Implement a `createContext` function to pass session data (e.g., from NextAuth.js) to your tRPC procedures. This will allow for protected procedures (see the sketch after this list).
- **Input Validation:** Extensively use `zod` in the `.input()` part of procedures to validate all incoming data.
- **Error Handling:** Implement robust error handling on both the client and server.
- **Mutations:** Begin migrating `POST`, `PUT`, and `DELETE` endpoints to tRPC mutations.
- **Optimistic UI:** For mutations, implement optimistic updates to provide a faster user experience.
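As a starting point for the first item, here is a minimal sketch of a session-aware context and a protected procedure. It assumes NextAuth v4's `getServerSession` and an `authOptions` export; the `@/lib/auth` location is an assumption.

```typescript
// lib/trpc/server.ts — sketch of a future revision; assumes NextAuth is configured
// and that authOptions lives at '@/lib/auth' (location is an assumption).
import { initTRPC, TRPCError } from '@trpc/server';
import { getServerSession } from 'next-auth';
import { authOptions } from '@/lib/auth';

// Build the context once per request so procedures can read the session.
export async function createContext() {
  const session = await getServerSession(authOptions);
  return { session };
}

type Context = Awaited<ReturnType<typeof createContext>>;

const t = initTRPC.context<Context>().create();

export const router = t.router;
export const procedure = t.procedure;

// A procedure that rejects unauthenticated callers before running.
export const protectedProcedure = t.procedure.use(({ ctx, next }) => {
  if (!ctx.session?.user) {
    throw new TRPCError({ code: 'UNAUTHORIZED' });
  }
  return next({ ctx: { session: ctx.session } });
});
```

The `createContext: () => ({})` placeholder in the Step 2D route handler would then call this `createContext` function instead.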
---

This structured approach will ensure a smooth and successful integration of tRPC, leading to a more robust and maintainable codebase.