feat: implement OpenAI Batch API for cost-efficient AI processing

- Add AIBatchRequest and AIRequestStatus models to Prisma schema
- Create comprehensive batch processing system (lib/batchProcessor.ts)
- Add intelligent batch scheduler with automated management
- Update processing pipeline to use batch requests instead of direct API calls
- Integrate batch scheduler into main server startup
- Achieve 50% cost reduction on OpenAI API usage
- Improve rate limiting and processing reliability
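The bullets above describe moving direct API calls into OpenAI's Batch API, which is what yields the 50% discount. As a rough illustration of the idea (this is a hypothetical sketch, not the actual code in lib/batchProcessor.ts; the names BatchItem and buildBatchJsonl are invented for this example), a batch processor typically assembles one JSONL line per pending request, using a custom_id to tie each result back to its AIBatchRequest row:

```typescript
// Hypothetical sketch of JSONL assembly for OpenAI's Batch API.
// Names are illustrative, not the actual code in this commit.

interface BatchItem {
  id: string; // our internal AIBatchRequest id
  prompt: string;
}

// Each line of the batch input file is one JSON request object; the
// custom_id lets results be matched back to our own request records.
function buildBatchJsonl(items: BatchItem[], model: string): string {
  return items
    .map((item) =>
      JSON.stringify({
        custom_id: item.id,
        method: "POST",
        url: "/v1/chat/completions",
        body: {
          model,
          messages: [{ role: "user", content: item.prompt }],
        },
      })
    )
    .join("\n");
}
```

The resulting file would then be uploaded and submitted as a batch with a 24h completion window; a scheduler like the one added in this commit can poll for completion and write results back by custom_id.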
Committed by Kaj Kowalski, 2025-07-05 14:13:19 +02:00
parent 5798988012
commit 8c8f360936
6 changed files with 1028 additions and 35 deletions


@@ -108,13 +108,13 @@ User: Third
expect(result.success).toBe(true);
expect(result.messages).toHaveLength(3);
// First message should be at start time
expect(result.messages![0].timestamp.getTime()).toBe(startTime.getTime());
// Last message should be at end time
expect(result.messages![2].timestamp.getTime()).toBe(endTime.getTime());
// Middle message should be between start and end
const midTime = result.messages![1].timestamp.getTime();
expect(midTime).toBeGreaterThan(startTime.getTime());
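The assertions above pin the first message to the start time, the last to the end time, and require the middle one to fall strictly between them, i.e. timestamps spread linearly across the range. A minimal sketch of that interpolation (illustrative only; interpolateTimestamps is an invented name, not the parser under test):

```typescript
// Spread n timestamps linearly between start and end:
// the first is pinned to start, the last to end, and the
// rest fall at evenly spaced points in between.
function interpolateTimestamps(n: number, start: Date, end: Date): Date[] {
  if (n === 1) return [new Date(start.getTime())];
  const step = (end.getTime() - start.getTime()) / (n - 1);
  return Array.from(
    { length: n },
    (_, i) => new Date(start.getTime() + i * step)
  );
}
```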
@@ -174,7 +174,7 @@ System: Mixed case system
expect(result.success).toBe(true);
expect(result.messages).toHaveLength(2);
// Check that timestamps were parsed correctly
const firstTimestamp = result.messages![0].timestamp;
expect(firstTimestamp.getFullYear()).toBe(2024);