Mirror of https://github.com/kjanat/livedash-node.git (synced 2026-01-16 15:52:10 +01:00)
feat: implement OpenAI Batch API for cost-efficient AI processing
- Add AIBatchRequest and AIRequestStatus models to Prisma schema
- Create comprehensive batch processing system (lib/batchProcessor.ts)
- Add intelligent batch scheduler with automated management
- Update processing pipeline to use batch requests instead of direct API calls
- Integrate batch scheduler into main server startup
- Achieve 50% cost reduction on OpenAI API usage
- Improve rate limiting and processing reliability
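
For context on the pipeline change described above, here is a minimal sketch of how submitting queued requests through the OpenAI Batch API typically works: pending requests are serialized to JSONL (one request object per line), uploaded as a file with purpose "batch", and registered as a batch job whose id can later be polled. The helper names (submitBatch, PendingAIRequest) and the model choice are illustrative assumptions, not the repository's code; the actual logic lives in lib/batchProcessor.ts and the scheduler.

// Sketch only: hypothetical helper, not the implementation in lib/batchProcessor.ts.
import OpenAI, { toFile } from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical shape of a queued AI request pulled from the database.
interface PendingAIRequest {
  id: string;     // used as custom_id so results can be matched back later
  prompt: string;
}

async function submitBatch(requests: PendingAIRequest[]): Promise<string> {
  // The Batch API consumes a JSONL file: one request object per line.
  const jsonl = requests
    .map((r) =>
      JSON.stringify({
        custom_id: r.id,
        method: "POST",
        url: "/v1/chat/completions",
        body: {
          model: "gpt-4o-mini", // model choice is illustrative
          messages: [{ role: "user", content: r.prompt }],
        },
      })
    )
    .join("\n");

  // Upload the JSONL payload with purpose "batch".
  const file = await openai.files.create({
    file: await toFile(Buffer.from(jsonl), "batch.jsonl"),
    purpose: "batch",
  });

  // Create the batch job against the chat completions endpoint.
  const batch = await openai.batches.create({
    input_file_id: file.id,
    endpoint: "/v1/chat/completions",
    completion_window: "24h",
  });

  // The batch id would be stored (e.g. on an AIBatchRequest row) so a scheduler
  // can poll openai.batches.retrieve(batch.id) until the job completes.
  return batch.id;
}

Requests completed through the Batch API's 24-hour completion window are billed at a discounted rate, which is where the 50% cost reduction cited in the commit message comes from.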
@@ -108,13 +108,13 @@ User: Third

expect(result.success).toBe(true);
expect(result.messages).toHaveLength(3);

// First message should be at start time
expect(result.messages![0].timestamp.getTime()).toBe(startTime.getTime());

// Last message should be at end time
expect(result.messages![2].timestamp.getTime()).toBe(endTime.getTime());

// Middle message should be between start and end
const midTime = result.messages![1].timestamp.getTime();
expect(midTime).toBeGreaterThan(startTime.getTime());
@@ -174,7 +174,7 @@ System: Mixed case system

expect(result.success).toBe(true);
expect(result.messages).toHaveLength(2);

// Check that timestamps were parsed correctly
const firstTimestamp = result.messages![0].timestamp;
expect(firstTimestamp.getFullYear()).toBe(2024);