feat: Refactor data processing pipeline with AI cost tracking and enhanced session management

- Updated environment configuration to include Postgres database settings.
- Enhanced import processing to minimize field copying and rely on AI for analysis.
- Implemented detailed AI processing request tracking, including token usage and costs.
- Added new models for Question and SessionQuestion to manage user inquiries separately.
- Improved session processing scheduler with AI cost reporting functionality.
- Created a test script to validate the refactored pipeline and display processing statistics.
- Updated Prisma schema and migration files to reflect new database structure and relationships.
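The AI cost tracking described above can be sketched as a small cost calculator over token usage. This is a minimal illustration, not the commit's actual implementation: the model names and per-token prices below are hypothetical placeholders.

```typescript
// Hypothetical cost calculator for AI processing requests.
// Pricing values are illustrative, not the project's real rates.
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
}

interface ModelPricing {
  promptPer1k: number;     // USD per 1,000 prompt tokens
  completionPer1k: number; // USD per 1,000 completion tokens
}

const PRICING: Record<string, ModelPricing> = {
  "example-model": { promptPer1k: 0.00015, completionPer1k: 0.0006 },
};

function requestCost(model: string, usage: TokenUsage): number {
  const p = PRICING[model];
  if (!p) throw new Error(`Unknown model: ${model}`);
  // Cost is linear in tokens, billed per 1,000 tokens per direction.
  return (
    (usage.promptTokens / 1000) * p.promptPer1k +
    (usage.completionTokens / 1000) * p.completionPer1k
  );
}
```

Persisting one such record per AI request (model, usage, computed cost) is what enables the per-session cost reporting mentioned for the scheduler.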
Author: Max Kowalski
Date: 2025-06-27 21:15:44 +02:00
Parent: 601e2e4026
Commit: 6f9ac219c2
10 changed files with 747 additions and 198 deletions


@@ -17,6 +17,10 @@ SESSION_PROCESSING_INTERVAL="0 * * * *" # Every hour (cron format) - AI pro
 SESSION_PROCESSING_BATCH_SIZE="0" # 0 = process all sessions, >0 = limit number
 SESSION_PROCESSING_CONCURRENCY="5" # Number of sessions to process in parallel
+
+# Postgres Database Configuration
+DATABASE_URL_TEST="postgresql://"
+DATABASE_URL="postgresql://"
 
 # Example configurations:
 # - For development (no schedulers): SCHEDULER_ENABLED=false
 # - For testing (every 5 minutes): CSV_IMPORT_INTERVAL=*/5 * * * *