A high-performance, asynchronous report generation engine built with Laravel. Designed to handle complex SQL queries, large datasets, and real-time user feedback through a decoupled SSE (Server-Sent Events) architecture.
The system follows a Domain-Driven Pipe & Filter architecture, focusing on memory efficiency and scalability.
graph TD
Client[VueJS Frontend] <--> NestJS[NestJS SSE Bridge]
Client -- "POST /api/reports/{type}" --> Laravel[Laravel API]
Laravel -- "Dispatch Job" --> RedisQ[Redis Queue]
RedisQ -- "Execute Pipeline" --> Worker[Laravel Worker]
Worker -- "SQL Cursor" --> DB[(Database)]
Worker -- "Flush Buffers" --> S3[Minio/S3 Storage]
Worker -- "Publish Events" --> RedisPub[Redis Pub/Sub]
RedisPub -- "Subscribe" --> NestJS
- Laravel API: Entry point for report requests, validation, and configuration.
- Pipe & Filter Engine: A sequential pipeline that processes reports in stages (Start, Build, Count, Process, Zip, Finish).
- Redis: Dual-purpose as a reliable Job Queue and a high-speed Message Broker (Pub/Sub).
- NestJS Bridge (External): Handles long-lived SSE connections, offloading concurrency from the main PHP application.
The codebase is organized by domain-specific responsibility:
app/
├── Actions/
│ ├── Files/ # Low-level file operations (Multipart S3 Uploads, Expiration)
│ └── Reports/ # Business actions (SSE Publishing, Status Changes)
├── Reports/ # The "Report Engines" (Mappers, Queries, DTOs)
│ ├── Abstract/ # Base contracts for ReportQuery, ReportMapper, and Filters
│ ├── ProposalGeneration/
│ └── ResumeGeneration/
├── Pipelines/
│ └── CSV/ # Pipe & Filter implementation for CSV generation
│ └── Pipes/ # Granular steps: BuildQuery, CountRows, ProcessRows, etc.
├── Services/
│ ├── Csv/ # CsvExportService (Memory-safe buffering & Streaming)
│ ├── ReportDispatch/ # Orchestrates request -> background job
│ └── Pagination/ # Handles "Preview" mode using the same Report Engines
├── Support/
│ └── Reports/ # ReportProcessManager (Heartbeats, Cancellation, Monitoring)
├── Jobs/ # Background processing & Event debouncing
├── ValueObjects/ # Immutable configuration objects (ReportConfiguration)
└── DTOs/ # Strongly-typed data transfer objects
User requests are normalized into a ReportConfiguration ValueObject before being queued.
sequenceDiagram
participant U as Client
participant C as ReportController
participant S as ReportDispatchService
participant J as ProcessReportQueryJob
U->>C: POST /api/reports/proposals
C->>C: Create ReportConfiguration
C->>S: dispatch(Configuration)
S->>S: Validate Duplicate Processes
S->>S: Persist ExportProgress (Status: WAITING)
S->>J: Dispatch(ReportProcessorData)
S-->>U: 202 Accepted (process_id)
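A minimal sketch of what the immutable `ReportConfiguration` ValueObject could look like. The field names and the `fingerprint()` helper are illustrative assumptions, not the actual class; they show why normalization makes the duplicate-process check order-insensitive:

```php
<?php

// Hypothetical shape of the ReportConfiguration ValueObject
// (the real class lives in app/ValueObjects/; fields are assumed).
final class ReportConfiguration
{
    public function __construct(
        public readonly string $reportType,
        public readonly int $userId,
        public readonly array $filters,
        public readonly string $queue = 'reports',
    ) {}

    // Sorting the filters before hashing means two requests with the
    // same filters in a different order produce the same fingerprint,
    // which is what duplicate-process detection needs.
    public function fingerprint(): string
    {
        $filters = $this->filters; // arrays copy on assignment in PHP
        ksort($filters);

        return hash('sha256', $this->reportType.$this->userId.json_encode($filters));
    }
}
```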
The heavy lifting is handled by a sequential pipeline where each stage has a single responsibility.
graph LR
Start[StartReport] --> Build[BuildQuery]
Build --> Count[CountRows]
Count --> Process[ProcessRows]
Process --> Zip[ZipCsv]
Zip --> Finish[FinishReport]
subgraph "ProcessRows Core Loop"
Cursor[Eloquent Cursor] -- "Row-by-row" --> Buffer[Service Buffer]
Buffer -- "Chunk reached?" --> Disk[Append to Local Disk]
Disk -- "Every 3s" --> SSE[Queue SSE Progress]
Disk -- "Every 10s" --> Beat[Redis Heartbeat]
end
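The stage composition above can be illustrated in plain PHP. This is not the project's actual pipeline (which would use Laravel's `Illuminate\Pipeline\Pipeline`), just a self-contained sketch of the pipe & filter contract it mirrors: each pipe receives the payload and a `$next` closure, and decides whether to continue:

```php
<?php

// Three stand-in pipes (StartReport, CountRows, FinishReport); the real
// pipes are classes under app/Pipelines/CSV/Pipes/.
$pipes = [
    fn (array $r, callable $next) => $next($r + ['started' => true]),   // StartReport
    fn (array $r, callable $next) => $next($r + ['rows' => 3]),         // CountRows
    fn (array $r, callable $next) => $next($r + ['finished' => true]),  // FinishReport
];

// Fold the pipes (in reverse) around a terminal stage so the first pipe
// listed runs first, exactly like Pipeline::through([...])->thenReturn().
$pipeline = array_reduce(
    array_reverse($pipes),
    fn (callable $next, callable $pipe) => fn (array $r) => $pipe($r, $next),
    fn (array $r) => $r // terminal stage: return the payload
);

$result = $pipeline(['report' => 'proposals']);
```

Because each stage only ever sees the payload and `$next`, stages like `ZipCsv` can be added, removed, or reordered without touching their neighbours.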
Laravel publishes to Redis, which NestJS bridges to the client. This prevents PHP workers from being blocked by slow SSE clients.
sequenceDiagram
participant W as Laravel Worker
participant R as Redis Pub/Sub
participant N as NestJS Bridge
participant C as Client
W->>W: Processed 5000 rows
W->>R: PUBLISH reports:events:user:{id} { "progress": 50 }
R-->>N: Trigger Subscriber
N->>C: Push SSE: progress
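A sketch of how the worker-side publish step might build its message. The channel pattern comes from the diagram; the payload keys and the `progressEvent` helper are assumptions. In Laravel the resulting pair would go through the Redis facade as `Redis::publish($channel, $payload)`:

```php
<?php

// Build the Pub/Sub channel name and JSON payload for a progress event.
// Keys ('event', 'process_id', 'progress') are illustrative.
function progressEvent(int $userId, string $processId, int $done, int $total): array
{
    return [
        'channel' => "reports:events:user:{$userId}",
        'payload' => json_encode([
            'event'      => 'progress',
            'process_id' => $processId,
            // Send a plain number rather than a formatted string like "50%",
            // so the NestJS bridge and the client can render it freely.
            'progress'   => $total > 0 ? (int) floor($done / $total * 100) : 0,
        ]),
    ];
}
```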
- Memory Management:
- Eloquent Cursors: Stream database results row by row instead of loading the entire result set into memory.
- Chunked Buffering: Rows are buffered in memory and flushed to disk in configurable sizes (e.g., 5000 rows) to keep memory usage flat.
- Concurrency Control:
- Duplicate Prevention: Checks for active reports with identical filters for the same user before dispatching.
- Heartbeat Monitoring: Workers send a heartbeat to Redis. A scheduled command (`VerifyReportHeartbeatsCommand`) detects and fails "zombie" jobs.
- Resiliency:
- SSE Debouncing: `SendSseEventJob` can be configured to drop intermediate updates if the queue is backed up, ensuring the UI always receives the latest state.
- Multipart Uploads: Large zip files are uploaded to S3 using multi-threaded streaming (10MB chunks).
- Extensibility:
- Boilerplate Generation: A custom Artisan command creates all necessary files (Mapper, Query, Controller, Request) to ensure consistency.
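The chunked-buffering idea at the heart of `ProcessRows` can be sketched in isolation. The real implementation streams from an Eloquent cursor into `CsvExportService`; here a plain iterable and a `$flush` callback stand in for both, and the function name is hypothetical:

```php
<?php

// Buffer rows in memory and flush every $chunkSize rows so memory
// stays flat regardless of how many rows the cursor yields.
function flushRows(iterable $rows, int $chunkSize, callable $flush): int
{
    $buffer  = [];
    $written = 0;

    foreach ($rows as $row) {
        $buffer[] = $row;

        if (count($buffer) >= $chunkSize) {
            $flush($buffer);            // e.g. append the chunk to the local CSV
            $written += count($buffer);
            $buffer = [];               // release memory before the next chunk
        }
    }

    if ($buffer !== []) {               // flush the final partial chunk
        $flush($buffer);
        $written += count($buffer);
    }

    return $written;
}
```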
- Generate Boilerplate: `php artisan make:report {DomainName} --report_name="Friendly Name" --queue="queue_name"`
- Define Query: Implement `formulateTables`, `formulateConditions`, and `formulateColumns` in `app/Reports/{DomainName}/{DomainName}Query.php`.
- Define Mapper: Implement the `map` method in `app/Reports/{DomainName}/{DomainName}Mapper.php` to transform raw DB rows into CSV columns.
- Register Route: Add the route to `routes/api.php` as suggested by the command output: `Route::match(['GET', 'POST'], 'your-report-name', YourReportController::class);`
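As an illustration of the mapper contract, a hypothetical `ProposalGeneration` mapper might look like this. The class name follows the documented `{DomainName}Mapper` convention, but the column names and formatting choices are assumptions:

```php
<?php

// Hypothetical mapper: map() turns one raw DB row (as returned by the
// report's Query) into an ordered array of CSV column values.
final class ProposalGenerationMapper
{
    public function map(object $row): array
    {
        return [
            $row->id,
            $row->client_name,
            // Normalize monetary values to two decimals for the CSV.
            number_format((float) $row->amount, 2, '.', ''),
            $row->created_at,
        ];
    }
}
```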
Maintained by the Core Engineering Team.