Release v0.9.0
When a Flowthru pipeline fails at runtime, you now get structured error reports with pre-populated GitHub issue URLs, making it much easier to diagnose problems and report bugs back to the team.
What's New
- Runtime Exception Reporting: When a flow encounters a runtime error, you now get an automatic error report that classifies the failure (possible Flowthru bug vs. external factors like network or filesystem issues) and generates a pre-populated GitHub issue URL with full context: stack trace, environment details, flow name, and which step failed. This cuts debugging time and makes bug reports complete before you file them.
- Shallow Inspection Performance: Inspecting catalog metadata is now significantly faster across all storage adapters. The JSON, EFCore, GraphQL, and Parquet serializers all inspect schema and structure without loading full datasets, so flows with many catalog items will see faster startup times.
- Parquet IO Optimization: Parquet read/write performance has improved, and serialization now exposes configurable options for tuning behavior on different hardware and data scales.
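The pre-populated issue URLs described above rely on GitHub's standard new-issue query parameters (`title`, `body`, `labels`). As a rough sketch of the mechanism (the `build_issue_url` helper and the `example/flowthru` repository are hypothetical, not Flowthru's actual API), such a URL can be assembled like this:

```python
from urllib.parse import urlencode

def build_issue_url(repo, title, body, labels=None):
    """Build a GitHub new-issue URL with a pre-filled title and body."""
    params = {"title": title, "body": body}
    if labels:
        params["labels"] = ",".join(labels)
    # GitHub reads these query parameters on the /issues/new page.
    return f"https://github.com/{repo}/issues/new?{urlencode(params)}"

url = build_issue_url(
    "example/flowthru",
    "Runtime error in flow 'ingest' at step 'load_raw'",
    "Stack trace:\n...\nEnvironment: .NET runtime, linux-x64",
    labels=["bug"],
)
```

Opening the resulting link lands the reporter on a new-issue form with the title and body already filled in, so the context captured at failure time survives into the report.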
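Flowthru's adapter internals are not shown here, but the general idea behind shallow inspection can be sketched independently: query a store's schema metadata instead of materializing the data itself. A minimal Python illustration using a SQLite table as a stand-in catalog (the table and its contents are invented for the example):

```python
import sqlite3

# Build a small in-memory "catalog" table for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload BLOB)")
conn.executemany(
    "INSERT INTO items (payload) VALUES (?)",
    [(b"x" * 100,)] * 1000,
)

# Shallow inspection: read column names/types and a row count from
# metadata queries, without fetching any payload data into memory.
schema = conn.execute("PRAGMA table_info(items)").fetchall()
columns = [row[1] for row in schema]
row_count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
```

Formats with self-describing metadata (Parquet footers, GraphQL schemas, EFCore models) allow the same trick, which is why schema inspection can be made fast without touching the underlying datasets.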
Bug Fixes
- Flow Configuration: Fixed configuration catalog generation across all example flows.
- Spark Compatibility: Temporarily removed pending the Databricks branch integration; it will be restored once that integration is complete.
- Example Configurations: Resolved configuration issues in KedroSpaceflightsCustom and template test dependencies.
🚀 Features
- runtime exception reporting addition (6a4f30db)
🩹 Fixes
- performance resolution for shallow inspection across extensions (78e4c3a1)
- performance improvements on parquet IO (8aa35955)
- resolve flow config issues (e20a9144)
- temp remove spark compat pending databricks branch integration (65b1aee3)
- resolve KedroSpaceflightsCustom config settings (a654c16e)
- resolve brittle template test pack dependency (c0038c9d)
❤️ Thank You
- Spencer Elkington