Comprehensive unit tests for all Pale Fire modules and functions.
```shell
# Install dependencies
pip install -r requirements.txt

# Run all tests
pytest

# Run with coverage
pytest --cov=. --cov-report=html
```

| File | Description | Tests |
|---|---|---|
| test_config.py | Configuration module | 15+ tests |
| test_palefire_core.py | EntityEnricher & QuestionTypeDetector | 25+ tests |
| test_search_functions.py | Search and helper functions | 20+ tests |
| test_api.py | FastAPI endpoints and models | 15+ tests |
| test_ai_agent.py | AI Agent (ModelManager, Daemon, Parsers) | 47+ tests |
Current test coverage by module:
- config.py: Configuration validation, defaults, helpers
- modules/PaleFireCore.py: Entity extraction, question detection
- Search functions: Query parsing, scoring, ranking
- api.py: Pydantic models, endpoint logic
- agents/AIAgent.py: ModelManager, AIAgentDaemon lifecycle, keyword/entity extraction
- agents/parsers/: File parsers (TXT, CSV, PDF, Spreadsheet)
Running tests:

```shell
# Run all tests
pytest

# Run a single test file
pytest tests/test_config.py

# Run a single test
pytest tests/test_config.py::TestConfig::test_neo4j_config_defaults

# Run with coverage
pytest --cov=. --cov-report=html --cov-report=term

# Verbose output
pytest -v
```

Tests are marked for easy filtering:
- `@pytest.mark.unit` - Unit tests
- `@pytest.mark.integration` - Integration tests
- `@pytest.mark.slow` - Slow running tests
- `@pytest.mark.requires_neo4j` - Requires Neo4j
- `@pytest.mark.requires_spacy` - Requires spaCy
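If these markers are not registered, pytest emits `PytestUnknownMarkWarning` for each one. They can be declared in `pytest.ini` (or the `[tool.pytest.ini_options]` table of `pyproject.toml`) — a sketch assuming the marker names above; the descriptions are illustrative:

```ini
# pytest.ini — register custom markers so pytest doesn't warn on unknown marks
[pytest]
markers =
    unit: Unit tests
    integration: Integration tests
    slow: Slow running tests
    requires_neo4j: Requires a running Neo4j instance
    requires_spacy: Requires spaCy to be installed
```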
Run specific markers:

```shell
pytest -m unit
pytest -m "not slow"
pytest -m "not requires_neo4j"
```

To add new tests:

- Create a test file: `test_<module>.py`
- Create a test class: `class Test<Feature>:`
- Write test methods: `def test_<functionality>():`
- Run the tests: `pytest tests/test_<module>.py`
Example:

```python
class TestMyFeature:
    def test_basic_functionality(self):
        """Test basic feature works."""
        result = my_function()
        assert result is not None
```

Test file template:

```python
import pytest


class TestFeatureName:
    """Test FeatureName functionality."""

    def test_normal_case(self):
        """Test normal operation."""
        assert function() == expected

    def test_edge_case(self):
        """Test edge case."""
        assert function(edge_input) == edge_output

    def test_error_handling(self):
        """Test error handling."""
        with pytest.raises(ValueError):
            function(invalid_input)
```

Directory for test input files (sample documents, CSV files, PDFs, etc.). Use this directory for persistent test files that should be version-controlled.
Usage:

```python
@pytest.fixture
def sample_file(examples_input_dir):
    return examples_input_dir / 'sample.txt'
```

Directory for test output files (parsed results, extracted keywords, etc.). Use this directory for storing test outputs for verification.
Usage:

```python
@pytest.fixture
def output_file(examples_output_dir):
    return examples_output_dir / 'result.json'
```

Note: Tests currently use tempfile for temporary files, but these directories are available for integration tests that need persistent test files.
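The `examples_input_dir` and `examples_output_dir` fixtures used above would typically be defined in a shared `conftest.py`. A minimal sketch — the `tests/examples/` directory layout is an assumption, not the project's confirmed structure:

```python
# Hypothetical conftest.py sketch providing the examples_input_dir /
# examples_output_dir fixtures. Paths are relative to the repo root,
# where pytest is normally invoked.
from pathlib import Path

import pytest

# Assumed location of the persistent example files (adjust to the real layout)
EXAMPLES_ROOT = Path("tests") / "examples"


@pytest.fixture
def examples_input_dir():
    d = EXAMPLES_ROOT / "input"
    d.mkdir(parents=True, exist_ok=True)  # create on first use
    return d


@pytest.fixture
def examples_output_dir():
    d = EXAMPLES_ROOT / "output"
    d.mkdir(parents=True, exist_ok=True)
    return d
```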
Tests run automatically on:
- Pull requests
- Commits to main branch
- Manual triggers
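A GitHub Actions workflow matching those triggers might look like the following sketch (the file name, runner, and Python version are assumptions, not the project's actual CI configuration):

```yaml
# .github/workflows/tests.yml — hypothetical sketch, not the actual workflow
name: tests

on:
  push:
    branches: [main]
  pull_request:
  workflow_dispatch:   # manual triggers

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest --cov=. --cov-report=term
```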
- Testing Guide - Complete testing documentation
- pytest Documentation
- Coverage Reports - Generated after running with `--cov`
Test Suite v1.0 - Ensuring Quality! ✅