
PortableRunner tests: surface worker-thread exceptions on main thread after wait_until_finish() (fixes #35211) #36485

Merged
damccorm merged 3 commits into apache:master from jh1231223:fix-portable-runner-warn on Oct 16, 2025

Conversation

@jh1231223 (Contributor) commented Oct 12, 2025

Thanks for reviewing!

It seems the current failures are unrelated to the modifications in this PR. If I’ve overlooked something, please let me know and I’ll address it promptly.

Context / Issue

PortableRunnerTest::test_assert_that is flaky (#35211): the assert_that failure is raised on a worker thread, so pytest emits a PytestUnhandledThreadExceptionWarning instead of failing the test.

What this change does

  • Captures threading.excepthook during the test run.
  • Wraps beam.Pipeline.run(...).wait_until_finish() so that any captured worker-thread exception is rethrown on the main thread at the end of the run (see the sketch after this list).
  • Updates test_assert_that to use this shim and assert the expected failure via assertRaisesRegex, turning the flaky pass into a deterministic failure for the negative case.
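
Below is a minimal, self-contained sketch of that capture-and-rethrow pattern. The _ThreadExceptionCollector helper and the demo worker are illustrative assumptions for this writeup, not the exact shim in the PR:

import threading

class _ThreadExceptionCollector:
  # Hypothetical helper: captures worker-thread exceptions via
  # threading.excepthook and re-raises them on the main thread on exit.

  def __enter__(self):
    self._errors = []
    self._orig_hook = threading.excepthook

    def _hook(args):
      # args is a threading.ExceptHookArgs; stash the exception for later.
      self._errors.append(args.exc_value)

    threading.excepthook = _hook
    return self

  def __exit__(self, exc_type, exc_val, exc_tb):
    threading.excepthook = self._orig_hook
    if exc_type is None and self._errors:
      # Rethrow the first captured worker-thread exception on the main thread.
      raise self._errors[0]
    return False

# Demo: the worker's AssertionError now surfaces as a main-thread failure,
# which assertRaisesRegex can assert on.
def _worker():
  raise AssertionError('Failed assert')

try:
  with _ThreadExceptionCollector():
    t = threading.Thread(target=_worker)
    t.start()
    t.join()
except AssertionError as e:
  print('caught on main thread:', e)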

Why
Ensures the failure path for assert_that is exercised reliably and reported as a proper test failure, rather than a warning emitted from a background thread.

Summary
The failure originally occurred on a worker thread rather than the main thread, causing pytest to emit a PytestUnhandledThreadExceptionWarning instead of failing the test. This fix intercepts those background-thread exceptions via threading.excepthook and re-throws them on the main thread after wait_until_finish(), so pytest now treats them as proper test failures. The warning no longer appears because the exception is no longer “unhandled,” eliminating the nondeterministic pass/fail behavior.

Verification
Stress run x200 (no flakes):

for i in {1..200}; do
  echo "Run $i"
  pytest -q apache_beam/runners/portability/portable_runner_test.py::PortableRunnerTest::test_assert_that \
    -W error::pytest.PytestUnhandledThreadExceptionWarning -s --maxfail=1 || break
done

Result: 200/200 passes; no thread-exception warnings.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @jh1231223, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a long-standing flakiness issue in the PortableRunner tests where exceptions originating in worker threads were not properly propagated to the main thread, leading to PytestUnhandledThreadExceptionWarning instead of a test failure. The solution involves intercepting these background thread exceptions and re-raising them on the main thread once the pipeline execution concludes. This ensures that test failures are reliably reported, making the test suite more robust and predictable.

Highlights

  • Flaky Test Fix: Addresses a known flaky test behavior in PortableRunnerTest::test_assert_that (tracked in #35211, "[Failing Test]: :sdks:python:test-suites:tox:py39:testPython39 can be flaky") where assert_that failures on worker threads were only surfaced as warnings.
  • Exception Handling: Introduces a mechanism to capture exceptions from worker threads using threading.excepthook.
  • Pipeline Run Wrapping: Wraps beam.Pipeline.run(...).wait_until_finish() to rethrow any captured worker-thread exceptions on the main thread after the pipeline completes.
  • Test Determinism: Updates test_assert_that to utilize this new exception propagation, converting an intermittently passing test into a deterministic failure for the negative case, thus eliminating flakiness.


codecov Bot commented Oct 12, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 55.07%. Comparing base (fb80584) to head (a26f249).

Additional details and impacted files
@@             Coverage Diff              @@
##             master   #36485      +/-   ##
============================================
- Coverage     55.07%   55.07%   -0.01%     
  Complexity     1666     1666              
============================================
  Files          1059     1059              
  Lines        165387   165387              
  Branches       1190     1190              
============================================
- Hits          91088    91083       -5     
- Misses        72124    72129       +5     
  Partials       2175     2175              
Flag Coverage Δ
python 80.98% <ø> (-0.01%) ⬇️


@github-actions (Contributor)

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment assign set of reviewers

@github-actions (Contributor)

Assigning reviewers:

R: @damccorm for label python.

Note: If you would like to opt out of this review, comment assign to next reviewer.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@jh1231223 (Contributor, Author)

assign set of reviewers

@github-actions (Contributor)

Reviewers are already assigned to this PR: @damccorm

Review thread on the test_assert_that diff:

with self.assertRaisesRegex(Exception, 'Failed assert'):
  with self.create_pipeline() as p:
    assert_that(p | beam.Create(['a', 'b']), equal_to(['a']))
with patch_portable_runner_for_test():  # pylint: disable=not-context-manager
Contributor (reviewer):

Thanks - this is a good find.

With that said, I think that fixing the test is probably the wrong move here. If this test is flaky, it indicates that the underlying runner is not doing the right thing (the runner should fail this test every time, without any patches). So I think we need to address the behavior in the underlying runner, not in the test.

@jh1231223 (Contributor, Author):

Totally agree that the issue isn't the test.

I changed PortableRunner.wait_until_finish() so it re-raises worker errors instead of just logging them on the message stream. Before, failures could get buried in logs and the timing made the run look “successful” sometimes, which caused the flake. Now the error bubbles up, the job fails deterministically, and the test passes without any test changes.

The changes are in portable_runner.py, plus a tiny companion tweak in local_job_service.py.
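
A simplified, runnable sketch of that re-raise pattern using stand-in types; the actual change lives in PortableRunner's result handling and uses Beam's job API message types, so the names here are illustrative assumptions:

from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class JobMessage:
  # Stand-in for the Beam job API message type (illustrative).
  is_error: bool
  text: str

def wait_until_finish(messages: Iterable[JobMessage], final_state: str) -> str:
  # Drain the job message stream, remembering the last error instead of
  # only logging it.
  last_error: Optional[str] = None
  for m in messages:
    if m.is_error:
      last_error = m.text
  if final_state == 'FAILED':
    # Raise on the caller's (main) thread so the job fails deterministically
    # rather than the error being buried in worker logs.
    raise RuntimeError(last_error or 'Pipeline failed')
  return final_state

# The negative assert_that case now fails on every run:
try:
  wait_until_finish([JobMessage(True, 'Failed assert')], 'FAILED')
except RuntimeError as e:
  print(e)  # Failed assert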

@damccorm (Contributor) left a comment:

Letting the remaining tests run before merging, but this generally looks good to me - thanks! This is a nice find.

@damccorm merged commit 118b3c7 into apache:master on Oct 16, 2025
120 of 121 checks passed