Make ops consistent and test ops consistency. by hertschuh · Pull Request #21334 · keras-team/keras · GitHub

Make ops consistent and test ops consistency. #21334


Merged · 1 commit · Jun 2, 2025

Conversation

@hertschuh hertschuh commented May 29, 2025

Added tests to verify that:

  • All the parameters in the op class `call` (dynamic / tensor parameters) and `__init__` (static / non-tensor parameters) match the parameters in the op function
    • the names should match
    • default values should match
    • the order should be consistent, usually with the static parameters all at the end
  • Ops are exported in `keras.ops`
  • Ops are exported in their module, e.g. `keras.ops.numpy`
  • Ops are implemented in every backend
  • The backend implementation has the same signature as the op wrapper (same parameters in the same order with the same default values)

Note about default values: only the default values declared in the common op function are visible to users, which is why those values were kept unchanged and propagated to the class and backend implementations.

  • The default values for the class `__init__` are not used because the op function passes all parameters when constructing the class.
  • The default values for the backend implementation are not used because the common op function passes all parameters when calling the backend implementation. The exception to this is when an op implementation calls another op directly.

Therefore:

  • The defaults on the backend implementations are just a reminder for the implementer.
  • The defaults on the class `__init__` are mostly a reminder, but can also serve as a forward compatibility mechanism when deserializing a model after a parameter is added.
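
The forward-compatibility point can be illustrated with a hypothetical op (the op and the `keepdims` parameter are illustrative, not from the PR diff): a config serialized before a parameter existed still deserializes, because `__init__` supplies the new default:

```python
class Sum:
    """Hypothetical op; "keepdims" stands in for a parameter added in a
    later release."""

    def __init__(self, axis=0, keepdims=False):
        self.axis = axis
        self.keepdims = keepdims  # default fills the gap for old configs

    @classmethod
    def from_config(cls, config):
        # Standard Keras-style reconstruction from a saved config dict.
        return cls(**config)


# A config saved before "keepdims" was introduced has no such key,
# yet deserialization still succeeds:
old_config = {"axis": 1}
op = Sum.from_config(old_config)
```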

Fixed all inconsistencies, mostly inconsistent default values.
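
A minimal sketch of the kind of signature-consistency check such tests can perform, using `inspect` (the two functions are illustrative stand-ins; the actual test code in the PR may differ and walks the real modules):

```python
import inspect


def mean(x, axis=None, keepdims=False):
    """Illustrative op wrapper."""


def backend_mean(x, axis=None, keepdims=False):
    """Illustrative backend implementation."""


def check_consistency(op_fn, backend_fn):
    """Assert same parameter names, order, and default values."""
    op_params = inspect.signature(op_fn).parameters
    be_params = inspect.signature(backend_fn).parameters
    assert list(op_params) == list(be_params), "parameter order differs"
    for name, param in op_params.items():
        assert param.default == be_params[name].default, (
            f"default for {name!r} differs"
        )


check_consistency(mean, backend_mean)  # passes: signatures agree
```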

codecov-commenter commented May 30, 2025

Codecov Report

Attention: Patch coverage is 94.47853% with 9 lines in your changes missing coverage. Please review.

Project coverage is 82.65%. Comparing base (de9cf25) to head (f3de9dc).

Files with missing lines                           Patch %   Lines
keras/src/backend/openvino/core.py                 40.00%    3 Missing ⚠️
keras/src/ops/nn.py                                70.00%    3 Missing ⚠️
keras/src/backend/openvino/nn.py                   71.42%    2 Missing ⚠️
keras/api/_tf_keras/keras/ops/numpy/__init__.py     0.00%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21334      +/-   ##
==========================================
- Coverage   82.65%   82.65%   -0.01%     
==========================================
  Files         565      565              
  Lines       54823    54833      +10     
  Branches     8514     8515       +1     
==========================================
+ Hits        45315    45323       +8     
- Misses       7414     7417       +3     
+ Partials     2094     2093       -1     
Flag Coverage Δ
keras 82.46% <94.47%> (-0.01%) ⬇️
keras-jax 63.58% <44.78%> (-0.01%) ⬇️
keras-numpy 58.75% <58.28%> (+<0.01%) ⬆️
keras-openvino 33.13% <39.87%> (-0.01%) ⬇️
keras-tensorflow 63.99% <52.76%> (-0.01%) ⬇️
keras-torch 63.65% <48.46%> (-0.01%) ⬇️


@hertschuh hertschuh force-pushed the ops_consistency branch 2 times, most recently from 1ef82ba to dfdd66c Compare May 30, 2025 22:36
@fchollet (Collaborator) left a comment

LGTM, thank you for the PR!

@google-ml-butler bot added the kokoro:force-run and ready to pull (Ready to be merged into the codebase) labels Jun 2, 2025
@fchollet fchollet merged commit ad7bbb8 into keras-team:master Jun 2, 2025
10 checks passed
@hertschuh hertschuh deleted the ops_consistency branch June 2, 2025 18:57
Labels: kokoro:force-run · ready to pull (Ready to be merged into the codebase) · size:L
5 participants