Conversation

@EmanAbdelhaleem (Contributor) commented Jan 20, 2026

Fixes #1625
Depends on #1576
Related to #1575

Details
This PR implements the Setups resource and refactors its existing functions.

@codecov-commenter commented Jan 20, 2026

Codecov Report

❌ Patch coverage is 64.59391% with 279 lines in your changes missing coverage. Please review.
✅ Project coverage is 54.22%. Comparing base (d421b9e) to head (2cee514).

Files with missing lines Patch % Lines
openml/_api/clients/http.py 52.63% 90 Missing ⚠️
openml/_api/resources/base/versions.py 24.71% 67 Missing ⚠️
openml/_api/setup/backend.py 65.16% 31 Missing ⚠️
openml/_api/resources/base/fallback.py 26.31% 28 Missing ⚠️
openml/_api/resources/setup.py 70.58% 25 Missing ⚠️
openml/_api/setup/_utils.py 56.00% 11 Missing ⚠️
openml/testing.py 55.55% 8 Missing ⚠️
openml/_api/setup/builder.py 81.08% 7 Missing ⚠️
openml/setups/setup.py 33.33% 6 Missing ⚠️
openml/_api/resources/base/base.py 78.57% 3 Missing ⚠️
... and 2 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1619      +/-   ##
==========================================
+ Coverage   52.04%   54.22%   +2.18%     
==========================================
  Files          36       63      +27     
  Lines        4333     5040     +707     
==========================================
+ Hits         2255     2733     +478     
- Misses       2078     2307     +229     

☔ View full report in Codecov by Sentry.

@geetu040 mentioned this pull request Jan 21, 2026 (25 tasks)
@EmanAbdelhaleem marked this pull request as ready for review January 21, 2026 23:13
@EmanAbdelhaleem (Contributor, Author):

@geetu040 Ready for review

@geetu040 (Collaborator) left a comment:

The HTTP clients are already initialized; you can copy the design from tests/test_api/test_versions.py.

flow.name = f"{get_sentinel()}{flow.name}"
flow.publish()
TestBase._mark_entity_for_removal("flow", flow.flow_id, flow.name)
TestBase.logger.info(f"collected from {__file__.split('/')[-1]}: {flow.flow_id}")
@geetu040 (Collaborator):

It's nice that you have it here, but it won't take effect; I should add TestBase as a parent of TestAPIBase so this works.
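The change being described can be sketched roughly as follows. TestBase, TestAPIBase, and _mark_entity_for_removal mirror names from this thread, but the bodies below are simplified stand-ins, not the real openml test code (which lives in openml/testing.py):

```python
# Simplified stand-ins for the classes discussed above, for illustration only.
class TestBase:
    entities_to_remove = []

    @classmethod
    def _mark_entity_for_removal(cls, entity_type, entity_id, name):
        # Register an uploaded entity so teardown can delete it later.
        cls.entities_to_remove.append((entity_type, entity_id, name))


# Once TestBase is a parent, calls to _mark_entity_for_removal made from
# TestAPIBase-derived tests feed the same cleanup list.
class TestAPIBase(TestBase):
    pass


TestAPIBase._mark_entity_for_removal("flow", 42, "sklearn.dummy.DummyClassifier")
```

With the inheritance in place, entities marked inside API tests end up in the shared cleanup list, so the server-side teardown actually sees them.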

assert setup_id == run.setup_id


class TestSetupV2(TestAPIBase):
@geetu040 (Collaborator):

This class has no tests; you should check that its not-implemented methods raise the correct exception.
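One way to write that check, sketched with a hypothetical stand-in for the V2 resource (the real class and the exception it raises may differ in the PR):

```python
# Hypothetical stand-in for the V2 setups resource; in the real code this
# would inherit from the resource base class and target the V2 API.
class SetupV2API:
    def exists(self, file_elements):
        raise NotImplementedError("setup/exists is not available in the V2 API yet")


def test_exists_raises_not_implemented():
    api = SetupV2API()
    try:
        api.exists({})
    except NotImplementedError:
        pass
    else:
        raise AssertionError("expected NotImplementedError")
```

With pytest available, the try/except/else can be replaced by `with pytest.raises(NotImplementedError): api.exists({})`.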

self.resource = SetupV1API(self.http_client)
self.extension = SklearnExtension()

@pytest.mark.uses_test_server()
@geetu040 (Collaborator):

Can be moved to class level.

@EmanAbdelhaleem (Contributor, Author):

You are talking about the extension, right? I followed the code of the existing tests, and it's initialized in the setUp() function. I'm not even sure moving it would make a difference; setUp() is already called before each test in the class, right?

@geetu040 (Collaborator):

I was talking about @pytest.mark.uses_test_server()
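For reference, a marker applied at class level is attached by pytest to every test method in the class, so it does not need repeating per test. A minimal sketch (the marker name comes from the diff; the class body is illustrative):

```python
import pytest


# Applying the marker once at class level attaches it to every test method.
@pytest.mark.uses_test_server()
class TestSetupV1:
    def test_first(self):
        assert True

    def test_second(self):
        assert True
```

Note that custom markers should be registered in the project's pytest configuration to avoid unknown-marker warnings.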

@geetu040 (Collaborator):

Good, hopefully the other tests are not breaking.

# although the flow exists (created as of previous statement),
# we can be sure there are no setups (yet) as it was just created
# and hasn't been run
setup_id = openml.setups.setup_exists(flow)
@EmanAbdelhaleem (Contributor, Author) commented Feb 9, 2026:

@geetu040
I am not sure how to test the exists method without mocking, as it depends on a Flow object and a Run object. The tests I am using here, which I copied from test_setups_functions.py, are not actually testing the API, because they go through openml.setups.setup_exists().

The API method's signature is def exists(self, file_elements: dict[str, Any]) -> int | bool:, so I need to pass file_elements if I want to test it. To get these file elements, the following code is used in def setup_exists(flow: OpenMLFlow) -> int: in setups/functions.py:

# sadly, this api call relies on a run object
openml.flows.functions._check_flow_for_server_id(flow)
if flow.model is None:
    raise ValueError("Flow should have model field set with the actual model.")
if flow.extension is None:
    raise ValueError("Flow should have model field set with the correct extension.")

# checks whether the flow exists on the server and flow ids align
exists = flow_exists(flow.name, flow.external_version)
if exists != flow.flow_id:
    raise ValueError(
        f"Local flow id ({flow.id}) differs from server id ({exists}). "
        "If this issue persists, please contact the developers.",
    )

openml_param_settings = flow.extension.obtain_parameter_values(flow)
description = xmltodict.unparse(_to_dict(flow.flow_id, openml_param_settings), pretty=True)
file_elements = {
    "description": ("description.arff", description),
}

So, I would first need to create and publish the flow as in these tests, and then build the file_elements as in the code above. However, I think this might be wrong for a unit test. What do you think?

@geetu040 (Collaborator):

Yes, you are right, this is not ideal for a unit test, but we already have many not-so-ideal tests, so adding one more won't be a problem. That said, if mocking makes it simpler for you, go ahead and use it; we already use it in places where, without mocking, irrelevant parts of the code would need to be executed.
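A mocked test could look roughly like this. SetupV1API's internals are not shown in this thread, so FakeSetupAPI below is an illustrative stand-in, and the endpoint name and response shape are assumptions, not the real openml API:

```python
from unittest import mock


class FakeSetupAPI:
    """Illustrative stand-in for the setups resource under test."""

    def __init__(self, http_client):
        self.http_client = http_client

    def exists(self, file_elements):
        # Assumed endpoint name and response shape, for illustration only.
        response = self.http_client.post("setup/exists", files=file_elements)
        setup_id = int(response["setup_exists"]["id"])
        return setup_id if setup_id > 0 else False


# The HTTP client is mocked, so no flow has to be created or published.
http_client = mock.Mock()
http_client.post.return_value = {"setup_exists": {"id": "123"}}

api = FakeSetupAPI(http_client)
file_elements = {"description": ("description.xml", "<oml:setup/>")}

assert api.exists(file_elements) == 123
http_client.post.assert_called_once_with("setup/exists", files=file_elements)
```

This exercises only the resource's own logic (building the request, parsing the response) without touching the Flow/Run machinery the thread discusses.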


Development

Successfully merging this pull request may close these issues.

[ENH] V1 → V2 API Migration - setups