[pull] master from ray-project:master#569
Merged
pull[bot] merged 47 commits into fishbone:master on May 25, 2022
Conversation
Since we have already removed support for multiple workers in one process, remove RayRuntimeInternal accordingly.
…y use bold font (#24994)
…25104) When the primary copy of an object is lost, the owner will try to pin the secondary copy. In the meantime, the secondary copy might be evicted. In this case, the PinObjectIDs RPC call should return an error to let the owner know that the pin failed. Otherwise the owner will mistakenly think the secondary copy is pinned.
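The race described above can be sketched as follows. This is a hypothetical illustration of the semantics, not Ray's actual C++ implementation; `pin_object` and the dict-based store are stand-ins.

```python
# Hypothetical sketch: pinning must report failure when the secondary copy
# was evicted before the pin request arrived, instead of silently succeeding.
def pin_object(object_store, object_id):
    """Return True if the copy was pinned, False (an error to the owner) otherwise."""
    if object_id not in object_store:
        # The secondary copy was evicted in the meantime: tell the owner
        # the pin failed so it does not assume the copy is safe.
        return False
    object_store[object_id]["pinned"] = True
    return True
```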
Flush command stdout/stderr before exiting CommandRunner.run, so that setup command output is less likely to get swallowed.
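The flush-before-exit pattern described above can be sketched like this; `run_with_flush` and its callable argument are hypothetical stand-ins for `CommandRunner.run`, not the actual autoscaler code.

```python
import sys

def run_with_flush(run_command):
    """Run a command callable, flushing stdout/stderr before returning.

    Without the finally-flush, buffered setup-command output can be
    swallowed when the process exits right after the command finishes.
    """
    try:
        return run_command()
    finally:
        # Flush both streams so no buffered output is lost on exit.
        sys.stdout.flush()
        sys.stderr.flush()
```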
…iner` class methods. (#24684)
…t and learning tests. (#24579)
This improves error handling per https://docs.google.com/document/d/1IeEsJOiurg-zctOcBjY-tQVbsCmURFSnUCTkx_4a7Cw/edit#heading=h.pdzl9cil9e8z (the RPC part). Semantics: if all queries to the source failed, raise a RayStateApiException. If only some queries failed, report the partial failure via warnings.warn when print_api_stats=True. That flag is true for the CLI, and false when used within the Python API or when json/yaml output is required.
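A minimal sketch of those semantics, assuming a list of per-source `(data, error)` results; the function name and input shape are illustrative, not Ray's state API internals.

```python
import warnings

class RayStateApiException(Exception):
    """Raised when every queried source fails."""

def aggregate_query_results(results, print_api_stats=False):
    """results: list of (data, error) tuples, one per queried source."""
    failures = [err for _, err in results if err is not None]
    if failures and len(failures) == len(results):
        # All sources failed: surface a hard error.
        raise RayStateApiException(f"All {len(results)} queries failed.")
    if failures and print_api_stats:
        # Partial failure: warn only in CLI mode (print_api_stats=True).
        warnings.warn(f"{len(failures)} of {len(results)} queries failed.")
    return [data for data, err in results if err is None]
```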
Currently the release test runner prefers the first successfully built version of a cluster env, instead of the last version. But sometimes a cluster env may build successfully on Anyscale yet fail to launch a cluster (e.g. version 2 here), or new dependencies need to be installed, so a new version needs to be built. The existing logic always picks up the first successful build and cannot pick up the new cluster env version. Although this is an edge case (tweaking cluster env versions with the same Ray wheel or cluster env name), others could plausibly run into it. Also, avoid running most of the CI tests for changes under release/ray_release/.
… for other Q Learning Algos. (#24923)
…runtime_env logic (#25087)
Ludwig 0.5.1 requires jsonschema>4, so we have to install it in the test environment. Related: ludwig-ai/ludwig#2055
changed, there is no `central_f1` now.
…ct (#25004) * hang * update * up * up * comment
…ger class (#24771) This PR consolidates the Ray Train and Tune checkpoint managers. These concepts previously did something very similar but in different modules. To simplify future maintenance, we've consolidated the common core.

- This PR keeps full compatibility with the previous interfaces and implementations. This means that for now, Train and Tune will have separate CheckpointManagers that both extend the common core.
- This PR prepares Tune to move to a CheckpointStrategy object.
- In follow-up PRs, we can further unify interfacing with the common core, possibly removing any train- or tune-specific adjustments (e.g. moving to setup on init rather than at runtime for Ray Train).

The consolidation is split into three PRs:
1. This PR - adds a common checkpoint manager class.
2. #24772 - based on this PR, adds the integration for Ray Train.
3. #24430 - based on #24772, adds the integration for Ray Tune.
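The consolidation pattern can be sketched as a common core that framework-specific managers extend. All names below are illustrative, not Ray's actual classes, and bookkeeping (e.g. the keep-N eviction) is an assumed simplification.

```python
# Hypothetical sketch: a shared core with framework-specific subclasses.
class CommonCheckpointManager:
    """Common bookkeeping shared by Train and Tune managers."""

    def __init__(self, keep_checkpoints_num=None):
        self._keep = keep_checkpoints_num
        self._checkpoints = []

    def register_checkpoint(self, checkpoint):
        self._checkpoints.append(checkpoint)
        # Evict the oldest checkpoint once we exceed the retention limit.
        if self._keep is not None and len(self._checkpoints) > self._keep:
            self._delete_checkpoint(self._checkpoints.pop(0))

    def _delete_checkpoint(self, checkpoint):
        pass  # Subclasses override with framework-specific cleanup.

class TrainCheckpointManager(CommonCheckpointManager):
    def _delete_checkpoint(self, checkpoint):
        # Train-specific cleanup (e.g. removing files) would go here.
        pass
```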
dataset_shuffle_push_based_sort_1tb is consistently passing for weeks.
closes #24475 The current deployment graph has big perf issues compared with using a plain deployment handle, mostly because of the overhead of the DAGNode traversal mechanism. We need this mechanism to power the DAG API, especially for deeply nested objects in args where we rely on pickling; but meanwhile each execution amounts to re-creating and replacing every `DAGNode` instance involved, which incurs overhead. Some overhead is inevitable due to pickling and executing DAGNode Python code, but it could be quite minimal. As I profiled earlier, pickling itself is quite fast for our benchmarks, on the order of microseconds. Meanwhile, the elephant in the room is that DeploymentNode and its relatives do far more work in their constructors than necessary, slowing everything down. So the fix is as simple as:

1) Introduce a new set of executor dag node types that contain the absolute minimal information: they only preserve the DAG structure, the traversal mechanism, and the ability to call the relevant deployment handles.
2) Add a simple new pass in our build() that generates and replaces nodes with executor dag nodes to produce a final executor dag to run the graph.

The current ray dag -> serve dag conversion mixes in a lot of logic related to deployment generation and init args. In the longer term we should remove it, but our correctness depends on it, so I would rather leave that for a separate PR.

### Current 10 node chain with deployment graph `.bind()`
```
chain_length: 10, num_clients: 1
latency_mean_ms: 41.05, latency_std_ms: 15.18
throughput_mean_tps: 27.5, throughput_std_tps: 3.2
```

### Using raw deployment handle without dag overhead
```
chain_length: 10, num_clients: 1
latency_mean_ms: 20.39, latency_std_ms: 4.57
throughput_mean_tps: 51.9, throughput_std_tps: 1.04
```

### After this PR:
```
chain_length: 10, num_clients: 1
latency_mean_ms: 20.35, latency_std_ms: 0.87
throughput_mean_tps: 48.4, throughput_std_tps: 1.43
```
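The "minimal executor node" idea above can be sketched like this: each node keeps only the graph structure and a callable, so per-request execution does no heavy constructor work. `ExecutorDAGNode` is a hypothetical name and the lambdas stand in for deployment handles; this is not Serve's real implementation.

```python
# Illustrative sketch: a node holds only a callable and its upstream nodes.
class ExecutorDAGNode:
    def __init__(self, fn, upstream=None):
        self._fn = fn                    # stand-in for a deployment handle call
        self._upstream = upstream or []  # upstream executor nodes

    def execute(self, *args):
        # Resolve upstream nodes first, then invoke this node's callable.
        resolved = [node.execute(*args) for node in self._upstream] or list(args)
        return self._fn(*resolved)

# A 3-node chain where each node adds 1 to its input.
leaf = ExecutorDAGNode(lambda x: x + 1)
mid = ExecutorDAGNode(lambda x: x + 1, upstream=[leaf])
root = ExecutorDAGNode(lambda x: x + 1, upstream=[mid])
```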
Add chaos tests for dataset shuffle: both push-based and non-push-based.
This makes it possible to use an NFS file system that is shared on a cluster for runtime_env working directories. Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com> Co-authored-by: Eric Liang <ekhliang@gmail.com>
Update a few docs and param names.
`TensorflowPredictor.predict` doesn't correctly produce logits. For more information, see #25137.
…n push-based shuffle (#25108) This fixes two bugs in Datasets push-based shuffle: 1) The scheduling strategy specified by the caller was not getting propagated correctly to the map stage in push-based shuffle. This is because the map and reduce stages shared the same ray.remote options dict, and we deleted the caller-specified scheduling strategy from the reduce stage so that we could specify a NodeAffinitySchedulingStrategy instead. 2) We were only reporting partial stats for the merge stage. Related issue number: Issue 1 is necessary for performance at large scale (#24480).
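The dict-aliasing bug in issue 1 can be sketched as follows. The function name and the `"NODE_AFFINITY"` placeholder string are hypothetical; in Ray the value would be a NodeAffinitySchedulingStrategy object.

```python
# Hypothetical sketch: sharing one options dict between stages means that
# mutating it for the reduce stage also changes the map stage. The fix is
# to give each stage its own copy before stage-specific edits.
def build_stage_options(caller_options):
    map_options = dict(caller_options)      # copy, don't alias
    reduce_options = dict(caller_options)   # copy, don't alias
    # Reduce stage replaces the caller's strategy with node affinity.
    reduce_options["scheduling_strategy"] = "NODE_AFFINITY"  # placeholder
    return map_options, reduce_options
```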
Now that the "smaller_instances" versions of these tests are stable, we can stop running the version that uses bigger instances.
* workflow indexing
* simplify workflow storage API
* Only fix workflow status when updating the status.
* support status filter
When loading data from GCS, detached actors were treated the same as normal actors. But a detached actor lives beyond its job's scope and should be loaded even when the job is finished. This PR fixes that.
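The fixed filter condition amounts to something like the sketch below; the function name and dict-based actor records are illustrative, not the actual GCS code.

```python
# Hedged sketch of the described fix: detached actors outlive their
# creating job, so they must be restored even when that job has finished.
def should_load_actor(actor, finished_job_ids):
    if actor["is_detached"]:
        return True  # Detached actors live beyond the job's scope.
    # Normal actors are only loaded while their job is still alive.
    return actor["job_id"] not in finished_job_ids
```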
This PR adds API annotations or changes the scope of several Ray Tune library classes.
Gzipping binary data is inefficient and slows down data transfer significantly.
MLDataset is replaced by Ray Dataset.
Add landing & key concepts pages for clusters
Redo for PR #24698: This fixes two bugs in data locality: When a dependent task is already in the CoreWorker's queue, we ran the data locality policy to choose a raylet before we added the first location for the dependency, so it would appear as if the dependency was not available anywhere. The locality policy did not take into account spilled locations. Added C++ unit tests and Python tests for the above. Split test_reconstruction to avoid test timeout. I believe this was happening because the data locality fix was causing extra scheduler load in a couple of the reconstruction stress tests.
As described in the related issue, using `model_weight` as the key throws an error. This update points the user to use `model` as the key instead. Co-authored-by: tamilflix <tamilflix30@gmail.com>
We want to use `clangd` as the language server. `clangd` is an awesome language server that has many features and is very accurate. But it needs a `compile_commands.json` to work. This PR adds a popular bazel rule to generate this file.
See Commits and Changes for more details.
Created by pull[bot]