Skip to content

Conversation

@daphne-cornelisse
Copy link

@daphne-cornelisse daphne-cornelisse commented Dec 14, 2025

... and a small bug fix regarding collision / off-road rewards

@daphne-cornelisse daphne-cornelisse marked this pull request as ready for review December 14, 2025 00:08
@greptile-apps
Copy link

greptile-apps bot commented Dec 14, 2025

Greptile Overview

Greptile Summary

Reduced MAX_ROAD_SEGMENT_OBSERVATIONS from 200 to 128 to optimize training performance, achieving a 45% speedup (220K → 320K steps/sec on RTX 4080) while maintaining model score.

Key Changes

  • Decreased maximum road segment observations from 200 to 128 in drive.h
  • Reduces observation space size by 72 road features (7 features per segment × 72 segments)
  • Automatically propagates through memory allocation, neural network buffers, and Python bindings via the constant definition

Impact

  • Memory: Smaller observation arrays allocated in allocate() function
  • Neural Network: Reduced obs_road buffer size in DriveNet
  • Performance: Lower memory bandwidth and computation requirements during training
  • Model Quality: PR title indicates score is maintained despite fewer road observations

Validation

All usages of MAX_ROAD_SEGMENT_OBSERVATIONS properly reference the constant definition, ensuring consistency across the codebase. The change is minimal, well-contained, and follows the existing architecture pattern.

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk - it's a single constant change that propagates correctly throughout the codebase
  • The change is extremely simple and well-isolated: a single constant definition that is properly referenced throughout the codebase. All dependent code uses the constant rather than hardcoded values, ensuring consistency. The 45% performance improvement with maintained model score demonstrates successful empirical validation. No logic changes, no new code, and no risk of introducing bugs.
  • No files require special attention

Important Files Changed

File Analysis

Filename Score Overview
pufferlib/ocean/drive/drive.h 5/5 Reduced MAX_ROAD_SEGMENT_OBSERVATIONS from 200 to 128, which directly reduces observation space size and memory allocation for performance optimization

Sequence Diagram

sequenceDiagram
    participant Env as Drive Environment
    participant Alloc as allocate()
    participant Obs as get_observations()
    participant Net as DriveNet
    participant Train as Training Loop
    
    Note over Env: MAX_ROAD_SEGMENT_OBSERVATIONS = 128 (was 200)
    
    Env->>Alloc: Initialize environment
    Alloc->>Alloc: Calculate max_obs size
    Note over Alloc: ego_dim + PARTNER_FEATURES*(MAX_AGENTS-1)<br/>+ ROAD_FEATURES*MAX_ROAD_SEGMENT_OBSERVATIONS
    Alloc->>Alloc: Allocate observations array
    Note over Alloc: Size reduced by 72 road features<br/>(7 features × 72 fewer segments)
    
    Train->>Obs: Collect observations per agent
    Obs->>Obs: Get neighbor road entities
    Note over Obs: Limited to 128 road segments<br/>(was 200)
    Obs->>Obs: Fill road observations
    Obs->>Obs: Zero-pad remaining slots
    Note over Obs: Fewer padding slots needed
    Obs-->>Train: Return observations
    
    Train->>Net: Forward pass with observations
    Net->>Net: Process road features
    Note over Net: obs_road buffer size:<br/>num_agents × 128 × 13<br/>(was num_agents × 200 × 13)
    Net-->>Train: Return actions
    
    Note over Train: Performance improvement:<br/>220K → 320K steps/sec on RTX 4080<br/>Due to smaller observation space
Loading

Copy link

@greptile-apps greptile-apps bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

1 file reviewed, no comments

Edit Code Review Agent Settings | Greptile

if (collision_state > 0) {
if (collision_state == VEHICLE_COLLISION) {
env->rewards[i] = env->reward_vehicle_collision;
env->rewards[i] += env->reward_vehicle_collision;

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

was this a bug before or is this a bug now?

Copy link
Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

It was this way in the original implementation, I just never noticed it

Copy link
Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

dynamics_model = classic
reward_vehicle_collision = -0.5
reward_offroad_collision = -0.2
reward_offroad_collision = -0.5

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

unsure if this was also intended to be included in the PR

@daphne-cornelisse daphne-cornelisse merged commit 43933b2 into main Dec 14, 2025
14 checks passed
@daphne-cornelisse daphne-cornelisse deleted the dc/speedup branch December 14, 2025 05:46
m2kulkarni added a commit to Emerge-Lab/Adaptive_Driving_Agent that referenced this pull request Feb 1, 2026
* Goal behavior fixes (Emerge-Lab#124)

* Make sure we can overwrite goal_behavior from python side and other minor improvements.

* Fix stop goal behavior bug.

* Make goal radius configurable for WOSAC eval.

* Reset to defaults + cleanup.

* Minor

* Minor

* Incorprate feedback.

* Update drive.h

Accel is being cut in half for no reason

* Add mode to only control the self-driving car (SDC) (Emerge-Lab#130)

* Add control mode.

* Fix error message.

* Fix incorrect obs dim in draw_agent_obs (Emerge-Lab#109)

* Fix incorrect obs dim in draw_agent_obs

* Update drive.h

---------

Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>

* Replace product distribution action space with joint distribution (Emerge-Lab#104)

* make joint action space, currently uses multidiscrete and should be replaced with discrete

* Fix shape mismatch in logits.

* Minor

* Revert: Puffer doesn't like Discrete

* Minor

* Make action dim conditional on dynamics model.

---------

Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>

* Replace default ent_coef and learning_rate hparams (Emerge-Lab#134)

* Replace default learning rate and ent_coef.

* Minor

* Round.

* Add new weights binary with joint action space. (Emerge-Lab#136)

* Add support for logging optional evals during training (Emerge-Lab#133)

* Quick integration of WOSAC eval during training, will clean up tomorrow.

* Refactor eval code into separate util functions.

* Refactor code to support more eval modes.

* Add human replay evaluation mode.

* Address comments.

* Fix args and add to readme

* Improve and simplify code.

* Minor.

* Reset to default ini settings.

* Test for ini parsing (python and C) (Emerge-Lab#116)

* Add python test for ini file parsing

- Check values from default.ini
- Check values from drive.ini
- Additional checks for comments capabilities

* Add C test for ini file parsing

- Add CMake project to configure, build and test
- Test value parsing
- Test comments format
- Add comments for (un)expected results

* FIX: Solve all memory errors in tests

- Compile with asan

* Remove unprinted messages

* Add utest to the CI

- Ini parsing tests
- Update comments to clarify intent

* Update tests/ini_parser/ini_tester.c

- Change check conditions to if/else instead of ifs
- Speed up parsing speed (exist as soon as match is found)

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update tests/ini_parser/ini_tester.c

- Fix mismatch assignation

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* FIX: Move num_map to the high level of testing

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Fix missing arg (Emerge-Lab#141)

* Add WOSAC interaction + map metrics. Switch from np -> torch. (Emerge-Lab#138)

* Adding Interaction features

Notes:
- Need to add safeguards to load each map only once
- Might be slow if we increase num_agents per scenario, next step will
be torch.

I added some tests to see the distance and ttc computations are correct,
and metrics_sanity_check looks okay. I'll keep making some plots to
validate it.

* Added the additive smoothing logic for Bernoulli estimate.

Ref in original code:
message BernoulliEstimate {
    // Additive smoothing to apply to the underlying 2-bins histogram, to avoid
    // infinite values for empty bins.
    optional float additive_smoothing_pseudocount = 4 [default = 0.001];
  }

* Little cleanup of estimators.py

* Towards map-based realism metrics:

First step: extract the map from the vecenv

* Second step: Map features (signed distance to road edges)

A bunch of little tests in test_map_metric_features.py to ensure this do what it is supposed to do.

python -m pufferlib.ocean.benchmark.test_map_metrics

Next steps should be straightforward.

Will need to check at some point if doing this on numpy isnt too slow

* Map-based features.

This works, and passes all the tests, I would still want to make additionnal checks with the renderer because we never know.

With this, we have the whole set of WOSAC metrics (except for traffic lights), and we might also have the same issue as the original WOSAC code: it is slow.

Next step would be to transition from numpy to torch.

* Added a visual sanity check, plot random  trajectories and indicate when WOSAC sees an offorad or a collision

python pufferlib/ocean/benchmark/visual_sanity_check.py

* Update WOSAC control mode and ids.

* Eval mask for tracks_to_predict agents

* Replacing numpy by torch for the computation of interaction and map metrics.

It makes the computation way faster, and all the tests pass.

I didn't switch kinematics to torch because it was already fast, but I might make the change for consistency.

* Precommit

* Resolve small comments.

* More descriptive error message when going OOM.

---------

Co-authored-by: WaelDLZ <wawa@CRE1-W60060.vnet.valeo.com>
Co-authored-by: Waël Doulazmi <wawa@10-20-1-143.dynapool.wireless.nyu.edu>
Co-authored-by: Waël Doulazmi <wawa@Waels-MacBook-Air.local>
Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>

* Multi map render support to wandb (Emerge-Lab#143)

Co-authored-by: Pragnay Mandavilli <pm3881@gr052.hpc.nyu.edu>

* Add mode for controlled experiments (Emerge-Lab#144)

* Add option for targeted experiments.

* Rename for clarity.

* Minor

* Remove tag

* Add to help message and make deepcopy of args to prevent state pollution.

* Little optimizations to use less memory in interaction_features.py (Emerge-Lab#146)

* Little optimizations to use less memory in interaction_features.py

They mostly consist in using in-place operations and deleting unused variables.

Code passes the tests.

Next steps:
- clean the .cpu().numpy() in ttc computation
- memory optimization for the map_features as well

* Add future todo.

---------

Co-authored-by: Waël Doulazmi <waeldoulazmi@gmail.com>

* Fix broken link

* Data processing script that works decent. (Emerge-Lab#150)

* Pass `map_dir` to the env via `.ini` and enable evaluation on a different dataset (Emerge-Lab#151)

* Support train/test split with datasets.

* Switch defaults.

* Minor.

* Typo.

* More robust way of parsing the path.

* Add sprites in headless rendering (Emerge-Lab#152)

* Load the sprites inside eval-gif()

* Color consistency.

* pedestrians and cyclists 3d models

* Minor.

---------

Co-authored-by: Spencer Cheng <spenccheng@gmail.com>

* Faster file processing (Emerge-Lab#153)

* multiprocessing and progbar

* cleanup

* Add link to small clean eval dataset

* Fix link typo

* Gif for readme (Emerge-Lab#155)

* Test

* Edit.

* Edit.

* Fix link?

* Fix vertical spaces.

* Update README.md

* Several small improvements for release (Emerge-Lab#159)

* Get rid of magic numbers in torch net.

* Stop recording agent view once agent reaches first got goal. Respawning vids look confusing.

* Add in missing models for headless rendering.

* Fix bbox rotation bug in render function.

* Remove magic numbers. Define constants once in drive.h and read from there.

* WIP changes (Emerge-Lab#156)

Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>

* Releas note

* Remove magic numbers in `drivenet.h`, set `MAX_AGENTS=32` by default (Emerge-Lab#165)

* Get rid of magic numbers in torch net.

* Stop recording agent view once agent reaches first got goal. Respawning vids look confusing.

* Add in missing models for headless rendering.

* Fix bbox rotation bug in render function.

* Remove magic numbers. Define constants once in drive.h and read from there.

* Remove all magic numbers in drivenet.h

* Clean up more magic numbers.

* Minor

* Minor.

* Stable: Ensure all tests are passing (Emerge-Lab#168)

* Test the tests

* Fix

* Add option to zoom in on the map or show full map (Emerge-Lab#163)

* Modifying render to view full map

* Removing vlue lines from maps

* Add option to zoom in on the map.

---------

Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>

* Add documentation (Emerge-Lab#170)

* Documentation first pass

* incorporate previous docs

* style updates and corrections

* Add GitHub Actions workflow for docs deployment (Emerge-Lab#172)

* styling fixes (Emerge-Lab#173)

* Add clang format (Emerge-Lab#132)

* Add clang format

- Format C/C++/CUDA
- Prevent formatting json
- Prevent formatting pufferlib extensions

* [FORMAT] Apply clang format

- No code changes

* Add clang format

- Format C/C++/CUDA
- Prevent formatting json
- Prevent formatting pufferlib extensions

* [FORMAT] Apply clang format

* Keep matrix printing as it is
* No code change

* small default change.

---------

Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>

* Add Sanity Command + Maps (Emerge-Lab#175)

* initial commit

* Ignore generated sanity binaries and compute ego speed along heading

* Fix get_global_agent_state to include length/width outputs

* Revert ego speed change in drive.h

* Add sanity runner wiring and wandb name hook

* Revert drive path changes; use map_dir from sanity

* Set sanity runs to use generated map_dir and render map

* Expand map path buffer in drive binding to avoid overflow

* fix maps and add docs

* update readme with documentation link

* Simplify docs.

* Apply precommit.

---------

Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>

* Documentation edits (Emerge-Lab#176)

* Softer theme and edits.

* Improve structure.

* Blog post v0

* Typo

* Fixes

* Early environment resets based on agents' respawn status.  (Emerge-Lab#167)

* Added early termination parameter based on respawn status of all agents in an episode

* pre-commit fix

* fix test

* Apply precommit.

* Reduce variance in aggregate metrics by logging only if we have data for at least num_agents.

---------

Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>

* Speed up end-to-end training: 220K -> 320K on RTX 4080 by reducing # road points (score maintained) (Emerge-Lab#177)

* 220K -> 320K.

* Reward bug fix.

* Minor.

* Add pt. (Emerge-Lab#179)

* Docs edits (Emerge-Lab#178)

* Simplify data downloading.

* Add links.

* Update WOSAC eval section.

* Minor.

* Rephrase.

* Fixes.

* Naming edits.

* Naming edits.

* There is a typo in torch.py

* Use num_maps for eval (Emerge-Lab#164)

* Use num_maps for eval

* readme.md didnt pass precommit ?

* Add the use_all_maps arg in the call to resample

* Update the wosac sanity checks to use all maps as well. Nicer prints in the eval

* Add a comment in drive.ini

* Update HumanReplay to follow the same logic

* Remove num_agents from WOSAC eval as it is not needed anymore.
Update the comments in drive.ini

* Change a comment in drive.py

* Update wosac.md

* Evaluated the base policy on validation interactive dataset and updated wosac.md with its score.

Also put back default behavior in drive.ini

* Fix small bug in `drive.c` and add binary weights cpt  (Emerge-Lab#184)

* Add model binary file and make demo() work.

* Add docs.

* Add docs.

---------

Co-authored-by: Daphne <daphn3cor@gmail.com>

* Carla junction filter (Emerge-Lab#187)

* Added Z Coords, Polygonized Junction Area to handle point in polygon query

* Added Snapped Polylines to better polygonizing

* Fixed Extra Road Lines bug, better polygonization with debugging

* Fixed initial heading angle for Trajectory sampling

* Maps Before manual filtering

* Carla Before Manual with z coordinates

* NaN fixes

* Minor

* Carla Maps Cleaned 6/8

* Add pyxodr submodule at external/pyxodr

* added external submodule support, fixed num_tries for valid init position of agents, added arg parsing, cleaned up code

* Removed unstable process_roads_script, use carla_py123d

* add avg_speed as arg

* Remove old Carla Maps

* Update README.md

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Remove old jsons and testing files

* Remove redundant instructions from README

* indentation changes

* Minor editorial edits.

---------

Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Daphne <daphn3cor@gmail.com>

* Working Carla Maps (Emerge-Lab#189)

* collision fix (Emerge-Lab#192)

* collision fix

* lowered to 250 for actual theoretical guarantee

* Fix Ego Speed Calculation (Emerge-Lab#166)

* initial commit

* remove function

* restore weights, add trailing newline, fix ini

* update to unify logic and refactor to function

* fix reverse computation

* precommit fix

---------

Co-authored-by: Kevin <kj2676@nyu.edu>

* Small bug fix that makes road edge not appear in agent view for jerk model. (Emerge-Lab#197)

* add womd video (Emerge-Lab#195)

* Documentation first pass

* incorporate previous docs

* style updates and corrections

* initial commit

* Format drive helpers

* Add stop/remove collision behavior back (Emerge-Lab#169)

* Adding collision behavior back

* Removing uneccesary snippets

* Rebased

* precommit fixes

* Pre-Commit fixes

---------

Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>

* updated docs with multinode training cmd (Emerge-Lab#174)

* Carla2d towns (Emerge-Lab#201)

* Added valid 2d carla towns, some code cleanup

* Add carla 3D maps with objects

* initial commit (Emerge-Lab#204)

* Fix goal resampling in Carla maps and make metrics suitable for resampling in longer episodes (Emerge-Lab#186)

* Make road lines and lanes visible in map.

* Simplify goal resample algorithm: Pick best road lane point in road graph.

* Delete redundant code.

* Make the target distance to the new goal configurable.

* Generalize metrics to work for longer episodes with resampling. Also delete a bunch of unused graph topology code.

* Minor

* Apply precommit.

* Fix in visualizer.

* fix metrics

* WIP

* Add goal behavior flag.

* Add fallback for goal resampling and cleanup.

* Make goal radius more visible.

* Minor

* Make grid appear in the background.

* Minor.

* Merge

* Fix bug in logging num goals reached and sampled.

* Add goal taret

* Use classic dynamics model.

* Fix descrepancies between demo() and eval_gif().

* Small bug fix.

* Reward shaping

* Termination mode must be 0 for Carla maps.

* Add all args from ini to demo() env.

* Clean up visualization code.

* Clean up metrics and vis.

* Fix metrics.

* Add diversity to agent view.

* Add better fallback.

* Reserve red for cars that are in collision.

* Keep track of current goals.

* Carla testing simple/

* Use classic dynamics by default.

* Fix small bug in goal logging (respawn).

* Always draw agent obs when resampling goals.

* Increase render videos timeout (carla maps take longer).

* Minor vis changes.

* Minor vis changes.

* Rmv displacement error for now and add goal speed target.

* Add optional goal speed.

* Incorporate suggestions.

* Revert settings.

* Revert settings.

* Revert settings.

* Fixes

* Add docs

* Minor

* Make grid appear in background.

* Edits.

* Typo.

* Minor visual adaptations.

---------

Co-authored-by: Daphne <daphn3cor@gmail.com>
Co-authored-by: julianh65 <jhunt17159@gmail.com>

* Minor correction in resampling code (Emerge-Lab#183)

* Corrections of the resample code in drive.py:

- the will_resample=1 followed by if will_resample looked weird to me (probably legacy code ?)
- When we resample we should update the values of self.agent_offsets map dirs and num envs.

The fact that we didn't update them isn't an issue because right now they are not accessed anywhere in the code, but then we should either remove these attributes of the Drive Class or either make ensure they contain the right values if someone wants to use them later.

* Minor

* Fix merge conflicts.

---------

Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>

* Allow human to drive with agents through classic and jerk dynamics model (Emerge-Lab#206)

* Fix human control with joint action space & classic model: Was still assuming multi-discrete.

* Enable human control with jerks dynamics model.

* Color actions yellow when controlling.

* Slightly easier control problem?

* Add tiny jerk penalty: Results in smooth behavior.

* Pre-commit

* Minor edits.

* Revert ini changes.

---------

Co-authored-by: Daphne <daphn3cor@gmail.com>

* Added WOSAC results on the 10k validation dataset (Emerge-Lab#185)

* Added WOSAC results on the 10k validation dataset

* Code to evaluate SMART + associated doc

* Edits.

* Add link to docs.

---------

Co-authored-by: Wael Boumediene Doulazmi <wbd2016@gl001.hpc.nyu.edu>
Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>

* Drive with agents in browser (Emerge-Lab#215)

* Good behavior with trained policy - resampling.

* Hardcode to 10 max agents for web version.

* Browser demo v1.

* More descriptive docs.

* Release post edits.

* Docs improvements.

* Run precommit.

* Better policy.

* Revert .ini changes, except one.

* Delete drive.dSYM/Contents/Info.plist

* Delete pufferlib/resources/drive/puffer_drive_gljhhrl6.bin

---------

Co-authored-by: Daphne <daphn3cor@gmail.com>

* Fix demo (Emerge-Lab#217)

Co-authored-by: Daphne <daphn3cor@gmail.com>

* Do not randomly switch to another agent in FPV. (Emerge-Lab#219)

Co-authored-by: Daphne <daphn3cor@gmail.com>

* switch docs to mdbooks doc format (Emerge-Lab#218)

Move from mkdocs to mdbooks. Code heavily claude assisted.

* Markdown edits and fix demo. (Emerge-Lab#221)

Co-authored-by: Daphne <daphn3cor@gmail.com>

* small fixes in the docs (Emerge-Lab#220)

* fix minor errors

* try to fix things for dark mode

* Fixing dark/light mode error

---------

Co-authored-by: Eugene Vinitsky <eugene@percepta.ai>
Co-authored-by: Aditya Gupta <adigupta2602@gmail.com>

* Release 2.0 (Emerge-Lab#214)

* Ensure the arrows are on the ground.

* Date fix.

* Update data docs with mixed dataset.

* Increase map range.

* Remove ini file for web demo.

* WIP

* Edit release post.

* Minor docs fixes.

* Writing changes.

* Fix metrics.

* Delete outdated readme files.

* Minor.

* Improve docs.

* Fix institutions.

* Update training info.

* Reset configs to womd defaults.

* Update score metrics.

* Update docs

* Update policy for demo.

* Update demo files (new cpt)

* Minor.

* Add video.

* Minor.

* Keep defaults.

---------

Co-authored-by: Daphne <daphn3cor@gmail.com>

* Fix space and game files.

* Fix sup tags.

* Self play working

* Population play and self play rebased

* All features working

* fixing co player features

* trying to pass tests

* fixing tests #2

* fixing tests  #3

* attempting to fix tests #4

* attempting to fix tests #4

* attempting to fix tests #4

* fixing batch size > 1 bug

* add back binary

* changed map dir

* moved maps

* fix test API and config

* fixing tests

* fix tests

---------

Co-authored-by: Daphne Cornelisse <33460159+daphne-cornelisse@users.noreply.github.com>
Co-authored-by: Eugene Vinitsky <eugenevinitsky@users.noreply.github.com>
Co-authored-by: Daphne Cornelisse <cor.daphne@gmail.com>
Co-authored-by: AJE <231052006+aje-valeo@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Waël Doulazmi <73849155+WaelDLZ@users.noreply.github.com>
Co-authored-by: WaelDLZ <wawa@CRE1-W60060.vnet.valeo.com>
Co-authored-by: Waël Doulazmi <wawa@10-20-1-143.dynapool.wireless.nyu.edu>
Co-authored-by: Waël Doulazmi <wawa@Waels-MacBook-Air.local>
Co-authored-by: Pragnay Mandavilli <108453901+mpragnay@users.noreply.github.com>
Co-authored-by: Pragnay Mandavilli <pm3881@gr052.hpc.nyu.edu>
Co-authored-by: Waël Doulazmi <waeldoulazmi@gmail.com>
Co-authored-by: Spencer Cheng <spenccheng@gmail.com>
Co-authored-by: Kevin Joseph <kevinwinston184@gmail.com>
Co-authored-by: Aditya Gupta <adigupta2602@gmail.com>
Co-authored-by: Julian Hunt <46860985+julianh65@users.noreply.github.com>
Co-authored-by: riccardosavorgnan <22272744+riccardosavorgnan@users.noreply.github.com>
Co-authored-by: Daphne <daphn3cor@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Kevin <kj2676@nyu.edu>
Co-authored-by: julianh65 <jhunt17159@gmail.com>
Co-authored-by: Wael Boumediene Doulazmi <wbd2016@gl001.hpc.nyu.edu>
Co-authored-by: Eugene Vinitsky <eugene@percepta.ai>
Co-authored-by: charliemolony59@gmail.com <cpm9831@ga021.hpc.nyu.edu>
Co-authored-by: charliemolony59@gmail.com <cpm9831@ga009.hpc.nyu.edu>
Co-authored-by: charliemolony59@gmail.com <cpm9831@ga014.hpc.nyu.edu>
Co-authored-by: charliemolony59@gmail.com <cpm9831@ga019.hpc.nyu.edu>
Co-authored-by: Mohit Kulkarni <mkulkarni@ethz.ch>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants