Merged

83 commits
d9227cf
Adding `use_safetensors` argument to give more control to users (#2123)
Narsil Mar 16, 2023
77e0ea8
[docs] Add safety checker to ethical guidelines (#2699)
stevhliu Mar 16, 2023
a4b2c2f
train_unconditional save restore unet parameters (#2706)
williamberman Mar 16, 2023
a41850a
Improve deprecation error message when using cross_attention import (…
patrickvonplaten Mar 16, 2023
f4bbcb2
fix image link in inpaint doc (#2693)
yiyixuxu Mar 17, 2023
a169571
[docs] Update ONNX doc to use `optimum` (#2702)
sayakpaul Mar 17, 2023
116f70c
Enabling gradient checkpointing for VAE (#2536)
aandyw Mar 17, 2023
9ecd924
[Tests] Correct PT2 (#2724)
patrickvonplaten Mar 18, 2023
ec2c1bc
Update README.md
patrickvonplaten Mar 18, 2023
fdcff56
Fix more slow tests
patrickvonplaten Mar 18, 2023
d91dc57
Update mps.mdx (#2749)
tolgacangoz Mar 20, 2023
a9f28b6
Update torch2.0.mdx (#2748)
tolgacangoz Mar 20, 2023
af86b0c
Update fp16.mdx (#2746)
tolgacangoz Mar 20, 2023
25ed7cb
Update dreambooth.mdx (#2742)
tolgacangoz Mar 20, 2023
afe59a9
Update philosophy.mdx (#2752)
tolgacangoz Mar 20, 2023
ba87c16
Update text_inversion.mdx (#2751)
tolgacangoz Mar 20, 2023
73bdad0
add: controlnet entry to training section in the docs. (#2677)
sayakpaul Mar 21, 2023
c10d685
Update numbers for Habana Gaudi in documentation (#2734)
regisss Mar 21, 2023
2120b4e
Improve Contribution Doc (#2043)
patrickvonplaten Mar 21, 2023
f024e00
Fix typos (#2715)
apivovarov Mar 21, 2023
0042efd
[1929]: Add CLIP guidance for Img2Img stable diffusion pipeline (#2723)
nipunjindal Mar 21, 2023
58bcf46
Add guidance start/end parameters to StableDiffusionControlNetImg2Img…
hyowon-ha Mar 21, 2023
1fcf279
Fix mps tests on torch 2.0 (#2766)
pcuenca Mar 21, 2023
b33bd91
Add option to set dtype in pipeline.to() method (#2317)
1lint Mar 21, 2023
ca1e407
stable diffusion depth batching fix (#2757)
williamberman Mar 21, 2023
a39d42b
[docs] update torch 2 benchmark (#2764)
pcuenca Mar 21, 2023
7fe8861
[docs] Clarify purpose of reproducibility docs (#2756)
stevhliu Mar 22, 2023
ca1a222
[MS Text To Video] Add first text to video (#2738)
patrickvonplaten Mar 22, 2023
92e1164
`mps`: remove warmup passes (#2771)
pcuenca Mar 22, 2023
e0d8c9e
Support for Offset Noise in examples (#2753)
haofanwang Mar 23, 2023
c681ad1
add: section on multiple controlnets. (#2762)
sayakpaul Mar 23, 2023
9dc8444
[Examples] InstructPix2Pix instruct training script (#2478)
sayakpaul Mar 23, 2023
0d633a4
deduplicate training section in the docs. (#2788)
sayakpaul Mar 23, 2023
a8315ce
[UNet3DModel] Fix with attn processor (#2790)
patrickvonplaten Mar 23, 2023
8e35ef0
[doc wip] literalinclude (#2718)
Mar 23, 2023
14e3a28
Rename 'CLIPFeatureExtractor' class to 'CLIPImageProcessor' (#2732)
ainoya Mar 23, 2023
2ef9bdd
Music Spectrogram diffusion pipeline (#1044)
kashif Mar 23, 2023
055c90f
[2737]: Add DPMSolverMultistepScheduler to CLIP guided community pipe…
nipunjindal Mar 23, 2023
0d7aac3
[Docs] small fixes to the text to video doc. (#2787)
sayakpaul Mar 23, 2023
dc5b4e2
Update train_text_to_image_lora.py (#2767)
haofanwang Mar 23, 2023
aa0531f
Skip `mps` in text-to-video tests (#2792)
pcuenca Mar 23, 2023
df91c44
Flax controlnet (#2727)
yiyixuxu Mar 23, 2023
1870fb0
[docs] Add Colab notebooks and Spaces (#2713)
stevhliu Mar 23, 2023
b94880e
Add AudioLDM (#2232)
sanchit-gandhi Mar 23, 2023
4a98d6e
Update train_text_to_image_lora.py (#2795)
haofanwang Mar 24, 2023
37a44bb
Add ModelEditing pipeline (#2721)
bahjat-kawar Mar 24, 2023
f6feb69
Relax DiT test (#2808)
kashif Mar 24, 2023
c4892f1
Update onnxruntime package candidates (#2666)
PeixuanZuo Mar 24, 2023
dbcb15c
[Stable UnCLIP] Finish Stable UnCLIP (#2814)
patrickvonplaten Mar 24, 2023
5883d8d
[Docs] update docs (Stable unCLIP) to reflect the updated ckpts. (#2815)
sayakpaul Mar 24, 2023
9fb0217
StableDiffusionModelEditingPipeline documentation (#2810)
bahjat-kawar Mar 24, 2023
abb22b4
Update `examples` README.md to include the latest examples (#2839)
sayakpaul Mar 27, 2023
1d7b4b6
Ruff: apply same rules as in transformers (#2827)
pcuenca Mar 27, 2023
4c26cb9
[Tests] Fix slow tests (#2846)
patrickvonplaten Mar 27, 2023
7bc2fff
Fix StableUnCLIPImg2ImgPipeline handling of explicitly passed image e…
unishift Mar 27, 2023
b10f527
Helper function to disable custom attention processors (#2791)
pcuenca Mar 27, 2023
fab4f3d
improve stable unclip doc. (#2823)
sayakpaul Mar 28, 2023
58fc824
add: better warning messages when handling multiple conditionings. (#…
sayakpaul Mar 28, 2023
d4f846f
[WIP]Flax training script for controlnet (#2818)
yiyixuxu Mar 28, 2023
81125d8
Make dynamo wrapped modules work with save_pretrained (#2726)
pcuenca Mar 28, 2023
42d9501
[Init] Make sure shape mismatches are caught early (#2847)
patrickvonplaten Mar 28, 2023
c0afca2
updated onnx pndm test (#2811)
kashif Mar 28, 2023
585f621
[Stable Diffusion] Allow users to disable Safety checker if loading m…
Stax124 Mar 28, 2023
8bdf423
fix KarrasVePipeline bug (#2828)
junhsss Mar 28, 2023
0f14335
StableDiffusionLongPromptWeightingPipeline: Do not hardcode pad token…
AkiSakurai Mar 28, 2023
b76d9fd
Remove suggestion to use cuDNN benchmark in docs (#2793)
d1g1t Mar 28, 2023
159a0bf
Remove duplicate sentence in docstrings (#2834)
qqaatw Mar 28, 2023
7d75681
Update the legacy inpainting SD pipeline, to allow calling it with on…
cmdr2 Mar 28, 2023
920a15c
Fix link to LoRA training guide in DreamBooth training guide (#2836)
ushuz Mar 28, 2023
663c654
[WIP][Docs] Use DiffusionPipeline Instead of Child Classes when Loadi…
dg845 Mar 28, 2023
25d927a
Add `last_epoch` argument to `optimization.get_scheduler` (#2850)
felixblanke Mar 28, 2023
4d0f412
[WIP] Check UNet shapes in StableDiffusionInpaintPipeline __init__ (#…
dg845 Mar 28, 2023
53377ef
[2761]: Add documentation for extra_in_channels UNet1DModel (#2817)
nipunjindal Mar 28, 2023
1384546
[Tests] Adds a test to check if `image_embeds` None case is handled p…
sayakpaul Mar 28, 2023
37c8248
Update evaluation.mdx (#2862)
tolgacangoz Mar 28, 2023
3980858
Update overview.mdx (#2864)
tolgacangoz Mar 28, 2023
ef4c2fa
Update alt_diffusion.mdx (#2865)
tolgacangoz Mar 28, 2023
03fe36f
Update paint_by_example.mdx (#2869)
tolgacangoz Mar 28, 2023
628fefb
Update stable_diffusion_safe.mdx (#2870)
tolgacangoz Mar 28, 2023
40a7b86
[Docs] Correct phrasing (#2873)
patrickvonplaten Mar 28, 2023
d82b032
[Examples] Add streaming support to the ControlNet training example i…
sayakpaul Mar 29, 2023
3be4891
feat: allow offset_noise in dreambooth training example (#2826)
yamanahlawat Mar 29, 2023
e47459c
[docs] Performance tutorial (#2773)
stevhliu Mar 29, 2023
3 changes: 2 additions & 1 deletion CODE_OF_CONDUCT.md
@@ -24,7 +24,7 @@ community include:
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
- overall community
+ overall diffusers community

Examples of unacceptable behavior include:

@@ -34,6 +34,7 @@ Examples of unacceptable behavior include:
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
+ * Spamming issues or PRs with links to projects unrelated to this library
* Other conduct which could reasonably be considered inappropriate in a
professional setting

556 changes: 380 additions & 176 deletions CONTRIBUTING.md

Large diffs are not rendered by default.

110 changes: 110 additions & 0 deletions PHILOSOPHY.md
@@ -0,0 +1,110 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Philosophy

🧨 Diffusers provides **state-of-the-art** pretrained diffusion models across multiple modalities.
Its purpose is to serve as a **modular toolbox** for both inference and training.

We aim to build a library that stands the test of time and therefore take API design very seriously.

In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on [PyTorch's Design Principles](https://pytorch.org/docs/stable/community/design.html#pytorch-design-philosophy). Let's go over the most important ones:

## Usability over Performance

- While Diffusers has many built-in performance-enhancing features (see [Memory and Speed](https://huggingface.co/docs/diffusers/optimization/fp16)), models are always loaded with the highest precision and lowest optimization. Therefore, by default, diffusion pipelines are instantiated on the CPU in float32 precision unless the user defines otherwise (see the sketch after this list). This ensures usability across different platforms and accelerators and means that no complex installation is required to run the library.
- Diffusers aims to be a **lightweight** package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as `accelerate`, `safetensors`, `onnx`, etc.). We strive to keep the library as lightweight as possible so that it can be added as a dependency to other packages without much concern.
- Diffusers prefers simple, self-explanatory code over condensed, magic code. This means that shorthand code syntaxes such as lambda functions and advanced PyTorch operators are often not desired.
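
As a minimal sketch of this default (the checkpoint ID is only an example; any diffusers-compatible checkpoint works), loading starts on the CPU in float32, and lower precision or accelerator placement is an explicit opt-in:

```python
import torch
from diffusers import DiffusionPipeline

# By default, weights load on the CPU in float32 -- no accelerator
# or half-precision support is assumed.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Performance is an explicit opt-in, e.g. half precision on a GPU.
if torch.cuda.is_available():
    pipe = DiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
```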

## Simple over easy

As PyTorch states, **explicit is better than implicit** and **simple is better than complex**. This design philosophy is reflected in multiple parts of the library:
- We follow PyTorch's API with methods like [`DiffusionPipeline.to`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.to) to let the user handle device management.
- Raising concise error messages is preferred to silently correcting erroneous input. Diffusers aims to teach the user, rather than making the library as easy to use as possible.
- Complex model vs. scheduler logic is exposed instead of being magically handled inside. Schedulers/samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop (a minimal sketch follows this list). However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers.
- Separately trained components of the diffusion pipeline, *e.g.* the text encoder, the UNet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. DreamBooth or textual inversion training is very simple thanks to Diffusers' ability to separate individual components of the diffusion pipeline.
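
To make the unrolled loop concrete, here is a minimal sketch (the DDPM checkpoint ID is only an example) of how a model and a scheduler interact without any pipeline abstraction:

```python
import torch
from diffusers import UNet2DModel, DDPMScheduler

# Load the model and the scheduler separately -- no pipeline involved.
model = UNet2DModel.from_pretrained("google/ddpm-cat-256")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")

scheduler.set_timesteps(50)
sample = torch.randn(
    1, model.config.in_channels, model.config.sample_size, model.config.sample_size
)

# The unrolled denoising loop the user writes explicitly.
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample  # predict the noise residual
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # x_t -> x_(t-1)
```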

## Tweakable, contributor-friendly over abstraction

For large parts of the library, Diffusers adopts an important design principle of the [Transformers library](https://github.com/huggingface/transformers), which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as [Don't repeat yourself (DRY)](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).
In short, just like Transformers does for modeling files, Diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers.
Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable.
**However**, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because:
- Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions.
- Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions.
- Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel.

At Hugging Face, we call this design the **single-file policy** which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look
at [this blog post](https://huggingface.co/blog/transformers-design-philosophy).

In Diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don't follow this design fully for diffusion models is that almost all diffusion pipelines, such
as [DDPM](https://huggingface.co/docs/diffusers/v0.12.0/en/api/pipelines/ddpm), [Stable Diffusion](https://huggingface.co/docs/diffusers/v0.12.0/en/api/pipelines/stable_diffusion/overview#stable-diffusion-pipelines), [UnCLIP (Dalle-2)](https://huggingface.co/docs/diffusers/v0.12.0/en/api/pipelines/unclip#overview) and [Imagen](https://imagen.research.google/) all rely on the same diffusion model, the [UNet](https://huggingface.co/docs/diffusers/api/models#diffusers.UNet2DConditionModel).

Great, now you should have a general understanding of why 🧨 Diffusers is designed the way it is 🤗.
We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it [directly on GitHub](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=).

## Design Philosophy in Detail

Now, let's look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes: [pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines), [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models), and [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers).
Let's walk through the design decisions for each class in more detail.

### Pipelines

Pipelines are designed to be easy to use (and therefore do not follow [*Simple over easy*](#simple-over-easy) 100%), are not feature-complete, and should loosely be seen as examples of how to use [models](#models) and [schedulers](#schedulers) for inference.

The following design principles are followed:
- Pipelines follow the single-file policy. All pipelines can be found in individual directories under `src/diffusers/pipelines`. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as is done for [`src/diffusers/pipelines/stable_diffusion`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion). If pipelines share similar functionality, one can make use of the [#Copied from mechanism](https://github.com/huggingface/diffusers/blob/125d783076e5bd9785beb05367a2d2566843a271/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L251).
- Pipelines all inherit from [`DiffusionPipeline`].
- Every pipeline consists of different model and scheduler components that are documented in the [`model_index.json` file](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), are accessible under the same names as attributes of the pipeline, and can be shared between pipelines with the [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function (a sketch follows this list).
- Every pipeline should be loadable via the [`DiffusionPipeline.from_pretrained`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) function.
- Pipelines should be used **only** for inference.
- Pipelines should be very readable, self-explanatory, and easy to tweak.
- Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs.
- Pipelines are **not** intended to be feature-complete user interfaces. For feature-complete user interfaces one should rather have a look at [InvokeAI](https://github.com/invoke-ai/InvokeAI), [Diffuzers](https://github.com/abhishekkrthakur/diffuzers), and [lama-cleaner](https://github.com/Sanster/lama-cleaner).
- Every pipeline should have one and only one way to run it via a `__call__` method. The naming of the `__call__` arguments should be shared across all pipelines.
- Pipelines should be named after the task they are intended to solve.
- In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file.
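
As a minimal sketch of component sharing (the checkpoint ID is only an example), the models loaded for one pipeline can be handed directly to another:

```python
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

# Load a text-to-image pipeline once.
text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Reuse its already-loaded components (unet, vae, text encoder, scheduler, ...)
# for an img2img pipeline instead of loading them a second time.
img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
```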

### Models

Models are designed as configurable toolboxes that are natural extensions of [PyTorch's Module class](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). They only partly follow the **single-file policy**.

The following design principles are followed:
- Models correspond to **a type of model architecture**. *E.g.* the [`UNet2DConditionModel`] class is used for all UNet variations that expect 2D image inputs and are conditioned on some context.
- All models can be found in [`src/diffusers/models`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and every model architecture shall be defined in its own file, e.g. [`unet_2d_condition.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_condition.py), [`transformer_2d.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformer_2d.py), etc.
- Models **do not** follow the single-file policy and should make use of smaller model building blocks, such as [`attention.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention.py), [`resnet.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/resnet.py), [`embeddings.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/embeddings.py), etc. **Note**: This is in stark contrast to Transformers' modeling files and shows that models do not really follow the single-file policy.
- Models are intended to expose complexity, just like PyTorch's `Module` class does, and to give clear error messages.
- Models all inherit from `ModelMixin` and `ConfigMixin` (a loading sketch follows this list).
- Models can be optimized for performance when doing so doesn't demand major code changes, keeps backward compatibility, and gives a significant memory or compute gain.
- Models should by default have the highest precision and lowest performance setting.
- To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different.
- Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments and configuration arguments, and by "foreseeing" future changes: *e.g.* it is usually better to add `string` "...type" arguments that can easily be extended to new future types than boolean `is_..._type` arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work.
- The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and readable long-term, such as [UNet blocks](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_blocks.py) and [Attention processors](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
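
As a minimal sketch of the model-level API (the checkpoint ID and `subfolder` layout are only examples, following the `model_index.json` convention), a single component can be loaded and inspected on its own:

```python
from diffusers import UNet2DConditionModel

# Load one model component on its own, independent of any pipeline.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# ConfigMixin exposes the architecture configuration as attributes.
print(unet.config.sample_size, unet.config.in_channels)
```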

### Schedulers

Schedulers are responsible for guiding the denoising process during inference as well as for defining the noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the **single-file policy**.

The following design principles are followed:
- All schedulers are found in [`src/diffusers/schedulers`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers).
- Schedulers are **not** allowed to import from large utils files and shall be kept very self-contained.
- One scheduler python file corresponds to one scheduler algorithm (as might be defined in a paper).
- If schedulers share similar functionalities, we can make use of the `#Copied from` mechanism.
- Schedulers all inherit from `SchedulerMixin` and `ConfigMixin`.
- Schedulers can be easily swapped out with the [`ConfigMixin.from_config`](https://huggingface.co/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) method as explained in detail [here](./using-diffusers/schedulers.mdx) (a sketch follows this list).
- Every scheduler has to have a `set_timesteps` and a `step` function. `set_timesteps(...)` has to be called before every denoising process, *i.e.* before `step(...)` is called.
- Every scheduler exposes the timesteps to be "looped over" via a `timesteps` attribute, which is an array of timesteps the model will be called upon.
- The `step(...)` function takes a predicted model output and the "current" sample (`x_t`) and returns the "previous", slightly more denoised sample (`x_t-1`).
- Given the complexity of diffusion schedulers, the `step` function does not expose all the complexity and can be a bit of a "black box".
- In almost all cases, novel schedulers shall be implemented in a new scheduling file.
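
As a minimal sketch of scheduler swapping (the checkpoint and scheduler choices are only examples), a pipeline's scheduler can be replaced through its shared configuration:

```python
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Build a different scheduler from the current one's configuration
# and drop it into the pipeline.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```
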
12 changes: 12 additions & 0 deletions README.md
@@ -148,6 +148,18 @@ Check out the [Quickstart](https://huggingface.co/docs/diffusers/quicktour) to l
| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
| [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |

## Contribution

We ❤️ contributions from the open-source community!
If you want to contribute to this library, please check out our [Contribution guide](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md).
You can look out for [issues](https://github.com/huggingface/diffusers/issues) you'd like to tackle to contribute to the library.
- See [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) for general opportunities to contribute
- See [New model/pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) to contribute exciting new diffusion models / diffusion pipelines
- See [New scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22)

Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>. We discuss the hottest trends about diffusion models, help each other with contributions and personal projects, or just hang out ☕.

## Credits

This library concretizes previous work by many different authors and would not have been possible without their great research and implementations. We'd like to thank, in particular, the following implementations which have helped us in our development and without which the API could not have been as polished today: