diff --git a/docs/source/ko/index.mdx b/docs/source/ko/index.mdx
index d01dff5c5e00..a83dd0d0b29e 100644
--- a/docs/source/ko/index.mdx
+++ b/docs/source/ko/index.mdx
@@ -16,48 +16,82 @@ specific language governing permissions and limitations under the License.
-# 🧨 Diffusers
-
-🤗 Diffusers provides pretrained vision and audio diffusion models, and serves as a modular toolbox for inference and training.
-
-More precisely, 🤗 Diffusers offers:
-
-- State-of-the-art diffusion pipelines that can be run in inference with just a few lines of code (check out [**Using Diffusers**](./using-diffusers/conditional_image_generation)). See [**Pipelines**](#pipelines) for an overview of all supported pipelines and their corresponding papers.
-- A variety of noise schedulers that can be used interchangeably to trade off inference speed against output quality. See [**Schedulers**](./api/schedulers/overview) for more details.
-- Multiple types of models, such as UNet, that can be used as building blocks in an end-to-end diffusion system. See [**Models**](./api/models) for more details.
-- Training examples that show how to train the most popular diffusion model tasks. See [**Training**](./training/overview) for more details.
-
-## 🧨 Diffusers Pipelines
-
-The table below summarizes all officially supported pipelines, their corresponding papers, and, where available, a Colab notebook you can try out directly.
-
-| Pipeline | Paper | Tasks | Colab
-|---|---|:---:|:---:|
-| [alt_diffusion](./api/pipelines/alt_diffusion) | [**AltDiffusion**](https://arxiv.org/abs/2211.06679) | Image-to-Image Text-Guided Generation |
-| [audio_diffusion](./api/pipelines/audio_diffusion) | [**Audio Diffusion**](https://github.com/teticio/audio-diffusion.git) | Unconditional Audio Generation | [](https://colab.research.google.com/github/teticio/audio-diffusion/blob/master/notebooks/audio_diffusion_pipeline.ipynb)
-| [cycle_diffusion](./api/pipelines/cycle_diffusion) | [**Cycle Diffusion**](https://arxiv.org/abs/2210.05559) | Image-to-Image Text-Guided Generation |
-| [dance_diffusion](./api/pipelines/dance_diffusion) | [**Dance Diffusion**](https://github.com/williamberman/diffusers.git) | Unconditional Audio Generation |
-| [ddpm](./api/pipelines/ddpm) | [**Denoising Diffusion Probabilistic Models**](https://arxiv.org/abs/2006.11239) | Unconditional Image Generation |
-| [ddim](./api/pipelines/ddim) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) | Unconditional Image Generation |
-| [latent_diffusion](./api/pipelines/latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Text-to-Image Generation |
-| [latent_diffusion](./api/pipelines/latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Super Resolution Image-to-Image |
-| [latent_diffusion_uncond](./api/pipelines/latent_diffusion_uncond) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752) | Unconditional Image Generation |
-| [paint_by_example](./api/pipelines/paint_by_example) | [**Paint by Example: Exemplar-based Image Editing with Diffusion Models**](https://arxiv.org/abs/2211.13227) | Image-Guided Image Inpainting |
-| [pndm](./api/pipelines/pndm) | [**Pseudo Numerical Methods for Diffusion Models on Manifolds**](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation |
-| [score_sde_ve](./api/pipelines/score_sde_ve) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
-| [score_sde_vp](./api/pipelines/score_sde_vp) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
-| [stable_diffusion](./api/pipelines/stable_diffusion/text2img) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
-| [stable_diffusion](./api/pipelines/stable_diffusion/img2img) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
-| [stable_diffusion](./api/pipelines/stable_diffusion/inpaint) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
-| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation |
-| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting |
-| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Super Resolution Image-to-Image |
-| [stable_diffusion_safe](./api/pipelines/stable_diffusion_safe) | [**Safe Stable Diffusion**](https://arxiv.org/abs/2211.05105) | Text-Guided Generation | [](https://colab.research.google.com/github/ml-research/safe-latent-diffusion/blob/main/examples/Safe%20Latent%20Diffusion.ipynb)
-| [stochastic_karras_ve](./api/pipelines/stochastic_karras_ve) | [**Elucidating the Design Space of Diffusion-Based Generative Models**](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation |
-| [unclip](./api/pipelines/unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) | Text-to-Image Generation |
-| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation |
-| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation |
-| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
-| [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |
-
-**Note**: Pipelines are simple examples of how to use the diffusion systems as described in the corresponding papers.
+
+# Diffusers
+
+🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](conceptual/philosophy#usability-over-performance), [simple over easy](conceptual/philosophy#simple-over-easy), and [customizability over abstractions](conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).
+
+The library has three main components:
+
+- State-of-the-art [diffusion pipelines](api/pipelines/overview) that can be run in inference with just a few lines of code.
+- Interchangeable [noise schedulers](api/schedulers/overview) for balancing the trade-off between generation speed and quality.
+- Pretrained [models](api/models) that can be used as building blocks, and combined with schedulers, to create your own end-to-end diffusion systems (see the sketch below).
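+
+As a minimal sketch of how those pieces fit together, the following denoising loop combines a pretrained model with a scheduler. It assumes the example checkpoint `google/ddpm-cat-256` and an arbitrary 50-step schedule; any compatible model/scheduler pair and step count would work.
+
+```python
+import torch
+from diffusers import DDPMScheduler, UNet2DModel
+
+device = "cuda" if torch.cuda.is_available() else "cpu"
+
+# Load a pretrained UNet and a matching scheduler (example checkpoint).
+scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
+model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to(device)
+scheduler.set_timesteps(50)
+
+# Start from pure noise and iteratively denoise it with the scheduler.
+sample_size = model.config.sample_size
+sample = torch.randn((1, 3, sample_size, sample_size), device=device)
+
+for t in scheduler.timesteps:
+    with torch.no_grad():
+        noise_pred = model(sample, t).sample  # the model predicts the noise residual
+    sample = scheduler.step(noise_pred, t, sample).prev_sample  # the scheduler removes a bit of noise
+```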
+
+
+
+## Supported pipelines
+
+| Pipeline | Paper/Repository | Tasks |
+|---|---|:---:|
+| [alt_diffusion](./api/pipelines/alt_diffusion) | [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) | Image-to-Image Text-Guided Generation |
+| [audio_diffusion](./api/pipelines/audio_diffusion) | [Audio Diffusion](https://github.com/teticio/audio-diffusion.git) | Unconditional Audio Generation |
+| [controlnet](./api/pipelines/stable_diffusion/controlnet) | [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) | Image-to-Image Text-Guided Generation |
+| [cycle_diffusion](./api/pipelines/cycle_diffusion) | [Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance](https://arxiv.org/abs/2210.05559) | Image-to-Image Text-Guided Generation |
+| [dance_diffusion](./api/pipelines/dance_diffusion) | [Dance Diffusion](https://github.com/williamberman/diffusers.git) | Unconditional Audio Generation |
+| [ddpm](./api/pipelines/ddpm) | [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) | Unconditional Image Generation |
+| [ddim](./api/pipelines/ddim) | [Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) | Unconditional Image Generation |
+| [if](./if) | [**IF**](./api/pipelines/if) | Image Generation |
+| [if_img2img](./if) | [**IF**](./api/pipelines/if) | Image-to-Image Generation |
+| [if_inpainting](./if) | [**IF**](./api/pipelines/if) | Image-to-Image Generation |
+| [latent_diffusion](./api/pipelines/latent_diffusion) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)| Text-to-Image Generation |
+| [latent_diffusion](./api/pipelines/latent_diffusion) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)| Super Resolution Image-to-Image |
+| [latent_diffusion_uncond](./api/pipelines/latent_diffusion_uncond) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) | Unconditional Image Generation |
+| [paint_by_example](./api/pipelines/paint_by_example) | [Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://arxiv.org/abs/2211.13227) | Image-Guided Image Inpainting |
+| [pndm](./api/pipelines/pndm) | [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation |
+| [score_sde_ve](./api/pipelines/score_sde_ve) | [Score-Based Generative Modeling through Stochastic Differential Equations](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
+| [score_sde_vp](./api/pipelines/score_sde_vp) | [Score-Based Generative Modeling through Stochastic Differential Equations](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
+| [semantic_stable_diffusion](./api/pipelines/semantic_stable_diffusion) | [Semantic Guidance](https://arxiv.org/abs/2301.12247) | Text-Guided Generation |
+| [stable_diffusion_text2img](./api/pipelines/stable_diffusion/text2img) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation |
+| [stable_diffusion_img2img](./api/pipelines/stable_diffusion/img2img) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation |
+| [stable_diffusion_inpaint](./api/pipelines/stable_diffusion/inpaint) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting |
+| [stable_diffusion_panorama](./api/pipelines/stable_diffusion/panorama) | [MultiDiffusion](https://multidiffusion.github.io/) | Text-to-Panorama Generation |
+| [stable_diffusion_pix2pix](./api/pipelines/stable_diffusion/pix2pix) | [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800) | Text-Guided Image Editing|
+| [stable_diffusion_pix2pix_zero](./api/pipelines/stable_diffusion/pix2pix_zero) | [Zero-shot Image-to-Image Translation](https://pix2pixzero.github.io/) | Text-Guided Image Editing |
+| [stable_diffusion_attend_and_excite](./api/pipelines/stable_diffusion/attend_and_excite) | [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://arxiv.org/abs/2301.13826) | Text-to-Image Generation |
+| [stable_diffusion_self_attention_guidance](./api/pipelines/stable_diffusion/self_attention_guidance) | [Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://arxiv.org/abs/2210.00939) | Text-to-Image Generation Unconditional Image Generation |
+| [stable_diffusion_image_variation](./stable_diffusion/image_variation) | [Stable Diffusion Image Variations](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations) | Image-to-Image Generation |
+| [stable_diffusion_latent_upscale](./stable_diffusion/latent_upscale) | [Stable Diffusion Latent Upscaler](https://twitter.com/StabilityAI/status/1590531958815064065) | Text-Guided Super Resolution Image-to-Image |
+| [stable_diffusion_model_editing](./api/pipelines/stable_diffusion/model_editing) | [Editing Implicit Assumptions in Text-to-Image Diffusion Models](https://time-diffusion.github.io/) | Text-to-Image Model Editing |
+| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation |
+| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting |
+| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Depth-Conditional Stable Diffusion](https://github.com/Stability-AI/stablediffusion#depth-conditional-stable-diffusion) | Depth-to-Image Generation |
+| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Super Resolution Image-to-Image |
+| [stable_diffusion_safe](./api/pipelines/stable_diffusion_safe) | [Safe Stable Diffusion](https://arxiv.org/abs/2211.05105) | Text-Guided Generation |
+| [stable_unclip](./stable_unclip) | Stable unCLIP | Text-to-Image Generation |
+| [stable_unclip](./stable_unclip) | Stable unCLIP | Image-to-Image Text-Guided Generation |
+| [stochastic_karras_ve](./api/pipelines/stochastic_karras_ve) | [Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation |
+| [text_to_video_sd](./api/pipelines/text_to_video) | [Modelscope's Text-to-video-synthesis Model in Open Domain](https://modelscope.cn/models/damo/text-to-video-synthesis/summary) | Text-to-Video Generation |
+| [unclip](./api/pipelines/unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125)(implementation by [kakaobrain](https://github.com/kakaobrain/karlo)) | Text-to-Image Generation |
+| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation |
+| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation |
+| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
+| [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |
diff --git a/docs/source/ko/using-diffusers/conditional_image_generation.mdx b/docs/source/ko/using-diffusers/conditional_image_generation.mdx
new file mode 100644
index 000000000000..5525ac990ca4
--- /dev/null
+++ b/docs/source/ko/using-diffusers/conditional_image_generation.mdx
@@ -0,0 +1,60 @@
+
+
+# Conditional image generation
+
+[[open-in-colab]]
+
+Conditional image generation lets you generate images from a text prompt. The text is converted into embeddings, which are used to condition the model to generate an image from noise.
+
+The [`DiffusionPipeline`] is the easiest way to use a pretrained diffusion system for inference.
+
+Start by creating an instance of [`DiffusionPipeline`] and specify which pipeline [checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads) you would like to download.
+
+In this guide, you'll use the [`DiffusionPipeline`] for text-to-image generation with [Latent Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256):
+
+```python
+>>> from diffusers import DiffusionPipeline
+
+>>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
+```
+
+The [`DiffusionPipeline`] downloads and caches all the modeling, tokenization, and scheduling components.
+Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU.
+You can move the generator object to a GPU, just like you would in PyTorch:
+
+```python
+>>> generator.to("cuda")
+```
+
+Now you can use the `generator` on your text prompt:
+
+```python
+>>> image = generator("An image of a squirrel in Picasso style").images[0]
+```
+
+The output is by default wrapped into a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object.
+
+You can save the image by calling:
+
+```python
+>>> image.save("image_of_squirrel_painting.png")
+```
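+
+If you'd rather work with an array than a `PIL.Image`, 🤗 Diffusers pipelines generally accept an `output_type` argument; the sketch below assumes this pipeline follows that convention, so treat the keyword and the `"np"` value as assumptions rather than documented guarantees.
+
+```python
+>>> # Request NumPy arrays instead of PIL images (output_type support is assumed here).
+>>> result = generator("An image of a squirrel in Picasso style", output_type="np")
+>>> array = result.images[0]  # array of shape (height, width, 3) with values in [0, 1]
+>>> print(array.shape)
+```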
+
+Try out the Spaces below, and feel free to play around with the guidance scale parameter to see how it affects the image quality!
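+
+For reference, here is a minimal sketch of tweaking those settings in code. `guidance_scale` and `num_inference_steps` are common call arguments of 🤗 Diffusers text-to-image pipelines, but the specific values below are only starting points to experiment with, not recommended defaults.
+
+```python
+>>> # A higher guidance_scale follows the prompt more closely; more steps trade speed for quality.
+>>> image = generator(
+...     "An image of a squirrel in Picasso style",
+...     guidance_scale=7.5,
+...     num_inference_steps=50,
+... ).images[0]
+>>> image.save("image_of_squirrel_painting_guided.png")
+```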
+
+
\ No newline at end of file
diff --git a/docs/source/ko/using-diffusers/stable_diffusion_jax_how_to.mdx b/docs/source/ko/using-diffusers/stable_diffusion_jax_how_to.mdx
index e5785374413c..ef2da6bdf902 100644
--- a/docs/source/ko/using-diffusers/stable_diffusion_jax_how_to.mdx
+++ b/docs/source/ko/using-diffusers/stable_diffusion_jax_how_to.mdx
@@ -8,7 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--->
# 🧨 Stable Diffusion in JAX / Flax!