{% include sidebar.html %} diff --git a/_pages/about.md b/_pages/about.md index bfb8807189..40561d4fab 100644 --- a/_pages/about.md +++ b/_pages/about.md @@ -2,21 +2,31 @@ permalink: / title: "" excerpt: "" -author_profile: true +author_profile: false redirect_from: - /about/ - /about.html --- -{% include_relative includes/intro.md %} +

Yuhao Shen

+
+
+ {% capture bio_md %}{% include_relative includes/intro.md %}{% endcapture %} + {{ bio_md | markdownify }} +
+ +
-If you like the template of this homepage, welcome to star and fork my open-sourced template version [AcadHomepage ![](https://img.shields.io/github/stars/RayeRen/acad-homepage.github.io?style=social)](https://github.com/RayeRen/acad-homepage.github.io). - -{% include_relative includes/news.md %} +

I'm always open to academic discussions, potential collaborations, and interdisciplinary projects. Feel free to reach out at: yuhaoshen [at] link [dot] cuhk [dot] edu [dot] cn.

{% include_relative includes/pub.md %} +{% include_relative includes/news.md %} + {% include_relative includes/honers.md %} -{% include_relative includes/others.md %} \ No newline at end of file +{% include_relative includes/others.md %} diff --git a/_pages/includes/homepage.md b/_pages/includes/homepage.md index 33e459fe2a..f4c6dc9c07 100644 --- a/_pages/includes/homepage.md +++ b/_pages/includes/homepage.md @@ -1,5 +1,5 @@ # 📎 Homepages -- Personal Pages: https://rayeren.github.io (updated recently🔥) -- Linkedin: https://www.linkedin.com/in/rayeren +- Personal Pages: https://yuhos16.github.io (updated recently🔥) +- Linkedin: https://www.linkedin.com/in/yuhos16 - Google Scholar: https://scholar.google.com/citations?user=4FA6C0AAAAAJ - DBLP: https://dblp.org/pid/75/6568-6.html diff --git a/_pages/includes/honers.md b/_pages/includes/honers.md index 29237503ef..8b13789179 100644 --- a/_pages/includes/honers.md +++ b/_pages/includes/honers.md @@ -1,10 +1 @@ -# 🎖 Honors and Awards -- *2021.10* Tencent Scholarship (Top 1%) -- *2021.10* National Scholarship (Top 1%) -- *2020.12* [Baidu Scholarship](https://baike.baidu.com/item/%E7%99%BE%E5%BA%A6%E5%A5%96%E5%AD%A6%E9%87%91/9929412) (10 students in the world each year) -- *2020.12* [AI Chinese new stars](https://mp.weixin.qq.com/s?__biz=MzA4NzQ5MTA2NA==&mid=2653639431&idx=1&sn=25b6368c1954419b9090840347d9a27d&chksm=8be75b90bc90d286a5af3ef8e610e822d705dc3cf4382b45e3f14489f3e7ec4fd8c95ed0eceb&mpshare=1&scene=2&srcid=0511LMlj9Qv9DeIZAjMjYAU9&sharer_sharetime=1620731348139&sharer_shareid=631c113940cb81f34895aa25ab14422a#rd) (100 worldwide each year) -- *2020.12* [AI Chinese New Star Outstanding Scholar](https://mp.weixin.qq.com/s?__biz=MzA4NzQ5MTA2NA==&mid=2653639431&idx=1&sn=25b6368c1954419b9090840347d9a27d&chksm=8be75b90bc90d286a5af3ef8e610e822d705dc3cf4382b45e3f14489f3e7ec4fd8c95ed0eceb&mpshare=1&scene=2&srcid=0511LMlj9Qv9DeIZAjMjYAU9&sharer_sharetime=1620731348139&sharer_shareid=631c113940cb81f34895aa25ab14422a#rd) (10 
candidates worldwide each year) -- *2020.12* [ByteDance Scholars Program](https://ur.bytedance.com/scholarship) (10 students in China each year) -- *2020.10* Tianzhou Chen Scholarship (Top 1%) -- *2020.10* National Scholarship (Top 1%) -- *2015.10* National Scholarship (Undergraduate) (Top 1%) \ No newline at end of file + diff --git a/_pages/includes/intro.md b/_pages/includes/intro.md index 59edbf7745..24a4ab2745 100644 --- a/_pages/includes/intro.md +++ b/_pages/includes/intro.md @@ -1,9 +1,22 @@ -I am now working on audio-driven video generation and text-to-speech research. If you are seeking any form of **academic cooperation**, please feel free to email me at [rayeren613@gmail.com](mailto:rayeren613@gmail.com). We are hiring interns! -I graduated from [Chu Kochen Honors College](http://ckc.zju.edu.cn/ckcen/main.htm), Zhejiang University (浙江大学竺可桢学院) with a bachelor's degree and from the Department of Computer Science and Technology, Zhejiang University (浙江大学计算机科学与技术学院) with a master's degree, advised by [Zhou Zhao (赵洲)](https://person.zju.edu.cn/zhaozhou). I also collaborate with [Xu Tan (谭旭)](https://www.microsoft.com/en-us/research/people/xuta/), [Tao Qin (秦涛)](https://www.microsoft.com/en-us/research/people/taoqin/) and [Tie-yan Liu (刘铁岩)](https://www.microsoft.com/en-us/research/people/tyliu/) from [Microsoft Research Asia](https://www.microsoft.com/en-us/research/group/machine-learning-research-group/) closely. +

M.Phil. Student in Computer Science
+School of Data Science
+The Chinese University of Hong Kong, Shenzhen

-I won the [Baidu Scholarship](https://baike.baidu.com/item/%E7%99%BE%E5%BA%A6%E5%A5%96%E5%AD%A6%E9%87%91/9929412) (10 candidates worldwide each year) and [ByteDance Scholars Program](https://ur.bytedance.com/scholarship) (10 candidates worldwide each year) in 2020 and was selected as one of [the top 100 AI Chinese new stars](https://mp.weixin.qq.com/s?__biz=MzA4NzQ5MTA2NA==&mid=2653639431&idx=1&sn=25b6368c1954419b9090840347d9a27d&chksm=8be75b90bc90d286a5af3ef8e610e822d705dc3cf4382b45e3f14489f3e7ec4fd8c95ed0eceb&mpshare=1&scene=2&srcid=0511LMlj9Qv9DeIZAjMjYAU9&sharer_sharetime=1620731348139&sharer_shareid=631c113940cb81f34895aa25ab14422a#rd) and AI Chinese New Star Outstanding Scholar (10 candidates worldwide each year). + -My research interest includes speech synthesis, neural machine translation and automatic music generation. I have published 50+ papers at the top international AI conferences such as NeurIPS, ICML, ICLR, KDD. -To promote the communication among the Chinese ML & NLP community, we (along with other 11 young scholars worldwide) founded the [MLNLP community](https://space.bilibili.com/168887299) in 2021. I am honored to be one of the chairs of the MLNLP committee. +Hello! I’m Yuhao. My primary research interest is **AI for Healthcare**. As a member of the HEAL Group, I’m fortunate to work under the supervision of [Prof. Juexiao Zhou](https://www.joshuachou.ink/about/). My current work centers on the following three areas: + +
Medical LLMs & Agents
+
LLM Reasoning & Reinforcement Learning
+
Medical Imaging & Computer Vision
+ + diff --git a/_pages/includes/news.md b/_pages/includes/news.md index 62d6068b15..d4040cde9f 100644 --- a/_pages/includes/news.md +++ b/_pages/includes/news.md @@ -1,6 +1,3 @@ # 🔥 News -- *2024.03*: 🎉 Two papers are accepted by ICLR 2024 -- *2023.05*: 🎉 Five papers are accepted by ACL 2023 -- *2023.01*: DiffSinger was introduced in [a very popular video](https://www.bilibili.com/video/BV1uM411t7ZJ) (2000k+ views) in Bilibili! -- *2023.01*: I join TikTok as a speech research scientist in Singapore! -- *2022.02*: I release a modern and responsive academic personal [homepage template](https://github.com/RayeRen/acad-homepage.github.io). Welcome to STAR and FORK! \ No newline at end of file +- *2025.06*: I graduated from Hangzhou Dianzi University as an Outstanding Graduate. +- *2025.05*: I officially joined the [HEAL Group](https://www.joshuachou.ink/about/) as an M.Phil. student. diff --git a/_pages/includes/others.md b/_pages/includes/others.md index 0d810f2353..b0f123164f 100644 --- a/_pages/includes/others.md +++ b/_pages/includes/others.md @@ -1,19 +1,9 @@ -# 📖 Educations -- *2019.06 - 2022.04*, Master, Zhejiang University, Hangzhou. -- *2015.09 - 2019.06*, Undergraduate, Chu Kochen Honors College, Zhejiang Univeristy, Hangzhou. -- *2012.09 - 2015.06*, Luqiao Middle School, Taizhou. +# 📖 Education +- *2025.09 - Present:* M.Phil. Student in Computer Science, School of Data Science, The Chinese University of Hong Kong, Shenzhen. +- *2021.09 - 2025.06:* B.Eng. in Computer Science and Technology, School of Computer Science, Hangzhou Dianzi University, Hangzhou. 
-# 💬 Invited Talks -- *2022.02*, Hosted MLNLP seminar \| [\[Video\]](https://www.bilibili.com/video/BV1wF411x7qh) -- *2021.06*, Audio & Speech Synthesis, Huawei internal talk -- *2021.03*, Non-autoregressive Speech Synthesis, PaperWeekly & biendata \| [\[video\]](https://www.bilibili.com/video/BV1uf4y1t7Hr/) -- *2020.12*, Non-autoregressive Speech Synthesis, Huawei Noah's Ark Lab internal talk - -# 💻 Internships -- *2021.06 - 2021.09*, Alibaba, Hangzhou. -- *2019.05 - 2020.02*, [EnjoyMusic](https://enjoymusic.ai/), Hangzhou. -- *2019.02 - 2019.05*, [YiWise](https://www.yiwise.com/), Hangzhou. -- *2018.08 - 2019.02*, [MSRA, machine learning Group](https://www.microsoft.com/en-us/research/group/machine-learning-research-group/), Beijing. -- *2018.01 - 2018.06*, [NetEase, AI department](https://hr.163.com/zc/12-ai/index.html), Hangzhou. -- *2017.08 - 2018.12*, DashBase (acquired by [Cisco](https://blogs.cisco.com/news/349511)), Hangzhou. +# 💻 Internships +- *2025.03 - 2025.05:* Research Assistant, The Chinese University of Hong Kong, Shenzhen. +- *2024.12 - 2025.02:* Algorithm Engineer, Hangzhou Huaxin Mechanical and Electrical Engineering Co., Ltd., Hangzhou. +- *2023.07 - 2023.09:* Research Assistant, Hangzhou Institute of Technology of Xidian University, Hangzhou. diff --git a/_pages/includes/pub.md b/_pages/includes/pub.md index e5c8f61622..bfb1f59f3e 100644 --- a/_pages/includes/pub.md +++ b/_pages/includes/pub.md @@ -1,136 +1,53 @@ - -# 📝 Publications -## 🎙 Speech Synthesis - - -
NeurIPS 2019
sym
+# 📚 Publications +*# Equal Contribution* + +
Under Review
SkinGPT-R1
-[FastSpeech: Fast, Robust and Controllable Text to Speech](https://papers.nips.cc/paper/8580-fastspeech-fast-robust-and-controllable-text-to-speech.pdf) \\ -**Yi Ren**, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, Tie-Yan Liu +[Trustworthy and Fair SkinGPT-R1 for Democratizing Dermatological Reasoning across Diverse Ethnicities](https://arxiv.org/abs/2511.15242) \\ +**Yuhao Shen**, Zhangtianyi Chen, Yuanhao He, Yan Xu, Shuping Zhang, Liyuan Sun, Zijian Wang, Yinghao Zhu, Yuyuan Yang, Jiahe Qian, Ziwen Wang, Xinyuan Zhang, Wenbin Liu, Zongyuan Ge, Tao Lu, Siyuan Yan, Juexiao Zhou -[**Project**](https://speechresearch.github.io/fastspeech/) - -- FastSpeech is the first fully parallel end-to-end speech synthesis model. -- **Academic Impact**: This work is included by many famous speech synthesis open-source projects, such as [ESPNet ![](https://img.shields.io/github/stars/espnet/espnet?style=social)](https://github.com/espnet/espnet). Our work are promoted by more than 20 media and forums, such as [机器之心](https://mp.weixin.qq.com/s/UkFadiUBy-Ymn-zhJ95JcQ)、[InfoQ](https://www.infoq.cn/article/tvy7hnin8bjvlm6g0myu). -- **Industry Impact**: FastSpeech has been deployed in [Microsoft Azure TTS service](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911) and supports 49 more languages with state-of-the-art AI quality. It was also shown as a text-to-speech system acceleration example in [NVIDIA GTC2020](https://resources.nvidia.com/events/GTC2020s21420). +- Introduce **SkinGPT-R1**, a dermatology VLM that achieves interpretable and equitable diagnosis across diverse ethnicities by performing explicit, step-by-step, and verifiable diagnostic chain-of-thought reasoning. +
- -
ICLR 2021
sym
+
Under Review
SkinCaRe Dataset
-[FastSpeech 2: Fast and High-Quality End-to-End Text to Speech](https://arxiv.org/abs/2006.04558) \\ -**Yi Ren**, Chenxu Hu, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, Tie-Yan Liu - -[**Project**](https://speechresearch.github.io/fastspeech2/) - - This work is included by many famous speech synthesis open-source projects, such as [PaddlePaddle/Parakeet ![](https://img.shields.io/github/stars/PaddlePaddle/PaddleSpeech?style=social)](https://github.com/PaddlePaddle/PaddleSpeech), [ESPNet ![](https://img.shields.io/github/stars/espnet/espnet?style=social)](https://github.com/espnet/espnet) and [fairseq ![](https://img.shields.io/github/stars/pytorch/fairseq?style=social)](https://github.com/pytorch/fairseq). -
-
- +[SkinCaRe: A Multimodal Dermatology Dataset Annotated with Medical Caption and Chain-of-Thought Reasoning](https://arxiv.org/abs/2405.18004) [Dataset] \\ +**Yuhao Shen#**, Liyuan Sun#, Yan Xu#, Wenbin Liu#, Shuping Zhang#, Shawn Afvari, Zhongyi Han, Jiaoyan Song, Yongzhi Ji, Tao Lu, Xiaonan He, Xin Gao, Juexiao Zhou -
ICLR 2024
sym
-
- -[Mega-TTS 2: Boosting Prompting Mechanisms for Zero-Shot Speech Synthesis](https://openreview.net/forum?id=mvMI3N4AvD) \\ -Ziyue Jiang, Jinglin Liu, **Yi Ren**, et al. + -[**Project**](https://boostprompt.github.io/boostprompt/) - - This work has been deployed on many TikTok products. - - Advandced zero-shot voice cloning model. +- Release **SkinCaRe**, unifying SkinCAP (medical captions) and SkinCoT (clinician-verified chains-of-thought) for transparent dermatologic reasoning. +
-
AAAI 2022
sym
+
Under Review
DermBench & DermEval
-[DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism](https://arxiv.org/abs/2105.02446) \\ -Jinglin Liu, Chengxi Li, **Yi Ren**, Feiyang Chen, Zhou Zhao +[Towards Trustworthy Dermatology MLLMs: A Benchmark and Multimodal Evaluator for Diagnostic Narratives](https://arxiv.org/abs/2511.09195) \\ +**Yuhao Shen#**, Jiahe Qian#, Shuping Zhang#, Zhangtianyi Chen, Tao Lu, Juexiao Zhou* -- Many [video demos](https://www.bilibili.com/video/BV1be411N7JA) created by the [DiffSinger community](https://github.com/openvpi) are released. -- DiffSinger was introduced in [a very popular video](https://www.bilibili.com/video/BV1uM411t7ZJ) (1600k+ views) on Bilibili! + -- [**Project**](https://diffsinger.github.io/) \| [![](https://img.shields.io/github/stars/NATSpeech/NATSpeech?style=social&label=DiffSpeech Stars)](https://github.com/NATSpeech/NATSpeech) \| [![](https://img.shields.io/github/stars/MoonInTheRiver/DiffSinger?style=social&label=DiffSinger Stars)](https://github.com/MoonInTheRiver/DiffSinger) \| [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-blue?label=Demo)](https://huggingface.co/spaces/NATSpeech/DiffSpeech) +- Propose **DermBench** (six clinical dimensions) and **DermEval** (reference-free evaluator) for image–text dermatology reasoning aligned with physician scoring. +
- -
NeurIPS 2021
sym
+
Under Review
CoTBox-TTT
-[PortaSpeech: Portable and High-Quality Generative Text-to-Speech](https://arxiv.org/abs/2109.15166) \\ -**Yi Ren**, Jinglin Liu, Zhou Zhao +[CoTBox-TTT: Grounding Medical VQA with Visual Chain-of-Thought Boxes During Test-time Training](https://arxiv.org/abs/2511.12446) \\ +Jiahe Qian#, **Yuhao Shen#**, Zhangtianyi Chen, Juexiao Zhou, Peisong Wang -[**Project**](https://portaspeech.github.io/) \| [![](https://img.shields.io/github/stars/NATSpeech/NATSpeech?style=social&label=Code+Stars)](https://github.com/NATSpeech/NATSpeech) \| [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-blue?label=Demo)](https://huggingface.co/spaces/NATSpeech/PortaSpeech) -
-
- -- `AAAI 2024` [Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling](https://arxiv.org/abs/2312.11947), Rui Liu, Yifan Hu, **Yi Ren**, et al. [![](https://img.shields.io/github/stars/walker-hyf/ECSS?style=social&label=Code+Stars)](https://github.com/walker-hyf/ECSS) -- ``ICML 2023`` [Make-An-Audio: Text-To-Audio Generation with Prompt-Enhanced Diffusion Models](https://text-to-audio.github.io/paper.pdf), Rongjie Huang, Jiawei Huang, Dongchao Yang, **Yi Ren**, et al. -- ``ACL 2023`` [CLAPSpeech: Learning Prosody from Text Context with Contrastive Language-Audio Pre-Training](), Zhenhui Ye, Rongjie Huang, **Yi Ren**, et al. -- ``ACL 2023`` [FluentSpeech: Stutter-Oriented Automatic Speech Editing with Context-Aware Diffusion Models](), Ziyue Jiang, Qian Yang, Jialong Zuo, Zhenhui Ye, Rongjie Huang, **Yi Ren** and Zhou Zhao -- ``ACL 2023`` [Revisiting and Incorporating GAN and Diffusion Models in High-Fidelity Speech Synthesis](), Rongjie Huang, **Yi Ren**, Ziyue Jiang, et al. -- ``ACL 2023`` [Improving Prosody with Masked Autoencoder and Conditional Diffusion Model For Expressive Text-to-Speech](), Rongjie Huang, Chunlei Zhang, **Yi Ren**, et al. -- `ICLR 2023` [Bag of Tricks for Unsupervised Text-to-Speech](https://openreview.net/forum?id=SbR9mpTuBn), **Yi Ren**, Chen Zhang, Shuicheng Yan -- `INTERSPEECH 2023` [StyleS2ST: zero-shot style transfer for direct speech-to-speech translation](https://arxiv.org/abs/2305.17732), Kun Song, **Yi Ren**, Yi Lei, et al. -- `INTERSPEECH 2023` [GenerTTS: Pronunciation Disentanglement for Timbre and Style Generalization in Cross-Lingual Text-to-Speech](https://arxiv.org/abs/2306.15304), Yahuan Cong, Haoyu Zhang, Haopeng Lin, Shichao Liu, Chunfeng Wang, **Yi Ren**, et al. -- `NeurIPS 2022` [Dict-TTS: Learning to Pronounce with Prior Dictionary Knowledge for Text-to-Speech](), Ziyue Jiang, Zhe Su, Zhou Zhao, Qian Yang, **Yi Ren**, et al. 
[![](https://img.shields.io/github/stars/Zain-Jiang/Dict-TTS?style=social&label=Code+Stars)](https://github.com/Zain-Jiang/Dict-TTS) -- `NeurIPS 2022` [GenerSpeech: Towards Style Transfer for Generalizable Out-Of-Domain Text-to-Speech](), Rongjie Huang, **Yi Ren**, et al. -- `NeurIPS 2022` [M4Singer: a Multi-Style, Multi-Singer and Musical Score Provided Mandarin Singing Corpus](), Lichao Zhang, Ruiqi Li, Shoutong Wang, Liqun Deng, Jinglin Liu, **Yi Ren**, et al. *(Datasets and Benchmarks Track)* [![](https://img.shields.io/github/stars/M4Singer/M4Singer?style=social&label=Dataset+Stars)](https://github.com/M4Singer/M4Singer) -- ``ACM-MM 2022`` [ProDiff: Progressive Fast Diffusion Model for High-Quality Text-to-Speech](), Rongjie Huang, Zhou Zhao, Huadai Liu, Jinglin Liu, Chenye Cui, **Yi Ren**, [![](https://img.shields.io/github/stars/Rongjiehuang/ProDiff?style=social&label=Code+Stars)](https://github.com/Rongjiehuang/ProDiff) -- ``ACM-MM 2022`` [SingGAN: Generative Adversarial Network For High-Fidelity Singing Voice Generation](https://arxiv.org/abs/2110.07468), Rongjie Huang, Chenye Cui, Chen Feiayng, **Yi Ren**, et al. -- ``IJCAI 2022`` [SyntaSpeech: Syntax-Aware Generative Adversarial Text-to-Speech](), Zhenhui Ye, Zhou Zhao, **Yi Ren**, et al. [![](https://img.shields.io/github/stars/yerfor/SyntaSpeech?style=social&label=Code+Stars)](https://github.com/yerfor/SyntaSpeech) -- ``IJCAI 2022`` (Oral) [EditSinger: Zero-Shot Text-Based Singing Voice Editing System with Diverse Prosody Modeling](), Lichao Zhang, Zhou Zhao, **Yi Ren**, et al. -- ``IJCAI 2022`` [FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis](), Rongjie Huang, Max W. Y. 
Lam, Jun Wang, Dan Su, Dong Yu, **Yi Ren**, Zhou Zhao, (Oral), [![](https://img.shields.io/github/stars/Rongjiehuang/FastDiff?style=social&label=Code+Stars)](https://github.com/Rongjiehuang/FastDiff) -- ``NAACL 2022`` [A Study of Syntactic Multi-Modality in Non-Autoregressive Machine Translation](), Kexun Zhang, Rui Wang, Xu Tan, Junliang Guo, **Yi Ren**, et al. -- ``ACL 2022`` [Revisiting Over-Smoothness in Text to Speech](https://arxiv.org/abs/2202.13066), **Yi Ren**, Xu Tan, Tao Qin, et al. -- ``ACL 2022`` [Learning the Beauty in Songs: Neural Singing Voice Beautifier](https://arxiv.org/abs/2202.13277), Jinglin Liu, Chengxi Li, **Yi Ren**, et al. \| [![](https://img.shields.io/github/stars/MoonInTheRiver/NeuralSVB?style=social&label=Code+Stars)](https://github.com/MoonInTheRiver/NeuralSVB) -- ``ICASSP 2022`` [ProsoSpeech: Enhancing Prosody With Quantized Vector Pre-training in Text-to-Speech](https://prosospeech.github.io/), **Yi Ren**, et al. -- ``INTERSPEECH 2021`` [EMOVIE: A Mandarin Emotion Speech Dataset with a Simple Emotional Text-to-Speech Model](https://arxiv.org/abs/2106.09317), Chenye Cui, **Yi Ren**, et al. -- ``INTERSPEECH 2021`` (best student paper award candidate) [WSRGlow: A Glow-based Waveform Generative Model for Audio Super-Resolution](https://arxiv.org/abs/2106.08507), Kexun Zhang, **Yi Ren**, Changliang Xu and Zhou Zhao -- ``ICASSP 2021`` [Denoising Text to Speech with Frame-Level Noise Modeling](https://arxiv.org/abs/2012.09547), Chen Zhang, **Yi Ren**, Xu Tan, et al. \| [**Project**](https://speechresearch.github.io/denoispeech/) -- ``ACM-MM 2021`` [Multi-Singer: Fast Multi-Singer Singing Voice Vocoder With A Large-Scale Corpus](https://arxiv.org/pdf/2112.10358), Rongjie Huang, Feiyang Chen, **Yi Ren**, et al. (Oral) -- ``IJCAI 2021`` [FedSpeech: Federated Text-to-Speech with Continual Learning](https://www.ijcai.org/proceedings/2021/527), Ziyue Jiang, **Yi Ren**, et al. 
-- ``KDD 2020`` [DeepSinger: Singing Voice Synthesis with Data Mined From the Web](https://dl.acm.org/doi/abs/10.1145/3394486.3403249), **Yi Ren**, Xu Tan, Tao Qin, et al. \| [**Project**](https://speechresearch.github.io/deepsinger/) -- ``KDD 2020`` [LRSpeech: Extremely Low-Resource Speech Synthesis and Recognition](https://dl.acm.org/doi/abs/10.1145/3394486.3403331), Jin Xu, Xu Tan, **Yi Ren**, et al. \| [**Project**](https://speechresearch.github.io/lrspeech/) -- ``INTERSPEECH 2020`` [MultiSpeech: Multi-Speaker Text to Speech with Transformer](https://www.isca-speech.org/archive/Interspeech_2020/pdfs/3139.pdf), Mingjian Chen, Xu Tan, **Yi Ren**, et al. \| [**Project**](https://speechresearch.github.io/multispeech/) -- ``ICML 2019`` (Oral) [Almost Unsupervised Text to Speech and Automatic Speech Recognition](https://pdfs.semanticscholar.org/9075/a3e6271e5ef4953491488d1776527e632408.pdf), **Yi Ren**, Xu Tan, Tao Qin, et al. \| [**Project**](https://speechresearch.github.io/unsuper/) - -## 👄 TalkingFace & Avatar - -
ICLR 2024
sym
-
+ -[Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis](https://openreview.net/forum?id=7ERQPyR2eb), Zhenhui Ye, Tianyun Zhong, Yi Ren, et al. (Spotlight) [**Project**](https://real3dportrait.github.io/) | [**Code**](https://github.com/yerfor/Real3DPortrait) +- Evidence-first **test-time training** with all backbones frozen; update a small set of continuous soft prompts guided by **visual chain-of-thought boxes**. +
- -- `ICLR 2023` [GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis](https://openreview.net/forum?id=YfwMIDhPccD), Zhenhui Ye, Ziyue Jiang, **Yi Ren**, et al. -- `AAAI 2024` [AMD: Autoregressive Motion Diffusion](https://arxiv.org/abs/2305.09381), Bo Han, Hao Peng, Minjing Dong, **Yi Ren**, et al. -- ``AAAI 2022`` [Parallel and High-Fidelity Text-to-Lip Generation](https://arxiv.org/abs/2107.06831), Jinglin Liu, Zhiying Zhu, **Yi Ren**, et al. \| [![](https://img.shields.io/github/stars/Dianezzy/ParaLip?style=social&label=ParaLip Stars)](https://github.com/Dianezzy/ParaLip) -- ``AAAI 2022`` [Flow-based Unconstrained Lip to Speech Generation](https://ojs.aaai.org/index.php/AAAI/article/view/19966), Jinzheng He, Zhou Zhao, **Yi Ren**, et al. -- ``ACM-MM 2020`` [FastLR: Non-Autoregressive Lipreading Model with Integrate-and-Fire](https://dl.acm.org/doi/10.1145/3394171.3413740), Jinglin Liu, **Yi Ren**, et al. - -## 📚 Machine Translation -- ``ACL 2023`` [AV-TranSpeech: Audio-Visual Robust Speech-to-Speech Translation](), Rongjie Huang, Huadai Liu, Xize Cheng, **Yi Ren**, et al. -- `ICLR 2023` [TranSpeech: Speech-to-Speech Translation With Bilateral Perturbation](https://openreview.net/forum?id=UVAmFAtC5ye), Rongjie Huang, Jinglin Liu, Huadai Liu, **Yi Ren**, Lichao Zhang, Jinzheng He, Zhou Zhao -- ``AAAI 2021`` [UWSpeech: Speech to Speech Translation for Unwritten Languages](https://arxiv.org/abs/2006.07926), Chen Zhang, Xu Tan, **Yi Ren**, et al. \| [**Project**](https://speechresearch.github.io/uwspeech/) -- ``IJCAI 2020`` [Task-Level Curriculum Learning for Non-Autoregressive Neural Machine Translation](https://www.ijcai.org/Proceedings/2020/0534.pdf), Jinglin Liu, **Yi Ren**, Xu Tan, et al. -- ``ACL 2020`` [SimulSpeech: End-to-End Simultaneous Speech to Text Translation](https://www.aclweb.org/anthology/2020.acl-main.350), **Yi Ren**, Jinglin Liu, Xu Tan, et al. 
-- ``ACL 2020`` [A Study of Non-autoregressive Model for Sequence Generation](https://arxiv.org/abs/2004.10454), **Yi Ren**, Jinglin Liu, Xu Tan, et al. -- ``ICLR 2019`` [Multilingual Neural Machine Translation with Knowledge Distillation](https://openreview.net/forum?id=S1gUsoR9YX), Xu Tan, **Yi Ren**, Di He, et al. - - -## 🎼 Music & Dance Generation -- ``IEEE TMM`` [SDMuse: Stochastic Differential Music Editing and Generation via Hybrid Representation](https://ieeexplore.ieee.org/document/10149095), Chen Zhang, Yi Ren, Kejun Zhang, Shuicheng Yan. -- ``AAAI 2021`` [SongMASS: Automatic Song Writing with Pre-training and Alignment Constraint](https://arxiv.org/abs/2012.05168), Zhonghao Sheng, Kaitao Song, Xu Tan, **Yi Ren**, et al. -- ``ACM-MM 2020`` (Oral) [PopMAG: Pop Music Accompaniment Generation](https://dl.acm.org/doi/10.1145/3394171.3413721), **Yi Ren**, Jinzheng He, Xu Tan, et al. \| [**Project**](https://speechresearch.github.io/popmag/) - -## 🧑‍🎨 Generative Model -- ``ICLR 2022`` [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://openreview.net/forum?id=PlKWVd2yBkY), Luping Liu, **Yi Ren**, Zhijie Lin, Zhou Zhao \| [![](https://img.shields.io/github/stars/luping-liu/PNDM?style=social&label=Code+Stars)](https://github.com/luping-liu/PNDM) \| [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/pseudo-numerical-methods-for-diffusion-models-1/image-generation-on-celeba-64x64)](https://paperswithcode.com/sota/image-generation-on-celeba-64x64?p=pseudo-numerical-methods-for-diffusion-models-1) - -## Others -- `NeurIPS 2023` [Unsupervised Video Domain Adaptation for Action Recognition: A Disentanglement Perspective](https://openreview.net/forum?id=Rp4PA0ez0m), Pengfei Wei, Lingdong Kong, Xinghua Qu, **Yi Ren**, et al. 
-- ``ACM-MM 2022`` [Video-Guided Curriculum Learning for Spoken Video Grounding](), Yan Xia, Zhou Zhao, Shangwei Ye, Yang Zhao, Haoyuan Li, **Yi Ren** \ No newline at end of file diff --git a/_pages/includes/pub_short.md b/_pages/includes/pub_short.md index efa9775b11..8b13789179 100644 --- a/_pages/includes/pub_short.md +++ b/_pages/includes/pub_short.md @@ -1,32 +1 @@ -# 💻 Selected Research Papers - -My full paper list is shown at [my personal homepage](https://rayeren.github.io). - -#### 🎙 Audio and Speech Processing -- ``ICLR 2021`` [FastSpeech 2: Fast and High-Quality End-to-End Text to Speech](https://arxiv.org/abs/2006.04558), **Yi Ren**, Chenxu Hu, Xu Tan, et al. -- ``NeurIPS 2019`` [FastSpeech: Fast, Robust and Controllable Text to Speech](https://papers.nips.cc/paper/8580-fastspeech-fast-robust-and-controllable-text-to-speech.pdf), **Yi Ren**, Yangjun Ruan, Xu Tan, et al. -- `ICLR 2024` [Mega-TTS 2: Boosting Prompting Mechanisms for Zero-Shot Speech Synthesis](https://openreview.net/forum?id=mvMI3N4AvD), Ziyue Jiang, Jinglin Liu, **Yi Ren**, et al. -- ``AAAI 2022`` [DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism](https://arxiv.org/abs/2105.02446), Jinglin Liu, Chengxi Li, **Yi Ren**, et al. 
[**Project**](https://diffsinger.github.io/) \| [![](https://img.shields.io/github/stars/NATSpeech/NATSpeech?style=social&label=DiffSpeech+Stars)](https://github.com/NATSpeech/NATSpeech) \| [![](https://img.shields.io/github/stars/MoonInTheRiver/DiffSinger?style=social&label=DiffSinger+Stars)](https://github.com/MoonInTheRiver/DiffSinger) \| [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-blue?label=Demo)](https://huggingface.co/spaces/NATSpeech/DiffSpeech) -- ``NeurIPS 2021`` [PortaSpeech: Portable and High-Quality Generative Text-to-Speech](https://arxiv.org/abs/2109.15166), **Yi Ren**, Jinglin Liu, Zhou Zhao, [**Project**](https://portaspeech.github.io/) \| [![](https://img.shields.io/github/stars/NATSpeech/NATSpeech?style=social&label=Code+Stars)](https://github.com/NATSpeech/NATSpeech) \| [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-blue?label=Demo)](https://huggingface.co/spaces/NATSpeech/PortaSpeech) -- ``ICML 2023`` [Make-An-Audio: Text-To-Audio Generation with Prompt-Enhanced Diffusion Models](https://text-to-audio.github.io/paper.pdf), Rongjie Huang, Jiawei Huang, Dongchao Yang, **Yi Ren**, et al. -- ``ICLR 2023`` [Bag of Tricks for Unsupervised Text-to-Speech](https://openreview.net/forum?id=SbR9mpTuBn), **Yi Ren**, Chen Zhang, Shuicheng Yan -- ``ACL 2022`` [Learning the Beauty in Songs: Neural Singing Voice Beautifier](https://arxiv.org/abs/2202.13277), Jinglin Liu, Chengxi Li, **Yi Ren**, Zhiying Zhu, Zhou Zhao \| [![](https://img.shields.io/github/stars/MoonInTheRiver/NeuralSVB?style=social&label=Code+Stars)](https://github.com/MoonInTheRiver/NeuralSVB) -- ``NeurIPS 2022`` [Dict-TTS: Learning to Pronounce with Prior Dictionary Knowledge for Text-to-Speech](), Ziyue Jiang, Zhe Su, Zhou Zhao, Qian Yang, **Yi Ren**, et al. 
[![](https://img.shields.io/github/stars/Zain-Jiang/Dict-TTS?style=social&label=Code+Stars)](https://github.com/Zain-Jiang/Dict-TTS) - -#### 👄 Talkingface Generation -- ``ICLR 2024`` [Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis](https://openreview.net/forum?id=7ERQPyR2eb), Zhenhui Ye, Tianyun Zhong, **Yi Ren**, et al. -- ``ICLR 2023`` [GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis](https://openreview.net/forum?id=YfwMIDhPccD), Zhenhui Ye, Ziyue Jiang`, **Yi Ren**, et al. - -#### 📚 Machine Translation -- ``ACL 2023`` [AV-TranSpeech: Audio-Visual Robust Speech-to-Speech Translation](), Rongjie Huang, Huadai Liu, Xize Cheng, **Yi Ren**, et al. -- ``ICLR 2023`` [TranSpeech: Speech-to-Speech Translation With Bilateral Perturbation](https://openreview.net/forum?id=UVAmFAtC5ye), Rongjie Huang, Jinglin Liu, Huadai Liu, **Yi Ren**, et al. -- ``ACL 2020`` [SimulSpeech: End-to-End Simultaneous Speech to Text Translation](https://www.aclweb.org/anthology/2020.acl-main.350), **Yi Ren**, et al. -- ``ICLR 2019`` [Multilingual Neural Machine Translation with Knowledge Distillation](https://openreview.net/forum?id=S1gUsoR9YX), Xu Tan, **Yi Ren**, et al. - -#### 🎼 Music Generation -- ``ACM-MM 2020`` [PopMAG: Pop Music Accompaniment Generation](https://dl.acm.org/doi/10.1145/3394171.3413721), **Yi Ren**, Jinzheng He, Xu Tan, et al. - -#### 🧑‍🎨 Generative Model -- ``ICLR 2022`` [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://openreview.net/forum?id=PlKWVd2yBkY), Luping Liu, **Yi Ren**, et al. 
\| [![](https://img.shields.io/github/stars/luping-liu/PNDM?style=social&label=Code+Stars)](https://github.com/luping-liu/PNDM) \| [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/pseudo-numerical-methods-for-diffusion-models-1/image-generation-on-celeba-64x64)](https://paperswithcode.com/sota/image-generation-on-celeba-64x64?p=pseudo-numerical-methods-for-diffusion-models-1)
-
diff --git a/_sass/_base.scss b/_sass/_base.scss
index 4d792fa476..358750fe4d 100644
--- a/_sass/_base.scss
+++ b/_sass/_base.scss
@@ -21,8 +21,14 @@ body {
   }
 }
 
+/* global image responsiveness */
+img {
+  max-width: 100%;
+  height: auto;
+}
+
 h1, h2, h3, h4, h5, h6 {
-  margin: 1em 0 0.5em;
+  margin: 0.8em 0 0.4em;
   line-height: 1.2;
   font-family: $header-font-family;
   font-weight: bold;
diff --git a/_sass/_masthead.scss b/_sass/_masthead.scss
index 90397f2c53..64fcff11a0 100644
--- a/_sass/_masthead.scss
+++ b/_sass/_masthead.scss
@@ -7,6 +7,7 @@
   top: 0;
   background-color: white;
   border-bottom: 1px solid $border-color;
+  box-shadow: 0 8px 24px rgba(0,0,0,0.06);
   -webkit-animation: intro 0.3s both;
   animation: intro 0.3s both;
   -webkit-animation-delay: 0.15s;
@@ -18,7 +19,9 @@
   @include clearfix;
   padding: .5em;
   font-family: $sans-serif-narrow;
-
+  position: relative;
+  backdrop-filter: saturate(180%) blur(6px);
+
   @include breakpoint($x-large) {
     max-width: $x-large;
   }
@@ -61,4 +64,21 @@
   }
 }
 
+.theme-toggle {
+  position: absolute;
+  right: 0.5em;
+  top: 0.25em;
+  z-index: 25;
+
+  button {
+    background: transparent;
+    border: 1px solid $border-color;
+    border-radius: 999px;
+    padding: 0.25em 0.5em;
+    font-size: 1rem;
+    line-height: 1;
+    cursor: pointer;
+  }
+}
+
diff --git a/_sass/_mixins.scss b/_sass/_mixins.scss
index 14782b1942..09b6a83b05 100644
--- a/_sass/_mixins.scss
+++ b/_sass/_mixins.scss
@@ -43,8 +43,6 @@
 */
 
 @mixin clearfix {
-  clear: both;
-
   &::after {
     clear: both;
     content: "";
diff --git a/_sass/_page.scss b/_sass/_page.scss
index 32c17f86bb..60167c9ce2 100644
--- a/_sass/_page.scss
+++ b/_sass/_page.scss
@@ -5,31 +5,46 @@
 #main {
   @include container;
   @include clearfix;
-  margin-top: 1em;
+  margin-top: 0;
   padding-left: 1em;
   padding-right: 1em;
   animation: intro 0.3s both;
   animation-delay: 0.35s;
 
   @include breakpoint($x-large) {
-    max-width: $x-large;
+    max-width: $large;
+  }
+
+  @include breakpoint($large) {
+    display: flex;
+    align-items: flex-start;
+    gap: 3em;
   }
 }
 
 .page {
+  margin-top: 0;
   @include breakpoint($large) {
-    @include span(10 of 12 last);
-    @include prefix(0.5 of 12);
+    @include span(9 of 12);
+    @include prefix(0 of 12);
     @include suffix(0 of 12);
+    @include nobreak();
+    float: none;
+    flex: 1 1 auto;
+    order: 1;
+    width: auto;
+    min-width: 0;
   }
 
   .page__inner-wrap {
     @include full();
+    @include nobreak();
 
     .page__content,
     .page__meta,
     .page__share {
       @include full();
+      @include nobreak();
     }
   }
 }
@@ -49,27 +64,93 @@
 }
 
 .page__content {
-  #about-me{
-    margin-top: -10em;
-    &:before {
-      content: '';
-      display: block;
-      position: relative;
-      width: 0;
-      height: 10em;
-      margin-top: -10em;
-    }
-  }
+  #about-me{ margin-top: 0; }
 
   h1 {
-    margin-top: 1em;
+    margin-top: 2.5em;
     padding-bottom: 0.5em;
     border-bottom: 1px solid $border-color;
   }
+  .anchor + h1 { margin-top: 0; border-bottom: 0; }
 
   p, li, dl {
     font-size: 1em;
   }
 
+  .bio-wrap{
+    display: grid;
+    grid-template-columns: 1fr;
+    align-items: start;
+    row-gap: 1em;
+    @include breakpoint($medium){
+      grid-template-columns: 1fr 200px;
+      column-gap: 2em;
+      row-gap: 0;
+    }
+    @include breakpoint($large){
+      grid-template-columns: 1fr 220px;
+    }
+  }
+
+  .bio-main{
+    flex: 1 1 auto;
+    min-width: 0;
+    .bio-title{ margin: 0 0 .5em; font-size: $type-size-3; background: transparent; border: 0; box-shadow: none; padding: 0; }
+    input{ display: none; }
+  }
+
+  .bio-side{
+    flex: 0 0 260px;
+    width: 260px;
+
+    .profile_box{ display: block; }
+    .author__avatar img{ max-width: 180px; }
+    .author__name{ display: none; }
+    .author__bio{ display: none; }
+    .author__urls{ display: none !important; }
+    .author__urls_sm{ display: block !important; font-size: 1.25em; }
+
+    @include breakpoint($small){
+      flex: none;
+      width: 100%;
+      margin: 0.5em 0 1em;
+    }
+  }
+
+  .bio-fixed{
+    width: 100%;
+    max-width: 220px;
+    margin: 0 auto;
+    @include breakpoint($medium){
+      width: 200px;
+      max-width: none;
+      margin: 0;
+    }
+    @include breakpoint($large){
+      width: 220px;
+    }
+    .bio-photo{ display: flex; justify-content: center; }
+    .bio-photo img{ width: 100%; height: auto; border-radius: 6px; box-shadow: 0 2px 6px rgba(0,0,0,.2); }
+    @include breakpoint($small){
+      .bio-photo img{ max-height: 40vh; object-fit: contain; }
+    }
+    .bio-motto{ margin:.6em 0 0; font-size:$type-size-5; font-style: italic; text-align:center; }
+  }
+
+  .bio-main pre{
+    background: transparent;
+    border: 0;
+    padding: 0;
+    margin: 0;
+  }
+
+  .bio-main .highlight,
+  .bio-main code{ background: transparent; border: 0; box-shadow: none; }
+
+  .inline-links{ display:grid; grid-template-columns: repeat(3, minmax(0,1fr)); column-gap:.75em; row-gap:.4em; align-items:center; font-size:$type-size-5; margin:.4em 0 1em; max-width:100%; }
+  .inline-links .il-item{ display:inline-flex; align-items:center; gap:.25em; min-width:0; }
+  .inline-links a{ text-decoration:none; white-space:nowrap; overflow:hidden; text-overflow:ellipsis; }
+  .inline-links i{ margin-right:.25em; }
+
   /* paragraph indents */
   p {
     margin: 0 0 $indent-var;
@@ -410,3 +491,4 @@
     font-size: $type-size-6;
     text-transform: uppercase;
   }
+  .contact-line{ margin: 1.2em 0; }
diff --git a/_sass/_sidebar.scss b/_sass/_sidebar.scss
index 2b635af623..33fe2d7b69 100644
--- a/_sass/_sidebar.scss
+++ b/_sass/_sidebar.scss
@@ -14,7 +14,12 @@
   margin-bottom: 1em;
 
   @include breakpoint($large) {
-    @include span(2 of 12);
+    float: none;
+    margin-left: 0;
+    width: 300px;
+    max-width: 300px;
+    flex: 0 0 300px;
+    order: 2;
     opacity: 1;
     -webkit-transition: opacity 0.2s ease-in-out;
     transition: opacity 0.2s ease-in-out;
@@ -97,8 +102,11 @@
     border-radius: 50%;
 
     @include breakpoint($large) {
-      padding: 5px;
-      border: 1px solid $border-color;
+      width: auto;
+      height: auto;
+      padding: 0;
+      border: 0;
+      box-shadow: 0 0 0 1px $border-color;
     }
   }
 }
@@ -121,6 +129,7 @@
 .author__name {
   margin: 0;
+  white-space: nowrap;
 
   @include breakpoint($large) {
     margin-top: 10px;
diff --git a/_sass/_syntax.scss b/_sass/_syntax.scss
index e139514c94..e48d80687e 100644
--- a/_sass/_syntax.scss
+++ b/_sass/_syntax.scss
@@ -33,6 +33,12 @@ div.highlighter-rouge, figure.highlight {
   }
 }
 
+@include breakpoint(max-width $small) {
+  div.highlighter-rouge, figure.highlight {
+    &:before { display: none; }
+  }
+}
+
 .highlighter-rouge{
   background-color: #03228d;
   color: white;
diff --git a/_sass/_variables.scss b/_sass/_variables.scss
index c7ffff96e0..d1a0736722 100644
--- a/_sass/_variables.scss
+++ b/_sass/_variables.scss
@@ -14,14 +14,14 @@ $indent-var : 0.5em;
 
 /* system typefaces */
 $serif : Georgia, Times, serif;
-$sans-serif : "Trebuchet MS", Helvetica, sans-serif;
+$sans-serif : -apple-system, system-ui, "SF Pro Text", "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
 // $sans-serif : Georgia, serif, sans-serif;
 // $sans-serif : -apple-system, ".SFNSText-Regular", "San Francisco", "Roboto", "Segoe UI", "Helvetica Neue", "Lucida Grande", Arial, sans-serif;
 $monospace : Monaco, Consolas, "Lucida Console", monospace;
 
 /* sans serif typefaces */
-$sans-serif-narrow : $sans-serif;
+$sans-serif-narrow : "Times New Roman", Times, serif;
 $helvetica : Helvetica, "Helvetica Neue", Arial, sans-serif;
 
 /* serif typefaces */
@@ -31,9 +31,9 @@ $bodoni : "Bodoni MT", serif;
 $calisto : "Calisto MT", serif;
 $garamond : Garamond, serif;
 
-$global-font-family : $sans-serif;
-$header-font-family : $sans-serif;
-$caption-font-family : $serif;
+$global-font-family : "Times New Roman", Times, serif;
+$header-font-family : "Times New Roman", Times, serif;
+$caption-font-family : "Times New Roman", Times, serif;
 
 /* type scale */
 $type-size-1 : 2.441em; // ~39.056px
@@ -117,8 +117,8 @@ $x-large : 1800px;
 $small : 600px !default;
 $medium : 768px !default;
 $medium-wide : 900px !default;
-$large : 925px !default;
-$x-large : 1280px !default;
+$large : 900px !default;
+$x-large : 1160px !default;
 
 /* Grid
diff --git a/assets/css/main.scss b/assets/css/main.scss
index b48c010bb4..7cbc758775 100644
--- a/assets/css/main.scss
+++ b/assets/css/main.scss
@@ -10,7 +10,7 @@
  *
  */
 
-@import "vendor/breakpoint/breakpoint"; // media query mixins
+@import "vendor/breakpoint/_breakpoint"; /* media query mixins */
 @import "variables";
 @import "mixins";
 @import "vendor/susy/susy";
@@ -41,69 +41,306 @@
 @import "print";
 
 .paper-box {
-  display: flex;
-  justify-content: left;
+  display: flex;
+  justify-content: left;
+  align-items: center;
+  flex-direction: row;
+  flex-wrap: wrap;
+  border-bottom: 1px #efefef solid;
+  padding: 0.8em 1.2em 0.6em;
+  min-height: 0;
+  margin-bottom: 0.3em;
+  background: rgba(255,255,255,0.6);
+  border-radius: 12px;
+  box-shadow: 0 6px 20px rgba(0,0,0,0.06);
+  transition: transform .2s ease, box-shadow .2s ease;
+
+  &:hover {
+    transform: translateY(-2px);
+    box-shadow: 0 10px 28px rgba(0,0,0,0.1);
+  }
+
+  .paper-box-image {
+    justify-content: center;
     align-items: center;
-    flex-direction: row;
-    flex-wrap: wrap;
-    border-bottom: 1px #efefef solid;
-    padding: 2em 0 2em 0;
+    display: flex;
+    width: 100%;
+    height: auto;
+    order: 1;
+    flex: 1 1 100%;
+    position: relative;
-
-    .paper-box-image{
-      justify-content: center;
-      display: flex;
-      width: 100%;
-      order: 2;
-      img {
-        max-width: 400px;
-        box-shadow: 3px 3px 6px #888;
-        object-fit: cover;
-      }
+    > div {
+      position: relative;
+      width: 100%;
+      height: 100%;
     }
-
-    .paper-box-text{
-      max-width: 100%;
-      order: 1;
+    img {
+      width: 100%;
+      height: auto;
+      max-width: 100%;
+      box-shadow: 3px 3px 6px #888;
+      object-fit: contain;
     }
-
-    @include breakpoint($medium) {
-      .paper-box-image{
-        justify-content: left;
-        min-width: 200px;
-        max-width: 40%;
-        order: 1;
-      }
-
-      .paper-box-text{
-        justify-content: left;
-        padding-left: 2em;
-        max-width: 60%;
-        order: 2;
-      }
+  }
-    }
+  .paper-box-text {
+    max-width: 100%;
+    order: 2;
+    flex: 1 1 100%;
+    margin-top: 0.6em;
+    padding-left: 0;
+  }
+  @include breakpoint($medium) {
+    flex-wrap: nowrap;
+    align-items: center;
+    .paper-box-image {
+      justify-content: center;
+      align-items: center;
+      align-self: center;
+      width: auto;
+      flex: 0 0 40%;
+      min-width: 220px;
+    }
+
+    .paper-box-text {
+      justify-content: left;
+      max-width: 60%;
+      flex: 1 1 auto;
+      margin-top: 0;
+      padding-left: 1.3em;
+      font-size: 1.02em;
+    }
+  }
 }
 
-$scroll_offset : 2em;
-h1:before, .anchor:before {
-  content: '';
-  display: block;
-  position: relative;
-  width: 0;
-  height: $scroll_offset;
-  margin-top: -$scroll_offset;
+html.theme-dark .paper-box {
+  background: rgba(32,32,32,0.6);
+  box-shadow: 0 6px 20px rgba(0,0,0,0.4);
 }
 
+$scroll_offset: 0;
+.anchor:before { content: ""; display: block; width: 0; height: 0; margin-top: 0; }
+
 .badge {
-  padding-left: 1rem;
-  padding-right: 1rem;
-  position: absolute;
-  margin-top: .5em;
-  margin-left: -.5em;
-  color: white;
-  background-color: #00369f;
-  font-size: .8em;
-}
\ No newline at end of file
+  padding-left: 1rem;
+  padding-right: 1rem;
+  position: absolute;
+  top: 0.5em;
+  left: 0.5em;
+  color: white;
+  background-color: #00369f;
+  font-size: 0.8em;
+  border-radius: 6px;
+  pointer-events: none;
+}
+body {
+  font-size: 18px; // the default is usually 16px; adjust to 18-20px to taste
+  line-height: 1.8; // extra line spacing improves readability
+  background: linear-gradient(180deg, #ffffff 0%, #f6f7fb 100%);
+}
+
+html.theme-dark body {
+  background: linear-gradient(180deg, #0f0f0f 0%, #171717 100%);
+  color: #e0e0e0;
+}
+
+html.theme-dark a {
+  color: #8ab4f8;
+}
+html.theme-dark a:visited { color: #a0bdfc; }
+html.theme-dark a:hover { color: #bcd2ff; }
+
+html.theme-dark .masthead {
+  background-color: #1a1a1a;
+  border-bottom-color: #333;
+}
+
+html.theme-dark .greedy-nav {
+  background: #1a1a1a;
+}
+
+html.theme-dark .greedy-nav a { color: #f0f0f0; }
+
+html.theme-dark .greedy-nav .visible-links a:before {
+  background: rgba(255, 255, 255, 0.3);
+}
+
+html.theme-dark .greedy-nav .hidden-links {
+  background: #1e1e1e;
+  border-color: #333;
+  box-shadow: 0 0 10px rgba(0, 0, 0, 0.5);
+}
+
+html.theme-dark .greedy-nav .hidden-links:after {
+  border-color: #1e1e1e transparent;
+}
+
+html.theme-dark .page__footer {
+  background-color: #0f0f0f;
+  color: #aaa;
+  border-top-color: #333;
+}
+
+html.theme-dark .page__content h1 {
+  border-bottom-color: #333;
+}
+html.theme-dark .page__content h1,
+html.theme-dark .page__content h2,
+html.theme-dark .page__content h3 { color: #f2f2f2; }
+
+html.theme-dark .page__share,
+html.theme-dark .page__related,
+html.theme-dark .page__comments-title {
+  border-top-color: #333;
+}
+
+html.theme-dark .page__comments-form {
+  background: #1a1a1a;
+}
+
+html.theme-dark p > code,
+html.theme-dark a > code,
+html.theme-dark li > code,
+html.theme-dark figcaption > code,
+html.theme-dark td > code {
+  background: #1e1e1e;
+  border-color: #333;
+}
+html.theme-dark .page__content strong,
+html.theme-dark .page__content b { color: #ffffff; }
+html.theme-dark .page__content em,
+html.theme-dark .page__content i { color: #e6e6e6; }
+html.theme-dark .page__content mark { background: #3a5fff; color: #fff; border-radius: 2px; }
+
+html.theme-dark .toc {
+  background-color: #1a1a1a;
+  border-color: #333;
+  color: #bbb;
+}
+
+html.theme-dark .toc .nav__title {
+  background: #333;
+}
+html.theme-dark .theme-toggle button {
+  border-color: #333;
+}
+html.theme-dark .author__avatar img {
+  border-color: #444;
+  box-shadow: 0 0 0 1px #444;
+}
+.author__avatar img { box-shadow: 0 0 0 1px $border-color; }
+
+html.theme-dark .author__urls .fas,
+html.theme-dark .author__urls .fab,
+html.theme-dark .author__urls .far,
+html.theme-dark .author__urls .fal {
+  color: #ddd;
+}
+html.theme-dark .author__urls .fa { color: #ddd; }
+html.theme-dark .author__urls a {
+  color: #ddd;
+}
+html.theme-dark .author__urls_sm a {
+  color: #ddd;
+}
+
+.theme-toggle-fixed {
+  position: fixed;
+  top: 0.75rem;
+  right: 0.75rem;
+  z-index: 1000;
+}
+
+.theme-toggle-fixed button {
+  background: transparent;
+  border: 1px solid $border-color;
+  border-radius: 999px;
+  padding: 0.4em 0.6em;
+  font-size: 1.2rem;
+  line-height: 1;
+  color: $dark-gray;
+}
+
+.theme-toggle-fixed svg {
+  width: 26px;
+  height: 26px;
+}
+
+.theme-toggle-fixed .icon-sun { display: none; }
+.theme-toggle-fixed .icon-moon { display: inline; }
+
+html.theme-dark .theme-toggle-fixed button { color: #e0e0e0; }
+html.theme-dark .theme-toggle-fixed .icon-sun { display: inline; }
+html.theme-dark .theme-toggle-fixed .icon-moon { display: none; }
+html.theme-dark .theme-toggle-fixed button { border-color: #333; }
+
+html.theme-dark ::selection {
+  background: #3a5fff;
+  color: #fff;
+}
+html.theme-dark ::-moz-selection {
+  background: #3a5fff;
+  color: #fff;
+}
+
+
+html.theme-dark a:hover {
+  text-shadow: 0 0 8px rgba(138, 180, 248, 0.6);
+}
+
+/* vertical accent bars before name and welcome link */
+.author__name {
+  display: inline-flex;
+  align-items: center;
+}
+.author__name:before {
+  content: "";
+  display: inline-block;
+  width: 4px;
+  height: 1.2em;
+  background-color: #00369f;
+  border-radius: 2px;
+  margin-right: 8px;
+}
+
+.masthead__menu-home-item a {
+  display: inline-flex;
+  align-items: center;
+  font-weight: 700;
+}
+html:not(.theme-dark) .masthead__menu-home-item a { color: $text-color; }
+
+.accent-bar {
+  display: inline-block;
+  width: 4px;
+  height: 1em;
+  background-color: #00369f;
+  border-radius: 2px;
+  margin-right: 8px;
+  vertical-align: middle;
+}
+
+.boxed-link {
+  display: inline-block;
+  border: 1px solid #00369f;
+  border-radius: 6px;
+  padding: 0.1em 0.4em;
+  margin-left: 0.6em;
+}
+
+.vitem {
+  display: flex;
+  align-items: center;
+  margin: 0.25em 0;
+}
+.accent-bar-sm {
+  display: inline-block;
+  width: 4px;
+  height: 1em;
+  background-color: #00369f;
+  border-radius: 2px;
+  margin-right: 8px;
+}
diff --git a/assets/js/_main.js b/assets/js/_main.js
index 83a5fad921..91b162fba7 100644
--- a/assets/js/_main.js
+++ b/assets/js/_main.js
@@ -2,19 +2,19 @@
    jQuery plugin settings and other scripts
    ========================================================================== */
 
-$(document).ready(function(){
-  // Sticky footer
-  var bumpIt = function() {
+$(document).ready(function () {
+  // Sticky footer
+  var bumpIt = function () {
       $("body").css("margin-bottom", $(".page__footer").outerHeight(true));
     },
     didResize = false;
 
   bumpIt();
 
-  $(window).resize(function() {
+  $(window).resize(function () {
     didResize = true;
   });
-  setInterval(function() {
+  setInterval(function () {
     if (didResize) {
       didResize = false;
       bumpIt();
@@ -26,8 +26,11 @@
   // init sticky sidebar
   $(".sticky").Stickyfill();
 
-  var stickySideBar = function(){
-    var show = $(".author__urls-wrapper button").length === 0 ? $(window).width() > 925 : !$(".author__urls-wrapper button").is(":visible");
+  var stickySideBar = function () {
+    var show =
+      $(".author__urls-wrapper button").length === 0
+        ? $(window).width() > 925
+        : !$(".author__urls-wrapper button").is(":visible");
     // console.log("has button: " + $(".author__urls-wrapper button").length === 0);
     // console.log("Window Width: " + windowWidth);
     // console.log("show: " + show);
@@ -46,22 +49,24 @@
   stickySideBar();
 
-  $(window).resize(function(){
+  $(window).resize(function () {
     stickySideBar();
   });
 
   // Follow menu drop down
-  $(".author__urls-wrapper button").on("click", function() {
-    $(".author__urls").fadeToggle("fast", function() {});
+  $(".author__urls-wrapper button").on("click", function () {
+    $(".author__urls").fadeToggle("fast", function () {});
     $(".author__urls-wrapper button").toggleClass("open");
   });
 
   // init smooth scroll
-  $("a").smoothScroll({offset: -20});
+  $("a").smoothScroll({ offset: -20 });
 
   // add lightbox class to all image links
-  $("a[href$='.jpg'],a[href$='.jpeg'],a[href$='.JPG'],a[href$='.png'],a[href$='.gif']").addClass("image-popup");
+  $(
+    "a[href$='.jpg'],a[href$='.jpeg'],a[href$='.JPG'],a[href$='.png'],a[href$='.gif']"
+  ).addClass("image-popup");
 
   // Magnific-Popup options
   $(".image-popup").magnificPopup({
@@ -71,12 +76,12 @@
     //   }
     //   return true;
     // },
-    type: 'image',
-    tLoading: 'Loading image #%curr%...',
+    type: "image",
+    tLoading: "Loading image #%curr%...",
     gallery: {
       enabled: true,
       navigateByImgClick: true,
-      preload: [0,1] // Will preload 0 - before current, and 1 after the current image
+      preload: [0, 1], // Will preload 0 - before current, and 1 after the current image
     },
     image: {
       tError: 'Image #%curr% could not be loaded.',
@@ -84,15 +89,19 @@
     removalDelay: 500, // Delay in milliseconds before popup is removed
     // Class that is added to body when popup is open.
     // make it unique to apply your CSS animations just to this exact popup
-    mainClass: 'mfp-zoom-in',
+    mainClass: "mfp-zoom-in",
     callbacks: {
-      beforeOpen: function() {
+      beforeOpen: function () {
         // just a hack that adds mfp-anim class to markup
-        this.st.image.markup = this.st.image.markup.replace('mfp-figure', 'mfp-figure mfp-with-anim');
-      }
+        this.st.image.markup = this.st.image.markup.replace(
+          "mfp-figure",
+          "mfp-figure mfp-with-anim"
+        );
+      },
     },
     closeOnContentClick: true,
-    midClick: true // allow opening popup on middle mouse click. Always set it to true if you don't provide alternative source.
+    midClick: true, // allow opening popup on middle mouse click. Always set it to true if you don't provide alternative source.
   });
+  // theme toggle handled inline in scripts.html to avoid duplicate handlers
 });
diff --git a/images/500k.jpg b/images/500k.jpg
new file mode 100644
index 0000000000..17dd5521b0
Binary files /dev/null and b/images/500k.jpg differ
diff --git a/images/TTT.jpg b/images/TTT.jpg
new file mode 100644
index 0000000000..09d7ec663d
Binary files /dev/null and b/images/TTT.jpg differ
diff --git a/images/android-chrome-192x192.png b/images/android-chrome-192x192.png
index 20035d9328..a27dec3755 100755
Binary files a/images/android-chrome-192x192.png and b/images/android-chrome-192x192.png differ
diff --git a/images/android-chrome-512x512.png b/images/android-chrome-512x512.png
index 61b9f26be1..6a970cf918 100755
Binary files a/images/android-chrome-512x512.png and b/images/android-chrome-512x512.png differ
diff --git a/images/apple-touch-icon.png b/images/apple-touch-icon.png
index b67f1bf511..d4b1f6bf4f 100755
Binary files a/images/apple-touch-icon.png and b/images/apple-touch-icon.png differ
diff --git a/images/dermeval.jpg b/images/dermeval.jpg
new file mode 100644
index 0000000000..9094a5d9d2
Binary files /dev/null and b/images/dermeval.jpg differ
diff --git a/images/diffsinger.png b/images/diffsinger.png
deleted file mode 100644
index a6b2c92069..0000000000
Binary files a/images/diffsinger.png and /dev/null differ
diff --git a/images/favicon-16x16.png b/images/favicon-16x16.png
index 5aa57024aa..9753ec1c97 100755
Binary files a/images/favicon-16x16.png and b/images/favicon-16x16.png differ
diff --git a/images/favicon-32x32.png b/images/favicon-32x32.png
index 7ac55b62af..206b5379e9 100755
Binary files a/images/favicon-32x32.png and b/images/favicon-32x32.png differ
diff --git a/images/favicon.ico b/images/favicon.ico
index 93fda86cd2..6044634848 100755
Binary files a/images/favicon.ico and b/images/favicon.ico differ
diff --git a/images/fs.png b/images/fs.png
deleted file mode 100644
index 061988a9ff..0000000000
Binary files a/images/fs.png and /dev/null differ
diff --git a/images/fs2.png b/images/fs2.png
deleted file mode 100644
index fb07391e40..0000000000
Binary files a/images/fs2.png and /dev/null differ
diff --git a/images/logo-sea-header-desktop.webp b/images/logo-sea-header-desktop.webp
deleted file mode 100644
index 1e7f4163f6..0000000000
Binary files a/images/logo-sea-header-desktop.webp and /dev/null differ
diff --git a/images/logo1.png b/images/logo1.png
new file mode 100644
index 0000000000..c1ffd37440
Binary files /dev/null and b/images/logo1.png differ
diff --git a/images/mega.png b/images/mega.png
deleted file mode 100644
index ee0c274e8f..0000000000
Binary files a/images/mega.png and /dev/null differ
diff --git a/images/microsoft_logo.svg b/images/microsoft_logo.svg
deleted file mode 100644
index 54ffab35c6..0000000000
--- a/images/microsoft_logo.svg
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/images/portaspeech.png b/images/portaspeech.png
deleted file mode 100644
index 75ca78fe0a..0000000000
Binary files a/images/portaspeech.png and /dev/null differ
diff --git a/images/profile.jpg b/images/profile.jpg
new file mode 100644
index 0000000000..61c59820e2
Binary files /dev/null and b/images/profile.jpg differ
diff --git a/images/r1.png b/images/r1.png
new file mode 100644
index 0000000000..f531462c34
Binary files /dev/null and b/images/r1.png differ
diff --git a/images/real3d.png b/images/real3d.png
deleted file mode 100644
index 7d13cdef9c..0000000000
Binary files a/images/real3d.png and /dev/null differ
diff --git a/images/ry_profile.jpeg b/images/ry_profile.jpeg
deleted file mode 100644
index 877e2ec69f..0000000000
Binary files a/images/ry_profile.jpeg and /dev/null differ
diff --git a/images/skincare.jpg b/images/skincare.jpg
new file mode 100644
index 0000000000..c9f7a48ca2
Binary files /dev/null and b/images/skincare.jpg differ
diff --git a/images/skincare.png b/images/skincare.png
new file mode 100644
index 0000000000..9823f013b8
Binary files /dev/null and b/images/skincare.png differ
diff --git a/images/tiktok.png b/images/tiktok.png
deleted file mode 100644
index a1bc968e95..0000000000
Binary files a/images/tiktok.png and /dev/null differ
diff --git a/images/ttt.png b/images/ttt.png
new file mode 100644
index 0000000000..2a11ef2943
Binary files /dev/null and b/images/ttt.png differ
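All of the `html.theme-dark` rules in `main.scss` assume some script toggles the `theme-dark` class on the root element; the `_main.js` hunk only notes that this is "handled inline in scripts.html", which is not shown in this diff. A minimal sketch of what such a toggle could look like — `THEME_KEY`, `nextTheme`, and `applyTheme` are illustrative names, not taken from the repository:

```javascript
// Hypothetical theme-toggle sketch (not the repo's actual scripts.html code).
// It flips the `theme-dark` class on <html>, which every `html.theme-dark ...`
// rule in main.scss keys off, and persists the choice in localStorage.
var THEME_KEY = "preferred-theme"; // assumed storage key, not from the repo

// Pure helper: given the current theme name, return the other one.
function nextTheme(current) {
  return current === "dark" ? "light" : "dark";
}

// Add or remove the class the stylesheet's dark-mode selectors depend on.
function applyTheme(theme) {
  document.documentElement.classList.toggle("theme-dark", theme === "dark");
}

// DOM wiring, guarded so the pure helper above stays testable outside a browser.
if (typeof document !== "undefined") {
  var current = localStorage.getItem(THEME_KEY) || "light"; // restore on load
  applyTheme(current);

  // .theme-toggle-fixed button matches the fixed toggle styled in main.scss.
  var btn = document.querySelector(".theme-toggle-fixed button");
  if (btn) {
    btn.addEventListener("click", function () {
      current = nextTheme(current);
      localStorage.setItem(THEME_KEY, current);
      applyTheme(current);
    });
  }
}
```

Binding the class to `<html>` rather than `<body>` matters here: it lets the stylesheet restyle elements outside `<body>`'s cascade cleanly and, applied early in `<head>`, avoids a flash of the light theme before the page renders.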