Recent weather often differs from what we expect for the season (December days can still be warm), which makes it hard to decide what to wear. And when a season comes around again, we often find ourselves wondering what we wore at this time last year. To solve this problem, we came up with CLO-C.
CLO-C focuses on recommending clothes suited to the individual, based on the user's location and the current temperature. Using the object detection capability of the YOLOv8 model, it analyzes the clothes the user is currently wearing and suggests the best outfit accordingly. This allows users to choose clothes appropriate for the current conditions, regardless of the season.
| Name | Student ID | Email | Interests |
|---|---|---|---|
| 김민규 | 202035507 | mkkim01@gachon.ac.kr | Data |
| 구도연 | 202135705 | kdy1021@gachon.ac.kr | Computer Vision |
| 김민선 | 202135726 | minseon9286@naver.com | Camera Recognition / Frontend |
| 심서현 | 202135791 | angelsh0805@gachon.ac.kr | Backend |
| 정승민 | 202135832 | taky0315@naver.com | Vision / Data |
- The user uploads an image in the React Native app.
- The photo is saved in Firebase Storage, a collection document is created from the link, and its ID is returned to the app.
- The app converts the request into a form that is easy to send to FastAPI and passes the value along.
- The YOLO Python code receives the ID and accesses Firebase Storage to fetch the image.
- The fetched image is used as input to run a prediction.
- The prediction result image and the cropped pictures are stored in Firebase Storage, and their links are returned.
- A collection document is created from those links and saved in Firestore.
- The React Native app reads the stored Firestore document to get the images. (A minimal sketch of this server-side flow follows this list.)
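The actual server code is not reproduced here; the following is a minimal sketch of the flow described above, assuming the `firebase_admin` SDK, the `ultralytics` package, and hypothetical bucket, collection, and field names.

```python
# Minimal sketch of the upload -> predict -> store flow (bucket/collection/field names are assumptions).
from fastapi import FastAPI
import firebase_admin
from firebase_admin import credentials, firestore, storage
from ultralytics import YOLO

# Assumed service-account file and bucket name; replace with the project's own values.
cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred, {"storageBucket": "clo-c.appspot.com"})
db = firestore.client()
bucket = storage.bucket()

model = YOLO("yolov8n-Epoch100.pt")  # weights trained in the Training section below
app = FastAPI()

@app.post("/predict/{doc_id}")
def predict(doc_id: str):
    # 1. Look up the document the app created and download the uploaded image.
    doc = db.collection("uploads").document(doc_id).get().to_dict()  # "uploads" is a hypothetical collection name
    src_blob = bucket.blob(doc["image_path"])                        # "image_path" is a hypothetical field
    src_blob.download_to_filename("input.jpg")

    # 2. Run YOLOv8; save_crop writes one cropped image per detected clothing item.
    results = model.predict("input.jpg", save=True, save_crop=True)

    # 3. Upload the annotated result back to Storage and record its link in Firestore.
    out_blob = bucket.blob(f"results/{doc_id}.jpg")
    out_blob.upload_from_filename(f"{results[0].save_dir}/input.jpg")
    out_blob.make_public()
    db.collection("results").document(doc_id).set({"result_url": out_blob.public_url})
    return {"id": doc_id, "result_url": out_blob.public_url}
```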
YOLOv8 is the latest version of YOLO by Ultralytics. As a cutting-edge, state-of-the-art (SOTA) model, YOLOv8 builds on the success of previous versions, introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLOv8 supports a full range of vision AI tasks, including detection, segmentation, pose estimation, tracking, and classification. This versatility allows users to leverage YOLOv8's capabilities across diverse applications and domains.
Source: https://docs.ultralytics.com/#yolo-a-brief-history
We modify the config -> model -> yolov8 -> yolov8.yaml file: change the number of classes from 80 to 10 and use the 'n' scale, the smallest model in YOLOv8.
We then train the model with yolov8n.yaml and data.yaml.
Dataset for detecting clothes in various portraits.
<yolov8.yaml>
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

train: /Users/jeongseungmin/Desktop/20231206/Train_Result/train
val: /Users/jeongseungmin/Desktop/20231206/Train_Result/test

nc: 10
names: ['sunglass', 'hat', 'jacket', 'shirt', 'pants', 'shorts', 'skirt', 'dress', 'bag', 'shoe']

scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers,  3157200 parameters,  3157184 gradients,   8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients,  28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients,  79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]
  - [-1, 1, Conv, [128, 3, 2]]
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 18 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 21 (P5/32-large)

  - [[15, 18, 21], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```
We train in Colab to use a GPU; 100 epochs take about 3 hours, and we save the model weights as "yolov8n-Epoch100.pt". A sketch of the training call is shown below.
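The exact training notebook is not included here; this is a minimal sketch of the Ultralytics training call under the setup described above (the image size and the copy step for the weights file are assumptions).

```python
import shutil
from ultralytics import YOLO

# Build the model from our modified architecture file (nc: 10, scale 'n').
model = YOLO("yolov8n.yaml")

# Train on the clothes dataset described by data.yaml;
# 100 epochs took roughly 3 hours on a Colab GPU.
model.train(data="data.yaml", epochs=100, imgsz=640)

# Ultralytics writes the best checkpoint under runs/detect/train/weights/;
# we keep a copy under the name used elsewhere in this README.
shutil.copy("runs/detect/train/weights/best.pt", "yolov8n-Epoch100.pt")
```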
With the saved weights, a prediction on a new photo takes only a few seconds, as in the sketch that follows.
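As an illustration, loading the saved weights and requesting cropped detections looks roughly like this; the sample file name is a placeholder.

```python
from ultralytics import YOLO

model = YOLO("yolov8n-Epoch100.pt")

# save_crop writes one cropped image per detected clothing item (hat, jacket, pants, ...),
# which is what the app later shows as recommendations.
results = model.predict("user_photo.jpg", save=True, save_crop=True)

for box in results[0].boxes:
    print(model.names[int(box.cls)], float(box.conf))  # detected class and confidence
```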
- Top: shows the weather for the user's location in real time (the emulator is positioned in San Francisco) and the five-day forecast in three-hour intervals (a sketch of the forecast request follows this list).
- Center: shows the cropped images of the recommended clothes.
- Bottom: shows photos of your outfits that meet the current conditions.
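The README does not name the weather service; as an assumption, the sketch below uses OpenWeatherMap's 5 day / 3 hour forecast endpoint, which matches the interval described above (the API key and coordinates are placeholders).

```python
import requests

API_KEY = "YOUR_OPENWEATHERMAP_KEY"  # placeholder
lat, lon = 37.7749, -122.4194        # San Francisco, the emulator's default position

# 5-day forecast in 3-hour steps, matching what the top of the main screen displays.
resp = requests.get(
    "https://api.openweathermap.org/data/2.5/forecast",
    params={"lat": lat, "lon": lon, "appid": API_KEY, "units": "metric"},
)
for entry in resp.json()["list"][:8]:  # next 24 hours (8 x 3-hour slots)
    print(entry["dt_txt"], entry["main"]["temp"], entry["weather"][0]["description"])
```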
- Calendar: the photos you take are marked on the calendar with dots; selecting a date shows your photo for that day.
- Feedback: when you post an outfit photo and choose one of several evaluations, it is saved to Firebase.
- Closet: you can take pictures of your outfits and save them; they are classified and shown in the closet.
- Users upload photos and leave feedback, which is saved to Firebase together with the weather and date information.
- The ID of the feedback collection above is received and stored, and the image predicted by the model is uploaded to Firebase Storage. (A hypothetical document shape is sketched below.)
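The exact schema is not spelled out in this README; as an assumption based on the two descriptions above, a feedback document and the matching result document might look roughly like this (all collection and field names are hypothetical).

```python
from datetime import datetime

# Hypothetical shape of a document in the feedback collection.
feedback_doc = {
    "image_url": "https://storage.googleapis.com/clo-c.appspot.com/uploads/abc123.jpg",
    "feedback": "too cold",                       # the evaluation chosen by the user
    "temperature": 3.5,                           # weather at the time of upload
    "date": datetime(2023, 12, 6).isoformat(),
}

# Hypothetical shape of the matching result document, keyed by the feedback document's ID.
result_doc = {
    "feedback_id": "abc123",
    "result_url": "https://storage.googleapis.com/clo-c.appspot.com/results/abc123.jpg",
    "crops": ["jacket", "pants", "shoe"],         # classes detected by YOLOv8
}
```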