Merged
75 commits
6724012
test to use github
wakameds Sep 24, 2021
137b0db
test
wakameds Sep 24, 2021
c24a76d
Created with Colaboratory
wakameds Sep 24, 2021
cee0e3f
Created with Colaboratory
wakameds Sep 24, 2021
b4e4ee5
Created with Colaboratory
wakameds Sep 24, 2021
33c5b6f
test
wakameds Sep 24, 2021
e9a19a8
Merge branch 'topic-recognition' of https://github.com/wakameds/Patte…
wakameds Sep 24, 2021
cf32fee
ss
wakameds Sep 24, 2021
0e8fc4f
Delete recognition/s4633139 directory
wakameds Oct 8, 2021
ff4c5b8
Delete Musk_RCNN.ipynb
wakameds Oct 8, 2021
d9ae207
upload dataloader
wakameds Oct 8, 2021
1f0b1f9
Merge branch 'topic-recognition' of https://github.com/wakameds/Patte…
wakameds Oct 8, 2021
68cdb62
test2
Oct 8, 2021
1a3384b
upload colab.file from Colab
Oct 15, 2021
a6b3311
Improved Unet file from Colab
Oct 17, 2021
6b55cbd
upload improved Unet model file
wakameds Oct 17, 2021
86e4925
Merge branch 'topic-recognition' of https://github.com/wakameds/Patte…
wakameds Oct 17, 2021
02e5662
upload the file for criterion for the improved UNet
wakameds Oct 17, 2021
dd7e85d
upload the dataloader and criterion files for the improved UNet
wakameds Oct 17, 2021
dcb8e90
upload the files to train model and to evaluate the performance for t…
wakameds Oct 17, 2021
30d2aaf
upload the main files for the improved Unet
wakameds Oct 17, 2021
990299f
remove UNetjupyter.ipynb
wakameds Oct 17, 2021
a489568
remove dataloader ipynb
wakameds Oct 17, 2021
44edc73
upload the results obtained from IUnet
wakameds Oct 17, 2021
27f0ba3
revised colab file with segmentation part
Oct 18, 2021
ff31c09
revised colab file with segmentation part
Oct 18, 2021
cf4fce8
revised colab file with segmentation part
Oct 18, 2021
755fc90
Delete dice_coefficient.png
wakameds Oct 18, 2021
1a107fc
Delete image.png
wakameds Oct 18, 2021
50d0f45
Delete dice_loss.png
wakameds Oct 18, 2021
0047d7f
Delete seg.png
wakameds Oct 18, 2021
45e5f02
Revised code in the files and add comments
wakameds Oct 19, 2021
e6d3857
Create README.md
wakameds Oct 19, 2021
89d50e3
Revised code in the files
wakameds Oct 20, 2021
fd34233
Merge remote-tracking branch 'origin/topic-recognition' into topic-re…
wakameds Oct 20, 2021
ca4b900
Revised code
wakameds Oct 20, 2021
5fa2afd
Update README.md
wakameds Oct 20, 2021
e8cceca
add comments in the file
wakameds Oct 20, 2021
1ff8d86
Merge remote-tracking branch 'origin/topic-recognition' into topic-re…
wakameds Oct 20, 2021
3413697
Update README.md
wakameds Oct 22, 2021
1c3f4a8
change epoch num from 1 to 20
wakameds Oct 22, 2021
be31655
Merge remote-tracking branch 'origin/topic-recognition' into topic-re…
wakameds Oct 22, 2021
b9e5ea6
Revise README.md
wakameds Oct 22, 2021
b38403e
test
wakameds Sep 24, 2021
68f05ce
test to upload with Colaboratory
wakameds Sep 24, 2021
5e755b2
Created with Colaboratory
wakameds Sep 24, 2021
a8d59ea
Delete recognition/s4633139 directory
wakameds Oct 8, 2021
9d6ce0e
upload dataloader
wakameds Oct 8, 2021
00436fa
Delete Musk_RCNN.ipynb
wakameds Oct 8, 2021
a4ee4bc
test2
Oct 8, 2021
9e81c7d
upload colab.file from Colab
Oct 15, 2021
3877806
Improved Unet file from Colab
Oct 17, 2021
b6bcf3a
revised colab file with segmentation part
Oct 18, 2021
ca87d30
upload improved Unet model file
wakameds Oct 17, 2021
037dd58
upload the dataloader and criterion files for the improved UNet
wakameds Oct 17, 2021
825725e
upload the files to train model and to evaluate the performance for t…
wakameds Oct 17, 2021
77891e0
upload the main files for the improved Unet
wakameds Oct 17, 2021
73dd281
remove UNetjupyter.ipynb
wakameds Oct 17, 2021
6eda619
remove dataloader ipynb
wakameds Oct 17, 2021
bf3fda8
upload the results obtained from IUnet
wakameds Oct 17, 2021
47a789d
Delete dice_coefficient.png
wakameds Oct 18, 2021
ae35f0d
Delete image.png
wakameds Oct 18, 2021
4f6a00d
Delete dice_loss.png
wakameds Oct 18, 2021
c675a4b
Delete seg.png
wakameds Oct 18, 2021
28c807a
Revised code in the files and add comments
wakameds Oct 19, 2021
3f95c84
Revised code in the files
wakameds Oct 20, 2021
f103071
Create README.md
wakameds Oct 19, 2021
284223d
Revised code
wakameds Oct 20, 2021
bc18688
add comments in the file
wakameds Oct 20, 2021
5c0b40d
Update README.md
wakameds Oct 20, 2021
ad96dd5
change epoch num from 1 to 20
wakameds Oct 22, 2021
ad202dc
Update README.md
wakameds Oct 22, 2021
ffcec40
Revise README.md
wakameds Oct 22, 2021
fcb6138
Remove commit 02e5662
wakameds Nov 17, 2021
40e8095
Merge remote-tracking branch 'origin/topic-recognition' into topic-re…
wakameds Nov 17, 2021
153 changes: 52 additions & 101 deletions recognition/ISICs_Unet/README.md
Original file line number Diff line number Diff line change
@@ -1,101 +1,52 @@
# Segment the ISICs data set with the U-net

## Project Overview
This project aims to solve the segmentation of skin lesions (the ISIC2018 data set) using the U-net, with all labels having a minimum Dice similarity coefficient of 0.7 on the test set [Task 3].

## ISIC2018
![ISIC example](imgs/example.jpg)

Skin Lesion Analysis towards Melanoma Detection

Task found in https://challenge2018.isic-archive.com/


## U-net
![UNet](imgs/uent.png)

U-net is one of the popular image segmentation architectures, used mostly for biomedical purposes. The name U-net comes from its architecture, which contains a contracting path and an expansive path that together form a U shape. The architecture is built in such a way that it can generate good results even from a small number of training samples.

## Data Set Structure

The data set folder needs to be stored in the same directory, with the structure shown below:
```bash
ISIC2018
|_ ISIC2018_Task1-2_Training_Input_x2
|_ ISIC_0000000
|_ ISIC_0000001
|_ ...
|_ ISIC2018_Task1_Training_GroundTruth_x2
|_ ISIC_0000000_segmentation
|_ ISIC_0000001_segmentation
|_ ...
```

## Dice Coefficient

The Sørensen–Dice coefficient is a statistic used to gauge the similarity of two samples.

Further information in https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient
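The coefficient can be sketched in a few lines of NumPy (a minimal illustration only, not this project's implementation; the smoothing term of 1 follows a common convention to avoid division by zero):

```python
import numpy as np

def dice(a, b, smooth=1.0):
    """Sørensen–Dice coefficient for two binary masks."""
    a = np.asarray(a, dtype=bool).ravel()
    b = np.asarray(b, dtype=bool).ravel()
    intersection = np.logical_and(a, b).sum()
    return (2.0 * intersection + smooth) / (a.sum() + b.sum() + smooth)

# 1 overlapping pixel, mask sizes 2 and 1: (2*1 + 1) / (2 + 1 + 1) = 0.75
print(dice([[1, 1], [0, 0]], [[1, 0], [0, 0]]))
```

Identical masks score 1.0; disjoint masks score close to 0, so the coefficient is a natural fit for a segmentation loss of the form `1 - dice`.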

## Dependencies

- python 3
- tensorflow 2.1.0
- pandas 1.1.4
- numpy 1.19.2
- matplotlib 3.3.2
- scikit-learn 0.23.2
- pillow 8.0.1


## Usage

- Run `train.py` for training the UNet on ISIC data.
- Run `evaluation.py` for evaluation and case presentation.

## Advanced

- Modify `setting.py` for custom setting, such as different batch size.
- Modify `unet.py` for custom UNet, such as different kernel size.

## Algorithm

- data set:
- The data set we used is the training set of ISIC 2018 challenge data which has segmentation labels.
- Training: Validation: Test = 1660: 415: 519 = 0.64: 0.16 : 0.2 (Training: Test = 4: 1 and in Training, further split 4: 1 for Training: Validation)
- Training data augmentations: rescale, rotate, shift, zoom, grayscale
- model:
- Original UNet with padding which can keep the shape of input and output same.
- The first convolutional layer has 16 output channels.
- The activation function of all convolutional layers is ELU.
- Without batch normalization layers.
- The input shape is (384, 512, 1)
- The output is (384, 512, 1) after sigmoid activation.
- Optimizer: Adam, lr = 1e-4
- Loss: dice coefficient loss
- Metrics: accuracy & dice coefficient
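The split sizes above follow from chaining two 4:1 splits over the 2594 labelled images (a sketch of the arithmetic only; variable names are illustrative):

```python
n_total = 2594

# Training : Test = 4 : 1
n_trainval = round(n_total * 0.8)   # 2075 images kept for training + validation
n_test = n_total - n_trainval       # 519 test images

# within the training pool, Training : Validation = 4 : 1
n_train = round(n_trainval * 0.8)   # 1660 training images
n_val = n_trainval - n_train        # 415 validation images

print(n_train, n_val, n_test)       # 1660 415 519
```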

## Results

The evaluation Dice coefficient is 0.805256724357605.

Plot of training/validation Dice coefficient:

![img](imgs/train_and_valid_dice_coef.png)

Case presentation:

![case](imgs/case%20present.png)

## Reference
Manna, S. (2020). K-Fold Cross Validation for Deep Learning using Keras. [online] Medium. Available at: https://medium.com/the-owl/k-fold-cross-validation-in-keras-3ec4a3a00538 [Accessed 24 Nov. 2020].

zhixuhao (2020). zhixuhao/unet. [online] GitHub. Available at: https://github.com/zhixuhao/unet.

GitHub. (n.d.). NifTK/NiftyNet. [online] Available at: https://github.com/NifTK/NiftyNet/blob/a383ba342e3e38a7ad7eed7538bfb34960f80c8d/niftynet/layer/loss_segmentation.py [Accessed 24 Nov. 2020].

Team, K. (n.d.). Keras documentation: Losses. [online] keras.io. Available at: https://keras.io/api/losses/#creating-custom-losses [Accessed 24 Nov. 2020].

abhinavsagar (n.d.). unet.py. [online] Gist. Available at: https://gist.github.com/abhinavsagar/fe0c900133cafe93194c069fe655ef6e [Accessed 24 Nov. 2020].

Stack Overflow. (n.d.). python - Disable Tensorflow debugging information. [online] Available at: https://stackoverflow.com/questions/35911252/disable-tensorflow-debugging-information [Accessed 24 Nov. 2020].
# Segmenting ISICs with U-Net

COMP3710 Report recognition problem 3 (Segmenting ISICs data set with U-Net) solved in TensorFlow

Created by Christopher Bailey (45576430)

## The problem and algorithm
The problem solved by this program is binary segmentation of the ISICs skin lesion data set. Segmentation is a way to label pixels in an image according to some grouping, in this case lesion or non-lesion. This translates images of skin to masks representing areas of concern for skin lesions.

U-Net is a form of autoencoder where the downsampling path is expected to learn the features of the image and the upsampling path learns how to recreate the masks. Long skip connections between downsampling and upsampling layers are used to overcome the bottleneck of traditional autoencoders, allowing feature representations to be recreated.

## How it works
A four layer padded U-Net is used, preserving skin features and mask resolution. The implementation utilises Adam as the optimizer and implements Dice distance as the loss function as this appeared to give quicker convergence than other methods (eg. binary cross-entropy).

The metric used is a Dice coefficient implementation. My initial implementation appeared faulty and was replaced with a third-party implementation which appears correct. Three epochs were generally sufficient to reach Dice coefficients of 0.8+ on test datasets, but occasional non-convergence was observed and could be curbed by increasing the number of epochs. Visualisation of predictions is also implemented and shows reasonable correspondence. Orange bandaids represent an interesting challenge for the implementation as presented.

### Training, validation and testing split
Training, validation and testing use a respective 60:20:20 split, a commonly assumed starting point suggested by course staff. U-Net in particular was developed to work "with very few training images" (Ronneberger et al., 2015). The input data for this problem consists of 2594 images and masks. This split appears to provide satisfactory results.

## Using the model
### Dependencies required
* Python3 (tested with 3.8)
* TensorFlow 2.x (tested with 2.3)
* glob (used to load filenames)
* matplotlib (used for visualisations, tested with 3.3)

### Parameter tuning
The model was developed on a GTX 1660 TI (6GB VRAM) and certain values (notably batch size and image resolution) were set lower than might otherwise be ideal on more capable hardware. This is commented in the relevant code.

### Running the model
The model is executed via the main.py script.

### Example output
Given a batch size of 1 and 3 epochs the following output was observed on a single run:
Stage | Loss | Dice coefficient
--- | ---- | ----------------
Epoch 1 | 0.7433 | 0.2567
Epoch 2 | 0.3197 | 0.6803
Epoch 3 | 0.2657 | 0.7343
Testing | 0.1820 | 0.8180


### Figure 1 - example visualisation plot
Skin images in the left column, true masks in the middle, predicted masks in the right column
![Visualisation of predictions](visual.png)

## References
Segments of code in this assignment were used from or based on the following sources:
1. COMP3710-demo-code.ipynb from Guest Lecture
1. https://www.tensorflow.org/tutorials/load_data/images
1. https://www.tensorflow.org/guide/gpu
1. Karan Jakhar (2019) https://medium.com/@karan_jakhar/100-days-of-code-day-7-84e4918cb72c
1 change: 1 addition & 0 deletions recognition/s4633139/IUNet.ipynb

Large diffs are not rendered by default.

117 changes: 117 additions & 0 deletions recognition/s4633139/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,117 @@
# Improved UNet for ISIC2018 image segmentation
The project is the practical work for COMP3710 in 2021. This report summarises the improved UNet model in this repository.


## Objective
The project objective is to implement the improved UNet for ISIC2018 image segmentation. UNet is a model developed for biomedical image segmentation that automatically identifies the tumour area.<sup>[1]</sup> Automatic image segmentation supports medical and experimental work, but highly accurate segmentation performance is also required. This project applies the improved UNet model, originally proposed for brain tumour segmentation, to the ISIC2018 image dataset.


## Model Architecture
Figure 1 shows the improved UNet architecture for brain tumours.<sup>[2]</sup> The improved model uses a context module, a 3x3 convolution layer with stride 2 in place of a max-pooling layer, a localisation module, and segmentation layers extracted from the localisation layers. In the down-sampling path, the context block works like a residual block in ResNet: it consists of a 3x3 convolution layer, a batch normalisation layer, a dropout layer, and an activation layer. LeakyReLU is applied as the activation function throughout the model. The output of each context block is concatenated with the input of the corresponding localisation module in the up-sampling path. The feature maps are then reduced with a 3x3 convolution layer before the following context block.

In the up-sampling path, the concatenated input is fed into the localisation block. The output of the localisation block is passed through a convolutional layer to form a segmentation layer, which is added to the next segmentation layer; the output also continues into the next up-sampling block.

<p align="center">
<img width="750" height="350" src = https://user-images.githubusercontent.com/85863413/137878529-a434ecb5-6331-418e-a1cc-892d1ad480c6.png>
</p>

<p align="center">
<b>Figure1: The improved UNet model architecture</b><sup>[2]</sup>
</p>
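The context block described above can be sketched in PyTorch as follows. This is a hypothetical reading of the description, not the repository's `model.py`; the channel count, dropout rate, and LeakyReLU slope are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ContextModule(nn.Module):
    """Sketch of a context block: two (BatchNorm -> LeakyReLU -> 3x3 Conv)
    passes with dropout, wrapped in a residual add as in ResNet."""
    def __init__(self, channels, p_drop=0.3):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.01),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Dropout2d(p_drop),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.01),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # residual connection: spatial size and channels are preserved,
        # so the block output can be concatenated with up-sampling features later
        return x + self.block(x)

x = torch.randn(1, 16, 64, 64)
print(ContextModule(16)(x).shape)  # torch.Size([1, 16, 64, 64])
```

Because padding keeps the spatial size unchanged, the block's output can be stored and concatenated with the matching localisation input during up-sampling.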

In terms of the loss function, dice loss is utilised for UNet. The loss function is represented as

<p align="center">
<img width="180" height="30" src = https://user-images.githubusercontent.com/85863413/138026696-fd7fd35e-bc0b-4b16-8b63-bc67f5011c78.png>
</p>

Dice coefficient is represented as

<p align="center">
<img width="120" height="30" src = https://user-images.githubusercontent.com/85863413/138026430-b43c30cf-100d-4a29-b8c7-094adb299d17.png>
</p>

The Dice coefficient measures the similarity between the target mask and the mask predicted by the model.


## Files
This repository includes the below files for the improved UNet.

**criterion.py:** This file consists of two criterion functions: Dice coefficient and Dice loss. These functions are used for training and evaluating the model through the forward and backpropagation steps.

**dataloader.py:** This file handles data preparation for the UNet model: data loading, image augmentation, and transformation into a data loader.

**model_train_val.py:** This file trains the model and assesses segmentation performance with the Dice coefficient and Dice loss. The function in the file returns lists recording the criterion values by epoch: TRAIN_DICE, VAL_DICE, TRAIN_LOSS, VAL_LOSS.

**model.py:** This file includes the classes that build the improved UNet model: Context, Localization, Up-sampling, Segmentation, Convolution (for down-sampling), and the Improved UNet itself. In this model, a sigmoid function is used for the binary classification between mask and non-mask areas instead of a softmax function.

**driver.py:** The file performs all procedures for the project: data preparation, model training, and model evaluation. It defines the parameters for the improved UNet: FEATURE_SIZE, IN_CHANEL, OUT_CHANEL, IMG_TF, MASK_TF, BATCH_SIZE, EPOCHS, and LR. Images are resized to 128x128 by default. random_split() splits the dataset into train, validation, and test sets by 50:25:25. Adam is applied as the optimizer to train the model.

**visualise.py:** The file contains five functions for plotting the test results and the training Dice coefficient and Dice loss by epoch, and for outputting the segmented images with the predicted mask.


## Dataset
ISIC 2018 Task 1 is a dataset of skin cancer images shared by the International Skin Imaging Collaboration (ISIC).<sup>[3]</sup> The dataset consists of 2594 images and their corresponding mask images. The dataset is split into train, validation, and test sets with ratio 0.5 : 0.25 : 0.25.
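The 0.5 : 0.25 : 0.25 split can be reproduced with `torch.utils.data.random_split`. This is a sketch under assumptions: the real split lives in driver.py, whose exact rounding and seed may differ, and the `TensorDataset` here is only a stand-in with the right length:

```python
import torch
from torch.utils.data import TensorDataset, random_split

# stand-in dataset with the same length as ISIC 2018 Task 1
dataset = TensorDataset(torch.zeros(2594, 1))

n_train = int(len(dataset) * 0.50)        # 1297
n_val = int(len(dataset) * 0.25)          # 648
n_test = len(dataset) - n_train - n_val   # 649 (remainder absorbs rounding)

train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n_test],
    generator=torch.Generator().manual_seed(0),  # fixed seed for a reproducible split
)
print(len(train_set), len(val_set), len(test_set))  # 1297 648 649
```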


## How to run
`driver.py` calls all files in the repository to train the model and evaluate its performance. The ISIC dataset needs to be placed in the same directory as the files. Then run `python driver.py` in the terminal.


## Dependencies
Model training and evaluation were executed under the following environment.
* Pytorch 1.9.0+cu111
* Python 3.7.12
* Matplotlib 3.3.4



## Results
#### Dice coefficient and loss
The figure shows the train and validation Dice coefficients and losses over 50 epochs. The validation Dice coefficient was approximately 0.85 and was stable after 15 epochs.


<p align="center">
<img width="400" height="300" src = https://user-images.githubusercontent.com/85863413/138023981-96eeecf9-3bbb-4e6e-a4f9-e7376b7dd216.png>
</p>

<p align="center">
<b>Figure2. Dice coefficient</b>
</p>


The validation Dice loss was stable at roughly 0.15, while the train loss kept declining after epoch 15.


<p align="center">
<img width="400" height="300" src = https://user-images.githubusercontent.com/85863413/138024051-ecd6ef76-3c51-4493-abdd-8c04d6dd8d26.png>
</p>


<p align="center">
<b>Figure3. Dice loss</b>
</p>



#### Segmentation
The trained UNet predicts the mask from each image in the test set. The segmentations in the right-hand column are the images overlaid with the predicted mask. The Dice coefficient of each image is given in its label; the Dice coefficients in the figure were all above 0.87.


<p align="center">
<img width="500" height="700" src = https://user-images.githubusercontent.com/85863413/138024077-2d17b2fd-fb1c-4ff9-8030-8ea7521f2420.png>
</p>


<p align="center">
<b>Figure4. Segmentation</b>
</p>



## References
[1] Ronneberger, O., Fischer, P., & Brox, T. (2015, October). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention (pp. 234-241). Springer, Cham. https://arxiv.org/abs/1505.04597

[2] Isensee, F., Kickingereder, P., Wick, W., Bendszus, M., & Maier-Hein, K. H. (2017, September). Brain tumor segmentation and radiomics survival prediction: Contribution to the brats 2017 challenge. In International MICCAI Brainlesion Workshop (pp. 287-297). Springer, Cham. https://arxiv.org/pdf/1802.10508v1.pdf

[3] ISIC 2018 Task1 https://paperswithcode.com/dataset/isic-2018-task-1
42 changes: 42 additions & 0 deletions recognition/s4633139/criterion.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,42 @@
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Copyright (c) 2021, H.WAKAYAMA, All rights reserved.
# File: criterion.py
# Author: Hideki WAKAYAMA
# Contact: h.wakayama@uq.net.au
# Platform: macOS Big Sur Ver 11.2.1, Pycharm pro 2021.1
# Time: 20/10/2021, 09:52
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

#dice coefficient
def dice_coef(pred, target):
    """
    function to compute the dice coefficient
    param----
    pred(tensor[B,C,W,H]): predicted mask images
    target(tensor[B,C,W,H]): target mask images
    return---
    dice coefficient
    """
    batch_size = len(pred)
    smooth = 1.

    pred_flat = pred.view(batch_size, -1)
    target_flat = target.view(batch_size, -1)

    intersection = (pred_flat*target_flat).sum()
    dice_coef = (2.*intersection+smooth)/(pred_flat.sum()+target_flat.sum()+smooth)
    return dice_coef


#loss
def dice_loss(pred, target):
    """
    function to compute dice loss
    param----
    pred(tensor[B,C,W,H]): predicted mask images
    target(tensor[B,C,W,H]): target mask images
    return----
    dice loss
    """
    dice_loss = 1 - dice_coef(pred, target)
    return dice_loss
47 changes: 47 additions & 0 deletions recognition/s4633139/dataloader.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,47 @@
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Copyright (c) 2021, H.WAKAYAMA, All rights reserved.
# File: dataloader.py
# Author: Hideki WAKAYAMA
# Contact: h.wakayama@uq.net.au
# Platform: macOS Big Sur Ver 11.2.1, Pycharm pro 2021.1
# Time: 19/10/2021, 15:47
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

import os
from torch.utils.data import Dataset
from PIL import Image

os.chdir("./ISIC2018_Task1-2_Training_Data")

class UNet_dataset(Dataset):
    def __init__(self,
                 img_dir='./ISIC2018_Task1-2_Training_Input_x2',
                 mask_dir='./ISIC2018_Task1_Training_GroundTruth_x2',
                 img_transforms=None,
                 mask_transforms=None,
                 ):

        self.img_dir = img_dir
        self.mask_dir = mask_dir
        self.img_transforms = img_transforms
        self.mask_transforms = mask_transforms
        self.imgs = [file for file in sorted(os.listdir(self.img_dir)) if file.endswith('.jpg')]
        self.masks = [file for file in sorted(os.listdir(self.mask_dir)) if file.endswith('.png')]

    def load_data(self, idx):
        img_path = os.path.join(self.img_dir, self.imgs[idx])
        mask_path = os.path.join(self.mask_dir, self.masks[idx])
        img = Image.open(img_path).convert('RGB')
        mask = Image.open(mask_path).convert('L')
        return img, mask

    def __getitem__(self, idx):
        img, mask = self.load_data(idx)
        if self.img_transforms is not None:
            img = self.img_transforms(img)
        if self.mask_transforms is not None:
            mask = self.mask_transforms(mask)
        return img, mask

    def __len__(self):
        return len(self.imgs)