Merged
Changes from all commits
Commits
33 commits
70533cc
Add readme header
Nov 21, 2020
93c43d8
update readme
Nov 21, 2020
dd9a3ec
Update web/README.md
Nov 21, 2020
4c9ccbb
Added more setup to mobile and web readme
MouradLachhab Nov 22, 2020
0d96a4d
Changed formatting for commands
MouradLachhab Nov 22, 2020
4251818
Removed extra character
MouradLachhab Nov 22, 2020
5499c57
Added missing web readme
MouradLachhab Nov 22, 2020
860bbd4
Removed python from web prerequesites
MouradLachhab Nov 22, 2020
3ca7656
Tweaked backend readme
MouradLachhab Nov 22, 2020
20c416a
Made Linux vs Windows commands popup more using list dash
MouradLachhab Nov 22, 2020
09e10ea
Update README.md
Nov 22, 2020
6d78508
Update README.md
Nov 22, 2020
97956e1
Update README.md
Nov 22, 2020
3f8369c
Update README.md
mateobelanger Nov 22, 2020
be5230d
Update README.md
mateobelanger Nov 22, 2020
2ccde23
Update README.md
mateobelanger Nov 22, 2020
2dd35dd
Centered and reduce the size of gen. arch. img
mateobelanger Nov 22, 2020
724b5f6
html italic for figure description
mateobelanger Nov 22, 2020
071e7c1
Update README.md
mateobelanger Nov 22, 2020
3ee4743
Update README.md
mateobelanger Nov 22, 2020
e4d8345
Update README.md
mateobelanger Nov 22, 2020
88b5bcf
Update README.md
mateobelanger Nov 22, 2020
5f24f37
Update README.md
mateobelanger Nov 22, 2020
7f4141d
Badges for web readme
mateobelanger Nov 22, 2020
8f04420
Added badges to backend readme
mateobelanger Nov 22, 2020
a41addc
Removed unnecessary html
mateobelanger Nov 22, 2020
31c06e0
added links to badges in web readme
mateobelanger Nov 22, 2020
45df4b7
Update README.md
mateobelanger Nov 22, 2020
3c36f9e
Update README.md
mateobelanger Nov 22, 2020
01ab9b4
Update README.md
mateobelanger Nov 22, 2020
e7a3c1c
Update README.md
Nov 22, 2020
4a4b1af
Apply suggestions from code review
Nov 22, 2020
66183f9
Update mobile/README.md
Nov 22, 2020
107 changes: 78 additions & 29 deletions README.md
@@ -1,48 +1,97 @@
<h1 align="center">
<br>
<img src="https://raw.githubusercontent.com/wiki/PolyCortex/polydodo/img/dodo.png" alt="Polydodo" height="200">
<br>
Polydodo
<br>
</h1>

<h4 align="center">A simple automatic sleep scoring tool that uses OpenBCI boards.</h4>

<p align="center">
<a href="https://polycortex.github.io/polydodo/#/">
<img src="https://img.shields.io/badge/web-client-9cf?style=for-the-badge&logo=React"
alt="web client">
</a>
<a href="https://github.com/PolyCortex/polydodo/releases/latest/download/polydodo_app_android.apk">
<img alt="GitHub release (latest by date)" src="https://img.shields.io/github/v/release/PolyCortex/polydodo?label=android-apk&logo=android&style=for-the-badge">
</a>
<img alt="GitHub all releases" src="https://img.shields.io/github/downloads/PolyCortex/polydodo/total?color=orange&label=downloads&style=for-the-badge">
<a href="http://polycortex.polymtl.ca/">
<img src="https://img.shields.io/badge/about%20us-%E2%84%B9-blue?style=for-the-badge&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAMAAAAoLQ9TAAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAB3VBMVEU4qds2qNo1qNo3qNo5qds4qds3qNo5qds6qtswpdk5qds3qNo3qNo5qds3qNo4qds3qNo1p9o4qds2qNo3qNo4qdszp9o0p9o4qds1p9o7qtuQzuqRz+s7qts2qNpPs981p9rz+vz0+v03qNo2qNpIsN2Ozuoqo9j///////8ro9iLzeqf1e0nodee1e2Nzeoootjt9/vs9vspotiJzOmPz+qMzeqa0+wnodib0+yW0ewvpdkvpdmIy+k7qtuLzeqNzuo7qts4qdsspNhpvuNxweU4qds4qds4qds3qNo8qts3qNovpdkupNk3qNo8qts3qNoxptk4qds2qNotpNlTtN9Qs984qdpUteA5qduy3fH///9vweTD5fTA5POf1e3r9vu84vJuwOQ3qNsqo9iIy+kanNXY7vjX7fchn9aGyukro9g+q9zL6PUyptnc8PjT7PcvpdnJ5/VwweVvweXb7/gwpdkupNltwOSh1u5Ert01p9rc7/hDrd2j1+41qNq+4/O74fI6qtu94vOY0uxKsN4zptrU7PdJsN6a0+xWtuAootje8PnQ6vYendY3qNrB5PNYtuDo9fpStN/M6fXn9PqAyOfi8vnw+Px8xufm9PpYt+A0p9pbuOE6qdveSf6lAAAAUXRSTlMAAAAAAAM1q+j8rjlu/XYCS/38/FCy+/y1Btn+/NoHCd7+8v7gCQne/vrfCAneCQne/v7eCQkJCd4JBdjaB7D8/LRL/fz8T3Z3Oq7p/f3qsDt4St8jAAAAAWJLR0QovbC1sgAAAAd0SU1FB+QLFhI3GjFk5DYAAAEWSURBVBjTY2BgZWRj5+AMDOLk4OJmYmVgYGXm4Q0OCQ0LjwgJjuRj5mdgFogUFIqKjo6JjRMWjBRhZmAWDRKLjY5PSEyKjhVPlmBmkJRKkY5OTUvPyMyKlsmWlWOQV8hRzM3Oyy8ozC5SUlZRZVBTL9YoCckrLSgrT9Ks0NJm0NGt1KiqLqspKA+u1ajT1WPQN0g0DA4qBApE1hs1GJswmBo0GgZHAgVCmuqNmo3NGMwtWjRay9pK29vKOjQ6dS0ZrKy7NGKDu3t6+/onaEy0sWVgsQuxj540ecrUadOjHUIcWRiYnSKdXWZEzYyOnuXqHOnGzMDP4t4UUj17zuS59SHzPFj4Qd719PL2CQr09fMPYGBlAADh2ktDQeyqCAAAACV0RVh0ZGF0ZTpjcmVhdGUAMjAyMC0xMS0yMlQxODo1NDo1NCswMDowMO6afBwAAAAldEVYdGRhdGU6bW9kaWZ5ADIwMjAtMTEtMjJUMTg6NTQ6NTQrMDA6MDCfx8SgAAAAAElFTkSuQmCC">
</a>
</p>

<p align="center">
<a href="#key-features">Key Features</a> •
<a href="#how-it-works">How It Works</a> •
<a href="#project-structure">Project Structure</a> •
<a href="#learn-more">Learn more</a> •
<a href="#learn-more">About us</a>
</p>

___

This project aims to offer a cheaper and more accessible way to perform sleep studies from home. To achieve this, a machine learning classifier is used to automate the manual sleep scoring step. This drastically cuts the time needed to process each and every sleep sequence and completely eliminates the learning curve associated with manual sleep scoring.

🌐 Our web application does exactly that and is available [here](https://polycortex.github.io/polydodo/). Check it out!

🤖 Our Android app is underway. Give us a star to stay tuned for upcoming news about its release!

**This application is not intended for medical purposes and the data it produces should not be used in such context. Its only goal is to help you study your sleep by yourself. Always seek the advice of a physician on any questions regarding a medical condition.**

## Key features

- Compatible with both OpenBCI's Cyton and Ganglion boards.
- Automatic sleep stage scoring based on the AASM's labels.
- A comprehensive guide on how to record polysomnographic EEG data from home.
- A nice and straightforward UI to help you upload, visualize, and save your sleep data.

## How it works

Polydodo is composed of two client apps, a web one and a mobile one, through which the user can interact. These clients are not complementary but are alternatives to one another. Each of these clients uses the same local server which hosts the automatic sleep stages classification algorithm.

The web client allows the user to upload a data file acquired with an OpenBCI board and then presents a detailed, personalized analysis of their sleep. Additionally, this client details the process by which we classify sleep into stages and offers a review of that process. OpenBCI boards must be configured via the OpenBCI GUI, and data must be saved on an SD card (Cyton only) or through a session file.

On the other hand, the mobile client offers a tool that can be used on a regular basis. Unlike the web application, this app can save sleep sequences for later consultation and display the aggregated results of several nights of sleep on a dashboard. It also guides the user from electrode installation to the end of data acquisition.

Finally, both clients use a local HTTP server that is easy to install. The server is hosted locally so that your data is never sent over the internet. Biosignal data are sensitive, and keeping them on your machine is our way of guaranteeing their privacy.
<p align="center">
<br>
<img alt="General architecture of the project" src="https://github.com/PolyCortex/polydodo/wiki/img/general_architecture_small.png">
<br>
<i>Figure 1. Technology diagram with the flow of incoming and outgoing data to clients.</i>
</p>

As the diagram above shows, the mobile application receives data in real time, while the web application receives it asynchronously. In both cases, the data is classified on the local server after the acquisition ends.
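Since both clients talk to the same local server over HTTP, the web client's upload step can be pictured as a plain POST of the recording file to localhost. A minimal Python sketch follows; the port, route name, and `X-Board` header are our assumptions for illustration, not the project's actual API:

```python
from urllib.request import Request

API_BASE = "http://localhost:8080"  # assumed port of the local server


def build_analysis_request(recording: bytes, board: str) -> Request:
    """Build (but do not send) an upload request for a sleep recording.

    The route name and header are hypothetical; the point is that the
    recording never leaves the machine, because the server runs locally.
    """
    return Request(
        url=f"{API_BASE}/analyze-sleep",  # hypothetical route
        data=recording,
        headers={"Content-Type": "application/octet-stream", "X-Board": board},
        method="POST",
    )
```

Because the target is `localhost`, no biosignal data ever crosses the network boundary, which is the privacy property described above.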

## Project Structure

This project is split into different folders that represent the standalone parts of our project:

- The `ai/` folder contains all of our machine learning prototypes. It mainly consists of a set of notebooks that document our work. It is there that we trained our sleep stage classification algorithm, then validated, tested, and serialized it for production. For more information, see [`ai/README.md`](https://github.com/PolyCortex/polydodo/tree/master/ai), and open the notebooks, as a lot of documentation can be found there;
- The `backend/` folder contains the Python server that uses the serialized model from the `ai/` notebooks. This is the local server that must be used with both the web app and the mobile app. See [`server/README.md`](https://github.com/PolyCortex/polydodo/tree/master/backend);
- `web/` contains the React web app which is the UI for our project. See [`web/README.md`](https://github.com/PolyCortex/polydodo/tree/master/web) for more info;
- The `mobile/` folder contains the Flutter app. This app is an alternative to our web app. It can interface directly with OpenBCI boards, which makes it even simpler to run your own sleep analysis. See [`mobile/README.md`](https://github.com/PolyCortex/polydodo/tree/master/mobile) for more info.

## Getting started

### VS Code

- Install VS Code
- Open this project's workspace via the `polydodo.code-workspace` file.
- Install all the project's recommended extensions

For more information about how to get started with each part (web, server, mobile) of the project, head to the corresponding folder and look for its `README.md` file.

## Learn more

For more information, please refer to our [wiki pages](https://github.com/PolyCortex/polydodo/wiki). This is where you'll get all of our official documentation.

## About us

[PolyCortex](http://polycortex.polymtl.ca/) is a student club based at [Polytechnique Montreal](https://www.polymtl.ca/).

The goal of PolyCortex is to develop expertise in neuroscience and engineering in order to solve neuroengineering problems, that is, to create innovative technological solutions to problems in neuroscience.

To do this, we focus on designing brain-machine interface devices, implementing embedded systems, and applying machine learning and signal processing techniques.
46 changes: 28 additions & 18 deletions ai/README.md
@@ -1,17 +1,18 @@
# Sleep Stage Classification

This project aims to classify a full night of sleep based on two-channels of raw EEG signal. The sleep stage annotations should follow those of the _American Academy of Sleep Medicine (AASM) scoring manual_ [[1]](https://aasm.org/clinical-resources/scoring-manual/).

A particularity of this project is that the data on which we will later apply our classifier is different from the data we trained on. Indeed, since there is no large public sleep scoring dataset based on the hardware we use and recommend (chosen for its affordability), we need to train on a dataset recorded with different equipment. Our pipeline should therefore also be able to classify sleep from EEG acquired with other types of hardware (i.e., OpenBCI boards).
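One common way to make hand-crafted features transfer across recording hardware, sketched below as a general idea rather than this project's actual method, is to standardize each feature within a single recording, so that amplifier- and electrode-specific offsets and gains cancel out:

```python
import numpy as np


def standardize_per_recording(features: np.ndarray) -> np.ndarray:
    """Z-score each feature column within one recording.

    features: (n_epochs, n_features) array for a single night.
    Removing the per-recording mean and scale makes features from
    different hardware (e.g. Sleep-EDF vs. OpenBCI) more comparable.
    """
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True)
    return (features - mean) / np.where(std == 0, 1.0, std)


# Fake feature matrix for one night: 100 epochs x 4 features.
rng = np.random.default_rng(0)
night = rng.normal(loc=50.0, scale=7.0, size=(100, 4))
z = standardize_per_recording(night)
```

After standardization, each column has zero mean and unit variance regardless of the original recording scale.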

## Project Summary

Once the right dataset was chosen, the following steps were taken in order to successfully classify sleep stages:

1. Dataset exploration
2. Feature extraction
3. Model exploration
4. Testing on Open BCI data
5. Feature and annotation formatting to csv

## How to Recreate Results

@@ -21,10 +22,11 @@ You must first install package dependencies by running the following:

Afterwards, the order in which the notebooks should be run is the following:

1. `exploration/subject_exploration.ipynb`: This notebook generates the recordings' info file, `recordings-info.csv`, which holds the extrapolated offset time at which the subject turned the lights off, as well as the total night duration. This information is used to crop the recordings so that only epochs from the subject's night are kept.
2. `feature_extraction.ipynb`: This notebook takes the recording files and extracts the different features. It saves two files: one that holds the features (`x_features.npy`) and one that holds the sleep stage labels (`y_observations.npy`). If you also want to test performance on OpenBCI data, it extracts the features from the OpenBCI recordings into `X_openbci_HP.npy` and the scored labels into `y_openbci_HP.npy`.
3. `models/{RF_HMM, KNN, NB, SVC, voting_clf}.ipynb`: These notebooks train the corresponding classifiers with the previously extracted features. Each notebook also saves its trained classifier into the `trained_models` folder. Note that, in order to obtain the hidden Markov model matrices for the postprocessing step, you must run the final steps of `models/RF_HMM.ipynb`.
4. `prediction_{openbci, anonymous}.ipynb`: These notebooks allow you to check the accuracy of a trained classifier on a single night recording. They take as input the features, which have to be extracted beforehand, and output the epochs' labels.
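As an illustration of the kind of quantity step 2 computes, here is a minimal band-power feature for one 30-second epoch in pure NumPy. The exact feature set and band edges used by `feature_extraction.ipynb` are richer; this only sketches the idea:

```python
import numpy as np

FS = 100        # Hz, Sleep-EDF EEG sampling rate
EPOCH_S = 30    # seconds per scoring epoch (AASM)

# Classic EEG bands in Hz; the notebook's exact bands may differ.
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}


def band_powers(epoch: np.ndarray, fs: int = FS) -> np.ndarray:
    """Summed FFT power in each EEG band for a 1-D signal epoch."""
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    return np.array([power[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in BANDS.values()])


# Fake 30 s of Fpz-Cz EEG, just to exercise the function.
rng = np.random.default_rng(42)
epoch = rng.standard_normal(FS * EPOCH_S)
features = band_powers(epoch)
```

One such feature vector per 30-second epoch, stacked over the whole night, is the shape of input the classifiers in step 3 consume.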

## Dataset & Exploration

We will cover the choices that led us to Sleep-EDF as our main dataset, give a brief overview of it, and present our exploration results.
@@ -42,17 +44,18 @@ On the other hand, for the next iterations of improving our classifier, the use
Sleep-EDF extended is a dataset that is separated into two sections: sleep cassette (SC) and sleep telemetry (ST). They were compiled for two different studies: the former was intended to study the impact of age and sex on sleep, and the latter the effect of Temazepam on sleep. We only used the SC part of the dataset, because we initially only wanted to train on subjects who didn't have sleep pathologies.

As stated on the PhysioNet website, a resource managed by the National Institutes of Health (NIH), the SC part of the dataset can be described as:

> The 153 SC\* files (SC = Sleep Cassette) were obtained in a 1987-1991 study of age effects on sleep in healthy Caucasians aged 25-101, without any sleep-related medication [2]. Two PSGs of about 20 hours each were recorded during two subsequent day-night periods at the subjects homes. Subjects continued their normal activities but wore a modified Walkman-like cassette-tape recorder described in chapter VI.4 (page 92) of Bob’s 1987 thesis [7]. [...]

Overall, 82 subjects participated in this research. The following signals were recorded:
| Label | Sample Frequency | Physical Range | Unit | Digital Range | High Pass | Low Pass |
|----------------|------------------|----------------|------|---------------|---------------------|----------|
| EEG Fpz-Cz | 100 Hz | [-192,+192] | uV | [-2048,+2047] | 0.5 Hz | - |
| EEG Pz-Oz | 100 Hz | [-197,+196] | uV | [-2048,+2047] | 0.5 Hz | - |
| EOG Horizontal | 100 Hz | [-1009,+1009] | uV | [-2048,+2047] | 0.5 Hz | - |
| Resp oro-nasal | 1 Hz | [-2048,+2047] | - | [-2048,+2047] | 0.03 Hz | 0.9 Hz |
| EMG Submental  | 1 Hz             | [-5,+5]        | uV   | [-2500,+2500] | 16 Hz Rectification | 0.7 Hz   |
| Temp Rectal | 1 Hz | [+34,+40] | °C | [-2849,+2731] | - | - |
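The physical and digital ranges in the table are the EDF calibration parameters: samples are stored as 16-bit integers and mapped linearly to physical units. A quick sketch of that mapping (any EDF reader, such as MNE or pyEDFlib, applies it for you):

```python
def edf_to_physical(digital: float, phys_min: float, phys_max: float,
                    dig_min: float, dig_max: float) -> float:
    """Linear EDF calibration: digital counts -> physical units."""
    gain = (phys_max - phys_min) / (dig_max - dig_min)
    return phys_min + (digital - dig_min) * gain


# EEG Fpz-Cz row of the table: physical [-192, +192] uV,
# digital [-2048, +2047].
uv = edf_to_physical(0, -192.0, 192.0, -2048.0, 2047.0)
```

The digital extremes map to the physical extremes, so a digital value of -2048 on Fpz-Cz corresponds to -192 uV and +2047 to +192 uV.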

> The EOG and EEG signals were each sampled at 100 Hz. The submental-EMG signal was electronically highpass filtered, rectified and low-pass filtered after which the resulting EMG envelope expressed in uV rms (root-mean-square) was sampled at 1Hz. Oro-nasal airflow, rectal body temperature and the event marker were also sampled at 1Hz.

@@ -98,3 +101,10 @@ …the CNN, on the epoch with the best accuracy score. Finally, the Viterbi algorithm is applied to find the most likely sequence of hidden states given our emissions on the test set. -->

## Model export

The ONNX library, from Facebook and Microsoft, is used to serialize the model and export it to production. It improves the interoperability of the models we develop by serializing sklearn pipelines, PyTorch models, and others to the ONNX format (backed by protobuf); onnxruntime then makes it possible to run these pipelines on platforms other than Python.

Check out `export_to_onnx.ipynb` to see how we serialized our trained model for it to be used in production.
Refer to `backend/` to see how we are then using it in a production environment.