diff --git a/README.md b/README.md index 58e3d79c..f5c6d9bd 100644 --- a/README.md +++ b/README.md @@ -1,48 +1,97 @@ -# Polydodo: Automatic Sleep Analysis Tool +

+
+ Polydodo +
+ Polydodo +
+

-This projects aims to offer a comprehensive guide to **record polysomnographic EEG data from home** with an OpenBCI, a website to **upload sleep data** to our classifier and an **interactive visualisation** tool to observe the classified night of sleep. +

A simple automatic sleep scoring tool that uses OpenBCI boards.

-## Dev requirements +

+ + web client + + + GitHub release (latest by date) + +GitHub all releases + + + +

-### Web +

+ Key Features • + How It Works • + Project Structure • + Learn more • + About us +

-- Install Yarn package manager +___ -### Python +This project aims to offer a cheaper and more accessible way to perform sleep studies from home. To achieve this, a machine learning classifier is used to automate the manual sleep scoring step. This drastically cuts the time needed to process every sleep sequence and completely eliminates the learning curve associated with manual sleep scoring. -- Install Python 3 and pip -- Consider using `venv` to create a virtual environment +🌐 Our web application does all of that and is available [here](https://polycortex.github.io/polydodo/). Check it out! -### Flutter +🤖 Our Android app is underway. Give us a star to stay tuned for upcoming news about its release! -- Install the latest stable version of flutter +**This application is not intended for medical purposes and the data it produces should not be used in such a context. Its only goal is to help you study your sleep by yourself. Always seek the advice of a physician for any question regarding a medical condition.** -### VS Code +## Key features -- Install VS Code -- Install the project's recommended extensions +- Compatible with both OpenBCI's Cyton and Ganglion boards. +- Automatic sleep stage scoring based on the AASM's labels. +- A comprehensive guide on how to record polysomnographic EEG data from home. +- A nice and straightforward UI to help you upload, visualize and save your sleep. + +## How it works + +Polydodo is composed of two client apps, a web one and a mobile one, through which the user can interact. These clients are not complementary but are alternatives to one another. Each client uses the same local server, which hosts the automatic sleep stage classification algorithm. + +The web client allows the user to upload a data file acquired using an OpenBCI board and then presents them with a detailed and personalized analysis of their sleep. 
Additionally, this client explains the process by which we classify sleep into stages and offers a review of that process. OpenBCI boards must be configured via the OpenBCI GUI, and data must be saved on an SD card (Cyton only) or through a session file. + +On the other hand, the mobile client offers a tool that can be used on a regular basis. Unlike the web application, this app can save sleep sequences for later consultation and display the aggregated results of several nights of sleep on a dashboard. It will also guide the user from the installation of the electrodes to the end of their data acquisition. + +Finally, both clients use a local HTTP server that is easy to install. Because the server runs locally, your data is never sent over the internet. Biosignal data are sensitive, and keeping everything on your machine is our way of protecting your privacy. +

+
+ General architecture of the project +
+Figure 1. Technology diagram with the flow of incoming and outgoing data to clients. +

-## Dev workflow +As the diagram above shows, in the case of the mobile application, the data is received in real time, and in the case of the web application, the data is received asynchronously. In both cases, the data is classified on the local server after the end of the acquisition. -### Web +## Project Structure + +This project is split into different folders that represent the standalone parts of our project: + +- The `ai/` folder contains all of our machine learning prototypes. It mainly consists of a set of notebooks that document our work. There, we trained, validated, tested and serialized our sleep stage classification algorithm for production. For more information, see [`ai/README.md`](https://github.com/PolyCortex/polydodo/tree/master/ai), and open the notebooks themselves, as much of the documentation lives there; +- The `backend/` folder contains the Python server that uses the serialized model from the `ai/` notebooks. This is the local server that must be used with both the web app and the mobile app. See [`backend/README.md`](https://github.com/PolyCortex/polydodo/tree/master/backend); +- `web/` contains the React web app which is the UI for our project. See [`web/README.md`](https://github.com/PolyCortex/polydodo/tree/master/web) for more info; +- `mobile/` contains the Flutter app. This app is an alternative to our web app. It can interface directly with OpenBCI boards, which makes it even simpler to perform your own sleep analysis. See [`mobile/README.md`](https://github.com/PolyCortex/polydodo/tree/master/mobile) for more info. + +## Getting started + +### VS Code + +- Install VS Code +- Open this project's workspace via the `polydodo.code-workspace` file. 
+- Install all the project's recommended extensions -- Open workspace `polydodo.code-workspace` -- Install Python packages by running `pip install --user -r backend/requirements.txt` -- Install node modules by running `yarn install --cwd web` -- Fetch Flutter dependencies through the `Flutter` extension -- Start dev server by running `python backend/app.py` +For more information about how to get started with each part (web, server, mobile) of the project, head to the folder of the same name and look for its `README.md` file. -### Building the server as a single executable ## Learn more -Run `python -m PyInstaller --onefile app.py` +For more information, please refer to our [wiki pages](https://github.com/PolyCortex/polydodo/wiki). This is where you'll find all of our official documentation. -### Running the server locally ## About us -- [Login](https://docs.github.com/en/free-pro-team@latest/packages/using-github-packages-with-your-projects-ecosystem/configuring-docker-for-use-with-github-packages#authenticating-with-a-personal-access-token) to Github Docker registry -- `docker pull docker.pkg.github.com/polycortex/polydodo/backend:latest` -- `docker run -p 8080:8080 docker.pkg.github.com/polycortex/polydodo/backend:latest` +[PolyCortex](http://polycortex.polymtl.ca/) is a student club based at [Polytechnique Montreal](https://www.polymtl.ca/). -### Mobile +The goal of PolyCortex is to develop expertise in neuroscience and engineering in order to solve neuroengineering problems, a field dedicated to creating innovative technological solutions to problems in neuroscience. -Prior to build execute build-runner to generate the app's routes. -`flutter packages pub run build_runner watch --delete-conflicting-outputs` +To do this, we work on the design of brain-machine interface devices, the implementation of embedded systems, and the use of machine learning and signal processing techniques. 
diff --git a/ai/README.md b/ai/README.md index 3c11d40f..ded79e78 100644 --- a/ai/README.md +++ b/ai/README.md @@ -1,17 +1,18 @@ # Sleep Stage Classification -This project aims to classify a full night of sleep based on two-channels of raw EEG signal. The sleep stage annotations should follow those of the *American Academy of Sleep Medicine (AASM) scoring manual* [[1]](https://aasm.org/clinical-resources/scoring-manual/). +This project aims to classify a full night of sleep based on two channels of raw EEG signal. The sleep stage annotations should follow those of the _American Academy of Sleep Medicine (AASM) scoring manual_ [[1]](https://aasm.org/clinical-resources/scoring-manual/). A particularity of this project is that the data to which we will later apply our classifier will be different from the data we trained on. Indeed, since there's no large public dataset of sleep stage classification based on the hardware we use and recommend (because of its affordability), we need to train on a dataset that used different recording equipment. Thus, our pipeline should also be able to classify sleep based on EEG acquired by different types of hardware (i.e. OpenBCI). ## Project Summary Once the right dataset was chosen, the following steps were taken in order to successfully classify sleep stages: -1) Dataset exploration -2) Feature extraction -3) Model exploration -4) Testing on Open BCI data -5) Feature and annotation formatting to csv + +1. Dataset exploration +2. Feature extraction +3. Model exploration +4. Testing on OpenBCI data +5. 
Feature and annotation formatting to csv ## How to Recreate Results @@ -21,10 +22,11 @@ You must first install package dependencies by running the following: Afterwards, the order in which the notebooks should be run is the following: -1) `exploration/subject_exploration.ipynb`: This notebook will generate the recording's info file, namely `recordings-info.csv`, which holds the extrapolated offset time at which the user closed the lights. It also holds the total night duration. Those information will be used to crop the recordings to only keep epochs of the subject's night. -2) `feature_extraction.ipynb`: This notebook takes the recordings file and extract the different features. It will save two files, one that holds the features (`x_features.npy`) and the other which holds the sleep stage labels (`y_observations.npy`). If you also want to test the OpenBCI performance, it extracts the feature from the OpenBCI recordings into the `X_openbci_HP.npy` and scored labels into `y_openbci_HP.npy`. -3) `models/{RF_HMM, KNN, NB, SVC, voting_clf}.ipynb`: These notebooks train the corresponding classifier with the previously extracted features. Each notebook also saves the trained classifier into the `trained_models` folder. Also, in order to have the hidden markov model matrices for the postprecessing step, you must run the final steps of `models/RF_HMM.ipynb`. -4) `prediction_{openbci, anonymous}.ipynb`: These notebooks allows you to check the accuracy of a trained classifier on a single night recording. It takes in input the features, that have to be extracted beforehand, and outputs the epoch's labels. +1. `exploration/subject_exploration.ipynb`: This notebook will generate the recording's info file, namely `recordings-info.csv`, which holds the extrapolated offset time at which the user closed the lights. It also holds the total night duration. This information will be used to crop the recordings to only keep epochs of the subject's night. +2. 
`feature_extraction.ipynb`: This notebook takes the recordings file and extracts the different features. It will save two files, one that holds the features (`x_features.npy`) and the other which holds the sleep stage labels (`y_observations.npy`). If you also want to test the OpenBCI performance, it extracts the features from the OpenBCI recordings into `X_openbci_HP.npy` and the scored labels into `y_openbci_HP.npy`. +3. `models/{RF_HMM, KNN, NB, SVC, voting_clf}.ipynb`: These notebooks train the corresponding classifier with the previously extracted features. Each notebook also saves the trained classifier into the `trained_models` folder. Also, in order to obtain the hidden Markov model matrices for the postprocessing step, you must run the final steps of `models/RF_HMM.ipynb`. +4. `prediction_{openbci, anonymous}.ipynb`: These notebooks allow you to check the accuracy of a trained classifier on a single night recording. They take as input the features, which have to be extracted beforehand, and output each epoch's label. + ## Dataset & Exploration We will cover the choices that led us to Sleep-EDF as our main dataset, give a brief overview of it and present our exploration results. @@ -42,17 +44,18 @@ On the other hand, for the next iterations of improving our classifier, the use Sleep-EDF extended is a dataset that is separated into two sections: sleep cassette (SC) and sleep telemetry (ST). They were compiled for two different studies; the former was intended to study the impact of age and sex on sleep, and the latter was intended to study the effect of Temazepam on sleep. We only used the SC part of the dataset, because we initially only wanted to train on subjects that didn't have sleep pathologies. 
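The train-and-evaluate step that the model notebooks above perform can be sketched roughly as follows. This is a minimal, hedged sketch: the feature matrix here is synthetic stand-in data (the real pipeline loads `x_features.npy` and `y_observations.npy`), and the feature count per epoch is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for x_features.npy / y_observations.npy
# (the real notebooks would do: X = np.load("x_features.npy"), etc.)
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 48))    # 48 features per 30 s epoch (assumed count)
y = rng.integers(0, 5, size=1000)  # 5 AASM stages: W, N1, N2, N3, REM

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Random forest, as in models/RF_HMM.ipynb (hyperparameters are placeholders)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

acc = accuracy_score(y_test, clf.predict(X_test))
```

With random labels the accuracy is near chance; on the real features and labels, this is the point where the trained classifier gets saved to `trained_models` and the HMM postprocessing from `models/RF_HMM.ipynb` comes into play.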
As stated on the PhysioNet website, a resource managed by the National Institutes of Health (NIH), the SC part of the dataset can be described as: -> The 153 SC* files (SC = Sleep Cassette) were obtained in a 1987-1991 study of age effects on sleep in healthy Caucasians aged 25-101, without any sleep-related medication [2]. Two PSGs of about 20 hours each were recorded during two subsequent day-night periods at the subjects homes. Subjects continued their normal activities but wore a modified Walkman-like cassette-tape recorder described in chapter VI.4 (page 92) of Bob’s 1987 thesis [7]. [...] + +> The 153 SC\* files (SC = Sleep Cassette) were obtained in a 1987-1991 study of age effects on sleep in healthy Caucasians aged 25-101, without any sleep-related medication [2]. Two PSGs of about 20 hours each were recorded during two subsequent day-night periods at the subjects homes. Subjects continued their normal activities but wore a modified Walkman-like cassette-tape recorder described in chapter VI.4 (page 92) of Bob’s 1987 thesis [7]. [...] Overall, there are 82 subjects who participated in this research. 
The following signals have been recorded: -| Label | Sample Frequency | Physical Range | Unit | Digital Range | High Pass | Low Pass | +| Label | Sample Frequency | Physical Range | Unit | Digital Range | High Pass | Low Pass | |----------------|------------------|----------------|------|---------------|---------------------|----------| -| EEG Fpz-Cz | 100 Hz | [-192,+192] | uV | [-2048,+2047] | 0.5 Hz | - | -| EEG Pz-Oz | 100 Hz | [-197,+196] | uV | [-2048,+2047] | 0.5 Hz | - | -| EOG Horizontal | 100 Hz | [-1009,+1009] | uV | [-2048,+2047] | 0.5 Hz | - | -| Resp oro-nasal | 1 Hz | [-2048,+2047] | - | [-2048,+2047] | 0.03 Hz | 0.9 Hz | -| EMG Sumbental | 1 Hz | [-5,+5] | uV | [-2500,+2500] | 16 Hz Rectification | 0.7 Hz | -| Temp Rectal | 1 Hz | [+34,+40] | °C | [-2849,+2731] | - | - | +| EEG Fpz-Cz | 100 Hz | [-192,+192] | uV | [-2048,+2047] | 0.5 Hz | - | +| EEG Pz-Oz | 100 Hz | [-197,+196] | uV | [-2048,+2047] | 0.5 Hz | - | +| EOG Horizontal | 100 Hz | [-1009,+1009] | uV | [-2048,+2047] | 0.5 Hz | - | +| Resp oro-nasal | 1 Hz | [-2048,+2047] | - | [-2048,+2047] | 0.03 Hz | 0.9 Hz | +| EMG Submental | 1 Hz | [-5,+5] | uV | [-2500,+2500] | 16 Hz Rectification | 0.7 Hz | +| Temp Rectal | 1 Hz | [+34,+40] | °C | [-2849,+2731] | - | - | > The EOG and EEG signals were each sampled at 100 Hz. The submental-EMG signal was electronically highpass filtered, rectified and low-pass filtered after which the resulting EMG envelope expressed in uV rms (root-mean-square) was sampled at 1Hz. Oro-nasal airflow, rectal body temperature and the event marker were also sampled at 1Hz. @@ -98,3 +101,10 @@ CNN, sur l'époque ayant le meilleur score de justesse. Finalement, l'algorithme de Viterbi est appliqué afin de trouver la séquence d'états cachés la plus probable étant donné nos émissions sur notre ensemble de test. --> + +## Model export + +The ONNX library from Facebook and Microsoft is used to serialize the model and export it to production. 
This improves the interoperability of the models we develop: thanks to protobuf, scikit-learn pipelines, PyTorch models and others can be serialized to the ONNX format, and onnxruntime then makes it possible to run these pipelines on platforms other than Python. + +Check out `export_to_onnx.ipynb` to see how we serialized our trained model for it to be used in production. +Refer to `backend/` to see how we are then using it in a production environment. diff --git a/backend/README.md b/backend/README.md new file mode 100644 index 00000000..14cb6310 --- /dev/null +++ b/backend/README.md @@ -0,0 +1,81 @@ +# Polydodo classification backend (local server) + +

+ + + + + + + + + + + +

+ + +This server is responsible for the automatic sleep stage scoring of recorded EEG data. For more info about the deployed model, see [our wiki page](https://github.com/PolyCortex/polydodo/wiki/model). + +## Getting started +### Prerequisites +- Install [Python 3.8](https://www.python.org/downloads/). + - Make sure that the "Python38\Scripts" folder is added to your environment PATH variable on Windows in order to use pip and hupper without any issue. + +### Setup + +Create a new virtual environment to isolate Python packages. +``` +python -m venv .venv +``` + +Activate your virtual environment. This step will need to be done every time you reopen your terminal. +- Linux/macOS: +```bash +source .venv/bin/activate +``` +- Windows: +``` +.\.venv\Scripts\activate.bat +``` + +Install the required dependencies. +```bash +pip install -r requirements.txt -r requirements-dev.txt +``` + +If you are running on Linux or macOS, you also have to install OpenMP with your package manager. It is a dependency of ONNX runtime, used to load our model and make predictions. + +```bash +apt-get install libgomp1 # on linux +brew install libomp # on macos +``` + +### Run the server + +If you want to run the backend with hot reload enabled (you must have installed the development requirements), run the following command. 
+ +``` +hupper -m app +``` + +### Run the tests + +You can run our unit tests with the following command, after installing the development requirements: + +```bash +pytest +``` + +### Profile application +*A profile is a set of statistics that describes how often and for how long various parts of the program executed.* -[Python Software Foundation](https://docs.python.org/3/library/profile.html) + +- Run `python profiler.py` + +- Send a request to the server + +- Open the profiler result contained in the `profiles` folder with `snakeviz` + +### Building the server as a single executable + +Run `python -m PyInstaller --onefile app.py` diff --git a/backend/readme.md b/backend/readme.md deleted file mode 100644 index 90cc17b1..00000000 --- a/backend/readme.md +++ /dev/null @@ -1,58 +0,0 @@ -# Backend - -## Setup - -Create a new virtual environment to isolate Python packages. - -```bash -virtualenv -p /usr/local/bin/python3.7 venv -``` - -Activate your virtual environment. - -```bash -source venv/bin/activate -``` - -Install the required dependencies. - -```bash -pip install -r requirements.txt -r requirements-dev.txt -``` - -If you are running on Linux or MacOS, you also have to install OpenMP with your package manager. It is a dependency of ONNX runtime, used to load our model and make predictions. - -```bash -apt-get install libgomp1 # on linux -brew install libomp # on macos -``` - -## Run it locally - -Activate your virtual environment. - -```bash -source venv/bin/activate -``` - -If you want to run the backend with hot reload enabled (you must have installed the development requirements), run the following command. 
- -```bash -hupper -m app -``` - -## Run the tests - -You can run our unit tests with the following command, after installing the development requirements: - -```bash -pytest -``` - -## Profile application - -- Run `python profiler.py` - -- Send the request to the server - -- Open the profiler result contained in `profiles` folder with `snakeviz` diff --git a/mobile/README.md b/mobile/README.md index b3c68635..a73273f6 100644 --- a/mobile/README.md +++ b/mobile/README.md @@ -1,16 +1,62 @@ -# Polydodo +# Polydodo mobile client -A new Flutter project. +

+ + web client + + + web client + + + web client + + + web client + +

+ +This mobile app receives, scores, saves and aggregates sleep sequences. Unlike the web application, this app can save sleep sequences for later consultation and display the aggregated results of several nights of sleep on a dashboard. It will also guide the user from the installation of the electrodes to the end of their data acquisition. It does not require the use of the OpenBCI GUI since it directly interfaces with the OpenBCI board. + +In the case of the Cyton, this is done over the serial protocol, with the Cyton dongle connected to the phone through an OTG cable. In the case of the Ganglion, the connection is made over Bluetooth. + +## Development Framework + +We are using Flutter, a high-level framework for mobile development. The speed of development and the ability to build a cross-platform product (Android and iOS) from a single code repository without sacrificing performance were the determining factors in choosing this technology. Flutter uses the Dart language, which is typed and compiled. + +## Targeted Platform + +We currently target Android only. While frameworks like Flutter allow both Android and iOS to be targeted from a single codebase, supporting a second platform still requires some extra work and attention. If it becomes worth it, we will also target iOS in the future. ## Getting Started -This project is a starting point for a Flutter application. +### Prerequisites +- Install the latest stable version of [Flutter](https://flutter.dev/docs/get-started/install/). +- Install the latest stable version of [Android Studio](https://developer.android.com/studio/index.html). + - Android Studio is a great tool as it installs the required Android SDK, but also provides an Android Virtual Device manager which allows you to emulate an Android device. 
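Per OpenBCI's documented Cyton binary format, each packet streamed over the serial link is 33 bytes: a `0xA0` start byte, a sample counter, eight 24-bit big-endian two's-complement channel samples, six aux bytes and a stop byte. A minimal, transport-agnostic parsing sketch (the conversion from raw counts to microvolts depends on the channel gain and is left out):

```python
START_BYTE = 0xA0  # Cyton packet header, per OpenBCI's binary format

def int24_be(b: bytes) -> int:
    """Decode a 3-byte big-endian two's-complement integer."""
    value = (b[0] << 16) | (b[1] << 8) | b[2]
    return value - (1 << 24) if value & 0x800000 else value

def parse_cyton_packet(packet: bytes):
    """Split one 33-byte Cyton packet into (sample number, 8 raw channel counts)."""
    if len(packet) != 33 or packet[0] != START_BYTE:
        raise ValueError("not a valid Cyton packet")
    sample_number = packet[1]
    channels = [int24_be(packet[2 + 3 * i: 5 + 3 * i]) for i in range(8)]
    return sample_number, channels

# Example: a packet carrying sample #1 with all-zero channels
num, channels = parse_cyton_packet(bytes([0xA0, 1]) + bytes(30) + bytes([0xC0]))
```

The Ganglion uses a different, BLE-specific framing, which is why the app handles the two boards separately.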
+ +### Setup +First, at the root of the `mobile/` folder, download the required dependencies using: +``` +flutter pub get +``` + +Before building, run build_runner to generate the app's routes: +``` +flutter packages pub run build_runner watch --delete-conflicting-outputs +``` + +You now have the option to run the app on an emulator or on a live device. If you wish to run the app on a live device, you will need to use [Android Debug Bridge](https://developer.android.com/studio/command-line/adb). -A few resources to get you started if this is your first Flutter project: +Once you have an emulator set up or a connection to your live device, you can run the app using: +``` +flutter run +``` +If you are using Microsoft's Visual Studio Code IDE, build & launch configurations are already set up for you. Head to the **Run** tab and run the **Mobile debug** option. -- [Lab: Write your first Flutter app](https://flutter.dev/docs/get-started/codelab) -- [Cookbook: Useful Flutter samples](https://flutter.dev/docs/cookbook) ## Learn more -For help getting started with Flutter, view our -[online documentation](https://flutter.dev/docs), which offers tutorials, -samples, guidance on mobile development, and a full API reference. +Refer to the [wiki pages](https://github.com/PolyCortex/polydodo/wiki) to learn more about our mobile app project. diff --git a/web/README.md b/web/README.md new file mode 100644 index 00000000..3e9807c1 --- /dev/null +++ b/web/README.md @@ -0,0 +1,40 @@ +# Polydodo web client + +

+ + web client + + web client + + + web client + +

+ + +This web app aims to offer a comprehensive guide on **how to record polysomnographic EEG data from home** with an OpenBCI, a form to **upload sleep data** to our classifier and an **interactive scrolly-telling visualization** to observe the night of sleep. Finally, it is possible to export the classifier results for further use. + +This app was designed on top of React.js and the data visualizations were created using D3.js. + +## Getting started + +### Prerequisites +- Install the latest stable version of the [Yarn package manager](https://classic.yarnpkg.com/lang/en/). +- Install the latest LTS version of [Node.js](https://nodejs.org/en/download/). + +### Setup +Once you have installed the required frameworks, go into the `web/` folder using a terminal. + +From there, you can install the required node modules using: +``` +yarn install +``` + +Afterwards, it is possible to run the web client using: +``` +yarn run start +``` +