
Synthetic data generation #2

@berkgercek

Description


Some functionality for synthetic data generation already exists in the prior localization repository in synth.py and glm_predict.py.

These should be reorganized in a sensible way to:

  1. Generate predicted spike counts in bins, using an nglm.predict() method, from a new design matrix with the same covariates (or a subset of the existing design matrix).

  2. Decompose PETHs into the subcomponents contributed by each kernel.

  3. Optionally, generate fully synthetic spike times. This is harder, especially if you want to randomize when each spike occurs within a bin, but it could be useful for testing other data analysis methods or pipelines.

Items 1 and 2 are high-priority and low-hanging fruit, given that the primitives for 1 already exist in sklearn and the code for 2 has already been written; they just need to be adapted.
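For item 1, a minimal sketch of the sklearn primitive the issue refers to: fit a Poisson GLM on binned counts, then call predict() on a new design matrix with the same covariates. The `nglm.predict()` method would wrap something like this; the design matrix, weights, and sizes below are illustrative assumptions, not from the repository.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n_bins, n_cov = 500, 3

# Illustrative design matrix (bins x covariates) and ground-truth weights
X = rng.normal(size=(n_bins, n_cov))
w_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(X @ w_true))        # binned spike counts

# Fit the GLM, then predict counts per bin for a new design matrix
glm = PoissonRegressor(alpha=0.0).fit(X, y)
X_new = rng.normal(size=(100, n_cov))      # same covariates, new trials
rate_pred = glm.predict(X_new)             # expected spike count per bin
```

Predicting on a subset of covariates would amount to selecting the corresponding columns of the design matrix (and weights) before calling predict.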
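For item 2, one way the decomposition can work, assuming a log link and a design matrix whose columns are partitioned by kernel (the partition, weights, and shapes here are hypothetical): each kernel's contribution to the log rate is the partial linear predictor over its columns, and exponentiating gives a multiplicative component of the predicted PETH.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical partition of design-matrix columns by kernel
kernel_cols = {"stim": [0, 1], "move": [2, 3], "reward": [4]}
w = np.array([0.4, -0.2, 0.1, 0.3, -0.5])   # fitted weights (illustrative)
b = -1.0                                     # intercept
X = rng.normal(size=(200, 5))                # design matrix (bins x columns)

# With a log link, kernels contribute additively in log-rate space
log_contrib = {k: X[:, cols] @ w[cols] for k, cols in kernel_cols.items()}
total_rate = np.exp(b + sum(log_contrib.values()))

# Exponentiating each partial sum yields per-kernel multiplicative components
components = {k: np.exp(v) for k, v in log_contrib.items()}
```

The product of the components, scaled by exp(b), recovers the full predicted rate, which is what makes this a proper decomposition of the PETH.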
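For item 3, a sketch of the harder part the issue flags: after drawing a Poisson count per bin, place each spike uniformly at random within its bin rather than at the bin edge. Bin size, rates, and the uniform-jitter choice are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bins, bin_size = 300, 0.02            # 20 ms bins (illustrative)

rates = rng.uniform(0, 50, size=n_bins)  # firing rate per bin, in Hz
counts = rng.poisson(rates * bin_size)   # synthetic spike count per bin

# Randomize when each spike occurs within its bin (uniform jitter)
bin_starts = np.arange(n_bins) * bin_size
spike_times = np.concatenate(
    [start + rng.uniform(0, bin_size, size=c)
     for start, c in zip(bin_starts, counts)]
)
spike_times.sort()
```

Re-binning the resulting spike times at the same bin edges recovers the original counts, which gives a cheap consistency check for a test suite.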

Labels: enhancement (New feature or request)
