Merged
Commits
43 commits
dd0fb2f
new extra_functions: get_spike_features_of_chunk and get_spike_featur…
Olimaol Mar 12, 2024
1818910
Merge branch 'olimaol_develop' of https://github.com/Olimaol/CompNeur…
Olimaol Mar 12, 2024
8649c09
OptNeuron: adjusted recording period
Olimaol Mar 12, 2024
fa5f46f
Merge branch 'olimaol_develop' of https://github.com/Olimaol/CompNeur…
Olimaol Mar 14, 2024
3ef7af6
new fitCorbit neuron
Olimaol Mar 14, 2024
baf249b
OptNeuron: run returns now all parameters from variable bounds in sep…
Olimaol Mar 15, 2024
6b81166
Merge branch 'olimaol_develop' of https://github.com/Olimaol/CompNeur…
Olimaol Mar 15, 2024
bcfa757
extra_functions:
Olimaol Mar 20, 2024
436b9a5
Merge branch 'olimaol_develop' of https://github.com/Olimaol/CompNeur…
Olimaol Mar 20, 2024
1505ada
updated docs
Olimaol Mar 20, 2024
41fa003
DeapCma:
Olimaol Mar 21, 2024
efa030c
implemented test for model_configurator
Olimaol Mar 22, 2024
a8c165d
simulation_funcitons: new class SimulationEvents
Olimaol Mar 28, 2024
1bd6ceb
extra_functions: new class RNG
Olimaol Mar 28, 2024
2788c30
RecordingTimes.combine_periods: returns np.nan array if variable was …
Olimaol Apr 8, 2024
7ecde34
new final neuron model IzhikevichGolomb
Olimaol Apr 9, 2024
669711a
removed not used import
Olimaol Apr 10, 2024
463ace9
new distributions test for model_configurator
Olimaol Apr 15, 2024
702a7cf
model_configurator: further developed transformation from pre spikes …
Olimaol Apr 16, 2024
1166622
implemented method to obtain current conductance distribution from pr…
Olimaol Apr 17, 2024
c06cedd
.
Olimaol Apr 18, 2024
8ddb084
changed DistPreSpikes pdf calculation to simple histogram (before kde)
Olimaol May 2, 2024
e702e9b
generate_model: added warn argument for create() and compile()
Olimaol May 23, 2024
b17f174
fixed sci function
Olimaol May 24, 2024
7a1d7dc
model_configurator: percentile for distributions can now be used
Olimaol May 28, 2024
f711458
can run model_configurator again
Olimaol May 28, 2024
63ff907
model_configurator: tested model reduction
Olimaol May 29, 2024
4f0eec7
model_configurator: found a way to reduce a model, next implement thi…
Olimaol May 30, 2024
7e5f7cf
model_configurator: started implementing reduced model
Olimaol May 30, 2024
15f7b8f
reduce model continued
Olimaol May 31, 2024
1f83fdf
model_configurator: started using reduce model in model configurator …
Olimaol Jun 3, 2024
61d2f7a
new neuron model with noise based on SNR
Olimaol Jun 4, 2024
3e32432
model_configurator: started new implementation
Olimaol Jun 4, 2024
5b6b27f
model configurator: continued restructuring
Olimaol Jun 5, 2024
8e6dbd0
model_configurator:
Olimaol Jun 6, 2024
c67d718
model configurator: continued new structure
Olimaol Jun 7, 2024
ce3aacc
model configurator: continued new structure
Olimaol Jun 10, 2024
9b551f2
model configurator started implementign get_base
Olimaol Jun 11, 2024
f7e62eb
model_conf: continued with get_base optimization
Olimaol Jun 12, 2024
09a3dfa
moved get_spike_features_of_chunk and get_spike_features_loss_of_chun…
Olimaol Jun 17, 2024
6c1e444
updated doc strings
Olimaol Jun 17, 2024
d5bc890
cleaned up code
Olimaol Jun 17, 2024
6eb8d18
updated version
Olimaol Jun 17, 2024
1 change: 1 addition & 0 deletions .gitignore
@@ -16,3 +16,4 @@ dist/
!site/*
*.pkl
*json
*.log
8 changes: 8 additions & 0 deletions docs/built_in/neuron_models.md
@@ -67,6 +67,10 @@
options:
heading_level: 3
show_root_full_path: false
::: CompNeuroPy.neuron_models.final_models.izhikevich_2003_like_nm.Izhikevich2003NoisyBaseSNR
options:
heading_level: 3
show_root_full_path: false

## Izhikevich (2007)-like Neurons
::: CompNeuroPy.neuron_models.final_models.izhikevich_2007_like_nm.Izhikevich2007
@@ -106,6 +110,10 @@
heading_level: 3
show_root_full_path: false
::: CompNeuroPy.neuron_models.final_models.izhikevich_2007_like_nm.Izhikevich2007NoisyAmpaOscillating
options:
heading_level: 3
show_root_full_path: false
::: CompNeuroPy.neuron_models.final_models.izhikevich_2007_like_nm.IzhikevichGolomb
options:
heading_level: 3
show_root_full_path: false
165 changes: 165 additions & 0 deletions docs/examples/deap_cma.md
@@ -0,0 +1,165 @@
## Introduction
This example demonstrates how to use the DeapCma class to optimize parameters.

## Code
```python
from CompNeuroPy import DeapCma
import numpy as np


### for DeapCma we need to define the evaluate_function
def evaluate_function(population):
    """
    Calculate the loss for a population of individuals.

    Args:
        population (np.ndarray):
            population of individuals (i.e., parameter sets) to evaluate

    Returns:
        loss_values (list[tuple]):
            list of tuples, where each tuple contains the loss for an individual of
            the population
    """
    loss_list = []
    ### the population is a list of individuals
    for individual in population:
        ### the individual is a list of parameters
        p0, p1, p2 = individual
        ### calculate the loss of the individual
        loss_of_individual = float((p0 - 3) ** 2 + (p1 - 7) ** 2 + (p2 - (-2)) ** 2)
        ### insert the loss of the individual into the list of tuples
        loss_list.append((loss_of_individual,))

    return loss_list


def get_source_solutions(lb, ub):
    """
    DeapCma can use source solutions to initialize the optimization process. This
    function returns an example of source solutions.

    Source solutions are a list of tuples, where each tuple contains the parameters
    of an individual (np.ndarray) and its loss (float).

    Returns:
        source_solutions (list[tuple]):
            list of tuples, where each tuple contains the parameters of an
            individual and its loss
    """
    ### create random solutions
    source_solutions_parameters = np.random.uniform(0, 1, (100, 3)) * (ub - lb) + lb
    ### evaluate the random solutions
    source_solutions_losses = evaluate_function(source_solutions_parameters)
    ### create a list of tuples, where each tuple contains the parameters of an
    ### individual and its loss
    source_solutions = [
        (source_solutions_parameters[idx], source_solutions_losses[idx][0])
        for idx in range(len(source_solutions_parameters))
    ]
    ### only use the best 10 as source solutions
    source_solutions = sorted(source_solutions, key=lambda x: x[1])[:10]

    return source_solutions


def main():
    ### define lower bounds of parameters to optimize
    lb = np.array([-10, -10, 0])

    ### define upper bounds of parameters to optimize
    ub = np.array([10, 15, 5])

    ### create a "minimal" instance of the DeapCma class
    deap_cma = DeapCma(
        lower=lb,
        upper=ub,
        evaluate_function=evaluate_function,
    )

    ### create an instance of the DeapCma class using all optional attributes
    ### to initialize, one could give a p0 array (same shape as lower and upper) and
    ### a sig0 value, or use source solutions (as shown here)
    deap_cma_optional = DeapCma(
        lower=lb,
        upper=ub,
        evaluate_function=evaluate_function,
        max_evals=1000,
        p0=None,
        sig0=None,
        param_names=["a", "b", "c"],
        learn_rate_factor=1,
        damping_factor=1,
        verbose=True,
        plot_file="logbook_optional.png",
        cma_params_dict={},
        source_solutions=get_source_solutions(lb=lb, ub=ub),
        hard_bounds=True,
    )

    ### run the optimization; since max_evals was not defined during initialization
    ### of the DeapCma instance, it has to be defined here
    ### it automatically saves a plot file showing the loss over the generations
    deap_cma_result = deap_cma.run(max_evals=1000)

    ### run the optimization with all optional attributes
    deap_cma_optional_result = deap_cma_optional.run(verbose=False)

    ### print the best parameters and their loss; since we did not define the names
    ### of the parameters during initialization of the DeapCma instance, the names
    ### are param0, param1, param2; also print everything that is in the dict
    ### returned by the run
    best_param_dict = {
        param_name: deap_cma_result[param_name]
        for param_name in ["param0", "param1", "param2"]
    }
    print("\nFirst (minimal) optimization:")
    print(f"Dict from run function contains: {list(deap_cma_result.keys())}")
    print(f"Best parameters: {best_param_dict}")
    print(f"Loss of best parameters: {deap_cma_result['best_fitness']}\n")

    ### print the same for the second optimization
    best_param_dict = {
        param_name: deap_cma_optional_result[param_name]
        for param_name in ["a", "b", "c"]
    }
    print("Second optimization (with all optional attributes):")
    print(f"Dict from run function contains: {list(deap_cma_optional_result.keys())}")
    print(f"Best parameters: {best_param_dict}")
    print(f"Loss of best parameters: {deap_cma_optional_result['best_fitness']}")

    return 1


if __name__ == "__main__":
    main()
```
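As a quick, standalone sanity check (not part of the original example script), the contract of `evaluate_function` can be verified in isolation: it receives a population as a 2-D array of individuals and must return one single-element tuple per individual.

```python
import numpy as np


def evaluate_function(population):
    ### same loss as in the example: squared distance to the optimum (3, 7, -2)
    return [
        (float((p0 - 3) ** 2 + (p1 - 7) ** 2 + (p2 - (-2)) ** 2),)
        for p0, p1, p2 in population
    ]


### one individual at the optimum, one at the origin
population = np.array([[3.0, 7.0, -2.0], [0.0, 0.0, 0.0]])
print(evaluate_function(population))  # → [(0.0,), (62.0,)]
```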

## Console Output
```console
$ python deap_cma.py
ANNarchy 4.7 (4.7.3b) on linux (posix).
Starting optimization with:
centroid: [4.57628308 7.39815401 1.30602549], (scaled: [0.72881415 0.69592616 0.2612051 ])
sigma: [2.90435163 3.63043954 0.72608791], (scaled: 0.14521758155307307)
lambda (The number of children to produce at each generation): 7
mu (The number of parents to keep from the lambda children): 3
weights: [0.63704257 0.28457026 0.07838717]
mueff: 2.0286114646100617
ccum (Cumulation constant for covariance matrix.): 0.5714285714285714
cs (Cumulation constant for step-size): 0.5017818438926943
ccov1 (Learning rate for rank-one update): 0.09747248265066792
ccovmu (Learning rate for rank-mu update): 0.038593139193450914
damps (Damping for step-size): 1.5017818438926942
24%|██████████████████████████████▏ | 238/1000 [00:00<00:00, 1265.35gen/s, best loss: 0.00000]
17%|█████████████████████ | 166/1000 [00:00<00:00, 1369.98gen/s, best loss: 4.00000]

First (minimal) optimization:
Dict from run function contains: ['param0', 'param1', 'param2', 'logbook', 'deap_pop', 'best_fitness']
Best parameters: {'param0': 3.0, 'param1': 7.0, 'param2': -2.0}
Loss of best parameters: 0.0

Second optimization (with all optional attributes):
Dict from run function contains: ['a', 'b', 'c', 'logbook', 'deap_pop', 'best_fitness']
Best parameters: {'a': 3.000000004587328, 'b': 6.999999980571925, 'c': 0.0}
Loss of best parameters: 4.0
```
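A note on the second run's result (an observation about the output above, not a statement from the CompNeuroPy docs): the true optimum for the third parameter is -2, but it was bounded to [0, 5] and `hard_bounds=True` was set, so the best feasible value is `c = 0`, leaving exactly the residual loss of 4 seen in the console output:

```python
### best feasible point of the second (bounded) run: c is clipped to its lower bound 0
best_a, best_b, best_c = 3.0, 7.0, 0.0
loss = (best_a - 3) ** 2 + (best_b - 7) ** 2 + (best_c - (-2)) ** 2
print(loss)  # → 4.0
```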
38 changes: 19 additions & 19 deletions docs/main/generate_models.md
@@ -5,26 +5,26 @@
One can create a CompNeuroPy-model using the `CompNeuroModel` class. The `CompNe
2. **model creation**: create the ANNarchy objects (populations, projections), i.e., run the `model_creation_function`
3. **model compilation**: compile all created models

## Example
<pre><code>from CompNeuroPy import CompNeuroModel
my_model = CompNeuroModel(model_creation_function=create_model, ### the most important part, this function creates the model (populations, projections)
model_kwargs={'a':1, 'b':2}, ### define the two arguments a and b of function create_model
name='my_model', ### you can give the model a name
description='my simple example model', ### you can give the model a description
do_create=True, ### create the model directly
do_compile=True, ### let the model (and all models created before) compile directly
compile_folder_name='my_model') ### name of the saved compilation folder
</code></pre>
!!! example
    <pre><code>from CompNeuroPy import CompNeuroModel
    my_model = CompNeuroModel(model_creation_function=create_model, ### the most important part, this function creates the model (populations, projections)
    model_kwargs={'a':1, 'b':2}, ### define the two arguments a and b of function create_model
    name='my_model', ### you can give the model a name
    description='my simple example model', ### you can give the model a description
    do_create=True, ### create the model directly
    do_compile=True, ### let the model (and all models created before) compile directly
    compile_folder_name='my_model') ### name of the saved compilation folder
    </code></pre>

The following function could be the corresponding model_creation_function:
<pre><code>from ANNarchy import Population, Izhikevich
def create_model(a, b):
pop = Population(geometry=a, neuron=Izhikevich, name='Izh_pop_a') ### first population, size a
pop.b = 0 ### some parameter adjustment
Population(geometry=b, neuron=Izhikevich, name='Izh_pop_b') ### second population, size b
</code></pre>
Here, two populations are created (both using the built-in Izhikevich neuron model of ANNarchy). The function does not require a return value. It is important that all populations and projections have unique names.

A more detailed example is available in the [Examples](../examples/generate_models.md).

::: CompNeuroPy.generate_model.CompNeuroModel