The following is a series of reports produced for the course Computational Neuroscience Methods at l'École Normale Supérieure, Département d'Études Cognitives.
The first report covers spike-train analysis and visualization with raster plots, and outlines both the Leaky Integrate-and-Fire and the Hodgkin-Huxley models for simulating action potentials.
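As a minimal sketch of the kind of simulation the first report describes, the leaky integrate-and-fire model can be integrated with the Euler method; all parameter values below are illustrative, not taken from the report:

```python
import numpy as np

def simulate_lif(I, dt=1e-4, tau=0.02, v_rest=-0.065, v_reset=-0.065,
                 v_thresh=-0.050, R=1e7):
    """Euler integration of tau * dV/dt = -(V - v_rest) + R * I(t).

    Whenever V crosses v_thresh, a spike time is recorded and V is
    reset to v_reset. Parameters are illustrative placeholders.
    """
    n = len(I)
    v = np.full(n, v_rest)
    spikes = []
    for t in range(1, n):
        dv = (-(v[t - 1] - v_rest) + R * I[t - 1]) * dt / tau
        v[t] = v[t - 1] + dv
        if v[t] >= v_thresh:
            spikes.append(t * dt)  # record spike time in seconds
            v[t] = v_reset
    return v, spikes

# A constant suprathreshold current yields regular spiking
I = np.full(5000, 2e-9)  # 2 nA for 0.5 s of simulated time
v, spikes = simulate_lif(I)
```

The recorded spike times can then be displayed directly as one row of a raster plot.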
The second report focuses on the application of dynamical systems to computational neuroscience. It is effectively an exercise in solving differential equations computationally, within the context of various models of artificial neural networks, notably Hopfield networks and ring attractors.
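To make the Hopfield-network part concrete, here is a small sketch of Hebbian storage and recall (the network size, number of patterns, and synchronous update rule are my assumptions, not details from the report):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                      # number of units
patterns = rng.choice([-1, 1], size=(3, N))  # three random stored patterns

# Hebbian outer-product weights with zero self-connections
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Synchronous sign-function updates toward a stored attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

# Corrupt 10 bits of the first pattern, then let the dynamics clean it up
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1
out = recall(probe)
```

Well below capacity (roughly 0.14 N patterns), the corrupted probe settles back onto the stored pattern.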
The third report is an implementation and analysis of two models of decision-making in cognitive science: the drift-diffusion model and recurrent neural networks.
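The drift-diffusion model can be sketched in a few lines: evidence accumulates with a constant drift plus Gaussian noise until it hits one of two decision bounds. The parameter values below are illustrative, not those used in the report:

```python
import numpy as np

def ddm_trial(drift=0.5, noise=1.0, threshold=1.0, dt=1e-3, rng=None):
    """One drift-diffusion trial.

    Evidence x follows dx = drift*dt + noise*sqrt(dt)*N(0,1) until it
    crosses +threshold (choice 1) or -threshold (choice 0).
    Returns (choice, reaction_time).
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t

rng = np.random.default_rng(0)
trials = [ddm_trial(rng=rng) for _ in range(500)]
accuracy = np.mean([choice for choice, _ in trials])
```

With a positive drift the upper bound is the "correct" response, so accuracy exceeds chance, and the distribution of reaction times shows the characteristic rightward skew.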
Finally, the last report is a reproduction of certain figures from the paper "Generating Coherent Patterns of Activity from Chaotic Neural Networks" by David Sussillo and L.F. Abbott, wherein they establish a new method for training neural networks that exhibit chaotic internal activity at baseline, but can quickly converge onto desired, regular activation patterns. The accompanying notebook includes my translation of the authors' MATLAB code into Python.