Day 0: Introduction to the Tutorials
The processing of information by the nervous system spans space and time, and mathematical modelling of these dynamics has proven to be an essential tool. Such models have been used extensively to study the dynamics and mechanisms of information processing, both at the level of individual neurons and at the level of networks of neurons. These models generally consist of systems of simultaneous ordinary differential equations (ODEs), which are solved as initial value problems using well-studied methods from numerical analysis such as Euler's method and Runge-Kutta methods. From detailed studies of the mechanisms of action of neurons, ion channels, neurotransmitters, and neuromodulators, equations have been found that describe the behavior of neurons and synapses. By forming interconnected systems of these individual groups of differential equations, the overall dynamics of networks can be studied through deterministic or stochastic simulations, which, unlike in vivo experiments, can be easily perturbed.
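For readers who have not met these solvers before, here is a minimal sketch of forward-Euler integration of an initial value problem in plain Python/NumPy. The function and variable names are illustrative, not taken from the tutorial code; Day 1 develops this material properly.

```python
import numpy as np

def euler_integrate(f, y0, t):
    """Integrate dy/dt = f(t, y) with y(t[0]) = y0 over the time points t."""
    y = np.zeros((len(t),) + np.shape(y0))
    y[0] = y0
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        # Forward-Euler step: follow the local slope for one time step
        y[i + 1] = y[i] + dt * f(t[i], y[i])
    return y

# Example: exponential decay dy/dt = -y, whose exact solution is exp(-t)
t = np.linspace(0.0, 5.0, 501)
y = euler_integrate(lambda t, y: -y, 1.0, t)
```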
A significant issue with such simulations is computational complexity. As the number of neurons increases, the number of possible synaptic connections grows quadratically: for a system of n neurons there can be at most n² synapses of one type, each with its own set of equations. Thus, simulations can take very long for large values of n. A solution to this problem is to introduce some form of parallelization into the ODE solver and the system of equations itself. One of the simplest forms of parallelizable computation is matrix algebra, which can be accelerated on CPUs using libraries such as BLAS; similarly, CUDA is available for speeding up computations on Nvidia GPUs. However, there is a barrier to entry in using low-level packages like CUDA for the general user, as they sometimes require an in-depth understanding of the hardware architecture, particularly for troubleshooting.
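To illustrate why matrix algebra helps, the sketch below uses a hypothetical model (not from the tutorial) in which neuron i receives a total synaptic input of Σⱼ W[i, j]·s[j]. The explicit loop and the single matrix-vector product compute the same n² terms, but the latter is dispatched to a parallelized BLAS routine.

```python
import numpy as np

# Hypothetical example: total synaptic input to each of n neurons,
# with W the n x n synaptic weight matrix and s the presynaptic activity.
n = 500
rng = np.random.default_rng(0)
W = rng.random((n, n))  # up to n^2 synapses of one type
s = rng.random(n)

# Looping over every synapse touches all n^2 entries one at a time...
I_loop = np.zeros(n)
for i in range(n):
    for j in range(n):
        I_loop[i] += W[i, j] * s[j]

# ...whereas a single matrix-vector product hands the same n^2
# operations to a parallelized BLAS routine.
I_vec = W @ s

assert np.allclose(I_loop, I_vec)
```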
This is where TensorFlow (an open-source Google product) gives us a massive edge. TensorFlow offers much greater scalability and is far more flexible in terms of ease of implementation on specialized hardware. With minimal changes, the same code can be executed on a wide variety of heterogeneous and distributed systems, ranging from mobile devices to clusters and specialized computing devices such as GPUs and TPUs. Modern advances in GPU/TPU design give us access to even higher degrees of parallelization: it is now possible to have hundreds of teraFLOPS of computing power in a single small computing device. With TensorFlow, we can access these resources without requiring an in-depth understanding of the hardware architecture or technical knowledge of specific low-level packages like CUDA.
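As a minimal TensorFlow 2.x sketch of this portability (illustrative, not part of the tutorial code), the following lines run unchanged whether or not a GPU or TPU is present; TensorFlow places the operations on an available accelerator automatically.

```python
import tensorflow as tf

# Report which devices TensorFlow can see (CPU, and GPU/TPU if present)
print("Devices visible to TensorFlow:", tf.config.list_physical_devices())

# The same high-level code runs on any of those devices without changes;
# matmul is dispatched to accelerator kernels when one is available.
a = tf.random.uniform((2000, 2000))
b = tf.random.uniform((2000, 2000))
c = tf.matmul(a, b)
```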
Summary
In the form of a 7-day tutorial, you will be introduced to the mathematical modelling of neuronal networks based on the Hodgkin-Huxley differential equations, and instructed in developing highly parallelized yet easily readable Python code for numerical methods such as Euler's method and Runge-Kutta methods, which you will use to solve differential equations and simulate neuronal dynamics. To develop scalable code, we will introduce you to TensorFlow, Google's open-source package; you will learn to develop simulations using this package and to handle the few limitations that come with this implementation. You will also be introduced to coding structures that maximize the parallelizability of the simulation. Finally, you will use the coding paradigm developed here to simulate a model of the Locust Antennal Lobe described in previous literature. A basic introduction to distributed TensorFlow is also provided as optional material.
How to use the Tutorials
A reader familiar with Python will find this tutorial accessible. We use a number of NumPy and Matplotlib functions to run the simulations and display the results; both libraries are well documented, with excellent introductory guides. On Day 1 of this tutorial, we introduce numerical integration using Python, without any TensorFlow functions. On Day 2, we use TensorFlow functions to implement the integrator. On Days 3 and 4, we use the code developed on Days 1 and 2 to simulate networks of conductance-based neurons. Day 5 discusses memory management in TensorFlow. Readers interested in solving differential equations in other domains will find the tutorials for Days 1, 2, and 5 self-contained.

The tutorials are linked in the Supporting information and are available as Jupyter notebooks (.ipynb files). The notebooks can be viewed or run online using Binder, Google Colab, or Kaggle Notebooks; the links are available in the repository. Please make sure GPU usage is enabled on Google Colab and Kaggle Notebooks for the best performance. We also provide .html files that can be read using any browser.

To run the notebooks locally, we recommend that readers install Python 3.6 or above, Jupyter Notebook, NumPy 1.19 or above, Matplotlib 3.2.2 or above, and TensorFlow 1.13 or above using the Anaconda distribution of Python 3. This tutorial uses TensorFlow version 2.6. To remain compatible with different versions of TensorFlow (1.x and 2.x), we provide alternate import statements consistent with both versions in the tutorial. We suggest that you use either TensorFlow 1.x, or TensorFlow 2.x with eager execution disabled, for best performance.
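The shim below is a sketch of the kind of version-compatible import this refers to; the exact statements in the notebooks may differ. Under TensorFlow 2.x it falls back to the v1-compatible API with eager execution disabled, while under TensorFlow 1.x graph-mode execution is already the default.

```python
try:
    # TensorFlow 2.x: use the v1-compatible API with eager execution
    # disabled, as recommended above for best performance.
    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()
except (ImportError, AttributeError):
    # TensorFlow 1.x: graph-mode execution is already the default.
    import tensorflow as tf
```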
Tutorial Content
Each tutorial day can be run online (via Binder, Google Colab, or Kaggle) or viewed as pre-rendered HTML; the corresponding links are available in the repository.

- Day 1: Of Numerical Integration and Python
- Day 2: Let the Tensors Flow
- Day 3: Cells in Silicon
- Day 4: Neurons and Networks
- Day 5: Optimal Mind Control
- Day 6: Example Implementation (Locust AL)
- Day 7: (Optional) Distributed TensorFlow
Pre-rendered HTML files are available in the 'static' folder for each day.
WARNING: If you are running PSST using Kaggle, make sure you have logged in to your verified Kaggle account and enabled Internet Access for the kernel. If you do not do so, the code will give errors from Day 5 onwards. For instructions on enabling Internet on Kaggle Kernels, visit: https://www.kaggle.com/product-feedback/63544