IGI Reading Group
This project is maintained by IGITUGraz
The IGI Reading Group is the informal Journal Club of IGI where we meet about once a week and discuss papers that are related to our research.
In WS 2020/21, the journal club will take place every Tuesday at 11:00 in the IGI seminar room.
If you’re from outside IGI and would like to attend a particular session, contact mueller [at] igi.tugraz.at.
Participants: Ceca, Christoph, Dominik, Florian, Franz, Horst, Michael, Ozan, Philipp, Roland, Romain, Samuel, Špela, Thomas L.
Please present papers that have some direct relevance for our work: e.g. machine learning papers describing relevant developments in the field like new methods or architectures, or experimental papers discussing new data for our models or analysis methods for our experiments. Presented works should enhance our knowledge and provide inspiration for our research.
Presentations should convey the relevant findings of your selected paper with a focus on our group’s interests; that is, also prepare the background information and important concepts from the cited literature that are required to understand the main findings. You don’t need to prepare slides.
Start your presentation by giving a 5-minute overview before going into the details.
Date | Moderator | Paper |
---|---|---|
19.01.2021 | Florian | Levina, Anna, J. Michael Herrmann, and Manfred Denker. “Critical branching processes in neural networks.” PAMM: Proceedings in Applied Mathematics and Mechanics. Vol. 7. No. 1. Berlin: WILEY‐VCH Verlag, 2007. |
26.01.2021 | Thomas L. | |
Feel free to add papers of interest.
Date | Moderator | Paper |
---|---|---|
12.01.2021 | Špela | Young, Benjamin D., James A. Escalon, and Dennis Mathew. “Odors: from chemical structures to gaseous plumes.” Neuroscience & Biobehavioral Reviews 111 (2020): 19-29. |
01.12.2020 | Samuel | Ly, Calvin, et al. “Psychedelics promote structural and functional neural plasticity.” Cell reports 23.11 (2018): 3170-3182. |
24.11.2020 | Roland | Rajeswaran, Aravind, et al. “Meta-learning with implicit gradients.” Advances in Neural Information Processing Systems. 2019. |
17.11.2020 | Ozan | Dapello, Joel, et al. “Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations.” Advances in Neural Information Processing Systems 33 (2020). |
10.11.2020 | Horst | Sharma, Archit, et al. “Dynamics-aware unsupervised discovery of skills.” arXiv preprint arXiv:1907.01657 (2019). |
03.11.2020 | Franz | Ramsauer, Hubert, et al. “Hopfield networks is all you need.” arXiv preprint arXiv:2008.02217 (2020). |
27.10.2020 | Florian | Reimann, Michael W., et al. “Cliques of neurons bound into cavities provide a missing link between structure and function.” Frontiers in computational neuroscience 11 (2017): 48. |
20.10.2020 | Christoph | Nieder, Andreas. “Neural constraints on human number concepts.” Current Opinion in Neurobiology 60 (2020): 28-36. |
13.10.2020 | Ceca | Fitz, Hartmut, et al. “Neuronal spike-rate adaptation supports working memory in language processing.” Proceedings of the National Academy of Sciences 117.34 (2020): 20881-20889. |
30.09.2020 | Michael | Frankland, Steven M., and Joshua D. Greene. “Concepts and compositionality: in search of the brain’s language of thought.” Annual review of psychology 71 (2020): 273-303. |
30.07.2020 | Arjun | Mittal, Sarthak, et al. “Learning to combine top-down and bottom-up signals in recurrent neural networks with attention over modules.” arXiv preprint arXiv:2006.16981 (2020). |
| | | Goyal, Anirudh, et al. “Recurrent independent mechanisms.” arXiv preprint arXiv:1909.10893 (2019). |
02.03.2020 | Luca | Hudson, Drew A., and Christopher D. Manning. “Compositional attention networks for machine reasoning.” arXiv preprint arXiv:1803.03067 (2018). |
24.02.2020 | Arjun | Schrittwieser, Julian, et al. “Mastering atari, go, chess and shogi by planning with a learned model.” arXiv preprint arXiv:1911.08265 (2019). |
03.02.2020 | Ceca | Introduction to ANOVA analysis (in Kass, Robert E., Uri T. Eden, and Emery N. Brown. Analysis of neural data. Vol. 491. New York: Springer, 2014.) |
| | | Lindsay, Grace W., et al. “Hebbian learning in a random network captures selectivity properties of the prefrontal cortex.” Journal of Neuroscience 37.45 (2017): 11021-11036. |
| | | Rigotti, Mattia, et al. “The importance of mixed selectivity in complex cognitive tasks.” Nature 497.7451 (2013): 585-590. |
06.12.2019 | Thomas | Henaff, Mikael, et al. “Tracking the world state with recurrent entity networks.” arXiv preprint arXiv:1612.03969 (2016). |
02.12.2019 | Philipp | Patel, Devdhar, et al. “Improved robustness of reinforcement learning policies upon conversion to spiking neuronal network platforms applied to ATARI games.” arXiv preprint arXiv:1903.11012 (2019). |
25.11.2019 | Special session | |
| | Florian | Barrett, David GT, Sophie Deneve, and Christian K. Machens. “Optimal compensation for neuron loss.” Elife 5 (2016): e12454. |
| | Franz | Voelker, Aaron R., and Chris Eliasmith. “Improving spiking dynamical networks: Accurate delays, higher-order synapses, and time cells.” Neural computation 30.3 (2018): 569-609. |
| | | Voelker, Aaron, Ivana Kajić, and Chris Eliasmith. “Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks.” Advances in Neural Information Processing Systems. 2019. |
| | Arjun | Frady, E. Paxon, and Friedrich T. Sommer. “Robust computation with rhythmic spike patterns.” Proceedings of the National Academy of Sciences 116.36 (2019): 18050-18059. |
11.11.2019 | Michael | Habenschuss, Stefan, Zeno Jonke, and Wolfgang Maass. “Stochastic computations in cortical microcircuit models.” PLoS computational biology 9.11 (2013): e1003311. |
| | | Berkes, Pietro, et al. “Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment.” Science 331.6013 (2011): 83-87. |
29.10.2019 | Darjan | Nayebi, Aran, et al. “Task-Driven convolutional recurrent models of the visual system.” Advances in Neural Information Processing Systems. 2018. |
27.09.2019 | Horst | Marblestone, Adam H., Greg Wayne, and Konrad P. Kording. “Toward an integration of deep learning and neuroscience.” Frontiers in computational neuroscience 10 (2016): 94. |
07.06.2019 | Franz | Hung, Chia-Chun, et al. “Optimizing agent behavior over long time scales by transporting value.” arXiv preprint arXiv:1810.06721 (2018) |
16.05.2019 | Elias | Frankle, Jonathan, and Michael Carbin. “The lottery ticket hypothesis: Finding sparse, trainable neural networks.” arXiv preprint arXiv:1803.03635 (2018). |
09.05.2019 | Rapid fire session | |
| | Anand | Karnani, Mahesh M., et al. “A Blanket of Inhibition: Functional Inferences from Dense Inhibitory Connectivity.” Current Opinion in Neurobiology, 2014. |
| | | Okun, Michael, et al. “Diverse Coupling of Neurons to Populations in Sensory Cortex.” Nature, 2015. |
| | Darjan | Bönstrup, Marlene, et al. “A Rapid Form of Offline Consolidation in Skill Learning.” Current Biology (2019). |
| | | Triefenbach, Fabian, et al. “Phoneme recognition with large hierarchical reservoirs.” Advances in Neural Information Processing Systems. 2010. |
| | Michael | Saxe, Andrew M., et al. “On Random Weights and Unsupervised Feature Learning.” ICML. Vol. 2. No. 3. 2011. |
| | Philipp | Frady, E. Paxon, and Friedrich T. Sommer. “Robust computation with rhythmic spike patterns.” arXiv preprint arXiv:1901.07718 (2019). |
| | Arjun | Akrout, M., Wilson, C., Humphreys, P. C., Lillicrap, T., & Tweed, D. (2019). “Using Weight Mirrors to Improve Feedback Alignment.” arXiv preprint arXiv:1904.05391. |
| | Ceca | Behrens, Timothy E. J., et al. “What Is a Cognitive Map? Organising Knowledge for Flexible Behaviour.” 2018. doi:10.1101/365593. https://www.cell.com/neuron/pdf/S0896-6273(18)30856-0.pdf |
18.04.2019 | Ceca | O’Reilly, Randall C., Thomas E. Hazy, and Seth A. Herd. “The Leabra Cognitive Architecture: How to Play 20 Principles with Nature.” The Oxford handbook of cognitive science 91 (2016): 91-116. |
11.04.2019 | Darjan | Krotov, Dmitry, and John J. Hopfield. “Unsupervised learning by competing hidden units.” Proceedings of the National Academy of Sciences (2019): 201820458. |
20.03.2019 | Arjun | Koutnik, Jan, et al. “A clockwork rnn.” arXiv preprint arXiv:1402.3511 (2014). |
| | | Chung, Junyoung, Sungjin Ahn, and Yoshua Bengio. “Hierarchical multiscale recurrent neural networks.” arXiv preprint arXiv:1609.01704 (2016). |
14.03.2019 | Anand | Jaderberg, Max, et al. “Human-level performance in first-person multiplayer games with population-based deep reinforcement learning.” arXiv preprint arXiv:1807.01281 (2018). |
17.12.2018 | Thomas L. | Beaulieu-Laroche, Lou, et al. “Enhanced Dendritic Compartmentalization in Human Cortical Neurons.” Cell 175.3 (2018): 643-651. |
20.11.2018 | Michael | Kutter, Esther F., et al. “Single Neurons in the Human Brain Encode Numbers.” Neuron (2018). |
| | | Quiroga, R. Quian, et al. “Invariant visual representation by single neurons in the human brain.” Nature 435.7045 (2005): 1102. |
09.11.2018 | Guillaume | Wasmuht, Dante Francisco, et al. “Intrinsic neuronal dynamics predict distinct functional roles during working memory.” Nature communications 9.1 (2018): 3499. |
19.10.2018 | Franz | Perich, Matthew G., Juan A. Gallego, and Lee E. Miller. “A neural population mechanism for rapid learning.” Neuron (2018). |
12.10.2018 | Darjan | Zeng, Andy, et al. “Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning.” arXiv preprint arXiv:1803.09956 (2018). |
| | | Dubey, Rachit, et al. “Investigating Human Priors for Playing Video Games.” arXiv preprint arXiv:1802.10217 (2018). |
05.10.2018 | Ceca | Rougier, Nicolas P., et al. “Prefrontal cortex and flexible cognitive control: Rules without symbols.” Proceedings of the National Academy of Sciences 102.20 (2005): 7338-7343. |
21.09.2018 | Arjun | Palm, Rasmus Berg, Ulrich Paquet, and Ole Winther. “Recurrent Relational Networks.” arXiv preprint arXiv:1711.08028 (2018). |
10.08.2018 | Anand | Franceschi, L., Frasconi, P., Salzo, S., Grazzi, R., & Pontil, M. (2018). Bilevel Programming for Hyperparameter Optimization and Meta-Learning. ArXiv:1806.04910 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1806.04910 (ICML 2018) |
03.08.2018 | Darjan | Siwani, Samer, et al. “OLMα2 cells bidirectionally modulate learning.” Neuron (2018). |
20.07.2018 | Arjun | Henaff, Mikael, et al. “Tracking the world state with recurrent entity networks.” arXiv preprint arXiv:1612.03969 (2016). |
13.07.2018 | Franz | Sabour, Sara, Nicholas Frosst, and Geoffrey E. Hinton. “Dynamic routing between capsules.” Advances in Neural Information Processing Systems. 2017. |
23.04.2018 | Ceca | Glimcher, Paul W. “Understanding dopamine and reinforcement learning: the dopamine reward prediction error hypothesis.” Proceedings of the National Academy of Sciences 108.Supplement 3 (2011): 15647-15654. |
12.03.2018 | Franz | Houthooft, Rein, et al. “Vime: Variational information maximizing exploration.” Advances in Neural Information Processing Systems. 2016. |
| | | Blundell, Charles, et al. “Weight uncertainty in neural networks.” arXiv preprint arXiv:1505.05424 (2015). |
02.03.2018 | Anand | Finn, Chelsea, Pieter Abbeel, and Sergey Levine. “Model-agnostic meta-learning for fast adaptation of deep networks.” arXiv preprint arXiv:1703.03400 (2017). |
23.02.2018 | Michael | Mostafa, Hesham, Vishwajith Ramesh, and Gert Cauwenberghs. “Deep supervised learning using local errors.” arXiv preprint arXiv:1711.06756 (2017). |
09.02.2018 | Anand | Costa, R., Assael, Y., Shillingford, B., de Freitas, N. & Vogels, Ti. Cortical microcircuits as gated-recurrent neural networks. in Advances in Neural Information Processing Systems 30 (eds. Guyon, I. et al.) 271–282 (Curran Associates, Inc., 2017). |
02.02.2018 | Guillaume | Bahdanau, Dzmitry, et al. “End-to-end attention-based large vocabulary speech recognition.” Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016. |
| | | Graves, Alex, Abdel-rahman Mohamed, and Geoffrey Hinton. “Speech recognition with deep recurrent neural networks.” Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013. |
| | | Graves, Alex, et al. “Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks.” Proceedings of the 23rd International Conference on Machine Learning. ACM, 2006. |
| | | Amodei, Dario, et al. “Deep speech 2: End-to-end speech recognition in English and Mandarin.” International Conference on Machine Learning. 2016. |
| | | Hannun, Awni, et al. “Deep speech: Scaling up end-to-end speech recognition.” arXiv preprint arXiv:1412.5567 (2014). |
26.01.2018 | Darjan | Mishra, Nikhil, et al. “Meta-learning with temporal convolutions.” arXiv preprint arXiv:1707.03141 (2017). https://arxiv.org/abs/1707.03141 |
19.01.2018 | Arjun | Jaderberg, Max, et al. “Population Based Training of Neural Networks.” arXiv preprint arXiv:1711.09846 (2017). |
12.01.2018 | Thomas B. | Wang, Peng, et al. “Multi-attention network for one shot learning.” 2017 IEEE conference on computer vision and pattern recognition, CVPR. 2017. |
05.01.2018 | Thomas L. | Jaderberg, Max, et al. “Decoupled neural interfaces using synthetic gradients.” arXiv preprint arXiv:1608.05343 (2016). |
07.12.2017 | Franz | Graves. “Adaptive Computation Time for Recurrent Neural Networks.” arXiv:1603.08983 (2016). https://arxiv.org/abs/1603.08983 |
01.12.2017 | Guillaume | Sussillo, David, et al. “Making brain-machine interfaces robust to future neural variability.” Nature Communications 7 (2016). |
| | | Panzeri, Stefano, et al. “Cracking the Neural Code for Sensory Perception by Combining Statistics, Intervention and Behavior.” Neuron 93.3 (2017): 491-507. |
| | | Lee, Jun Haeng, Tobi Delbruck, and Michael Pfeiffer. “Training Deep Spiking Neural Networks Using Backpropagation.” Frontiers in Neuroscience 10 (2016). |
24.11.2017 | Anand | Xu, Yan, Xiaoqin Zeng, and Shuiming Zhong. “A new supervised learning algorithm for spiking neurons.” Neural computation 25.6 (2013): 1472-1511. |
| | | Ponulak, Filip, and Andrzej Kasiński. “Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting.” Neural Computation 22.2 (2010): 467-510. |
17.11.2017 | Michael | Hadji, Isma, and Richard P. Wildes. “A Spatiotemporal Oriented Energy Network for Dynamic Texture Recognition.” arXiv preprint arXiv:1708.06690 (2017). https://arxiv.org/abs/1708.06690 |
06.10.2017 | Guillaume | Iandola, Forrest N., et al. “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size.” arXiv preprint arXiv:1602.07360 (2016). |
| | | Han, Song, et al. “ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA.” arXiv preprint arXiv:1612.00694 (2016). |
| | | Collins, Maxwell D., and Pushmeet Kohli. “Memory Bounded Deep Convolutional Networks.” arXiv preprint arXiv:1412.1442 (2014). |
| | | Han, Song, et al. “Learning both weights and connections for efficient neural networks.” Advances in Neural Information Processing Systems. 2015. arXiv:1506.02626. |
29.10.2017 | Jian | Dvorkin R, Ziv NE (2016) Relative Contributions of Specific Activity Histories and Spontaneous Processes to Size Remodeling of Glutamatergic Synapses. PLoS Biol 14(10): e1002572. https://doi.org/10.1371/journal.pbio.1002572 |
| | | Rubinski A, Ziv NE (2015) Remodeling and Tenacity of Inhibitory Synapses: Relationships with Network Activity and Neighboring Excitatory Synapses. PLoS Comput Biol 11(11): e1004632. https://doi.org/10.1371/journal.pcbi.1004632 |
| | | Statman A, Kaufman M, Minerbi A, Ziv NE, Brenner N (2014) Synaptic Size Dynamics as an Effectively Stochastic Process. PLoS Comput Biol 10(10): e1003846. https://doi.org/10.1371/journal.pcbi.1003846 |
10.08.2017 | Franz | Zoph, Barret, and Quoc V. Le. “Neural architecture search with reinforcement learning.” arXiv preprint arXiv:1611.01578 (2016). |
02.08.2017 | David | Friston K. and Kiebel S. “Predictive coding under the free-energy principle.” Phil. Trans. R. Soc. B (2009) 364, 1211–1221. |
| | | Friston K. “Variational filtering.” NeuroImage (2008) 41, 747-766. |
26.07.2017 | Anand | Spratling, M. W. “A review of predictive coding algorithms.” Brain and cognition 112 (2017): 92-97. |
| | | Rao, Rajesh PN, and Dana H. Ballard. “Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects.” Nature neuroscience 2.1 (1999): 79-87. |
| | | PredNet: Lotter, William, Gabriel Kreiman, and David Cox. “Deep predictive coding networks for video prediction and unsupervised learning.” arXiv preprint arXiv:1605.08104 (2016). |
26.07.2017 | Michael M. | Dosovitskiy, Alexey, and Vladlen Koltun. “Learning to act by predicting the future.” arXiv preprint arXiv:1611.01779 (2016). |
13.06.2017 | Guillaume | Lillicrap, Timothy P., et al. “Random synaptic feedback weights support error backpropagation for deep learning.” Nature Communications 7 (2016). |
25.04.2017 | Guillaume | Salimans, Tim, et al. “Evolution Strategies as a Scalable Alternative to Reinforcement Learning.” arXiv preprint arXiv:1703.03864 (2017). |
25.04.2017 | Anand | Whittington, James CR, and Rafal Bogacz. “An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity.” Neural Computation (2017). |
25.04.2017 | Arjun | Orhan, A. Emin, and Wei Ji Ma. “Efficient Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback.” arXiv preprint arXiv:1601.03060 (2016). |
25.04.2017 | David | Schiess, Mathieu, Robert Urbanczik, and Walter Senn. “Somato-dendritic synaptic plasticity and error-backpropagation in active dendrites.” PLoS Comput Biol 12.2 (2016): e1004638. |
18.04.2017 | Anand | Goodfellow, Ian, et al. “Generative adversarial nets.” Advances in neural information processing systems. 2014. |
14.04.2017 | David | Variational Auto-encoders |
| | | Kingma, Diederik P., and Max Welling. “Auto-encoding variational bayes.” arXiv preprint arXiv:1312.6114 (2013). |
04.04.2017 | Guillaume and David | Variational Inference |
| | | Mnih, Andriy, and Karol Gregor. “Neural variational inference and learning in belief networks.” arXiv preprint arXiv:1402.0030 (2014). |
28.03.2017 | Arjun | Dirichlet Distributions |
| | | Blei, David M., and Michael I. Jordan. “Variational inference for Dirichlet process mixtures.” Bayesian analysis 1.1 (2006): 121-143. |
| | | Sethuraman, Jayaram. “A constructive definition of Dirichlet priors.” Statistica sinica (1994): 639-650. |
| | | Blackwell, David, and James B. MacQueen. “Ferguson distributions via Pólya urn schemes.” The annals of statistics (1973): 353-355. |
| | | Ferguson, Thomas S. “A Bayesian analysis of some nonparametric problems.” The annals of statistics (1973): 209-230. |
02.12.2016 | Arjun | Nessler, Bernhard, et al. “Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity.” PLoS Comput Biol 9.4 (2013): e1003037. |
18.11.2016 | Guillaume | Nithianantharajah, Jess, et al. “Synaptic scaffold evolution generated components of vertebrate cognitive complexity.” Nature neuroscience 16.1 (2013): 16-24. |
| | | Carlisle, Holly J., et al. “Opposing effects of PSD‐93 and PSD‐95 on long‐term potentiation and spike timing‐dependent plasticity.” The Journal of physiology 586.24 (2008): 5885-5900. |
11.11.2016 | Anand | Rigotti, Mattia, et al. “The importance of mixed selectivity in complex cognitive tasks.” Nature 497.7451 (2013): 585-590. |
09.09.2016 | Ke Bai | Eliasmith, Chris, et al. “A large-scale model of the functioning brain.” Science 338.6111 (2012): 1202-1205. |
02.09.2016 | Ke Bai | Bobier, Bruce, Terrence C. Stewart, and Chris Eliasmith. “A unifying mechanistic model of selective attention in spiking neurons.” PLoS Comput Biol 10.6 (2014): e1003577. |
16.08.2016 | David | Zenke, Friedemann, Everton J. Agnes, and Wulfram Gerstner. “Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks.” Nature communications 6 (2015). |
05.08.2016 | Zhaofei | Raju, Rajkumar Vasudeva, and Xaq Pitkow. “Inference by Reparameterization in Neural Population Codes.” Advances in Neural Information Processing Systems. 2016. |
28.07.2016 | Anna | Buzsáki, György. “Neural syntax: cell assemblies, synapsembles, and readers.” Neuron 68.3 (2010): 362-385. |
21.07.2016 | Guillaume | Chung, Junyoung, et al. “Empirical evaluation of gated recurrent neural networks on sequence modeling.” arXiv preprint arXiv:1412.3555 (2014). |
| | | Sussillo, David, and L. F. Abbott. “Random walk initialization for training very deep feedforward networks.” arXiv preprint arXiv:1412.6558 (2014). |
27.05.2016 | Guillaume | Williams, Ronald J. “Simple statistical gradient-following algorithms for connectionist reinforcement learning.” Machine learning 8.3-4 (1992): 229-256. |
24.03.2016 | Anand | Denève, Sophie, and Christian K. Machens. “Efficient codes and balanced networks.” Nature neuroscience 19.3 (2016): 375-382. |
17.03.2016 | Anand | Abbott, L. F., Brian DePasquale, and Raoul-Martin Memmesheimer. “Building functional networks of spiking model neurons.” Nature neuroscience 19.3 (2016): 350-355. |
10.03.2016 | David | Hochreiter, Sepp, and Jürgen Schmidhuber. “Long short-term memory.” Neural computation 9.8 (1997): 1735-1780. |
| | | Graves, Alex, and Jürgen Schmidhuber. “Offline handwriting recognition with multidimensional recurrent neural networks.” Advances in neural information processing systems. 2009. |
| | | Graves, Alex. “Generating sequences with recurrent neural networks.” arXiv preprint arXiv:1308.0850 (2013). |
| | | Graves, Alex, Greg Wayne, and Ivo Danihelka. “Neural turing machines.” arXiv preprint arXiv:1410.5401 (2014). |
26.02.2016 | Guillaume | Gardner, Brian, Ioana Sporea, and André Grüning. “Learning spatiotemporally encoded pattern transformations in structured spiking neural networks.” Neural computation (2015). |
15.12.2015 | Guillaume | Hennequin, Guillaume, Tim P. Vogels, and Wulfram Gerstner. “Optimal control of transient dynamics in balanced networks supports generation of complex movements.” Neuron 82.6 (2014): 1394-1406. |
11.12.2015 | Gernot | Avermann, Michael, et al. “Microcircuits of excitatory and inhibitory neurons in layer 2/3 of mouse barrel cortex.” Journal of neurophysiology 107.11 (2012): 3116-3134. |
30.11.2015 | Christoph | Mante, Valerio, et al. “Context-dependent computation by recurrent dynamics in prefrontal cortex.” Nature 503.7474 (2013): 78-84. |
17.11.2015 | David | Pfister, Jean-Pascal, Peter Dayan, and Máté Lengyel. “Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials.” Nature neuroscience 13.10 (2010): 1271-1275. |
27.10.2015 | Zhaofei | Habenschuss, Stefan, Helmut Puhr, and Wolfgang Maass. “Emergence of optimal decoding of population codes through STDP.” Neural computation 25.6 (2013): 1371-1407. |
20.10.2015 | Anand | Maass, Wolfgang, Thomas Natschläger, and Henry Markram. “Real-time computing without stable states: A new framework for neural computation based on perturbations.” Neural computation 14.11 (2002): 2531-2560. |
13.10.2015 | Guillaume | Brunel, Nicolas. “Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons.” Journal of computational neuroscience 8.3 (2000): 183-208. |