IGI Reading Group
This project is maintained by IGITUGraz
The IGI Reading Group is the informal Journal Club of IGI where we meet about once a week and discuss papers that are related to our research.
In SS 2022, the journal club will take place every Tuesday at 10:00 online (or, if possible, in the IGI seminar room).
If you’re from outside IGI and would like to attend a particular session, contact ceca [at] igi.tugraz.at.
Please present papers that have a clear relation to our work: e.g. machine learning papers describing relevant developments in the field like new methods or architectures, or experimental papers discussing new data for our models or analysis methods for our experiments. Presented works should enhance our knowledge and provide inspiration for our research.
Start your presentation by giving a 5-minute overview before going into the details.
Presentations should convey the relevant findings of your selected paper with a focus on our group’s interests, i.e., also prepare the background information and important concepts from the cited literature that are required to understand the main findings. You don’t need to prepare slides.
Date | Moderator | Paper |
---|---|---|
Fall 2022 | tba | tba |
Feel free to add papers of interest.
Date | Moderator | Paper |
---|---|---|
14.06.2022 | Romain | Reed, Scott, et al. “A generalist agent.” arXiv preprint arXiv:2205.06175 (2022). |
20.05.2022 | Thomas | Flesch, Timo, et al. “Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals.” arXiv preprint arXiv:2203.11560 (2022). |
10.05.2022 | Roland | Baevski, Alexei, et al. “Data2vec: A general framework for self-supervised learning in speech, vision and language.” arXiv preprint arXiv:2202.03555 (2022). |
04.05.2022 | Max | Ito, Takuya, et al. “Constructing neural network models from brain data reveals representational transformations linked to adaptive behavior.” Nature communications 13.1 (2022): 1-16. |
27.04.2022 | Ozan | Jain, Saachi, et al. “Missingness Bias in Model Debugging.” arXiv preprint arXiv:2204.08945 (2022). |
05.04.2022 | Horst | Banino, Andrea, Jan Balaguer, and Charles Blundell. “Pondernet: Learning to ponder.” arXiv preprint arXiv:2107.05407 (2021). |
29.03.2022 | Guozhang | Iyer, Abhiram, et al. “Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments.” arXiv preprint arXiv:2201.00042 (2021). |
22.03.2022 | Florian | Riihimäki, Henri. “Simplicial $ q $-connectivity of directed graphs with applications to network analysis.” arXiv preprint arXiv:2202.07307 (2022). |
15.03.2022 | Eben | Krioukov, Dmitri, et al. “Hyperbolic geometry of complex networks.” Physical Review E 82.3 (2010): 036106. |
08.03.2022 | Christoph | Kleyko, Denis, et al. “Vector symbolic architectures as a computing framework for nanoscale hardware.” arXiv preprint arXiv:2106.05268 (2021). |
01.03.2022 | Ceca | Kutter, Esther F., et al. “Neuronal codes for arithmetic rule processing in the human brain.” Current Biology (2022). |
09.02.2022 | Florian | About Markov chain Monte Carlo. Fosdick, Bailey K., et al. “Configuring random graph models with fixed degree sequences.” SIAM Review 60.2 (2018): 315-355. |
| | | Young, Jean-Gabriel, et al. “Construction of and efficient sampling from the simplicial configuration model.” Physical Review E 96.3 (2017): 032312. |
| | | Artzy-Randrup, Yael, and Lewi Stone. “Generating uniformly distributed random networks.” Physical Review E 72.5 (2005): 056708. |
01.02.2022 | Thomas | Beniaguev, D., Shapira, S., Segev, I., and London, M. “Multiple Synaptic Contacts combined with Dendritic Filtering enhance Spatio-Temporal Pattern Recognition capabilities of Single Neurons.” bioRxiv (2022). https://doi.org/10.1101/2022.01.28.478132 |
28.01.2022 | Romain | About astrocytes. Bazargani, Narges, and David Attwell. “Astrocyte calcium signaling: the third wave.” Nature neuroscience 19.2 (2016): 182-189. |
| | | Semyanov, Alexey, Christian Henneberger, and Amit Agarwal. “Making sense of astrocytic calcium signals—from acquisition to interpretation.” Nature Reviews Neuroscience 21.10 (2020): 551-564. |
| | | Santello, Mirko, Nicolas Toni, and Andrea Volterra. “Astrocyte function from information processing to cognition and cognitive impairment.” Nature neuroscience 22.2 (2019): 154-166. |
12.01.2022 | Roland | Lopes, Vasco, et al. “Guided Evolution for Neural Architecture Search.” arXiv preprint arXiv:2110.15232 (2021). |
14.12.2021 | Ozan | Schott, Lukas, et al. “Visual representation learning does not generalize strongly within the same domain.” arXiv preprint arXiv:2107.08221 (2021). |
01.12.2021 | Max | Beniaguev, David, Idan Segev, and Michael London. “Single cortical neurons as deep artificial neural networks.” Neuron 109.17 (2021): 2727-2739. |
23.11.2021 | Isabel | Koay, Sue Ann, et al. “Sequential and efficient neural-population coding of complex task information.” bioRxiv (2021): 801654. |
16.11.2021 | Horst | Kim, Timothy D., et al. “Inferring latent dynamics underlying neural population activity via neural differential equations.” International Conference on Machine Learning. PMLR, 2021. |
09.11.2021 | Guozhang | Fischler-Ruiz, Walter, et al. “Olfactory landmarks and path integration converge to form a cognitive spatial map.” Neuron (2021). |
02.11.2021 | Special session | |
| | Florian | Raussen, Martin. “Connectivity of spaces of directed paths in geometric models for concurrent computation.” arXiv preprint arXiv:2106.11703 (2021). |
| | | Bronstein, Michael M., et al. “Geometric deep learning: Grids, groups, graphs, geodesics, and gauges.” arXiv preprint arXiv:2104.13478 (2021). |
| | Romain | Losonczy, Attila, Judit K. Makara, and Jeffrey C. Magee. “Compartmentalized dendritic plasticity and input feature storage in neurons.” Nature 452.7186 (2008): 436-441. |
| | Ceca | Flesch, Timo, et al. “Rich and lazy learning of task representations in brains and neural networks.” bioRxiv (2021). |
| | Thomas | Lin, Stephanie, Jacob Hilton, and Owain Evans. “TruthfulQA: Measuring How Models Mimic Human Falsehoods.” arXiv preprint arXiv:2109.07958 (2021). |
| | | Megatron-Turing NLG |
| | | Power, Alethea, et al. “Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets.” ICLR MATH-AI Workshop. 2021. |
| | | Grewal, K., et al. “Going Beyond the Point Neuron: Active Dendrites and Sparse Representations for Continual Learning.” (2021). |
| | | Pinitas, Kosmas, Spyridon Chavlis, and Panayiota Poirazi. “Dendritic Self-Organizing Maps for Continual Learning.” arXiv preprint arXiv:2110.13611 (2021). |
| | Max | Levi, Hila, and Shimon Ullman. “Multi-task learning by a top-down control network.” 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021. |
| | Isabel | Yap, Ee-Lynn, et al. “Bidirectional perisomatic inhibitory plasticity of a Fos neuronal network.” Nature 590.7844 (2021): 115-121. |
| | Titouan | Engelhard, Ben, et al. “Specialized coding of sensory, motor and cognitive variables in VTA dopamine neurons.” Nature 570.7762 (2019): 509-513. |
| | Christoph | Wightman, Ross, Hugo Touvron, and Hervé Jégou. “ResNet strikes back: An improved training procedure in timm.” arXiv preprint arXiv:2110.00476 (2021). |
| | Yujie | Miconi, Thomas, et al. “Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity.” arXiv preprint arXiv:2002.10585 (2020). |
| | Guozhang | Rumyantsev, Oleg I., et al. “Fundamental bounds on the fidelity of sensory cortical coding.” Nature 580.7801 (2020): 100-105. |
19.10.2021 | Eben | Bordelon, Blake, and Cengiz Pehlevan. “Population Codes Enable Learning from Few Examples By Shaping Inductive Bias.” bioRxiv (2021). |
12.10.2021 | Christoph | Pogodin, Roman, et al. “Towards Biologically Plausible Convolutional Networks.” arXiv preprint arXiv:2106.13031 (2021). |
05.10.2021 | Ceca | Schuman, Catherine D., et al. “Non-traditional input encoding schemes for spiking neuromorphic systems.” 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. |
30.06.2021 | Max | Payeur, Alexandre, et al. “Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits.” Nature neuroscience (2021): 1-10. |
23.06.2021 | Isabel | Bos, Hannah, Anne-Marie Oswald, and Brent Doiron. “Untangling stability and gain modulation in cortical circuits with multiple interneuron classes.” bioRxiv (2020). |
16.06.2021 | Eben | Sezener, Eren, et al. “A rapid and efficient learning rule for biological neural circuits.” bioRxiv (2021). |
09.06.2021 | Titouan | Rubin, Jonathan E., et al. “The credit assignment problem in cortico‐basal ganglia‐thalamic networks: A review, a problem and a possible solution.” European Journal of Neuroscience 53.7 (2021): 2234-2253. (pdf) |
02.06.2021 | Thomas | Tyulmankov, Danil, Guangyu Robert Yang, and L. F. Abbott. “Meta-learning local synaptic plasticity for continual familiarity detection.” bioRxiv (2021). |
26.05.2021 | Špela | Krasheninnikova, Elena, et al. “Reinforcement learning for pricing strategy optimization in the insurance industry.” Engineering applications of artificial intelligence 80 (2019): 8-19. |
19.05.2021 | Romain | Rusch, T. Konstantin, and Siddhartha Mishra. “Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies.” arXiv preprint arXiv:2010.00951 (2020). |
12.05.2021 | Roland | Mangla, Puneet, et al. “Charting the right manifold: Manifold mixup for few-shot learning.” Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2020. |
05.05.2021 | Ozan | Xiao, Kai, et al. “Noise or signal: The role of image backgrounds in object recognition.” arXiv preprint arXiv:2006.09994 (2020). |
28.04.2021 | Michael | Gidon, Albert, et al. “Dendritic action potentials and computation in human layer 2/3 cortical neurons.” Science 367.6473 (2020): 83-87. |
21.04.2021 | Horst | Cross, Logan, et al. “Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments.” Neuron 109.4 (2021): 724-738. |
14.04.2021 | Guozhang | van de Ven, Gido M., Hava T. Siegelmann, and Andreas S. Tolias. “Brain-inspired replay for continual learning with artificial neural networks.” Nature communications 11.1 (2020): 1-14. |
31.03.2021 | Franz | Hyvärinen, Aapo, and Peter Dayan. “Estimation of non-normalized statistical models by score matching.” Journal of Machine Learning Research 6.4 (2005). |
| | | Song, Yang, and Stefano Ermon. “Generative modeling by estimating gradients of the data distribution.” arXiv preprint arXiv:1907.05600 (2019). |
| | | Ho, Jonathan, Ajay Jain, and Pieter Abbeel. “Denoising diffusion probabilistic models.” arXiv preprint arXiv:2006.11239 (2020). |
| | | Song, Yang, et al. “Score-Based Generative Modeling through Stochastic Differential Equations.” arXiv preprint arXiv:2011.13456 (2020). |
24.03.2021 | Florian | Motta, Alessandro, et al. “Dense connectomic reconstruction in layer 4 of the somatosensory cortex.” Science 366.6469 (2019). |
| | | Billeh, Yazan N., et al. “Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex.” Neuron 106.3 (2020): 388-403. |
| | | Rees, Christopher L., Keivan Moradi, and Giorgio A. Ascoli. “Weighing the evidence in Peters’ rule: does neuronal morphology predict connectivity?” Trends in neurosciences 40.2 (2017): 63-71. (pdf) |
17.03.2021 | Dominik | Kato, Saul, et al. “Global brain dynamics embed the motor command sequence of Caenorhabditis elegans.” Cell 163.3 (2015): 656-669. |
10.03.2021 | Christoph | Menick, Jacob, et al. “Practical Real Time Recurrent Learning with a Sparse Approximation to the Jacobian.” ICLR 2021 (2021). |
03.03.2021 | Ceca | Kendall, Jack, et al. “Training End-to-End Analog Neural Networks with Equilibrium Propagation.” arXiv preprint arXiv:2006.01981 (2020). |
26.01.2021 | Thomas L. | Radford, Alec, et al. “Learning Transferable Visual Models From Natural Language Supervision.” arXiv preprint arXiv:2103.00020 (2021). |
19.01.2021 | Florian | Levina, Anna, J. Michael Herrmann, and Manfred Denker. “Critical branching processes in neural networks.” PAMM: Proceedings in Applied Mathematics and Mechanics. Vol. 7. No. 1. Berlin: WILEY‐VCH Verlag, 2007. |
12.01.2021 | Špela | Young, Benjamin D., James A. Escalon, and Dennis Mathew. “Odors: from chemical structures to gaseous plumes.” Neuroscience & Biobehavioral Reviews 111 (2020): 19-29. |
01.12.2020 | Samuel | Ly, Calvin, et al. “Psychedelics promote structural and functional neural plasticity.” Cell reports 23.11 (2018): 3170-3182. |
24.11.2020 | Roland | Rajeswaran, Aravind, et al. “Meta-learning with implicit gradients.” Advances in Neural Information Processing Systems. 2019. |
17.11.2020 | Ozan | Dapello, Joel, et al. “Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations.” Advances in Neural Information Processing Systems 33 (2020). |
10.11.2020 | Horst | Sharma, Archit, et al. “Dynamics-aware unsupervised discovery of skills.” arXiv preprint arXiv:1907.01657 (2019). |
03.11.2020 | Franz | Ramsauer, Hubert, et al. “Hopfield networks is all you need.” arXiv preprint arXiv:2008.02217 (2020). |
27.10.2020 | Florian | Reimann, Michael W., et al. “Cliques of neurons bound into cavities provide a missing link between structure and function.” Frontiers in computational neuroscience 11 (2017): 48. |
20.10.2020 | Christoph | Nieder, Andreas. “Neural constraints on human number concepts.” Current Opinion in Neurobiology 60 (2020): 28-36. |
13.10.2020 | Ceca | Fitz, Hartmut, et al. “Neuronal spike-rate adaptation supports working memory in language processing.” Proceedings of the National Academy of Sciences 117.34 (2020): 20881-20889. |
30.09.2020 | Michael | Frankland, Steven M., and Joshua D. Greene. “Concepts and compositionality: in search of the brain’s language of thought.” Annual review of psychology 71 (2020): 273-303. (pdf) |
30.07.2020 | Arjun | Mittal, Sarthak, et al. “Learning to combine top-down and bottom-up signals in recurrent neural networks with attention over modules.” arXiv preprint arXiv:2006.16981 (2020). |
| | | Goyal, Anirudh, et al. “Recurrent independent mechanisms.” arXiv preprint arXiv:1909.10893 (2019). |
02.03.2020 | Luca | Hudson, Drew A., and Christopher D. Manning. “Compositional attention networks for machine reasoning.” arXiv preprint arXiv:1803.03067 (2018). |
24.02.2020 | Arjun | Schrittwieser, Julian, et al. “Mastering atari, go, chess and shogi by planning with a learned model.” arXiv preprint arXiv:1911.08265 (2019). |
03.02.2020 | Ceca | Introduction to ANOVA analysis (in Kass, Robert E., Uri T. Eden, and Emery N. Brown. Analysis of Neural Data. Vol. 491. New York: Springer, 2014.) |
| | | Lindsay, Grace W., et al. “Hebbian learning in a random network captures selectivity properties of the prefrontal cortex.” Journal of Neuroscience 37.45 (2017): 11021-11036. |
| | | Rigotti, Mattia, et al. “The importance of mixed selectivity in complex cognitive tasks.” Nature 497.7451 (2013): 585-590. |
06.12.2019 | Thomas | Henaff, Mikael, et al. “Tracking the world state with recurrent entity networks.” arXiv preprint arXiv:1612.03969 (2016). |
02.12.2019 | Philipp | Patel, Devdhar, et al. “Improved robustness of reinforcement learning policies upon conversion to spiking neuronal network platforms applied to ATARI games.” arXiv preprint arXiv:1903.11012 (2019). |
25.11.2019 | Special session | |
| | Florian | Barrett, David GT, Sophie Deneve, and Christian K. Machens. “Optimal compensation for neuron loss.” Elife 5 (2016): e12454. |
| | Franz | Voelker, Aaron R., and Chris Eliasmith. “Improving spiking dynamical networks: Accurate delays, higher-order synapses, and time cells.” Neural computation 30.3 (2018): 569-609. |
| | | Voelker, Aaron, Ivana Kajić, and Chris Eliasmith. “Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks.” Advances in Neural Information Processing Systems. 2019. |
| | Arjun | Frady, E. Paxon, and Friedrich T. Sommer. “Robust computation with rhythmic spike patterns.” Proceedings of the National Academy of Sciences 116.36 (2019): 18050-18059. |
11.11.2019 | Michael | Habenschuss, Stefan, Zeno Jonke, and Wolfgang Maass. “Stochastic computations in cortical microcircuit models.” PLoS computational biology 9.11 (2013): e1003311. |
| | | Berkes, Pietro, et al. “Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment.” Science 331.6013 (2011): 83-87. |
29.10.2019 | Darjan | Nayebi, Aran, et al. “Task-Driven convolutional recurrent models of the visual system.” Advances in Neural Information Processing Systems. 2018. |
27.09.2019 | Horst | Marblestone, Adam H., Greg Wayne, and Konrad P. Kording. “Toward an integration of deep learning and neuroscience.” Frontiers in computational neuroscience 10 (2016): 94. |
07.06.2019 | Franz | Hung, Chia-Chun, et al. “Optimizing agent behavior over long time scales by transporting value.” arXiv preprint arXiv:1810.06721 (2018) |
16.05.2019 | Elias | Frankle, Jonathan, and Michael Carbin. “The lottery ticket hypothesis: Finding sparse, trainable neural networks.” arXiv preprint arXiv:1803.03635 (2018). |
09.05.2019 | Rapid fire session | |
| | Anand | Karnani, Mahesh M., et al. “A Blanket of Inhibition: Functional Inferences from Dense Inhibitory Connectivity.” Current Opinion in Neurobiology, 2014. |
| | | Okun, Michael, et al. “Diverse Coupling of Neurons to Populations in Sensory Cortex.” Nature, 2015. |
| | Darjan | Bönstrup, Marlene, et al. “A Rapid Form of Offline Consolidation in Skill Learning.” Current Biology (2019). |
| | | Triefenbach, Fabian, et al. “Phoneme recognition with large hierarchical reservoirs.” Advances in neural information processing systems. 2010. |
| | Michael | Saxe, Andrew M., et al. “On Random Weights and Unsupervised Feature Learning.” ICML. Vol. 2. No. 3. 2011. |
| | Philipp | Frady, Edward, and Friedrich Sommer. “Robust computation with rhythmic spike patterns.” arXiv preprint arXiv:1901.07718 (2019). |
| | Arjun | Akrout, M., Wilson, C., Humphreys, P. C., Lillicrap, T., and Tweed, D. (2019). “Using Weight Mirrors to Improve Feedback Alignment.” arXiv preprint arXiv:1904.05391. |
| | Ceca | Behrens, Timothy E. J., et al. “What Is a Cognitive Map? Organising Knowledge for Flexible Behaviour.” 2018, doi:10.1101/365593. LINK: https://www.cell.com/neuron/pdf/S0896-6273(18)30856-0.pdf |
18.04.2019 | Ceca | O’Reilly, Randall C., Thomas E. Hazy, and Seth A. Herd. “The Leabra Cognitive Architecture: How to Play 20 Principles with Nature.” The Oxford handbook of cognitive science 91 (2016): 91-116. |
11.04.2019 | Darjan | Krotov, Dmitry, and John J. Hopfield. “Unsupervised learning by competing hidden units.” Proceedings of the National Academy of Sciences (2019): 201820458. |
20.03.2019 | Arjun | Koutnik, Jan, et al. “A Clockwork RNN.” arXiv preprint arXiv:1402.3511 (2014). |
| | | Chung, Junyoung, Sungjin Ahn, and Yoshua Bengio. “Hierarchical multiscale recurrent neural networks.” arXiv preprint arXiv:1609.01704 (2016). |
14.03.2019 | Anand | Jaderberg, Max, et al. “Human-level performance in first-person multiplayer games with population-based deep reinforcement learning.” arXiv preprint arXiv:1807.01281 (2018). |
17.12.2018 | Thomas L. | Beaulieu-Laroche, Lou, et al. “Enhanced Dendritic Compartmentalization in Human Cortical Neurons.” Cell 175.3 (2018): 643-651. |
20.11.2018 | Michael | Kutter, Esther F., et al. “Single Neurons in the Human Brain Encode Numbers.” Neuron (2018). |
| | | Quiroga, R. Quian, et al. “Invariant visual representation by single neurons in the human brain.” Nature 435.7045 (2005): 1102. |
09.11.2018 | Guillaume | Wasmuht, Dante Francisco, et al. “Intrinsic neuronal dynamics predict distinct functional roles during working memory.” Nature communications 9.1 (2018): 3499. |
19.10.2018 | Franz | Perich, Matthew G., Juan A. Gallego, and Lee E. Miller. “A neural population mechanism for rapid learning.” Neuron (2018). |
12.10.2018 | Darjan | Zeng, Andy, et al. “Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning.” arXiv preprint arXiv:1803.09956 (2018). |
| | | Dubey, Rachit, et al. “Investigating Human Priors for Playing Video Games.” arXiv preprint arXiv:1802.10217 (2018). |
05.10.2018 | Ceca | Rougier, Nicolas P., et al. “Prefrontal cortex and flexible cognitive control: Rules without symbols.” Proceedings of the National Academy of Sciences 102.20 (2005): 7338-7343. |
21.09.2018 | Arjun | Palm, Rasmus Berg, Ulrich Paquet, and Ole Winther. “Recurrent Relational Networks.” arXiv preprint arXiv:1711.08028 (2018). |
10.08.2018 | Anand | Franceschi, L., Frasconi, P., Salzo, S., Grazzi, R., and Pontil, M. (2018). “Bilevel Programming for Hyperparameter Optimization and Meta-Learning.” arXiv preprint arXiv:1806.04910. (ICML 2018) |
03.08.2018 | Darjan | Siwani, Samer, et al. “OLMα2 cells bidirectionally modulate learning.” Neuron (2018). |
20.07.2018 | Arjun | Henaff, Mikael, et al. “Tracking the world state with recurrent entity networks.” arXiv preprint arXiv:1612.03969 (2016). |
13.07.2018 | Franz | Sabour, Sara, Nicholas Frosst, and Geoffrey E. Hinton. “Dynamic routing between capsules.” Advances in Neural Information Processing Systems. 2017. |
23.04.2018 | Ceca | Glimcher, Paul W. “Understanding dopamine and reinforcement learning: the dopamine reward prediction error hypothesis.” Proceedings of the National Academy of Sciences 108.Supplement 3 (2011): 15647-15654. |
12.03.2018 | Franz | Houthooft, Rein, et al. “Vime: Variational information maximizing exploration.” Advances in Neural Information Processing Systems. 2016. |
| | | Blundell, Charles, et al. “Weight uncertainty in neural networks.” arXiv preprint arXiv:1505.05424 (2015). |
02.03.2018 | Anand | Finn, Chelsea, Pieter Abbeel, and Sergey Levine. “Model-agnostic meta-learning for fast adaptation of deep networks.” arXiv preprint arXiv:1703.03400 (2017). |
23.02.2018 | Michael | Mostafa, Hesham, Vishwajith Ramesh, and Gert Cauwenberghs. “Deep supervised learning using local errors.” arXiv preprint arXiv:1711.06756 (2017). |
09.02.2018 | Anand | Costa, R., Assael, Y., Shillingford, B., de Freitas, N. & Vogels, Ti. Cortical microcircuits as gated-recurrent neural networks. in Advances in Neural Information Processing Systems 30 (eds. Guyon, I. et al.) 271–282 (Curran Associates, Inc., 2017). |
02.02.2018 | Guillaume | Bahdanau, Dzmitry, et al. “End-to-end attention-based large vocabulary speech recognition.” Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016. |
| | | Graves, Alex, Abdel-rahman Mohamed, and Geoffrey Hinton. “Speech recognition with deep recurrent neural networks.” Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013. |
| | | Graves, Alex, et al. “Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks.” Proceedings of the 23rd International Conference on Machine Learning. ACM, 2006. |
| | | Amodei, Dario, et al. “Deep speech 2: End-to-end speech recognition in English and Mandarin.” International Conference on Machine Learning. 2016. |
| | | Hannun, Awni, et al. “Deep speech: Scaling up end-to-end speech recognition.” arXiv preprint arXiv:1412.5567 (2014). |
26.01.2018 | Darjan | Mishra, Nikhil, et al. “Meta-learning with temporal convolutions.” arXiv preprint arXiv:1707.03141 (2017). https://arxiv.org/abs/1707.03141 |
19.01.2018 | Arjun | Jaderberg, Max, et al. “Population Based Training of Neural Networks.” arXiv preprint arXiv:1711.09846 (2017). |
12.01.2018 | Thomas B. | Wang, Peng, et al. “Multi-attention network for one shot learning.” 2017 IEEE conference on computer vision and pattern recognition, CVPR. 2017. |
05.01.2018 | Thomas L. | Jaderberg, Max, et al. “Decoupled neural interfaces using synthetic gradients.” arXiv preprint arXiv:1608.05343 (2016). |
07.12.2017 | Franz | Graves. “Adaptive Computation Time for Recurrent Neural Networks.” arXiv:1603.08983 (2016). https://arxiv.org/abs/1603.08983 |
01.12.2017 | Guillaume | Sussillo, Stavisky, Kao, Ryu, and Shenoy. “Making brain-machine interfaces robust to future neural variability.” Nature Communications 7 (2016). |
| | | Panzeri, Harvey, Piasini, Latham, and Fellin. “Cracking the Neural Code for Sensory Perception by Combining Statistics, Intervention and Behavior.” Neuron 93.3 (2017). |
| | | Lee, Delbruck, and Pfeiffer. “Training Deep Spiking Neural Networks Using Backpropagation.” Frontiers in Neuroscience 10 (2016). |
24.11.2017 | Anand | Xu, Yan, Xiaoqin Zeng, and Shuiming Zhong. “A new supervised learning algorithm for spiking neurons.” Neural computation 25.6 (2013): 1472-1511. |
| | | Ponulak, Filip, and Andrzej Kasiński. “Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting.” Neural Computation 22.2 (2010): 467-510. |
17.11.2017 | Michael | Hadji, Isma, and Richard P. Wildes. “A Spatiotemporal Oriented Energy Network for Dynamic Texture Recognition.” arXiv preprint arXiv:1708.06690 (2017). https://arxiv.org/abs/1708.06690 |
06.10.2017 | Guillaume | Iandola, Forrest N., et al. “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size.” arXiv preprint arXiv:1602.07360 (2016). (submitted to ICLR 2017) |
| | | Han, Song, et al. “ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA.” arXiv preprint arXiv:1612.00694 (2017). |
| | | Collins, Maxwell D., and Pushmeet Kohli. “Memory bounded deep convolutional networks.” arXiv preprint arXiv:1412.1442 (2014). |
| | | Han, Song, et al. “Learning both weights and connections for efficient neural networks.” arXiv preprint arXiv:1506.02626 (2015). (NIPS 2015) |
29.10.2017 | Jian | Dvorkin R, Ziv NE (2016). “Relative Contributions of Specific Activity Histories and Spontaneous Processes to Size Remodeling of Glutamatergic Synapses.” PLoS Biol 14(10): e1002572. https://doi.org/10.1371/journal.pbio.1002572 |
| | | Rubinski A, Ziv NE (2015). “Remodeling and Tenacity of Inhibitory Synapses: Relationships with Network Activity and Neighboring Excitatory Synapses.” PLoS Comput Biol 11(11): e1004632. https://doi.org/10.1371/journal.pcbi.1004632 |
| | | Statman A, Kaufman M, Minerbi A, Ziv NE, Brenner N (2014). “Synaptic Size Dynamics as an Effectively Stochastic Process.” PLoS Comput Biol 10(10): e1003846. https://doi.org/10.1371/journal.pcbi.1003846 |
10.08.2017 | Franz | Zoph, Barret, and Quoc V. Le. “Neural architecture search with reinforcement learning.” arXiv preprint arXiv:1611.01578 (2016). |
02.08.2017 | David | Friston, K., and Kiebel, S. “Predictive coding under the free-energy principle.” Phil. Trans. R. Soc. B (2009) 364, 1211–1221. |
| | | Friston, K. “Variational filtering.” NeuroImage (2008) 41, 747-766. |
26.07.2017 | Anand | Spratling, M. W. “A review of predictive coding algorithms.” Brain and cognition 112 (2017): 92-97. |
| | | Rao, Rajesh PN, and Dana H. Ballard. “Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects.” Nature neuroscience 2.1 (1999): 79-87. |
| | | PredNet: Lotter, William, Gabriel Kreiman, and David Cox. “Deep predictive coding networks for video prediction and unsupervised learning.” arXiv preprint arXiv:1605.08104 (2016). |
26.07.2017 | Michael M. | Dosovitskiy, Alexey, and Vladlen Koltun. “Learning to act by predicting the future.” arXiv preprint arXiv:1611.01779 (2016). |
13.06.2017 | Guillaume | Lillicrap, Timothy P., et al. “Random synaptic feedback weights support error backpropagation for deep learning.” Nature Communications 7 (2016). |
25.04.2017 | Guillaume | Salimans, Tim, et al. “Evolution Strategies as a Scalable Alternative to Reinforcement Learning.” arXiv preprint arXiv:1703.03864 (2017). |
25.04.2017 | Anand | Whittington, James CR, and Rafal Bogacz. “An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity.” Neural Computation (2017). |
25.04.2017 | Arjun | Orhan, A. Emin, and Wei Ji Ma. “Efficient Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback.” arXiv preprint arXiv:1601.03060 (2016). |
25.04.2017 | David | Schiess, Mathieu, Robert Urbanczik, and Walter Senn. “Somato-dendritic synaptic plasticity and error-backpropagation in active dendrites.” PLoS Comput Biol 12.2 (2016): e1004638. |
18.04.2017 | Anand | Goodfellow, Ian, et al. “Generative adversarial nets.” Advances in neural information processing systems. 2014. |
14.04.2017 | David | Variational Auto-encoders |
| | | Kingma, Diederik P., and Max Welling. “Auto-encoding variational bayes.” arXiv preprint arXiv:1312.6114 (2013). |
04.04.2017 | Guillaume and David | Variational Inference |
| | | Mnih, Andriy, and Karol Gregor. “Neural variational inference and learning in belief networks.” arXiv preprint arXiv:1402.0030 (2014). |
28.03.2017 | Arjun | Dirichlet Distributions |
| | | Blei, David M., and Michael I. Jordan. “Variational inference for Dirichlet process mixtures.” Bayesian analysis 1.1 (2006): 121-143. |
| | | Sethuraman, Jayaram. “A constructive definition of Dirichlet priors.” Statistica sinica (1994): 639-650. |
| | | Blackwell, David, and James B. MacQueen. “Ferguson distributions via Pólya urn schemes.” The annals of statistics (1973): 353-355. |
| | | Ferguson, Thomas S. “A Bayesian analysis of some nonparametric problems.” The annals of statistics (1973): 209-230. |
02.12.2016 | Arjun | Nessler, Bernhard, et al. “Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity.” PLoS Comput Biol 9.4 (2013): e1003037. |
18.11.2016 | Guillaume | Nithianantharajah, Jess, et al. “Synaptic scaffold evolution generated components of vertebrate cognitive complexity.” Nature neuroscience 16.1 (2013): 16-24. |
| | | Carlisle, Holly J., et al. “Opposing effects of PSD‐93 and PSD‐95 on long‐term potentiation and spike timing‐dependent plasticity.” The Journal of physiology 586.24 (2008): 5885-5900. |
11.11.2016 | Anand | Rigotti, Mattia, et al. “The importance of mixed selectivity in complex cognitive tasks.” Nature 497.7451 (2013): 585-590. |
09.09.2016 | Ke Bai | Eliasmith, Chris, et al. “A large-scale model of the functioning brain.” science 338.6111 (2012): 1202-1205. |
02.09.2016 | Ke Bai | Bobier, Bruce, Terrence C. Stewart, and Chris Eliasmith. “A unifying mechanistic model of selective attention in spiking neurons.” PLoS Comput Biol 10.6 (2014): e1003577. |
16.08.2016 | David | Zenke, Friedemann, Everton J. Agnes, and Wulfram Gerstner. “Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks.” Nature communications 6 (2015). |
05.08.2016 | Zhaofei | Raju, Rajkumar Vasudeva, and Xaq Pitkow. “Inference by Reparameterization in Neural Population Codes.” Advances in Neural Information Processing Systems. 2016. |
28.07.2016 | Anna | Buzsáki, György. “Neural syntax: cell assemblies, synapsembles, and readers.” Neuron 68.3 (2010): 362-385. |
21.07.2016 | Guillaume | Chung, Junyoung, et al. “Empirical evaluation of gated recurrent neural networks on sequence modeling.” arXiv preprint arXiv:1412.3555 (2014). |
| | | Sussillo, David, and L. F. Abbott. “Random walk initialization for training very deep feedforward networks.” arXiv preprint arXiv:1412.6558 (2014). |
27.05.2016 | Guillaume | Williams, Ronald J. “Simple statistical gradient-following algorithms for connectionist reinforcement learning.” Machine learning 8.3-4 (1992): 229-256. |
24.03.2016 | Anand | Denève, Sophie, and Christian K. Machens. “Efficient codes and balanced networks.” Nature neuroscience 19.3 (2016): 375-382. |
17.03.2016 | Anand | Abbott, L. F., Brian DePasquale, and Raoul-Martin Memmesheimer. “Building functional networks of spiking model neurons.” Nature neuroscience 19.3 (2016): 350-355. |
10.03.2016 | David | Hochreiter, Sepp, and Jürgen Schmidhuber. “Long short-term memory.” Neural computation 9.8 (1997): 1735-1780. |
| | | Graves, Alex, and Jürgen Schmidhuber. “Offline handwriting recognition with multidimensional recurrent neural networks.” Advances in Neural Information Processing Systems. 2009. |
| | | Graves, Alex. “Generating sequences with recurrent neural networks.” arXiv preprint arXiv:1308.0850 (2013). |
| | | Graves, Alex, Greg Wayne, and Ivo Danihelka. “Neural turing machines.” arXiv preprint arXiv:1410.5401 (2014). |
26.02.2016 | Guillaume | Gardner, Brian, Ioana Sporea, and André Grüning. “Learning spatiotemporally encoded pattern transformations in structured spiking neural networks.” Neural computation (2015). |
15.12.2015 | Guillaume | Hennequin, Guillaume, Tim P. Vogels, and Wulfram Gerstner. “Optimal control of transient dynamics in balanced networks supports generation of complex movements.” Neuron 82.6 (2014): 1394-1406. |
11.12.2015 | Gernot | Avermann, Michael, et al. “Microcircuits of excitatory and inhibitory neurons in layer 2/3 of mouse barrel cortex.” Journal of neurophysiology 107.11 (2012): 3116-3134. |
31.11.2015 | Christoph | Mante, Valerio, et al. “Context-dependent computation by recurrent dynamics in prefrontal cortex.” Nature 503.7474 (2013): 78-84. |
17.11.2015 | David | Pfister, Jean-Pascal, Peter Dayan, and Máté Lengyel. “Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials.” Nature neuroscience 13.10 (2010): 1271-1275. |
27.10.2015 | Zhaofei | Habenschuss, Stefan, Helmut Puhr, and Wolfgang Maass. “Emergence of optimal decoding of population codes through STDP.” Neural computation 25.6 (2013): 1371-1407. |
20.10.2015 | Anand | Maass, Wolfgang, Thomas Natschläger, and Henry Markram. “Real-time computing without stable states: A new framework for neural computation based on perturbations.” Neural computation 14.11 (2002): 2531-2560. |
13.10.2015 | Guillaume | Brunel, Nicolas. “Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons.” Journal of computational neuroscience 8.3 (2000): 183-208. |