<center>
# Notes From Codeneuro 2015 in SF
*Originally published 2015-11-20 on [docs.sweeting.me](https://docs.sweeting.me/s/blog).*
http://codeneuro.org is a cross-over conference for neuroscientists and programmers to learn from each other and share the latest state-of-the-art research from their respective fields.
</center>
## Day 1
Several great talks were given by people from both computer science and neuroscience, with presenters' backgrounds ranging from neurobiology to data science to hardcore CS.
The Day 1 talks:
- Lacey Kitch [Stanford]
- Fatma Imamoglu [Berkeley]
- Logan Grosenick [Stanford/NYU]
- Jessica Hamrick [Jupyter]
- Max Ogden [Dat]
- Karissa McKelvey [Dat]
- Greg Corrado [Google]
- Paul Merolla [IBM]
- Marion Le Borgne [Numenta]
- Marc Levoy [Google]
### Talk 1 by Lacey Kitch on calcium imaging mouse "place neurons"
Lacey is a fantastic public speaker. Aside from this talk, she also helped run the neuro track of Day 2 where she introduced many of the foundational concepts of neurobiology to a crowd of riveted computer scientists.
This talk was about tracking the firing of "place neurons" in mice brains as they navigate a maze to find a goal. She started by explaining the concept of a place neuron, which is basically a neuron that fires every time a mouse is at a certain location or locations. For example, every time the mouse reached the center of the maze, a group of several neurons would fire very excitedly. Other neurons would fire whenever the mouse passed into Arm 1 of the maze, and others once it reached the goal.
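To make the concept concrete, here's a toy model of a place cell (my own sketch, not from the talk) as a Gaussian tuning curve over position: the cell fires fastest when the mouse is near the cell's preferred location and falls silent with distance.

```python
import numpy as np

def place_cell_rate(position, preferred, width=0.1, peak_hz=20.0):
    """Toy place field: firing rate (Hz) as a Gaussian of distance
    between the mouse's 2D position and the cell's preferred location."""
    dist_sq = np.sum((np.asarray(position) - np.asarray(preferred)) ** 2)
    return peak_hz * np.exp(-dist_sq / (2 * width ** 2))

# A cell tuned to the maze center fires strongly there, barely at all in an arm.
print(place_cell_rate([0.5, 0.5], preferred=[0.5, 0.5]))  # ~20 Hz at center
print(place_cell_rate([0.9, 0.1], preferred=[0.5, 0.5]))  # ~0 Hz far away
```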
As the mouse learned the markings on the walls of the maze and gained its bearings, certain neurons would fire more predictably, indicating the mouse was more strongly associating their firing with that particular location.
They used a combination of machine learning techniques, first to identify which neurons were predictably firing at certain locations, and then to determine where in the maze each would fire. Using this information, by day 5 of training they were able to reconstruct the mouse's position just from the firings of the observed neurons, and almost all of the time it matched the mouse's actual position in the maze.
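She didn't describe their exact model, but a minimal version of the decoding step might look like the sketch below: a standard classifier trained to map a time bin's neural activity vector to a discretized maze location. The shapes, labels, and choice of logistic regression are all my assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data standing in for the real recordings: each row is one time bin's
# activity across ~1000 imaged neurons; each label is a maze location
# (e.g. 0 = center, 1 = arm 1, 2 = goal). Shapes are my assumption.
rng = np.random.default_rng(0)
n_bins, n_neurons, n_locations = 2000, 1000, 3
activity = rng.poisson(1.0, size=(n_bins, n_neurons)).astype(float)
locations = rng.integers(0, n_locations, size=n_bins)

X_train, X_test, y_train, y_test = train_test_split(
    activity, locations, test_size=0.25, random_state=0)

# Train a decoder: neural activity -> maze location.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# On random toy data this hovers around chance; on real place-cell data
# this kind of decoder is what reconstructs the mouse's position.
print("decoding accuracy:", decoder.score(X_test, y_test))
```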
Their imaging technique was fascinating in its own right. They cut open a window in the mouse's skull and placed a microscope over a 1mm^2 section of tissue, where they observed ~1000 neurons in 2d space using calcium imaging. Calcium imaging is a technique that involves expressing a protein in the soma of the neuron that opens and closes a cage around a green fluorescent molecule, in correlation with the neuron's firing. When the neuron spikes, the cage closes and forces water away from the fluorescent molecule, which allows shined laser light to fluoresce green, and shows up as a sudden green flash to the camera. When the neuron isn't firing, the protein cage opens up and allows water to quench the fluorescence of the green molecule. It's not a perfect indicator of spiking activity, and this observation method does slightly affect the activity of the neuron, but it's a good enough imaging technique for experiments like this one.
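The raw output of calcium imaging is a fluorescence trace per neuron, not spikes. A common first processing step (a general sketch, not their specific pipeline) is to normalize each trace as ΔF/F against a slow baseline and threshold it to find candidate firing events:

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20):
    """Normalize a raw fluorescence trace against a baseline F0,
    estimated here as a low percentile of the trace."""
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

def candidate_events(trace, threshold=2.0):
    """Flag time points where dF/F exceeds `threshold` standard deviations."""
    dff = delta_f_over_f(trace)
    return np.where(dff > threshold * np.std(dff))[0]

# Toy trace: flat baseline with two simulated calcium transients.
t = np.full(500, 100.0)
t[100:120] += 80 * np.exp(-np.arange(20) / 8.0)   # transient 1
t[300:320] += 60 * np.exp(-np.arange(20) / 8.0)   # transient 2
print(candidate_events(t))  # indices clustered near 100 and 300
```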
The map of neuron firing to physical location is fascinating, and is something I'd never thought about as being such a literal 1:1 landmark-to-neuron relationship. Thinking about it more, though, it does make sense: I'm able to navigate better when I can associate my surroundings with specific landmarks, and use them to orient myself in 3d space.
### Talk 2 by Fatma Imamoglu on word reconstruction from fMRI of the auditory cortex
This talk did a similar reconstruction of mental activity with a trained machine-learning model, but with a different end goal. Fatma's research was with human subjects reading words off paper while in an fMRI machine. They used supervised learning to correlate activity in various brain regions with several dimensions such as emotion, sensory activity, or color. They used a system similar to word2vec to build the dimensions for the words, so that if the subject read "running" they'd see spikes in the motor cortex that they could plot along dimensions like "physical activity", "walking", and "sports". They used their trained models to go in both directions, predicting the read words from the fMRI data, and predicting fMRI activity from the known read words.
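Their actual model wasn't published at the time, but a minimal sketch of the two-directional idea, assuming ridge regression and made-up array shapes, might look like this:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_voxels, n_dims = 300, 5000, 50   # shapes are my assumption

# Toy stand-ins: per-trial fMRI activity and word2vec-style word embeddings.
fmri = rng.normal(size=(n_trials, n_voxels))
word_vecs = rng.normal(size=(n_trials, n_dims))

# Direction 1: decode — predict the read word's embedding from brain activity.
brain_to_word = Ridge(alpha=10.0).fit(fmri, word_vecs)

# Direction 2: encode — predict voxel activity from the known word's embedding.
word_to_brain = Ridge(alpha=10.0).fit(word_vecs, fmri)

predicted_vec = brain_to_word.predict(fmri[:1])        # decoded word vector
predicted_scan = word_to_brain.predict(word_vecs[:1])  # predicted voxel pattern
print(predicted_vec.shape, predicted_scan.shape)       # (1, 50) (1, 5000)
```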
This research was fascinating for a number of reasons. I gather from talking with my roomates Carson and Kathy who studied neurscience that this was brand new unreleased research. The shocking part for me was that such accurate correlations could be made from dense and noisy fMRI results, and that the dimensions they plotted the words on closely correlated to actual areas of the brain for physical and emotional activity.
They weren't able to recover the exact words the subjects read, but using word2vec and similar libraries they could probably reconstruct some paraphrased form of the read material out of synonyms.
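That last step, turning a decoded vector back into words, could be a simple nearest-neighbor lookup in the embedding space. A self-contained sketch with cosine similarity (the vocabulary and vectors here are toys; real use would load pretrained word2vec or GloVe embeddings):

```python
import numpy as np

def nearest_words(query_vec, vocab, vectors, topn=5):
    """Return the `topn` vocabulary words whose embeddings are most
    cosine-similar to a decoded query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    m = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = m @ q
    best = np.argsort(-sims)[:topn]
    return [(vocab[i], float(sims[i])) for i in best]

# Toy vocabulary and embeddings standing in for a real word2vec model.
vocab = ["running", "jogging", "sprinting", "reading", "sleeping"]
vectors = np.random.default_rng(0).normal(size=(len(vocab), 50))

# A decoded vector near "running" should rank its synonyms highest.
print(nearest_words(vectors[0] + 0.1, vocab, vectors, topn=3))
```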
### Talk 3 by Logan Grosenick on mapping clarified mouse brain connections in 3d space
Logan talked more generally about the difficulties of doing 3d mapping of connections within the brain. He covered various imaging techniques used to build connectomes of brains, including the previously covered calcium imaging, as well as fMRI and magnetic imaging.