Neuroscience news block: magic mushrooms, a very intelligent AI, strobe light against Alzheimer’s and more!

If keeping up with political news starts getting you down (can’t blame you for that) then I have something for you: good old neuroscience discoveries with zero mentions of Trump!

First off, psychonauts, rejoice! Researchers recently took one step further towards making psilocybin (the active component of magic mushrooms) an approved therapy. In two new studies they found that psilocybin can ease depression and anxiety in patients with advanced cancer. And the patients didn’t even need to take it continuously like some common antidepressants -- a single large dose alleviated their symptoms for up to six months. Not only that: the majority of people felt more satisfied with their lives and more optimistic, and a bit more than half of them described the experience as one of the most meaningful of their lives. (“The cloud of doom seemed to lift”, “it was magic”, “a space shuttle launch that begins with the clunky trappings of Earth, then gives way to the weightlessness and majesty of space” -- hit me up when regular antidepressants have similar effects.) This evident success went against the researchers’ initial worry that patients might “see the existential void” during the trip and come back even more depressed. And these are far from the first studies to show a beneficial effect of psychedelics on mental health: read up on how they could help with addiction, depression and obsessive-compulsive disorder.


Second off, college students… do not rejoice! I bear sad news. Recent findings suggest that the all-nighters we all like to pull before an essay deadline are as unhealthy as all the sugar in the Red Bulls we drink to get through them.

Leslie Knope knows what's up.

There is this thing in our brain called the circadian clock that keeps us on a 24-hour cycle by telling us when to sleep (at night, when it’s dark), when to wake (in the morning, when it’s getting light), when our body temperature should hit its minimum (around 5 am) and when to produce the most cortisol (in the morning, to get us stressed for the new day). The brain region responsible for all this is a teeny-tiny structure called the SCN -- the suprachiasmatic nucleus.
And what do you do when you want to test for a causal relationship between a malfunctioning circadian clock and mood? Correct: you mess with nature and create a new genetically modified mouse model tailored just for that! To do it, researchers suppressed the master gene driving the circadian rhythm in the SCN of these mice, which reduced the SCN’s signal strength by around 80%. Their bodies were now basically clueless about when to sleep (and when to party). And lo and behold: disrupting this internal master clock made previously perfectly healthy mice depressed and anxious. They showed signs of hopelessness and despair, avoided brightly lit areas, had an altered cortisol secretion pattern (cortisol normally helps with the stress response) and -- on top of that -- gained weight. So even though the SCN is not directly involved in mood disorders, it seemingly plays an important indirect role in their development (and presents another potential therapy target). Who knows, maybe in a couple of decades we will be able to assemble a clear picture of what causes depression from all these puzzle pieces.

From http://media.web.britannica.com/eb-media/64/104464-004-285BDE23.gif

Another new study demonstrated how smart AI is becoming (our baby grows so fast!). Despite being pretty good at identifying faces, computers still struggle with hair (and given what you see when googling “weird haircuts”, no one can blame them). Canadian researchers tackled this problem and taught an algorithm to correctly identify people’s hair in pictures. Unlike most machine learning studies, this algorithm learnt directly from human instructions instead of digesting a huge example dataset -- an approach called “heuristic training”. To teach it to rely on the criteria we use when recognizing hair -- such as texture, colour or the direction in which hair flows -- the researchers gave the machine guidelines to follow. It didn’t need to consume thousands of example pictures to know what hair looks like; instead it used human expertise to classify a given patch of an image as hair or not. And it worked out -- the algorithm outperformed conventionally trained neural networks by 160%! The approach is very promising for helping AI quickly and reliably recognise and classify new objects -- it would be enough to provide the right classification rules instead of a large dataset (e.g. “pixels at the top of a picture are more likely to be sky than ones at the bottom”).
However, enabling Siri to give you an honest hair opinion is not the main priority (albeit a very marketing-friendly one): the researchers hope that the algorithm will help to accurately identify skin cancer or to design safer self-driving cars. "If you could take [dermatologists'] expertise, and then train a deep neural net to then realize some features or details that even doctors won't be aware of, that would be just amazing," one of the researchers, Parham Aarabi, said.
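To make the idea of “heuristic training” a bit more concrete, here is a minimal, purely hypothetical Python sketch of rule-based pixel scoring. The rules, thresholds and weights below are my own illustrative assumptions, not the ones from the actual study; the point is only that the “knowledge” lives in hand-written rules rather than in weights learned from thousands of labelled images.

```python
from dataclasses import dataclass

@dataclass
class Pixel:
    r: int
    g: int
    b: int
    local_contrast: float   # crude stand-in for texture
    y_position: float       # 0.0 = top of image, 1.0 = bottom

def hair_score(p: Pixel) -> float:
    """Score a pixel with hand-written rules instead of learned weights."""
    score = 0.0
    # Rule 1: typical hair colours (blacks, browns, blondes) have low saturation
    if max(p.r, p.g, p.b) - min(p.r, p.g, p.b) < 60:
        score += 0.4
    # Rule 2: hair regions show fine, directional texture (high local contrast)
    if p.local_contrast > 0.2:
        score += 0.4
    # Rule 3: in a portrait, hair is more likely near the top of the frame
    if p.y_position < 0.5:
        score += 0.2
    return score

def is_hair(p: Pixel, threshold: float = 0.6) -> bool:
    return hair_score(p) >= threshold

# Example: a dark, textured pixel near the top of the frame counts as hair
print(is_hair(Pixel(r=60, g=45, b=30, local_contrast=0.35, y_position=0.2)))  # True
```

In the real system, rules of this kind would presumably guide the training of a deep neural network rather than replace it outright, as the quote above suggests.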

The left and centre columns show an aggressive and conservative image-recognition algorithm trained to recognize human hair, compared to the more precise heuristically trained algorithm at right. The red overlay represents areas of the image each algorithm has determined to be hair. (IEEE Transactions on Neural Networks & Learning Systems)

In another cool study, MIT scientists found that shining a strobe light into rodents’ eyes encourages protective cells to destroy harmful beta amyloid proteins (which accumulate in the brains of Alzheimer’s patients) quickly and efficiently. The reduction in beta amyloid plaques seems to come from the light flashes inducing brain waves known as gamma oscillations. These waves are implicated in brain functions such as attention, perception and memory and -- surprise! -- appear to be disrupted in Alzheimer’s patients.
Now that doesn’t mean you should drag your grandma to a club: the perfect frequency turned out to be 40 flashes a second, which is much faster than disco lights. The researchers found that putting mice with Alzheimer’s in front of a flickering light for an hour led to a significant reduction in beta amyloids in the visual brain areas. Seeing that the effect was rather short-lived and that protein levels returned to baseline after one day, they extended the therapy and stimulated the mice for a whole week instead. This time the reduction was even more noticeable -- the gamma waves slowed down the production of amyloids and made the brain’s immune cells bloodthirsty for them. The next step now would be to figure out a way to use this non-invasive technique to restore normal gamma wave patterns in other parts of the brain (in earlier experiments they induced gamma waves directly in the hippocampus, the memory centre; despite the successful outcome, that is a degree of invasiveness not always possible and/or advisable). Just as with depression, there is hope that in a few decades we will finally have a clear picture of what’s going on in Alzheimer’s and how to stop it from going on.
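For a sense of scale, here is a rough back-of-the-envelope sketch in Python of the flicker stimulus described above. The 40 Hz rate and the one-hour session come from the article; the square-wave shape, the duty cycle and all the names are illustrative assumptions on my part, not details from the study.

```python
FLASH_RATE_HZ = 40          # 40 flashes per second -- a gamma-band frequency
SESSION_SECONDS = 60 * 60   # one-hour exposure, as in the mouse experiment

def light_is_on(t_seconds: float, duty_cycle: float = 0.5) -> bool:
    """Square-wave flicker: ON for the first half of each 25 ms cycle (assumed duty cycle)."""
    phase = (t_seconds * FLASH_RATE_HZ) % 1.0
    return phase < duty_cycle

flashes_per_session = FLASH_RATE_HZ * SESSION_SECONDS
print(f"{flashes_per_session:,} flashes per one-hour session")  # 144,000
```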