Georgia Tech at NeurIPS 2021

GT Research in Machine Learning

NeurIPS 2021 Lead Author Spotlight

Ran Liu, Machine Learning PhD student

Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity

Friday, Dec. 10, 11:30 a.m. EST

Oral

#COMPUTATIONAL NEUROSCIENCE

We introduce SwapVAE, a novel self-supervised approach for learning disentangled representations of neural activity. Our approach combines a generative modeling framework with an instance-specific alignment loss that maximizes the representational similarity between transformed views of the same input (brain state). Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
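To give a rough sense of how these pieces fit together, here is a minimal PyTorch sketch of a SwapVAE-style objective: a reconstruction term, a KL term on part of the latent code, and an alignment term pulling together the representations of two augmented views. The class name, layer sizes, loss weights, and the neuron-dropout augmentation are illustrative assumptions for this example, not the authors' exact configuration.

import torch
import torch.nn.functional as F
from torch import nn

class TinySwapStyleVAE(nn.Module):
    """Toy VAE whose latent code splits into a content block and a style block."""

    def __init__(self, n_neurons=100, content_dim=16, style_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_neurons, 128), nn.ReLU())
        self.to_content = nn.Linear(128, content_dim)   # deterministic content code
        self.to_style = nn.Linear(128, 2 * style_dim)   # mean and log-variance
        self.decoder = nn.Sequential(
            nn.Linear(content_dim + style_dim, 128), nn.ReLU(),
            nn.Linear(128, n_neurons))

    def forward(self, x):
        h = self.encoder(x)
        content = self.to_content(h)
        mu, logvar = self.to_style(h).chunk(2, dim=1)
        style = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.decoder(torch.cat([content, style], dim=1))
        return recon, content, mu, logvar

def loss_two_views(model, x, beta=1e-3, alpha=1.0):
    # Two views of the same activity via random neuron dropout (the "Drop" step).
    v1, v2 = F.dropout(x, 0.2), F.dropout(x, 0.2)
    out1, out2 = model(v1), model(v2)
    loss = 0.0
    for (recon, _, mu, logvar), view in ((out1, v1), (out2, v2)):
        loss = loss + F.mse_loss(recon, view)  # reconstruction term
        # Standard Gaussian KL, applied only to the style block.
        loss = loss + beta * (-0.5 * (1 + logvar - mu**2 - logvar.exp()).sum(1).mean())
    # Alignment term: the content blocks of the two views should agree.
    loss = loss + alpha * F.mse_loss(out1[1], out2[1])
    return loss

Reconstruction keeps the codes faithful to the data, the KL term regularizes the style block, and the alignment term is what encourages the content block to capture structure shared across views of the same brain state.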

Q&A with Ran Liu

What motivated your work on this paper?

Brain activity is often complex and noisy, yet neuroscientists believe that a low-dimensional neural representation governing these signals exists. The biggest motivation for this project was to find a low-dimensional representation space that could 'explain' the neural signals. Our work, SwapVAE, presents an initial step toward this goal by combining self-supervised learning (SSL) techniques with a generative modeling framework to learn interpretable representations of neural activity.

If readers remember one takeaway from the paper, what should it be and why?

It should be our latent space augmentation operation, BlockSwap. BlockSwap makes the latent representation more interpretable by separating it into an augmentation-invariant part and an augmentation-variant part, and swapping the invariant parts between two views before reconstruction (see the sketch below). We hope BlockSwap can be applied in other scenarios where interpretability matters.
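To make the operation concrete, here is a minimal sketch of a BlockSwap-style swap, assuming the latent vector's leading dimensions are the content (augmentation-invariant) block and the rest are style (augmentation-variant); the function name and the block layout are illustrative assumptions, not the paper's exact code.

import torch

def block_swap(z1: torch.Tensor, z2: torch.Tensor, content_dim: int):
    """Swap the augmentation-invariant (content) blocks of two latent codes.

    z1, z2: latent codes of two augmented views, shape (batch, latent_dim).
    content_dim: size of the leading block treated as content; the remaining
    dimensions are treated as style (augmentation-variant).
    """
    c1, s1 = z1[:, :content_dim], z1[:, content_dim:]
    c2, s2 = z2[:, :content_dim], z2[:, content_dim:]
    # Each swapped code pairs one view's style with the other view's content.
    z_swap1 = torch.cat([c2, s1], dim=1)
    z_swap2 = torch.cat([c1, s2], dim=1)
    return z_swap1, z_swap2

Because the two views come from the same underlying brain state, reconstructing each view from its swapped code should succeed only if the content block carries just the information shared across augmentations, which is what pushes the latent space toward a disentangled, interpretable split.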

Were there any “aha” moments or lessons that you’ll use to inform your future work?

My lesson was summarized 2000 years ago by Aristotle: “For the things we have to learn before we can do them, we learn by doing them.”

What are you most excited for at NeurIPS and what do you hope to take away from the experience?

I am excited to check out other cutting-edge work! I hope to learn from computational neuroscientists how they approach similar problems, and I am also excited to draw inspiration from other deep learning scientists.

Ran with her cat Tigger who is currently pursuing a master of catputer science degree.
