Neuro-LLM
Toward understanding brain and language
Under the AMED Brain/MINDS 2.0 Domain 4 project, our research aims to transform computational neuroscience through four key objectives. First, we seek to develop a standard platform for analyzing large-scale brain measurement and simulation data. While a fully detailed digital brain capturing every nuance from the molecular to the behavioral level is theoretically possible, the enormous number of undetermined parameters makes it impractical. Instead, we propose building a functional digital brain model by applying Transformer architectures to both experimental and simulation datasets, linking the molecular, cellular, circuit, and behavioral scales. Second, we aim to unravel the inner workings of large language models (LLMs) and large multimodal models (LMMs). Although these Transformers excel at next-word prediction and implicitly acquire linguistic structure, how their high-level representations emerge remains unclear. We plan to create innovative, large-scale analytical methods that extend beyond current techniques such as attention flow analysis. Third, we will apply these Transformer-based techniques to neural data to optimize models for predicting brain activity, capture the essential dynamics and spatiotemporal interactions of neural populations, and refine our digital brain model. Lastly, we intend to develop a specialized LLM that automates script generation and evaluation, streamlining complex simulations and data analysis. Join us as we bridge advanced computational methods and neuroscience, paving the way for a new era of data-driven brain research!
Mohamed bin Zayed University of Artificial Intelligence
RIKEN Center for Computational Science
Department of System Information Sciences, Graduate School of Information Sciences, Tohoku University
RIKEN Center for Advanced Intelligence Project (AIP)
Nagoya Institute of Technology
Department of Physiology and Cell Biology, Institute of Science Tokyo
Riichiro Hira (Institute of Science Tokyo)
riichirohira@gmail.com