MENLO PARK — Meta’s Fundamental AI Research (FAIR) lab in Paris has long been a leader in scientific innovation, contributing significant advances across multiple fields, including medicine, climate science, and conservation. As the lab continues to focus on developing advanced machine intelligence (AMI), it is also dedicated to driving innovation that can benefit society at large. Looking to the next decade, Meta is excited to share two groundbreaking studies that demonstrate how AI can bring us closer to understanding human intelligence and the neural mechanisms behind language.
In collaboration with the Basque Center on Cognition, Brain and Language (BCBL) in San Sebastián, Spain, Meta has unveiled two key breakthroughs that push the boundaries of AI and neuroscience. The first uses AI to reconstruct sentences from non-invasive recordings of brain activity: the new model accurately decodes up to 80% of the characters in sentences typed by participants, a significant step forward for non-invasive brain decoding. The second uses AI to study how the brain transforms thoughts into language, offering new insight into how the brain coordinates speech production.
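For context on what a character-level figure like this measures, the sketch below assumes the score is reported as one minus the character error rate, i.e. the edit distance between the decoded and typed sentences divided by the length of the typed sentence; the exact metric used in the study, and the example sentences, are assumptions for illustration only.

```python
# Illustrative sketch only: scoring character-level decoding as
# 1 - (edit distance / reference length). The metric definition and the
# example sentences are assumptions, not taken from the study itself.
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr                                      # roll the DP row forward
    return prev[-1]

def char_accuracy(decoded: str, reference: str) -> float:
    """Fraction of the reference characters recovered by the decoder."""
    return 1.0 - edit_distance(decoded, reference) / max(len(reference), 1)

print(char_accuracy("the quick brown fox", "the quick brawn fix"))  # ≈ 0.89
```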
These discoveries are the result of deep collaboration with leading neuroscience institutions. To further support this important work, Meta is announcing a $2.2 million donation to the Rothschild Foundation Hospital. This donation highlights Meta’s ongoing commitment to working closely with some of Europe’s most prestigious research institutions, including NeuroSpin (CEA), Inria, ENS-PSL, and CNRS. These collaborations will be crucial as Meta continues to explore how these scientific breakthroughs can lead to practical solutions that improve lives.
Decoding Language from Brain Activity
Meta’s research on decoding language from non-invasive brain recordings offers the possibility of restoring communication for individuals who have lost the ability to speak due to brain injuries or lesions. Traditional methods of restoring communication rely on invasive techniques, such as stereotactic electroencephalography and electrocorticography, which require neurosurgical intervention. Meta’s approach instead uses non-invasive devices, magnetoencephalography (MEG) and electroencephalography (EEG), to record brain signals while participants type sentences. An AI model then decodes these signals to reconstruct the typed sentences.
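The post does not include model code, but the overall pipeline, a window of MEG or EEG signals mapped by a neural network to character probabilities, can be illustrated with a short PyTorch sketch. The architecture, sensor count, window length, and character set below are illustrative assumptions, not the model described in the study.

```python
# Illustrative sketch only: a minimal MEG-window-to-character decoder in PyTorch.
# The architecture, sensor count, window length, and character set are
# assumptions for demonstration; they do not reproduce Meta's actual model.
import torch
import torch.nn as nn

CHARSET = "abcdefghijklmnopqrstuvwxyz ',."   # hypothetical output alphabet
N_SENSORS = 306                              # assumed MEG sensor count
WINDOW = 250                                 # assumed samples per window

class BrainToCharDecoder(nn.Module):
    """Maps one window of MEG signals to a distribution over characters."""
    def __init__(self, n_sensors=N_SENSORS, n_chars=len(CHARSET)):
        super().__init__()
        # Temporal convolutions pool information across sensors and time.
        self.conv = nn.Sequential(
            nn.Conv1d(n_sensors, 128, kernel_size=7, padding=3),
            nn.GELU(),
            nn.Conv1d(128, 128, kernel_size=7, padding=3, stride=2),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(128, n_chars)

    def forward(self, meg):                  # meg: (batch, sensors, time)
        features = self.conv(meg).squeeze(-1)
        return self.head(features)           # (batch, n_chars) logits

# Usage with random data standing in for real recordings:
model = BrainToCharDecoder()
fake_meg = torch.randn(8, N_SENSORS, WINDOW)
char_logits = model(fake_meg)
predicted = [CHARSET[i] for i in char_logits.argmax(dim=-1).tolist()]
print(predicted)                             # 8 arbitrary (untrained) characters
```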
While the results are promising, the technology faces challenges, such as improving decoding accuracy and addressing practical limitations like the need for subjects to remain still in magnetically shielded rooms. Future studies will also explore how this technology can be applied to individuals with brain injuries.
Understanding How the Brain Produces Language
The second study represents a major leap forward in understanding how the brain produces language. For years, studying the brain during speech has been difficult because mouth and tongue movements corrupt the recordings. Meta’s researchers instead used AI to analyze MEG signals while participants typed sentences, allowing them to track the neural activity that transforms thoughts into language. The findings show that the brain first builds a representation of the abstract meaning of a sentence and then gradually transforms it into a sequence of actions, such as finger movements on a keyboard.
This research provides valuable insight into the neural processes underlying language production and reveals how the brain uses a dynamic neural code to link successive representations over time, enabling speech production.
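One standard way to probe how such representations unfold over time, offered here only as a rough stand-in for the analysis described above, is time-resolved decoding: train a separate linear classifier at each time point for each level of representation and compare when each level becomes decodable. In the sketch below, the scikit-learn classifiers, the epoch dimensions, and the feature labels are all assumptions, and random numbers stand in for real MEG data.

```python
# Illustrative sketch only: time-resolved decoding of several linguistic levels
# from MEG epochs. Shapes, labels, and the ridge classifiers are assumptions
# chosen for demonstration; random data stands in for real recordings.
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 306, 120     # hypothetical epoch dimensions
meg = rng.standard_normal((n_trials, n_sensors, n_times))

# Hypothetical labels at three levels of representation for each typed word.
labels = {
    "word":      rng.integers(0, 10, n_trials),  # which word was intended
    "syllable":  rng.integers(0, 5, n_trials),   # first syllable identity
    "keystroke": rng.integers(0, 26, n_trials),  # first key pressed
}

# Train a separate linear decoder at every time point and feature level; the
# time course of accuracy shows when each representation becomes decodable.
scores = {level: np.zeros(n_times) for level in labels}
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for level, y in labels.items():
    for t in range(n_times):
        X = meg[:, :, t]                         # sensor pattern at time t
        clf = RidgeClassifier(alpha=1.0)
        scores[level][t] = cross_val_score(clf, X, y, cv=cv).mean()
```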
AI for Health and Open Source Contributions
Meta’s AI models are not only advancing our understanding of language and the brain but also making significant strides in the healthcare industry. Meta’s open-source contributions have empowered companies like BrightHeart and Virgo to develop cutting-edge AI applications in medicine. BrightHeart uses Meta’s DINOv2 AI model to assist clinicians in identifying congenital heart defects in fetal heart ultrasounds, contributing to the company’s FDA 510(k) clearance. Similarly, Virgo applies DINOv2 to analyze endoscopy video, achieving state-of-the-art performance across multiple AI benchmarks.
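As a rough illustration of how a team might build on the open-source DINOv2 backbone, the sketch below loads the pretrained ViT-S/14 model from the official repository and attaches a small linear head; the two-class task, the frozen-backbone setup, and the dummy image batch are hypothetical, not BrightHeart’s or Virgo’s actual pipelines.

```python
# Illustrative sketch only: using the open-source DINOv2 backbone as a frozen
# feature extractor for a downstream imaging classifier. The two-class head and
# the dummy images are hypothetical, not any company's actual pipeline.
import torch
import torch.nn as nn

# Load a small pretrained DINOv2 vision transformer from the official repo.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

# Freeze the backbone; only a lightweight linear head would be trained.
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(384, 2)   # ViT-S/14 embeddings are 384-d; 2 hypothetical classes

# One forward pass on a dummy image batch (sides must be multiples of 14).
images = torch.randn(4, 3, 224, 224)
with torch.no_grad():
    features = backbone(images)        # (4, 384) CLS-token embeddings
logits = head(features)                # (4, 2) class scores
```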
As Meta looks to the future, the company is excited about the potential for these breakthroughs to transform healthcare and other sectors, providing innovative solutions to some of the world’s most pressing challenges.