Technology

Meta's TRIBEv2 uses 700 brain scans to engineer viral content

Meta's TRIBEv2 predicts viral videos by simulating brain responses trained on fMRI data from 700+ people

Published April 25, 2026

Meta can now simulate how your brain will respond to a video before a single real viewer has watched it. TRIBEv2, released by Meta's Fundamental AI Research (FAIR) team in March 2026, is a foundation model trained on fMRI data from over 700 volunteers, mapping neural activity across approximately 70,000 points on the cortex.

The result, according to the company, is a digital stand-in for the human brain: a model that predicts neurological engagement with video, audio, and text-based content at a precision the company says is 70 times finer than its predecessor's.


The model takes three forms of input: video, audio, and text. It matches them to patterns drawn from fMRI scans recorded while volunteers watched videos, listened to podcasts, and read text. Those scans capture genuine neurological activity, showing which areas of the brain activate in response to each kind of content.
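Meta has not published TRIBEv2's architecture, but the pipeline as described, per-modality features fused and projected onto cortical activity, can be sketched in a few lines. Everything below is hypothetical: the feature sizes, the linear read-out, and the random weights are stand-ins for whatever the real model learns from fMRI recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; the real model reportedly maps to ~70,000 cortical points.
N_VIDEO, N_AUDIO, N_TEXT = 8, 6, 4   # per-modality feature sizes (invented)
N_CORTEX = 100                        # stand-in for ~70,000 cortical vertices

def encode_trimodal(video_feat, audio_feat, text_feat, weights):
    """Fuse per-modality features and project to simulated cortical activity.

    A linear read-out is only an illustration of the fuse-then-project
    shape the article describes, not Meta's actual architecture.
    """
    fused = np.concatenate([video_feat, audio_feat, text_feat])
    return weights @ fused  # one predicted value per cortical point

# Hypothetical weights; in reality these would be fit to fMRI data.
W = rng.normal(size=(N_CORTEX, N_VIDEO + N_AUDIO + N_TEXT))

activity = encode_trimodal(rng.normal(size=N_VIDEO),
                           rng.normal(size=N_AUDIO),
                           rng.normal(size=N_TEXT), W)
print(activity.shape)  # (100,)
```

The essential point is the output shape: whatever happens inside the model, it must emit one activity estimate per cortical point so that region-level engagement can be read off afterwards.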

TRIBEv2 applies this mapping to model how a new piece of content will drive the brain regions involved in sustained attention, emotional arousal, and reward. If the simulation predicts strong activation in those regions, the content is likely to grab viewers' attention in real life.

The key business value of TRIBEv2 is that it allows zero-shot prediction: it can forecast brain activity for people it has never scanned. Content producers and platforms therefore do not need to run a new participant study every time a new piece of content becomes available.

The model works for a wide variety of people, allowing it to be used outside of a lab setting.
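One common way to get zero-shot, population-level predictions from per-subject fMRI fits is to pool the trained subjects' read-outs into an averaged response for a never-scanned viewer. The sketch below assumes that approach; the variable names and sizes are illustrative, and the article does not confirm this is how TRIBEv2 generalises.

```python
import numpy as np

rng = np.random.default_rng(1)

N_SUBJECTS, N_FEATURES, N_CORTEX = 5, 10, 50  # toy sizes, all hypothetical

# Per-subject read-out weights, as if fit during training on each
# volunteer's fMRI scans (simulated here with random values).
subject_weights = rng.normal(size=(N_SUBJECTS, N_CORTEX, N_FEATURES))

def predict_zero_shot(content_features):
    """Predict cortical activity for an unseen viewer by averaging
    the trained subjects' predicted responses into one population
    response (an assumed mechanism, not Meta's documented one)."""
    per_subject = subject_weights @ content_features   # (subjects, cortex)
    return per_subject.mean(axis=0)                    # population average

features = rng.normal(size=N_FEATURES)
population_response = predict_zero_shot(features)
```

Averaging across subjects is what makes the prediction portable: the output no longer depends on any single scanned brain, which matches the article's claim that the model works across a wide variety of people.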

With TRIBEv2's analysis in hand, editors would be able to pick the most effective B-roll, pace content against predicted cognitive load, and restructure edits to sustain the brain-activity signatures most correlated with shares and replays. That is computational neuromarketing on a scale not seen before.
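In its simplest form, an editing workflow like that reduces to scoring candidate cuts by the predicted activation of a few regions of interest. The region index sets, the weights, and the scoring formula below are invented for illustration; nothing in the article specifies how such scores would actually be computed.

```python
import numpy as np

rng = np.random.default_rng(2)

N_CORTEX = 60
# Hypothetical index sets for regions tied to attention, emotion, reward.
REGIONS = {
    "attention": np.arange(0, 20),
    "emotion":   np.arange(20, 40),
    "reward":    np.arange(40, 60),
}

def engagement_score(predicted_activity, weights=None):
    """Collapse simulated cortical activity into one score: mean
    activation per region of interest, then a weighted sum. The
    weights are made up for this sketch."""
    weights = weights or {"attention": 0.4, "emotion": 0.3, "reward": 0.3}
    return sum(w * predicted_activity[REGIONS[name]].mean()
               for name, w in weights.items())

# Rank two candidate cuts of the same video by predicted engagement.
cut_a = rng.normal(loc=0.5, size=N_CORTEX)  # simulated model output
cut_b = rng.normal(loc=0.1, size=N_CORTEX)
best = max([("cut_a", cut_a), ("cut_b", cut_b)],
           key=lambda kv: engagement_score(kv[1]))[0]
print(best)
```

Ranking edits by a single scalar is what turns the neural simulation into a production tool: the editor never sees the 70,000-point map, only which cut scored higher.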

However, the uses of such a tool transcend mere social media optimisation. The ability to predict neural responses to content across videos, audio, and written language will have broad applications. 

Pareesa Afreen
Pareesa Afreen is a reporter and sub-editor specialising in technology coverage, with three years of experience. She reports on digital innovation, gadgets, and emerging tech trends while ensuring clarity and accuracy through her editorial role, delivering accessible and engaging stories for a fast-evolving digital audience.