Meta's TRIBEv2 uses brain scans from 700+ people to engineer viral content
Meta's TRIBEv2 predicts viral videos by simulating brain responses trained on fMRI data from 700+ people
Meta can now simulate how your brain will respond to a video before a single real viewer has watched it. TRIBEv2, released by Meta's Fundamental AI Research (FAIR) team in March 2026, is a foundation model trained on fMRI data from over 700 volunteers, mapping neural activity across approximately 70,000 points on the cortex.
The result, according to the company, is a digital stand-in for the human brain that predicts neurological engagement with video, audio, and text-based content at roughly 70 times the precision of its predecessor.
The model takes three input modalities (video, audio, and text) and matches them to patterns drawn from fMRI scans recorded while volunteers watched videos, listened to podcasts, and read text. The scans captured genuine neurological activity, showing which areas of the brain activate in response to each stimulus.
TRIBEv2 applies this information to model how a new piece of content will affect the brain regions involved in sustained attention, emotional arousal, and reward. If the simulation predicts strong activation in these regions, the content is likely to grab viewers' attention in real life.
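The pipeline described above can be sketched in a few lines. Everything here is illustrative: TRIBEv2's actual architecture, feature dimensions, and region definitions have not been made public, so the per-modality embeddings, the linear readout, and the region-of-interest masks are stand-ins, and the voxel count is downscaled from the ~70,000 cortical points the article mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Downscaled stand-in: the article cites ~70,000 cortical points.
N_VOXELS = 7_000
D_VIDEO, D_AUDIO, D_TEXT = 64, 32, 32

# Pretend these are pooled embeddings from per-modality encoders.
video_emb = rng.standard_normal(D_VIDEO)
audio_emb = rng.standard_normal(D_AUDIO)
text_emb = rng.standard_normal(D_TEXT)

# Fuse the modalities and apply a linear readout to voxel space
# (random weights stand in for trained parameters).
fused = np.concatenate([video_emb, audio_emb, text_emb])
W = rng.standard_normal((N_VOXELS, fused.size)) / np.sqrt(fused.size)
predicted_activity = W @ fused  # one predicted value per cortical point

# Hypothetical region-of-interest masks for the regions the article names.
roi = {
    "attention": slice(0, 1_000),
    "emotion": slice(1_000, 2_000),
    "reward": slice(2_000, 3_000),
}
scores = {name: float(np.abs(predicted_activity[s]).mean())
          for name, s in roi.items()}
print(scores)
```

A real encoding model would learn `W` by regressing recorded fMRI responses onto stimulus features; the sketch only shows the shape of the inference step, from multimodal input to per-region activation scores.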
The key business value of TRIBEv2 is that it allows zero-shot prediction: it can predict brain activity for people it has never scanned. Content producers and platforms therefore do not need to run a new participant study every time a new piece of content becomes available.
The model generalises across a wide variety of people, which allows it to be used outside a lab setting.
Using TRIBEv2's analysis, editors could select the most effective B-roll, pace content according to predicted cognitive load, and reorder segments to sustain the brain-activity signatures most correlated with shares and replays. That is computational neuromarketing at a scale not seen before.
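The editorial workflow just described amounts to scoring candidate cuts of a video and ranking them. A minimal sketch, assuming the model emits per-region activation scores per variant; the variant names, scores, and weights below are entirely hypothetical, not taken from Meta's research.

```python
# Hypothetical post-processing: rank candidate edits of a video by a
# composite engagement score built from predicted region activations.
variants = {
    "cut_A_fast_pacing": {"attention": 0.82, "emotion": 0.61, "reward": 0.74},
    "cut_B_long_broll": {"attention": 0.55, "emotion": 0.70, "reward": 0.58},
    "cut_C_reordered": {"attention": 0.77, "emotion": 0.66, "reward": 0.80},
}
# Illustrative weighting of the regions named in the article.
weights = {"attention": 0.5, "emotion": 0.2, "reward": 0.3}

def engagement(scores: dict) -> float:
    """Weighted sum of predicted region activations for one variant."""
    return sum(weights[region] * value for region, value in scores.items())

# Highest predicted engagement first.
ranked = sorted(variants, key=lambda name: engagement(variants[name]),
                reverse=True)
print(ranked)  # ['cut_C_reordered', 'cut_A_fast_pacing', 'cut_B_long_broll']
```

In practice the weighting would itself be learned against observed outcomes such as shares and replays, rather than hand-set as here.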
The uses of such a tool extend well beyond social media optimisation, however. The ability to predict neural responses to video, audio, and written language will have broad applications.
