10/26/2022 – “We eat with our eyes first.”
The Roman gourmet Apicius is said to have uttered these words in the 1st century AD. Now, about 2,000 years later, scientists would agree with him.
Researchers at the Massachusetts Institute of Technology have discovered a previously unknown part of the brain that lights up when we see food. This part, called the “ventral food component,” is located in the brain’s visual cortex, in a region known to play a role in identifying faces, scenes, and words.
The study, published in the journal Current Biology, used artificial intelligence (AI) to build a computer model of this part of the brain. Similar models are created across research fields to simulate and examine complex systems of the body; a computer model of the digestive system was recently used to determine the best posture for taking a pill.
“The research is still cutting edge,” says study author Meenakshi Khosla, PhD. “Much more needs to be done to understand whether this region is the same or different in different individuals and how it is modulated by experience or familiarity with different types of food.”
Finding these differences could provide insights into how people choose what to eat or even help us uncover what drives eating disorders, says Khosla.
Part of what made this study unique was the researchers’ approach, which they describe as “hypothesis-neutral.” Rather than setting out to prove or disprove a specific hypothesis, they simply began examining the data to see what they could find. The goal: to go beyond “the idiosyncratic hypotheses that scientists have already thought of testing,” the paper says. So they searched a public database called the Natural Scenes Dataset, an inventory of brain scans from eight volunteers looking at 56,720 images.
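To make the “hypothesis-neutral” idea concrete, here is a minimal Python sketch of one common data-driven approach: factorizing an image-by-voxel response matrix and letting components emerge on their own, rather than testing a category chosen in advance. The synthetic data, the component count, and the use of non-negative matrix factorization are illustrative assumptions, not a reproduction of the study’s actual pipeline.

```python
# A minimal sketch of the "hypothesis-neutral" idea described above:
# instead of testing a preselected category, decompose the image-by-voxel
# response matrix and let recurring components emerge from the data.
# The matrix here is synthetic; the real analysis used fMRI responses from
# the Natural Scenes Dataset, and the decomposition details are assumptions.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

n_images, n_voxels, n_true = 1000, 300, 4
# Build a toy non-negative response matrix driven by a few hidden components
image_loadings = rng.random((n_images, n_true))
voxel_weights = rng.random((n_true, n_voxels))
responses = image_loadings @ voxel_weights + 0.05 * rng.random((n_images, n_voxels))

# Ask for a handful of components without specifying what they should represent
decomp = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
image_scores = decomp.fit_transform(responses)  # how strongly each image drives each component
voxel_maps = decomp.components_                 # where each component lives across voxels

# Inspecting which images score highest on each component is the kind of step
# that, in the study, surfaced faces, scenes, words, bodies -- and, unexpectedly, food.
top_images = np.argsort(image_scores[:, 0])[::-1][:10]
print("images driving component 0 most strongly:", top_images)
```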
As expected, the software that analyzed the data set discovered brain regions already known to be triggered by images of faces, bodies, words and scenes. But to the researchers’ surprise, the analysis also revealed a previously unknown part of the brain that appeared to respond to images of food.
“Our first reaction was, ‘That’s cute and all, but it can’t possibly be true,'” says Khosla.
To confirm their discovery, the researchers used the data to train a computer model of this part of the brain, a process that takes less than an hour. Then they fed the model more than 1.2 million new pictures.
Sure enough, the model lit up in response to food. Color didn’t matter: even black-and-white food images triggered it, though not as strongly as color ones. And the model could tell the difference between food and objects that merely look like food: a banana versus a crescent moon, or a blueberry muffin versus a puppy with a muffin-like face.
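For readers curious what “training a computer model of this part of the brain” can look like in practice, here is a minimal sketch of an image-computable encoding model: it learns to map image features to the region’s responses, then is probed with new food and non-food images. The feature vectors, the synthetic responses, and the ridge-regression model are all assumptions for illustration; the study’s own model and training data are not reproduced here.

```python
# A minimal sketch of an image-computable "encoding model" of the kind
# described above: it learns to predict a brain region's responses from
# image features, then is probed with new food vs. non-food images.
# Everything here (feature sizes, synthetic data, ridge regression) is an
# illustrative assumption, not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_train, n_feat, n_voxels = 2000, 512, 100    # assumed sizes
X_train = rng.normal(size=(n_train, n_feat))  # stand-in image features
                                              # (e.g., from a pretrained vision network)

# Synthetic "fMRI" responses: voxels that respond extra strongly along a hidden "food" direction
food_axis = rng.normal(size=n_feat)
is_food = rng.random(n_train) < 0.5
y_train = ((X_train @ rng.normal(size=(n_feat, n_voxels))) * 0.1
           + np.outer(is_food * (X_train @ food_axis), rng.random(n_voxels)))

model = Ridge(alpha=10.0).fit(X_train, y_train)  # the fitted "computer model"

# Probe the fitted model with new images and compare its predicted responses
X_food = rng.normal(size=(500, n_feat)) + 0.5 * food_axis  # food-like features
X_other = rng.normal(size=(500, n_feat))                   # non-food features
print("mean predicted response (food):    ", model.predict(X_food).mean())
print("mean predicted response (non-food):", model.predict(X_other).mean())
```

In this toy setup, the predicted responses to food-like inputs come out clearly higher than to non-food inputs, which is the kind of contrast the researchers checked when they fed their model more than 1.2 million new pictures.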
Using the human data, the researchers found that some people responded slightly more strongly to processed foods like pizza than to unprocessed foods like apples. They hope to explore how other factors, such as liking or disliking a food, can affect a person’s response to it.
This technology could also open up other research areas. Khosla hopes to use it to explore how the brain responds to social cues like body language and facial expressions.
For now, Khosla has already started testing the computer model on real people by scanning the brains of a new group of volunteers. “We recently collected pilot data in a few subjects and were able to localize this component,” she says.