
    AI gives glimpse of how a canine brain represents what it sees

    By decoding visual images from a dog’s brain, scientists have offered a first look at how the canine mind reconstructs what it sees. The study, conducted at Emory University, was published in the Journal of Visualized Experiments.

    The findings suggest that dogs are more attuned to actions in their environment than to the person or thing performing those actions.

    The researchers recorded fMRI neural data from two awake, unrestrained dogs as each watched videos in three 30-minute sessions, for a total of 90 minutes. They then used a machine-learning algorithm to analyze the patterns in the neural data.

    “We demonstrated that we could track the activity in a dog’s brain while it is watching a video and, to at least a limited extent, reconstruct what it is looking at,” says Gregory Berns, an Emory professor of psychology and the paper’s corresponding author. “It’s incredible that we can pull that off.”

    The project was inspired by recent advances in using fMRI and machine learning to decode visual stimuli from the human brain, work that has provided new insights into the nature of perception. Beyond humans, the technique has been applied to only a handful of other species, including some primates.

    “While our work is based on just two dogs, it offers proof of concept that these methods work on canines,” says Erin Phillips, the paper’s first author, who did the work as a research specialist in Berns’ Canine Cognitive Neuroscience Lab. “I hope this paper paves the way for other researchers to apply these techniques to dogs and other species as well, so we can get more data and bigger insights into how the minds of different animals work.”

    Phillips, a native of Scotland, attended Emory as a Bobby Jones Scholar, part of a collaboration between Emory and the University of St Andrews. She is now a graduate student in ecology and evolutionary biology at Princeton University.

    Berns and colleagues developed the first training methods for getting dogs to walk into an fMRI scanner and hold still, unrestrained, while their neural activity is recorded. Ten years ago, his team released the first fMRI brain scans of an awake, unrestrained dog. That work launched what Berns calls The Dog Project, a series of experiments exploring the mind of the earliest domesticated species.

    His laboratory has authored studies over the years on how the canine brain interprets sights, sounds, smells, and rewards like praise or food.

    Meanwhile, the technology underlying machine-learning algorithms kept advancing, allowing scientists to decode some patterns of human brain activity. The technology “reads minds” by detecting, within the brain-data patterns, the different objects or actions that an individual is seeing while watching a video.

    “I started to consider whether we could use similar methods on dogs,” Berns says.

    The first challenge was finding video content that a dog might find engaging enough to watch for an extended period. The Emory research team mounted a video recorder on a gimbal and selfie stick to capture steady footage from roughly a dog’s perspective, at about a human’s waist height or a little lower.

    They used the rig to record a half-hour video of scenes relating to the daily lives of most dogs. Activities included people petting dogs and giving them treats. Scenes with dogs also showed them sniffing, playing, eating, and walking on a leash. Other scenes showed people sitting, hugging or kissing, offering a rubber bone or a ball to the camera, and eating, as well as cars, bikes, or scooters passing by on a road.

    The video data was divided into different categories based on time stamps, including object-based categories (such as dog, car, human, and cat) and action-based categories (such as sniffing, playing, or eating).
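    As a rough illustration of what such time-stamped annotations might look like (not the authors’ actual scheme; the segment times and labels below are invented), each stretch of video can be stored as a record carrying its object and action tags:

```python
# Hypothetical sketch of time-stamped video annotations (illustrative only).
from dataclasses import dataclass

@dataclass
class VideoSegment:
    start_s: float       # segment start time in seconds
    end_s: float         # segment end time in seconds
    objects: list[str]   # object-based labels present (e.g. "dog", "human", "car")
    actions: list[str]   # action-based labels present (e.g. "sniffing", "playing")

# A few made-up annotations standing in for the real half-hour video.
segments = [
    VideoSegment(0.0, 12.5, objects=["dog", "human"], actions=["petting"]),
    VideoSegment(12.5, 30.0, objects=["dog"], actions=["sniffing"]),
    VideoSegment(30.0, 55.0, objects=["car"], actions=["passing"]),
]

def labels_at(t: float) -> tuple[list[str], list[str]]:
    """Return the object and action labels active at time t (in seconds)."""
    for seg in segments:
        if seg.start_s <= t < seg.end_s:
            return seg.objects, seg.actions
    return [], []
```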

    Only two of the dogs that had been trained for fMRI experiments had the attention span and temperament to lie motionless and watch the 30-minute video without a break, over three sessions totaling 90 minutes. These two “superstar” canines were Daisy, a mixed-breed dog who may be part Boston terrier, and Bhubo, a mixed-breed dog who may be part boxer.

    Phillips, who monitored the animals during the fMRI sessions and watched their eyes tracking the video, notes that they didn’t even require treats. It was amusing, she adds, because serious science and a great deal of time and effort went into the work, yet it came down to dogs watching videos of other dogs and people acting silly.

    The same experiment was performed on two humans, who lay in an fMRI scanner and watched the same 30-minute video three times.

    Using time stamps, the brain data could be mapped onto the video classifiers.
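    A minimal sketch of that alignment step, assuming a fixed repetition time (TR) and invented annotation intervals, might look like the following; none of the values come from the study.

```python
# Hypothetical sketch: assign each fMRI volume the video label active at its acquisition time.
TR = 1.0  # assumed repetition time in seconds (one brain volume per second)

# (start_s, end_s, action) intervals, mirroring the time-stamped annotations above.
action_intervals = [
    (0.0, 12.5, "petting"),
    (12.5, 30.0, "sniffing"),
    (30.0, 55.0, "passing"),
]

def volume_label(volume_index: int) -> str:
    """Return the action label for the volume acquired at time volume_index * TR."""
    t = volume_index * TR
    for start, end, action in action_intervals:
        if start <= t < end:
            return action
    return "none"

# Label the first 90 volumes (placeholder count), giving one class per brain volume.
volume_labels = [volume_label(i) for i in range(90)]
```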

    Ivis, a machine-learning algorithm based on a neural network, was used to analyze the data. A neural network performs machine learning by having a computer learn from training examples; in this case, the network was trained to sort the brain data into the classifier categories.
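    Ivis is an open-source Python library that uses a siamese neural network to learn a low-dimensional embedding of high-dimensional data. The sketch below shows one plausible way to combine it with an off-the-shelf classifier; it is an assumed workflow with placeholder data, labels, and parameters, not the pipeline used in the study.

```python
# Hypothetical sketch: embed brain volumes with Ivis, then classify the embeddings.
# Requires `pip install ivis scikit-learn`; all shapes, labels, and parameters are placeholders.
import numpy as np
from ivis import Ivis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 10_000), dtype=np.float32)                    # stand-in voxel data: one row per volume
y = rng.choice(["sniffing", "playing", "eating", "walking"], 500)  # stand-in action labels per volume

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Learn a low-dimensional embedding of the training volumes, then fit a simple classifier on it.
embedder = Ivis(embedding_dims=2, k=15)
emb_train = embedder.fit_transform(X_train)
emb_test = embedder.transform(X_test)

clf = LogisticRegression(max_iter=1000).fit(emb_train, y_train)
print("held-out accuracy:", clf.score(emb_test, y_test))
```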

    For the two human subjects, the model developed with the neural network mapped the brain data onto both the object- and action-based classifiers with 99% accuracy.

    When decoding video content from the dogs, the model did not produce accurate results for the object classifiers. For the action classifiers, however, it was 75% to 88% accurate.

    The findings suggest major differences in how the human and canine brains work.

    “We humans are very object-oriented,” says Berns. “The English language has ten times as many nouns as verbs because we are particularly obsessed with naming things. Dogs seem to be more focused on the action than on the person or thing that they are seeing.”

    According to Berns, there are significant differences between the visual systems of dogs and humans. Dogs can only see in blue and yellow tones, but they have a slightly higher density of motion-detecting vision receptors than humans.

    It “makes perfect sense,” Berns says, that canine brains would be highly attuned to actions first and foremost. Animals must pay close attention to what is going on in their environment to avoid being eaten or to keep an eye on potential prey. Movement and action are crucial.

    For Phillips, understanding how different animals perceive the world is important to her current field research on how the reintroduction of predators in Mozambique may affect ecosystems. She notes that historically there has not been much of a connection between ecology and computer science, but machine learning is a growing field that is starting to be applied in more areas, including ecology.

    Daniel Dilks, an associate professor of psychology at Emory, and Kirsten Gillette, who worked on the project as a neuroscience and behavioral biology major at Emory, are additional authors on the paper. Since graduating, Gillette has enrolled in the University of North Carolina’s postbaccalaureate program.

    Bhubo is owned by Ashwin Sakhardande, and Daisy is owned by Rebecca Beasley. The human experiments in the study were supported in part by a grant from the National Eye Institute.
