
Study shows vision-language models can't handle queries with negation words


Imagine a radiologist examining a chest X-ray from a new patient. She notices the patient has swelling in the tissue but does not have an enlarged heart. Looking to speed up diagnosis, she might use a vision-language machine-learning model to search for reports from similar patients.

But if the model mistakenly identifies reports with both conditions, the most likely diagnosis could be quite different: If a patient has tissue swelling and an enlarged heart, the condition is very likely to be cardiac related, but with no enlarged heart there could be several underlying causes.

In a new study, MIT researchers have found that vision-language models are extremely likely to make such a mistake in real-world situations because they do not understand negation — words like “no” and “doesn’t” that specify what is false or absent.

“Those negation words can have a very significant impact, and if we are just using these models blindly, we may run into catastrophic consequences,” says Kumail Alhamoud, an MIT graduate student and lead author of this study.

The researchers tested the ability of vision-language models to identify negation in image captions. The models often performed as well as a random guess. Building on those findings, the team created a dataset of images with corresponding captions that include negation words describing missing objects.

They show that retraining a vision-language model with this dataset leads to performance improvements when a model is asked to retrieve images that do not contain certain objects. It also boosts accuracy on multiple-choice question answering with negated captions.

But the researchers caution that more work is needed to address the root causes of this problem. They hope their research alerts potential users to a previously unnoticed shortcoming that could have serious implications in high-stakes settings where these models are currently being used, from determining which patients receive certain treatments to identifying product defects in manufacturing plants.

“This is a technical paper, but there are bigger issues to consider. If something as fundamental as negation is broken, we shouldn’t be using large vision/language models in many of the ways we are using them now — without intensive evaluation,” says senior author Marzyeh Ghassemi, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems.

Ghassemi and Alhamoud are joined on the paper by Shaden Alshammari, an MIT graduate student; Yonglong Tian of OpenAI; Guohao Li, a former postdoc at Oxford University; Philip H.S. Torr, a professor at Oxford; and Yoon Kim, an assistant professor of EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

Neglecting negation

Vision-language models (VLMs) are trained using huge collections of images and corresponding captions, which they learn to encode as sets of numbers, called vector representations. The models use these vectors to distinguish between different images.

A VLM uses two separate encoders, one for text and one for images, and the encoders learn to output similar vectors for an image and its corresponding text caption.
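
To make the two-encoder setup concrete, here is a minimal sketch of CLIP-style scoring, assuming the Hugging Face transformers library and the public openai/clip-vit-base-patch32 checkpoint; the image file and captions are placeholders, not the study's data or the authors' code.

```python
# Minimal sketch of a two-encoder VLM (CLIP-style) scoring an image against
# candidate captions. The image path and captions are illustrative placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image file
captions = [
    "a dog jumping over a fence",
    "a cat sleeping on a couch",
]

# Each encoder produces a vector; the model scores how similar the image
# vector is to each caption vector.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image  # shape: (1, num_captions)
print(logits.softmax(dim=-1))
```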

“The captions express what is in the images — they are a positive label. And that is actually the whole problem. No one looks at an image of a dog jumping over a fence and captions it by saying ‘a dog jumping over a fence, with no helicopters,’” Ghassemi says.

Because the image-caption datasets don’t contain examples of negation, VLMs never learn to identify it.

To dig deeper into this problem, the researchers designed two benchmark tasks that test the ability of VLMs to understand negation.

For the first, they used a large language model (LLM) to re-caption images in an existing dataset by asking the LLM to think of related objects not in an image and write them into the caption. Then they tested models by prompting them with negation words to retrieve images that contain certain objects, but not others.
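
As a rough illustration of this retrieval test, here is a sketch under the same CLIP setup as above; the negated query and image file names are made up for illustration, not taken from the paper's benchmark.

```python
# Sketch of retrieval with a negated query: rank images by cosine similarity
# to the query embedding. A negation-blind model will rank images *with*
# pedestrians just as highly as images without them.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

query = "a street scene with cars but no pedestrians"  # illustrative negated query
images = [Image.open(p) for p in ["img0.jpg", "img1.jpg"]]  # placeholder files

text_inputs = processor(text=[query], return_tensors="pt", padding=True)
image_inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    t = model.get_text_features(**text_inputs)
    v = model.get_image_features(**image_inputs)

# Normalize and rank images from best to worst match.
t = t / t.norm(dim=-1, keepdim=True)
v = v / v.norm(dim=-1, keepdim=True)
print((v @ t.T).squeeze(-1).argsort(descending=True))
```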

For the second task, they designed multiple-choice questions that ask a VLM to select the most appropriate caption from a list of closely related options. These captions differ only by adding a reference to an object that doesn’t appear in the image or negating an object that does appear in the image.
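
A multiple-choice item of this kind reduces to scoring each candidate caption against the image and taking the argmax, as in this sketch (the captions and image file are invented examples, not items from the benchmark):

```python
# Sketch of the multiple-choice test: score closely related captions against
# one image and pick the highest-scoring one.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

choices = [
    "a dog jumping over a fence",
    "a dog jumping over a fence, with no helicopters",
    "a dog and a helicopter near a fence",
]
inputs = processor(text=choices, images=Image.open("dog.jpg"),  # placeholder
                   return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image  # shape: (1, num_choices)
print(choices[logits.argmax(-1).item()])
```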

The models often failed at both tasks, with image retrieval performance dropping by nearly 25 percent with negated captions. When it came to answering multiple-choice questions, the best models only achieved about 39 percent accuracy, with several models performing at or even below random chance.

One reason for this failure is a shortcut the researchers call affirmation bias — VLMs ignore negation words and focus on objects in the images instead.

“This does not just happen for words like ‘no’ and ‘not.’ Regardless of how you express negation or exclusion, the models will simply ignore it,” Alhamoud says.

This was consistent across every VLM they tested.

“A solvable problem”

Since VLMs aren’t typically trained on image captions with negation, the researchers developed datasets with negation words as a first step toward solving the problem.

Using a dataset with 10 million image-text caption pairs, they prompted an LLM to propose related captions that specify what is excluded from the images, yielding new captions with negation words.
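
A recaptioning step of this kind might look like the following sketch, assuming the OpenAI Python client; the model name and prompt wording are illustrative guesses, not the authors' actual pipeline.

```python
# Sketch of LLM-based recaptioning that injects negation words into captions.
# The model choice and prompt are placeholders, not the paper's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def negate_caption(caption: str) -> str:
    prompt = (
        f"Here is an image caption: '{caption}'. Name a related object that "
        "is plausibly absent from the image, then rewrite the caption to "
        "mention that absence in natural language (e.g., 'with no ...')."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(negate_caption("a dog jumping over a fence"))
```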

They had to be especially careful that these synthetic captions still read naturally, or it could cause a VLM to fail in the real world when faced with more complex captions written by humans.

They found that finetuning VLMs with their dataset led to performance gains across the board. It improved models’ image retrieval abilities by about 10 percent, while also boosting performance in the multiple-choice question answering task by about 30 percent.

“But our solution is not perfect. We are just recaptioning datasets, a form of data augmentation. We haven’t even touched how these models work, but we hope this is a signal that it is a solvable problem and others can take our solution and improve it,” Alhamoud says.

At the same time, he hopes their work encourages more users to think about the problem they want to use a VLM to solve and to design some examples to test it before deployment.
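
One cheap probe along those lines (a sketch, not from the paper) is to check whether the text encoder even separates a caption from its negated counterpart; a cosine similarity near 1.0 suggests the negation is being ignored.

```python
# Pre-deployment probe: compare the embedding of a caption with that of its
# negated counterpart. The example pair is illustrative.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

pairs = [("a chest X-ray with an enlarged heart",
          "a chest X-ray with no enlarged heart")]
for pos, neg in pairs:
    inputs = processor(text=[pos, neg], return_tensors="pt", padding=True)
    with torch.no_grad():
        emb = model.get_text_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)
    print(f"cosine({pos!r}, {neg!r}) = {(emb[0] @ emb[1]).item():.3f}")
```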

In the future, the researchers could expand on this work by teaching VLMs to process text and images separately, which may improve their ability to understand negation. In addition, they could develop further datasets that include image-caption pairs for specific applications, such as health care.
