Vulnerability Found in AI Image Recognition

Summary: A new study reveals a vulnerability in AI image recognition systems due to their exclusion of the alpha channel, which controls image transparency. Researchers developed “AlphaDog,” an attack method that manipulates transparency in images, allowing hackers to distort visuals like road signs or medical scans in ways undetectable by AI.

Tested across 100 AI models, AlphaDog exploits this transparency flaw, posing significant risks to road safety and healthcare diagnostics. By highlighting these blind spots in image transparency processing, the study urges updates to AI models to secure critical sectors.

The researchers are collaborating with tech giants to address this issue and safeguard image recognition platforms. This gap underscores the importance of thorough security in AI development.

Key Facts

  • AlphaDog manipulates image transparency, misleading AI models in fields like road safety and telehealth.
  • Most AI systems omit the alpha channel, which is crucial for handling image transparency accurately.
  • Researchers are working with tech companies to integrate alpha channel processing and secure AI.

Source: UT San Antonio

Artificial intelligence can help people process and comprehend large amounts of data with precision, but according to a new study, the image recognition platforms and computer vision models built into modern AI frequently overlook an important back-end feature: the alpha channel, which controls the transparency of images.

Researchers at The University of Texas at San Antonio (UTSA) developed a proprietary attack called AlphaDog to study how hackers can exploit this oversight.

Their findings are described in a paper written by Guenevere Chen, an assistant professor in the UTSA Department of Electrical and Computer Engineering, and her former doctoral student, Qi Xia ’24, and published by the Network and Distributed System Security Symposium 2025.

In the paper, the UTSA researchers describe the technology gap and offer recommendations to mitigate this type of cyber threat.

“We have two targets. One is a human victim, and one is AI,” Chen explained.

To assess the vulnerability, the researchers developed AlphaDog, an attack that exploits the alpha channel in images. The attack simulator works by manipulating image transparency, causing humans to see an image differently than machines do.
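
That mismatch is simple to demonstrate. Below is a minimal sketch, in Python with Pillow, of the two views of a single PNG; this illustrates the concept rather than the researchers' code, and the file name sign.png is a hypothetical placeholder.

    from PIL import Image

    # Hypothetical attack image carrying an alpha channel.
    img = Image.open("sign.png").convert("RGBA")

    # What a human sees: the image composited over the viewer's background
    # (browsers and image viewers typically blend against white or the page color).
    background = Image.new("RGBA", img.size, (255, 255, 255, 255))
    human_view = Image.alpha_composite(background, img).convert("RGB")

    # What many AI pipelines see: the raw RGB channels, with alpha silently dropped.
    machine_view = img.convert("RGB")

If the alpha channel has been crafted adversarially, human_view and machine_view can contain entirely different content, even though both come from the same file.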

The researchers generated 6,500 AlphaDog attack images and tested them across 100 AI models, including 80 open-source systems and 20 cloud-based AI platforms like ChatGPT.

They found that AlphaDog excels at targeting grayscale regions within an image, enabling attackers to compromise the integrity of purely grayscale images and colored images containing grayscale regions.

The researchers tested images in a variety of everyday scenarios.

They found gaps in AI that pose a significant risk to road safety. Using AlphaDog, for example, they could manipulate the grayscale elements of road signs, which could potentially mislead autonomous vehicles.

Likewise, they found they could alter grayscale images like X-rays, MRIs and CT scans, potentially creating a serious threat that could lead to misdiagnoses in the realm of telehealth and medical imaging.

This could also endanger patient safety and open the door to fraud, such as manipulating insurance claims by altering an X-ray so that a healthy leg appears broken.

They also found a way to alter images of people. By targeting the alpha channel, the UTSA researchers could disrupt facial recognition systems.

AlphaDog works by leveraging the differences in how AI and humans process image transparency. Digital images typically carry red, green, blue and alpha (RGBA) channels, with the alpha values defining the opacity of each color.

The alpha channel indicates how opaque each pixel is and allows an image to be combined with a background image, producing a composite image that has the appearance of transparency.
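
In pixel terms, that combination is the standard “over” operation. A minimal sketch, assuming image values normalized to the range [0, 1]:

    import numpy as np

    def composite_over(fg, alpha, bg):
        """Standard 'over' compositing: the image a viewer actually displays.

        fg, bg: float arrays in [0, 1] with shape (H, W, 3).
        alpha:  per-pixel opacity in [0, 1], shape (H, W, 1) so it broadcasts.
        alpha = 1 shows the foreground pixel; alpha = 0 shows only the background.
        """
        return alpha * fg + (1.0 - alpha) * bg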

However, using AlphaDog, the researchers found that the AI models they tested do not read all four RGBA channels; instead they only read data from the RGB channels.
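
Put together, those two facts supply the attack's arithmetic. The following is a hypothetical back-of-the-envelope construction, not the paper's algorithm: store the content the model should see in the RGB channels, then solve per pixel for the alpha that makes the human-visible composite match a benign image, assuming the viewer blends over a known background such as white.

    import numpy as np

    def solve_alpha(human_view, machine_view, bg=1.0):
        """Alpha channel that makes `machine_view` composite to `human_view`.

        Grayscale arrays in [0, 1]. Rearranging composite = a*fg + (1 - a)*bg:
            a = (human_view - bg) / (machine_view - bg)
        A single alpha value scales R, G and B identically, which helps explain
        why the attack reportedly works best in grayscale regions, where all
        three channels need the same correction.
        """
        a = (human_view - bg) / (machine_view - bg - 1e-9)
        return np.clip(a, 0.0, 1.0)  # targets outside [0, 1] are unreachable

A model that reads only RGB then classifies on machine_view, while a person looking at the rendered file sees something close to human_view.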

“AI is created by humans, and the people who wrote the code focused on RGB but left the alpha channel out. In other words, they wrote code for AI models to read image files without the alpha channel,” said Chen. “That’s the vulnerability. The exclusion of the alpha channel in these platforms leads to data poisoning.”

She added, “AI is important. It’s changing our world, and we have so many concerns.”

Chen and Xia are working with several key stakeholders, including Google, Amazon and Microsoft, to mitigate the vulnerability that gives AlphaDog the ability to compromise systems.

About this AI research news

Author: Andrea Ari Castaneda
Source: UT San Antonio
Contact: Andrea Ari Castaneda – UT San Antonio
Image: The image is credited to Neuroscience News
