
New AI Framework Combines Transparency with High Accuracy in Image Classification

Posted: 15 Jun 2025 00:42
by peterenlkert
A new explainable artificial intelligence (AI) method developed at the University of Michigan enables image classification models to deliver transparent decisions without sacrificing accuracy—a breakthrough particularly valuable for high-stakes applications like medical diagnostics.

Conventional AI systems often return results without any insight into why a decision was made. For instance, if a model labels a tumor as malignant but doesn't indicate whether the call was driven by its size, shape, or some anomaly in the image, clinicians can't validate or explain the finding. Worse, the model might have relied on irrelevant or misleading patterns that humans would recognize as spurious.

“We need AI systems we can trust, especially in sensitive domains like health care,” said Salar Fattahi, assistant professor of industrial and operations engineering at U-M and senior author of the study. “If we don’t understand how the model makes decisions, we can’t rely on it safely. My goal is to help build AI that is not just accurate, but also transparent and easy to interpret.”

AI models classify images by associating them with numerical vectors known as concept embeddings. These embeddings represent concepts such as “fracture,” “arthritis,” or “healthy bone.” The field of explainable AI (XAI) works to make these concept embeddings interpretable—so that humans can understand how they influence the AI’s decisions.
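
To make the idea concrete, here is a rough, hypothetical sketch (not the authors' code) of how a model might classify an image by comparing its embedding against concept embeddings; the concept names, dimensions, and random vectors are placeholders for illustration only.

import numpy as np

# Hypothetical concept embeddings, e.g. produced by a pretrained text encoder.
concepts = {
    "fracture": np.random.randn(512),
    "arthritis": np.random.randn(512),
    "healthy bone": np.random.randn(512),
}

def classify(image_embedding):
    # Score the image against each concept by cosine similarity and
    # return the best-matching concept name.
    scores = {}
    for name, vec in concepts.items():
        scores[name] = float(
            np.dot(image_embedding, vec)
            / (np.linalg.norm(image_embedding) * np.linalg.norm(vec))
        )
    return max(scores, key=scores.get)

# Usage with a made-up 512-dimensional image embedding.
print(classify(np.random.randn(512)))

A pipeline of this kind exposes which concept drove a prediction; the harder question, and the one the U-M team tackles, is what happens when the concept vectors themselves are unreliable.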

However, many current XAI techniques are added onto AI systems after training, rather than being part of the model from the start. These post hoc methods may identify key decision factors, but paradoxically, they often lack explainability themselves. Moreover, they treat concept embeddings as fixed, even when those embeddings are derived from flawed or inconsistent sources, such as models like CLIP that are trained on large, uncurated image-text datasets.

To address these issues, the U-M team developed Constrained Concept Refinement (CCR), a new framework that embeds interpretability directly into the model’s architecture and allows concept representations to adjust based on the specific task. The model’s flexibility can be tuned to prioritize either interpretability—with more restrictions on concept embeddings—or accuracy, by allowing more adaptive embeddings.

This means if a concept like “healthy bone” is misrepresented in a pretrained dataset, CCR can automatically refine it based on the actual task and data at hand. This dual optimization leads to enhanced accuracy and clearer decision-making.
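
One loose way to picture such constrained refinement (a sketch under assumed details, not the published CCR algorithm) is a projected-gradient update that lets each concept embedding drift from its pretrained value only within a chosen radius; a small radius keeps the concept close to its original, human-recognizable meaning, while a larger radius gives the model more freedom to adapt it to the task.

import numpy as np

def project_to_ball(refined, anchor, radius):
    # Keep the refined embedding within `radius` of its pretrained anchor.
    delta = refined - anchor
    norm = np.linalg.norm(delta)
    return refined if norm <= radius else anchor + delta * (radius / norm)

def refine_step(concept, anchor, grad, lr=0.1, radius=0.5):
    # One illustrative update: take a gradient step on the task loss,
    # then project back into the trust region around the pretrained concept.
    return project_to_ball(concept - lr * grad, anchor, radius)

# Usage with made-up vectors: `anchor` stands in for a pretrained
# "healthy bone" embedding, `grad` for a task-loss gradient.
anchor = np.random.randn(512)
refined = refine_step(anchor.copy(), anchor, grad=np.random.randn(512))

The radius here plays the role of the tuning knob described above: tighter constraints favor interpretability, looser ones favor accuracy.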

“What surprised me most,” Fattahi added, “was realizing that interpretability doesn’t have to come at the expense of performance. With the right model design, we can have both—accurate predictions and decisions we can actually explain.”

When tested on standard image classification benchmarks like CIFAR10/100, ImageNet, and Places365, CCR outperformed two leading explainable AI methods—CLIP-IP-OMP and label-free CBM—in both prediction accuracy and computational efficiency. It also cut runtime by a factor of ten, offering a faster and more cost-effective solution.

“Although our current experiments focus on image classification, the method’s low cost and adaptability make it a strong candidate for other machine learning domains,” said Geyu Liang, U-M doctoral graduate and lead author of the study.

For example, in finance, where AI is used to decide loan approvals, explainable AI could clarify whether a rejection was based on legitimate factors like income or credit history, rather than biased or unrelated variables—improving transparency and fairness.

“We’ve only scratched the surface,” said Fattahi. “What excites me most is the strong evidence that explainability can be built into modern AI efficiently and without compromise.”