AI model audits need a ‘trust, but verify’ approach to enhance reliability

by a Wireopedia member
May 10, 2025
in Blockchain, Crypto, Crypto Market, Cryptocurrency, Finance, Investing, Market

The following is a guest post and opinion of Samuel Pearton, CMO at Polyhedra.


Reliability remains a mirage in the ever-expanding realm of AI models, hampering mainstream AI adoption in critical sectors like healthcare and finance. AI model audits are essential to restoring reliability within the AI industry, helping regulators, developers, and users enhance accountability and compliance.

But AI model audits can be unreliable since auditors have to independently review the pre-processing (training), in-processing (inference), and post-processing (model deployment) stages. A ‘trust, but verify’ approach improves reliability in audit processes and helps society rebuild trust in AI.

Traditional AI Model Audit Systems Are Unreliable

AI model audits are useful for understanding how an AI system works and what impact it may have, and for providing evidence-based reports to industry stakeholders.

For instance, companies use audit reports to acquire AI models based on due diligence, assessment, and comparative benefits between different vendor models. These reports further ensure developers have taken necessary precautions at all stages and that the model complies with existing regulatory frameworks.

But AI model audits are prone to reliability issues due to their inherent procedural functioning and human resource challenges.

According to the European Data Protection Board’s (EDPB) AI auditing checklist, audits from a “controller’s implementation of the accountability principle” and “inspection/investigation carried out by a Supervisory Authority” could be different, creating confusion among enforcement agencies.

EDPB’s checklist covers implementation mechanisms, data verification, and impact on subjects through algorithmic audits. But the report also acknowledges audits are based on existing systems and don’t question “whether a system should exist in the first place.”

Besides these structural problems, auditor teams require updated domain knowledge of data sciences and machine learning. They also require complete training, testing, and production sampling data spread across multiple systems, creating complex workflows and interdependencies.

Any knowledge gap or error between coordinating team members can lead to a cascading effect and invalidate the entire audit process. As AI models become more complex, auditors will have additional responsibilities to independently verify and validate reports before aggregated conformity and remedial checks.

The AI industry’s progress is rapidly outpacing auditors’ capacity and capability to conduct forensic analysis and assess AI models. This leaves a void in audit methods, skill sets, and regulatory enforcement, deepening the trust crisis in AI model audits.

An auditor’s primary task is to enhance transparency by evaluating risks, governance, and underlying processes of AI models. When auditors lack the knowledge and tools to assess AI and its implementation within organizational environments, user trust is eroded.

A Deloitte report outlines the three lines of AI defense. In the first line, model owners and management have the main responsibility to manage risks. This is followed by the second line, where policy workers provide the needed oversight for risk mitigation.

The third line of defense is the most important, where auditors gauge the first and second lines to evaluate operational effectiveness. Subsequently, auditors submit a report to the Board of Directors, collating data on the AI model’s best practices and compliance.

To enhance reliability in AI model audits, the people and underlying tech must adopt a ‘trust but verify’ philosophy during audit proceedings.

A ‘Trust, But Verify’ Approach to AI Model Audits

‘Trust, but verify’ is a Russian proverb that U.S. President Ronald Reagan popularized during nuclear arms treaty negotiations between the United States and the Soviet Union. Reagan’s insistence on “extensive verification procedures that would enable both sides to monitor compliance” is a useful model for reinstating reliability in AI model audits.

In a ‘trust, but verify’ system, AI model audits require continuous evaluation and verification before their results can be trusted. In effect, an audit is never a one-off exercise in which a report is prepared and simply assumed to be correct.

So, even with stringent verification procedures and validation mechanisms for all key components, an AI model audit can never be considered final. In a research paper, Penn State engineer Phil Laplante and NIST Computer Security Division member Rick Kuhn call this the ‘trust but verify continuously’ AI architecture.

The need for constant evaluation and continuous AI assurance by leveraging the ‘trust but verify continuously’ infrastructure is critical for AI model audits. For example, AI models often require re-auditing and post-event reevaluation since a system’s mission or context can change over its lifespan.

A ‘trust but verify’ method during audits helps determine model performance degradation through new fault detection techniques. Audit teams can deploy testing and mitigation strategies with continuous monitoring, empowering auditors to implement robust algorithms and improved monitoring facilities.
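The degradation detection described above can be made concrete. The following is a minimal Python sketch, not taken from the article; the class name, thresholds, and interface are all hypothetical. It tracks a sliding window of prediction outcomes and flags the model when windowed accuracy falls materially below an audited baseline.

```python
from collections import deque

class DegradationMonitor:
    """Hypothetical sketch: flag model performance degradation by
    comparing a sliding window of prediction outcomes against a
    baseline accuracy established at audit time."""

    def __init__(self, baseline_accuracy, window_size=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        # Fixed-size window: old outcomes are evicted automatically.
        self.outcomes = deque(maxlen=window_size)

    def record(self, prediction, ground_truth):
        self.outcomes.append(prediction == ground_truth)

    def degraded(self):
        # Withhold judgment until the window is full.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        windowed = sum(self.outcomes) / len(self.outcomes)
        return windowed < self.baseline - self.tolerance

monitor = DegradationMonitor(baseline_accuracy=0.90, window_size=10,
                             tolerance=0.05)
for _ in range(10):
    monitor.record(1, 1)   # healthy phase: all predictions correct
assert not monitor.degraded()
for _ in range(5):
    monitor.record(0, 1)   # faults appear: windowed accuracy drops to 0.5
assert monitor.degraded()
```

In an audit setting, the baseline and tolerance would themselves be audited artifacts, and a triggered flag would feed the testing and mitigation strategies described above rather than act on the model directly.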

Per Laplante and Kuhn, “continuous monitoring of the AI system is an important part of the post-deployment assurance process model.” Such monitoring is possible through automatic AI audits where routine self-diagnostic tests are embedded into the AI system.
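One simple form such an embedded self-diagnostic could take is a fixed battery of canary inputs with known expected outputs, run routinely against the deployed model. The sketch below is illustrative only, with a hypothetical stand-in model; it is not an implementation from Laplante and Kuhn's paper.

```python
def self_diagnostic(model, canaries):
    """Run canary inputs with known expected outputs through the
    model and report any divergence for auditor review."""
    failures = [(x, expected, model(x))
                for x, expected in canaries
                if model(x) != expected]
    return {"passed": not failures, "failures": failures}

# Hypothetical stand-in for a deployed model: classifies by sign.
model = lambda x: "positive" if x > 0 else "non-positive"

canaries = [(3, "positive"), (-2, "non-positive"), (0, "non-positive")]
report = self_diagnostic(model, canaries)
assert report["passed"]
```

A failed report here would be the machine-side signal that hands off to the human half of the hybrid monitoring arrangement discussed next.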

Since internal diagnosis may have trust issues, a trust elevator with a mix of human and machine systems can monitor AI. These systems offer stronger AI audits by facilitating post-mortem and black box recording analysis for retrospective context-based result verification.

An auditor’s primary role is to referee and prevent AI models from crossing trust threshold boundaries. A ‘trust but verify’ approach enables audit team members to verify trustworthiness explicitly at each step. This solves the lack of reliability in AI model audits by restoring confidence in AI systems through rigorous scrutiny and transparent decision-making.

The post AI model audits need a ‘trust, but verify’ approach to enhance reliability appeared first on CryptoSlate.

Tags: Blockchain, Coin Surges, Cryptocurrencies, Cryptoslate, Market Stories, Trading
© 2024 WIREOPEDIA - All rights reserved.

No Result
View All Result
  • Home
  • Breaking News
  • World
  • UK
  • US
  • Entertainment
  • Business
  • Technology
  • Defense
  • Health Care
  • Politics
  • Strange
  • Crypto News
  • Contribute!

© 2024 WIREOPEDIA - All right reserved.

You have not selected any currencies to display