Move beyond testing with sample image datasets. 

Introducing a new way to evaluate the artificial intelligence of identity verification vendors.

AI is core to online identity verification. As the complexity of verification tools has advanced, so too have the methods to deceive them.

Advanced machine learning claims are difficult to assess.

Advai can cross-evaluate different providers against the vulnerabilities most important to your organisation. 

With increased resilience to online threats, adversarial activity, and fraud, you can deploy your identity verification system with confidence. 

What's involved?

  1. Advanced IDV Testing Workbench.

    Evaluate modern IDV technologies such as optical document verification, facial biometrics, and liveness detection, along with fraud detection and risk-evaluation models.

  2. Test for and reduce bias.

    Mitigate ethical and regulatory concerns by finding weak spots in your IDV provider. Discover whether certain features, such as eye, hair, and skin colour, or the presence of beards and glasses, have an undue influence over verifications.

  3. Replace current inadequate tests.

    Using sample datasets to assess verification providers limits you to the available data and the speed of manual testing. Our approach attacks verification systems at a sub-feature layer, algorithmically optimising to discover weaknesses. 
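As a rough illustration of the bias testing described in point 2, a minimal sketch might compare per-group verification pass rates against the overall rate and flag groups that deviate beyond a tolerance. All group names, data, and the tolerance threshold below are invented for illustration; this is not Advai's actual methodology.

```python
# Illustrative sketch: flag demographic or appearance-based groups whose
# verification pass rate deviates from the overall rate.
# Groups, data, and the 0.05 tolerance are invented examples.

def pass_rates(results):
    """results: list of (group, passed) tuples -> per-group pass rate."""
    totals, passes = {}, {}
    for group, passed in results:
        totals[group] = totals.get(group, 0) + 1
        passes[group] = passes.get(group, 0) + int(passed)
    return {g: passes[g] / totals[g] for g in totals}

def flag_bias(results, tolerance=0.05):
    """Return groups whose pass rate differs from the overall rate
    by more than the tolerance."""
    rates = pass_rates(results)
    overall = sum(p for _, p in results) / len(results)
    return {g: r for g, r in rates.items() if abs(r - overall) > tolerance}

# 100 attempts per group: users with glasses pass 90%, users with beards 70%.
results = [("glasses", True)] * 90 + [("glasses", False)] * 10 \
        + [("beard", True)] * 70 + [("beard", False)] * 30
print(flag_bias(results))  # both groups sit 0.1 away from the 0.8 overall rate
```

In practice the groups would come from labelled evaluation data and the tolerance from the relevant regulatory or ethical standard, but the shape of the check is the same.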

How it works


Advai uses a library of perturbations, curated over three years of leading R&D, to tease out system vulnerabilities.

Adversarial perturbations can be created to fool a system into thinking one person is another, as seen with Mark and Johnny here.

These perturbations can be invisible to the human eye, so human control checks miss these attacks.
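The idea of an imperceptible perturbation can be sketched with a toy example. Below, a linear "matcher" is a stand-in for a real face-verification model (the weights, image, and epsilon budget are all invented); an FGSM-style perturbation, bounded so no pixel changes by more than epsilon, reliably pushes the match score upward while remaining far too small to see:

```python
import numpy as np

# Toy sketch (not Advai's actual method): a linear "face matcher" scores
# how well an input matches a target identity. A perturbation bounded by
# a tiny per-pixel budget (epsilon) shifts the score in the attacker's
# favour while staying visually imperceptible.
rng = np.random.default_rng(0)
w = rng.normal(size=64)        # stand-in for the matcher's weights
image = rng.normal(size=64)    # stand-in for an input image

def match_score(x):
    return float(w @ x)

epsilon = 0.02                       # per-pixel budget, tiny vs. pixel range
adv = image + epsilon * np.sign(w)   # step in the score-increasing direction

print(match_score(image), "->", match_score(adv))
print("max pixel change:", np.max(np.abs(adv - image)))
```

Real attacks optimise against deep networks rather than a linear model, but the principle is the same: the change is bounded below human perception yet moves the model's decision.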

Combat a sophisticated fraud landscape.

Reduce your risk of fraud by measuring and improving the robustness of your verification provider's AI. 


Quantified robustness will increase your confidence in deployment.


We can work with your vendor to suggest ways to strengthen their systems, specific to your risk appetite and unique industry needs.

Test for real world vulnerabilities and attacks.

Reduce the exclusion of legitimate users.


The real world comes with varying lighting conditions, device types, and potential biases in the training data (such as under-represented genders or ethnicities, or even features like beards and glasses).


You can't rely on regular sample tests, because they evaluate the system on the same kind of data the models were trained on. We've developed methods to perturb sample images, massively expanding the available data and improving real-world resilience.
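A minimal sketch of the image-perturbation idea: generate many variants of a single sample (lighting shifts, sensor noise) so a system can be evaluated and hardened beyond its original data. The parameter ranges below are assumptions for illustration, not Advai's actual pipeline.

```python
import numpy as np

# Illustrative sketch: expand one sample image into several perturbed
# variants simulating real-world conditions. Brightness and noise
# ranges are invented examples.
rng = np.random.default_rng(1)

def perturb(image, n_variants=8):
    variants = []
    for _ in range(n_variants):
        brightness = rng.uniform(0.7, 1.3)        # varying lighting
        noise = rng.normal(0, 0.02, image.shape)  # sensor noise
        v = np.clip(image * brightness + noise, 0.0, 1.0)
        variants.append(v)
    return variants

sample = rng.uniform(size=(32, 32))  # stand-in for a face image
augmented = perturb(sample)
print(len(augmented), augmented[0].shape)
```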

Find the optimal verification provider.

Comparative benchmarking helps you find a verification provider for optimal compliance and risk reduction.


Enhance your cost benefit analysis in vendor selection, enabling you to select a partner that matches your risk appetite.


We produce new information that helps you balance speed and accuracy, alongside resilience to adversarial activity. 
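The trade-off between speed, accuracy, and resilience can be made explicit with a weighted composite score per vendor. The vendor names, metric values, and weights below are invented examples; in practice the weights would reflect your own risk appetite.

```python
# Hypothetical sketch of comparative benchmarking: combine speed,
# accuracy, and adversarial-resilience scores into one weighted score
# per vendor. All names, values, and weights are invented examples.
WEIGHTS = {"speed": 0.2, "accuracy": 0.4, "resilience": 0.4}

vendors = {
    "VendorA": {"speed": 0.9, "accuracy": 0.95, "resilience": 0.60},
    "VendorB": {"speed": 0.7, "accuracy": 0.92, "resilience": 0.85},
}

def composite(metrics):
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

ranked = sorted(vendors, key=lambda v: composite(vendors[v]), reverse=True)
for name in ranked:
    print(name, round(composite(vendors[name]), 3))
```

Here the slightly slower, slightly less accurate vendor ranks higher once resilience is weighted in, which is exactly the kind of trade-off that robustness testing surfaces.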

Pinpoint the unique failure modes of AI.

Inhuman intelligence makes inhuman errors.


Evaluate performance claims with tests built to verify AI models, originally designed to assure vision systems for the UK Ministry of Defence.


The new strengths provided by AI systems also mean new weaknesses. Deep knowledge from adversarial AI research pinpoints these vulnerabilities and strengthens systems against advanced AI-powered attacks. 

  • Discern between otherwise indistinguishable claims about accuracy and security.

  • Cut through the noise of marketing claims.

  • Reduce risks associated with uninformed decisions.

  • Save time and resources in vendor selection.

Use the most robust verification provider.

Book Call