https://gtr.ukri.org/projects?ref=10065751
Advai is delighted to announce our latest research grant, which focuses on enhancing the trustworthiness of AI, particularly in Computer Vision. The project addresses the challenges of deploying Computer Vision systems, especially in safety-critical environments, where unexpected failures can lead to substandard performance, reputational damage, or regulatory issues.
Key Aspects of the Project:
Development of Sandbox Test Environments: We are creating independent testing environments for Computer Vision systems. These environments will act as proxies for real-world deployment, enabling us to test systems without the risks that accompany failures in live deployment (a minimal sketch of the idea appears after this list).
Establishing Reliable Metrics: Our goal is to develop metrics that predict failure modes in AI systems. These metrics will help identify potential engineering failures (such as imbalanced data or adversarial attacks) as well as cases where AI systems fail to meet societal expectations (such as biases related to race or gender); the second sketch below illustrates two of the simplest such signals.
Enhancing Trust in AI: By accurately predicting and mitigating failure modes, we aim to foster trust in AI technologies. Our approach will improve predictions of real-world performance during development and establish tools that flag likely real-world model failures.
Benefits for Trustworthy AI: The project will enable more efficient model development by providing early indicators of likely deployment success or failure. It will also accelerate the production of data that can help establish benchmarks for model performance in real-world settings.
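To make the sandbox idea concrete, here is a minimal sketch, not the project's actual implementation, of a harness that evaluates a classifier under simulated deployment conditions before it ever reaches production. The shift functions, the predict interface, and the toy data are all assumptions made for illustration:

```python
import numpy as np

# Hypothetical deployment shifts the sandbox can simulate. A real sandbox
# would draw these from measured conditions (sensor noise, weather, blur).
def gaussian_noise(images, severity=0.1):
    noisy = images + np.random.normal(0.0, severity, images.shape)
    return np.clip(noisy, 0.0, 1.0)

def brightness_shift(images, severity=0.3):
    return np.clip(images + severity, 0.0, 1.0)

SHIFTS = {
    "clean": lambda x: x,
    "gaussian_noise": gaussian_noise,
    "brightness_shift": brightness_shift,
}

def run_sandbox(predict, images, labels):
    """Evaluate `predict` under each simulated condition and report accuracy.

    `predict` is assumed to map a batch of float images in [0, 1] to integer
    class labels; no live deployment is touched at any point.
    """
    report = {}
    for name, shift in SHIFTS.items():
        preds = predict(shift(images))
        report[name] = float(np.mean(preds == labels))
    return report

# Toy stand-in model and data, purely for illustration.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = rng.random((64, 32, 32, 3))
    labels = rng.integers(0, 10, size=64)
    dummy_predict = lambda x: rng.integers(0, 10, size=len(x))
    print(run_sandbox(dummy_predict, images, labels))
```

A harness like this lets the same model be stress-tested repeatedly and cheaply, with each simulated condition standing in for a class of deployment failure.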
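For the metrics work, the sketch below illustrates two of the simplest failure signals named above: a robustness drop under simulated shifts (an engineering-failure signal) and a worst-group accuracy gap (a societal-expectation signal such as demographic bias). Again, the function names and toy values are illustrative assumptions, not the metrics the project will develop:

```python
import numpy as np

def robustness_drop(clean_acc, shifted_accs):
    """Engineering-failure signal: accuracy lost under simulated shifts
    relative to clean performance (larger means more fragile)."""
    return clean_acc - min(shifted_accs)

def worst_group_gap(preds, labels, groups):
    """Societal-expectation signal: accuracy gap between the best- and
    worst-performing subgroup (e.g., a demographic attribute)."""
    accs = []
    for g in np.unique(groups):
        mask = groups == g
        accs.append(float(np.mean(preds[mask] == labels[mask])))
    return max(accs) - min(accs)

# Illustrative use with toy values; the numbers are arbitrary.
if __name__ == "__main__":
    drop = robustness_drop(0.92, [0.88, 0.71, 0.85])
    preds = np.array([0, 1, 1, 0, 1, 0])
    labels = np.array([0, 1, 0, 1, 0, 1])
    groups = np.array(["a", "a", "a", "b", "b", "b"])
    gap = worst_group_gap(preds, labels, groups)
    print(f"robustness drop: {drop:.2f}, worst-group gap: {gap:.2f}")
```

Tracked during development, signals like these give early, quantitative warning of the failure modes described above before a system is ever deployed.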
This initiative is a significant step in Advai's commitment to advancing trustworthy and ethical AI applications, ensuring that AI systems are reliable, fair, and effective in various real-world scenarios.