Software to Verify AI Knowledge and Ensure Data Security

Generative artificial intelligence (AI) systems have attracted significant interest worldwide. Against this backdrop, researchers at the University of Surrey have developed software that can assess how much information an AI has extracted from an organization’s digital database.

Enhancing Online Security Protocols

Using Surrey’s verification software, companies can bolster their online security measures. The tool gives organizations insight into whether an AI has learned more than intended or has accessed sensitive data.

The software can also determine whether an AI has discovered, and is able to exploit, flaws in software code. In online gaming, for example, it could identify whether an AI consistently wins at online poker by exploiting a coding error.

Understanding AI’s Knowledge

Dr. Solofomampionona Fortunat Rajaona, lead author of the research paper and a Research Fellow in formal verification of privacy at the University of Surrey, explains the significance of the work. AI systems routinely interact with one another or with humans, such as self-driving cars on a highway or robots in a hospital. Determining what an AI actually knows has been a persistent challenge, and the team’s verification software offers a solution.

Dr. Rajaona further elaborates, “Our verification software can determine the AI’s learning capability from their interactions, whether they possess adequate knowledge for successful cooperation, or if their knowledge breaches privacy. By verifying what AI has learned, we can provide organizations with the confidence to deploy AI effectively and securely.”
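To make these checks concrete, consider a purely illustrative sketch, written in Python for this article rather than taken from the Surrey tool, whose interface the research announcement does not describe. A toy agent accumulates facts from its interactions, and two hypothetical audits (audit_privacy and audit_cooperation) test whether its knowledge breaches a privacy policy or falls short of what a shared task requires. Every name and fact below is invented for illustration.

```python
# Illustrative sketch only: a toy "knowledge audit" in the spirit of the
# checks described above. Names and facts are hypothetical; this is not
# the University of Surrey verification software's interface.
from dataclasses import dataclass, field


@dataclass
class Agent:
    """A toy AI agent that accumulates facts from its interactions."""
    name: str
    knowledge: set[str] = field(default_factory=set)

    def observe(self, fact: str) -> None:
        # Each interaction may teach the agent a new fact.
        self.knowledge.add(fact)


def audit_privacy(agent: Agent, private_facts: set[str]) -> set[str]:
    """Return the facts the agent knows that the policy marks as private."""
    return agent.knowledge & private_facts


def audit_cooperation(agent: Agent, required_facts: set[str]) -> set[str]:
    """Return the facts the agent still lacks for successful cooperation."""
    return required_facts - agent.knowledge


# Example: a hospital robot that has read more records than it should.
robot = Agent("ward_robot")
for fact in ("corridor_map", "shift_schedule", "patient_42_diagnosis"):
    robot.observe(fact)

print(audit_privacy(robot, {"patient_42_diagnosis"}))                 # breach found
print(audit_cooperation(robot, {"corridor_map", "shift_schedule"}))   # empty set
```

The real system reasons formally about what an AI can infer from its interactions rather than checking explicit set membership; the sketch only conveys the shape of the questions being asked: has the agent learned too much, and has it learned enough?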

Recognized Excellence

The Surrey team’s paper won the Best Paper Award at the 25th International Symposium on Formal Methods.

Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, emphasizes the importance of developing tools that can validate the performance of generative AI models. Such tools are crucial for the safe and responsible deployment of AI, especially given the recent surge of interest in generative models. Professor Hilton believes this research is a significant step towards preserving the privacy and integrity of the datasets used for AI training.

For more information, visit: https://openresearch.surrey.ac.uk/esploro/outputs/99723165702346

