IAS Gyan

Daily News Analysis

This tool can help detect data vulnerabilities in AI-powered systems

14th November, 2020 | Science and Technology

Context: A new open-source tool called ‘Machine Learning Privacy Meter’ has been developed to help detect data vulnerabilities in artificial intelligence (AI)-powered systems and protect them against possible attacks.

  • A team of researchers at the National University of Singapore (NUS) has developed the tool, along with a general attack formula that provides a framework for testing different types of inference attacks on AI systems.
  • AI models used in various services are trained on data sets that include sensitive information.
  • The models are vulnerable to inference attacks that allow hackers to extract sensitive information about training data.
  • In an attack, hackers repeatedly query the AI service and analyse the responses to find patterns. From these patterns, they can infer whether a specific record was used to train the model, and can even reconstruct parts of the original dataset.
  • Inference attacks are difficult to detect, as the system simply treats the attacker as a regular user while supplying information.
  • The tool can simulate such attacks and quantify how much the model leaks about individual data records in its training set (a simplified illustration follows this list).
  • It also highlights the vulnerable areas in the training data and suggests techniques that organisations can adopt in advance to mitigate a possible inference attack.
  • Data protection regulations such as the General Data Protection Regulation mandate assessing the privacy risks to data when machine learning is used.
  • This tool can aid companies in achieving regulatory compliance by generating reports for Data Protection Impact Assessments.
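
The following is a minimal, self-contained sketch of the kind of confidence-based membership inference attack described above, written in Python with scikit-learn. Everything here (the dataset, the model, the threshold, and all variable names) is illustrative only; this is not the Machine Learning Privacy Meter's actual API, just a hand-rolled demonstration of the underlying idea.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A public dataset stands in for sensitive training data.
X, y = load_breast_cancer(return_X_y=True)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

# The "victim" model is trained only on the member records.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_member, y_member)

# The attacker repeatedly queries the model and records its confidence
# (the highest predicted class probability) for each record.
conf_member = model.predict_proba(X_member).max(axis=1)
conf_nonmember = model.predict_proba(X_nonmember).max(axis=1)

# Heuristic: models tend to be more confident on records they were
# trained on, so the attacker guesses "member" above a threshold.
threshold = 0.9
tpr = (conf_member >= threshold).mean()     # members correctly flagged
fpr = (conf_nonmember >= threshold).mean()  # non-members wrongly flagged

# Membership advantage (TPR minus FPR) is one simple leakage score:
# 0 means the attack is no better than guessing; 1 means total leakage.
print(f"attack TPR: {tpr:.2f}, FPR: {fpr:.2f}, advantage: {tpr - fpr:.2f}")

# Per-record view: the most confidently memorised training records are
# the most exposed, and the first candidates for mitigation.
most_exposed = np.argsort(-conf_member)[:5]
print("most exposed training record indices:", most_exposed)

Because an overfitted model is noticeably more confident on records it was trained on, the gap between TPR and FPR exposes membership information; quantifying this kind of signal per record, across many attack variants, is broadly what a tool like the Privacy Meter automates.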

https://www.thehindu.com/sci-tech/technology/this-tool-can-help-detect-data-vulnerabilities-in-ai-powered-systems/article33084682.ece?homepage=true