[IT.T.11] Cyber-physical attacks against machine learning models exploit the interaction between the cyber (computing and communication) components and the physical components of a system. They target machine learning models integrated into cyber-physical systems (CPS) and aim to cause physical consequences by manipulating the input data, the training process, or the model itself.
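One of the attack surfaces named above, manipulation of the training process, can be sketched as a simple label-flipping poisoning step. This is an illustrative sketch only: the dataset format (a list of (feature, label) pairs with binary labels) and the `poison_labels` helper are assumptions, not part of any specific catalogued attack.

```python
import random

def poison_labels(dataset, fraction, seed=0):
    """Flip the labels of a random fraction of training samples.

    Illustrates the 'manipulating the training data' attack surface.
    The (feature, label) pair format and binary labels are assumed
    for illustration only.
    """
    rng = random.Random(seed)
    poisoned = list(dataset)
    k = int(len(poisoned) * fraction)
    for i in rng.sample(range(len(poisoned)), k):
        x, y = poisoned[i]
        poisoned[i] = (x, 1 - y)  # flip a binary label
    return poisoned

# Toy training set: label is 1 when the feature exceeds 5.
clean = [(x, int(x > 5)) for x in range(10)]
poisoned = poison_labels(clean, fraction=0.3)
flipped = sum(1 for a, b in zip(clean, poisoned) if a[1] != b[1])
```

A model trained on the poisoned set learns a corrupted decision boundary; in a CPS this corrupted output can drive a physical actuator incorrectly.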
System Asset: processing hardware running the ML model.
Business Asset: input data.
Security Criteria: availability.
Vulnerabilities:
Threat agent: an attacker operating in either a white-box or a black-box scenario. In the white-box scenario, the attacker is assumed to have complete knowledge of the target machine learning model: its architecture, parameters, training data, and learning algorithm. In the black-box scenario, the attacker has no knowledge of the target model's architecture, parameters, or training data, and is assumed to be able to interact with the model only by sending it inputs and observing the outputs.
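The black-box assumption above can be sketched as a query-only interface: the attacker sees nothing but input/output pairs, yet can still probe the model's behavior. The `BlackBoxModel` wrapper, the query budget, and the toy threshold target below are illustrative assumptions, not a description of a specific real system.

```python
class BlackBoxModel:
    """Wraps a model so an attacker can only send inputs and observe
    outputs, matching the black-box threat-agent assumption.

    The wrapped predict function and the query budget are
    illustrative assumptions.
    """

    def __init__(self, predict_fn, max_queries=1000):
        self._predict = predict_fn
        self._queries = 0
        self._max_queries = max_queries

    def query(self, x):
        if self._queries >= self._max_queries:
            raise RuntimeError("query budget exhausted")
        self._queries += 1
        return self._predict(x)  # the output is all the attacker observes

# Toy target: outputs 1 when the input exceeds a hidden threshold (7.5).
target = BlackBoxModel(lambda x: int(x > 7.5))

# The attacker locates the decision boundary purely through queries,
# bisecting on the observed outputs.
lo, hi = 0.0, 10.0
for _ in range(20):
    mid = (lo + hi) / 2
    if target.query(mid):
        hi = mid
    else:
        lo = mid
boundary = (lo + hi) / 2
```

Even without any knowledge of the model internals, repeated queries recover the decision boundary to high precision, which is why query access alone is treated as a meaningful threat-agent capability.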
Attack methods:
Impact and harm: loss of availability of the targeted machine learning model.
Security requirement: The machine learning system must be resistant to malicious cyber-physical attacks.
Security controls: