[IT.T.7] A Denial of Service (DoS) attack aims to disrupt the normal functioning of a machine learning system and reduce its availability, making it unusable for legitimate users. This is typically achieved by overwhelming the system with a high volume of requests or with resource-intensive inputs, exhausting its computational resources (compute time, memory, or request-handling capacity).
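As an illustration of the flooding mechanism (a minimal sketch, not any particular system: the function names, queue capacity, and per-request cost below are all assumptions), the following simulation shows how a request rate that exceeds the service rate of a bounded inference queue causes most requests, including legitimate ones, to be dropped:

```python
from collections import deque

QUEUE_CAPACITY = 10      # max pending requests the service will hold (assumed)
COST_PER_REQUEST = 0.05  # seconds of compute per inference (assumed)

def serve(request_rate_per_sec: int, duration_sec: int):
    """Return (served, dropped) counts after `duration_sec` of constant load."""
    queue = deque()
    served = dropped = 0
    for _ in range(duration_sec):
        # Arrivals during this second.
        for _ in range(request_rate_per_sec):
            if len(queue) < QUEUE_CAPACITY:
                queue.append(1)
            else:
                dropped += 1  # availability loss for whoever sent this request
        # The worker can finish at most 1 / COST_PER_REQUEST requests per second.
        budget = int(1 / COST_PER_REQUEST)
        while queue and budget > 0:
            queue.popleft()
            served += 1
            budget -= 1
    return served, dropped

# Normal load: arrivals stay below the service rate, nothing is dropped.
print(serve(request_rate_per_sec=10, duration_sec=5))   # → (50, 0)
# Flood: arrivals far exceed the service rate, most requests are dropped.
print(serve(request_rate_per_sec=500, duration_sec=5))  # → (50, 2450)
```

Note that the number of *served* requests is identical in both runs: the attack does not need to break the model itself, only to crowd out legitimate traffic.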
System Asset: ML system input/API.
Business Asset: input data.
Security Criteria: availability.
Vulnerabilities:
Threat agent: white-box and black-box scenarios. In the white-box scenario, the attacker is assumed to have complete knowledge of the target machine learning model: its architecture, parameters, training data, and learning algorithm. In the black-box scenario, the attacker has no knowledge of the model's architecture, parameters, or training data and can only interact with the model by sending it inputs and observing the outputs.
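The black-box interaction model described above can be sketched as follows; `target_predict` is a hypothetical stand-in for a remote model endpoint, introduced only for illustration:

```python
# Black-box access: the attacker sees only an opaque predict() callable.
def target_predict(x):
    # Internals (architecture, weights, training data) are hidden from the caller.
    return int(sum(x) > 0)

def black_box_probe(predict, inputs):
    """All an attacker can do in the black-box setting: query and record outputs."""
    return [(x, predict(x)) for x in inputs]

observations = black_box_probe(target_predict, [[1.0, 2.0], [-3.0, 0.5]])
print(observations)  # → [([1.0, 2.0], 1), ([-3.0, 0.5], 0)]
```

Even this query-only access suffices for a DoS threat agent, since the attack needs volume or expensive inputs rather than knowledge of the model internals.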
Attack methods:
Impact and harm: Negates the availability of the targeted machine learning model, leading to potential system failures, service unavailability, increased processing times, and reduced quality of service.
Security requirement: The machine learning system must be resistant to denial-of-service attacks.
Security controls:
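One commonly cited control against request flooding is per-client rate limiting. Below is a minimal token-bucket sketch (an illustrative assumption, not a prescribed control for this entry; the rate and capacity values are arbitrary):

```python
import time

class TokenBucket:
    """Admit `rate` requests/sec on average, with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # request rejected: compute is preserved for other clients

# Deterministic demo with a manual clock (an assumption for illustration).
clock_t = [0.0]
bucket = TokenBucket(rate=5.0, capacity=10, clock=lambda: clock_t[0])
burst = [bucket.allow() for _ in range(20)]  # 20 requests arrive at t = 0
print(burst.count(True))                     # → 10: only the burst capacity passes
clock_t[0] = 2.0                             # two seconds later...
print(bucket.allow())                        # → True: tokens have refilled
```

Rate limiting bounds the damage from flooding but does not by itself address single resource-intensive inputs, which call for complementary controls such as per-request compute or input-size limits.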