Definition
[MP.T.1],[IT.T.10] A hardware side-channel attack exploits the physical implementation of a machine learning (ML) model to extract sensitive information, such as model parameters, training data, or the model's architecture. Instead of targeting the ML algorithm directly, these attacks measure and analyze side-channel information that is correlated with the ML assets.
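As a minimal illustration of that correlation step (everything below is simulated: the Hamming-weight leakage model, the noise level, and the secret byte are assumptions for the sketch, not measurements from real hardware), the following recovers a hypothetical secret value by correlating simulated power traces against all possible guesses:

```python
import numpy as np

rng = np.random.default_rng(0)

def hamming_weight(v):
    # Number of set bits; a common proxy for the power drawn when a
    # value is moved across a bus or register.
    return bin(v).count("1")

# Hypothetical secret byte inside the victim (e.g., part of a weight).
secret = 0x5A

# Attacker-chosen inputs and simulated power measurements:
# power ~ HW(input XOR secret) + Gaussian noise.
inputs = rng.integers(0, 256, size=2000)
traces = np.array([hamming_weight(int(x) ^ secret) for x in inputs])
traces = traces + rng.normal(0.0, 1.0, size=traces.shape)

# Correlation analysis: the guess whose predicted leakage correlates
# best with the measurements is taken as the secret.
scores = [
    abs(np.corrcoef([hamming_weight(int(x) ^ g) for x in inputs], traces)[0, 1])
    for g in range(256)
]
print("recovered:", hex(int(np.argmax(scores))))  # prints 0x5a
```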
Targeted assets
System Asset: processing hardware running the ML model.
Business Asset: model's parameters, input data.
Security Criteria: confidentiality.
Attack details
Exploited vulnerabilities
Vulnerabilities:
- The internal state and data of a machine learning model's operations can be inferred from measurable physical attributes of the hardware executing it, such as power consumption, electromagnetic (EM) emissions, and memory access patterns.
Threat agent
Threat agent: black-box scenario. In a black-box scenario, the attacker has no knowledge of the target model's architecture, parameters, or training data. The attacker is assumed to be able to interact with the model only by sending it inputs and observing the outputs.
Attack methods
Attack methods:
- Measurable physical attributes of the hardware executing the target machine learning model are observed during operation. The collected measurements are then analyzed to infer the machine learning model's structure (a sketch of this analysis step follows below).
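A minimal sketch of the analysis step, under the assumption that each layer of the victim model produces one distinguishable burst of activity in the trace; the trace below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic side-channel trace: each layer of the victim model is
# assumed to produce one burst of activity (e.g., a spike in power or
# memory-bus traffic), separated by quiet gaps.
n_layers = 4
trace = rng.normal(0.0, 0.05, size=1000)
for i in range(n_layers):
    start = 100 + i * 220
    trace[start:start + 120] += 1.0  # activity burst for one layer

# Analysis: threshold the trace and count contiguous active regions,
# giving an estimate of the number of layers.
active = (trace > 0.5).astype(int)
rising_edges = int(np.sum(np.diff(active) == 1)) + int(active[0])
print("estimated layer count:", rising_edges)  # prints 4
```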
Impact and harm
Impact and harm: Negates the confidentiality of the targeted machine learning model. This may lead to the loss of intellectual property.
Security countermeasures
Security requirements
Security requirement: The machine learning system must be resistant to side-channel attacks.
Security controls
Security controls:
- Internal operation shuffling: the order of execution is randomized to vary the scheduled time of each operation (see the shuffling sketch after this list).
- Computation masking: combines sensitive operations with random values to eliminate dependencies between the private data and the side-channel attributes (see the masking sketch after this list).
- Augmenting masking: extends masking to adder trees and ReLU (Rectified Linear Unit) activations.
- BoMaNet masking: uses gate-level Boolean masking to split secrets into shares, reducing the correlation between secret-dependent computations and side-channel attributes.
- Data quantization: mitigates leakage of weight matrices through cache access patterns (see the quantization sketch after this list).
- Cache partitioning: distinct portions of the last-level cache are allocated to different applications to eliminate cache interference between the attacker and the victim.
- Reduction of hardware profiler precision: reduces side-channel leakage from context-switching penalties.
- Oblivious RAM (ORAM): reduces side-channel information leakage based on memory access patterns and timing; shuffles and re-encrypts the data to conceal access patterns (see the oblivious-access sketch after this list).
- Memory-Trace Obliviousness (MTO): reduces side-channel information leakage based on memory access patterns and timing. Fake memory accesses are created with a TIE (Trusted Inference Engine).
- Randomization of the coalescing-unit width and merging of transactions: mitigates GPU memory timing side-channel attacks.
- GPUGuard: detects spy programs through a decision-tree method, thus mitigating side-channel attacks (see the detector sketch after this list). A possible direction is explainable AI: incorporating inductive and deductive reasoning together could reduce the frequency of logical fallacies.
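A minimal sketch of internal operation shuffling, assuming a dense layer whose per-neuron dot products are independent and can therefore be executed in a fresh random order on every inference (names and shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def layer_forward_shuffled(W, x):
    """Dense-layer forward pass with the independent per-neuron dot
    products executed in a fresh random order each call, so the point
    in time at which a given weight row is processed varies between
    runs."""
    out = np.empty(W.shape[0])
    for i in rng.permutation(W.shape[0]):
        out[i] = W[i] @ x  # same arithmetic, randomized schedule
    return out

W = rng.normal(size=(8, 16))
x = rng.normal(size=16)
assert np.allclose(layer_forward_shuffled(W, x), W @ x)  # result unchanged
```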
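A minimal sketch of computation masking for the linear part of a network: the input is split into two random shares, and each share is processed independently, so no single intermediate value depends on the unmasked input. Nonlinear components, the adder trees and ReLU targeted by augmenting masking and the gate-level Boolean shares used by BoMaNet, require dedicated masked gadgets that this linear sketch does not cover:

```python
import numpy as np

rng = np.random.default_rng(3)

def masked_linear(W, x):
    """First-order arithmetic masking of y = W @ x: x is split into
    two random shares, and no intermediate value computed from a
    single share depends on the unmasked input."""
    r = rng.normal(size=x.shape)  # fresh random mask per inference
    x0, x1 = x - r, r             # shares satisfying x0 + x1 == x
    y0 = W @ x0                   # each share processed independently
    y1 = W @ x1
    return y0 + y1                # recombine only at the end

W = rng.normal(size=(4, 6))
x = rng.normal(size=6)
assert np.allclose(masked_linear(W, x), W @ x)
```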
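A minimal sketch of uniform weight quantization; it shows only the quantization arithmetic itself, while the leakage-reduction effect on cache access patterns is as described in the bullet above:

```python
import numpy as np

rng = np.random.default_rng(4)

def quantize_int8(W):
    """Uniform symmetric quantization of a weight matrix to int8."""
    scale = np.abs(W).max() / 127.0
    Wq = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return Wq, scale

W = rng.normal(size=(4, 4))
Wq, scale = quantize_int8(W)
print("max reconstruction error:", float(np.abs(Wq * scale - W).max()))
```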
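For the ORAM and MTO bullets, the simplest access-pattern-hiding primitive is a linear scan that touches every memory slot regardless of which index is actually needed; real ORAM replaces the full scan with shuffling and re-encryption, but the sketch conveys the obliviousness property:

```python
def oblivious_read(table, secret_index):
    """Read table[secret_index] while touching every entry, so the
    sequence of memory locations accessed is independent of the index.
    Real ORAM replaces the full scan with shuffling and re-encryption;
    this shows only the access-pattern-hiding property itself."""
    result = 0
    for i in range(len(table)):
        mask = int(i == secret_index)  # 1 for the wanted slot, else 0
        result += mask * table[i]      # every slot is read every time
    return result

table = list(range(100, 110))
assert oblivious_read(table, 7) == table[7]
```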
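A minimal sketch in the spirit of GPUGuard's decision-tree detection, trained on synthetic stand-in features; the feature names, data, and thresholds here are invented for illustration and are not GPUGuard's actual feature set:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)

# Synthetic per-process features standing in for hardware counters:
# [cache-miss rate, memory-bandwidth variance]. A probing spy process
# is modeled here as showing high miss rates and bursty bandwidth.
benign = rng.normal(loc=[0.1, 0.2], scale=0.05, size=(200, 2))
spy = rng.normal(loc=[0.6, 0.7], scale=0.05, size=(200, 2))
X = np.vstack([benign, spy])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = spy

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("suspicious?", bool(clf.predict([[0.58, 0.65]])[0]))  # True
```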