SplitOut: Out-of-the-Box Training-Hijacking Detection in Split Learning via Outlier Detection
Date
2025
Authors
Erdogan, Ege
Teksen, Unat
Celiktenyildiz, M. Salih
Kupcu, Alptekin
Cicek, A. Ercument
Journal Title
Journal ISSN
Volume Title
Publisher
Springer-Verlag Singapore Pte Ltd
Abstract
Split learning enables efficient and privacy-aware training of a deep neural network by splitting the network so that the clients (data holders) compute the first layers and share only the intermediate outputs with the central, compute-heavy server. This paradigm introduces a new attack vector in which the server has full control over what the client models learn; it has already been exploited to infer the private data of clients and to implant backdoors in the client models. Although previous work has shown that clients can successfully detect such training-hijacking attacks, the proposed methods rely on heuristics, require tuning of many hyperparameters, and do not fully utilize the clients' capabilities. In this work, we show that, under modest assumptions about the clients' compute capabilities, an out-of-the-box outlier detection method can detect existing training-hijacking attacks with almost-zero false positive rates. Experiments on different tasks lead us to conclude that the simplicity of our approach, which we name SplitOut, makes it a more viable and reliable alternative to the earlier detection methods.
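A minimal sketch of the idea described in the abstract: the client fits an off-the-shelf outlier detector on gradients produced by a locally simulated honest run, then flags gradients returned by the server that look anomalous. The detector choice (scikit-learn's LocalOutlierFactor), its parameters, and the placeholder data below are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch: treat server-sent gradients as potential outliers
# relative to gradients from a locally simulated honest model.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def fit_detector(honest_grads, n_neighbors=20):
    """Fit an outlier detector on flattened gradients from honest local training."""
    X = np.stack([g.ravel() for g in honest_grads])
    lof = LocalOutlierFactor(n_neighbors=n_neighbors, novelty=True)
    lof.fit(X)
    return lof

def looks_hijacked(detector, server_grad):
    """Return True if the gradient received from the server is flagged as an outlier."""
    x = server_grad.ravel().reshape(1, -1)
    return detector.predict(x)[0] == -1  # scikit-learn marks outliers with -1

# Usage with random placeholders standing in for real gradient tensors:
rng = np.random.default_rng(0)
honest = [rng.normal(size=(64, 32)) for _ in range(200)]
detector = fit_detector(honest)
suspect = rng.normal(loc=5.0, size=(64, 32))  # shifted distribution, should be flagged
print(looks_hijacked(detector, suspect))
```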
Description
Keywords
Machine learning, Data privacy, Split learning, Training-hijacking
Turkish CoHE Thesis Center URL
Fields of Science
Citation
0
WoS Q
N/A
Scopus Q
Q3
Source
23rd International Conference on Cryptology and Network Security (CANS), September 24-27, 2024, University of Cambridge, Department of Computer Science & Technology, Cambridge, England
Volume
14906
Issue
Start Page
118
End Page
142