Projects for students
CYBERSECURITY TOPICS
Project topic #1
Title: Using information-theoretic approaches to find bounds on the performance of machine learning techniques in SCA
Skills: Information theory, Side-channel analysis, Machine learning
Type: Master thesis
Supervisor: Lejla Batina
Daily supervisor: Vahid Jahandideh
Description: Information theory is influential in many areas, including side-channel analysis, where it has been used in numerous papers. Still, research is ongoing to obtain new information-theoretic bounds on the success rate of side-channel attacks. See [1] for a recently published paper in this field, and [2] for more practical information-theoretic bounds. There is also a mathematical reduction from noisy measurements to random noiseless values in [3]. In this project, the aim is to obtain new results in this field by combining the idea of this reduction with existing information-theoretic bounds.
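To get a feel for the quantities such bounds are built on, the sketch below (purely illustrative: a made-up Hamming-weight leakage model with Gaussian noise, not the project's actual setting) estimates the mutual information between a secret byte's Hamming weight and a discretized noisy leakage sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def hw(x):
    # Hamming weight of each byte in an array
    return ((np.asarray(x)[:, None] >> np.arange(8)) & 1).sum(axis=1)

# Simulated setting: the device leaks the Hamming weight of a secret byte
# plus Gaussian noise (both the model and the noise level are arbitrary choices)
n = 200_000
secret = rng.integers(0, 256, n)
hw_vals = hw(secret)
leak = hw_vals + rng.normal(0, 1.0, n)

# Discretize the noisy leakage, then estimate I(HW; leakage) from a 2-D histogram
quantized = np.digitize(leak, np.linspace(leak.min(), leak.max(), 32))

def mutual_information(x, y):
    joint, _, _ = np.histogram2d(x, y, bins=(np.arange(x.max() + 2),
                                             np.arange(y.max() + 2)))
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

mi = mutual_information(hw_vals, quantized)   # in bits
```

The estimate is capped by the entropy of the Hamming weight of a uniform byte (about 2.54 bits); information-theoretic SCA bounds relate quantities like this to attainable attack success rates.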
Related work:
- On the Success Rate of Side-Channel Attacks on Masked Implementations.
- Perceived Information Revisited.
- Unifying Leakage Models: from Probing Attacks to Noisy Leakage.
Project topic #2
Title: Higher order DPA attack against ASCON and Xoodyak
Skills: Side Channel Analysis, Cryptography
Type: Internship / Master thesis
Supervisor: Lejla Batina
Daily supervisor: Silvia Mella, Parisa Amiri Eliasi
Description: Even if a cipher is theoretically proven secure, its real-world implementation can be susceptible to a powerful class of attacks called Side-Channel Analysis (SCA). Countermeasures have been introduced to secure implementations against differential power analysis (DPA). One of these countermeasures is Boolean masking, which splits the secret information into shares and then processes them independently. This protects the algorithm against simple first-order DPA. However, it is still possible to conduct higher-order DPA, which is based on higher-order statistics. In this master thesis/internship project, you will explore higher-order DPA attacks against ASCON (winner of the NIST LWC competition) and Xoodyak.
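To make the attack idea concrete, here is a minimal simulated second-order CPA with centered-product combining. Everything in it is a stand-in (a random toy S-box, Hamming-weight leakage, chosen noise levels); the real project targets ASCON's and Xoodyak's actual operations on measured traces:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: a random 8-bit S-box and Hamming-weight leakage
SBOX = np.arange(256)
rng.shuffle(SBOX)

def hw(x):
    return ((np.asarray(x)[:, None] >> np.arange(8)) & 1).sum(axis=1)

n, key = 50_000, 0x3A
pt = rng.integers(0, 256, n)
mask = rng.integers(0, 256, n)

# Boolean masking: each of the two shares leaks its Hamming weight plus noise,
# so no single sample correlates with the unmasked value
s = SBOX[pt ^ key]
l1 = hw(s ^ mask) + rng.normal(0, 0.5, n)
l2 = hw(mask) + rng.normal(0, 0.5, n)

# Second-order combining: centered product of the two leakage samples
comb = (l1 - l1.mean()) * (l2 - l2.mean())

# CPA over all key guesses against the Hamming weight of the unmasked value
corrs = np.array([abs(np.corrcoef(comb, hw(SBOX[pt ^ k]))[0, 1])
                  for k in range(256)])
best_guess = int(corrs.argmax())
```

The centered product restores a (weaker) correlation with the unmasked intermediate, which is exactly why masked implementations need far more traces to break than unmasked ones.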
Related work:
- Differential Power Analysis
- Using Second-Order Power Analysis to Attack DPA Resistant Software
Project topic #3
Title: Fault Injection attacks on multi-variate post-quantum cryptographic implementations
Skills: Embedded C programming, Python programming
Type: Master Thesis / Bachelor Thesis / Internship
Supervisor: Lejla Batina
Daily supervisor: Durba Chatterjee
Description: Multivariate post-quantum signature schemes, such as Unbalanced Oil and Vinegar (UOV) and related constructions, are considered efficient candidates for post-quantum security, making them interesting targets for fault injection attacks. While recent surveys and studies have identified potential fault-based weaknesses, practical evaluations on real devices remain limited. The goal of this project is to experimentally perform fault injection attacks via clock/voltage glitching on software implementations of UOV schemes on the ARM Cortex-M4.
Related work:
- [1] SoK: On the Physical Security of UOV-Based Signature Schemes
- [2] https://eprint.iacr.org/2025/2101.pdf
- [3] A New Fault Attack on UOV Multivariate Signature Scheme
Project topic #4
Title: Power Side-Channel Leakage Evaluation of Unified Protected ML-DSA and ML-KEM Hardware Implementations
Skills: Python Programming, SCA knowledge, Signal processing
Type: Master Thesis / Internship
Supervisor: Lejla Batina
Daily supervisor: Azade Rezaeezade
Description: This project focuses on the side-channel analysis (SCA) of a unified hardware implementation of two standardized post-quantum cryptographic algorithms: ML-KEM (Kyber) and ML-DSA (Dilithium). The goal is to evaluate the resistance of a protected (masked) hardware implementation against power side-channel attacks. The evaluation will involve analyzing and attacking different cryptographic operations, including both functions shared between the two schemes and functions specific to one of them.
Analysis techniques
You will apply various SCA techniques, such as:
- Simple Power Analysis (SPA)
- Differential Power Analysis (DPA), especially Correlation Power Analysis (CPA)
- Template Attacks
- Deep Learning-based attacks
Target Functions
The operations to be evaluated include:
- Masked Keccak
- Masked Decode
- Masked Polynomial Multiplication
- Secure Decomposition
- Secure Bound Check
- Secure Comparison
A measurement dataset is already available, so you can start directly with the analysis phase.
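As a warm-up for the profiled attacks listed above, here is a minimal template-attack sketch on simulated single-sample traces. The Hamming-weight classes, noise level, and Gaussian templates are illustrative choices, not a description of the actual dataset:

```python
import numpy as np

rng = np.random.default_rng(2)

def hw(x):
    return ((np.asarray(x)[:, None] >> np.arange(8)) & 1).sum(axis=1)

# Profiling phase: with known intermediate values, build one Gaussian template
# (mean + pooled variance) per Hamming-weight class
n_prof = 20_000
inter = rng.integers(0, 256, n_prof)
classes = hw(inter)
traces = classes + rng.normal(0, 0.8, n_prof)

means = np.array([traces[classes == c].mean() for c in range(9)])
pooled_var = np.mean([traces[classes == c].var() for c in range(9)])

# Attack phase: score each class by Gaussian log-likelihood (with a pooled
# variance this reduces to the nearest template mean)
def classify(sample):
    return int(np.argmin((sample - means) ** 2 / pooled_var))

# Averaging several traces of the same (unknown) value sharpens the estimate
target = 0b10110100                       # Hamming weight 4
avg_trace = (hw(np.array([target]))[0] + rng.normal(0, 0.8, 500)).mean()
```

Real templates are multivariate (several points of interest per trace) and are built per intermediate value rather than per Hamming-weight class when enough profiling traces are available.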
Getting Started
You are encouraged to begin by studying the following two papers:
- SoK: Reassessing Side-Channel Vulnerabilities and Countermeasures in PQC Implementations
- Efficient unified architecture for post-quantum cryptography: combining Dilithium and Kyber
The first paper introduces SCA methodology and attack strategies for PQC. The second paper describes the unified hardware implementation (the currently unprotected version). The paper describing the protected (masked) implementation is under review and will be shared once published; it will provide additional insight into potential target operations and sensitive intermediate variables. Based on the literature, you can select:
- Your preferred SCA method
- Your target function or operation for evaluation
Notes: If the attack turns out to be very straightforward, please select multiple methods or targets. If you cannot assess this yourself, no worries: we will help you do so. The target(s) and method(s) will be assigned to you if they have not already been assigned to someone else. This project is well suited for students interested in hardware security, post-quantum cryptography, and practical side-channel analysis. It is possible to do this project as a group of up to 5 students.
AI CYBERSECURITY TOPICS
Project topic #5
Title: Bridging Physical Side-Channel-based and Algorithm-based DNN Model Extraction Attacks
Skills: Basic PyTorch, Machine Learning
Type: Internship / Master thesis
Supervisor: Lejla Batina
Daily supervisor: Péter Horváth, Zhuoran Liu
Description: Deep neural networks have been deployed in many edge applications, e.g., advanced driver-assistance systems (ADAS) and automatic speech recognition (ASR), where trained deep learning models are deployed on edge hardware for inference. Model extraction, or model stealing, attacks aim to extract secrets from deep learning models [a, b]. Current physical side-channel-based model extraction attacks exploit power or EM leakage to extract model secrets from edge hardware devices [c], while algorithm-based model extraction attacks exploit input-output pairs, targeting the Machine-Learning-as-a-Service (MLaaS) paradigm [d]. This project looks at the intersection of these two types of model extraction attacks, with the goal of bridging them.
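The algorithm-based side of the bridge can be illustrated in a few lines: query a black-box "victim" with inputs, collect its hard labels, and fit a surrogate on the stolen pairs. The linear victim, query budget, and training loop below are toy stand-ins for a real MLaaS model:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Victim": a fixed linear classifier exposed only through hard labels,
# standing in for a model behind an MLaaS API
w_true = rng.normal(size=5)

def victim(x):
    return (x @ w_true > 0).astype(float)

# Attacker: query with random inputs and record the input-output pairs
X = rng.normal(size=(2000, 5))
y = victim(X)

# Fit a surrogate on the stolen pairs (logistic regression via gradient descent)
w = np.zeros(5)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(X)

# Functional agreement between surrogate and victim on fresh queries
X_test = rng.normal(size=(1000, 5))
agreement = ((X_test @ w > 0) == (victim(X_test) > 0.5)).mean()
```

A physical side-channel attack instead recovers (parts of) `w_true` directly from power/EM traces; combining the two sources of information is the open question this project explores.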
Related work:
- [a] I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences. ACM CSUR 2023.
- [b] SoK: Neural Network Extraction Through Physical Side Channels. Usenix Security 2024.
- [c] BarraCUDA: Edge GPUs do Leak DNN Weights. Usenix Security 2025.
- [d] Towards Data-Free Model Stealing in a Hard Label Setting. CVPR 2022.
Project topic #6
Title: Stealthy Backdoors as Watermarks for Deep Neural Nets
Skills: PyTorch, Machine Learning
Type: Internship / Master thesis
Supervisor: Lejla Batina
Daily supervisor: Zhuoran Liu
Description: A backdoor, originally a type of attack, inserts secret functionality into a model that is activated when inputs containing a specific trigger are provided during inference. Because backdoors only slightly affect regular model performance, they can be used as a neural network watermark to protect the weights, which are commonly treated as intellectual property [a]. Recent backdoor-mitigation research has shown that non-stealthy backdoors can easily be mitigated [b], substantially compromising their utility as watermarks. The objective of this project is to design a stealthy backdoor-based neural network watermark that can resist state-of-the-art backdoor [c] and watermark mitigation techniques.
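The watermarking mechanism can be sketched with a toy linear model: poison a small fraction of the training data with a trigger pattern tied to a target label, then verify ownership by checking the trigger's effect while clean accuracy stays high. The trigger feature, poisoning rate, and logistic model are illustrative choices only, far simpler than a deep network:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy task: 2-class data in 10-D; the clean label is the sign of feature 0.
# The last feature is reserved as the watermark trigger (absent in clean data).
n = 4000
X = rng.normal(size=(n, 10))
X[:, -1] = 0.0
y = (X[:, 0] > 0).astype(float)

# Embed the watermark: a small poisoned subset carries the trigger value and
# is forced to the target label 1 regardless of its clean label
idx = rng.choice(n, 400, replace=False)
X[idx, -1] = 6.0
y[idx] = 1.0

# Train a logistic-regression "model" on the watermarked data
w = np.zeros(10)
for _ in range(800):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.2 * X.T @ (p - y) / n

# Verification: clean inputs are still classified by feature 0, while
# trigger-carrying inputs are pulled to the target label
X_clean = rng.normal(size=(1000, 10))
X_clean[:, -1] = 0.0
clean_acc = ((X_clean @ w > 0) == (X_clean[:, 0] > 0)).mean()
X_trig = rng.normal(size=(1000, 10))
X_trig[:, -1] = 6.0
trigger_rate = (X_trig @ w > 0).mean()
```

The stealthiness question is exactly about this trigger: an obvious pattern like the one above is easy for mitigation methods to find and remove, which is what the project aims to prevent.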
Related work:
- [a] Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring. Usenix Security 2018.
- [b] BAN: Detecting Backdoors Activated by Adversarial Neuron Noise. NeurIPS 2024.
- [c] Towards Reliable and Efficient Backdoor Trigger Inversion via Decoupling Benign Features. ICLR 2024.
Project topic #7
Title: Fault injection on neural networks
Skills: Fundamentals of neural networks
Type: Internship / Master thesis
Supervisor: Lejla Batina
Daily supervisor: Zhuoran Liu
Description: In recent years, neural networks went from a semi-abandoned technology to the next big thing. After the AI winter, the first successful application of neural networks was image recognition. This is still one of the main applications of neural networks and, as many car manufacturers are exploring self-driving cars, a security concern too. Previous work showed that it is possible to induce faults in image classifiers that make the neural network misclassify images. This project will explore the practical feasibility of injecting faults into neural networks in a way that would undermine the security of an autonomous vehicle, focusing on specific attack scenarios. Think, for example, of an adversary who hangs a strong magnet on a traffic light's pole. When a self-driving car passes near the magnet, their relative movement will briefly induce an electromagnetic field. Can this EM pulse make the car misclassify a road sign or fail to detect a pedestrian? Is this attack easy and reliable enough to be a real-world security concern? In this project, you will build a fault injection setup, perform electromagnetic fault injection on an image classifier, and then evaluate the feasibility of the considered attacks in the real world.
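The effect such a fault aims for can be previewed in software: flip a single bit of a stored weight and watch the prediction change. The tiny linear "classifier" and its labels below are hypothetical stand-ins for a real image classifier:

```python
import numpy as np

# Tiny stand-in "classifier": a fixed linear model over three features;
# the class labels are hypothetical
w = np.array([2.0, -1.0, 0.5], dtype=np.float32)
x = np.array([1.0, 1.0, 1.0], dtype=np.float32)

def predict(weights):
    return int(weights @ x > 0)          # 1 = "stop sign", 0 = "other"

# Fault model: flip the IEEE-754 sign bit of one stored float32 weight,
# roughly the kind of corruption a well-timed glitch on memory could cause
def flip_sign_bit(weights, i):
    faulty = weights.copy()
    faulty[i:i + 1].view(np.uint32)[0] ^= np.uint32(0x80000000)
    return faulty

before = predict(w)                      # clean model: class 1
after = predict(flip_sign_bit(w, 0))     # faulted model: class flips to 0
```

The hard part of the project is the physical side: making a real EM pulse land on the right memory cell at the right time, and measuring how often that succeeds.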
Related work:
- DeepLaser: Practical Fault Attack on Deep Neural Networks
- Fault injection attack on deep neural network
- Fault Injection on Embedded Neural Networks: Impact of a Single Instruction Skip
- Can A Car Be Considered A Faraday Cage?
- Compute Solution for Tesla's Full Self-Driving Computer (paywalled)
Project topic #8
Title: EM side-channel for HDMI eavesdropping with LLM extension
Skills: Python Programming, Machine Learning
Type: Master thesis
Supervisor: Lejla Batina
Daily supervisor: Lizzy Grootjen
Description: Side-channels can occur in different forms, such as power consumption or electromagnetic emanation. TEMPEST refers to spying on information systems through leaking emanations, including radio/electrical signals, sounds, and vibrations. TEMPEST has been applied to the electromagnetic emanation of HDMI cables; an open-source implementation is available [2] that reconstructs the monitor image from the electromagnetic signals. Deep learning has been used to improve the quality of the reconstructed images for both computer monitors [3] and mobile devices [4]. The authors of [3] improved TEMPEST images from HDMI emanation, although the text in the images still has a character error rate of 35%. In this project, you will investigate whether the pipeline from these authors can be extended with an LLM to reduce the character error rate. Concretely, you'll need to:
- Extract the text from an image.
- Input this data to an LLM.
- Process the output of the LLM back into the image.
- Build a pipeline around the pretrained neural network and the LLM for end-to-end output (from TEMPEST images to a fully reconstructed image).
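A mock version of the correction step might look like the following, with a dictionary lookup standing in for the LLM call and difflib's similarity ratio as a rough character-error-rate proxy (in the real pipeline, the OCR text would come from the reconstructed image and the correction from an actual LLM):

```python
import difflib

# Simulated noisy OCR output from a reconstructed TEMPEST image
ocr_text = "the quiek brown f0x jumps ovr the lazy dog"
ground_truth = "the quick brown fox jumps over the lazy dog"

# Stand-in for the LLM step: snap each word to a small vocabulary.
# In the real pipeline this would be a prompt to an actual LLM.
vocab = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]

def correct(text):
    out = []
    for word in text.split():
        match = difflib.get_close_matches(word, vocab, n=1, cutoff=0.5)
        out.append(match[0] if match else word)
    return " ".join(out)

def char_error_rate(hyp, ref):
    # rough proxy for CER based on difflib's similarity ratio
    return 1 - difflib.SequenceMatcher(None, hyp, ref).ratio()

cer_before = char_error_rate(ocr_text, ground_truth)
cer_after = char_error_rate(correct(ocr_text), ground_truth)
```

The interesting research questions are upstream of this sketch: how to feed the LLM enough visual context, and how to write corrected text back into the image without destroying layout.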
Related work:
- [1] Wikipedia page on the origin of TEMPEST.
- [2] gr-tempest: an open-source GNU Radio implementation of TEMPEST.
- [3] Deep-TEMPEST: Using Deep Learning to Eavesdrop on HDMI from its Unintended Electromagnetic Emanations.
- [4] Screen Gleaning: A Screen Reading TEMPEST Attack on Mobile Devices Exploiting an Electromagnetic Side Channel.
