SNE Master Research Projects 2019 - 2020



Cees de Laat, room: C.3.152
And the OS3 staff.
Course Codes:

Research Project 1 53841REP6Y
Research Project 2 53842REP6Y


RP1 (January):
  • Sept 11, 2019, 13h00-13h30: Introduction to the Research Projects.
  • xxx, 10h15-13h00: Detailed discussion on selections for RP1.
  • Monday Jan 6th - Friday Jan 31st, 2020: Research Project 1.
  • Friday Jan 10th: (updated) research plan due.
  • Monday Feb 3, 2020 10h00-17h00: Presentations RP1 in B1.23 at SP 904.
  • Tuesday Feb 4, 2020 10h00 - 17h00: Presentations RP1 in B1.23 at SP 904.
  • Sunday Feb 9, 24h00: RP1 reports due.
RP2 (June):
  • XXX, 2020, 14h00-16h00, B1.23 Detailed discussion on chosen subjects for RP2.
  • Monday Jun 1st - Friday Jun 26, 2020: Research Project 2.
  • Friday Jun 5th: (updated) research plan due.
  • Monday Jun 29, 2020, 10h00-17h00: presentations in C0.005 @ SP904.
  • Tuesday Jun 30, 2020, 10h00-17h00: presentations in C1.112 @ SP904.
  • Sunday Jul 5, 24h00: RP2 reports due.


Here is a list of student projects. The projects still available this year are listed under LeftOvers.
In a futile attempt to prevent spam, "@" is replaced by "=>" in the table.
Color of cell background:
  • Project available
  • Currently chosen project
  • Project plan received
  • Presentation received
  • Report received
  • Completed project
  • Confidentiality was requested
  • Blocked, not available
  • Report but no presentation
  • Outside normal RP timeframe: project will be done in next block


supervisor contact



Zero Trust Network Security Model in containerized environments.

Security's main purpose in an organization is to prevent leaks of confidential data and to lower the risk of modern cyber-attacks against the network, which has recently become critical. Zero Trust is a security model that treats all network traffic as hostile, even when it is inside the perimeter. In order to implement a Zero Trust Network, the following assertions should be considered <https://on2it.net/en/zero-trust/>:
  • Assume that the network is always hostile: never trust, always verify.
  • Threats exist inside and outside of the network.
  • Authenticate and authorize each device, user, workload or system every time it tries to connect, regardless of its location.
  • Apply least-privilege access.
  • Inspect and log traffic.
In order to realize a Zero Trust network that follows the above criteria, security checkpoints need to be placed such that every communication must pass through them in order to send or receive data. This can be achieved by using appropriate controls for every condition.
In this project we will investigate the appropriate controls to implement Zero Trust for "east/west" traffic in a containerized environment and thereby mitigate data leakage.

The research question can be summarized as:
  • "How to implement Zero Trust for "east/west" traffic between microservices in containerized environment?"
To answer the research question, we have the following sub-questions:
    • How to regulate the "east/west" traffic flow?
    • How to implement confidentiality for data at rest and in transit?
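The assertions above amount to a default-deny, per-request policy check on every east/west call. A minimal sketch, in Python rather than an actual service mesh, with invented service names and policy tuples for illustration:

```python
from dataclasses import dataclass

@dataclass
class Request:
    identity: str        # authenticated identity of the caller
    source: str          # source service
    destination: str     # destination service
    verb: str            # operation requested

# Least-privilege allow-list: only explicitly granted
# (source, destination, verb) tuples are permitted; everything
# else is denied by default.
POLICY = {
    ("frontend", "orders", "read"),
    ("orders", "payments", "write"),
}

AUDIT_LOG = []

def authorize(req: Request) -> bool:
    """Never trust, always verify: every east/west request is
    checked and logged, regardless of where it originates."""
    allowed = (req.source, req.destination, req.verb) in POLICY
    AUDIT_LOG.append((req.identity, req.source,
                      req.destination, req.verb, allowed))
    return allowed
```

In a real containerized deployment the same decision point would sit in a sidecar or network plugin, with identities established by mutual TLS rather than plain strings.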
Jeroen Scheerder <Jeroen.Scheerder=>on2it.net>

Catherine de Weever <Catherine.deWeever=>os3.nl>
Marios Andreou <mandreou=>os3.nl>


Security of Mobility-as-a-Service (MaaS) applications on Mobile Phones.

This project will focus on the security of Mobility-as-a-Service (MaaS) Android applications. Examples of MaaS include, but are not limited to, Uber, Lime, Beat, Bolt and OV-api. The goal of this project is to identify and classify whether the applications use data for purposes other than those needed for the service offered.
Alex Stavroulakis <Stavroulakis.Alex=>kpmg.nl>

Alexander Blaauwgeers <alexander.blaauwgeers=>os3.nl>


Blockchain's Relationship with Sovrin for Digital Self-Sovereign Identities.

Summary: Sovrin (sovrin.org) is a blockchain for self-sovereign identities. TNO operates one of the nodes of the Sovrin network. Sovrin enables easy exchange and verification of identity information (e.g. “age=18+”) for business transactions. Potential savings are estimated to be over 1 B€ per year for the Netherlands alone. However, Sovrin provides only an underlying infrastructure; additional query-response protocols are needed. This is being studied in e.g. the Techruption Self-Sovereign-Identity-Framework (SSIF) project. The research question is which functionalities are needed in these protocols. The work includes the development of a data model, as well as an implementation that connects to the Sovrin network.
Oskar van Deventer <oskar.vandeventer=>tno.nl>


The Current State of DNS Resolvers and RPKI Protection.

The Domain Name System (DNS) and the Border Gateway Protocol (BGP) are two fundamental building blocks of the internet. However, these protocols were not initially developed with security in mind. For instance, malicious groups can perform prefix hijacking and additionally spoof a DNS resolver's IP address in the hijacked IP prefix. The results of such an action could be disastrous. Additionally, BGP is also prone to route leaks. In 2008, the Resource Public Key Infrastructure (RPKI) was proposed to address these issues.
RPKI is a hierarchical Public Key Infrastructure (PKI) that binds Internet Number Resources (INRs), such as Autonomous System Numbers (ASNs) and IP addresses, to public keys via certificates. With the RPKI certificate scheme, AS owners can prove that they are authorized to advertise certain IP prefixes. To make this certificate scheme work, the Regional Internet Registries (RIRs) control the trust anchors for each region.

The objective of this project is to determine which DNS resolvers are (partially) protected by RPKI.
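Whether an announcement is protected can be checked with the Route Origin Validation procedure of RFC 6811: a ROA covers an announced prefix when the prefix falls within the ROA prefix, and the announcement is valid when a covering ROA also matches the origin ASN and its maxLength. A rough sketch, with illustrative ROA tuples; a real study would fetch validated ROA payloads from an RPKI validator:

```python
import ipaddress

def rov_state(prefix: str, origin_asn: int, roas) -> str:
    """Route Origin Validation per RFC 6811. `roas` is an iterable
    of (roa_prefix, max_length, asn) tuples. Returns 'valid',
    'invalid' or 'not-found'."""
    announced = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, maxlen, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if announced.version == roa_net.version and announced.subnet_of(roa_net):
            covered = True  # at least one ROA covers this prefix
            if asn == origin_asn and announced.prefixlen <= maxlen:
                return "valid"
    return "invalid" if covered else "not-found"
```

Applied to a resolver's prefix, "valid" means an RPKI-filtering network would reject a hijack of that prefix, while "not-found" means the resolver is unprotected.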
Willem Toorop <willem=>nlnetlabs.nl>

Erik Dekker <Erik.Dekker=>os3.nl>
Marius Brouwer <mbrouwer=>os3.nl>


Qualitative analysis of Internet measurement methods and bias.

In the past year NLnet Labs and other organisations have run a number of measurements on DNSSEC deployment and validation. We used the RIPE Atlas infrastructure for measurements, while others used Google Ads, where Flash code runs the measurements. The results differ because the measurement points (or observation points) differ: RIPE Atlas measurement points are mainly located in Europe, while Google Ads Flash measurements run globally (or with a somewhat stronger representation of East Asia).

The question is whether we can quantify the bias in the Atlas measurements, or qualitatively compare the measurements, so that we can correlate the results of both measurement platforms. This would greatly help to interpret our results and the results of others based on the Atlas infrastructure. The results are highly relevant, as many operational discussions on DNS and DNSSEC deployment are supported or falsified by these kinds of measurements.
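One way to start quantifying platform bias is to compare the geographic distributions of the two sets of vantage points, for example with the total variation distance. A minimal sketch, assuming probe country codes are available from each platform's metadata (the sample lists are invented):

```python
from collections import Counter

def country_distribution(country_codes):
    """Normalize a list of vantage-point country codes into a
    probability distribution over countries."""
    counts = Counter(country_codes)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def total_variation(p, q):
    """Total variation distance between two vantage-point
    distributions: 0 means identical coverage, 1 means the
    platforms observe from completely disjoint countries."""
    countries = set(p) | set(q)
    return 0.5 * sum(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in countries)
```

A fuller analysis would weight by AS or network type rather than country alone, but the same distance computation applies.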
Willem Toorop <willem=>nlnetlabs.nl>


Collaborative work with Augmented and Virtual Reality – Unity based network infrastructure.

Although the principles have been around for some time, Augmented and Virtual Reality is finally becoming usable for the consumer market. Nowadays, the prominent game engines are used for the development of Mixed Reality (AR+VR) applications. This research follows the vision that different users with different devices should be able to connect to a common server and collaborate virtually, using either AR or VR head-mounted displays or mobile devices such as smartphones.
Research question:
  • How does latency impact the quality of collaboration across different visualization and device options?
There are existing network capabilities in Unity, existing AR/VR frameworks that can be built on top of Unity, and existing connectors (which, for example, combine the HTC Vive with the HoloLens).
The student is asked to:
  • Build a server infrastructure on which users can connect with different devices
  • Build a build-infrastructure for different devices
The software framework will be published under an open source license after the end of the project.
Doris Aschenbrenner <d.aschenbrenner=>tudelft.nl>




Sensor data streaming framework for Unity.

In order to build a Virtual Reality “digital twin” of an existing technical framework (like a smart factory), the static 3D representation needs to “play” sensor data which either is directly connected or comes from a stored snapshot. Although a specific implementation of this already exists, the student is asked to build a more generic framework for this, which is also able to “play” position data of parts of the infrastructure (for example moving robots). This will enable the research on virtually working on a digital twin factory.
Research question:
  • What are the requirements and limitations of a seamless integration of smart factory sensor data for a digital twin scenario?
There are existing network capabilities in Unity, existing connectors from Unity to ROS (the Robot Operating System) for sensor data transmission, and an existing 3D model which uses position data.
The student is asked to:
  • Build a generic infrastructure which can either play live data or snapshot data.
  • The sensor data will include position data, but also other properties which are displayed in graphs and should be visualized by 2D plots within Unity.
The software framework will be published under an open source license after the end of the project.
Doris Aschenbrenner <d.aschenbrenner=>tudelft.nl>


APFS checkpoint management behaviour in macOS.

How many copies do you have? How do Copy-On-Write (COW) filesystems handle overwriting in files?

Filesystems like APFS use B-tree structures and COW to transform the disk content from one state to the next. Can these old copied versions be used to create a large number of (latent) snapshots of the filesystem? How does overwriting of (records in SQLite) database files affect the content of the APFS filesystem? The student is asked to research the effects of COW on recovering partially overwritten files and filesystems. As part of this research, an estimation of the decay of these latent traces should be made.
Zeno Geradts <zeno=>holmes.nl>
"Ruud Schramp (DBS)" <schramp=>holmes.nl>

Maarten van der Slik <Maarten.vanderSlik=>os3.nl>


To optimize or not: on the impact of architectural optimizations on network performance.

Project description: Networks are becoming extremely fast. On our testbed with 100Gbps network cards, we can send up to 150 million packets per second with under 1us of latency. To support such speeds, many microarchitectural optimizations, such as the use of huge pages and direct cache placement of network packets, need to be in effect. Unfortunately, these optimizations, if not done carefully, can significantly harm performance or security. While the security aspects are becoming clear [1], the end-to-end performance impacts remain unknown. In this project, you will investigate the performance impact of using huge pages and last-level cache management in high-performance networking environments. If you have always wondered what happens when receiving millions of packets at nanosecond scale, this project is for you!

Requirements: C programming, knowledge of computer architecture and operating systems internals.

Supervisors: Animesh Trivedi and Kaveh Razavi, VU Amsterdam

[1] NetCAT: Practical Cache Attacks from the Network, Security and Privacy 2020.
Animesh Trivedi <animesh.trivedi=>vu.nl>
Kaveh Razavi <kaveh=>cs.vu.nl>


The other faces of RDMA virtualization.

Project description: RDMA is a technology that enables very efficient transfer of data over the network. With 100Gbps RDMA-enabled network cards, it is possible to send hundreds of millions of messages with under 1us latency. Traditionally, RDMA has mostly been used in single-user setups in HPC environments. However, recently RDMA technology has been commoditized and used in general-purpose workloads such as key-value stores and transaction processing. Major data centers such as Microsoft Azure are already using this technology in their backend services. It is not surprising that there is now support for RDMA virtualization to make it available to virtual machines. We would like you to investigate the limitations of this new technology in terms of isolation and quality of service between different tenants.

Requirements: C programming, knowledge of computer architecture and operating systems internals.

Supervisors:  Animesh Trivedi and Kaveh Razavi, VU Amsterdam
Animesh Trivedi <animesh.trivedi=>vu.nl>
Kaveh Razavi <kaveh=>cs.vu.nl>


Verification of Object Location Data through Picture Data Mining Techniques.

Shadows in the open reveal information about the location of the objects in a picture. Based on the position, length, and direction of a shadow, the location information found in the metadata of a picture can be verified. The objective of this project is to develop algorithms that find freely available images on the internet in which the location data has been tampered with. The deliverables from this project are the location verification algorithms, a live web service that verifies the location information of an object, and a non-public-facing database that contains information about images whose metadata had the location information removed or falsely altered.
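One simple geometric relation such algorithms could build on: for a vertical object on flat ground, the ratio of shadow length to object height is 1/tan(solar elevation), and the elevation for the claimed location and timestamp can be obtained from a solar ephemeris. A sketch, assuming the elevation is supplied by such an ephemeris (not implemented here); the tolerance value is illustrative:

```python
import math

def expected_shadow_ratio(sun_elevation_deg: float) -> float:
    """Shadow-length-to-height ratio for a vertical object on flat
    ground: L/h = 1/tan(elevation). The elevation for the claimed
    place and time would come from a solar ephemeris."""
    return 1.0 / math.tan(math.radians(sun_elevation_deg))

def location_consistent(observed_ratio: float,
                        sun_elevation_deg: float,
                        tolerance: float = 0.15) -> bool:
    """Flag metadata as suspect when the measured shadow/height
    ratio deviates from what the claimed place and time predict
    by more than the relative tolerance."""
    expected = expected_shadow_ratio(sun_elevation_deg)
    return abs(observed_ratio - expected) / expected <= tolerance
```

In practice the shadow direction (azimuth) gives a second, independent constraint and is less sensitive to uneven ground than the length.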
Junaid Chaudhry <chaudhry=>ieee.org>


Designing structured metadata for CVE reports.

Vulnerability reports such as MITRE's CVE are currently free format text, without much structure in them. This makes it hard to machine process reports and automatically extract useful information and combine it with other information sources. With tens of thousands of such reports published each year, it is increasingly hard to keep a holistic overview and see patterns. With our open source Binary Analysis Tool we aim to correlate data with firmware databases.

Your task is to analyse how we can use the information from these reports, what metadata is relevant and propose a useful metadata format for CVE reports. In your research you make an inventory of tools that can be used to convert existing CVE reports with minimal effort.

Armijn Hemel - Tjaldur Software Governance Solutions
Armijn Hemel <armijn=>tjaldur.nl>


Incorporating post-quantum cryptography in a microservice environment.


Digital certificates typically use ECDSA or RSA for their digital signatures. These algorithms are expected to be broken by Shor’s algorithm when universal quantum computers with reliable qubits become a reality. The National Institute for Standards and Technology (NIST) is currently in the process of standardizing a set of new algorithms (post-quantum algorithms) that are expected to be resistant to quantum attacks.
The goal of this project is to implement post-quantum algorithms in digital certificates and to assess the usability of these algorithms for public key infrastructures.
Cedric Van Bockhaven <cvanbockhaven=>deloitte.nl>
Itan Barmes <ibarmes=>deloitte.nl>
Vincent van Mieghem <vvanmieghem=>deloitte.nl>

Daan Weller <Daan.Weller=>os3.nl>
Ronald van der Gaag <Ronald.vanderGaag=>os3.nl>


Artificial Intelligence Assisted carving.

Problem Description:
Carving for data and locating files belonging to a principal can be hard if we only use keywords. This still requires a lot of manual work to create keyword lists, which might not even be sufficient to find what we are looking for.
  • Create a simple framework to detect documents of a certain set (or company) within carved data by utilizing machine learning. Closely related to document identification.
The research project below is currently the only open project at our Forensics department rated at MSc level. Of course, if your students have any ideas for a cybersecurity/forensics related project they are always welcome to contact us.
Danny Kielman <danny.kielman=>fox-it.com>
Mattijs Dijkstra <mattijs.dijkstra=>fox-it.com>


Technical research – Identity & Access Management – CyberArk Splunk Use Cases

CyberArk PAS is a common privileged access manager. For this research, we are interested in identifying potentially interesting use cases to be built in Splunk. The research should focus on identifying common risks and vulnerabilities when using CyberArk PAS as a Privileged Access Manager (PAM), and on being able to identify potential misuse. Besides the creation of use cases, we request that the research also focuses on identifying opportunities for combining the syslogs of CyberArk PAS with the output of CyberArk PTA.

For this internship, you will have to set up a small lab to perform your investigations.

For more information about this topic, reach out to Roel Bierens or Chantal Jongeneel (Intern coordinator).

Roel Bierens <rbierens=>deloitte.nl>
Chantal Jongeneel <cjongeneel=>deloitte.nl>

Mike Slotboom <Mike.Slotboom=>os3.nl>
Ivar Slotboom <ivar.slotboom=>os3.nl>


Usage Control in the Mobile Cloud.

Mobile clouds [1] aim to integrate mobile computing and sensing with the rich computational resources offered by cloud back-ends. They are particularly useful in services such as transportation and healthcare, where they are used to collect, process and present data from the physical world. In this thesis, we will focus on the usage control, in particular the privacy, of the collected data pertinent to mobile clouds. Usage control [2] differs from traditional access control by enforcing security requirements not only on the release of data but also on what happens afterwards. The thesis will involve the following steps:
  • Propose an architecture over cloud for "usage control as a service" (extension of authorization as a service) for the enforcement of usage control policies
  • Implement the architecture (compatible with Openstack[3] and Android) and evaluate its performance.
[1] https://en.wikipedia.org/wiki/Mobile_cloud_computing
[2] Jaehong Park, Ravi S. Sandhu: The UCONABC usage control model. ACM Trans. Inf. Syst. Secur. 7(1): 128-174 (2004)
[3] https://en.wikipedia.org/wiki/OpenStack
[4] Slim Trabelsi, Jakub Sendor: "Sticky policies for data control in the cloud" PST 2012: 75-80
Fatih Turkmen <F.Turkmen=>uva.nl>
Yuri Demchenko <y.demchenko=>uva.nl>


Security of embedded technology.

Analyzing the security of embedded technology, which operates in an ever-changing environment, is Riscure's primary business. Therefore, research and development (R&D) is of utmost importance for Riscure to stay relevant. The R&D conducted at Riscure focuses on four domains: software, hardware, fault injection and side-channel analysis. Potential SNE Master projects can be shaped around topics in any of these fields. We would like to invite interested students to discuss a potential Research Project at Riscure in any of the mentioned fields. Projects will be shaped according to the requirements of the SNE Master.
Please have a look at our website for more information: https://www.riscure.com
Previous Research Projects conducted by SNE students:
  1. https://www.os3.nl/_media/2013-2014/courses/rp1/p67_report.pdf
  2. https://www.os3.nl/_media/2011-2012/courses/rp2/p61_report.pdf
  3. http://rp.delaat.net/2014-2015/p48/report.pdf
  4. https://www.os3.nl/_media/2011-2012/courses/rp2/p19_report.pdf
If you want to see what the atmosphere is at Riscure, please have a look at: https://vimeo.com/78065043
Please let us know if you have any additional questions!
Ronan Loftus <loftus=>riscure.com>
Alexandru Geana <Geana=>riscure.com>
Karolina Mrozek <Mrozek=>riscure.com>
Dana Geist <geist=>riscure.com>




Video broadcasting manipulation detection.

This project concerns the detection of the manipulation of broadcast video streams with facial morphing on the internet. Examples are provided in https://dl.acm.org/citation.cfm?id=2818122 and other online sources.
Zeno Geradts <Z.J.M.H.Geradts=>uva.nl>


The Serval Project:

Making a low-cost, scalable tsunami and all-hazards warning system with integrated FM radio transmitter.

The Sulawesi earthquake reminded us of the significant gap that exists between the generation of tsunami (and other hazard) warnings, which works quite well, and the means of getting those warnings out to the small isolated coastal communities that need them. There is a need for a low-cost and scalable solution for providing early-warning capabilities. Such a system also needs to be useful year-round, so that it will be maintained and will work when needed. For this reason, we are building an FM radio juke-box into the system. In this project, you will help to advance this project from proof-of-concept to prototype stage, by assisting with the development of the radio juke-box client software as well as the back-end middleware for feeding alerts to the satellite up-link and receiving them on the terminal equipment.
Paul Gardner-Stephen <paul.gardner-stephen=>flinders.edu.au>


The Serval Project:

Shaking down the Serval Mesh Extender

The Serval Mesh Extender is a ruggedised solar-powered peer-to-peer mesh communications system that allows isolated communities to have local communications and, through interconnection with HF digital radios and other means, connects those communities together. The system now largely works, but effort is required to test the system more thoroughly under realistic conditions, and to document and then eliminate software issues that interfere with the efficient operation of the system. This will occur through interaction with a remotely accessed semi-automatic test bed network.
Paul Gardner-Stephen <paul.gardner-stephen=>flinders.edu.au>


The Serval Project:

Security through Simplicity: creating an open-source smartphone-like device.

SPECTRE and MELTDOWN, and the 20 years it took to discover these vulnerabilities, have unambiguously shown that the complexity of modern computing devices has grown to the point where verification of security is simply impossible. Yet we still need strong assurances of security to support many uses of modern technology. However, things were not always like this. Computers of the 80s and 90s were simple enough that both hardware and software could be verified. Therefore we are creating an open-source smartphone-like device based on an improved evolution of the well-known Commodore 64 architecture implemented in FPGA. We have working bench-top prototypes, are moving to prototype hardware, and are looking for both IT/CS and electronic engineering students to help move the project to the prototype stage, test the hardware, and implement the software, so that we can have usable test devices in 2019.
Paul Gardner-Stephen <paul.gardner-stephen=>flinders.edu.au>


Cross-blockchain oracle.

Interconnection between different blockchain instances, and smart contracts residing on those, will be essential for a thriving multi-blockchain business ecosystem. Technologies like hashed timelock contracts (HTLC) enable atomic swaps of cryptocurrencies and tokens between blockchains. A next challenge is the cross-blockchain oracle, where the status of an oracle value on one blockchain enables or prevents a transaction on another blockchain.
The goal of this research project is to explore the possibilities, impossibilities, trust assumptions, security and options for a cross-blockchain oracle, as well as to provide a minimal viable implementation.
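The HTLC building block mentioned above can be sketched as a small state machine: funds are claimable with the hashlock preimage before a deadline, and refundable to the sender afterwards. A chain-agnostic Python illustration, not tied to any real blockchain's contract API; the cross-blockchain oracle problem is essentially how to make such a condition depend on state recorded on a different chain:

```python
import hashlib

class HTLC:
    """Minimal hashed timelock contract sketch."""

    def __init__(self, sender, receiver, hashlock: bytes, deadline: float):
        self.sender, self.receiver = sender, receiver
        self.hashlock, self.deadline = hashlock, deadline
        self.settled = None  # who ended up with the funds

    def claim(self, preimage: bytes, now: float) -> bool:
        """Receiver claims by revealing the preimage before the deadline."""
        if (self.settled is None and now < self.deadline
                and hashlib.sha256(preimage).digest() == self.hashlock):
            self.settled = self.receiver
        return self.settled == self.receiver

    def refund(self, now: float) -> bool:
        """Sender recovers the funds once the deadline has passed unclaimed."""
        if self.settled is None and now >= self.deadline:
            self.settled = self.sender
        return self.settled == self.sender
```

Atomic swaps work by deploying two such contracts with the same hashlock on two chains, so that revealing the preimage on one chain also unlocks the other.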
Oskar van Deventer <oskar.vandeventer=>tno.nl>
Maarten Everts <maarten.everts=>tno.nl>


APFS Slack Analysis and Detection of Hidden Data.

Apple recently introduced APFS with macOS High Sierra. The new file system comes with some interesting new features that pose either challenges or opportunities for digital forensics. The goal of this project is to pick one or more relevant features (i.e. encryption, nanosecond timestamps, flexible space allocation, snapshot/cloning, etc.) and reverse engineer their inner workings to come up with a proof-of-concept parsing tool that provides useful input for forensic investigations of Apple systems.
Danny Kielman <danny.kielman=>fox-it.com>

Axel Koolhaas <Axel.Koolhaas=>os3.nl>
Woudt van Steenbergen <woudt.vansteenbergen=>os3.nl>


Man vs the Machine.

Machine learning has advanced to the point where our computer systems can detect malicious activity by baselining large volumes of data and picking out the anomalies and non-conformities. As an example, the finance sector has been using machine learning to detect fraudulent transactions and has been very successful at minimizing the impact of stolen credit card numbers over the past few years.
  • As we further leverage machine learning and other advanced analytics to improve cyber security detection in other industries, what does the role of a cybersecurity analyst evolve into?
  • What are the strengths of machine learning?
  • What are its weaknesses? What activities remain after machine learning?
  • How and when does AI come into the picture?
  • What are the key skills needed to still be relevant?
  • What emerging technologies are contributing to the change?
  • What do new individuals entering cyber security focus on?
  • And what do existing cyber security professionals develop to stay current?
  • What will the industry look like in 2 years? 5 years? 10+ years?
Rita Abrantes <Rita.Abrantes=>shell.com>


Incentivize distributed shared WiFi through VPN on home routers.

Many forms of free WiFi exist, such as ad-based solutions [1], provider initiatives [2], hotel/restaurant/etc. hotspots and the Open Wireless Movement [3]. Security and privacy are important factors for sharing wireless: the provider does not want to be held liable [4] and the client wants privacy.

The RP will consist of creating a protocol plus a Proof of Concept to securely join WiFi networks and share your own network. A client connects to a wireless AP using RADIUS credentials; username = PORT@domain, which indicates which VPN the client will connect to. The AP (an upgraded home WiFi router) only lets clients connect to VPN servers, which run on the client's home router, creating a tunnel between a device (client) and the owner's home router (VPN endpoint).

The client has the VPN location embedded in his 802.1x credentials for the shared SSID (like eduroam) for participating APs. Additionally, the client has a VPN client installed, enabling APs to allow (whitelist) only VPN traffic and a DNS request for VPN endpoint discovery. This creates the safety for joining any wireless network (using the VPN) and sharing your own wireless network (whitelisting VPN traffic) without worrying about liability issues.

This setup will incentivize users to upgrade their routers, giving them more security when connecting to any foreign wireless network (through VPN) and providing access to wireless in more places (which require a VPN to connect).
  1. worldwifi.io
  2. hotspots.wifi.comcast.com
  3. www.eff.org/issues/open-wireless
  4. www.eff.org/wp/open-wi-fi-and-copyright-primer-network-operators
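The credential scheme described above (username = PORT@domain) implies that an AP can derive the VPN endpoint to whitelist purely from the 802.1x username. A sketch of that parsing step, with illustrative names; the real protocol would additionally resolve the domain via DNS and restrict forwarding to the resolved host and port:

```python
def vpn_endpoint(username: str):
    """Derive (domain, port) of the client's home VPN server from
    an 802.1x username of the form PORT@domain, as described in
    the proposal. Raises ValueError on malformed credentials."""
    port_str, _, domain = username.partition("@")
    if not domain or not port_str.isdigit():
        raise ValueError("expected credentials of the form PORT@domain")
    port = int(port_str)
    if not 1 <= port <= 65535:
        raise ValueError("port out of range")
    return domain, port
```

Keeping the endpoint inside the authenticated username means the AP never needs out-of-band configuration per guest, which is what makes the scheme scale across participating home routers.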
Peter Boers <peter.boers=>surfnet.nl>

Sander Lentink <sander.lentink=>os3.nl>


Inventory of smartcard-based healthcare identification solutions in Europe and beyond: technology and adoption.

For potential international adoption of Whitebox technology in the future, in particular the technique of patients carrying authorization codes with them to authorize healthcare professionals, we want to make an inventory of the current status of healthcare PKIs and smartcard technology in Europe and if possible also outside Europe.

Many countries have developed health information exchange systems over the last 1-2 decades, most of them without much regard for what other countries are doing, or for international interoperability. However, common to most systems developed today is the development of a (per-country) PKI for credentials, typically smartcards, that are provided to healthcare professionals to allow the health information exchange system to identify these professionals and to establish their 'role' (or rather: the speciality of a doctor, such as GP, pharmacist, gynaecologist, etc.). We know a few of these smartcard systems, e.g. in Austria and France, but not all of them, and we do not know their degree of adoption.

In this project, we would like students to enquire about and report on the state of the art of healthcare smartcard systems in Europe and possibly outside Europe (e.g., Asia, Russia):
  • what products are rolled out by what companies, backed by what CAs (e.g., governmental, as is the case with the Dutch "UZI" healthcare smartcard)?
  • Is it easy to obtain the relevant CA keys?
  • And what is the adoption rate of these smartcards among GPs, emergency care wards and hospitals in different countries?
  • What are relevant new developments (e.g., contactless solutions) proposed by major stakeholders or industry players in the market?
Note that this project is probably less technical than usual for an SNE student, although it is technically interesting. For comparison, this project may also be fitting for an MBA student.

For more information, see also (in Dutch): https://whiteboxsystems.nl/sne-projecten/#project-2-onderzoek-adoptie-health-smartcards-in-europa-en-daarbuiten
General introduction
Whitebox Systems is a UvA spin-off company working on a decentralized system for health information exchange. Security and privacy protection are key concerns for the products and standards provided by the company. The main product is the Whitebox, a system owned by doctors (GPs) that is used by the GP to authorize other healthcare professionals so that they - and only they - can retrieve information about a patient when needed. Any data transfer is protected end-to-end; central components and central trust are avoided as much as possible. The system will use a published source model, meaning that although we do not give away copyright, the code can be inspected and validated externally.

The Whitebox is currently transitioning from an authorization model that started with doctor-initiated static connections/authorizations, to a model that includes patient-initiated authorizations. Essentially, patients can use an authorization code (a kind of token) that is generated by the Whitebox to authorize a healthcare professional at any point of care (e.g., a pharmacist or a hospital). Such a code may become part of a referral letter or a prescription. This transition gives rise to a number of interesting questions, and thus to possible research projects related to the Whitebox design, implementation and use. Two of these projects are described below. If you are interested in these projects or have questions about other possibilities, please contact <guido=>whiteboxsystems.nl>.

For a more in-depth description of the projects below (in Dutch), please see https://whiteboxsystems.nl/sne-projecten/
Guido van 't Noordende <g.j.vantnoordende=>uva.nl>


Decentralized trust and key management.

Currently, the Whitebox provides a means for doctors (General Practitioners, GPs) to establish static trusted connections with parties they know personally. These connections (essentially, authenticated TLS connections with known, validated keys), once established, can subsequently be used by the GP to authorize the party in question to access particular patient information. Examples are static connections to the GP post which takes care of evening/night and weekend shifts, or to a specific pharmacist. In this model, trust management is intuitive and direct. However, with dynamic authorizations established by patients (see the general description above), the question comes up whether the underlying (trust) connections between the GP practice (i.e., the Whitebox) and the authorized organization (e.g., a hospital or pharmacist) may be reusable as a 'trusted' connection by the GP in the future.

The basic question is:
  • what is the degree of trust a doctor can place in (trust) relations that are established by this doctor's patients, when they authorize another healthcare professional?
More in general:
  • what degree of trust that can be placed in relations/connections established by a patient, also in view of possible theft of authorization tokens held by patients?
  • What kind of validation methods can exist for a GP to increase or validate a given trust relation implied by an authorization action of a patient?
Perhaps the problem can also be raised to a higher level: can (public) auditing mechanisms -- for example, using blockchains -- be used to help establish and validate trust in organizations (technically: the keys of such organizations), in systems that implement decentralized trust-based transactions, as the Whitebox system does?

In this project, the student(s) may either implement part of a solution or design, or model the behavior of a system inspired by the decentralized authorization model of the Whitebox.

As an example: reputation based trust management based on decentralized authorization actions by patients of multiple doctors may be an effective way to establish trust in organization keys, over time. Modeling trust networks may be an interesting contribution to understanding the problem at hand, and could thus be an interesting student project in this context.
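As a toy illustration of the reputation-based idea above, one could count how many distinct patients (possibly of different doctors) have independently authorized the same organization key, and only mark the key as trusted above a threshold. All names and the threshold are illustrative assumptions, not part of the Whitebox design:

```python
from collections import defaultdict

def reputation(events, min_patients=3):
    """Aggregate patient-initiated authorization events into a naive
    per-organization-key trust flag. `events` is a list of
    (org_key, patient_id) pairs; a key becomes 'trusted' once
    authorizations have arrived from enough distinct patients.
    All identifiers here are hypothetical."""
    patients_per_key = defaultdict(set)
    for org_key, patient_id in events:
        patients_per_key[org_key].add(patient_id)
    return {key: len(p) >= min_patients for key, p in patients_per_key.items()}

events = [("pharmacyA", "p1"), ("pharmacyA", "p2"), ("pharmacyA", "p3"),
          ("hospitalB", "p1")]
print(reputation(events))  # {'pharmacyA': True, 'hospitalB': False}
```

A real model would also have to weigh token theft, revocation and the age of authorizations, which is exactly the modelling work the project asks for.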

NB: this project is a rather advanced/involved design and/or modelling project. Students should be confident in their ability to understand and design/model a complex system in the relatively short timeframe provided by an RP2 project -- this project is not for the faint of heart. Once completed, an excellent implementation or evaluation may become the basis for a research paper.

See also (in Dutch): https://whiteboxsystems.nl/sne-projecten/#project-2-ontwerp-van-een-decentraal-vertrouwensmodel
General introduction
Whitebox Systems is a UvA spin-off company working on a decentralized system for health information exchange. Security and privacy protection are key concerns for the products and standards provided by the company. The main product is the Whitebox, a system owned by doctors (GPs) that is used by the GP to authorize other healthcare professionals so that they - and only they - can retrieve information about a patient when needed. Any data transfer is protected end-to-end; central components and central trust are avoided as much as possible. The system will use a published source model, meaning that although we do not give away copyright, the code can be inspected and validated externally.



LDBC Graphalytics.

LDBC Graphalytics is a mature, industrial-grade benchmark for graph-processing platforms. It consists of six deterministic algorithms, standard datasets, synthetic dataset generators, and reference outputs that enable the objective comparison of graph analysis platforms. Its test harness produces deep metrics that quantify multiple kinds of system scalability, such as horizontal/vertical and weak/strong, and of robustness, such as failures and performance variability. The benchmark comes with open-source software for generating data and monitoring performance.

Until recently, graph processing used only common big data infrastructure, that is, with much local and remote memory per core and storage on disk. However, operating separate HPC and big data infrastructures is increasingly unsustainable. The energy and (human) resource costs far exceed what most organizations can afford. Instead, we see a convergence between big data and HPC infrastructure.
For example, next-generation HPC infrastructure includes more cores and hardware threads than ever before. This leads to a large search space for application developers to explore when adapting their workloads to the platform.

To take a step towards a better understanding of performance for graph processing platforms on next-generation HPC infrastructure, we would like to work together with 3-5 students on the following topics:
  1. How to configure graph processing platforms to efficiently run on many/multi-core devices, such as the Intel Knights Landing, which exhibits configurable and dynamic behavior?
  2. How to evaluate the performance of modern many-core platforms, such as the NVIDIA Tesla?
  3. How to set up a fair, reproducible experiment to compare and benchmark graph-processing platforms?
Alex Uta <a.uta=>vu.nl>
Marc X. Makkes <m.x.makkes=>vu.nl>


Normal traffic flow information distribution to detect malicious traffic.

In an era of increasingly encrypted communication, it is getting harder to distinguish normal from malicious traffic. Deep packet inspection is no longer an option, unless the trusted certificate store of the monitored clients is altered. However, NetFlow data might still provide relevant information about the parties involved in the communication and the traffic volumes they exchange. So would it be possible to tell ill-intentioned traffic apart by looking only at the flows, with a little help from content providers such as website owners and mobile application vendors?

The basic idea is to research a framework or a data interchange format between the content providers described above and the monitoring devices. For both a website and a mobile application, such a description can list the authorised online resources that should be used and the expected relative distribution of traffic between them. If such a framework proves to be successful, it can help in alerting on covert-channel malware communication, cross-site scripting and all other types of network communication not initially intended by the original content provider.
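A minimal sketch of such a check might compare the provider-declared traffic distribution against the distribution observed from flow data, flagging undeclared destinations and large deviations. The manifest format, names and tolerance are invented for illustration:

```python
def flow_anomaly(declared, observed, tolerance=0.2):
    """Compare an observed per-destination traffic distribution against
    the one declared by the content provider. Both arguments map
    destination -> fraction of total bytes. Returns destinations that
    deviate beyond `tolerance`, plus any undeclared destination
    (a possible covert channel). Format and names are illustrative."""
    suspicious = []
    for dest, frac in observed.items():
        expected = declared.get(dest)
        if expected is None:
            suspicious.append((dest, "undeclared"))
        elif abs(frac - expected) > tolerance:
            suspicious.append((dest, "deviates"))
    return suspicious

declared = {"cdn.example.com": 0.7, "api.example.com": 0.3}
observed = {"cdn.example.com": 0.65, "api.example.com": 0.25,
            "evil.example.net": 0.10}
print(flow_anomaly(declared, observed))  # [('evil.example.net', 'undeclared')]
```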


Elastic Named Data Network (NDN) for data centric application in cloud environments.

The selection of virtual machines (VMs) must account for the performance requirements of the applications (or application components) to be hosted on them. The performance of components on specific types of VM can be predicted based on static information (e.g. CPU, memory and storage) provided by cloud providers; however, the provisioning overhead for different VM instances and the network performance within one data centre or across different data centres are also important. Moreover, application-specific performance cannot always be easily derived from this static information.

An information catalogue is envisaged that aims to provide a service that can deliver the most up to date cloud resource information to cloud customers to help them use the Cloud better. The goal of this project will be to extend earlier work [1], but will focus on smart performance information discovery. The student will:
  1. Investigate the state of the art for cloud performance information retrieval and cataloguing.
  2. Propose Cloud performance metadata, and prototype a performance information catalogue.
  3. Customize and integrate an (existing) automated performance collection agent with the catalogue.
  4. Enable smart query of performance information from the catalogue using certain metadata.
  5. (Optional) Test the results with the use cases in on-going EU projects like SWITCH.
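The "smart query" of step 4 can be pictured as metadata-based selection over catalogue records. The record fields and query interface below are invented for illustration, not the catalogue's actual schema:

```python
def query_catalogue(records, **criteria):
    """Select performance records whose metadata match the criteria.
    A criterion value may be a literal (equality) or a predicate
    function. Record fields are hypothetical."""
    def ok(r):
        return all(v(r.get(k)) if callable(v) else r.get(k) == v
                   for k, v in criteria.items())
    return [r for r in records if ok(r)]

catalogue = [
    {"vm_type": "m1.small", "region": "eu", "net_gbps": 1.2, "boot_s": 45},
    {"vm_type": "m1.large", "region": "eu", "net_gbps": 9.5, "boot_s": 60},
    {"vm_type": "m1.small", "region": "us", "net_gbps": 1.1, "boot_s": 40},
]
fast_eu = query_catalogue(catalogue, region="eu",
                          net_gbps=lambda g: g is not None and g > 5)
print([r["vm_type"] for r in fast_eu])  # ['m1.large']
```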
Some reading material:
  1. Elzinga, O., Koulouzis, S., Hu, Y., Wang, J., Zhou, H., Martin, P., Taal, A., de Laat, C., and Zhao, Z. (2017), Automatic collector for dynamic cloud performance information, IEEE Networking, Architecture and Storage (NAS), Shenzhen, China, August 7-8, 2017 https://doi.org/10.1109/NAS.2017.8026845
More info: Arie Taal, Paul Martin, Zhiming Zhao
Zhiming Zhao <z.zhao=>uva.nl>

Sean Liao <sean.liao=>os3.nl>


Network aware performance optimization for Big Data applications using coflows.

Optimizing data transmission is crucial to improve the performance of data-intensive applications. In many cases, network traffic control plays a key role in optimising data transmission, especially when data volumes are very large. Data-intensive jobs can often be divided into multiple successive computation stages, e.g., in MapReduce-type jobs. A computation stage relies on the outputs of the previous stage and cannot start until all its required inputs are in place. Inter-stage data transfer involves a group of parallel flows, which share the same performance goal, such as minimising the completion time of the whole group.

CoFlow is an application-aware network control model for cluster-based data centric computing. The CoFlow framework is able to schedule the network usage based on the abstract application data flows (called coflows). However, customizing CoFlow for different application patterns, e.g., choosing proper network scheduling strategies, is often difficult, in particular when the high level job scheduling tools have their own optimizing strategies.
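The benefit of scheduling at coflow rather than flow granularity can be shown with a toy cost model: equal-size flows on a single bottleneck link, comparing per-flow fair sharing against serving coflows one at a time, shortest first (the intuition behind schedulers such as Varys). The numbers and the single-link assumption are simplifications for illustration only:

```python
def avg_cct_fair(coflows, capacity=1.0):
    """Average coflow completion time (CCT) when all flows of all
    coflows share one link fairly. With equal-size flows every flow
    finishes at total_bytes/capacity, so every coflow completes then."""
    total = sum(sum(c) for c in coflows)
    return total / capacity

def avg_cct_sequential(coflows, capacity=1.0):
    """Average CCT when whole coflows are served one at a time,
    smallest total size first; flows within a coflow share the link."""
    t, ccts = 0.0, []
    for c in sorted(coflows, key=sum):
        t += sum(c) / capacity
        ccts.append(t)
    return sum(ccts) / len(ccts)

coflows = [[1.0, 1.0], [1.0, 1.0]]  # two coflows, two unit-size flows each
print(avg_cct_fair(coflows))        # 4.0: both coflows finish together
print(avg_cct_sequential(coflows))  # 3.0: average CCT improves
```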

The project aims to profile the behavior of CoFlow with different computing platforms, e.g., Hadoop and Spark:
  1. Review the existing CoFlow scheduling strategies and related work.
  2. Prototype test applications using big data platforms (including Apache Hadoop, Spark, Hive, Tez).
  3. Set up a coflow test bed (Aalo, Varys etc.) using existing CoFlow installations.
  4. Benchmark the behavior of CoFlow in different application patterns, and characterise the behavior.
Background reading:
  1. CoFlow introduction: http://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-211.pdf
  2. Junchao Wang, Huan Zhou, Yang Hu, Cees de Laat and Zhiming Zhao, Deadline-Aware Coflow Scheduling in a DAG, in NetCloud 2017, Hong Kong, to appear [upon request]
More info: Junchao Wang, Spiros Koulouzis, Zhiming Zhao
Zhiming Zhao <z.zhao=>uva.nl>


Elastic data services for time critical distributed workflows.

Large-scale observations over extended periods of time are necessary for constructing and validating models of the environment. Therefore, it is necessary to provide advanced computational networked infrastructure for transporting large datasets and performing data-intensive processing. Data infrastructures manage the lifecycle of observation data and provide services for users and workflows to discover, subscribe and obtain data for different application purposes. In many cases, applications have high performance requirements, e.g., disaster early warning systems.

This project focuses on data aggregation and processing use-cases from European research infrastructures, and investigates how to optimise infrastructures to meet critical time requirements of data services, in particular for different patterns of data-intensive workflow. The student will use some initial software components [1] developed in the ENVRIPLUS [2] and SWITCH [3] projects, and will:
  1. Model the time constraints for the data services and the characteristics of data access patterns found in given use cases.
  2. Review the state of the art technologies for optimising virtual infrastructures.
  3. Propose and prototype an elastic data service solution based on a number of selected workflow patterns.
  4. Evaluate the results using a use case provided by an environmental research infrastructure.
  1. https://staff.fnwi.uva.nl/z.zhao/software/drip/
  2. http://www.envriplus.eu
  3. http://www.switchproject.eu
More info: Spiros Koulouzis, Paul Martin, Zhiming Zhao
Zhiming Zhao <z.zhao=>uva.nl>


Contextual information capture and analysis in data provenance.

Tracking the history of events and the evolution of data plays a crucial role in data-centric applications for ensuring reproducibility of results, diagnosing faults, and performing optimisation of data-flow. Data provenance systems [1] are a typical solution, capturing and recording the events generated in the course of a process workflow using contextual metadata, and providing querying and visualisation tools for use in analysing such events later.

Conceptual models such as W3C PROV (and extensions such as ProvONE), OPM and CERIF have been proposed to describe data provenance, and a number of different solutions have been developed. Choosing a suitable provenance solution for a given workflow system or data infrastructure requires consideration of not only the high-level workflow or data pipeline, but also performance issues such as the overhead of event capture and the volume of provenance data generated.

The project will be conducted in the context of EU H2020 ENVRIPLUS project [1, 2]. The goal of this project is to provide practical guidelines for choosing provenance solutions. This entails:
  1. Reviewing the state of the art for provenance systems.
  2. Prototyping sample workflows that demonstrate selected provenance models.
  3. Benchmarking the results of sample workflows, and defining guidelines for choosing between different provenance solutions (considering metadata, logging, analytics, etc.).
  1. About project: http://www.envriplus.eu
  2. Provenance background in ENVRIPLUS: https://surfdrive.surf.nl/files/index.php/s/uRa1AdyURMtYxbb
  3. Michael Gerhards, Volker Sander, Torsten Matzerath, Adam Belloum, Dmitry Vasunin, and Ammar Benabdelkader. 2011. Provenance opportunities for WS-VLAM: an exploration of an e-science and an e-business approach. In Proceedings of the 6th workshop on Workflows in support of large-scale science (WORKS '11). http://dx.doi.org/10.1145/2110497.2110505
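To make step 2 concrete, a prototype could start from a minimal, PROV-inspired capture layer such as the sketch below; the class, relation names and workflow steps are illustrative stand-ins for a real provenance system (W3C PROV, ProvONE, etc.), used only to make capture overhead and data volume measurable:

```python
import json
import time

class ProvenanceLog:
    """Minimal event capture: records which activity used or generated
    which entity, with a timestamp. Relation names echo W3C PROV but
    this is not a PROV implementation."""
    def __init__(self):
        self.records = []

    def record(self, activity, relation, entity):
        self.records.append({"activity": activity, "relation": relation,
                             "entity": entity, "t": time.time()})

    def to_json(self):
        return json.dumps(self.records)

log = ProvenanceLog()
log.record("filter_step", "used", "raw_observations.csv")
log.record("filter_step", "generated", "clean_observations.csv")
print(len(log.records))  # 2
```

Benchmarking would then measure, per workflow run, the time spent in `record` calls and the size of `to_json()` output against the provenance detail captured.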
More info: Zhiming Zhao, Adam Belloum, Paul Martin
Zhiming Zhao <z.zhao=>uva.nl>


Profiling Partitioning Mechanisms for Graphs with Different Characteristics.

In computer systems, graphs are an important model for describing many things, such as workflows, virtual infrastructures and ontological models. Partitioning is a frequently used graph operation in contexts like parallelizing workflow execution, mapping networked infrastructures onto distributed data centres [1], and balancing resource load. However, developing an effective partitioning solution is often not easy; it is typically a complex optimization problem involving constraints such as system performance and cost.

A comprehensive benchmark of graph partitioning mechanisms is helpful for choosing a partitioning solver for a specific model. Such a portfolio can also give advice on how to partition based on the characteristics of the graph. This project aims to benchmark existing partitioning algorithms on graphs with different characteristics, and to profile their applicability for specific types of graphs.
This project will be conducted in the context of the EU SWITCH [2] project. The students will:
  1. Review the state of the art of graph partitioning algorithms and related tools, such as Chaco, METIS and KaHIP.
  2. Investigate how to define the characteristics of a graph, such as sparse or skewed graphs. This can also be discussed for different graph models, like planar graphs, DAGs and hypergraphs.
  3. Build a benchmark for different types of graphs with various partitioning mechanisms and identify the relationships behind the results.
  4. Discuss how to choose a partitioning mechanism based on the graph characteristics.
Reading material:
  1. Zhou, H., Hu Y., Wang, J., Martin, P., de Laat, C. and Zhao, Z., (2016) Fast and Dynamic Resource Provisioning for Quality Critical Cloud Applications, IEEE International Symposium On Real-time Computing (ISORC) 2016, York UK http://dx.doi.org/10.1109/ISORC.2016.22
  2. SWITCH: www.switchproject.eu
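The core quantity such a benchmark reports is the edge cut of a bisection. The sketch below computes it and pairs it with a deliberately naive local-search partitioner, to illustrate what would be compared against real solvers like METIS or KaHIP (which also enforce balance constraints this toy ignores):

```python
def edge_cut(edges, part):
    """Number of edges crossing between the two parts: the basic
    quality metric a partitioning benchmark reports."""
    return sum(1 for u, v in edges if part[u] != part[v])

def greedy_bisect(nodes, edges):
    """Toy baseline: split the node list in half, then flip single
    nodes while the cut improves. Illustrative only; it can get stuck
    in local optima and does not enforce balance."""
    half = len(nodes) // 2
    part = {n: i < half for i, n in enumerate(nodes)}
    best = edge_cut(edges, part)
    improved = True
    while improved:
        improved = False
        for n in nodes:
            part[n] = not part[n]
            cut = edge_cut(edges, part)
            if cut < best:
                best, improved = cut, True
            else:
                part[n] = not part[n]  # revert the flip
    return part, best

# Two triangles joined by one edge: the optimal bisection cuts 1 edge.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("d", "f"), ("e", "f"), ("c", "d")]
part, cut = greedy_bisect(["a", "d", "b", "e", "c", "f"], edges)
print(cut)  # 1
```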

More info: Huan Zhou, Arie Taal, Zhiming Zhao

Zhiming Zhao <z.zhao=>uva.nl>


Auto-Tuning for GPU Pipelines and Fused Kernels.

Achieving high performance on many-core accelerators is a complex task, even for experienced programmers. This task is made even more challenging by the fact that, to achieve high performance, code optimization is not enough, and auto-tuning is often necessary. The reason for this is that computational kernels running on many-core accelerators need ad-hoc configurations that are a function of kernel, input, and accelerator characteristics to achieve high performance. However, tuning kernels in isolation may not be the best strategy for all scenarios.

Imagine having a pipeline that is composed of a certain number of computational kernels. You can tune each of these kernels in isolation, and find the optimal configuration for each of them. Then you can use these configurations in the pipeline, and achieve some level of performance. But these kernels may depend on each other, and may also influence each other. What if the choice of a certain memory layout for one kernel causes performance degradation on another kernel?

One of the existing optimization strategies to deal with pipelines is to fuse kernels together, to simplify execution patterns and decrease overhead. In this project we aim to measure the performance of accelerated pipelines in three different tuning scenarios:
  1. tuning each component in isolation,
  2. tuning the pipeline as a whole, and
  3. tuning the fused kernel.
By measuring the performance of one or more pipelines in these scenarios we hope, on one level, to determine which is the best strategy for specific pipelines on different hardware platforms, and on another level, to better understand which characteristics influence this behavior.
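The difference between scenarios 1 and 2 can be shown with an invented cost model: each kernel prefers its own memory layout, but a layout mismatch between adjacent kernels adds a conversion cost. All numbers are fabricated purely to illustrate why per-kernel optima need not compose into a pipeline optimum:

```python
# Hypothetical per-kernel runtimes for two memory layouts (invented).
KERNEL_TIME = {
    ("k1", "row"): 1.0, ("k1", "col"): 1.4,
    ("k2", "row"): 2.0, ("k2", "col"): 1.1,
}
CONVERT = 1.5  # invented cost of converting layout between kernels

def pipeline_time(l1, l2):
    t = KERNEL_TIME[("k1", l1)] + KERNEL_TIME[("k2", l2)]
    return t + (CONVERT if l1 != l2 else 0.0)

# Scenario 1: tune each kernel in isolation -> row for k1, col for k2.
isolated = pipeline_time("row", "col")   # 1.0 + 1.1 + 1.5 = 3.6
# Scenario 2: tune the pipeline as a whole -> search all combinations.
whole = min(pipeline_time(a, b)
            for a in ("row", "col") for b in ("row", "col"))
print(isolated, whole)  # 3.6 2.5
```

Here whole-pipeline tuning picks ("col", "col"): k2's preference wins because avoiding the conversion outweighs k1's slowdown.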
Rob van Nieuwpoort <R.vanNieuwpoort=>uva.nl>


Speeding up next generation sequencing of potatoes

Genotype and single-nucleotide polymorphism (SNP) calling is a technique to find bases in next-generation sequencing data that differ from a reference genome. This technique is commonly used in (plant) genetic research. However, most algorithms support calling in diploid heterozygous organisms (specifically human) only. Within the realm of plant breeding, many species are of polyploid nature (e.g. potato with 4 copies, wheat with 6 copies and strawberry with 8 copies). For genotype and SNP calling in these organisms, only a few algorithms exist, such as freebayes (https://github.com/ekg/freebayes). However, with the increasing amount of next-generation sequencing data being generated, we are noticing limits to the scalability of this methodology, both in compute time and memory consumption (>100 GB).

We are looking for a student with a background in computer science, who will perform the following tasks:

  • Examine the current implementation of the freebayes algorithm
  • Identify bottlenecks in memory consumption and compute performance
  • Come up with an improved strategy to reduce memory consumption of the freebayes algorithm
  • Come up with an improved strategy to execute this algorithm on a cluster with multiple CPUs or on GPUs (using the memory of multiple compute nodes)
  • Implement an improved version of freebayes, according to the guidelines established above
  • Test the improved algorithm on real datasets of potato.
This is a challenging master thesis project on an important food crop (potato) on a problem which is relevant for both science and industry. As part of the thesis, you will be given the opportunity to present your progress/results to relevant industrial partners for the Dutch breeding industry.

Occasional traveling to Wageningen will be required.
Rob van Nieuwpoort <R.vanNieuwpoort=>uva.nl>


Auto-tuning for Power Efficiency.

Auto-tuning is a well-known optimization technique in computer science. It has been used to ease the manual optimization process that is traditionally performed by programmers, and to maximize performance portability. Auto-tuning works by executing the code that has to be tuned many times on a small problem set, with different tuning parameters. The best-performing version is then used for the real problems. Tuning can be done with application-specific parameters (different algorithms, granularity, convergence heuristics, etc.) or platform parameters (number of parallel threads used, compiler flags, etc.).
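The mechanism just described can be sketched in a few lines: sweep the parameter space on a small input, time each configuration, keep the fastest. The kernel and its `block_size` parameter are toy stand-ins, not a real GPU code:

```python
import itertools
import time

def auto_tune(kernel, param_space, data):
    """Exhaustively run `kernel` with every combination of tuning
    parameters on a small problem and return the fastest
    configuration: the essence of auto-tuning."""
    best_cfg, best_t = None, float("inf")
    for values in itertools.product(*param_space.values()):
        cfg = dict(zip(param_space.keys(), values))
        t0 = time.perf_counter()
        kernel(data, **cfg)
        dt = time.perf_counter() - t0
        if dt < best_t:
            best_cfg, best_t = cfg, dt
    return best_cfg

def sum_kernel(data, block_size=64):
    # Toy stand-in: chunked summation; block_size mimics a tunable
    # granularity parameter such as a GPU thread-block size.
    return sum(sum(data[i:i + block_size])
               for i in range(0, len(data), block_size))

cfg = auto_tune(sum_kernel, {"block_size": [16, 64, 256]},
                list(range(10000)))
print(sorted(cfg))  # ['block_size']
```

For the power-tuning project described below, the timing evaluation function would be replaced by an energy measurement.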

For this project, we apply auto-tuning on GPUs. We have several GPU applications where the absolute performance is not the most important bottleneck for the application in the real world. Instead the power dissipation of the total system is critical. This can be due to the enormous scale of the application, or because the application must run in an embedded device. An example of the first is the Square Kilometre Array, a large radio telescope that currently is under construction. With current technology, it will need more power than all of the Netherlands combined. In embedded systems, power usage can be critical as well. For instance, we have GPU codes that make images for radar systems in drones. The weight and power limitations are an important bottleneck (batteries are heavy).

In this project, we use power dissipation as the evaluation function for the auto-tuning system. Earlier work by others investigated this, but only for a single compute-bound application. However, many realistic applications are memory-bound. This is a problem, because loading a value from the L1 cache can already take 7-15x more energy than an instruction that only performs a computation (e.g., multiply).

There are also interesting platform parameters that can be changed in this context. It is possible to change both core and memory clock frequencies, for instance. It will be interesting to see whether we can, at runtime, achieve the optimal balance between these frequencies.

We want to perform auto-tuning on a set of GPU benchmark applications that we developed.
Rob van Nieuwpoort <R.vanNieuwpoort=>uva.nl>


Applying and Generalizing Data Locality Abstractions for Parallel Programs.

TIDA is a library for high-level programming of parallel applications, focusing on data locality. TIDA has been shown to work well for grid-based operations, like stencils and convolutions. These are an important building block for many simulations in astrophysics, climate simulations and water management, for instance. The TIDA paper gives more details on the programming model.

This projects aims to achieve several things and answer several research questions:

  • TIDA currently only works with up to 3D. In many applications we have, higher dimensionalities are needed. Can we generalize the model to N dimensions?
  • The model currently only supports a two-level hierarchy of data locality. However, modern memory systems often have many more levels, both on CPUs and GPUs (e.g., L1, L2 and L3 cache, main memory, memory banks coupled to a different core, etc.). Can we generalize the model to support N-level memory hierarchies?
  • The current implementation only works on CPUs; can we generalize to GPUs as well?
  • Given the above generalizations, can we still implement the model efficiently? How should we perform the mapping from the abstract hierarchical model to a real physical memory system?
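The N-dimensional generalization of the first question is mechanically straightforward at the index level, as this sketch of a TIDA-style tile enumerator shows (the interesting research questions are in mapping such tiles onto real memory hierarchies, which this does not address):

```python
from itertools import product

def tiles(shape, tile):
    """Enumerate tile origins for an N-dimensional array: a direct
    index-level generalization of tiling beyond 3D. Yields the start
    index of each tile along every dimension."""
    ranges = [range(0, s, t) for s, t in zip(shape, tile)]
    yield from product(*ranges)

# 4D example: a (4, 4, 2, 2) array split into (2, 2, 2, 2) tiles.
origins = list(tiles((4, 4, 2, 2), (2, 2, 2, 2)))
print(len(origins))  # 4 tiles: 2 x 2 x 1 x 1
```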

We want to test the new extended model on a real application. We have examples available in many domains. The student can pick one that is of interest to her/him.
Rob van Nieuwpoort <R.vanNieuwpoort=>uva.nl>


Ethereum Smart Contract Fuzz Testing.

An Ethereum smart contract can be seen as a computer program that runs on the Ethereum Virtual Machine (EVM), with the ability to accept, hold and transfer funds programmatically. Once a smart contract has been placed on the blockchain, it can be executed by anyone. Furthermore, many smart contracts accept user input. Because smart contracts operate on a cryptocurrency with real value, their security is of the utmost importance. I would like to create a smart contract fuzzer that will check for unexpected behaviour or crashes of the EVM. Based on preliminary research, such a fuzzer does not exist yet.
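The input-generation half of such a fuzzer could start from classic byte-level mutation of ABI-encoded call data, as sketched below; the EVM under test is not included here, and the execution side (feeding `inputs` to a contract and watching for reverts or VM crashes) is where the actual research would go:

```python
import random

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """One mutation round for a smart-contract fuzzer: flip, insert or
    delete random bytes of an ABI-encoded input."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        op = rng.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            i = rng.randrange(len(data))
            data[i] ^= 1 << rng.randrange(8)   # flip one random bit
        elif op == "insert":
            data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
        elif op == "delete" and data:
            del data[rng.randrange(len(data))]
    return bytes(data)

rng = random.Random(42)
# ERC-20 transfer(address,uint256) selector plus zeroed arguments.
seed = bytes.fromhex("a9059cbb") + b"\x00" * 64
inputs = [mutate(seed, rng) for _ in range(5)]
print(all(isinstance(i, bytes) for i in inputs))  # True
```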
Rodrigo Marcos <rodrigo.marcos=>secforce.com>


Tunneling data over a Citrix virtual channel.

Citrix provides services for remote virtual desktop infrastructure (VDI / XenDesktop) or application virtualization (XenApp). Citrix is sometimes used as a security measure to sandbox the execution of sensitive applications (e.g. a financial application that may only be run from a single server, with the users that require access connecting to the virtual desktop). The organization then sets additional restrictions: no access to clipboard data, no access to shared drives, and no outbound connectivity, in order to prevent data leaks.
Citrix is built on top of traditional Windows technologies such as RDP to establish the connection to the virtualized desktop infrastructure. RDP has the capability to extend the remote desktop session with clipboard management, attaching of printers and sound devices, and drive mapping. Additionally, it is possible to create plugins to provide other functionalities.

The rdp2tcp project features the possibility to tunnel TCP connections (TCP forwarding) over a remote desktop session. This means no extra ports have to be opened.
We would like to investigate whether it is possible to establish a TCP tunnel over a Citrix virtual desktop session. This would allow routing of traffic through the Citrix server, potentially providing the ability to move laterally through the network in order to access systems connected to the Citrix server (that are not directly exposed to the Internet).
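Whatever transport the virtual channel offers, the tunnel would need to multiplex TCP streams over a single byte stream, for which simple length-prefixed framing (the same idea rdp2tcp uses over RDP) suffices. The layout below (4-byte length, 2-byte stream id, payload) is illustrative, not Citrix's or rdp2tcp's wire format:

```python
import struct

def frame(stream_id: int, payload: bytes) -> bytes:
    """Wrap one TCP segment for transport over the virtual channel:
    big-endian 4-byte payload length, 2-byte stream id, payload."""
    return struct.pack(">IH", len(payload), stream_id) + payload

def unframe(buf: bytes):
    """Parse one frame; returns (stream_id, payload, remaining_bytes),
    so frames can be peeled off a receive buffer one at a time."""
    length, stream_id = struct.unpack(">IH", buf[:6])
    return stream_id, buf[6:6 + length], buf[6 + length:]

sid, data, rest = unframe(frame(7, b"GET / HTTP/1.1\r\n"))
print(sid, data)  # 7 b'GET / HTTP/1.1\r\n'
```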

Find here the video from the presentation: RP40 Presentatie Demo Video.mp4
Cedric Van Bockhaven <cvanbockhaven=>deloitte.nl>

Ward Bakker <Ward.Bakker=>os3.nl>
Niels den Otter <notter=>os3.nl>


Generating probable password candidates for the offline assessment of Dutch domain password hashes.

Although password authentication is not considered to be the most secure authentication method, it still is a reasonable option in practice today, mainly because of usability and deployability characteristics.
From early on, password authentication has been the target of attacks. As a result, techniques and procedures concerning password authentication have been improved, e.g.:
  • Efficient attacks using rainbow tables have been introduced to enable pre-computed hash lookups. To mitigate such attacks, among others, password policies and salts have been used.
  • Graphics processing units (GPUs) are being utilized for guessing large amounts of password candidates per second. To counter such attacks, processing expensive and memory intensive hashing algorithms have been developed.
Our research focuses on assessing the strength of Dutch domain passwords by taking Dutch domain related breach corpus data as a starting point. The results could be valuable to support security assessments in practice, e.g. red teaming exercises, and the further development of preventive measures to ensure stronger password selection for Dutch domain services.
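Candidate generation from such a corpus typically applies mangling rules to breached base words. The sketch below shows a tiny rule set (capitalization, year and symbol suffixes); real rule sets, e.g. hashcat rule files, are far larger, and the specific rules here are invented for illustration:

```python
def candidates(base_words, years=("2019", "2020")):
    """Expand breach-corpus base words into password candidates using
    a few common mangling rules. Illustrative only."""
    out = []
    for w in base_words:
        for v in (w, w.capitalize()):   # original and capitalized form
            out.append(v)
            out.extend(v + y for y in years)  # year suffixes
            out.append(v + "!")               # symbol suffix
    return out

print(len(candidates(["fiets"])))  # 8 candidates from one base word
```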
Pim Campers <Pim.Campers=>secura.com>

Tom Broumels <Tom.Broumels=>os3.nl>


Digital Forensic Investigation of Data Theft on the Google Cloud Platform.

The Mitre GCP Matrix [1] displays 9 tactics to gain access on different levels on the Google Cloud Platform, the third most popular cloud platform. One of these tactics, called “Collection”, is getting access to data of interest from either a specific target or just anyone possible. The next goal after collecting data is to steal (exfiltrate) the data. In most cases, metadata could also be interesting.

A common problem with public cloud users is that they often do not configure their cloud storage solutions properly. The storage can easily remain publicly accessible to the rest of the world instead of limiting access to just their application. Companies do not want their data to be viewed or exfiltrated by unauthorized parties. Our research will focus on the early detection and mitigation of the misuse of improperly secured cloud storage with the GCP-provided tooling.

[1] https://attack.mitre.org/matrices/enterprise/cloud/gcp/
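A first detection step is checking bucket IAM policies for the `allUsers` and `allAuthenticatedUsers` principals, which is what makes storage world-readable. The sketch below parses a policy in the JSON shape returned by `gsutil iam get gs://bucket`; the sample policy itself is fabricated:

```python
import json

def public_bindings(policy_json: str):
    """Return the roles of IAM bindings that expose a Cloud Storage
    bucket publicly (members allUsers or allAuthenticatedUsers)."""
    policy = json.loads(policy_json)
    public = {"allUsers", "allAuthenticatedUsers"}
    return [b["role"] for b in policy.get("bindings", [])
            if public & set(b.get("members", []))]

sample = json.dumps({"bindings": [
    {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
    {"role": "roles/storage.admin", "members": ["user:alice@example.com"]},
]})
print(public_bindings(sample))  # ['roles/storage.objectViewer']
```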
Korstiaan Stam <korstiaan.stam=>pwc.com>

Frank Wiersma <frank.wiersma=>os3.nl>
Tjeerd Slokker <tjeerd.slokker=>os3.nl>


Anomaly Detection on Log Files Based on Simplicity Theory.

As humans know from common sense -- and cognitive studies confirm -- events are relevant to subjects when they are exceptional (for them) or when they (potentially) might have positive or negative impact on their desires or interests. The goal of this project is to investigate how to develop similar relevance mechanisms in computational settings in order to provide adaptive monitoring. Intuitively, the system needs to form an idea of normality from observations, and use it to evaluate whether and to what extent a new observation is exceptional. Second, the system should be provided with a reward model (possibly specified at design time, but which could be modified or refined dynamically) and use it to evaluate the potential impact of a new observation. Once implemented, these filters of relevance could be used for instance in a monitoring application to highlight to the user where to pay further attention. The target domains of such an application vary widely, for instance networking, social systems, etc. The objectives of this study are to:
  • investigate computational models for relevance, drawing from existing literature (information theory, algorithmic information theory, simplicity theory, etc.)
  • decide an application domain and settle upon an associated representational model
  • develop the functions necessary for relevance, e.g. prototyping and reward model; and the mechanisms quantifying relevance
  • build a prototype for the target application domain
  • Dessalles, J. L. (2013). Algorithmic simplicity and relevance. Algorithmic Probability and Friends, 7070 LNAI, 119–130.
  • Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.
  • Lindenmayer, D. B., & Likens, G. E. (2009). Adaptive monitoring: a new paradigm for long-term research and monitoring. Trends in Ecology and Evolution, 24(9), 482–486.
  • Domshlak, C., Hüllermeier, E., Kaci, S., & Prade, H. (2011). Preferences in AI: An overview. Artificial Intelligence, 175(7–8), 1037–1052.
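The "idea of normality" above can be prototyped very simply: learn template frequencies from past log events and score a new event by its unexpectedness, here as negative log probability. This is a crude stand-in for the complexity-based measures of simplicity theory, and the log templates are invented:

```python
import math
from collections import Counter

def surprise(history: Counter, template: str) -> float:
    """Unexpectedness of a log event template: -log2 of its observed
    frequency in `history`; unseen templates score infinity."""
    total = sum(history.values())
    count = history.get(template, 0)
    if count == 0:
        return float("inf")
    return -math.log2(count / total)

history = Counter({"login ok": 900, "login failed": 99, "disk full": 1})
print(round(surprise(history, "login ok"), 2))   # 0.15: routine
print(round(surprise(history, "disk full"), 2))  # 9.97: exceptional
print(surprise(history, "kernel panic"))         # inf: never seen
```

A monitoring prototype would surface events above a surprise threshold, and weigh them further with the reward model described above.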
Giovanni Sileno <G.Sileno=>uva.nl>

Giacomo Casoni <Giacomo.Casoni=>os3.nl>
Mar Badias Simo <Mar.BadiasSimo=>os3.nl>


Smart contracts specified as contracts.

Developing a distributed state of mind: from control flow to control structure

The concepts of control flow, of data structure, as well as that of data flow are well established in the computational literature; in contrast, one can find different definitions of control structures, and typically these are not associated with the common use of the term, which refers to the power relationships holding in society or in organizations.

The goal of this project is the design and development of a social architecture language that cross-compiles to a modern concurrent programming language (Rust, Go, or Scala), in order to make explicit a multi-threaded, distributed state of mind, following results obtained in agent-based programming. The starting point will be a minimal language subset of AgentSpeak(L).

Potential applications: controlled machine learning for Responsible AI, control of distributed computation
Giovanni Sileno <G.Sileno=>uva.nl>
Mosata Mohajeriparizi <m.mohajeriparizi=>uva.nl>


Zero Trust Validation.

ON2IT advocates the Zero Trust Validation conceptual strategy [1] to strengthen information security at the architectural level. Zero Trust is often mistakenly perceived as an architectural approach. However, it is, in the end, a strategic approach towards protecting assets regardless of location. To enable this approach, controls are needed to provide sufficient insight (visibility), to exert control, and to provide operational feedback. However, these controls/probes are not naturally available in all environments. Finding ways to embed such controls, and finding/applying them, can be challenging, especially in the context of containerized, cloud and virtualized workflows.

At the strategic level, Zero Trust is not sufficiently perceived as a value contributor. At the managerial level, it is perceived mainly as an architectural ‘toy’. This makes it hard to translate a Zero Trust strategic approach to the operational level; there’s a lack of overall coherence. For this reason, ON2IT developed a Zero Trust Readiness Assessment framework that facilitates testing the readiness level on three levels: governance, management, and operations.

Research (sub)questions that emerge:
  • What is missing in the current approach of ZTA to make it resonate with the board?
    • What are Critical Success Factors for drafting and implementing ZTA?
    • What is an easy to consume capability maturity or readiness model for the adoption of ZTA that guides boards and management teams in making the right decisions?
    • What does a management portal with associated KPIs need to offer in order to enable board and management to manage and monitor the ZTA implementation process and take appropriate ownership?
    • How do we add the necessary controls, and how do we efficiently leverage the control and monitoring facilities thus provided?
  1. Zero Trust Validation
  2. "On Exploring Research Methods for Business Information Security Alignment and Artefact Engineering" by Yuri Bobbert, University of Antwerp
Jeroen Scheerder <Jeroen.Scheerder=>on2it.net>


Improving Red Team Reconnaissance.

During red teaming exercises it is of vital importance for the red team to know when the blue team has recognised their actions and is investigating their artefacts. Having such knowledge gives the red team the opportunity to either brace for impact, clean up their channels and lay low, change C2-channels, or otherwise adjust their attacks to perhaps remain hidden from the blue team. This is all done in order to be a better sparring partner for the blue team and give them better training. During our red team exercises we make use of many different ways of detecting blue team activities. As we believe the entire red teaming industry needs to improve, we have open sourced some of these checks in our RedELK tooling(1). More info on our approach and details of RedELK here(2).

We are always looking for novel ways to detect blue team activity. Recently, research was disclosed in which adwords are used for this purpose (3). We want students to investigate the feasibility of using adwords for detecting blue team activity, and to deliver a fully working PoC. The analysis should cover effectiveness as well as ease of setup and the possibility of inclusion in RedELK.

We are open to other novel ways for detection of blue team activities. This RP can easily be changed to your liking if you have another novel technique and hypothesis that fits the end goal. Get in touch if you have such an idea.

Students preferably already have experience in either offensive or defensive IT operations.
  1. https://github.com/outflanknl/RedELK
  2. https://www.youtube.com/watch?v=OjtftdPts4g
  3. https://www.youtube.com/watch?v=wlKqyuefE1E
Marc Smeets <marc=>outflank.nl>




Industrial Control System research.

Interface between third-party software and an embedded OS

In industrial systems, vendors only provide support up to a certain patch level of an OS. However, OS versions rely on patching to solve security issues. Is it possible to develop an interface between the software and the OS in such a way that it is possible to maintain both availability and security?

Vulnerability assessment of Safety instrumented system (SIS)

SIS are promoted as being more secure, reliable, and redundant. Is this true, or are these systems still vulnerable? How much more secure are these systems really? What are the differences between PLCs and SIS?
Dima van de Wouw <dvandewouw=>deloitte.nl>


Eduroam / WPA2-Enterprise Client Testing Suite.

The eduroam wireless roaming service for research and education supports 10,000 campuses across the globe. Technologies such as the configuration assistant tool limit end-user configuration errors, but misconfigurations (accidental or deliberate) still exist in the infrastructure and are often only revealed when an eduroam user visits a particular site. For the deployment of probes as “visiting eduroam users”, the research question is:
  • What is the optimal set of authentication tests from a client to determine correct deployment of a wireless hotspot?
Additional sub-questions that could be explored:
  • With multiple clients at different sites, what additional information can you deduce from authentication failures?
  • What number of probes or set of features are needed to root cause a problem?
  • Which combination of other monitoring logs can be used to determine problems without client testing?
The project will be able to utilise a network of “virtual end users” to evaluate these tests in reality, not just in theory.
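As a concrete starting point for the “optimal set of authentication tests”, one could enumerate the cross-product of common outer/inner EAP methods and certificate-validation behaviour. A minimal sketch (the method names and matrix dimensions are illustrative assumptions, not the project's defined test set):

```python
from itertools import product

# Hypothetical candidate tests for a WPA2-Enterprise/eduroam hotspot probe.
OUTER_METHODS = ["PEAP", "TTLS"]
INNER_METHODS = ["MSCHAPv2", "PAP", "GTC"]
CERT_CHECKS = [True, False]  # validate the RADIUS server certificate or not

def test_matrix():
    """Return the full cross-product of test cases to run from a probe."""
    return [
        {"outer": o, "inner": i, "validate_cert": c}
        for o, i, c in product(OUTER_METHODS, INNER_METHODS, CERT_CHECKS)
    ]

cases = test_matrix()  # 2 outer * 3 inner * 2 cert checks = 12 candidate tests
```

The research question then becomes which subset of this matrix is sufficient to root-cause misconfigurations.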
Klaas Wierenga <klaas.wierenga=>geant.org>

Raoul Mensenkamp <rdijksman=>os3.nl>
Erik Lamers <erik.lamers=>os3.nl>


Analyzing and enhancing embedded software technologies on RISC-V64 using the Ghidra framework.

There is a lack of proper tooling (disassemblers and decompilers) for RISCV64. Some plugins for IDA and Ghidra exist (publicly available on the internet), but are in a proof-of-concept stage. This slows down the progress in reversing and analyzing firmware for this architecture. Since embedded devices are expected to take advantage of this architecture due to its openness, reliable tooling is needed. The task would be to check existing tooling and either improve it if possible, or start from scratch with a solid foundation to which extensions can later be added (once they are frozen in the specs).
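To illustrate the kind of groundwork such tooling builds on, here is a minimal sketch of RV32I instruction-field decoding (field layout per the public RISC-V ISA specification); a real Ghidra or IDA plugin must of course cover the full instruction set, compressed encodings, and 64-bit extensions:

```python
# Decode the fixed bit fields of a 32-bit RISC-V instruction word.
def decode_rv32i(word: int) -> dict:
    return {
        "opcode": word & 0x7F,          # bits [6:0]
        "rd":     (word >> 7) & 0x1F,   # bits [11:7]
        "funct3": (word >> 12) & 0x7,   # bits [14:12]
        "rs1":    (word >> 15) & 0x1F,  # bits [19:15]
        "imm_i":  word >> 20,           # I-type immediate, bits [31:20] (unsigned here)
    }

# 0x00500093 encodes "addi x1, x0, 5"
fields = decode_rv32i(0x00500093)
```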
Alexandru Geana <Geana=>riscure.com>
Karolina Mrozek <Mrozek=>riscure.com>
Dana Geist <Geist=>riscure.com>

Patrick Spaans <pspaans=>os3.nl>
Joris Jonkers Both <Joris.JonkersBoth=>os3.nl>


The influence of the training set size on the performance of the Robust Covariance Estimator as an anomaly detection algorithm on automotive CAN data.

Cars are becoming more connected and networked; as a result, more attack vectors are available on a car.
  • Assessing the security of upcoming protocols for ICS systems, comparing them to each other and also to the current industry standards.
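The distance computation underlying covariance-based anomaly detectors can be sketched in a few lines. This toy example (pure Python, plain rather than robust covariance, invented 2-D sample points such as message interval vs. a payload byte) flags a sample as anomalous when its squared Mahalanobis distance to the training data is large; the robust variant (Minimum Covariance Determinant) additionally down-weights outliers during fitting:

```python
def fit(points):
    """Fit mean and (plain, non-robust) 2x2 covariance of 2-D samples."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return (mx, my), ((sxx, sxy), (sxy, syy))

def mahalanobis_sq(p, mean, cov):
    """Squared Mahalanobis distance via the closed-form 2x2 inverse."""
    (sxx, sxy), (_, syy) = cov
    det = sxx * syy - sxy * sxy
    dx, dy = p[0] - mean[0], p[1] - mean[1]
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

# Fabricated "normal" samples; a far-away point scores much higher.
normal = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (1.0, 1.2), (1.2, 1.0)]
mean, cov = fit(normal)
```

The project's training-set-size question then amounts to how the fitted `mean`/`cov` (and the robust equivalent) stabilise as `normal` grows.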
Colin Schappin <cschappin=>deloitte.nl>

Silke Knossen <silke.knossen=>os3.nl>
Vincent Kieberl <vincent.kieberl=>os3.nl>


Cybersecurity in Automotive Networks.

Automotive vehicles are composed of multiple Electronic Control Units (ECUs), each controlling a subsystem of the vehicle. These include, but are not limited to, engine controls, brakes, locks, climate control, and multimedia systems. In an effort to reduce the amount of interconnections required between these ECUs, Bosch developed the Controller Area Network (CAN) bus, first released in 1986. In this research project we look at the security of the automotive networks themselves. We consider whether measures are taken to protect them against malicious messages and, if not, whether there are extensions that do, and how those affect the performance of the bus.

Research Questions:
  1. Which automotive communication protocols are currently used in production, forming the state of practice?
  2. What features are built into the protocols utilized in the automotive industry to provide security?
  3. What extensions to protocols can be used to introduce security to the protocols?
  4. How do these extensions compare in terms of security, according to the CIA triad and other relevant properties, such as authenticity?
  5. If the extensions provide sufficient security, are there any drawbacks or other consequences that need to be taken into consideration?
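One class of extensions (in the spirit of AUTOSAR SecOC-style message authentication) appends a truncated MAC plus a freshness counter to each frame. A hedged sketch, assuming a pre-shared key and leaving the hard part (key distribution, counter synchronisation) aside:

```python
import hmac, hashlib, struct

KEY = b"\x00" * 16  # placeholder shared key; real key management is the hard part

def authenticate_frame(can_id: int, payload: bytes, counter: int) -> bytes:
    """Append a truncated HMAC over (ID, freshness counter, payload).
    Classic CAN frames carry at most 8 data bytes, so the tag must be
    truncated, trading security margin for bus capacity."""
    msg = struct.pack(">IQ", can_id, counter) + payload
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()[:4]  # 32-bit truncated MAC
    return payload + tag

def verify_frame(can_id: int, data: bytes, counter: int) -> bool:
    """Recompute the tag and compare in constant time."""
    payload, tag = data[:-4], data[-4:]
    msg = struct.pack(">IQ", can_id, counter) + payload
    expected = hmac.new(KEY, msg, hashlib.sha256).digest()[:4]
    return hmac.compare_digest(tag, expected)
```

The drawback question in (5) shows up directly here: four of the eight payload bytes are consumed by the tag, halving usable frame capacity.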
Colin Schappin <cschappin=>deloitte.nl>

Arnold.Buntsma <Arnold.Buntsma=>os3.nl>
Sebastian Wilczek <Sebastian.Wilczek=>os3.nl>


Network Anomaly Detection in Modbus TCP Industrial Control Systems.

ICS malware network behavioral analysis.

  • What does malware look like on an ICS network?
  • Does this differ from regular IT systems, and are pattern-based / machine-learning-based solutions applicable to ICS systems?

ICS process mapping to finite state machines and analyzing system behavior.

  • Is it possible to map a process (control, safety, ...) used in ICS systems to a finite state machine (FSM)?
  • Can this process of conversion be made easier for ICS processes?
  • Is it possible to use this FSM to monitor the behavior of the system and see if it shows unusual behavior (malware or defect equipment)?
Bartosz Czaszynski <bczaszynski=>deloitte.nl>

Philipp Mieden <Philipp.Mieden=>os3.nl>
Rutger Beltman <Rutger.Beltman=>os3.nl>


Using BGP Flow-Spec for distributed micro-segmentation.

BGP Flowspec (RFC 5575) is a standard to distribute ACLs with BGP. This is mainly used in DDoS mitigation, but I think it would be suitable to implement a distributed firewall and create a micro-segmentation solution in a datacenter. This could either be used in combination with the infrastructure and an OS like Cumulus Linux, or (in relation to the above) when routing is done on a host/hypervisor. FRRouting currently has Flowspec partly implemented (only as a receiver), which could be used as an implementation.
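To make the idea concrete, a micro-segmentation policy could be rendered as flowspec-style match/action rules and pushed to a speaker. The textual form below is purely illustrative (loosely modeled on ExaBGP-style flow configuration; it is not guaranteed to parse in any particular daemon):

```python
# Illustrative only: render a per-workload segmentation policy as a
# flowspec-style match/action rule string.
def flowspec_rule(dst: str, src: str, dport: int, action: str = "discard") -> str:
    return (
        "flow route {\n"
        f"  match {{ destination {dst}; source {src}; destination-port ={dport}; }}\n"
        f"  then {{ {action}; }}\n"
        "}"
    )

# Block one subnet from reaching a database host's MySQL port.
rule = flowspec_rule("10.0.1.10/32", "10.0.2.0/24", 3306)
```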
Attilla de Groot <attilla=>cumulusnetworks.com>

Davide Pucci <Davide.Pucci=>os3.nl>


Measuring end-to-end latency with P4.

The UvA is collaborating with SURFnet, UTwente and SIDN to create a P4 nationwide experimental environment, as part of the 2STiC initiative. In this project we want to investigate how to use the INT (In-band Network Telemetry) specification in a distributed P4 (www.p4.org) testbed. See for additional information on INT and P4: https://p4.org/assets/INT-current-spec.pdf

The specific use case we consider is flow measurements, namely end-to-end latency and throughput over time. The research will address the following challenges:
  • The specification does not prescribe where the INT tag should be inserted in the packets. We will determine the most suitable design for INT tag insertion as a function of the considered use case.
  • We will investigate the role and the optimal behavior of the INT sinks, i.e. the elements that extract the information from the packets.
  • We will develop an initial implementation and evaluate its performance in the testbed.
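As a sketch of the measurement itself: if each INT hop pushes its ingress/egress timestamps onto the per-packet metadata stack, a sink can reconstruct end-to-end latency. The timestamp layout and units here are assumptions for illustration, not the INT wire format:

```python
# Derive per-packet end-to-end latency from an INT-style metadata stack.
def end_to_end_latency_ns(hop_stack):
    """hop_stack: list of (ingress_ts, egress_ts) tuples in ns, first hop first."""
    per_hop = [egress - ingress for ingress, egress in hop_stack]
    # Link delays between hops: next hop's ingress minus previous hop's egress.
    links = [hop_stack[i + 1][0] - hop_stack[i][1] for i in range(len(hop_stack) - 1)]
    return sum(per_hop) + sum(links)

# Three hops; total reduces to last egress minus first ingress (200 ns).
latency = end_to_end_latency_ns([(100, 150), (180, 230), (260, 300)])
```

Splitting the total into per-hop and per-link terms, rather than just taking last-egress minus first-ingress, is what lets a sink localise where latency accumulates.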
Joseph Hill <j.d.hill=>uva.nl>
Paola Grosso <p.grosso=>uva.nl>

Silke.Knossen <Silke.Knossen=>os3.nl>
Rutger Beltman <rutger.beltman=>os3.nl>


Scoring model for IoCs by combining open intelligence feeds to reduce false positives.

In the last few years much research has been done in the field of Threat Intelligence. Many tools have been released to harvest, parse, aggregate, store, and share Indicators Of Compromise (IOC) (https://github.com/hslatman/awesome-threat-intelligence), but one big problem remains at the moment of using them: *false positives*. Commercial, open source, or even home-brew feeds of threat intelligence need to go through a phase of verification. This is a tedious job, mostly done by security analysts, where the data is analysed in order to rule out outdated, irrelevant, or wrong IOCs. The idea of this research project is to analyse the various possibilities to perform this verification phase in an automated fashion.
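One possible shape for such an automated verification step is a score combining cross-feed prevalence with freshness. A hypothetical sketch (the weights and the 30-day window are illustrative guesses, not validated values):

```python
from datetime import datetime, timedelta

# Hypothetical IOC score: how many independent feeds list it, and how
# recently it was last seen. Weights and max_age_days are tunable guesses.
def score_ioc(feeds_listing: int, total_feeds: int, last_seen: datetime,
              now: datetime, max_age_days: int = 30) -> float:
    prevalence = feeds_listing / total_feeds
    age_days = (now - last_seen).days
    freshness = max(0.0, 1.0 - age_days / max_age_days)
    return 0.6 * prevalence + 0.4 * freshness

now = datetime(2020, 1, 1)
fresh = score_ioc(3, 4, now - timedelta(days=1), now)   # seen yesterday
stale = score_ioc(3, 4, now - timedelta(days=90), now)  # seen 3 months ago
```

A threshold on such a score could then gate which IOCs reach the analyst, pushing obviously stale entries out of the queue.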
Leandro Velasco <leandro.velasco=>kpn.com>
Joao Novaismarques <joao.novaismarques=>kpn.com>

Jelle Ermerins <jermerins=>os3.nl>
Niek van Noort <Niek.vanNoort=>os3.nl>


Detecting Fileless Malicious Behaviour of .NET C2 Agents using ETW.

The cat-and-mouse game between attackers (red teams) and defenders (blue teams) is a never-ending story. In the past years attackers have found that antivirus bypass was doable by performing "fileless attacks" leveraging common tooling in Windows environments. A commonly abused tool is PowerShell. As a countermeasure, the industry is slowly implementing endpoint monitoring. This practice aims to build on top of antivirus by analyzing the events that happen in the system using software like Sysmon or other EDR tooling. Moreover, Microsoft implemented PowerShell script block logging. This allows defenders not just to monitor low-level events but also to analyse the commands executed by the PowerShell engine. After noticing that their tricks started to get attention, attackers moved away and started implementing malicious .NET applications. Due to the nature of the .NET framework, attackers are able to deploy a .NET agent on the target system and send raw .NET code that will be compiled and executed by the agent from memory, thus avoiding detection.
Security researchers have found that Event Tracing for Windows (ETW), first introduced in Windows 2000, can be used to detect these new threats.
Recently the company FireEye released SilkETW, an open-source tool that facilitates the use of the data generated by ETW. However, many challenges remain: vendors and blue teams need a better understanding of the events generated, and need to integrate these events into their detection strategies.

The idea behind this research project is to study the effectiveness of this newly discovered technology against threats such as the Covenant framework (https://github.com/cobbr/Covenant) and webshells such as the one recently disclosed by the apt34/Oilrig dump (https://d.uijn.nl/2019/04/18/yet-another-apt34-oilrig-leak-quick-analysis/).

Leandro Velasco <leandro.velasco=>kpn.com>
Jeroen Klaver <jeroen.klaver=>kpn.com>

Alexander Bode <Alexander.Bode=>os3.nl>
Niels Warnars <nwarnars=>os3.nl>


OSINT Washing Street.

At the moment more and more OSINT is available via all kinds of sources; a lot of them are legitimate services that are abused by malicious actors. Examples are GitHub, Pastebin, Twitter, etc. If you look at Pastebin data you might find IOCs/TTPs, but usually the payloads are delivered in many stages, so it is important to have a system that follows the path until it finds the real payload. The question here is how you can build a generic pipeline that unravels data like a matryoshka doll: no matter the input, the pipeline will try to decode, query, or perform whatever relevant action is needed. This would result in better insight into the later stages of an attack. An example of a framework using this method is Stoq (https://github.com/PUNCH-Cyber/stoq), but it lacks research into usability and into whether the results add value compared to other OSINT sources.
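The matryoshka idea can be sketched as a loop that keeps applying decoders until none matches. This toy version handles only base64 and hex layers; note that even here decoder ordering is non-trivial (a hex string is often also valid base64), which hints at why a generic, reliable pipeline is a research question:

```python
import base64, binascii

def peel(data: bytes, max_depth: int = 10) -> bytes:
    """Repeatedly strip recognised encodings; return the innermost payload."""
    for _ in range(max_depth):
        try:
            text = data.decode("ascii")
        except UnicodeDecodeError:
            break  # binary blob: nothing left to peel here
        try:
            data = base64.b64decode(text, validate=True)
            continue
        except binascii.Error:
            pass
        try:
            data = bytes.fromhex(text)
            continue
        except ValueError:
            pass
        break  # no decoder matched
    return data

# Two layers: base64 wrapped around a hex-encoded payload.
wrapped = base64.b64encode(b"payload".hex().encode())
```

A real pipeline would plug in many more transforms (URL decoding, archives, decompression, URL fetching) and needs heuristics to resolve ambiguous layers.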
Leandro Velasco <leandro.velasco=>kpn.com>
Joao Novaismarques <joao.novaismarques=>kpn.com>


Integration of EVPN in Kubernetes.

EVPN-VxLAN is the default overlay solution for IP fabrics, and Cumulus has upstreamed the EVPN implementation into the FRRouting project. EVPN can also be run on a regular Linux host (https://cumulusnetworks.com/blog/evpn-host/), but OpenStack doesn’t have integration with EVPN/FRR or the other changes made in the Linux kernel in the last few years (e.g. VRFs, VLAN-aware bridging).
Attilla de Groot <attilla=>cumulusnetworks.com>
Frank Potter <Frank.Potter=>os3.nl>


Building an open-source, flexible, large-scale static code analyzer.

Background information
Data drives business, and maybe even the world. Businesses that make it their business to gather data are often aggregators of client-side generated data. Client-side generated data, however, is inherently untrustworthy. Malicious users can construct their data to exploit careless, or naive, programming and use this malicious, untrusted data to steal information or even take over systems.
It is no surprise that large companies such as Google, Facebook and Yahoo spend considerable resources on securing their own systems against would-be attackers. Generally, many methods have been developed to make untrusted data cross the trust boundary to trusted data, and effectively render malicious data harmless. However, securing your systems against malicious data often requires expertise beyond what even skilled programmers might reasonably possess.
Problem description
Ideally, tools that analyze code for vulnerabilities would be used to detect common security issues. Such tools, or static code analyzers, exist, but are either outdated (http://rips-scanner.sourceforge.net/) or part of very expensive commercial packages (https://www.checkmarx.com/ and http://armorize.com/). Next to the need for an open-source alternative to the previously mentioned tools, we also need to look at increasing our scope. Rather than focusing on a single codebase, the tool would ideally be able to scan many remote, large-scale repositories and report the findings back in an easily accessible way.
An interesting target for this research would be very popular, open-source (at this stage) Content Management Systems (CMSs), and specifically plug-ins created for these CMSs. CMS cores are held to a very high coding standard and are often relatively secure. Plug-ins, however, are necessarily less so, but are generally as popular as the CMSs they’re created for. This is problematic, because an insecure plug-in is as dangerous as an insecure CMS. Experienced programmers and security experts generally audit the most popular plug-ins, but this is: a) very time-intensive, b) prone to errors, and c) of limited scope, i.e. not every plug-in can be audited. For example, if it were feasible to audit all aspects of a CMS repository (CMS core and plug-ins), the DigiNotar debacle could have easily been avoided.
Research proposal
Your research would consist of extending our proof-of-concept static code analyzer written in Python and using it to scan code repositories, possibly of some major CMSs and their plug-ins, for security issues, and finding innovative ways of reporting on the massive amount of possible issues you are sure to find. Help others keep our data that little bit safer.
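For flavour, here is a minimal static-analysis pass over Python source using the standard `ast` module. The sink list and rule logic are illustrative only, not the actual proof-of-concept tool mentioned above:

```python
import ast

# Toy analyzer pass: flag calls to a few well-known dangerous sinks.
DANGEROUS_CALLS = {"eval", "exec", "os.system"}

def find_dangerous_calls(source: str):
    """Return (line_number, call_name) for every flagged call site."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = None
            if isinstance(node.func, ast.Name):
                name = node.func.id
            elif isinstance(node.func, ast.Attribute) and isinstance(node.func.value, ast.Name):
                name = f"{node.func.value.id}.{node.func.attr}"
            if name in DANGEROUS_CALLS:
                findings.append((node.lineno, name))
    return findings

code = "import os\nuser = input()\neval(user)\nos.system(user)\n"
issues = find_dangerous_calls(code)
```

Scaling this from one file to thousands of remote repositories, and triaging the resulting flood of findings, is precisely where the proposed research starts.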
Patrick Jagusiak <patrick.jagusiak=>dongit.nl>
Wouter van Dongen <wouter.vandongen=>dongit.nl>


Ibis Data Serialization in Apache Spark.

Apache Spark is a system for large-scale data processing used in Big Data business applications, but also in many scientific applications. Spark uses Java (or Scala) object serialization to transfer data over the network. Especially if data fits in memory, the performance of serialization is the most important bottleneck in Spark applications. Spark currently offers two serialization mechanisms: standard Java object serialization and Kryo serialization.

In the Ibis project (www.cs.vu.nl/ibis), we have developed an alternative serialization mechanism for high-performance computing applications that relies on compile-time code generation and zero-copy networking for increased performance. Performance of JVM serialization can also be compared with benchmarks: https://github.com/eishay/jvm-serializers/wiki. However, we also want to evaluate whether we can increase Spark performance at the application level by using our improved object serialization system. In addition, our Ibis implementation can transparently use fast local networks such as InfiniBand. We also want to investigate whether using specialized networks increases application performance. Therefore, this project involves extending Spark with our serialization and networking methods (based on existing libraries), and analyzing the performance of several real-world Spark applications.
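The kind of micro-benchmark this project generalises can be sketched as follows; Python serializers stand in for Java/Kryo/Ibis serialization purely for illustration:

```python
import json, pickle, time

# Synthetic record batch, mimicking a partition of rows shuffled by Spark.
records = [{"id": i, "value": float(i) * 0.5, "tag": "row"} for i in range(10_000)]

def bench(dumps, loads, data, rounds=3):
    """Best-of-N round-trip (serialize + deserialize) time, plus blob size."""
    best = float("inf")
    for _ in range(rounds):
        t0 = time.perf_counter()
        blob = dumps(data)
        loads(blob)
        best = min(best, time.perf_counter() - t0)
    return best, len(blob)

pickle_time, pickle_size = bench(pickle.dumps, pickle.loads, records)
json_time, json_size = bench(
    lambda d: json.dumps(d).encode(), lambda b: json.loads(b), records)
```

The real evaluation would run analogous measurements inside Spark's shuffle path, where serializer throughput and resulting blob size both translate directly into job runtime.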
Adam Belloum <A.S.Z.Belloum=>uva.nl>
Jason Maassen <J.Maassen=>esciencecenter.nl>

Dadepo Aderemi <Dadepo.Aderemi=>os3.nl>
Mathijs Visser <mathijs.visser=>os3.nl>


Using Mimikatz’ driver, Mimidrv, to disable Windows Defender in Windows.

Mimikatz has a bundled driver that gives an attacker arbitrary read/write access to kernel memory. This project would look into using the Mimikatz driver to run privileged code. For example, working from the kernel, it is possible to unhook A/V in order to bypass endpoint protection software. However, several protections are in place (e.g. KPP) that make this difficult. It would be interesting to look into a generic way to unhook minifilter callbacks by using the Mimikatz kernel driver.
Cedric van Bockhaven <cvanbockhaven=>deloitte.nl>

Bram Blaauwendraad <Bram.Blaauwendraad=>os3.nl>
Thomas Ouddeken <touddeken=>os3.nl>


Developing a Distributed State of Mind.

A system required to be autonomous needs to be more than just a computational black box that produces a set of outputs from a set of inputs. Interpreted as an agent provided with (some degree of) rationality, it should act based on desires, goals and internal knowledge for justifying its decisions. One could then imagine a software agent much like a human being or a human group, with multiple parallel threads of thoughts and considerations which more often than not are in conflict with each other. This distributed view contrasts with the common centralized view used in agent-based programming, and opens up potential cross-fertilization with distributed computing applications, which for the moment is largely unexplored.

The goal of this project is the design and development of an efficient agent architecture in a modern concurrent programming language (Rust, Go, or Scala), in order to make explicit a multi-threaded, distributed state of mind.
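A toy illustration of the intended architecture: several deliberation threads propose actions concurrently, and an arbiter resolves the conflict. The arbitration rule (highest priority wins) is an arbitrary placeholder for whatever deliberation semantics the project settles on:

```python
import queue, threading

# Each "thread of thought" proposes an action into a shared channel.
def deliberator(name, priority, proposals):
    proposals.put((priority, name, f"act-{name}"))

def arbitrate(n_threads):
    """Run n deliberation threads, then pick the winning proposal."""
    proposals = queue.Queue()
    threads = [threading.Thread(target=deliberator, args=(f"t{i}", i, proposals))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    items = [proposals.get() for _ in range(n_threads)]
    return max(items)[2]  # highest-priority proposal wins

chosen = arbitrate(4)
```

In the envisioned system the deliberators would be BDI-style plans rather than trivial functions, and arbitration would reflect conflicting desires instead of a fixed priority.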
Giovanni Sileno <G.Sileno=>uva.nl>
Mostafa Mohajeriparizi <m.mohajeriparizi=>uva.nl>


Profiling (ab)user behavior of leaked credentials.

Appropriate secret management during software development is important, as such secrets are often high-privileged accounts, private keys, or tokens which can grant access to the system being developed or any of its dependencies. Systems here could entail virtual machines or other software/infrastructure/platform services exposing a service for remote management. To combat mismanagement of secrets during development time, software projects such as HashiCorp Vault or CyberArk Conjur have been introduced to provide a structured solution to this problem and to ensure secrets are not exposed, by removing them from the source code.

Unfortunately, secrets are still frequently committed to software repositories, with the effect that they accidentally end up either in released packages or in publicly accessible repositories, such as on GitHub. That these secrets can then be easily accessed (and potentially abused) in an automated fashion has recently been demonstrated by PoC projects like shhgit [1].

This research would entail the study and profiling of the behavior of the abusers of such secrets, by first setting up a monitoring environment and a restricted execution environment, and then intentionally leaking secrets online through different channels.

Specifically, the research focuses on answering the following questions:
- Can abuse as a result of leaked credentials be profiled?
- Can profiles be used to predict abuser behavior?
- Are there different abusers / patterns / motives for different types of leaked credentials?
- Are there different abusers / patterns / motives for different sources of leaked credentials?
- Can profiles be used to attribute attacks to different attacker groups?

Prior experience with security monitoring and/or cloud environments such as AWS / Azure is recommended in order to timely scope the research to a feasible proposal.

[1] https://github.com/eth0izzle/shhgit
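The monitoring setup could build on uniquely identifiable honeytokens, so that any observed use traces back to the channel where the credential was planted. A sketch of that bookkeeping (the key format merely mimics an AWS-style access key ID for illustration):

```python
import secrets

# Mint one uniquely identifiable fake credential per leak channel.
def mint_honeytoken(channel):
    token = "AKIA" + secrets.token_hex(8).upper()  # AWS-lookalike key ID
    return {"channel": channel, "access_key_id": token}

def match_in_log(log_line, tokens):
    """Return the leak channel whose token appears in a log line, if any."""
    for t in tokens:
        if t["access_key_id"] in log_line:
            return t["channel"]
    return None
```

Per-channel tokens are what make the profiling questions answerable: every hit in the restricted environment's logs identifies which leak source the abuser harvested from.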
Fons Mijnen <fmijnen=>deloitte.nl>
Mich Cox <mcox=>deloitte.nl>


Development of a control framework to guarantee the security of a collaborative open-source project.

We’re now living in an information society, and everyone expects to be able to find everything on the Web. IT developers are no exception and spend a large part of their working hours searching for and reusing pieces of code found on public repositories (e.g. GitHub, GitLab, ...) or web forums (e.g. StackOverflow).
The use of open-source software has long been seen as a secure alternative, as the code is available for review by everyone, and as a result bugs and vulnerabilities should be found and fixed more easily. Multiple incidents related to the use of open-source software (NPM, Gentoo, Homebrew) have shown that the greater security of open-source components turned out to be theoretical.
This research aims to highlight the root causes of major recent incidents related to open-source collaborative projects, and to propose a global open-source security framework that could address those issues.
Alex Stavroulakis <Stavroulakis.Alex=>kpmg.nl>
Aristide Bouix <Bouix.Aristide=>kpmg.nl>


Security Evaluation on Amazon Web Services’ REST API Authentication Protocol Signature Version 4.

Amazon Web Services is leading the Cloud Computing market with more than a third of the global market share. In this context, they need to enforce strict segregation between their users and virtual environments. AWS provides three different ways to access a Cloud environment: the web console, the CLI (Command Line Interface), or an SDK (Software Development Kit).
While the first method uses standard OAuth2 authentication, AWS has created its own standard, called Signature Version 4 (SigV4), for direct REST API request authentication. SigV4 is an internal and closed-source protocol.
This research intends to evaluate the resilience and security of the AWS API compared to usual market standards such as OAuth2 and Basic HTTP Authentication.
To do so, you may start with:
  • Deploying a local Cloud stack or Escher
  • Testing some HTTP attack scenarios on a local server (e.g. replayed attack)
  • Document findings
  • Send a few crafted requests to an AWS service and study the responses
NB: As AWS doesn’t officially support penetration tests on their infrastructure, direct attempts on AWS should be limited to a minimum and flood attacks avoided.
Reference: https://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
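The signing-key derivation at the heart of SigV4 is described in the AWS documentation referenced above and can be reproduced in a few lines, which is useful when crafting test requests against a local stack. A sketch following that documented chain (the canonical-request construction that produces the string to sign is omitted):

```python
import hmac, hashlib

def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def derive_signing_key(secret: str, date: str, region: str, service: str) -> bytes:
    """SigV4 key derivation: each HMAC step scopes the key further."""
    k_date = _hmac_sha256(("AWS4" + secret).encode(), date)      # e.g. "20200101"
    k_region = _hmac_sha256(k_date, region)                      # e.g. "eu-west-1"
    k_service = _hmac_sha256(k_region, service)                  # e.g. "execute-api"
    return _hmac_sha256(k_service, "aws4_request")

def signature(signing_key: bytes, string_to_sign: str) -> str:
    """Final hex signature placed in the Authorization header."""
    return hmac.new(signing_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
```

Because the key is scoped to date, region, and service, a captured signing key is useless outside that scope, which is one of the resilience properties worth comparing against OAuth2 bearer tokens.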
Alex Stavroulakis <Stavroulakis.Alex=>kpmg.nl>
Aristide Bouix <Bouix.Aristide=>kpmg.nl>

Hoang Huynh <hhuynh=>os3.nl>
Jason Kerssens <jkerssens=>os3.nl>


Characterization of the CASB (Cloud Access Security Broker) Technology Market.

CASBs are often referred to as the new firewall of the Cloud era; they provide a middleman layer between a company network and a set of web services. However, many companies have been disappointed by their capabilities after implementation.

The objective of this research is to provide a detailed comparison of the current capabilities of the market leaders against the capabilities expected by corporate IT executives. You will have to address the following points:
  • Give an accurate definition of the CASB technology
  • Identify the risks related to Shadow IT and potential mitigation for each of them
  • List the capabilities of the Leaders and Visionaries in the Gartner Magic Quadrant and verify that they address the risks related to Shadow IT
  • Give your conclusion on the maturity of this market
NB: The challenge of this subject is finding reliable information. You can start by contacting providers’ sales departments and requesting a demonstration, cross-referencing research publications, or interacting on cybersecurity web forums.
Reference: https://www.bsigroup.com/globalassets/localfiles/en-ie/csir/resources/whitepaper/1810-magic_quadrant_for_casb.pdf
Alex Stavroulakis <Stavroulakis.Alex=>kpmg.nl>
Aristide Bouix <Bouix.Aristide=>kpmg.nl>


Insight in Cyber Safety when Remotely Operating SCADA Systems of Dutch Critical Infrastructure Objects.

Nowadays most systems (e.g. SCADA and process control in industry) can produce logging and sensor data about all infrastructure components. Often the combination of selected information from those log files with information from external sources and current operations on those systems can create a good picture of the state of security. The challenge is to gather the data and make sure the correct logging is turned on in the first place, then filter this data so that the amount becomes manageable. The data then needs to be combined and processed to be usable for decision support.

In this RP we seek students that will:
  1. create a data gathering system;
  2. set up artificial intelligence and machine learning to process that data.
This project covers a wide field and can easily be scoped into focused small research projects suitable for SNE students; contact the supervisors to do so.
Cedric Both <cedric=>datadigest.nl>

Tina Tami <tina.tami=>os3.nl>


Autonomous asset management using smart agents & sensors.

Today there is little to no knowledge about the state of our critical Dutch infrastructure objects (e.g. tunnels, bridges, locks), and that creates security and asset management challenges. But the systems inside the newer objects are capable of producing logging and sensor data that could be used to get the needed information regarding asset management and the overall state of such an object. The challenge is to create smart agents and sensors that combine the different types of data and then process that data to make it usable for autonomous asset management (e.g. deciding when a component and/or system needs replacement or has security issues).

In this RP we seek students that will:
1) Create smart agents and sensors that can gather the "right" data.
2) Set up artificial intelligence and machine learning to process the data.
3) Create an autonomous decision-making and business rules logic engine.

The research question is to figure out what the "right" data is. The scope is big; students will discuss with the supervisor to scope the individual projects to the right size.
Cedric Both <cedric=>datadigest.nl>


Analysis of a rarely implemented security feature: signing Docker images with a Notary server.

Notary is Docker's platform to provide trusted delivery of content by signing images that are published. A content publisher can then provide the corresponding signing keys that allow users to verify that content when it is consumed. Signing Docker images is considered a security best practice, but is rarely implemented in practice.
The goal of this project is to provide guidelines for safe service implementation. A starting point could be:
  • Get familiar with the service Architecture and Threat Model [1]
  • Deploy a production like service [2]
  • Test the compromise scenarios from the Threat Model
  • Conclude and release a secure production-ready manual and docker-compose template 
  1. https://docs.docker.com/notary/service_architecture/
  2. https://docs.docker.com/notary/running_a_service/
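Underlying all of this is content addressing: a signed statement binds a tag to an image digest, and clients recompute the digest on pull. A deliberately simplified sketch of just that comparison step (real Notary layers TUF metadata and a key hierarchy on top; this omits signatures entirely):

```python
import hashlib

# Content addressing: blobs are identified by the SHA-256 of their bytes.
def digest(blob: bytes) -> str:
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def verify(blob: bytes, signed_digest: str) -> bool:
    """Accept the blob only if it matches the digest from signed metadata."""
    return digest(blob) == signed_digest

image_layer = b"...fake layer bytes for illustration..."
trusted = digest(image_layer)  # in Notary this value comes from signed TUF targets
```

The threat-model scenarios in [1] revolve around who can alter the mapping from tag to trusted digest, which is exactly what the signed metadata protects.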
Aristide Bouix <Bouix.Aristide=>kpmg.nl>
Alex Stavroulakis <Stavroulakis.Alex=>kpmg.nl>


Security of IoT communication protocols on the AWS platform.

In January 2020, Jason and Hoang from the OS3 master worked on the project “Security Evaluation on Amazon Web Services’ REST API Authentication Protocol Signature Version 4” [1]. This project showed the resilience of the SigV4 authentication mechanism for HTTP protocol communications.
In June 2017, AWS released a service called AWS Greengrass [2] that can be used as an intermediate server for low-connectivity devices running the AWS IoT SDK [3]. This is an interesting configuration, as it allows further challenging SigV4 authentication in a disconnected environment using the MQTT protocol.
  1. https://homepages.staff.os3.nl/~delaat/rp/2019-2020/p65/report.pdf
  2. https://docs.aws.amazon.com/greengrass/latest/developerguide/what-is-gg.html
  3. https://github.com/aws/aws-iot-device-sdk-python
Aristide Bouix <Bouix.Aristide=>kpmg.nl>
Alex Stavroulakis <Stavroulakis.Alex=>kpmg.nl>


WireGuard: a new standard protocol to set up a Virtual Private Network?

WireGuard [1] is a new VPN protocol that aims to be as easy to configure and deploy as SSH, and to replace protocols such as IPsec and OpenVPN, which are considered too complex in terms of code and configuration.
Since January, this protocol has been included in the Linux kernel [2]. An American senator has even called for using it as the favored VPN solution for the government [3].
Behind this hype, is the protocol as fast and secure as its developers advertise? The aim of this project is to deploy local OpenVPN, IPsec and WireGuard servers and to evaluate their different levels of resilience.
  1. https://www.wireguard.com/
  2. https://www.theregister.co.uk/2020/01/29/wireguard_vpn_will_be_in_linux_56_kernel/
  3. https://www.phoronix.com/scan.php?page=news_item&px=WireGuard-Senator-Recommends
Aristide Bouix <Bouix.Aristide=>kpmg.nl>
Alex Stavroulakis <Stavroulakis.Alex=>kpmg.nl>
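Part of WireGuard's claimed simplicity is visible in its configuration format: one `[Interface]` section plus one `[Peer]` stanza per client. A minimal server-side sketch of the kind such a test deployment could use (keys and addresses below are placeholders, not real values) looks like:

```ini
# /etc/wireguard/wg0.conf -- server side; keys are placeholders
[Interface]
PrivateKey = <server-private-key>
Address    = 10.0.0.1/24
ListenPort = 51820

[Peer]
# One stanza per client; AllowedIPs acts as both route and ACL.
PublicKey  = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

Bringing the tunnel up is then a single `wg-quick up wg0`, which is the configuration overhead the project would compare against equivalent OpenVPN and IPsec setups.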



I hereby would like to invite you to the annual RP2 presentations, where the SNE students will present their research.
Given the wide variety of topics, the day promises to be very interesting, and we hope you will join us.
Program (Printer friendly version: HTML, PDF).
Monday June 29, 2020, Auditorium C0.005, FNWI, Sciencepark 904, Amsterdam.
Time D #RP Title Name(s) LOC RP #stds
Welcome, introduction. Cees de Laat
10h00 25
10h25 25
10h50 20
11h10 25
11h35 25
13h10 20
13h30 20
13h50 20
14h10 20
14h30 25
14h55 20
15h15 20
15h35 20
16h00 20



Tuesday June 30, 2020, Auditorium C1.112, FNWI, Sciencepark 904, Amsterdam.
Time D #RP Title Name(s) LOC RP #stds
Welcome, introduction. Cees de Laat
10h00 25
10h25 25
10h50 20
11h10 25
11h35 25
13h10 20
13h30 20
13h50 20
14h10 20
14h30 25
14h55 20
15h15 20
15h35 20
16h00 20




Program (Printer friendly version: HTML, PDF.
presentations are 20 min for single and 25 min for pairs of students, yellow = requested specific day/time.)
Monday Feb 3, 2020, 10h25 - 17h00 in room B1.23 at Science Park 904 NL-1098XH Amsterdam.
Time D #RP Title Name(s) LOC RP
10h25 0 Welcome, introduction. Cees de Laat
10h25 25 1 Zero Trust Network Security Model in containerized environments. Catherine de Weever, Marios Andreou on2it 1
10h50 20 bio/coffee break
11h10 25 4 The Current State of DNS Resolvers and RPKI Protection. Erik Dekker, Marius Brouwer nlnetlabs 1
11h35 25 A Design and Procedure for Digital Forensic Investigation on Data Theft on the Google Cloud Platform. Frank Wiersma, Tjeerd Slokker pwc
12h00 60 lunch break
13h00 25 23 Detecting hidden data within APFS datastructures. Axel Koolhaas, Woudt van Steenbergen fox-it 1
13h25 20 30 Automated planning and adaptation of Named Data Networks in Cloud environments. Sean Liao UvA
13h50 25 bio/tea/coffee break
14h10 25 60 Fast Data Serialization and Networking for Apache Spark. Dadepo Aderemi, Mathijs Visser UvA
14h35 25 43 Anomaly Detection on Log Files Based on Simplicity Theory. Giacomo Casoni, Mar Badias Simo UvA
15h00 20 bio/tea/coffee break
15h20 25 49 Creating a plugin for Ghidra to support RISC-V64, to analyze the security of embedded technologies. Patrick Spaans, Joris Jonkers Both riscure 1
15h45 25 65 Security Evaluation on Amazon Web Services’ REST API Authentication Protocol Signature Version 4. Hoang Huynh, Jason Kerssens kpmg 1
16h10 20 41 Generating probable password candidates for the offline assessment of Dutch domain password hashes. Tom Broumels secura 1
16h30 0


Tuesday Feb 4, 2020, 10h00 - 17h00 in room B1.23 at Science Park 904 NL-1098XH Amsterdam.
Time D #RP Title Name(s) LOC RP
10h00 0 Welcome, introduction. Cees de Laat
10h00 25 52 Network Anomaly Detection in Modbus TCP Industrial Control Systems. Philipp Mieden, Rutger Beltman deloitte 1
10h25 25 13 Incorporating post-quantum cryptography signatures in digital certificates. Daan Weller, Ronald van der Gaag deloitte 2
10h50 20 bio/coffee break
11h10 25 50 Large-scale automotive CAN data acquisition for IDS evaluation. Silke Knossen, Vincent Kieberl deloitte 1
11h35 25 51 Security Evaluation of Automotive Networks. Arnold Buntsma, Sebastian Wilczek deloitte 1
12h00 60 lunch break
13h00 25 40 Tunneling data over a Citrix virtual channel. Ward Bakker, Niels den Otter deloitte 1
13h25 25 Using Mimikatz’ Mimidrv driver to unhook antivirus callbacks in Windows. Bram Blaauwendraad, Thomas Ouddeken deloitte 1
13h50 20 bio/tea/coffee break
14h10 25 56 Detecting Malicious Behaviour of .NET C2 Agents using ETW. Alexander Bode, Niels Warnars kpn 1
14h35 25 55 Scoring model for IoCs by combining external resources to reduce false positives. Jelle Ermerins, Niek van Noort kpn 1
15h00 20 bio/tea/coffee break
15h20 20 53 Using BGP Flow-Spec for distributed micro-segmentation. Davide Pucci cumulusnetworks 1
15h40 20 8 APFS Checkpoint behaviour research in macOS. Maarten van der Slik NFI 1
16h00 20 67 Insight in Cyber Safety when Remotely Operating SCADA Systems of Dutch Critical Infrastructure Objects. Tina Tami datadigest 1
16h20 0


Out of normal schedule presentations: Room B1.23 at Science Park 904 NL-1098XH Amsterdam. Program:
Date Time Place D #RP Title Name(s) LOC RP #stds
Integration of EVPN in Kubernetes. Attilla de Groot
Security of Mobility-as-a-Service (MaaS). Alexander Blaauwgeers B1.23 20
Incentivize distributed shared WiFi through VPN on home routers. Sander Lentink SURFnet