The BRIDGE project is about privacy aspects of databases, viewed from several angles. One important aspect is how to sanitize a dataset before publishing it, or how to respond to queries in a way that satisfies privacy notions such as differential privacy. There is a large gap between (asymptotic) theory and schemes that perform well in practice. The main goal of the project is to bridge this gap using ideas from (non-asymptotic) information theory, traitor-tracing codes, and fuzzy extractors (biometric privacy).

Data are a key resource in today's information age, but disclosure of data poses threats to individual privacy. We study the problem of sanitizing a dataset before publishing it, or of publishing information about a dataset, while satisfying privacy notions such as differential privacy. The main questions are of the form: for a given task, which mechanisms provide the highest utility while protecting privacy? There is a gap between theory and practice: mechanisms with provable utility perform poorly in experiments, while mechanisms that empirically perform well often lack any proof of utility. We believe there are two main reasons. First, utility theorems must provide error bounds for every possible dataset, whereas practical mechanisms exploit typical dataset features. Second, most theorems are asymptotic and have hidden constants and/or poly-logarithmic terms. Bridging this gap requires a better understanding of the factors affecting utility; better utility metrics and ways to formalize dependencies on dataset features; a better understanding of the fundamental limits on utility; and mechanisms that approach those limits.
We propose to apply information-theoretic techniques to improve the state of the art in privacy-preserving data publishing and analysis. Information theory is the natural 'language' here: a private mechanism can be viewed as a channel with multiple senders (one per dataset record) and one receiver (receiving the output of the mechanism), and the objective is to ensure that the receiver's success rate in decoding individual records is low. Furthermore, there is a link between traitor-tracing codes and performance bounds on differential privacy.
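To give a concrete flavour of the mechanisms discussed above, here is a minimal sketch of the classic Laplace mechanism for a counting query, a standard construction for differential privacy. This is illustrative only and not part of the project description; the function names and parameters are ours.

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponential variables with mean
    # `scale` is distributed as Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(dataset, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one record
    # changes the true count by at most 1. Adding Laplace noise with
    # scale 1/epsilon therefore yields epsilon-differential privacy.
    true_count = sum(1 for record in dataset if predicate(record))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a noisy count of records below a threshold.
data = list(range(100))
noisy = private_count(data, lambda x: x < 50, epsilon=1.0)
```

The tension the project targets is visible even here: smaller epsilon means stronger privacy but noisier (less useful) answers, and worst-case utility bounds for such mechanisms can be far from their typical-case behaviour.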
If you are interested in this post-doc position and would like to apply, please send us your application using the 'apply now' button at the top of this page. Please upload the following:
Please keep in mind that you can upload at most 5 documents of up to 2 MB each.
Screening of applicants will start as soon as applications are received and will continue until the position has been filled.