
PrivInfer - Programming Languages for Differential Privacy: Conditioning and Inference

Funder: UK Research and Innovation
Project code: EP/M022358/1
Funded under: EPSRC
Funder contribution: 91,961 GBP

Description

An enormous amount of individuals' data is collected every day. These data could be very valuable for scientific and medical research or for business analytics. Unfortunately, privacy concerns restrict the way this information can be used and released. Several techniques have been proposed with the aim of making data anonymous; these techniques, however, lose their effectiveness when attackers can exploit additional knowledge. Differential privacy is a promising approach to the privacy-preserving release of data: it offers a strong, guaranteed bound on the increase in harm that a user incurs as a result of participating in a differentially private data analysis, even under worst-case assumptions. A standard way to ensure differential privacy is to add calibrated statistical noise to the result of a data analysis (a minimal sketch of this idea follows below). Differentially private mechanisms have been proposed for a wide range of problems, including statistical analysis, combinatorial optimization, machine learning, and distributed computation.

Several programming-language verification tools have been proposed to assist a programmer in checking whether a given program is differentially private. These tools have proved successful in checking differentially private programs that use standard mechanisms. However, they offer only limited support for reasoning about differential privacy when it is obtained through non-standard mechanisms. One limitation comes from the simplified probabilistic models built into those tools: in particular, these models provide no support (or only very limited support) for reasoning about explicit conditional distributions and probabilistic inference (a second sketch below illustrates what such conditioning looks like). From the verification point of view, dealing with explicit conditional distributions is difficult because it requires finding a manageable representation, in the internal logic of the verification tool, of events and probability measures, together with a set of primitives to handle them efficiently.

In this project we aim to overcome these limitations by extending the scope of verification tools for differential privacy to support explicit reasoning about conditional distributions and probabilistic inference. Such support is crucial for reasoning about machine learning algorithms, which are essential for efficient and accurate analysis of massive data collections. The goal of the project is thus to provide novel programming-language technology for enhancing privacy-preserving data analysis based on machine learning.
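As a purely illustrative sketch of the noise-adding idea above, consider a counting query: adding or removing one individual's record changes the count by at most 1 (sensitivity 1), so Laplace noise with scale 1/epsilon yields epsilon-differential privacy. The function names and the use of Python are ours, not the project's; this sketches the standard Laplace mechanism, not anything specific to PrivInfer.

    import random

    def laplace_sample(scale):
        # A Laplace(0, scale) draw: the difference of two independent
        # exponential draws with mean `scale` is Laplace-distributed.
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def private_count(records, predicate, epsilon):
        # A counting query has sensitivity 1, so Laplace noise with
        # scale 1/epsilon suffices for epsilon-differential privacy.
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_sample(1.0 / epsilon)

    # Example: a noisy count of individuals aged 40 or over.
    ages = [34, 29, 41, 52, 38]
    print(private_count(ages, lambda a: a >= 40, epsilon=0.5))

Smaller values of epsilon mean larger noise and stronger privacy; the verification tools discussed above aim to check mechanically that such noise is calibrated correctly.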

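The explicit conditioning and inference the project targets can be illustrated, in the same hedged spirit, on a finite hypothesis space, where Bayesian conditioning reduces to weighting each hypothesis by its likelihood and normalizing. All names here are hypothetical; randomized response is used only as a familiar differentially private mechanism to condition on.

    def posterior(prior, likelihood, observation):
        # Exact conditioning over a finite hypothesis space:
        # weight each hypothesis by its likelihood, then normalize.
        weighted = {h: p * likelihood(observation, h) for h, p in prior.items()}
        evidence = sum(weighted.values())  # P(observation)
        return {h: w / evidence for h, w in weighted.items()}

    # Example: randomized response reports the true bit with probability
    # 0.75 and flips it otherwise. Conditioning a uniform prior on the
    # report 1 gives P(true bit = 1 | report = 1) = 0.75.
    prior = {0: 0.5, 1: 0.5}
    likelihood = lambda obs, h: 0.75 if obs == h else 0.25
    print(posterior(prior, likelihood, observation=1))

Representing such conditional distributions inside the internal logic of a verification tool, rather than computing them by hand as above, is exactly the difficulty the project sets out to address.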