Current AI systems often make decisions in ways that people do not understand. This can create mistrust and unfairness, and it makes the systems brittle to change. One solution is to build systems that learn and use understandable concepts. However, this requires annotators to label large amounts of data, which is tedious and expensive. Worse, even with all these annotations, AI models can still learn the wrong concepts. The researchers will develop new methods that learn understandable concepts correctly with high probability from few annotations. They will also exploit available knowledge about the interactions between the concepts and information from past decisions.
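The abstract does not specify an architecture, but one common realization of the idea of learning and using understandable concepts is a concept-bottleneck-style classifier, which first predicts human-understandable concepts and then bases its final decision only on those concepts. The sketch below is a minimal illustration under that assumption; all names, dimensions, and the weighting hyperparameter `lambda_c` are hypothetical and not taken from the project.

```python
# Minimal sketch (illustrative assumption, not the project's actual method):
# a concept-bottleneck-style classifier that first predicts human-understandable
# concepts and then makes its final decision from those concepts alone.
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, n_features: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Maps raw inputs to scores for human-understandable concepts
        # (e.g. "has wing stripe" or "round beak" in a bird-classification task).
        self.concept_predictor = nn.Linear(n_features, n_concepts)
        # The final decision depends only on the predicted concepts,
        # so each prediction can be explained in terms of those concepts.
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        concept_logits = self.concept_predictor(x)
        concepts = torch.sigmoid(concept_logits)   # probability each concept is present
        class_logits = self.label_predictor(concepts)
        return concept_logits, class_logits

# Training combines two losses: one for the (possibly few) concept annotations
# and one for the task labels; lambda_c (hypothetical) weights the concept loss.
model = ConceptBottleneckModel(n_features=64, n_concepts=10, n_classes=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
concept_loss_fn = nn.BCEWithLogitsLoss()
label_loss_fn = nn.CrossEntropyLoss()
lambda_c = 0.5

x = torch.randn(32, 64)                                   # toy batch of inputs
concept_labels = torch.randint(0, 2, (32, 10)).float()    # toy concept annotations
y = torch.randint(0, 3, (32,))                            # toy task labels

concept_logits, class_logits = model(x)
loss = label_loss_fn(class_logits, y) + lambda_c * concept_loss_fn(concept_logits, concept_labels)
loss.backward()
optimizer.step()
```

In this kind of design, fewer concept annotations simply means the concept loss is computed on fewer examples, which is where methods that learn concepts reliably from limited supervision become important.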
