UElic: Embracing Human Uncertainty in Machine Learning Models
In machine learning, human annotators are usually treated as reliable sources of ground-truth labels. But what if models embraced the uncertainty in human judgments instead? That is exactly what a team of researchers from the University of Cambridge aims to do with their new platform, UElic.
So, what exactly is UElic? It is a platform that collects real-world human uncertainty data to improve the reliability of machine learning models. The researchers argue that letting humans express how unsure they are about a label can meaningfully improve the robustness and trustworthiness of these models.
The work builds on concept-based models, which are designed for interpretability and human intervention. These models are trained with supervised learning over inputs, intermediate concepts, and outputs; concepts can be binary or categorical, and their labels can carry uncertainty. To collect feedback, the researchers used an image classification dataset in which annotators label concepts and indicate how confident they are in each label.
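To make the input-concepts-output pipeline concrete, here is a minimal sketch of a concept-bottleneck-style prediction path in NumPy. The weights are random placeholders and the layer sizes are invented for illustration; this is not the paper's architecture, only the general shape of such models, where soft concept scores in [0, 1] naturally carry uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder weights: 4 input features -> 3 binary concepts -> 2 classes.
# In a real concept-based model these would be learned with supervision
# on both the concept labels and the final labels.
W_xc = rng.normal(size=(4, 3))
W_cy = rng.normal(size=(3, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_concepts(x):
    """Soft concept scores in [0, 1]; the score itself encodes uncertainty."""
    return sigmoid(x @ W_xc)

def predict_label(c):
    """Softmax over output classes given (possibly human-edited) concept scores."""
    logits = c @ W_cy
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = rng.normal(size=4)       # one input example
c = predict_concepts(x)      # intermediate, human-inspectable concepts
y = predict_label(c)         # final class probabilities
```

Because the label head consumes concept scores rather than raw inputs, a human can inspect `c` and overwrite individual entries before the final prediction is made.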
The study examined how concept-based models handle uncertain human input and how they could support it better. Using benchmark machine learning datasets with varying levels of noise, the researchers simulated different ways of expressing uncertainty. They found that design choices, such as how uncertainty scores are represented and aggregated, play a crucial role in handling them effectively.
The study also surfaced open challenges: the complementarity of human and machine uncertainty, handling poorly calibrated annotators, and scaling uncertainty elicitation to large datasets. To support further research on human uncertainty interventions, the researchers released the UElic interface and the CUB-S dataset.
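The calibration and complementarity challenges show up as soon as an uncertain human corrects a model. One simple way to sketch such an intervention is a confidence-weighted blend of the model's concept score and the human's soft label; the weighting scheme below is an assumption for illustration, not the method proposed in the paper.

```python
def intervene(model_score, human_score, human_weight=0.8):
    """Blend a model's concept score with an uncertain human correction.

    human_weight reflects trust in the annotator's calibration:
    1.0 means the human fully overrides the model, 0.0 keeps the
    model's score untouched. This is an illustrative scheme only.
    """
    return human_weight * human_score + (1.0 - human_weight) * model_score

# A fully confident human overrides the model entirely:
overridden = intervene(model_score=0.2, human_score=1.0, human_weight=1.0)  # -> 1.0
# A less certain human only nudges the model's score:
nudged = intervene(model_score=0.0, human_score=1.0, human_weight=0.8)      # -> 0.8
```

With poorly calibrated annotators, a fixed weight like this can hurt as much as it helps, which is precisely why eliciting and studying real human uncertainty data matters.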
If you’re interested in learning more about this research, you can check out the paper and reference article for all the details.
All credit for this research goes to the researchers on the project, who continue to push the boundaries of AI. So, let’s embrace human uncertainty and work toward more reliable AI models together!