Project description

News, interviews, and talk shows often appear biased to citizens with different political orientations and are understood differently by public policy experts and the broader public. Big data analyses of such sources would be more useful if bias were captured more accurately. Detection of fake news or propaganda messages from biased media sources would also improve, potentially allowing viewers to recognise misrepresentations as they watch.

We aim to achieve reliable and explainable big data analysis of media-related collections. We develop a framework of models for bias- and diversity-aware accuracy measures (with confidence scores and thresholds to assess reliability) and diversity-driven human computation methods for the continuous gathering of opinion- and perspective-aware training data (using crowdsourcing with citizens and 'niche-sourcing' with experts). These support macro-level analysis, e.g. the role of political and gender bias in media programmes, and micro-level analysis, e.g. close reading of specific programmes and the bias effects of speaker selection and topic presentation. We tackle the challenge of accuracy by seeking to capture the diverse range of inherent opinions and biases of citizens and experts in big data analysis. The scientific challenges are (1) measuring accuracy in a diverse interpretation space and (2) visualizing the complexity of the interpretation space to support responsible data analytics.
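To make the idea of a diversity-aware accuracy measure with confidence thresholds concrete, the following is a minimal sketch, not the project's actual method: instead of scoring predictions against a single gold label, each prediction is scored against the full distribution of (possibly disagreeing) annotator labels, and items whose annotator support falls below a threshold are flagged as unreliable. All function and parameter names here are illustrative assumptions.

```python
from collections import Counter

def diversity_aware_accuracy(predictions, annotations, confidence_threshold=0.5):
    """Score each prediction against the distribution of annotator labels.

    predictions: list of predicted labels, one per item
    annotations: list of lists of annotator labels, one list per item
    Returns (mean_score, per_item): each per-item score is the fraction of
    annotators agreeing with the prediction; items below the threshold are
    flagged as low-confidence rather than silently counted as errors.
    """
    per_item = []
    for pred, labels in zip(predictions, annotations):
        counts = Counter(labels)
        support = counts[pred] / len(labels)  # agreement with annotators
        per_item.append({"score": support,
                         "reliable": support >= confidence_threshold})
    mean_score = sum(d["score"] for d in per_item) / len(per_item)
    return mean_score, per_item

# Hypothetical example: three programme segments annotated for perceived bias
preds = ["biased", "neutral", "biased"]
annos = [["biased", "biased", "neutral"],    # majority agrees with prediction
         ["neutral", "neutral", "neutral"],  # unanimous agreement
         ["neutral", "neutral", "biased"]]   # only a minority agrees
mean_score, items = diversity_aware_accuracy(preds, annos)
```

A measure of this shape preserves annotator disagreement as signal: a prediction endorsed by a minority of annotators is not simply "wrong", it is marked as lying in a contested region of the interpretation space, which is what the thresholded confidence score is meant to expose.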