BAAI & ICT - False news Detection Task 1
Prize: ¥100,000 (~$13,986)
1391 teams, 1476 participants
2019-08-30 - Launch
2019-10-13 - Team Merger Deadline
2019-11-06 - Close

Evaluation

 

For the false news text detection task and the false news multi-modal detection task, the evaluation metric is F1, the harmonic mean of precision and recall:


Precision = # of false news correctly predicted / # of false news predicted in total

Recall = # of false news correctly predicted / # of false news labeled in the test set

F1 = (2 * Precision * Recall) / (Precision + Recall)

For the false news image detection task, the evaluation metric is also F1:

Precision = # of false images correctly predicted / # of false images predicted in total

Recall = # of false images correctly predicted / # of false images labeled in the test set

F1 = (2 * Precision * Recall) / (Precision + Recall)
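
To make the scoring concrete, here is a minimal Python sketch of the metric; it applies identically to the text, multi-modal, and image tasks (with "false news" read as "false images" for the image task). The function name f1_false_news and the convention that label 1 marks a false item are illustrative assumptions, not part of the official evaluation code.

def f1_false_news(y_true, y_pred, positive=1):
    """Return (precision, recall, F1) for the false (positive) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    predicted_pos = sum(1 for p in y_pred if p == positive)   # items predicted as false in total
    labeled_pos = sum(1 for t in y_true if t == positive)     # items labeled as false in the test set

    precision = tp / predicted_pos if predicted_pos else 0.0
    recall = tp / labeled_pos if labeled_pos else 0.0
    f1 = (2 * precision * recall) / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy example: labels 1 = false, 0 = real.
y_true = [1, 0, 1, 0]
y_pred = [1, 1, 0, 0]
print(f1_false_news(y_true, y_pred))  # (0.5, 0.5, 0.5)

With labels encoded this way, scikit-learn's f1_score(y_true, y_pred, pos_label=1) should return the same F1 value and can serve as a cross-check.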

The test set is divided into two parts: the Phase I test set and the Phase II test set. In Phase I, all participants can see their scores. Phase II requires submitting code and models, and the Phase II scores determine each team's final rank in the competition. In Phase II, the organizers will verify that the submitted models and code reproduce the reported results and will determine the final score based on the outcome of that reproducibility check.
