Online academic platforms accumulate large numbers of papers every day. Accurately and quickly assigning these new papers to existing author profiles is a pressing problem for such systems. The problem can be defined as follows: given a set of new papers and the paper lists of existing authors already in the system, assign each new paper to the correct existing author.
In this sub-task, the evaluation metric is a weighted pairwise F1: precision and recall are computed for each individual author and then combined with per-author weights. For M authors, the evaluation metrics are:
WeightedPrecision = \sum_{i=1}^{M} Precision_i \times weight_i
WeightedRecall = \sum_{i=1}^{M} Recall_i \times weight_i
WeightedF_1 = \frac{2 \times WeightedPrecision \times WeightedRecall}{WeightedPrecision + WeightedRecall}
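The weighted metrics above can be sketched as follows. Note that the weighting scheme is an assumption here: the task description does not define weight_i, and this sketch takes it to be each author's share of the total papers.

```python
def weighted_f1(per_author_stats):
    """Weighted precision/recall/F1 over authors.

    per_author_stats: list of (precision_i, recall_i, n_papers_i) tuples,
    one per author. weight_i is assumed to be n_papers_i / total_papers
    (an assumption; the weighting is not specified in the task description).
    """
    total = sum(n for _, _, n in per_author_stats)
    # WeightedPrecision = sum_i Precision_i * weight_i
    wp = sum(p * n / total for p, _, n in per_author_stats)
    # WeightedRecall = sum_i Recall_i * weight_i
    wr = sum(r * n / total for _, r, n in per_author_stats)
    if wp + wr == 0:
        return 0.0
    # WeightedF1 = harmonic mean of the weighted precision and recall
    return 2 * wp * wr / (wp + wr)
```

For example, two equally sized authors with (precision, recall) of (1.0, 1.0) and (0.5, 0.5) yield a weighted F1 of 0.75.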
Given a set of papers whose authors share the same name, participants are asked to partition the papers into clusters, one cluster per author.
In this track, the evaluation metric is Macro Pairwise-F1:
PairwisePrecision = \frac{\#PairsCorrectlyPredictedToSameAuthor}{\#TotalPairsPredictedToSameAuthor}
PairwiseRecall = \frac{\#PairsCorrectlyPredictedToSameAuthor}{\#TotalPairsToSameAuthor}
PairwiseF_1 = \frac{2 \times PairwisePrecision \times PairwiseRecall}{PairwisePrecision + PairwiseRecall}
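A minimal sketch of the pairwise metric for a single same-name candidate set (the macro score would average this over all names; the function and argument names here are illustrative, not part of the task's official evaluation code):

```python
from itertools import combinations

def pairwise_f1(pred_clusters, true_author_of):
    """Pairwise precision/recall/F1 for one same-name paper set.

    pred_clusters: list of lists of paper ids (the predicted clusters).
    true_author_of: dict mapping paper id -> ground-truth author id.
    """
    # Pairs predicted to the same author: all pairs within a predicted cluster.
    pred_pairs = set()
    for cluster in pred_clusters:
        pred_pairs.update(combinations(sorted(cluster), 2))
    # Pairs that truly belong to the same author.
    true_pairs = {
        (a, b)
        for a, b in combinations(sorted(true_author_of), 2)
        if true_author_of[a] == true_author_of[b]
    }
    correct = len(pred_pairs & true_pairs)
    precision = correct / len(pred_pairs) if pred_pairs else 0.0
    recall = correct / len(true_pairs) if true_pairs else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For instance, predicting one cluster [1, 2, 3] when papers 1 and 2 share an author but 3 does not gives precision 1/3, recall 1, and F1 0.5.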
WhoIsWho: Name Disambiguation from Scratch
222 participants
Start: 2021-12-21
Final Submissions: 2030-12-21
Sponsor: Zhipu.AI