Evaluation
1. Competition plan
The training and validation datasets will be released in mid-April. Each team can upload a result file to Biendata at most once a day; these submissions are used only to improve the algorithm. The evaluation dataset will be released on July 20. Multiple results can be submitted before July 25 (up to once per day).
2. Submission files
After the validation dataset is released, the participating teams may submit results to the platform multiple times. Please upload a result file named "result.json"; its format is the same as the sample output in the task description, and the leaderboard is updated promptly (a hedged sketch of such a file appears after the requirements below). After the evaluation dataset is released, the participating teams may submit evaluation-set result files multiple times, but at most once a day per team. The top three teams will eventually need to submit the following additional materials:
1. Related code and instructions
2. A method description document (not an evaluation paper; the writing requirements for evaluation papers can be found on the CCKS 2019 official website)
The above two documents must be sent to ccks2019_erl@163.com before Aug 1. The email subject should be "CCKS-EL-<team name>", for example "CCKS-EL-火箭队".
The code and its documentation must be packaged into a single archive (tar, zip, gzip, rar, etc.) named code.xxx. The archive must contain all program code and the related configuration instructions, and the program should run and produce output that matches the submitted result file. If the method uses additional resources, please explain them and provide the resource files or their addresses; all additional resources must be open source and free to use.
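For illustration only, here is a minimal Python sketch of writing a result file. The record layout (text_id, text, and a mention_data list of mention/offset/kb_id entries) and all concrete values are assumptions modeled on common Chinese entity-linking output formats, not the official schema; the sample output in the task description is authoritative.

    import json

    # Hypothetical submission records. Field names and values are
    # illustrative assumptions; follow the task description's sample output.
    results = [
        {
            "text_id": "1",
            "text": "比特币吸粉无数",
            "mention_data": [
                {"mention": "比特币", "offset": "0", "kb_id": "278410"},
            ],
        },
    ]

    # Write one JSON object per line (a common layout for such datasets;
    # check the sample output for the exact format expected).
    with open("result.json", "w", encoding="utf-8") as f:
        for record in results:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")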
3. Evaluation metric
The competition adopts Macro Pairwise-F1 as the evaluation metric. For each of the $N$ evaluation texts, let $G_i$ denote the set of gold (mention, entity) pairs for text $i$ and $S_i$ the set of pairs output by the system; precision and recall are macro-averaged over texts.

Precision: $P = \frac{1}{N}\sum_{i=1}^{N}\frac{|S_i \cap G_i|}{|S_i|}$

Recall: $R = \frac{1}{N}\sum_{i=1}^{N}\frac{|S_i \cap G_i|}{|G_i|}$

F1 score: $F_1 = \frac{2 \cdot P \cdot R}{P + R}$
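To make the metric concrete, the following is a minimal Python sketch of Macro Pairwise-F1 under the assumptions above: each prediction is a (mention, kb_id) pair, and precision/recall are macro-averaged per text. The official scorer may differ in details such as the handling of empty outputs.

    def macro_pairwise_f1(gold, pred):
        """gold, pred: dicts mapping text_id -> set of (mention, kb_id) pairs."""
        precisions, recalls = [], []
        for text_id, gold_pairs in gold.items():
            pred_pairs = pred.get(text_id, set())
            correct = len(gold_pairs & pred_pairs)
            # Convention assumed here: empty prediction/gold sets score 0.
            precisions.append(correct / len(pred_pairs) if pred_pairs else 0.0)
            recalls.append(correct / len(gold_pairs) if gold_pairs else 0.0)
        p = sum(precisions) / len(precisions)
        r = sum(recalls) / len(recalls)
        f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
        return p, r, f1

    # Example: two gold links, one predicted correctly and one linked to the
    # wrong KB entry, giving P = R = F1 = 0.5 for this single text.
    gold = {"1": {("华为", "KB001"), ("小米", "KB002")}}
    pred = {"1": {("华为", "KB001"), ("小米", "KB999")}}
    print(macro_pairwise_f1(gold, pred))  # (0.5, 0.5, 0.5)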
CCKS 2019 Task 2 (Mandarin Text Data Only)
Prize: ¥15,000
Participants: 820
Start: 2019-04-19
Final submissions: 2019-07-25
Sponsor: CCKS 2019 & Baidu