Assessing the performance of expert finding tools should take a multidimensional tack. Of course it is important that the system actually be able to find experts, so technical performance measures such as those described in the previous section (e.g., the precision and recall of a returned expert list) are important. One key to comparing and contrasting systems is a common data set: lists of experts and sources from which that expertise can be inferred. Unfortunately, very few organizations assess the performance of their expert finders, much less benchmark them against a standard data set or expert finding task.

Fortunately, the Text REtrieval Conference (TREC) Enterprise track evaluated both email search and expert search (Craswell et al. 2005). In the latter task, nine groups participated in the first expertise search task, which sought to find experts within 331,037 documents retrieved from the World Wide Web Consortium (W3C) site (*.w3.org) in June 2004. Given 50 topical queries and a list of 1,092 candidate experts, systems had to return the W3C people who are experts in each topic area; ten training queries were provided. The best system achieved a Mean Average Precision (MAP) of 0.275. MAP is the mean, over a set of queries, of each query's average precision, computed by averaging the precision at each point where a relevant document is retrieved; this measure rewards techniques that return more relevant documents earlier. US, European, and Chinese organizations participated, and results from TREC are displayed in Figure 4. Unfortunately, the commercial solutions described in the next section have not yet been assessed against this benchmark.
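For reference, the standard definition of MAP can be written as follows (the notation here is supplied for clarity and does not appear in the original text):

\[ \mathrm{AP}(q) = \frac{1}{R_q} \sum_{k=1}^{n} P(k)\,\mathrm{rel}(k), \qquad \mathrm{MAP} = \frac{1}{|Q|} \sum_{q \in Q} \mathrm{AP}(q) \]

where, for a query q over a ranked list of n results, R_q is the number of relevant items (here, true experts) for that query, P(k) is the precision of the top k results, and rel(k) is 1 if the result at rank k is relevant and 0 otherwise. Because a relevant result at a high rank contributes a large P(k) term, a system that places true experts near the top of its returned list earns a higher average precision, which is why MAP favors techniques that return relevant results earlier.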