Wednesday, February 11, 2009

Grid@CLEF track: a framework for IR experimentation

Don't be put off by the title, this isn't a post about Grid Computing. Instead, I'm going to talk about the Grid@CLEF task, which defines a framework and a TREC-style track for experimentation with the various components of IR systems. Disclaimer: I'm pleased to be on the advisory committee of the Grid@CLEF task.

Firstly, a bit of background. The Cross-Language Evaluation Forum (CLEF) is a spin-off from TREC that concentrates on the evaluation of mono-lingual (non-English) and cross-lingual retrieval. CLEF has been running since 2000, and attracts a wide spread of participating research groups from across the globe, reaching 130 for CLEF 2008.

The tracks have now been defined for CLEF 2009, and they include the Grid track. Nicola Ferro (Univ. of Padova) and Donna Harman (NIST) are the big-wigs for this task, with suggestions from the advisory committee. So what does Grid mean in this context? Well, the idea (in my own words) is that the components of an IR system that affect retrieval effectiveness can be roughly categorised as follows: tokeniser, stopword list, word-decompounder, stemmer, and ranking function. In the Grid track, the concept is that these components can be interchanged, giving a fuller understanding of their individual impact. The Grid framework facilitates such interchanges by defining a way for various mixes of components to be tried, thus creating a "grid" of experimental results.
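To make the grid idea concrete, here is a minimal sketch (my own illustration, not the official framework) of how the cells of such a grid might be enumerated. All of the component names are hypothetical placeholders:

```java
import java.util.List;

public class GridEnumerator {
    public static void main(String[] args) {
        // Hypothetical component choices -- the real track will define its own.
        List<String> tokenisers = List.of("whitespace", "letter");
        List<String> stopwordLists = List.of("none", "standard");
        List<String> stemmers = List.of("none", "porter", "snowball");
        List<String> rankers = List.of("BM25", "PL2");

        // Each cell of the "grid" is one experimental run: a unique
        // combination of components whose effectiveness can be compared.
        for (String tok : tokenisers)
            for (String stop : stopwordLists)
                for (String stem : stemmers)
                    for (String rank : rankers)
                        System.out.printf("run: %s + %s + %s + %s%n",
                                tok, stop, stem, rank);
    }
}
```

Even this toy example yields 2 x 2 x 3 x 2 = 24 runs, which hints at why a shared framework is needed to manage the combinatorics.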

However, the problem with such an experiment is that each of these components is often tied to a particular IR system, and the IR system itself can have an impact on the results. Instead, the idea behind the Grid track is that the output from each component (tokeniser, stopword list, etc.) of a given IR system is saved in an XML format and shared among participants. In this way, every combination of each component can be investigated.
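The interchange format has not been finalised (more on this below), but as a rough illustration, the output of a tokeniser stage might be serialised along the following lines. The element names and structure here are my own invention, not the agreed specification:

```java
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

public class TokenDump {
    public static void main(String[] args) throws Exception {
        // Hypothetical tokeniser output for one document.
        String[] tokens = {"grid", "clef", "retrieval"};

        XMLStreamWriter w = XMLOutputFactory.newInstance()
                .createXMLStreamWriter(System.out, "UTF-8");
        w.writeStartDocument("UTF-8", "1.0");
        w.writeStartElement("tokens");        // element names are invented,
        w.writeAttribute("docid", "DOC-001"); // not the agreed format
        for (String t : tokens) {
            w.writeStartElement("t");
            w.writeCharacters(t);
            w.writeEndElement();
        }
        w.writeEndElement();
        w.writeEndDocument();
        w.flush();
    }
}
```

The point of such a dump is that another system can pick up the token stream and continue with its own stopword removal, stemming or ranking, without re-implementing the original tokeniser.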

The Grid@CLEF site describes the intuitions behind the task in more detail, including an example of how results will be presented.

Here in Glasgow, we like the concept behind the Grid track. Indeed, it has some similarities to the way we ran the opinion finding task in the TREC 2008 Blog track. In the opinion finding task (where the aim is to retrieve relevant and opinionated blog posts about the target topic), the retrieval performance of opinion identification approaches appears to be linked to the effectiveness of the underlying "topical relevance" retrieval approach. To investigate this in TREC 2008, we provided 5 standard topical relevance baselines, which participants were able to use as input to their opinion finding technique(s). You can read more in the Overview of the TREC 2008 Blog track (Iadh Ounis, Craig Macdonald and Ian Soboroff), which should be released in a few weeks' time.
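As an illustration of how such baselines might be used, the following sketch re-ranks a baseline by linearly combining each post's (normalised) relevance score with an opinion score. This is a simplified, invented combination for illustration only, not any participating group's actual method:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class OpinionRerank {
    public static void main(String[] args) {
        // Hypothetical normalised relevance and opinion scores in [0,1].
        Map<String, Double> relevance = Map.of("postA", 0.92, "postB", 0.85, "postC", 0.80);
        Map<String, Double> opinion   = Map.of("postA", 0.15, "postB", 0.90, "postC", 0.70);
        double lambda = 0.5; // mixing weight, purely illustrative

        // Combine the two evidence sources; posts that are both relevant
        // and opinionated should rise to the top of the re-ranked list.
        List<String> posts = new ArrayList<>(relevance.keySet());
        posts.sort((a, b) -> Double.compare(
                combined(b, relevance, opinion, lambda),
                combined(a, relevance, opinion, lambda)));
        for (String p : posts)
            System.out.printf("%s %.3f%n", p, combined(p, relevance, opinion, lambda));
    }

    static double combined(String p, Map<String, Double> rel,
                           Map<String, Double> op, double lambda) {
        return (1 - lambda) * rel.get(p) + lambda * op.get(p);
    }
}
```

Because every participant starts from the same baselines, differences in the final rankings can be attributed to the opinion finding component rather than to the underlying retrieval system, which is exactly the kind of isolation the Grid track aims for.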

I have committed to implementing Terrier support for the Grid@CLEF track. The XML specification is currently being agreed upon by the Grid@CLEF organisers and advisers. If you are interested in using Terrier for this task, you can follow the progress on the TR-9 issue concerning Terrier's Grid@CLEF support. The exact specification for the Grid@CLEF XML interchange format is still in flux, but once it has settled down, Terrier support should be forthcoming.
