Friday, October 16, 2015

Reading Notes For Week 7

IIR Chapter 8

  1. The chapter begins with a discussion of measuring the effectiveness of IR systems and the test collections most often used for this purpose. It then presents the straightforward notion of relevant and nonrelevant documents and the formal evaluation methodology that has been developed for evaluating unranked retrieval results.
  2. The relevance of retrieval results is the most important factor: blindingly fast, useless answers do not make a user happy. However, user perceptions do not always coincide with system designers' notions of quality.
  3. The standard way to measure human satisfaction is by various kinds of user studies. These might include quantitative measures, both objective, such as time to complete a task, as well as subjective, such as a score for satisfaction with the search engine, and qualitative measures, such as user comments on the search interface.
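The unranked-retrieval evaluation the chapter describes comes down to precision, recall, and the balanced F1 measure. A minimal sketch, with made-up document IDs for illustration:

```python
def precision_recall_f1(retrieved, relevant):
    """Compute precision, recall, and F1 from two sets of doc IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)  # relevant documents that were retrieved
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 4 docs retrieved, 3 docs actually relevant, 2 in common.
p, r, f1 = precision_recall_f1(retrieved={1, 2, 3, 4}, relevant={2, 4, 5})
print(p, r, f1)  # precision 0.5, recall 2/3, F1 = 4/7
```

F1 is the harmonic mean of precision and recall, so it rewards systems that balance the two rather than maximizing one at the other's expense.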

Improving the Effectiveness of Information Retrieval with Local Context Analysis

  1. The paper describes experiments on a range of collections of different sizes and languages, comparing a no-expansion baseline against conventional local feedback expansion. The results are informative for many forms of query expansion.
  2. The comparison between local context analysis as pseudo-relevance feedback and true relevance feedback is interesting; the two strategies suit different cases.
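The local feedback idea above can be sketched very simply: expand the query with terms that are frequent in the top-ranked documents. This is a toy simplification (the paper's local context analysis additionally ranks candidate concepts by their co-occurrence with the query terms); all documents and terms below are hypothetical.

```python
from collections import Counter

def expand_query(query_terms, top_docs, n_terms=2):
    """Add the n most common non-query terms from the top-ranked docs."""
    counts = Counter(
        term for doc in top_docs for term in doc.split()
        if term not in query_terms
    )
    expansion = [term for term, _ in counts.most_common(n_terms)]
    return list(query_terms) + expansion

# Hypothetical top-ranked documents for the query "jaguar speed".
top_docs = [
    "jaguar cat speed animal speed",
    "jaguar animal habitat cat",
]
print(expand_query(["jaguar", "speed"], top_docs))
# 'cat' and 'animal' each occur twice, so they become expansion terms
```

Because the top-ranked documents stand in for true relevance judgments, this is pseudo-relevance feedback: it helps when the initial ranking is reasonable and can hurt when it is not.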

A study of methods for negative relevance feedback & Relevance feedback revisited

  1. The paper conducts a study of methods for negative relevance feedback, comparing representative negative feedback methods covering both vector-space and model-based approaches.
  2. The authors also discuss how to evaluate negative feedback, which requires a test set with sufficiently many difficult topics. Judging from the comparisons in the paper, model-based negative feedback methods are generally more effective than those based on vector-space models.
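A minimal sketch of negative feedback in the vector-space (Rocchio) style: move the query vector away from the centroid of known-nonrelevant documents. The vectors and the weight `gamma` below are illustrative, not taken from the paper.

```python
def rocchio_negative(query, negative_docs, gamma=0.25):
    """Move the query away from the centroid of the negative documents."""
    n = len(negative_docs)
    centroid = [sum(doc[i] for doc in negative_docs) / n
                for i in range(len(query))]
    # Subtract the weighted centroid, keeping term weights non-negative.
    return [max(q - gamma * c, 0.0) for q, c in zip(query, centroid)]

query = [1.0, 0.5, 0.0]            # toy 3-term query vector
negatives = [[0.0, 1.0, 1.0],      # two known-nonrelevant documents
             [0.0, 1.0, 0.0]]
print(rocchio_negative(query, negatives))  # [1.0, 0.25, 0.0]
```

The clamping to zero matters: for difficult topics a large negative component can otherwise dominate the query and push out the few terms that still match relevant documents.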
