Learning to Rank with Vespa – Getting started with Text Search
<p><a href="https://vespa.ai/">Vespa.ai</a> have just published two tutorials to help people to get started with text search applications by building scalable solutions with Vespa. The tutorials were based on the <a href="https://github.com/microsoft/TREC-2019-Deep-Learning#document-ranking-task">full document ranking task</a> released by <a href="http://www.msmarco.org/">Microsoft’s MS MARCO dataset</a>’s team.</p><p><a href="https://docs.vespa.ai/documentation/tutorials/text-search.html">The first tutorial</a> helps you to create and deploy a basic text search application with Vespa as well as to download, parse and feed the dataset to a running Vespa instance. They also show how easy it is to experiment with ranking functions based on built-in ranking features available in Vespa.</p><p><a href="https://docs.vespa.ai/documentation/tutorials/text-search-ml.html">The second tutorial</a> shows how to create a training dataset containing Vespa ranking features that allow you to start training ML models to improve the app’s ranking function. It also illustrates the importance of going beyond pointwise loss functions when training models in a learning to rank context.</p><p>Both tutorials are detailed and come with code available to reproduce the steps. Here are the highlights.</p><h2>Basic text search app in a nutshell</h2><p>The main task when creating a basic app with Vespa is to write a search definition file containing information about the data you want to feed to the application and how Vespa should match and order the results returned in response to a query.</p><p>Apart from some additional details described in <a href="https://docs.vespa.ai/documentation/tutorials/text-search.html">the tutorial</a>, the search definition for our text search engine looks like the code snippet below. We have a title and body <code>field</code> containing information about the documents available to be searched. The <code>fieldset</code> keyword indicates that our query will match documents by searching query words in both title and body fields. Finally, we have defined two <code>rank-profile</code>, which controls how the matched documents will be ranked. The <code>default</code> rank-profile uses <code>nativeRank,</code> which is one of many <a href="https://docs.vespa.ai/documentation/reference/rank-features.html">built-in rank features</a> available in Vespa. The <code>bm25</code> rank-profile uses the widely known <a href="https://docs.vespa.ai/documentation/reference/bm25.html">BM25 rank feature</a>.</p><pre>search msmarco { <br/>
document msmarco {<br/> field title type string {<br/>  indexing: index | summary<br/>  index: enable-bm25<br/> }<br/> field body type string {<br/>  indexing: index | summary<br/>  index: enable-bm25<br/> }<br/> }
fieldset default {<br/> fields: title, body<br/> }
rank-profile default {<br/> first-phase {<br/> expression: nativeRank(title, body)<br/> }<br/> }
rank-profile bm25 inherits default {<br/> first-phase {<br/> expression: bm25(title) + bm25(body)<br/> }<br/> }
}</pre><p>When we have more than one rank-profile defined, we can choose which one to use at query time by including the <code>ranking</code> parameter in the query:</p><pre>curl -s "<URL>/search/?query=what+is+dad+bod"<br/>curl -s "<URL>/search/?query=what+is+dad+bod&ranking=bm25"</pre><p>The first query above does not specify the <code>ranking</code> parameter and will therefore use the <code>default</code> rank-profile. The second query explicitly asks for the <code>bm25</code> rank-profile to be used instead.</p><p>Having multiple rank-profiles allows us to experiment with different ranking functions. There is one relevant document for each query in the MS MARCO dataset. The figure below is the result of an evaluation script that sent more than 5,000 queries to our application and asked for results using both rank-profiles described above. We then tracked the position of the relevant document for each query and plotted the distribution over the first 10 positions.</p><figure data-orig-width="650" data-orig-height="700" class="tmblr-full"><img src="https://66.media.tumblr.com/401dec61ed48dd2b4dd1f155a747aeeb/60c7799dba94b2e9-04/s540x810/e873ef7129f426267ae17825c3b4accb4370ade3.png" data-orig-width="650" data-orig-height="700"/></figure><p>It is clear that the <code>bm25</code> rank-profile does a much better job in this case. It places the relevant document in the first positions much more often than the <code>default</code> rank-profile.</p><h2>Data collection sanity check</h2><p>After setting up a basic application, we likely want to collect rank feature data to help improve our ranking functions. Vespa allows us to return rank features along with query results, which enables us to create training datasets that combine relevance information with search engine rank information.</p><p>There are different ways to create a training dataset in this case. Because of this, we believe it is a good idea to establish a sanity check before we start collecting data. The goal of such a sanity check is to increase the likelihood that we catch bugs early and create datasets containing the right information for our task of improving ranking functions.</p><p>Our proposal is to use the dataset to train a model with the same features and functional form used by the baseline you want to improve upon. If the dataset is well built and contains useful information about the task you are interested in, you should be able to get results at least as good as those obtained by your baseline on a separate test set.</p><p>Since our baseline in this case is the <code>bm25</code> rank-profile, we should fit a linear model containing only the bm25 features:</p><pre>a + b * bm25(title) + c * bm25(body)</pre><p>Having this simple procedure in place helped us catch a few silly bugs in our data collection code and got us on the right track faster than would have happened otherwise. Bugs in your data are hard to catch once you begin experimenting with complex models, since you never know whether a problem comes from the data or from the model. So this is a practice we highly recommend.</p>
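<p>For illustration, here is what such a sanity-check fit could look like with scikit-learn, assuming the collected features and the binary <code>relevant</code> label have been written to a CSV file. The file name and column names below are assumptions for the sketch, not the exact ones used in the tutorial code.</p><pre>
# Sanity check: fit a logistic regression on the two bm25 features only and
# compare its ranking quality against the plain bm25 baseline on a test set.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = pd.read_csv("training_data.csv")         # assumed file name
X = data[["bm25(title)", "bm25(body)"]]         # assumed column names
y = data["relevant"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression()
model.fit(X_train, y_train)

# The intercept plays the role of a, the coefficients the roles of b and c.
print("a:", model.intercept_[0])
print("b, c:", model.coef_[0])
</pre><p>If ranking the test-set documents by the fitted score does not perform at least as well as ranking them by the raw bm25 sum, the problem is most likely in the collected data rather than in the model.</p>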
<h2>How to create a training dataset with Vespa</h2><p>Asking Vespa to return ranking features in the result set is as simple as setting the <code>ranking.listFeatures</code> parameter to <code>true</code> in the request. Below is the body of a POST request that specifies the query in <a href="https://docs.vespa.ai/documentation/query-language.html">YQL format</a> and enables rank feature dumping.</p><pre>body = {<br/> "yql": 'select * from sources * where (userInput(@userQuery));',<br/> "userQuery": "what is dad bod",<br/> "ranking": {"profile": "bm25", "listFeatures": "true"},<br/>}</pre><p>Vespa returns <a href="https://github.com/vespa-engine/system-test/blob/master/tests/search/rankfeatures/dump.txt">a bunch of ranking features</a> by default, but we can explicitly define which features we want by creating a rank-profile that uses <code>ignore-default-rank-features</code> and lists the desired features with the <code>rank-features</code> keyword, as shown below. The <code>random</code> first phase will be used when sampling random documents to serve as a proxy for non-relevant documents.</p><pre>rank-profile collect_rank_features inherits default {<br/><br/> first-phase {<br/>  expression: random<br/> }<br/><br/> ignore-default-rank-features<br/><br/> rank-features {<br/>  bm25(title)<br/>  bm25(body)<br/>  nativeRank(title)<br/>  nativeRank(body)<br/> }<br/><br/>}</pre><p>We want a dataset that helps us train models that generalize well when running on a Vespa instance. This implies that we are only interested in collecting documents that are matched by the query, because those are the documents that would be presented to the first-phase model in a production environment. Here is the data collection logic:</p><pre>hits = get_relevant_hit(query, rank_profile, relevant_id)<br/>if hits:<br/> hits.extend(get_random_hits(query, rank_profile, n_samples))<br/> data = annotate_data(hits, query_id, relevant_id)<br/> append_data(file, data)</pre><p>For each query, we first send a request to Vespa to retrieve the relevant document associated with that query. If the relevant document is matched by the query, Vespa returns it and we expand the number of documents associated with the query by sending a second request, which asks Vespa to return a number of random documents sampled from the set of documents matched by the query.</p><p>We then parse the hits returned by Vespa and organize the data into a tabular form containing the rank features and a binary variable indicating whether the query-document pair is relevant or not. At the end we have a dataset with the following format; a sketch of this annotation step follows the figure. More details can be found in <a href="https://docs.vespa.ai/documentation/tutorials/text-search-ml.html">our second tutorial</a>.</p><figure class="tmblr-full" data-orig-height="268" data-orig-width="1324"><img src="https://66.media.tumblr.com/5add0b2f89dc17281a50e15c91bc4c68/60c7799dba94b2e9-b4/s540x810/27eb0eb7626c85140d8596d0a82c9dd8b5a558fe.png" data-orig-height="268" data-orig-width="1324"/></figure>
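<p>To make the annotation step concrete, here is a rough sketch of how the hits returned by Vespa could be turned into rows of that table. It assumes the rank features come back in the <code>rankfeatures</code> summary field of each hit and that each document carries an <code>id</code> field; the helper is illustrative and not the exact code used in the tutorial.</p><pre>
# Illustrative annotation step: turn the hits of one query into tabular rows.
def annotate_data(hits, query_id, relevant_id):
    rows = []
    for hit in hits:
        fields = hit["fields"]
        features = fields["rankfeatures"]      # present when listFeatures is true
        rows.append({
            "qid": query_id,
            "docid": fields["id"],             # assumes an "id" document field
            "relevant": 1 if fields["id"] == relevant_id else 0,
            "bm25(title)": features["bm25(title)"],
            "bm25(body)": features["bm25(body)"],
            "nativeRank(title)": features["nativeRank(title)"],
            "nativeRank(body)": features["nativeRank(body)"],
        })
    return rows
</pre><p>Appending the rows produced for each query then yields the tabular dataset shown above.</p>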
<h2>Beyond pointwise loss functions</h2><p>The most straightforward way to train the linear model suggested in our data collection sanity check would be to use a vanilla logistic regression, since our target variable <code>relevant</code> is binary. The most commonly used loss function in this case (binary cross-entropy) is referred to as a pointwise loss function in the LTR literature, as it does not take the relative order of documents into account.</p><p>However, as we described in <a href="https://docs.vespa.ai/documentation/tutorials/text-search.html">our first tutorial</a>, the metric that we want to optimize in this case is the Mean Reciprocal Rank (MRR). MRR is affected by the relative order of the scores we assign to the documents returned for a query, not by their absolute magnitudes. This disconnect between the characteristics of the loss function and the metric of interest might lead to suboptimal results.</p><p>For ranking search results, it is preferable to train our model with a listwise loss function, which takes the entire ranked list into consideration when updating the model parameters. To illustrate this, we trained linear models using the <a href="https://github.com/tensorflow/ranking">TF-Ranking framework</a>. The framework is built on top of TensorFlow and allows us to specify pointwise, pairwise and listwise loss functions, among other things. A small standalone illustration of the pointwise/listwise distinction is included at the end of this post.</p><p>We <a href="https://github.com/vespa-engine/sample-apps/tree/master/text-search">made available</a> the script that we used to train the two models that generated the results displayed in the figure below. The script uses simple linear models but can be useful as a starting point to build more complex ones.</p><figure class="tmblr-full" data-orig-height="700" data-orig-width="650"><img src="https://66.media.tumblr.com/6301c2c5d7015c9f8f8f824f4dd9440b/60c7799dba94b2e9-5e/s540x810/0280c78756d9a59068630a85ff30aad3876cdf97.png" data-orig-height="700" data-orig-width="650"/></figure><p>Overall, on average, there is not much difference between those models with respect to MRR, which was expected given the simplicity of the models described here. However, we can see that the model trained with a listwise loss function places the relevant document in the first two positions of the ranked list more often than the pointwise model. We expect the difference in MRR between pointwise and listwise loss functions to increase as we move on to more complex models.</p><p>The main goal here was simply to show the importance of choosing better loss functions when dealing with LTR tasks and to give a quick start to those who want to give it a shot in their own Vespa applications. Now it is up to you: check out <a href="https://docs.vespa.ai/documentation/tutorials/text-search.html">the tutorials</a>, build something and <a href="https://twitter.com/vespaengine">let us know</a> how it went. Feedback is welcome!</p>
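<p>As a parting illustration of why the loss choice matters, here is a small, self-contained sketch contrasting a pointwise loss (binary cross-entropy computed per document) with a listwise loss (softmax cross-entropy computed over the whole candidate list) for a single query. It is plain NumPy, meant only to convey the idea, and is not the TF-Ranking API used in the script above.</p><pre>
# One query with three candidate documents; only the first one is relevant.
import numpy as np

scores = np.array([1.2, 0.8, -0.3])   # model scores for the candidate documents
labels = np.array([1.0, 0.0, 0.0])    # binary relevance labels

# Pointwise: binary cross-entropy treats every document independently.
probs = 1.0 / (1.0 + np.exp(-scores))
pointwise_loss = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

# Listwise: softmax cross-entropy depends on all scores of the list at once,
# so it depends only on the differences between scores, not their absolute values.
softmax = np.exp(scores) / np.sum(np.exp(scores))
listwise_loss = -np.sum(labels * np.log(softmax))

print(f"pointwise: {pointwise_loss:.3f}  listwise: {listwise_loss:.3f}")
</pre><p>Shifting all three scores by the same constant changes the pointwise loss but leaves the listwise loss untouched, which is much closer to how MRR judges a ranked list.</p>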