E-commerce search and recommendation with Vespa.ai
<h2>Introduction</h2><p>Holiday shopping season is upon us and it’s time for a blog post on e-commerce search and recommendation using <a href="http://vespa.ai/">Vespa.ai</a>. Vespa.ai is used as the search and recommendation backend at multiple Yahoo e-commerce sites in Asia, like <a href="https://tw.buy.yahoo.com/">tw.buy.yahoo.com</a>.</p><p>This blog post discusses some of the challenges in e-commerce search and recommendation, and shows how they can be solved using the features of Vespa.ai.</p><figure data-orig-width="4000" data-orig-height="2499" class="tmblr-full"><img src="https://66.media.tumblr.com/2e0f40eed18611a0fef419fb68f85250/f18c8d627f80772e-5c/s540x810/88daa448ae7e3c217e0f4c3a5e5c73f969087e0e.png" alt="image" data-orig-width="4000" data-orig-height="2499"/></figure><p><i>Photo by <a href="https://unsplash.com/@jonasleupe?utm_source=medium&utm_medium=referral">Jonas Leupe</a> on <a href="https://unsplash.com/?utm_source=medium&utm_medium=referral">Unsplash</a></i></p><h2>Text matching and ranking in e-commerce search</h2><p>E-commerce search has text ranking requirements where traditional text ranking features like <i>BM25</i> or <i>TF-IDF</i> might produce poor results. For an introduction to some of the issues with <i>TF-IDF/BM25</i> see <a href="https://medium.com/empathyco/the-influence-of-tf-idf-algorithms-in-ecommerce-search-e7cb9ab8e662">the influence of TF-IDF algorithms in e-commerce search</a>. One example from that post is a search for <i>ipad 2</i>, which with traditional <i>TF-IDF</i> ranking will rank <i>‘black mini ipad cover, compatible with ipad 2’</i> higher than <i>‘Ipad 2’</i>, as the former product description has several occurrences of the query terms <i>Ipad</i> and <i>2</i>.</p><p>Vespa allows developers and relevancy engineers to fine-tune the text ranking features to meet domain-specific ranking challenges. For example, developers can control whether multiple occurrences of a query term in the matched text should impact the relevance score. See <a href="https://docs.vespa.ai/documentation/reference/nativerank.html#boost-tables">text ranking occurrence tables</a> and <a href="https://docs.vespa.ai/documentation/reference/rank-types.html">Vespa text ranking types</a> for in-depth details. The Vespa text ranking features also take term proximity into account in the relevancy calculation, i.e. how close the query terms appear in the matched text; <i>BM25/TF-IDF</i>, on the other hand, does not take query term proximity into account at all. Vespa also implements <a href="https://docs.vespa.ai/documentation/reference/bm25.html">BM25</a>, but it’s up to the relevancy engineer to choose which of the rich set of built-in text <a href="https://docs.vespa.ai/documentation/reference/rank-features.html">ranking features</a> in Vespa to use.</p>
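<p>As a minimal sketch of what this looks like in practice, the hypothetical rank profile below ranks products by a weighted combination of text ranking features over illustrative <i>title</i> and <i>description</i> fields (the field names and weights are assumptions, not taken from a real application):</p><pre>
# Hypothetical rank profile in a Vespa application package.
# nativeRank combines term weight, proximity and field match quality;
# bm25(title) could be swapped in if plain BM25 is preferred.
rank-profile product-text inherits default {
    first-phase {
        # weight matches in the title twice as high as matches
        # in the longer product description
        expression: 2 * nativeRank(title) + nativeRank(description)
    }
}
</pre><p>The profile is selected at query time with the <i>ranking.profile</i> query parameter, so different text ranking trade-offs can be tested side by side.</p>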
<p>Vespa uses <a href="https://opennlp.apache.org/">OpenNLP</a> for <a href="https://docs.vespa.ai/documentation/linguistics.html">linguistic processing</a> like tokenization and stemming, with support for multiple languages (as supported by OpenNLP).</p><h2>Custom ranking business logic in e-commerce search</h2><p>Your manager might tell you that <i>these</i> items of the product catalog should be prominent in the search results. How do you tackle this with your existing search solution? Maybe by adding synthetic query terms to the original user query, by using separate indexes with federated search, or even with a key-value store which is rarely in sync with the product catalog search index?</p><p>With Vespa it’s easy to promote content, as <a href="https://docs.vespa.ai/documentation/ranking.html">Vespa’s ranking framework</a> is <i>just math</i> and allows the developer to formulate the relevancy scoring function explicitly without having to rewrite the query formulation. Vespa controls ranking through <a href="https://docs.vespa.ai/documentation/reference/ranking-expressions.html">ranking expressions</a> configured in rank profiles, which gives full control through the expressive Vespa ranking expression language. The rank profile to use is chosen at query time, so developers can design multiple rank profiles to rank documents differently based on query intent classification. See the section on query classification below for more details on how query classification can be done with Vespa.</p><p>A sample rank profile which implements a tiered relevance scoring function, where sponsored or promoted items are always ranked above non-sponsored documents, is shown below. The rank profile is applied to all documents which match the query formulation, and the relevance score of a hit is the value of the <i>first-phase expression</i>. Vespa also supports <a href="https://docs.vespa.ai/documentation/phased-ranking.html">multi-phase ranking</a>.</p><figure data-orig-width="639" data-orig-height="243" class="tmblr-full"><img src="https://66.media.tumblr.com/a11fa12596543e0cb84ed9a0249a92f3/f18c8d627f80772e-52/s540x810/cf4a61d45967d9ba1c80640fbc10830ea40f6191.png" data-orig-width="639" data-orig-height="243"/></figure><p><i>Sample hand-crafted rank profile defined in the Vespa application package.</i></p>
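<p>A profile in the same spirit as the one in the figure could look like the sketch below; the <i>sponsored</i> attribute field and the constant used to lift sponsored items are assumptions for illustration:</p><pre>
# Hypothetical tiered rank profile: sponsored items always rank
# above organic results; within each tier, hits are ordered by
# the text ranking score.
rank-profile sponsored-first inherits default {
    first-phase {
        # adding a large constant for sponsored items places them
        # in a tier of their own above all non-sponsored hits
        expression: if(attribute(sponsored) == 1, 1000000 + nativeRank, nativeRank)
    }
}
</pre>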
<p>The above example is hand-crafted, but for optimal relevance we recommend looking at learning to rank (LTR) methods. See <a href="https://docs.vespa.ai/documentation/tutorials/text-search-ml.html">learning to rank using TensorFlow Ranking</a> and <a href="https://docs.vespa.ai/documentation/learning-to-rank.html">learning to rank using XGBoost</a>. The trained MLR models can be used in combination with the specific business ranking logic. In the example above we could replace the <i>default-ranking</i> function with the trained MLR model, hence combining business logic with MLR models.</p><h2>Facets and grouping in e-commerce search</h2><p>Guiding the user through the product catalog by guided navigation or faceted search is a feature which users expect from an e-commerce search solution today, and with Vespa, facets and guided navigation are easily implemented with the powerful <a href="https://docs.vespa.ai/documentation/grouping.html">Vespa Grouping Language</a>.</p><figure data-orig-width="919" data-orig-height="645" class="tmblr-full"><img src="https://66.media.tumblr.com/c071b2ef7d6b1f1550283e57288593d7/f18c8d627f80772e-09/s540x810/f5d5f1b39915053f5ec3cfc4ab888e7e667d4d20.png" data-orig-width="919" data-orig-height="645"/></figure><p><i>Sample screenshot from the <a href="https://github.com/vespa-engine/sample-apps/tree/master/use-case-shopping">Vespa e-commerce sample application</a> UI demonstrating search facets using the Vespa Grouping Language.</i></p><p>The Vespa grouping language supports deeply nested grouping and aggregation operations over the matched content, and also allows pagination within the group(s). For example, when grouping hits by category and displaying the top 3 ranking hits per category, the language allows paginating to render more hits from a specified category group.</p>
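<p>As a minimal sketch, the YQL query below groups matching products by an assumed <i>category</i> field, returning the ten largest categories with a hit count and the top 3 ranking hits for each (the field name is illustrative):</p><pre>
select * from sources * where default contains "shoes" |
    all(group(category) max(10) order(-count())
        each(output(count())
             max(3) each(output(summary()))))
</pre><p>The facet counts and the per-category hits are returned in a single query, and continuation tokens in the response allow paginating deeper into any group.</p>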
<h2>The vocabulary mismatch problem in e-commerce search</h2><p>Studies (e.g. <a href="https://dl.acm.org/citation.cfm?id=3331323">this study</a> from Flipkart) find that a significant fraction of queries in e-commerce search suffer from vocabulary mismatch between the user’s query formulation and the relevant product descriptions in the product catalog. For example, the query <i>“ladies pregnancy dress”</i> would not match a product with description <i>“women maternity gown”</i> due to vocabulary mismatch between the query and the product description. Traditional Information Retrieval (IR) methods like TF-IDF/BM25 would fail to retrieve the relevant product right off the bat.</p><p>Most techniques currently used to tackle the vocabulary mismatch problem are built around <a href="https://en.wikipedia.org/wiki/Query_expansion">query expansion</a>. With the recent advances in NLP using transfer learning with large pre-trained language models, we believe that future solutions will be built around multilingual semantic retrieval using text embeddings from pre-trained deep neural network language models. Vespa has recently announced a <a href="https://docs.vespa.ai/documentation/semantic-qa-retrieval.html">sample application on semantic retrieval</a> which addresses the vocabulary mismatch problem, as retrieval is not based on query terms alone but on the dense tensor embedding representations of the query and the document. The sample app reproduces the accuracy of the retrieval model described in the <a href="https://ai.googleblog.com/2019/07/multilingual-universal-sentence-encoder.html">Google blog post about Semantic Retrieval</a>.</p><p>Returning to the query and product title example above, which suffers from vocabulary mismatch: if we move away from the textual representation to the respective dense tensor embedding representations, we find that the semantic similarity between them is high (0.93). The high semantic similarity means that the relevant product would be retrieved when using semantic retrieval. The semantic similarity is in this case defined as the <a href="https://en.wikipedia.org/wiki/Cosine_similarity">cosine similarity</a> between the dense tensor embedding representations of the query and the product description. Vespa has strong support for expressing and storing <a href="https://docs.vespa.ai/documentation/tensor-intro.html">tensor fields</a> over which one can perform <a href="https://docs.vespa.ai/documentation/reference/tensor.html#operations">tensor operations</a> (e.g. cosine similarity) for ranking; this functionality is demonstrated in the mentioned sample application.</p><p>Below is a simple matrix comparing the semantic similarity of three pairs of (query, product description). The tensor embeddings of the textual representations are obtained with the Universal Sentence Encoder from <i>Google</i>.</p><figure class="tmblr-full" data-orig-height="277" data-orig-width="652"><img src="https://66.media.tumblr.com/9e24d0a46ee723bcfef3ee0140003575/f18c8d627f80772e-3f/s540x810/a41664252cf8d8e4df5af5568cebab0719401bc3.png" data-orig-height="277" data-orig-width="652"/></figure><p><i>Semantic similarity matrix of different queries and product descriptions.</i></p><p>The Universal Sentence Encoder model from Google is multilingual, as it was trained on text from multiple languages. Using these text embeddings enables multilingual retrieval, so searches written in Chinese can retrieve relevant products by descriptions written in multiple languages. This is another nice property of semantic retrieval models which is particularly useful in e-commerce search applications with global reach.</p>
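<p>As a sketch of how such embedding-based ranking can be expressed in Vespa, the rank profile below computes the cosine similarity between a query embedding and a stored document embedding. The field name <i>embedding</i>, the query tensor name <i>q_embedding</i> and the 512-dimensional tensor type are assumptions (matching the Universal Sentence Encoder output size); the query tensor type must also be declared in a query profile, which is omitted here:</p><pre>
# Hypothetical tensor field in the document schema:
# field embedding type tensor(x[512]) { indexing: attribute }

rank-profile semantic inherits default {
    first-phase {
        # dot product of the two embeddings divided by the product
        # of their norms, i.e. cosine similarity
        expression: sum(query(q_embedding) * attribute(embedding)) / sqrt(sum(query(q_embedding) * query(q_embedding)) * sum(attribute(embedding) * attribute(embedding)))
    }
}
</pre>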
<h2>Query classification and query rewriting in e-commerce search</h2><p>Vespa supports deploying <a href="https://docs.vespa.ai/documentation/stateless-model-evaluation.html">stateless machine learned (ML) models</a>, which comes in handy for <a href="https://en.wikipedia.org/wiki/Web_query_classification">query classification</a>. Machine learned models which classify the query are commonly used in e-commerce search solutions, and the recent advances in natural language processing (NLP) using pre-trained deep neural language models have improved the accuracy of text classification models significantly. See e.g. <a href="https://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/">text classification using BERT</a> for an illustrated guide to text classification using <i>BERT</i>. Vespa supports deploying ML models built with <i>TensorFlow, XGBoost</i> and <i>PyTorch</i> through the <i>Open Neural Network Exchange (<a href="https://onnx.ai/">ONNX</a>) format</i>. ML models trained with these tools can successfully be used for various query classification tasks with high accuracy.</p><p>In e-commerce search, classifying the intent of the query or query session can help rank the results by using an intent-specific rank profile tailored to that query intent. The intent classification can also determine how the result page is displayed and organised.</p><p>Consider a category browse intent query like <i>‘shoes for men’</i>: such a query might benefit from a query rewrite which limits the result set to items matching the unambiguous category id, instead of just searching the product description or category fields for <i>‘shoes for men’</i>. Ranking could also change based on the query classification, by using a rank profile which gives higher weight to signals like popularity or price than to text ranking features.</p><p>Vespa also features a powerful <a href="https://docs.vespa.ai/documentation/query-rewriting.html">query rewriting language</a> which supports rule-based query rewrites, synonym expansion and query phrasing.</p><h2>Product recommendation in e-commerce search</h2><p>Vespa is commonly used for recommendation use cases, and e-commerce is no exception.</p><p>Vespa is able to evaluate complex machine learned (ML) models over many data points (documents, products) in user time, which allows the ML model to use real-time signals derived from the current user’s online shopping session (e.g. products browsed, queries performed, time of day) as model features. An offline, batch-oriented inference architecture would not be able to use these important real-time signals. By batch-oriented inference architecture we mean pre-computing the inference offline for a set of users or products and storing the model inference results in a key-value store for online retrieval.</p><p>In our <a href="https://docs.vespa.ai/documentation/tutorials/blog-recommendation.html">blog recommendation tutorial</a> we demonstrate how to apply a collaborative filtering model for content recommendation, and in <a href="https://docs.vespa.ai/documentation/tutorials/blog-recommendation-nn.html">part 2 of the blog recommendation tutorial</a> we show how to use a neural network trained with <i>TensorFlow</i> to serve recommendations in user time. Similar recommendation approaches are used with success in e-commerce.</p><h2>Keeping your e-commerce index up to date with real time updates</h2><p>Vespa is designed for horizontal scaling with high sustainable write and read throughput and low, predictable latency. Updating the product catalog in real time is of critical importance for e-commerce applications, as the real-time information is used in retrieval filters and as ranking signals. The product description or product title rarely changes, but meta information like inventory status, price and popularity are real-time signals which will improve relevance when used in ranking. Having the inventory status reflected in the search index also avoids retrieving content which is out of stock.</p><p>Vespa has true native support for partial updates, where there is no need to re-index the entire document, only the changed fields. Real-time partial updates can be done at scale against <a href="https://docs.vespa.ai/documentation/attributes.html">attribute fields</a>, which are stored and updated in memory. Attribute fields in Vespa can be updated at rates up to about 40-50K updates/s per content node.</p>
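<p>As a sketch, a partial update of price and inventory status through Vespa’s <i>/document/v1</i> HTTP API looks like the request below; the document type <i>product</i>, the namespace, the document id and the field names are illustrative:</p><pre>
PUT /document/v1/mystore/product/docid/B0123456
{
    "fields": {
        "price":     { "assign": 129 },
        "inventory": { "assign": 42 }
    }
}
</pre><p>Only the two attribute fields are updated in memory; the rest of the document, including the indexed text fields, is untouched.</p>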
<h2>Campaigns in e-commerce search</h2><p>Using Vespa’s support for <a href="https://docs.vespa.ai/documentation/predicate-fields.html">predicate fields</a> it’s easy to control when content is surfaced in search results and when it is not. The predicate field type allows the content (e.g. a document) to express whether it should match the query, instead of the other way around. For e-commerce search and recommendation we can use predicate expressions to control how product campaigns are surfaced in search results. Some examples of what predicate fields can be used for:</p><ul><li>Only match and retrieve the document if the time of day is in the range 8–16 or the range 19–20 and the user is a member. This can be used for promoting content to certain users, controlled by the predicate expression stored in the document. The time of day and membership status are passed with the query (see the sketch after this list).</li><li>Represent recurring campaigns with multiple time ranges.</li></ul><p>The above examples are by no means exhaustive, as predicates can be used for many campaign-related use cases where the filtering logic is expressed in the content.</p>
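<p>A sketch of the first example above, with assumed field and attribute names (<i>target</i>, <i>hour</i>, <i>member</i>): the document stores a predicate expression, and the query supplies the attribute values to evaluate it against.</p><pre>
# Hypothetical predicate field in the document schema:
field target type predicate {
    indexing: attribute
    index {
        arity: 8
    }
}

# Predicate expression stored in a campaign document:
# hour in [8..16] or (hour in [19..20] and member in ['true'])

# Query side (YQL): pass the user's context as predicate attributes
select * from sources * where predicate(target, {"member":"true"}, {"hour":13L})
</pre><p>The document only matches when the supplied attributes satisfy its stored expression, so the campaign logic lives in the content itself.</p>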
<h2>Scaling & performance for high availability in e-commerce search</h2><p>Are you worried that your current search installation will break under the traffic surge associated with the holiday shopping season? Are your cloud VMs already running high on disk-busy metrics? What about those long GC pauses in the JVM old generation causing your 95th percentile latency to go through the roof? Needless to say, any downtime due to a slow search backend causing a denial-of-service situation in the middle of the holiday shopping season will have a catastrophic impact on revenue and customer experience.</p><figure class="tmblr-full" data-orig-height="4032" data-orig-width="3024"><img src="https://66.media.tumblr.com/274c1ab4427339e988e98dd8eab105d4/f18c8d627f80772e-3f/s540x810/2a85f7ebb31b3cf5e9e8bf2736e420a631d9dd1c.png" data-orig-height="4032" data-orig-width="3024"/></figure><p><i>Photo by <a href="https://unsplash.com/@jontyson?utm_source=medium&utm_medium=referral">Jon Tyson</a> on <a href="https://unsplash.com/?utm_source=medium&utm_medium=referral">Unsplash</a></i></p><p>The heart of the Vespa serving stack is written in C++ and does not suffer from issues related to long JVM GC pauses. The <a href="https://docs.vespa.ai/documentation/proton.html">indexing and search component in Vespa</a> is significantly different from <a href="https://lucene.apache.org/">Lucene</a>-based engines like <i>Solr/Elasticsearch</i>, which are IO intensive due to the many <i>Lucene</i> segments within an index shard. A query in a <i>Lucene</i>-based engine needs to perform lookups in dictionaries and posting lists across all segments across all <i>shards</i>, and optimising the search access pattern by merging the Lucene segments further increases the IO load during the merge operations.</p><p>With Vespa you don’t need to define the number of shards for your index prior to indexing a single document, as Vespa allows adaptive scaling of the content cluster(s) and there is no <i>shard concept</i> in Vespa. Content nodes can be added and removed as you wish, and Vespa will <a href="https://docs.vespa.ai/documentation/elastic-vespa.html#resizing">re-balance the data in the background</a> without having to re-feed the content from the source of truth.</p><p>In <i>Elasticsearch</i>, changing the number of shards to scale with changes in data volume requires an operator to perform a multi-step procedure that sets the index into read-only mode and splits it into an entirely new index. Vespa is designed to allow cluster resizing while being fully available for reads and writes: Vespa splits, joins and moves parts of the data space to ensure an even distribution, with no intervention needed.</p><p>At the scale we operate Vespa at <i>Verizon Media</i>, requiring more than 2X footprint during content volume expansion or reduction would be prohibitively expensive. Vespa was designed to allow content cluster resizing while serving traffic without noticeable serving impact. Adding or removing content nodes is handled by adjusting the node count in the <a href="https://docs.vespa.ai/documentation/cloudconfig/application-packages.html">application package</a> and re-deploying the application package.</p><p>The shard concept in <i>Elasticsearch</i> and <i>Solr</i> also impacts the search latency incurred by cpu processing in the matching and ranking loops, as the concurrency model in <i>Elasticsearch/Solr</i> is <a href="http://blog.mikemccandless.com/2019/10/concurrent-query-execution-in-apache.html">one thread per search per shard</a>. Vespa, on the other hand, allows a single search to use multiple threads per node, and the number of threads can be controlled at query time by a rank profile setting: <a href="https://docs.vespa.ai/documentation/reference/search-definitions-reference.html#num-threads-per-search">num-threads-per-search</a>. Partitioning the matching and ranking by dividing the document volume between searcher threads reduces the overall latency at the cost of more cpu threads, but makes better use of multi-core cpu architectures. If your search servers’ cpu usage is low and search latency is still high, you now know the reason.</p><p>In a recently published benchmark comparing the performance of Vespa versus Elasticsearch for <a href="https://github.com/jobergum/dense-vector-ranking-performance">dense vector ranking</a>, Vespa was 5x faster than Elasticsearch. The benchmark used 2 shards for Elasticsearch and 2 threads per search in Vespa.</p><p>Holiday season online query traffic can be very spiky, a traffic pattern which is difficult to predict and plan for. For instance, price comparison sites might unexpectedly direct more user traffic to your site at times you did not plan for. Vespa supports graceful degradation of search quality, which comes in handy for those cases where traffic spikes reach levels not anticipated in the capacity planning. These soft degradation features allow the search service to operate within acceptable latency levels, but with less accuracy and coverage, and help avoid a denial-of-service situation where all searches become slow due to overload caused by unexpected traffic spikes. See details in the Vespa <a href="https://docs.vespa.ai/documentation/graceful-degradation.html">graceful degradation documentation</a>.</p><h2>Summary</h2><p>In this post we have explored some of the challenges in e-commerce search and recommendation, and highlighted some of the features of Vespa which can be used to tackle e-commerce search and recommendation use cases. If you want to try Vespa for your e-commerce application, check out our e-commerce sample application <a href="https://docs.vespa.ai/documentation/use-case-shopping.html">here</a>. The sample application can be scaled to full production size using our hosted Vespa Cloud service at <a href="https://cloud.vespa.ai/">https://cloud.vespa.ai/</a>. Happy Holiday Shopping Season!</p>