Apache Storm 2.0 Improvements
<p>By <a href="https://www.linkedin.com/in/kishorkumarpatil/">Kishor Patil</a>, Principal Software Systems Engineer at Verizon Media, and PMC member of Apache Storm & <a href="https://www.linkedin.com/in/bobbyevansbigdata/">Bobby Evans</a>, Apache Member and PMC member of Apache Hadoop, Spark, Storm, and Tez<br/></p><p>We are excited to be part of the <a href="http://storm.apache.org/2019/05/30/storm200-released.html">new release of Apache Storm 2.0.0</a>. The open source community has been working on this major release, Storm 2.0, for quite some time. At Yahoo we have had a long-standing and <a href="http://yahoohadoop.tumblr.com/post/98751512631/the-evolution-of-storm-at-yahoo-and-apache">strong commitment</a> to using and contributing to Storm, a commitment we continue as part of Verizon Media. Together with the Apache community, we’ve added more than 1,000 fixes and improvements to this new release. These improvements range from sending real-time infrastructure alerts to the DevOps folks running Storm, to the ability to augment ingested content with related content, giving users a deeper understanding of any one piece of content.<br/></p><p><b>Performance</b><br/></p><p>Performance and utilization are very important to us, so we developed a <a href="https://yahooeng.tumblr.com/post/135321837876/benchmarking-streaming-computation-engines-at">benchmark</a> to evaluate various stream processing platforms, and the initial results showed Storm to be among the best. We expect to release new numbers by the end of June 2019, but in the interim we ran some smaller Storm-specific tests that we’d like to share.</p><p>Storm 2.0 has a built-in load generation tool under <a href="https://github.com/apache/storm/tree/v2.0.0/examples/storm-loadgen">examples/storm-loadgen</a>. 
It comes with the requisite word count test, which we used here, but it can also capture a statistical representation of the bolts and spouts in a running production topology and replay that load on another topology, or on another version of Storm. For this test, we backported that code to Storm 1.2.2. We then ran the ThroughputVsLatency test on both code bases at various throughputs and with different numbers of workers to see what impact Storm 2.0 would have. These tests were run out of the box, with no tuning of the default parameters except setting max.spout.pending in the topologies to 1000 sentences, which has in the past proven to be a good balance between throughput and latency while providing flow control in the 1.2.2 version, which lacks backpressure.</p><p>In general, for a WordCount topology, we saw 50% - 80% improvements in the latency of processing a full sentence. Moreover, the 99th percentile latency is, in most cases, lower than the mean latency of the 1.2.2 version. We also saw the maximum throughput on the same hardware more than double.</p><figure data-orig-width="1474" data-orig-height="912" class="tmblr-full"><img src="https://66.media.tumblr.com/3f630f757d60a09d5f50c78938262b61/tumblr_inline_pscc4bXpK91wxhpzr_540.png" alt="image" data-orig-width="1474" data-orig-height="912"/></figure><figure data-orig-width="1474" data-orig-height="912" class="tmblr-full"><img src="https://66.media.tumblr.com/5489f1a2c3610b92bc6aa75b652bc4a2/tumblr_inline_pscc4ptVJp1wxhpzr_540.png" alt="image" data-orig-width="1474" data-orig-height="912"/></figure><p>Why did this happen? <a href="https://issues.apache.org/jira/browse/STORM-2306">STORM-2306</a> redesigned the threading model in the workers, replaced Disruptor queues with JCTools queues, added a new true backpressure mechanism, and optimized many code paths to reduce the overhead of the system. The impact on system resources is very promising. 
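</p><p>To make the backpressure change concrete, below is a minimal, self-contained Java sketch of the idea, assuming nothing beyond the standard library. It is not Storm’s implementation (Storm 2.0 uses lock-free JCTools queues and its own backpressure protocol inside the workers); a plain bounded queue stands in for a worker’s receive queue, and every name in it is made up for the example.</p>

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of flow control through a bounded queue: the producer can never run
// more than `capacity` tuples ahead of the consumer, because put() blocks
// when the queue is full. That blocking IS the backpressure.
public class BackpressureSketch {

    /** Pushes `total` tuples through a bounded queue; returns how many arrived. */
    static int run(int total, int capacity) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(capacity);
        AtomicInteger processed = new AtomicInteger();

        Thread consumer = new Thread(() -> {   // plays the role of a bolt
            try {
                for (int i = 0; i < total; i++) {
                    queue.take();              // drain one tuple
                    processed.incrementAndGet();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        for (int i = 0; i < total; i++) {      // plays the role of a spout
            queue.put("sentence-" + i);        // blocks whenever the consumer lags
        }
        consumer.join();
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("processed=" + run(10_000, 100));
    }
}
```

<p>Because the producer physically cannot outrun a slow consumer, nothing is dropped and memory stays bounded without hand-picking a max.spout.pending value, which is one reason the untuned 2.0.0 topologies behaved well in these tests.</p><p>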
Memory usage was essentially unchanged, but CPU usage was a bit more nuanced.<br/></p><figure data-orig-width="1476" data-orig-height="742" class="tmblr-full"><img src="https://66.media.tumblr.com/bcf248c89ef30fead19c2c7026760859/tumblr_inline_pscc6jlcda1wxhpzr_540.png" alt="image" data-orig-width="1476" data-orig-height="742"/></figure><figure data-orig-width="1480" data-orig-height="742" class="tmblr-full"><img src="https://66.media.tumblr.com/fb19471c08cfb2192676aafd0106861c/tumblr_inline_pscc5vLKFQ1wxhpzr_540.png" alt="image" data-orig-width="1480" data-orig-height="742"/></figure><p>At low throughput (fewer than 8,000 sentences per second) the new system uses more CPU than before. This can be tuned, as the system does not yet auto-tune itself. At higher rates the slope of the line is much lower, which means Storm has less overhead than before and can process more data on the same hardware. As a result, we were able to push each of these configurations past 100,000 sentences per second on 2.0.0, over 2x the maximum of 45,000 sentences per second that 1.2.2 could achieve with the same setup. Note that we did nothing to tune these topologies on either setup. With true backpressure and the event tracking feature disabled entirely, a WordCount topology could consistently process over 230,000 sentences per second in a stable way, which equates to over 2 million messages per second being processed on a single node.</p><p><b>Scalability</b><br/></p><p>In 2.0, we have laid the groundwork to make Storm even more scalable. Workers and supervisors can now heartbeat directly to Nimbus instead of going through ZooKeeper, making it possible to run much larger clusters out of the box.<br/></p><p><b>Developer Friendly</b><br/></p><p>Prior to 2.0, Storm was primarily written in Clojure. 
Clojure is a wonderful language with many advantages over pure Java, but its prevalence in Storm became a hindrance for many developers who weren’t familiar with it and didn’t have the time to learn it. Because of this, the community decided to port all of the daemon processes over to pure Java. We still maintain a backward compatible <a href="https://github.com/apache/storm/tree/v2.0.0/storm-clojure">storm-clojure</a> package for those who want to continue using Clojure for topologies.</p><p><b>Split Classpath</b></p><p>In older versions, Storm shipped as a single jar that included code for the daemons as well as the user code. We have now split this up, and storm-client provides everything needed for your topology to run. storm-core can still be used as a dependency for tests that want to run a local mode cluster, but it will pull in more dependencies than you might expect.<br/></p><p>To upgrade your topology to 2.0, you’ll just need to switch your dependency from storm-core-1.2.2 to storm-client-2.0.0 and recompile.</p><p><b>Backward Compatible</b><br/></p><p>Even though Storm 2.0 is API compatible with older versions, upgrading can still be difficult on a hosted multi-tenant cluster, where coordinating an upgrade of the cluster with recompiling all of the topologies can be a massive task. Starting in 2.0.0, Storm has the option to run workers for topologies submitted with an older version against a classpath for a compatible older version of Storm. This important feature, which was developed by our team, allows you to upgrade your cluster to 2.0 while upgrading your topologies later, whenever they’re recompiled to use the newer dependencies.</p><p><b>Generic Resource Aware Scheduling</b></p><p>With the new generic resource aware scheduling strategy, it is now possible to specify generic resources, such as network bandwidth, GPUs, or any other cluster-level resource, along with CPU and memory. 
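</p><p>To illustrate what a scheduler can do with generic resources, here is a small, self-contained Java sketch, assuming nothing beyond the standard library: CPU, memory, GPUs, or network capacity all become named quantities that a component requests and a node either can or cannot supply. This is an illustration of the concept only, not code from Storm’s Resource Aware Scheduler, and the resource names are made up for the example (though Storm does measure CPU in percent of a core and memory in MB).</p>

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the fit/allocate check at the heart of resource aware scheduling,
// with every resource treated uniformly as a named double.
public class GenericResourceSketch {

    /** True if the node has enough free capacity for every requested resource. */
    static boolean fits(Map<String, Double> request, Map<String, Double> nodeFree) {
        for (Map.Entry<String, Double> r : request.entrySet()) {
            if (nodeFree.getOrDefault(r.getKey(), 0.0) < r.getValue()) {
                return false;
            }
        }
        return true;
    }

    /** Subtract a placed component's request from the node's free capacity. */
    static void allocate(Map<String, Double> request, Map<String, Double> nodeFree) {
        request.forEach((k, v) -> nodeFree.merge(k, -v, Double::sum));
    }

    public static void main(String[] args) {
        Map<String, Double> nodeFree = new HashMap<>();
        nodeFree.put("cpu.pcore.percent", 400.0); // four cores' worth of CPU
        nodeFree.put("memory.mb", 8192.0);
        nodeFree.put("gpu", 1.0);                 // a generic, cluster-defined resource

        Map<String, Double> bolt = new HashMap<>();
        bolt.put("cpu.pcore.percent", 100.0);
        bolt.put("memory.mb", 2048.0);
        bolt.put("gpu", 1.0);

        System.out.println(fits(bolt, nodeFree)); // true: everything is available
        allocate(bolt, nodeFree);
        System.out.println(fits(bolt, nodeFree)); // false: the only GPU is taken
    }
}
```

<p>Treating every resource as a named quantity means a new resource type needs no new scheduler logic, only new entries in the capacity and request maps.</p><p>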
This allows topologies to declare such generic resource requirements for their components, resulting in better scheduling and stability.</p><p><b>More To Come</b><br/></p><p>Storm is a secure, enterprise-ready stream processing platform, but there is always room for improvement, which is why we’re adding support for running workers in isolated, locked-down containers, so there is less chance of malicious code using a zero-day exploit in the OS to steal data.<br/></p><p>We are working on redesigning metrics and heartbeats to scale even better and, more importantly, to automatically adjust your topology so it can run optimally on the available hardware. We are also exploring running Storm on other systems to provide a clean base to run not just on Mesos, but also on YARN and Kubernetes.</p><p>If you have any questions or suggestions, please feel free to reach out <a href="mailto:kishorvpatil@apache.org">via email</a>.</p><p><b>P.S.</b> We’re hiring! Explore the Big Data Open Source Distributed System Developer opportunity <a href="https://oath.wd5.myworkdayjobs.com/careers/job/US---Champaign/Low-Latency-Architect_JR0007942">here</a>.</p>