For a while now I have been writing about how to analyze and optimize Hadoop jobs beyond just tweaking MapReduce options. The other day I took a look at some of our Outage Analyzer Hadoop jobs and put those words into action.
A simple analysis of the Outage Analyzer jobs with Compuware APM 5.5 identified three hotspots and two potential Hadoop problems in one of our biggest jobs. It took the responsible developer a couple of hours to fix them, and the result was a 2x improvement overall and a 6x improvement in the Reduce part of the job. Let’s see how we achieved that.
About Outage Analyzer
Outage Analyzer is a free service provided by Compuware which displays, in real time, availability problems with the most popular 3rd party content providers on the Internet. It is available at http://www.outageanalyzer.com. It uses real-time analytical processing technologies for anomaly detection, event correlation and classification. Every day it stores billions of measurements taken from Compuware’s global testing network in Hadoop and runs different MapReduce jobs to analyze the data. I examined the performance of these MapReduce jobs.
Identifying worthwhile jobs to analyze
The first thing I did was look for a worthwhile job to analyze. To do this, I looked at cluster utilization broken down by user and job.
What I found was that John was the biggest user of our cluster. So I looked at the jobs John was running.
The largest job by far was an analytics simulation of a full day of measurement data. This job is run often to test and tune changes to the analytics algorithms. Except for one of the runs, all of them lasted about 6.5 hours in real time and consumed nearly 100% CPU of the cluster during that time. This looked like a worthy candidate for further analysis.
Identifying which Job Phase to focus on
My next step was to look at a breakdown of the job from two angles: consumption and real time. From a real-time perspective, map and reduce took about the same amount of time – roughly 3 hours each. This could also be nicely seen in the resource usage of the job.
The significant drop in Load Average and the smaller drop in the other charts mark the end of the mapping phase and the start of pure reducing. What is immediately obvious is that the reducing phase, while lasting about the same time, does not consume as many resources as the map phase. The Load Average is significantly lower and the CPU utilization drops in several steps before the job is finished.
On the one hand that is because we have a priority scheduler and reducing does not use all slots; more importantly, reducing cannot be parallelized as much as mapping. Every optimization here counts double, because you cannot simply scale the problem away! So while the mapping phase is clearly consuming more resources, the reducing phase is a bottleneck and might therefore benefit even more from optimization.
As we can see, the map time (the time spent in the map function, excluding merging, spilling and combining) is twice as high as the reduce time. The reduce time here represents the time that tasks actually spent in the reduce function, excluding shuffle and merge time (which is represented separately). As such, those two times represent the portions of the job that are directly impacted by the map and reduce code, which is usually custom code – and therefore tunable by our developers!
Analyzing Map and Reduce Performance
So as a next step I used Compuware APM to get a high-level performance breakdown of the job’s mapping and reducing phases, each roughly 3 hours long. A single click gave me this pretty clear picture of the mapping phase:
The hot spot dashboard for the mapping phase shows that we spent the majority of the time (about 70%) in our own code and that it’s about 50% CPU time. This indicates a lot of potential for improvement. Next, I looked at the reducing phase.
This shows that 99% of the reducing time is spent on our own code and not in any Hadoop framework. Since the reduce phase was clearly the winner in terms of potential, I looked at the details of that hot spot – and immediately found three hot spots that were responsible for the majority of the reduce time.
Three simple code issues consume 70% of the reduce time
This is what the method hot spots dashboard for the reduce phase showed.
The top three items in the method hot spots told me everything I needed to know. As it turned out, nearly all the other items listed were sub-hotspots of the topmost method:
- SimpleDateFormat initialization:
The 5 items marked in red are all due to the creation of a SimpleDateFormat object. As most of us find out rather painfully early in our careers as Java developers, SimpleDateFormat is not thread safe and cannot easily be used as a static variable. This is why the developer chose the easiest route and created a new one for every call, leading to about 1.5 billion creations of this object. The initialization of the formatter is actually very expensive and involves resource lookups, locale lookups and time calculations (seen in the separate line items listed here). This item alone consumed about 40% of our reduce time.
Solution: We chose to use the well-known Joda framework (the code replacement was easy) and made the formatter a static final variable, completely removing this big hot spot from the equation. Minimal sketches of this and the two following fixes can be found after this list.
- Regular Expression Matching (line two in the picture)
In order to split the CSV input we were using java.lang.String.split. It is often forgotten that this method uses regular expressions underneath. RegEx is rather CPU intensive and overkill for such a simple job. This was consuming another 15-20% of the allotted CPU time.
Solution: We changed this to a simple string tokenizer.
- Exception throwing (line three in the picture)
This example was especially interesting. While reading input data we parse numeric values, and if a field is not a valid number, java.lang.Long.parseLong throws a NumberFormatException. Our code would catch it, mark the field as invalid and ignore the exception. The fact is that nearly every input record in that feed has an invalid field or an empty field that should contain a number. Throwing this exception billions of times consumed another 10% of our CPU time.
Solution: We changed the code in a way to avoid the exception altogether.
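To illustrate the first fix, here is a minimal sketch assuming Joda-Time is on the classpath. The date pattern below is a placeholder; the actual pattern used by the job is not shown in this post.

```java
// Sketch of the formatter fix: share one immutable, thread-safe Joda-Time formatter
// instead of constructing a SimpleDateFormat on every call.
import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

public class TimestampParser {

    // Before: a new SimpleDateFormat was created per call (about 1.5 billion times),
    // because SimpleDateFormat is not thread safe.
    // After: Joda-Time formatters are immutable and thread safe, so a single
    // static final instance can be shared by all tasks running in the JVM.
    private static final DateTimeFormatter FORMATTER =
            DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss"); // placeholder pattern

    public static long toMillis(String timestamp) {
        DateTime parsed = FORMATTER.parseDateTime(timestamp);
        return parsed.getMillis();
    }
}
```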
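The second fix removed java.lang.String.split, which compiles and evaluates a regular expression on every call. Our actual change used a simple string tokenizer; the sketch below shows one regex-free alternative based on indexOf, which also preserves empty fields. The single-character comma delimiter is an assumption about the CSV format.

```java
// Sketch of a regex-free CSV field splitter. Unlike java.util.StringTokenizer,
// this variant keeps empty fields, which occur frequently in this feed.
import java.util.ArrayList;
import java.util.List;

public class CsvSplitter {

    public static List<String> split(String line, char delimiter) {
        List<String> fields = new ArrayList<>();
        int start = 0;
        int pos;
        while ((pos = line.indexOf(delimiter, start)) >= 0) {
            fields.add(line.substring(start, pos));
            start = pos + 1;
        }
        fields.add(line.substring(start)); // trailing field after the last delimiter
        return fields;
    }
}
```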
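Finally, a sketch of the third fix: parsing a numeric field without using NumberFormatException as control flow. The field is validated with a cheap character scan before Long.parseLong is called; the sentinel value for invalid fields is an assumption, as the real code may mark invalid fields differently.

```java
// Sketch of exception-free numeric parsing for fields that are often empty or invalid.
public class SafeLongParser {

    // Hypothetical marker for invalid fields; the real job may use another convention.
    public static final long INVALID = Long.MIN_VALUE;

    public static long parseOrInvalid(String field) {
        if (field == null || field.isEmpty()) {
            return INVALID;
        }
        int i = (field.charAt(0) == '-') ? 1 : 0;
        if (i == field.length()) {
            return INVALID; // the field is just "-"
        }
        for (; i < field.length(); i++) {
            char c = field.charAt(i);
            if (c < '0' || c > '9') {
                return INVALID; // not a plain decimal number
            }
        }
        // Only well-formed digit strings reach parseLong, so no exception is thrown
        // on bad input (overflow of very long digit strings is not guarded against here).
        return Long.parseLong(field);
    }
}
```

If a library such as Guava is already on the classpath, Longs.tryParse offers similar exception-free parsing out of the box, returning null instead of throwing on invalid input.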
And there we have it – three simple hot spots were consuming about 70% of our reduce CPU time! During analysis of the mapping portion I found the same hot spots again, where they contributed about 20-30% to the CPU time.
I sent this analysis to the developer and we decided to eat our own dog food, fix it and rerun the job to analyze the changes.
Job done in half the time – 6-fold improvement in reduce time!
The result of the modified code exceeded our expectations by quite a bit! The immediate changes saw the job time reduced by 50%. Instead of lasting about 6.5 hours, it was done after 3.5! Even more impressive was that while the mapping time only went down by about 15%, the reducing time was slashed from 3 hours to 30 minutes!
The Cluster Utilization shows a very clear picture. The overall utilization and load average during the mapping phase actually increased a bit, and instead of lasting 3 hours 10 minutes it was done after 2 hours and 40 minutes. While not huge, this is still a 15% improvement.
The reduce phase on the other hand shrank dramatically: from roughly 3 hours to 30 minutes! That means a couple of hours of development work led to an impressive 6-fold performance improvement! We also see that the reduce phase is of course still not utilizing the whole cluster, and it’s actually the 100%-utilization phase that got a lot shorter.
Three simple code fixes resulted in a 100% improvement of our biggest job and a 6-fold speedup of the reduce portion. Suffice it to say that this totally surprised the owners of the job. The job was utilizing 100% of the cluster, which for them meant that from a Hadoop perspective things were running in an optimal fashion. While this is true, it doesn’t mean that the job itself is efficient!
This example shows that optimizing MapReduce jobs beyond tinkering with Hadoop options can lead to a lot more efficiency without adding any more hardware – achieving the same result with fewer resources!
The hot spot analysis also revealed some Hadoop-specific hotspots that led us to change some job options and speed things up even more. More on that in my next blog.