Apache Spark word count – big data analytics with a publicly available data set (part 2)

In a previous post I discussed my findings for developing a simple word count app in Java, and running it against the text of a typical novel, such as Alice in Wonderland.
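
For reference, a minimal Java 8 word count along those lines (a sketch of the general approach, not the exact app from that post) reads the file, splits each line on whitespace, and counts occurrences per word:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class WordCount {
    public static void main(String[] args) throws IOException {
        // Read the file, split each line on whitespace, and count occurrences per word
        try (Stream<String> lines = Files.lines(Paths.get(args[0]))) {
            Map<String, Long> counts = lines
                .flatMap(line -> Arrays.stream(line.split("\\s+")))
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));

            // Print the counts in descending order
            counts.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .forEach(e -> System.out.println(e.getKey() + " : " + e.getValue()));
        }
    }
}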

Here’s a summary of the findings so far:

  • On a 2012 MacBook Pro with a 2.4GHz i7 and a SanDisk SSD, it completed in 100ms
  • On a new 2015 MacBook Pro with a 2.2GHz i7 and the stock SSD, it completed in 82ms
  • On an Ubuntu 16.04 server running under VMware ESXi on my HP DL380 Gen7 rack server, configured with 2 x 2.4GHz vCPUs and 6GB RAM, it completed in 250ms (the slower time can probably be accounted for by slower data access from the WD Black 7200rpm HDDs in my server, which are likely an i/o bottleneck compared to loading the same data file from an SSD)

Now let’s take it up a notch and throw some larger datasets into the mix, using the Yelp dataset. Taking the review.json file exported as a CSV file, I ran the same app on the same Ubuntu server VM on the DL380, performing the same word count against the resulting 2.8GB file. With the same Java word count app this took 434,886ms to complete, or just over 7 minutes. Ok, now we’ve got something large enough to play with as a benchmark:

kev@esxi-ubuntu-mongodb1:~$ java -jar wordcount.jar ./data/yelp/reviewtext.csv

Incidentally, if you’re interested in what the word counts from the Yelp review.json data file look like, here are the first few counts in descending order, before we get to terms you’d expect in Yelp reviews, like ‘good’ and ‘food’:

the : 22169071
and : 18025282
I : 13845506
a : 13416437
to : 12952150
was : 9340074
of : 7811688
 : 7600402
is : 6513076
for : 6207595
in : 5895711
it : 5604281
The : 4702963
that : 4446918
with : 4353165
my : 4188709
but : 3659431
on : 3642918
you : 3570824
have : 3311139
this : 3270666
had : 3015103
they : 3001066
not : 2829568
were : 2814656
are : 2660714
we : 2639049
at : 2624837
so : 2378603
place : 2358191
be : 2276537
good : 2232567
food : 2215236

Now let’s look at rewriting the analysis using Apache Spark. The equivalent code using the Spark API for loading the dataset and performing the word count turned out to look like this (although if you search for Apache Spark word count, there are many different ways you could use the available APIs to implement it):

// Load the data file as an RDD of lines ('spark' is the SparkSession, 'filepath' the input file)
JavaRDD<String> lines = spark.read().textFile(filepath).javaRDD();

// Split each line on whitespace, map each word to a (word, 1) pair, then sum the counts per word
JavaPairRDD<String, Integer> words = lines
    .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
    .mapToPair(word -> new Tuple2<>(word, 1))
    .reduceByKey((x, y) -> x + y);

// Collect the (word, count) results back to the driver
List<Tuple2<String, Integer>> results = words.collect();
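
As a side note, the snippet above assumes an existing SparkSession named spark and a filepath argument; a minimal driver class wrapping it (the structure here is illustrative, not necessarily exactly how my app is laid out) looks something like this:

import org.apache.spark.sql.SparkSession;

public class SparkWordCount {

    public static void main(String[] args) {
        // Path to the data file, passed as the first program argument to spark-submit
        String filepath = args[0];

        // No master is hard-coded here; it's supplied at submit time via --master
        SparkSession spark = SparkSession.builder()
            .appName("SparkWordCount")
            .getOrCreate();

        // ... the RDD word count shown above goes here ...

        spark.stop();
    }
}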

Submitting the job to run on a standalone Spark node (an Ubuntu Server 16.04 VM on ESXi) with 1 core (--master local[1]):

./bin/spark-submit --class "kh.textanalysis.spark.SparkWordCount" \
  --master local[1] ../spark-word-count-0.0.1-SNAPSHOT.jar \
  ../data/yelp/reviewtext.csv

… the job completes in 336,326ms, or approximately 5.6 minutes. At this point this is with minimal understanding of how best to approach or structure an effective Spark job, but we’ve already made an improvement, and this first test is with a single local Spark node and no additional worker nodes in the cluster.

Next, with a standalone local node and 2 cores (--master local[2]):

170,261ms, or 2.8 mins. Now we’re talking.

Let’s try deploying the master node, and then add some worker nodes.
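
For reference, the master itself is started on the first VM with the standalone script that ships with Spark, run from its sbin directory in the same way start-slave.sh is run further below; once it’s up, its log and the web UI on port 8080 show the spark://host:7077 URL that the workers and spark-submit connect to:

./start-master.sh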

17/11/07 18:43:33 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://192.168.1.82:7077...

17/11/07 18:43:33 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master 192.168.1.82:7077

My first guess is that the default 7077 port is not open, so:

sudo ufw allow 7077

Retrying, the job now submits and starts up ok, but gives this error:

17/11/07 18:47:28 INFO TaskSchedulerImpl: Adding task set 0.0 with 22 tasks

17/11/07 18:47:44 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Taking a look at the master node web UI on port 8080:

True, we have the master started but no workers yet, so let’s start up 1 slave node first (another Ubuntu Server 16.04 VM on ESXi):

$ ./start-slave.sh -m 1G spark://192.168.1.82:7077

starting org.apache.spark.deploy.worker.Worker, logging to /home/kev/spark-2.2.0-bin-hadoop2.7/logs/spark-kev-org.apache.spark.deploy.worker.Worker-1-ubuntuspk1.out

Now we’ve got 1 worker up and ready:

Submitting the same job again to the master node, and now there’s a FileNotFoundException because the file we’re attempting to process can’t be found:

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, 192.168.1.89, executor 0): java.io.FileNotFoundException: File file:/home/kev/data/yelp/reviewtext.csv does not exist

It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.

I was curious how the data file would be shared with the slave nodes, and this answers my question: I guess it isn’t (or at least the way I’ve implemented my job so far assumes the data file is local). Clearly I have some more work to do in this area to work out the right approach for sharing the data file between the nodes. In the meantime, I’m just going to copy the same file across each of the worker nodes to get something up and running. I’m guessing a better way would be to mount a shared drive between each of the workers, but I’ll come back to this later.
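
As a rough illustration of the difference (the HDFS URL below is hypothetical), a file:// path is resolved locally on every node, so the file has to exist on the driver and on each worker, whereas pointing the same read at a location all nodes can reach, such as a shared mount or HDFS, avoids the copying:

// file:// paths are resolved on each node, so the file must exist locally on every worker
JavaRDD<String> localCopy = spark.read()
    .textFile("file:///home/kev/data/yelp/reviewtext.csv").javaRDD();

// A location every node can reach avoids the copying (hypothetical HDFS URL shown)
JavaRDD<String> fromHdfs = spark.read()
    .textFile("hdfs://namenode:9000/yelp/reviewtext.csv").javaRDD();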

Data file copied to my worker VMs, and after restarting, we’ve now got the job running on 1 slave with 1 core and 1GB:

Here’s an interesting view of the processing steps in my job:

Taking a quick look at the HP server ILO, the CPUs are barely breaking a sweat and fan speeds are still low:

The Spark dashboard shows completion times, so now we’re at 5.3 mins:

Let’s add an extra vCPU to the first worker node (I need to reconfigure my VM in ESXi and then restart it). The console is showing an additional stopped worker from a second slave VM that I’ll start up for the last test. First, with 2 vCPUs, starting the slave with "-c 2" for 2 cores:

./start-slave.sh -c 2 -m 1G spark://192.168.1.82:7077

166,773ms, or 2.7 minutes. Looking good!

Next, with the additional second slave node also started with 2 vCPUs, we now have 2 worker slave nodes and 4 vCPUs total:

Checking fan speeds during the middle of the run: things are warming up and the fans are running a little faster, but nothing crazy so far:

102,977ms, or 1.7 minutes!

Curious how far we can go, let’s try 2 worker slave nodes with 4 vCPUs each, and bump the available RAM up to 3GB. I reconfigure my VMs, and off we go:

./start-slave.sh -c 4 -m 3G spark://192.168.1.82:7077

81,998ms, or 1.4 minutes!

Still pretty good, although the performance gains from doubling the cores per worker and adding more RAM seem to be leveling off, and we’re no longer seeing the same magnitude of improvement, so at this point it might be more interesting to add additional slave worker nodes. I’ve been creating my ESXi VMs by hand up until now, so maybe this is the time to look into some automation for spinning up multiple copies of the same VM.

Let’s summarize the results, comparing the standalone Java 8 app running on the same HP server with the various Spark configurations tried so far:
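
  • Standalone Java 8 app (single JVM on the DL380 VM): 434,886ms, ~7.2 mins
  • Spark, single local node, --master local[1]: 336,326ms, ~5.6 mins
  • Spark, single local node, --master local[2]: 170,261ms, ~2.8 mins
  • Spark master + 1 worker (1 core, 1GB): ~5.3 mins (from the Spark dashboard)
  • Spark master + 1 worker (2 cores, 1GB): 166,773ms, ~2.7 mins
  • Spark master + 2 workers (2 cores, 1GB each): 102,977ms, ~1.7 mins
  • Spark master + 2 workers (4 cores, 3GB each): 81,998ms, ~1.4 mins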

I’ve got ample resources left on my HP DL380 G7 rack server to run a few more VMs of the same size as those already running, so if I can work out an easy way to template the VMs I have so far, I’ll spin up some additional VMs as worker nodes and see what the fastest processing time is that I can get with the hardware I have. More updates to come later.

Loading the Yelp dataset into MongoDB

In a previous post, I downloaded the Yelp dataset, 5.79GB of json data. My first thought (before I get to experimenting with Apache Spark) was: how can I extract some basic stats from this dataset, such as how many data items there are and what the records look like in each of the data files?

Using mongoimport and referring to the docs here, the syntax for the import is:

mongoimport -d database -c collection importfile.json

Here are the Yelp dataset json files for importing, listed to get an idea of the size of each file:

kev@esxi-ubuntu-mongodb1:~/data/yelp$ ls -lS

total 5657960

-rwxrwxr-x 1 kev kev 3819730722 Oct 14 16:16 review.json

-rwxrwxr-x 1 kev kev 1572537048 Oct 14 16:22 user.json

-rwxrwxr-x 1 kev kev  184892583 Oct 14 16:16 tip.json

-rwxrwxr-x 1 kev kev  132272455 Oct 14 16:03 business.json

-rwxrwxr-x 1 kev kev   60098185 Oct 14 16:03 checkin.json

-rwxrwxr-x 1 kev kev   24195971 Oct 14 16:03 photos.json

 

So importing each of the datasets, one at a time:

kev@esxi-ubuntu-mongodb1:~/data/yelp$ mongoimport -d yelp -c checkin checkin.json

2017-10-14T16:49:35.566-0700 connected to: localhost

2017-10-14T16:49:38.564-0700 [#########...............] yelp.checkin 22.6MB/57.3MB (39.5%)

2017-10-14T16:49:44.474-0700 [########################] yelp.checkin 57.3MB/57.3MB (100.0%)

2017-10-14T16:49:44.475-0700 imported 135148 documents

 

kev@esxi-ubuntu-mongodb1:~/data/yelp$ mongoimport -d yelp -c business business.json

2017-10-14T16:49:59.593-0700 connected to: localhost

2017-10-14T16:50:02.592-0700 [#####...................] yelp.business 27.9MB/126MB (22.1%)

2017-10-14T16:50:12.873-0700 [########################] yelp.business 126MB/126MB (100.0%)

2017-10-14T16:50:12.873-0700 imported 156639 documents

 

kev@esxi-ubuntu-mongodb1:~/data/yelp$ mongoimport -d yelp -c tip tip.json

2017-10-14T16:50:38.061-0700 connected to: localhost

2017-10-14T16:50:41.058-0700 [##......................] yelp.tip 17.5MB/176MB (9.9%)

2017-10-14T16:51:07.381-0700 [########################] yelp.tip 176MB/176MB (100.0%)

2017-10-14T16:51:07.381-0700 imported 1028802 documents

 

kev@esxi-ubuntu-mongodb1:~/data/yelp$ mongoimport -d yelp -c user user.json

2017-10-14T16:51:28.648-0700 connected to: localhost

2017-10-14T16:51:31.648-0700 [........................] yelp.user 36.9MB/1.46GB (2.5%)

2017-10-14T16:54:15.907-0700 [########################] yelp.user 1.46GB/1.46GB (100.0%)

2017-10-14T16:54:15.907-0700 imported 1183362 documents

 

kev@esxi-ubuntu-mongodb1:~/data/yelp$ mongoimport -d yelp -c review review.json

2017-10-14T16:57:01.018-0700 connected to: localhost

2017-10-14T16:57:04.016-0700 [........................] yelp.review 34.9MB/3.56GB (1.0%)

2017-10-14T17:02:31.967-0700 [########################] yelp.review 3.56GB/3.56GB (100.0%)

2017-10-14T17:02:31.967-0700 imported 4736897 documents

 

Done! Almost 6GB of data imported to MongoDB. Now, time for some queries!
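
As a quick first look (a sketch in the mongo shell; the collection names match the imports above), the document counts and the shape of a single record can be checked with something like:

kev@esxi-ubuntu-mongodb1:~$ mongo yelp
> db.review.count()      // should match the 4,736,897 review documents imported above
> db.review.findOne()    // shows what a single review document looks like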