As data volumes grow, the processing time of existing engines grows with them, so new tools keep emerging to tackle the problem: first MapReduce, now Spark, and no doubt something to replace Spark in the near future. But sticking to our title, Spark processes data in-memory while MapReduce pushes the data […]
Framework of an Apache Spark Job Run! The big data analytics community has now started to use Apache Spark in full swing for big data processing. The processing could be for ad-hoc queries, prebuilt queries, graph processing, machine learning, and even data streaming. Hence, understanding Spark job submission is vital for […]
Apache Spark: 4+ years old; suited for sophisticated analytics at lightning speed; runs 100 times faster in memory and 10 times faster on disk; supports in-memory processing; suited for interactive computing at blazing speeds; offers Java, Python & Scala APIs for developers; runs on existing Hadoop clusters; compatible with HDFS, HBase and any […]
Apache Spark is a fast and general engine for big data processing, with libraries for SQL, streaming, and advanced analytics. The RDD, which stands for Resilient Distributed Dataset, is a great abstraction for data sets: an immutable, distributed collection of data. In Spark, all work is expressed as one of the following: creating new RDDs, transforming existing RDDs, or calling operations on RDDs (e.g. val […]
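The three kinds of RDD work mentioned above can be sketched in Scala roughly as follows (a minimal sketch: the local master setting and the sample data are illustrative assumptions, not taken from the original post, and running it requires a Spark installation):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RddSketch {
  def main(args: Array[String]): Unit = {
    // Local SparkContext for illustration; a real job would target a cluster.
    val conf = new SparkConf().setAppName("rdd-sketch").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // 1. Creating a new RDD from an in-memory collection
    val lines = sc.parallelize(Seq("spark is fast", "mapreduce hits disk"))

    // 2. Transforming existing RDDs (lazy; nothing executes yet)
    val words = lines.flatMap(_.split(" "))
    val sparkWords = words.filter(_.contains("spark"))

    // 3. Calling an action, which triggers the actual computation
    println(sparkWords.count())

    sc.stop()
  }
}
```

Note that the transformations in step 2 build up a lineage lazily; only the action in step 3 causes Spark to schedule and run the job.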
Spark SQL is Spark's interface for both structured and semi-structured data. It loads data from a variety of structured sources such as Hive tables, JSON, and the Parquet columnar format. Spark SQL allows you to query data using SQL, both from inside a Spark program and from tools external to the Spark core engine, and it provides robust integration between SQL and Python/Java/Scala code. Spark SQL […]
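Mixing SQL with Scala code as described above might look roughly like this (a hedged sketch: the `SparkSession` builder API, the `people.json` file, and its `name`/`age` fields are assumptions for illustration, not details from the original post):

```scala
import org.apache.spark.sql.SparkSession

object SparkSqlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sql-sketch")
      .master("local[*]")
      .getOrCreate()

    // Load a semi-structured source (the file and its schema are made up)
    val people = spark.read.json("people.json")

    // Integrate SQL with Scala: register a temp view, then query it with SQL
    people.createOrReplaceTempView("people")
    val adults = spark.sql("SELECT name, age FROM people WHERE age >= 18")
    adults.show()

    spark.stop()
  }
}
```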
Spark Streaming is Spark's module for applications that benefit from acting on data as soon as it lands/arrives from various sources, e.g. counting page views in real time, training a machine learning model, or automatically detecting anomalies. Developers can use an API very similar to that of batch jobs, so the same API skills can be reused, and […]
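The "same API as batch" point can be sketched with the classic DStream API (a minimal sketch: the socket source, host/port, batch interval, and log-line filter are illustrative assumptions, not from the original post):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingSketch {
  def main(args: Array[String]): Unit = {
    // local[2]: one thread to receive data, one to process it
    val conf = new SparkConf().setAppName("streaming-sketch").setMaster("local[2]")

    // Micro-batches every 5 seconds
    val ssc = new StreamingContext(conf, Seconds(5))

    // Text stream from a socket; host and port are made-up examples
    val lines = ssc.socketTextStream("localhost", 9999)

    // Same transformation style as batch RDD code (filter, count, ...)
    val pageViews = lines.filter(_.contains("GET /"))
    pageViews.count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

The appeal is that `filter` and `count` here are the same operations a batch RDD job would use, just applied per micro-batch.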
Spark began life in 2009 as a project within the AMPLab at the University of California, Berkeley. More specifically, it was born out of the necessity to prove out the concept of Mesos, which was also created in the AMPLab. Spark was first discussed in the Mesos white paper titled Mesos: A Platform for Fine-Grained Resource […]
The below tips are not written by me (Kumar Chinnakali). They were actually learnt from mammothdata.com, and I felt they could help our big data community, where Apache Spark is currently changing the world of Analytics & Big Data. Mammothdata team, tons of thanks for sharing with us. Spark is written in Scala, so new features […]
Thanks for your time; I definitely try to value yours. In Part 1 we discussed the Apache Spark libraries and Spark components such as the Driver, DAG Scheduler, Task Scheduler, and Worker. Now in Part 2 we will discuss basic Spark concepts: Resilient Distributed Datasets, shared variables, SparkContext, transformations, actions, and the advantages of […]