Google Cloud 101-The Gist of GCP for the Cloud Community Many thanks to the Stockholm Google Cloud OnBoard team, which inspired me to share this. The HenryTheOwl second edition of “Google Cloud 101-The Gist of GCP for the Cloud Community” tries to answer what Google Cloud is all about, why we […]
I/O of the Google BigQuery Execution Dear cloud community friends, this week I would love to share a post titled “I/O of the Google BigQuery Execution”. In it we will discuss the internals of Google BigQuery, and how its execution delivers such strong performance on big data problems. Interested in learning the […]
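Much of BigQuery's performance comes from columnar storage and massively parallel execution (the Dremel model). As a toy illustration in plain Python (a sketch of the storage-layout idea only, not the BigQuery engine), aggregating one field touches far less data in a column-oriented layout than in a row-oriented one:

```python
# Toy model of row-oriented vs column-oriented storage.
# Row store: a scan of one field still visits every whole record.
# Column store: each field is its own contiguous array, so a scan
# of one field reads only that array.

rows = [{"user_id": i, "country": "SE", "bytes": i * 10} for i in range(1000)]

# Row-oriented aggregation: iterate over complete records.
total_row_store = sum(r["bytes"] for r in rows)

# Column-oriented layout: one list per column.
columns = {
    "user_id": [r["user_id"] for r in rows],
    "country": [r["country"] for r in rows],
    "bytes":   [r["bytes"] for r in rows],
}

# Column-oriented aggregation: touch only the "bytes" column.
total_col_store = sum(columns["bytes"])

assert total_row_store == total_col_store
```

The same arithmetic comes out of both layouts; the difference is how many bytes an analytic scan has to read, which is exactly what a columnar engine exploits.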
The Artistic Guide to Big Data: Hadoop/Spark We love community. So, as a first initiative for the new year 2018, we are sharing our coffee-chat ideas on “The Artistic Guide to Big Data: Hadoop/Spark”. The idea is to bring an artistic touch to explaining big data concepts. We are not talking of […]
Apache Spark is a Superstar; but it’s a Supernova on Azure for Big Data Analytics Initiatives Dear Cloud & Data Community, Merry Christmas! In this post I am happy to share with you all the facts about Apache Spark, especially how special it is, and how it goes supernova when spun up as Azure HDInsight. Big Data […]
Big Data Meets Microsoft Azure! For Big Data & Cloud Community members, this post on “Big Data, Meet Azure” is all about doing big things on the public cloud Azure. And sure, we hardly need definitions of Big Data and Cloud Computing, but in a line, I would call both a supernova for […]
How to Ingest into HDFS in JSON Format Using Apache Sqoop? by NS Saravanan The current project uses a lambda architecture, so data from source systems is extracted in two ways: real-time streaming (the speed layer) and batch processing (the batch layer). The speed layer is implemented using Attunity > Kafka > Spark Streaming. The output of the Spark stream […]
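Sqoop itself imports tables as delimited text (or Avro/Parquet/SequenceFile), with no built-in JSON output, so a common pattern — and an assumption here, since the post is truncated — is to import as text and convert the records to JSON Lines before landing them. A minimal sketch of that conversion step, with hypothetical column names standing in for the real source schema:

```python
import json

# Hypothetical column names for the imported table; the real list
# would come from the source system's schema.
COLUMNS = ["id", "name", "amount"]

def sqoop_text_to_json_lines(lines, delimiter=","):
    """Convert Sqoop's delimited-text output rows into JSON Lines records."""
    records = []
    for line in lines:
        values = line.rstrip("\n").split(delimiter)
        # Pair each value with its column name to build one JSON object.
        records.append(json.dumps(dict(zip(COLUMNS, values))))
    return records

sample = ["1,alice,100", "2,bob,250"]
print(sqoop_text_to_json_lines(sample))
```

In practice this step would run as a post-processing job (e.g. a Spark or streaming task) over the Sqoop output directory rather than a local script.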
The top 79 beautiful lines for taking big data architecture from drawing board to production! Dear Data Community, rather than titling this blog “The top 79 beautiful lines for taking big data architecture from drawing board to production”, it would be more fitting to call it a book talk, which is inspired by […]
Getting Started with Google Cloud Platform! Last month I got a chance to attend the Bengaluru Google Cloud OnBoard, an instructor-led enablement event for Google Cloud Platform (Big Data). Big Data on GCP is simply superb; you must try it once. Here I am presenting the prepared Getting Started with Google Cloud Platform artifact for our handy reference. Below are the quick […]
Big Data Stack 2.0 and Beyond! The Google File System (GFS), MapReduce, and Bigtable were Google's, and the data industry's, Big Data revolution, and together they constitute Big Data Stack 1.0. Doug Cutting integrated these published concepts into a tool called Hadoop: GFS + MapReduce + Bigtable > HDFS + MapReduce + HBase, which together […]
What is the best big data solution for working with all your databases from Splunk? The answer is Splunk DB Connect! In this blog we will see how Splunk DB Connect helps us integrate all our databases with Splunk. It […]
What is Beyond Classic Hadoop? Is it Spark and Flink? In this blog we will explore two new big data friends of Hadoop: Apache Spark and Apache Flink. If we look at where Hadoop's parallel-processing MapReduce can improve, speed is the very first focus. However, MapReduce was designed and developed for […]
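MapReduce's batch orientation is easiest to see in its three fixed phases: map, shuffle, and reduce, each of which materialises its full output before the next begins (to disk, in Hadoop), which is exactly the latency that Spark and Flink attack by keeping data in memory. A toy word count in plain Python, phase by phase (a conceptual sketch, not Hadoop code):

```python
from collections import defaultdict

def mapreduce_word_count(documents):
    # Map phase: emit a (word, 1) pair for every word in every document.
    mapped = [(word, 1) for doc in documents for word in doc.split()]

    # Shuffle phase: group all emitted values by key. In Hadoop this
    # intermediate data is written to disk, a major source of latency.
    shuffled = defaultdict(list)
    for word, count in mapped:
        shuffled[word].append(count)

    # Reduce phase: aggregate the grouped values for each key.
    return {word: sum(counts) for word, counts in shuffled.items()}

print(mapreduce_word_count(["big data", "big compute"]))
# {'big': 2, 'data': 1, 'compute': 1}
```

Spark and Flink execute the same logical map/group/reduce pipeline, but chain the stages through memory (and, in Flink's case, as a continuous stream) instead of materialising each phase to disk.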
The 7 Habits Of Successful Big Data and NoSQL Projects by Ben Lorica! Let’s have […]
Go Big on the Cloud with 10 Proven Best Practices Cheers my...