The Bot 101 [ Part 1 ] For me, "bot" was a new word; the first time I saw it, I spelled it "bolt". But thanks to the internet era, the public search engine corrected me like my old school math teacher. Bots are awesome, and a bot is one of my best friends and companions. So designing bots is a new […]
Getting Started with Google Cloud Platform ! Last month I got a chance to attend Bengaluru Google Cloud OnBoard, an instructor-led enablement event for Google Cloud Platform (Big Data). Big Data on GCP is simply superb; try it once. Here I'm presenting the Getting Started with Google Cloud Platform artifact I prepared, for our handy reference. Below are the quick […]
Top 10 Reasons to Run Hadoop in the Public Cloud ! Running the Hadoop ecosystem in the public cloud means running Hadoop clusters on hardware offered by a cloud service provider. This practice is becoming business as usual compared with running Hadoop clusters on our own hardware, called on-premises clusters or "on-prem". But installing […]
Ten Fascinating Things from Google Cloud Next 2017 ! In San Francisco last week at Pier 48, Google Cloud Platform (GCP) executives held a user conference to introduce products and services they hope will make the case for choosing Google in the cloud. I missed it this year, but thanks to the internet, where […]
Cloud computing is the buzz and the hype, and it has become a mandate for all IT and data people. Before we can decide on any cloud model, we need to determine the ideal cloud service model for our business. That decision will help us cut through all the […]
Top 10 Cloud Computing Worst Practices !
High Level Framework of Big Data Graph Databases! In the Big Data world, it was very clear that storing and processing connected data was the first challenge. The first idea was to replace the tabular SQL semantics with a graph-centric model. But the graph is new to the big […]
Comparing Architecture Characteristics in Big Data Context! In this blog we'll explore the differences between microservices and SOA in terms of the defining characteristics of each architecture pattern. In the Big Data world, Apache Hadoop has come a long way in its relatively short lifespan, from its beginnings as a reliable storage pool with integrated batch […]
Requirement: to take a backup of our cluster data for disaster recovery. Approach: we are going to use the Glacier storage service provided by AWS. About Glacier: Glacier is designed to address the shortcomings of traditional archive solutions, like tape and disk archiving, none of which is completely satisfactory. Glacier leverages the […]
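The approach above presumably ends with pushing an archive into a Glacier vault; a minimal sketch with the AWS CLI (the vault name, description, and file names are hypothetical, not from the post):

```shell
# One-time: create a vault to hold the cluster backups ("-" means the caller's own account).
aws glacier create-vault --account-id - --vault-name cluster-backups

# Bundle the data exported from the cluster and upload it as a single archive.
tar czf hdfs-backup.tar.gz ./cluster-export/
aws glacier upload-archive --account-id - --vault-name cluster-backups \
    --archive-description "cluster DR backup" --body hdfs-backup.tar.gz
```

Note that retrieval from Glacier is asynchronous (an archive-retrieval job that completes hours later), which is the trade-off for the low storage cost.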
Intra Cluster copying using DISTCP
Step 1: Get the NameNode information for both clusters using the command below:
hdfs getconf -namenodes
Step 2: Verify accessibility to HDFS on both clusters using the commands below:
hdfs dfs -ls hdfs://Namenode1:8020/data/file.txt
hdfs dfs -ls hdfs://Namenode2:8020/data/
Once successful, move to Step 3 […]
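The copy itself (the step truncated above) is normally a single hadoop distcp invocation between the two NameNode URIs verified in Step 2; a minimal sketch, assuming the same hostnames and that /data is the path to copy:

```shell
# Copy /data from cluster 1 to cluster 2; distcp runs as a MapReduce job,
# so submit it on a node that can reach both NameNodes.
hadoop distcp hdfs://Namenode1:8020/data hdfs://Namenode2:8020/data

# On re-runs, -update skips files already present with matching size/checksum,
# and -p preserves permissions and timestamps.
hadoop distcp -update -p hdfs://Namenode1:8020/data hdfs://Namenode2:8020/data
```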
Here let us see what kinds of data organizations want to ingest into Hadoop for their business or analytics insights. Basically, large volumes of data and unstructured data are strong candidates for Hadoop. Clickstream data: clickstream data is the stream of clicks someone performs when visiting a website. This information can be used for […]
Big Data Meets Microsoft Azure ! For Big Data & Cloud...
How to Ingest into HDFS in JSON Format using Apache Sqoop ?...
The 4 Key Concepts in the Anatomy of an Apache Spark Job!...
The 1-2-3-4-5-6-7-8-9 of Cognitive Computing ! Dear Data...