Blog 4 – OOP to Functional Programming We have already been using functional programming in Scala throughout the previous blogs of Self-Learn Yourself Scala in 21 Blogs (#1, #2, #3). Let's start by defining functional programming: a programming paradigm that models computation as the evaluation of expressions, where expressions are built from functions […]
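To make that definition concrete, here is a minimal pure-Scala sketch; the names `double`, `inc`, and `pipeline` are illustrative, not from the series:

```scala
// Computation as the evaluation of expressions built from functions.
object FpSketch extends App {
  val double: Int => Int = _ * 2
  val inc:    Int => Int = _ + 1

  // An expression built by composing functions, with no side effects
  val pipeline: Int => Int = double andThen inc

  println(pipeline(10))   // 21: (10 * 2) + 1
}
```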
Many thanks for the 800+ views, 100+ likes, and 15+ comments on (Big) Data in Data Lake vs. Data Warehouse. All the comments and suggestions are the motivation behind the 2nd version of the Data Lake blog. To name a few: Ricky Barron, Winston Sucher, Vinay, Ben Sharma, and Sanjay Pande. The Data Lake architecture, four functions […]
Blog 3 – Functional Programming & Data Structures in Scala Functional programming and functional data structures are very interesting and powerful. Scala supports both kinds of data structures: immutable and mutable. Let us also discuss two more vital concepts: type parameterization and higher-order functions. Type parameterization is similar to Java generics; it […]
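The three ideas above can be sketched in a few lines of Scala; the names `applyTwice`, `xs`, and `ys` are illustrative:

```scala
object TypesAndHofSketch extends App {
  // Type parameterization, similar to Java generics: works for any type A
  def applyTwice[A](f: A => A, a: A): A = f(f(a))

  // Higher-order function in use: applyTwice takes a function as an argument
  println(applyTwice[Int](_ + 3, 1))   // 7

  // Immutable data structure: map returns a new List; xs is unchanged
  val xs = List(1, 2, 3)
  val ys = xs.map(_ * 10)
  println(xs)   // List(1, 2, 3)
  println(ys)   // List(10, 20, 30)
}
```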
8 Breaking Changes in Apache Flink 1.0.0! Apache Flink is an open-source platform for distributed stream and batch data processing. Flink's core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams. Flink also builds batch processing on top of the streaming engine, overlaying […]
Looking for College Projects?
We should be excited: the Apache Hive community has announced the availability of Apache Hive 2.0.0, its largest release yet. It brings great improvements across new functionality, performance, optimizations, security, and usability. Let us explore the features in detail below. HBase to store Hive metadata – the current metastore […]
Blog 2 – Let's Get Started with Scala Just type scala in your environment to launch the Scala interpreter; if everything is fine you will be prompted with scala>. If you have problems with the installation, please follow the link, which has a step-by-step explanation. So we are good to explore Scala commands. Now […]
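Once the scala> prompt appears, you can evaluate expressions directly and the interpreter prints each result. A short sample session (the outputs are shown as comments; the values here are illustrative):

```scala
// Expressions you might type at the scala> prompt:
val sum = 1 + 2                      // sum: Int = 3
val greeting = "Hello, " + "Scala"   // greeting: String = Hello, Scala
println(greeting)                    // prints Hello, Scala
```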
Self-Learn Yourself Scala in 21 Blogs – #1 Blog 1 – Scala the Basics Thanks to communities like LinkedIn, Hadoop, Spark, Apache Software, Yahoo, and more… from dataottam. As a new learning and sharing initiative, we, the dataottam team, have launched "Self-Learn Yourself Scala in 21 Blogs". Scala is where object-oriented meets functional, to have the best […]
Self-Learn Yourself Apache Spark in 21 Blogs – #7 Key Concepts of Resilient Distributed Datasets (RDDs) and more… In this blog we will see how to create RDDs and what operations we can perform on them. Have a quick read of the other blogs in this learning series. In simple terms, an RDD (Resilient Distributed Dataset): if data in […]
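As a preview, here is a minimal sketch of creating an RDD and operating on it, assuming a local Spark installation; the object name, app name, and master setting are illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RddPreview extends App {
  // Local SparkContext for experimentation (illustrative settings)
  val sc = new SparkContext(
    new SparkConf().setAppName("RddPreview").setMaster("local[*]"))

  // Create an RDD from an in-memory collection...
  val numbers = sc.parallelize(Seq(1, 2, 3, 4))
  // ...or from external storage, e.g. sc.textFile("hdfs://...")

  // Operate on it: a transformation (map) followed by an action (reduce)
  println(numbers.map(_ * 2).reduce(_ + _))   // 20

  sc.stop()
}
```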
Celebrate the Big Data Problems – #4 What are the possible ways of command-level searching in Linux? The dataottam team has come up with a blog-sharing initiative called "Celebrate the Big Data Problems". In this series of blogs we will share our big data problems using the CPS (Context, Problem, Solutions) framework. Context: Search in […]
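For context, a few common command-level search approaches on Linux; the demo directory and file names below are made up for illustration:

```shell
# Set up a throwaway directory to search in (illustrative paths)
mkdir -p /tmp/search-demo
echo "error: disk full" > /tmp/search-demo/app.log

grep -rn "error" /tmp/search-demo     # search file contents, recursively
find /tmp/search-demo -name "*.log"   # search by file name
ls /tmp/search-demo | grep "app"      # filter a directory listing
```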
Data Lake Architecture Considerations & Composition In our last blog we saw the key benefits of a Data Lake; now let's dive into its internals by discussing the key architecture considerations and composition. Architecture Considerations: As with any solution, it is practically difficult to arrive at a one-size-fits-all architecture; hence […]
What Are RDDs, Actions, and Transformations? In Blog 6 we will see the RDD, and RDD inputs, with hands-on examples. Click to have a quick read of the other blogs in this learning series. Hey, my dear friends. Before diving deeper, let's have a look at who the Spark core maintainers are […]
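Before the deep dive, the core distinction can be sketched in plain Scala: like Spark transformations (map, filter), operations on a lazy view do no work until an "action" (here, sum) forces evaluation. This is an analogy using Scala collections, not Spark itself:

```scala
object LazyVsEager extends App {
  var evaluations = 0

  // "Transformation": lazily describes the computation; nothing runs yet
  val squared = (1 to 5).view.map { n => evaluations += 1; n * n }
  println(evaluations)   // 0: the map function has not executed

  // "Action": forces the whole pipeline to run
  val total = squared.sum
  println(total)         // 55 = 1 + 4 + 9 + 16 + 25
  println(evaluations)   // 5: each element was evaluated once
}
```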
Big Data Meets Microsoft Azure! For Big Data & Cloud...
How to Ingest into HDFS in JSON Format Using Apache Sqoop?...
The 4 Key Concepts in the Anatomy of an Apache Spark Job!...
The 1-2-3-4-5-6-7-8-9 of Cognitive Computing! Dear Data...