Datum Engineering !

An engineered artwork to make decisions..

Archive for the ‘R’ Category

Big Data Analytics: From Ugly Duckling to Beautiful Swan

Posted by datumengineering on January 31, 2016

Recently, I came across an interesting book on statistics that retells the Ugly Duckling story and relates it to today's DATA, or rather BIG DATA ANALYTICS, world. The story is originally by the famous storyteller Hans Christian Andersen.

The story goes like this…

The duckling was a big ugly grey bird, so ugly that even a dog would not bite him. The poor duckling was ridiculed, ostracized and pecked by the other ducks. Eventually, it became too much for him and he flew to the swans, the royal birds, hoping that they would end his misery by killing him because he was so ugly. As he stared into the water, though, he saw not an ugly grey bird but a beautiful swan.

Data are much the same. Sometimes they’re just big, grey and ugly and don’t do any of the things that they’re supposed to do. When we get data like these, we swear at them, curse them, peck them and hope that they’ll fly away and be killed by the swans.

Alternatively, we can try to force our data into becoming beautiful swans.

Let me relate the above narration to a data analysis solution in two ways:

1. Build a process that exposes the data's potential to become a beautiful swan.

2. Every dataset needs a set of assumptions and hypotheses to be tested before it dies as an ugly duckling.

The process of exposing the potential of data is vast, spanning data sourcing, wrangling and cleansing through Exploratory Data Analysis (EDA) and further detailed analysis. These steps should be an integral part of any data product. Although these processes have existed for years in most data analysis systems and projects, in recent years they have been extended and integrated with external datasets. This external data builds an ecosystem (a support system) around your data to prove its value. For example, you may want to expose your customer data to a level where it not only shows a 360-degree view but also starts revealing customer patterns and responses in the context of external systems. Location plays an important role in this whole process: spatial mapping, where customers are joined with their surroundings. Various tools can help you achieve this spatial mapping, from Java GIS libraries to R-spatial libraries; read the Spatial Analysis in R post on the Domino Data Lab blog.
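To make that first step concrete, here is a minimal sketch of such a spatial join in R using the sf package (one of the R-spatial libraries mentioned above). The file names and column names (a customers.csv with lon/lat columns, a regions.shp polygon layer) are purely illustrative assumptions, not part of the original post.

library(sf)

# Hypothetical inputs: a customer table with coordinates and a polygon layer
# describing the surroundings (regions, census areas, catchments, ...)
customers <- read.csv("customers.csv")        # id, lon, lat, revenue, ...
regions   <- st_read("regions.shp")           # polygons carrying external attributes

# Turn the customer table into a point layer (WGS84 longitude/latitude)
cust_pts <- st_as_sf(customers, coords = c("lon", "lat"), crs = 4326)

# Align coordinate systems and attach each customer's surrounding region
cust_pts      <- st_transform(cust_pts, st_crs(regions))
cust_enriched <- st_join(cust_pts, regions, join = st_within)

head(cust_enriched)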

Once you set the mapping right with external datasets, there are various tools available for wrangling. Eventually, you cleanse the data and do EDA on this broader dataset, and the customer view widens to a much broader spectrum of external data: geo-location, economy, GPS and sensor readings, and so on. With this, you can start analysing customers by segments that you never captured within your own systems.
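Continuing the sketch above, a few lines of dplyr and ggplot2 are enough for a first EDA on the enriched view; the region_name and revenue columns are again hypothetical stand-ins.

library(sf)
library(dplyr)
library(ggplot2)

# Summarise customers by a segment that only exists thanks to the external data
segment_summary <- cust_enriched %>%
  st_drop_geometry() %>%                 # back to a plain data frame
  group_by(region_name) %>%
  summarise(customers   = n(),
            avg_revenue = mean(revenue, na.rm = TRUE))

segment_summary

# Quick visual comparison of the new segments
ggplot(segment_summary, aes(x = region_name, y = avg_revenue)) +
  geom_col() +
  coord_flip()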

 

This is not limited to spatial mapping and analysis: many more external data elements can help your data-building process, extending it to a much broader range of variables for analysis. With effective (rather, smart) use of these data linkages, you can start converting any ugly duckling into a meaningful swan.

Let us look at the second part of the solution: building assumptions and hypotheses. Given any data duckling, you should start by assessing how much of an ugly duckling of a dataset you have, and then discover how to turn it into a swan. This is more a statistical solution to the conversion (proving and probing) of the duckling than the engineering solution explained earlier. When assumptions are broken, we stop being able to draw accurate conclusions about reality. Different statistical models assume different things, and if these models are going to reflect reality accurately, then these assumptions need to be true. This is a step-by-step process built around parametric tests, i.e. tests that require data from one of the large catalogue of distributions that statisticians have described. The assumptions that can be tested are listed below (a short R sketch of the checks follows the list):

  • Normally distributed data: The rationale behind hypothesis testing relies on having something that is normally distributed (in some cases it is the sampling distribution, in others the errors in the model).
  • Homogeneity of variance: This assumption means that the variances should be the same throughout the data. In designs in which you test several groups of participants, this assumption means that each of these samples comes from populations with the same variance. In correlational designs, it means that the variance of one variable should be stable at all levels of the other variable.
  • Interval data: Data should be measured at least at the interval level. This assumption is tested by common sense.
  • Independence: This assumption, like that of normality, is different depending on the test you are using. In some cases it means that data from different participants are independent, i.e. the behavior of one participant does not influence the behavior of another.
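Here is a minimal sketch of checking the first two assumptions in R, using base R and the car package; the data frame df, its numeric column score and its grouping column segment are hypothetical stand-ins for your own data.

library(car)

# Normality: Shapiro-Wilk test on the variable (or on the model's residuals)
shapiro.test(df$score)

# Homogeneity of variance: Levene's test of score across the groups in segment
leveneTest(score ~ as.factor(segment), data = df)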

Just as there is vast tool support for data collection, there are various tools that can help you test these assumptions not only numerically but visually too, e.g. ggplot2, pastecs and psych.
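For example, a few lines with these packages give both the numbers and the picture; df and score are the same hypothetical data as above.

library(pastecs)
library(psych)
library(ggplot2)

# Numeric summaries, including skewness, kurtosis and a normality test
stat.desc(df$score, basic = FALSE, norm = TRUE)
describe(df$score)

# Visual check: histogram of the variable with a normal curve overlaid
ggplot(df, aes(x = score)) +
  geom_histogram(aes(y = after_stat(density)), bins = 30,
                 colour = "black", fill = "white") +
  stat_function(fun = dnorm,
                args = list(mean = mean(df$score, na.rm = TRUE),
                            sd   = sd(df$score, na.rm = TRUE)))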

So, jump straight into the data with either of these approaches (or both), and for sure you can take any duckling on the journey of becoming a beautiful swan. That is actually the start of the science, eventually developing into a process of learning. And then build a process that learns by itself, so that whenever a new bird comes along it predicts whether it will become a swan or remain a duckling forever 🙂

 

 

 


Posted in Big Data, Data Analysis, Data Science, R, Statistical Model

(R + Python)

Posted by datumengineering on February 8, 2014

Both R & Python should be measured based on their effectiveness in advanced analytics & data science. Initially, as a new comer in data science field we spend good amount of time to understand the pros and cons of these two. I too carried out this study solely for “self” to decide which tool should i pick to get in depth of data science. Eventually, i have started realizing that both (R & Python) has its space of mastery along with their broad support to data science. Here some understanding on “when-to-use-what”

  • R is very rich once you get into descriptive statistics, inference and statistical modeling, and start plotting your data as bar charts, pie charts and histograms, and when your data is already well shaped and easily consumable for statistical modeling using vectors, matrices etc.
  • First-time learners who have some knowledge of statistics can start exploring graphs and visualization of their data using R, spotting trends, identifying correlations etc. I observed that you don't need to start practicing R as a separate programming language; you can very well start sculling your boat into the depths of statistics while keeping R in the other hand.
  • R plays a vital role for analysts who love to see the data distribution before drawing conclusions. It also helps analysts visualize outliers and the data density of a given dataset.
  • As you get more into probabilistic problems and probability distributions, R eases data manipulation using vectors and matrices. The same applies to linear regression problems.
  • With R's support for statistics-rich problems, you don't need to get into the complexities of Python, OOP and data-type nitty-gritty (a small R sketch of this workflow follows the list).
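A minimal sketch of that R-first workflow, using the built-in mtcars dataset as a stand-in for your own data:

data(mtcars)

summary(mtcars$mpg)            # descriptive statistics
hist(mtcars$mpg)               # distribution at a glance
boxplot(mtcars$mpg)            # outliers and spread
cor(mtcars$mpg, mtcars$wt)     # correlation between two variables

fit <- lm(mpg ~ wt, data = mtcars)   # simple linear regression
summary(fit)

plot(mtcars$wt, mtcars$mpg)          # scatter plot with the fitted line
abline(fit)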

Now, when you start getting into the space of predictive modeling, machine learning and mathematical modeling, Python can lend an easy hand. Mathematical functions and algorithmic problems find good support in Python libraries for k-means & hierarchical clustering, multivariate regression, SVM etc. Not limited to this, it also has good support from data processing & data munging libraries like Pandas and NumPy. Here are my two cents on Python:

  • We know! Python is a full-fledged "scripting language", and that statement says everything. Most importantly, over the years Python has developed an ecosystem for end-to-end analytics.
  • You are no longer confined to data processing and formalization; you can easily play around with data sourcing and data parsing too using its programming model. This opens up opportunities to analyze semi-structured data (JSON, XML) easily.
  • With Python you have every liberty to start consuming data from unstructured sources too. Hadoop Streaming support extends the possibilities of using Python on unstructured data stored in HDFS, and on HBase for graph & network data processing.
  • With rich libraries like scikit-learn you can do text mining, vectorize text data and identify similarities between posts and texts.
  • With an object-oriented language in hand, your program will be far more structured and modular for complex mathematical calculations in comparison to R. I would rather call it easier to read.
  • There is a lot of ready-to-serve material in support of machine learning and predictive modeling using Python. Read these two in combination: Machine Learning with Python + Building ML with Python.

So, in summary, we can bet on R when getting into statistical analysis, and then eventually turn to Python to take the problem to a predictive end.

This write-up is not meant to highlight R's or Python's limitations. R has evolved good support for ML and does combine with Hadoop as RADOOP. Likewise, Python also has good support for statistics and a rich library (matplotlib) for visualization. But, as I mentioned earlier in this write-up, the points above are based solely on ease of use while you learn data science. I suppose that, once matured, we can develop expertise in either of them as the job role demands.

Posted in Data Analysis, Python, R, Statistical Model