[ ANNOUNCEMENT ]
We hope this note finds you well, and we apologize for the brief interruption in our newsletter. Over the past few weeks we have been running A/B tests and migrating the newsletter onto TAO.ai, our AI-led coach. This issue and future ones will use TAO's capabilities. As with any AI, it needs some training, so please excuse (and report) any rough edges.
- Team TAO/AnalyticsCLUB
[ COVER OF THE WEEK ]
Weak data Source
[ LOCAL EVENTS & SESSIONS ]
More WEB events? Click Here
[ AnalyticsWeek BYTES ]
>> Collaborative Analytics: Analytics for your BigData by v1shal
>> Colleges are using big data to identify when students are likely to flame out by analyticsweekpick
>> Rise of Data Capital by Paul Sonderegger by thebiganalytics
Wanna write? Click Here
[ NEWS BYTES ]
>> Strategy Analytics: Android accounts for 88% of smartphones shipped in Q3 2016 – GSMArena.com Under Analytics
>> Did you know we’re sedentary but less obese than average? So says Miami statistics website – Miami Herald Under Statistics
>> MHS grad sinks Steel Roots in cyber security – News – North of … – Wicked Local North of Boston Under cyber security
More NEWS ? Click Here
[ FEATURED COURSE ]
Statistical Thinking and Data Analysis
[ FEATURED READ ]
The Signal and the Noise: Why So Many Predictions Fail--but Some Don't
[ TIPS & TRICKS OF THE WEEK ]
Grow at the speed of collaboration
Research by Cornerstone OnDemand points to the need for better collaboration within the workforce, and the data analytics domain is no different. In an industry changing and growing as rapidly as data analytics, an isolated workforce struggles to keep up. A good collaborative work environment facilitates a better flow of ideas, improved team dynamics, rapid learning, and a greater ability to cut through the noise. So, embrace collaborative team dynamics.
[ DATA SCIENCE JOB Q&A ]
Q: What is cross-validation? How do you do it right?
A: Cross-validation is a model validation technique for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used when the goal is prediction and one wants to estimate how accurately a model will perform in practice. The idea is to set aside part of the data as a validation set during the training phase, in order to limit problems like overfitting and to get insight into how the model will generalize to an independent data set. Common examples: leave-one-out cross-validation (LOOCV) and k-fold cross-validation.
How to do it right?
>> The training and validation sets must be drawn from the same population. When predicting stock prices, a model trained on a certain 5-year period cannot realistically treat the subsequent 5 years as a draw from the same population.
>> Every step of model selection must sit inside the cross-validation loop. A common mistake is tuning hyperparameters, for instance the kernel parameters of an SVM, outside of it.
Bias-variance trade-off for k-fold cross-validation:
>> LOOCV gives approximately unbiased estimates of the test error, since each training set contains almost the entire data set (n-1 observations). But we then average the outputs of n fitted models, each trained on an almost identical set of observations, so the outputs are highly correlated. Since the variance of a mean of quantities increases when the correlation of those quantities increases, the test-error estimate from LOOCV has higher variance than the one obtained with k-fold cross-validation.
>> Typically we choose k=5 or k=10, as these values have been shown empirically to yield test-error estimates that suffer neither from excessively high bias nor from high variance.
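The k-fold procedure described above can be sketched in plain Python. This is an illustrative example, not from the newsletter: the noisy linear dataset, the least-squares model, and the function names are all assumptions chosen to keep the sketch self-contained.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_mse(xs, ys, k=5):
    """Estimate the test MSE of a least-squares line via k-fold CV.

    Each fold is held out in turn; the model is fit on the remaining
    folds only, so no validation point leaks into training.
    """
    folds = k_fold_indices(len(xs), k)
    errors = []
    for held_out in folds:
        held = set(held_out)
        train = [i for i in range(len(xs)) if i not in held]
        # Fit y = a*x + b by ordinary least squares on the training folds.
        n = len(train)
        mx = sum(xs[i] for i in train) / n
        my = sum(ys[i] for i in train) / n
        sxx = sum((xs[i] - mx) ** 2 for i in train)
        sxy = sum((xs[i] - mx) * (ys[i] - my) for i in train)
        a = sxy / sxx
        b = my - a * mx
        # Validation error on the held-out fold only.
        fold_mse = sum((ys[i] - (a * xs[i] + b)) ** 2 for i in held_out) / len(held_out)
        errors.append(fold_mse)
    return sum(errors) / k  # average test-error estimate over the k folds

# Noisy linear data: y ≈ 2x + 1 with Gaussian noise.
rng = random.Random(42)
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 + rng.gauss(0, 0.1) for x in xs]
print(cross_val_mse(xs, ys, k=5))
```

Note that the shuffle before splitting matters: with time-series data such as the stock-price example above, you would instead split chronologically, because a random shuffle would let the model train on the future.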
[ ENGAGE WITH CLUB ]
ASK Club FIND Project
Get HIRED #GetTAO Coach