Data Science, Northwestern University MSPA

Python Tops KDnuggets 2017 Data Science Software Poll

The results of KDnuggets’ 18th annual Software Poll should be fascinating reading for anyone involved in data science and analytics.  Some highlights – Python (52.6%) finally overtook R (52.1%), SQL remained at about 35%, and Spark and TensorFlow have both increased to above 20%.

[Figure: KDnuggets 2017 Software Poll results]

(Graph taken from http://www.kdnuggets.com/2017/05/poll-analytics-data-science-machine-learning-software-leaders.html/2)

I am about halfway through Northwestern University’s Master of Science in Predictive Analytics (MSPA) program.  I am very thankful that the program has made learning different languages a priority.  I have already learned Python, Jupyter Notebooks, R, SQL, some NoSQL (MongoDB), and SAS.  In my current class in Generalized Linear Models, I have also started to learn Angoss, SAS Enterprise Miner, and Microsoft Azure Machine Learning.  However, it looks like you can never stop learning new things – I am going to have to learn Spark and TensorFlow, to name just two more.

I highly recommend you read this article.


Data Science, Jupyter Notebook, JupyterLab

JupyterLab – Exciting Improvement on Jupyter Notebooks

At SciPy 2016, Brian Granger and Jason Grout presented JupyterLab, now in a pre-alpha release.  This was the most exciting and monumental news of the conference for me.  A blog post about JupyterLab from Fernando Perez can be viewed here, the link to the YouTube video of the presentation is available here, and the video is also embedded below.

The blog post discusses some of today’s Jupyter Notebook functionality, most of which I have not used.  Beyond the notebooks themselves, this includes “a file manager, a text editor, a terminal emulator, a monitor for running Jupyter processes, an IPython cluster manager, and a pager to display help”.   The new functionality allows you to “arrange a notebook next to a graphical console, atop a terminal that is monitoring the system, while keeping the file manager on the left”.  Users of RStudio will be happy to see this.  (I wonder whether they will also create a package manager like RStudio’s.)

Here are a few screenshots of what it looks like.

[Screenshots of the JupyterLab interface]

You can download this now, and help “test and refine the system”.  Instructions to do this are here.

Data Science, Data Visualization, Jupyter Notebook

Jupyter Notebook, matplotlib figure display options, and pandas.set_option() optimization tips.

I prefer to do my coding in a Jupyter Notebook, as my previous posts have mentioned.  However, I have not run across any good documentation on how to optimize the notebook, for either a Python or R kernel.  I am going to mention a few helpful hints I have found.  Here is the link to the Project Jupyter site.

First, a basic comment on how to create a notebook where you want it.   You need to navigate to the directory where you want the notebook to be created.  I use the Windows PowerShell command-line shell.  When you open it up, you are at your home directory.  Use the “dir” command to see what is in that directory, and then use the “cd” (change directory) command to navigate to the directory you want to end up in.  If the path contains spaces, enclose it in quotes.  If you need to create a new directory, use the “md” or “mkdir” command.  For example, my long path is “….\Jupyter Notebooks\Python Notebooks”, and while at SciPy 2016 I created a new folder, giving the directory “….\Jupyter Notebooks\Python Notebooks\SciPy16” – to which I added a folder for each tutorial I attended.

Once you are in the final directory, type “jupyter notebook”, and the Jupyter server will start and open the “Home” page in your browser.  If your notebook already exists, you can select it there.  If it doesn’t yet exist, select “New” in the upper right, pick your notebook type (for me, R or Python 3), and it will launch a new notebook.  (The notebook shown below is from a pandas tutorial I attended at SciPy 2016, “Analyzing and Manipulating Data with Pandas” by Jonathan Rocher – an excellent presentation if you want to watch the video.)
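Putting those steps together, a PowerShell session might look like this (just a sketch – the path is the abbreviated one from my setup, so substitute your own):

cd "….\Jupyter Notebooks\Python Notebooks"   # navigate to where the notebooks live
mkdir SciPy16                                # create a folder for the conference
cd SciPy16                                   # move into the new folder
jupyter notebook                             # launch Jupyter from this directory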

[Screenshot: the Jupyter “Home” page listing the pandas_tutorial notebook]

Once you click on the “pandas_tutorial”, this Jupyter notebook will open up.

[Screenshot: the pandas_tutorial notebook open in the browser]

A nice feature is that if you clone a GitHub repository into that folder and start a new Jupyter Notebook there, all the files that go with that repository are immediately available for use.

Importing data in a Jupyter Notebook.

If you are tired of hunting down the path to a data set, there is an easy way to get it into the directory of the Jupyter notebook.  Go to the “Home” page and select “Upload”, which opens the file-upload dialog.  Navigate to where you stored the data set on your computer, select it, and it will be loaded onto the Home page.  You can then easily load it into the Jupyter notebook associated with that directory.
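Once the file shows up on the Home page, it sits in the notebook’s working directory, so you can read it with a relative path.  A minimal sketch, assuming a hypothetical file named my_data.csv:

import pandas as pd

df = pd.read_csv("my_data.csv")  # the uploaded file sits next to the notebook
df.head()                        # quick check of the first few rows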

[Screenshot: the “Home” page with the uploaded data file]

Matplotlib figure display options.

If you don’t specify how figures should be displayed in the Jupyter notebook, then when you create a figure using matplotlib, a separate window will open to display the graph.  This window is nice because it is interactive: you can zoom in on the graph, save it, put labels in, etc.  There is a way to get that same interactivity inside the Jupyter notebook.

The first option I learned about was:

%matplotlib inline

This would display the graph in the notebook, but it was no longer interactive.

However, if you use:

%matplotlib notebook

The figures will now show up in the notebook, and still be interactive.  I learned this during the pandas tutorial at SciPy 2016.
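As a minimal sketch of the difference – the plotting code is identical, and only the magic at the top determines whether the figure is static or interactive:

%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 200)
plt.plot(x, np.sin(x))  # rendered inside the notebook
plt.xlabel("x")         # pan/zoom controls remain available
plt.show()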

You can also set your figure size by:

LARGE_FIGSIZE = (12,8) # for example
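You can then pass that constant to matplotlib whenever you create a figure, for example:

import matplotlib.pyplot as plt

LARGE_FIGSIZE = (12, 8)  # width and height in inches
fig, ax = plt.subplots(figsize=LARGE_FIGSIZE)
ax.plot([0, 1, 2], [10, 20, 15])  # any plot now uses the larger canvas
plt.show()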


Some pandas optimization hints

Use:

pandas.set_option()

to set a large number of options.  For example:

pandas.set_option("display.max_rows", 16)

and only 16 rows of data will be displayed.  There are many options, so just run “pandas.set_option?” to see what is available.
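Here are a few of the display options I find useful – a short sketch using standard pandas option names:

import pandas as pd

pd.set_option("display.max_rows", 16)     # show at most 16 rows
pd.set_option("display.max_columns", 20)  # show at most 20 columns
pd.set_option("display.precision", 3)     # round displayed floats to 3 decimal places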

If you have other useful Jupyter notebook tips, I would love to hear about them.


Data Science, Data Visualization

Altair – A Declarative Statistical Visualization Library for Python – Unveiled at SciPy 2016 Keynote Speech by Brian Granger.

You should check out Altair, an API designed to make data visualization much easier in Python.  Altair was introduced today in a keynote speech by Brian Granger on the opening day of SciPy 2016 (Scientific Computing with Python). Brian is the leader of the IPython project and a co-founder of Project Jupyter (Jupyter notebooks are my favorite way to code in Python or R).

Matplotlib has been the cornerstone of data visualization in Python, and as Brian Granger pointed out, you can do anything you want in matplotlib – but there is a price to pay for that, and that price is time and effort.

Altair is designed as “a declarative statistical visualization library for Python”.  Here is the link to Brian Granger’s GitHub site, which houses the Altair files.  Altair is designed to be a very simple API, with minimal coding required to produce really nice visualizations.  A point Brian made in his talk was that Altair is a declarative API: it specifies what should be done, not how it should be done.  The source of the data is a pandas DataFrame in a “tidy” format.  The end result is a JSON data structure that follows the Vega-Lite specification.

Here is my understanding, at a very high level, of the relationship between Altair, Vega-Lite, Vega, and D3.  (For more information, follow this link.)  D3 (Data-Driven Documents) is a web-based visualization tool, but it is a low-level system.  Vega is a higher-level visualization specification language built on top of D3.  Vega-Lite is a high-level visualization grammar, and a higher-level language than Vega; it provides a concise JSON syntax that can be compiled to Vega specifications (link).  Altair is higher-level still, and emits JSON data structures following the Vega-Lite specification.   The idea is that as you move up the stack, the complexity and difficulty of producing a graphic goes down.

On the GitHub site there are a number of Jupyter notebook tutorials.  There is a somewhat restricted library of data visualizations available, and they currently list scatter charts, bar charts, line charts, area charts, layered charts, and grouped regression charts.

The fundamental object in Altair is the “Chart”, which takes a pandas DataFrame as its single argument.  You then start specifying what you want: what kind of “mark” to use and which visual encodings (X, Y, Color, Opacity, Shape, Size, etc.) to apply.  There are a variety of data transformations available, such as aggregation, values, count, valid, missing, distinct, sum, average, variance, stdev, median, min, max, etc.  It is also easy to export the charts and publish them on the web as Vega-Lite plots.
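As a rough sketch of what this looks like in practice (patterned after the tutorial notebooks – the DataFrame and its column names here are made up):

import pandas as pd
from altair import Chart  # import style of early Altair releases; newer versions use "import altair as alt"

# A tidy DataFrame: one row per observation, one column per variable
data = pd.DataFrame({
    "height": [150, 160, 170, 180],
    "weight": [52, 61, 68, 77],
    "gender": ["F", "F", "M", "M"],
})

# Declare what to show, not how to draw it
chart = Chart(data).mark_point().encode(
    x="height",
    y="weight",
    color="gender",
)
chart  # the final expression renders the Vega-Lite chart in the notebook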

This looks like a very exciting and much easier to use data visualization API, and I look forward to exploring it more soon.

Data Science, Northwestern University MSPA, Predictive Analytics

Northwestern University MSPA 401, Introduction to Statistics Review

I finished this course last week, and thought I would post my thoughts before I forget them.

I was in Professor Roy Sanford’s section, and I HIGHLY recommend him.  He is an extremely experienced practitioner, and very knowledgeable of statistics and in using R for statistical analysis.

The course is focused on several aspects – learning basic statistics, learning R to perform statistical analysis, and engaging the students to participate in discussions that are pertinent to the material being learned.

Learning Statistics

The core text for the course is Ken Black’s Business Statistics For Contemporary Decision Making, 8th Edition.  It is a loose-leaf binder text, so you can remove the sections you are studying, which is nice.  It is a very down-to-earth text, with plenty of examples and problems.  There is a companion website called WileyPlus that has videos to watch and a variety of problems and exercises.

A second, supplemental statistics text is Rand R. Wilcox’s Basic Statistics: Understanding Conventional Methods and Modern Insights.  There are selected readings that highlight some contemporary issues.  It is not as easy to read as Black’s text, but still informative.

Learning R

The coursework is presented using R.  You don’t HAVE to learn to use R, but you would be an idiot not to take advantage of this opportunity.  A great deal of effort has been put into devising the curriculum to help you learn R.   It is well thought out, and I feel very confident that I have obtained a good working knowledge of R on which to build.  I was astounded to read a comment on the LinkedIn group – Networking Group for Northwestern University’s MS in Predictive Analytics Program – from a previous student who took this course, who said he didn’t really learn any R because he didn’t do any of the R reading or assignments.  To me, learning R was just as important as learning the statistics.  Plus, I don’t know how you could do the Data Analysis Projects without learning R. Learning R is accomplished through reading various texts, watching the weekly videos on R produced by Prof. Sanford, and then doing exercises.  Plus there are R resources and lessons, including links to Lynda.com.

I did the work in both RStudio and in a Jupyter Notebook using the R kernel. The Jupyter Notebook was my favorite way of doing the assignments because I could refer back to them.  But some things are way easier to do in RStudio, like installing packages and data sets, so sometimes I switched between the two.  See my other blog posts for information about Jupyter Notebooks.

The first R text is Winston Chang’s R Graphics Cookbook.  This takes you through the R basics and gets you up to speed quickly on visualizing data.  There is a little bit about using the base plotting function in R, but most of the book is about visualizing with the ggplot2 package.  If you follow the exercises, you will get good at plotting and visualizing data.  You will learn scatter plots, line graphs, bar graphs, histograms, box plots (a lot – I finally understand what to do with a box plot), functions, and QQ plots (I finally understand these as well).  All of these are extremely helpful in what you will spend a lot of time learning, Exploratory Data Analysis (EDA).

The second R text is Jared P. Lander’s R for Everyone: Advanced Analytics and Graphics.  This dives more deeply into using R for things other than data visualization and graphics, although it includes this as well.  This is a very easy to read and follow text.

The third R text is John Verzani’s Using R for Introductory Statistics: 2nd Edition.  This book is a very deep dive into R’s capability to do statistical analysis.  Although very detailed, it is understandable with great examples.

The last R text is downloadable from the site, Sarah Stowell’s Using R for Statistics.  This is also a very practical book on both statistics and visualization.

Don’t be overwhelmed by the number of texts and the amount of reading; it is doable, and I would do it all.  If you do, you will not be able to say you did not get your money’s worth.

In addition, there are beginning videos and lessons about learning R, including links to Lynda.com.   There are weekly Calculations with R assignments, which include a video with examples.  There are exercises with these weekly assignments as well.  Finally, there are R lessons which take you through learning R in an organized manner.

Sync Sessions and Videos

Professor Sanford holds a sync session every other week.  These are extremely informative and helpful.  You don’t have to watch live, but you do need to watch the recordings later.  The sync sessions in Predict 400 were optional, and you could get by fine without watching them.  That is not the case here.  You will learn a lot from these.

The same holds for the videos he has created to go along with the weekly R exercises.  These are must watch videos.

Data Analysis Projects

There are two data analysis projects.  You will learn how to apply what you are learning to a hypothetical data analysis project.  These are pretty challenging, but VERY worthwhile.  They show the applied focus of the MSPA program, and I found them beneficial.  The first one really focused on doing some exploratory data analysis.  The second one was twice as long as the first, and applied what you learned later in the course, including the creation of a linear regression model.  You will definitely want to start early on these, and put in the effort to do them correctly, as together they constitute 2/5 of your grade.

Bi-weekly Tests

There are 4 bi-weekly tests which are very fair and doable.  Together they constitute 1/5 of your grade.

Final Exam

The final exam is also very fair and doable.  Much easier if you have paid attention to learning R, as you can use R to do the exam.  This is 1/5 of your grade.

Communications and Discussions

There are Communications discussion sections set up for statistics and R.  You can post a question anytime in either and get a rapid response from either Prof. Sanford or the R TA.  Our R TA was Todd Peterson, and he was extremely knowledgeable, helpful, and responsive.

Every week there are two discussions around topics you are learning.  These are student driven, and if taken seriously, you can learn a lot from each other.  There are some extremely bright and talented students in these classes who have great real world experience in a variety of sectors.   The final discussion section is a recap of what you learned that week, and Prof. Sanford participates in that discussion.

Overall

I spent between 20 and 30 hours per week on the coursework.  You wouldn’t have to spend that much time, especially if this material is not new to you.  But I wanted to really learn the material, not just pass the class.

I really enjoyed this course on many fronts.  I found learning about statistics and R together was very complementary.  In fact, I cannot imagine doing any kind of statistical analysis without using a language such as R.  I am now trying to recreate what I learned in R using Python.  I really feel as if I got my money’s worth.


Data Science

Using Jupyter Notebooks to learn R, Python

I love using Jupyter Notebooks to learn R and Python.  I only wish I had discovered them when I first started to learn Python.  The notebooks are a great way to take notes, run code, see the output of the code, and then visualize the output.  The notebooks can be organized by language (i.e., Python vs. R), and also by the course you are taking or the book you are working your way through.  You can then go back and view your notes and code for future reference.

Project Jupyter grew out of the IPython project in 2014, and IPython notebooks are now Jupyter notebooks.  Jupyter Notebooks are described as “a web application for interactive data science and scientific computing”. These notebooks support over 40 programming languages, and you can create notebooks with a Python kernel or an R kernel, amongst others.  They are great for learning programming languages, and several academic institutions are using them in their CS courses.  They are also great for “reproducibility” – the ability to reproduce the findings that other people report.  By publishing a notebook on GitHub, Dropbox, or the Jupyter Notebook Viewer, others can see exactly what was done and run the code themselves.

Here is how I use Jupyter Notebooks.  When I start a new course – whether an official course in my Northwestern University Master of Science in Predictive Analytics program, a web-based course like the ones I have been taking from DataCamp and Udemy, or a book that I am working my way through – I create a new Jupyter notebook.

You first have to start Jupyter by typing “jupyter notebook” in your shell (I use Windows PowerShell).  This then opens up a browser page, “Home”.

[Screenshot: the Jupyter “Home” page]

If I want to open an existing notebook, I scroll down to the notebook of interest and open it.  Here is a screen shot showing some of my notebooks.

[Screenshot: a list of my notebooks on the “Home” page]

If I want to start a new notebook, I go to the top, select “New”, and then choose either a Python or R notebook.  Jupyter comes with the Python kernel installed (go to IRkernel on GitHub for instructions on installing the R kernel).  This opens up a new notebook.

[Screenshot: a brand-new notebook]

You type commands or text into “cells” and can run the cells individually or all together.  The two cell types I use most are “Markdown” and “Code”.  You do have to learn a few easy Markdown commands, for headers and the like.  The Markdown cells are used for taking notes and inserting text.  The Code cells are used to enter and run code.

[Screenshot: Markdown and Code cells in a notebook]

Once you have entered your code, you can run the cell several ways.  The most convenient is to hit “Shift-Enter”, which runs the code in that cell and brings up a new blank cell.

These are great for creating and saving visualizations, as you can make minor changes and then compare plots.  Here are a few examples.

[Screenshot: example plot]

[Screenshot: a second example plot]

There are a few things that don’t run smoothly yet, like loading packages in R.  I have found the easiest way is to install a package using RStudio, and then use the library() command in Jupyter to load it into the Jupyter notebook.  Alternatively, you could use the following command each time:

install.packages("package name", repos = c("https://rweb.crmda.ku.edu/cran/")) # you can select your own CRAN mirror and insert it into the repos argument

Overall, I love using Jupyter to take notes, run code while learning, and organize my learning so I can easily find it later.  I see its huge potential for sharing data and being able to easily reproduce results.  Give it a try!