Data Science, Northwestern University MSPA

Python Tops KDnuggets 2017 Data Science Software Poll

The results of KDnuggets’ 18th annual Software Poll should be fascinating reading for anyone involved in data science and analytics.  Some highlights: Python (52.6%) finally overtook R (52.1%), SQL remained at about 35%, and Spark and TensorFlow each climbed above 20%.

[Image: KDnuggets 2017 Data Science Software Poll results graph]

(Graph taken from http://www.kdnuggets.com/2017/05/poll-analytics-data-science-machine-learning-software-leaders.html/2)

I am about halfway through Northwestern University’s Master of Science in Predictive Analytics (MSPA) program.  I am very thankful that the program has made learning different languages a priority.  I have already learned Python (working in Jupyter Notebooks), R, SQL, some NoSQL (MongoDB), and SAS.  In my current class, Generalized Linear Models, I have also started to learn Angoss, SAS Enterprise Miner, and Microsoft Azure Machine Learning.  However, it looks like you can never stop learning new things – I am also going to have to learn Spark and TensorFlow, to name a few more.

I highly recommend you read this article.


Data Science, Northwestern University MSPA

DataCamp’s Importing Data in Python Part 1 and Part 2.

I recently finished these DataCamp courses and really liked them.  I highly recommend them to students in general, and especially to students in Northwestern University’s Master of Science in Predictive Analytics (MSPA) program.

Importing Data in Python Part 1 is described as:

As a Data Scientist, on a daily basis you will need to clean data, wrangle and munge it, visualize it, build predictive models and interpret these models. Before doing any of these, however, you will need to know how to get data into Python. In this course, you’ll learn the many ways to import data into Python: (i) from flat files such as .txts and .csvs; (ii) from files native to other software such as Excel spreadsheets, Stata, SAS and MATLAB files; (iii) from relational databases such as SQLite & PostgreSQL.
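
To give a flavor of what the course teaches, here is a minimal sketch of the three kinds of imports, assuming hypothetical files data.csv, data.xlsx, and data.sqlite (with a table named my_table) standing in for your own data:

    import pandas as pd
    from sqlalchemy import create_engine

    # (i) a flat file such as a .csv
    df_csv = pd.read_csv("data.csv")

    # (ii) a file native to other software, e.g. an Excel spreadsheet
    df_xlsx = pd.read_excel("data.xlsx", sheet_name=0)

    # (iii) a relational database such as SQLite (table name is hypothetical)
    engine = create_engine("sqlite:///data.sqlite")
    df_sql = pd.read_sql("SELECT * FROM my_table", engine)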

Importing Data in Python Part 2 is described as:

As a Data Scientist, on a daily basis you will need to clean data, wrangle and munge it, visualize it, build predictive models and interpret these models. Before doing any of these, however, you will need to know how to get data into Python. In the prequel to this course, you have already learnt many ways to import data into Python: (i) from flat files such as .txts and .csvs; (ii) from files native to other software such as Excel spreadsheets, Stata, SAS and MATLAB files; (iii) from relational databases such as SQLite & PostgreSQL. In this course, you’ll extend this knowledge base by learning to import data (i) from the web and (ii) a special and essential case of this: pulling data from Application Programming Interfaces, also known as APIs, such as the Twitter streaming API, which allows us to stream real-time tweets.
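
As a taste of the Part 2 material, here is a minimal sketch of pulling data from the web and from a JSON API.  The URL and endpoint are hypothetical placeholders; the course itself works with real sources such as the Twitter streaming API:

    import requests

    # (i) pulling a file from the web (hypothetical URL)
    response = requests.get("https://example.com/data.csv")
    with open("data.csv", "wb") as f:
        f.write(response.content)

    # (ii) pulling structured data from a JSON API (hypothetical endpoint)
    api_response = requests.get("https://example.com/api/items")
    items = api_response.json()  # decode the JSON payload into Python objects
    print(items)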


Data Science, Northwestern University MSPA

Learning to Use Python’s SQLAlchemy in DataCamp’s “Introduction to Databases in Python” – useful for students taking Northwestern’s MSPA Predict 420.

I just completed DataCamp’s course titled “Introduction to Databases in Python”.  This is a very informative course, and it is actually one of the few tutorials on SQLAlchemy that I have run across.

I just finished Northwestern University’s MSPA (Master of Science in Predictive Analytics) Predict 420 class, Database Systems and Data Preparation, and I wish I had taken DataCamp’s course first.  It would have helped tremendously.  You have the opportunity to use SQLAlchemy to interact with SQL databases in Predict 420, but I could not find a really good tutorial on the subject until I ran across DataCamp’s course after finishing Predict 420.  I highly recommend this DataCamp course to other MSPA students.

Introduction to Databases in Python is divided into 5 sections; the course’s description of each section is reproduced below, and a small code sketch follows the list.

  1. Basics of Relational Database.  In this chapter, you will become acquainted with the fundamentals of Relational Databases and the Relational Model.  You will learn how to connect to a database and then interact with it by writing basic SQL queries, both in raw SQL as well as with SQLAlchemy, which provides a Pythonic way of interacting with databases.
  2. Applying Filtering, Ordering, and Grouping to Queries.  In this chapter, you will build on the database knowledge you began acquiring in the previous chapter by writing more nuanced queries that allow you to filter, order, and count your data, all within the Pythonic framework provided by SQLAlchemy!
  3. Advanced SQLAlchemy Queries.  Herein, you will learn to perform advanced – and incredibly useful – queries that will enable you to interact with your data in powerful ways.
  4. Creating and Manipulating your own Databases.  In the previous chapters, you interacted with existing databases and queried them in various different ways.  Now, you will learn how to build your own databases and keep them updated!
  5. Putting it all together.  Here, you will bring together all of the skills you acquired in the previous chapters to work on a real life project!  From connecting to a database, to populating it, to reading and querying it, you will have a chance to apply all the key concepts you learned in this course.
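
To illustrate the “Pythonic way of interacting with databases” that the course teaches, here is a minimal sketch, assuming SQLAlchemy 1.4+ and a hypothetical SQLite file census.sqlite containing a census table with a state column (the file, table, and column names are stand-ins, not necessarily the course’s actual data):

    from sqlalchemy import create_engine, MetaData, Table, select

    # Connect to the (hypothetical) SQLite database file
    engine = create_engine("sqlite:///census.sqlite")
    metadata = MetaData()

    # Reflect the existing table's definition from the database
    census = Table("census", metadata, autoload_with=engine)

    # Build the query Pythonically rather than writing raw SQL
    stmt = select(census).where(census.c.state == "New York")

    with engine.connect() as connection:
        results = connection.execute(stmt).fetchall()

    print(results[:5])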


Data Science, Northwestern University MSPA

Northwestern University MSPA Program – new resources page for learning R and Python

For new students coming into Northwestern University’s Master of Science in Predictive Analytics (MSPA) program, there is often considerable apprehension about learning the programming languages (mainly R and Python, plus some SAS).  I have created a page on my blog site – Northwestern University MSPA Program – Learning R and Python resources – that lists some of the available resources, along with my favorites.

I would encourage students to work through courses in whatever language a class requires before that class begins.  There is enough time between the official courses to fit some of these in.  That way you don’t have to learn the course content and the programming language at the same time (doing both at once is still doable, it just takes more effort).

Data Science, Northwestern University MSPA

Northwestern University MSPA 420, Database Systems and Data Preparation Review

This was the fourth course I took in the MSPA program. I took this course because I wanted to understand relational and non-relational databases better, and become adept at storing, manipulating, and retrieving data from databases.  I thought this skill would be very beneficial when it came to getting data for the other analytics courses.

My overall assessment is that this was a good course conceptually, with a solid curriculum requiring a lot of reading and self-study.  However, I felt it could have been improved by preparing more sync sessions or videos, by improving the discussion sections, and by providing code solutions to the projects.  I will describe how the course was organized, and then expand on these comments.

I took the course from Dr. Lynd Bacon, a very knowledgeable instructor who was very helpful when engaged.

Course Goals

From the syllabus – The data “includes poorly structured and user-generated content data.”   “This course is primarily about manipulating data using Python tools.  SQL and noSQL technologies are used to some extent, and accessed using Python.”

The stated course goals are listed below:

  • “Articulate analytics as a core strategy using examples of successful predictive modeling/data mining applications in various industries.
  • Formulate and manage plans to address business issues with analytics.
  • Define key terms, concepts and issues in data management and database management systems with respect to predictive modeling
  • Evaluate the constraints, limitations and structure of data through data cleansing, preparation and exploratory analysis to create an analytical database.
  • Use object-oriented scripting software for data preparation.
  • Transform data into actionable insights through data exploration.”

In retrospect, the first two goals were not addressed explicitly in this course, and the third only in part: key concepts and terms around database management systems and data management were dealt with in depth.  There was a lot of conceptual work around extracting data from both relational (PostgreSQL) and non-relational (MongoDB) databases (MongoDB will not be used again, as the program is switching to Elasticsearch next semester).  The fifth and sixth goals were met through the project work.

Python was the programming language used, and a lot of the reading was devoted to developing Python skills.  Some people had never used Python before and were still able to get through it.  I had used Python in previous courses and felt I still learned a lot.  There was extensive use of pandas DataFrames.  We used the json and pymongo packages to interact with the MongoDB database, and we learned how to save DataFrames and other objects by pickling them or storing them with the shelve module.  I used Jupyter Notebooks to do my Python coding.  We also learned some very basic Linux in order to interact with the servers in the Social Sciences Computing Cluster (SSCC) and extract data from the relational and non-relational databases.
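
As a small illustration of the persistence techniques just mentioned, here is a minimal sketch using a toy DataFrame.  Pickling and the shelve module are standard pandas/Python; the pymongo lines are left as comments because they assume a MongoDB server you would have to have running, with hypothetical database and collection names:

    import shelve
    import pandas as pd

    # A toy DataFrame standing in for real project data
    df = pd.DataFrame({"flight": ["AA10", "DL20"], "delay_min": [5, 12]})

    # Save and restore a DataFrame by pickling it
    df.to_pickle("flights.pkl")
    df_back = pd.read_pickle("flights.pkl")

    # Or keep several named objects in a shelve (a persistent dictionary)
    with shelve.open("project_objects") as db:
        db["flights"] = df

    # Connecting to MongoDB with pymongo (assumes a server on localhost)
    # from pymongo import MongoClient
    # client = MongoClient("localhost", 27017)
    # first_doc = client["mydb"]["mycollection"].find_one()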

Like the other MSPA courses, it was structured around the required textbook readings, assigned articles, weekly discussions, and 4 projects.

Readings

The actual textbooks were mainly about Python, along with a very valuable text on data cleaning.  All of the reading on the relational and non-relational databases came from the assigned articles, some of which were chapters from other textbooks.

Textbooks

Lubanovic, B. (2015). Introducing Python: Modern Computing in Simple Packages.  Sebastopol, Calif.: O’Reilly. [ISBN-13: 978-1-449-35936-2]

McKinney, W. (2013). Python for Data Analysis: Agile Tools for Real-World Data. Sebastopol, Calif.: O’Reilly. [ISBN-13: 978-1-449-31979-3]

Osborne, J. W. (2013). Best Practices in Data Cleaning: A Complete Guide to Everything You Need to Do Before and After Collecting Your Data. Thousand Oaks, Calif.: Sage. [ISBN-13: 978-1-4129-8801-8]

There were additional recommended reference books that I purchased but did not really end up using.

The first two texts were good reads with a lot of practical code to work through and improve your Python skills.  The textbook on best practices in data cleaning is worth the read.  It drives home the importance of cleaning your data correctly and then testing the underlying assumptions that most statistical analyses are based upon.  The author provides convincing evidence to debunk several myths: robustness, perfect measurement, categorization, distributional irrelevance, equality, and the motivated participant.

Weekly Discussions

To be honest, I was disappointed with this aspect of the course.  Some students were very active, while others participated minimally and posted their one discussion the evening of the due date.  I did learn some things from the dedicated students, but I feel that if the professor had stressed this more, the postings could have been more robust.  This was the weakest discussion section of all the courses I have taken so far.

Sync Sessions

Disappointingly, there were only 2 sync sessions.  I feel this could be markedly improved.  I would like to see more involvement by the professor, either in running live sync sessions or in creating learning videos.  Ideally there would be one for each type of database system studied, so you could watch how to access, manipulate, and extract the data, then apply the data cleaning techniques, and then perform exploratory data analysis.  This was a huge disappointment for me.

Projects

There were a total of 4 projects.

The first project was built around airline flight data: pulling data into DataFrames, then manipulating and analyzing it.  The second project required extracting data from a relational database, creating a local SQLite database, manipulating and analyzing the data, and then saving the DataFrames by pickling or shelving them.  The third project required extracting hotel review information from JSON files.  The fourth and most challenging project involved extracting 501,513 Enron emails and then analyzing them.
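
To give a flavor of the second project’s workflow, here is a minimal sketch that writes a toy DataFrame to a local SQLite database and reads it back; the file and table names are hypothetical stand-ins for the assignment’s actual data:

    import pandas as pd
    from sqlalchemy import create_engine

    # A toy DataFrame standing in for data extracted from the course database
    df = pd.DataFrame({"airline": ["AA", "DL"], "flights": [120, 95]})

    # Create a local SQLite database and load the DataFrame into it
    engine = create_engine("sqlite:///local_copy.sqlite")
    df.to_sql("airlines", engine, if_exists="replace", index=False)

    # Query it back out to verify the round trip
    df_back = pd.read_sql("SELECT * FROM airlines", engine)
    print(df_back)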

I was disappointed with the more complex projects, and felt at times that the course work did not adequately prepare me to succeed on them easily.  I was able to muck my way through.  An extremely disappointing aspect of these projects is that good examples of student code were never shared or referenced by the professor.  I feel I would have been able to close the loop on my knowledge deficiencies if I had been able to see other very successful code examples and learn from them.

Summary

Overall this was an okay course that could be improved along the lines of my suggestions above.  I still learned a lot, and the course gave me a good foundation upon which to build.

Data Science, Machine Learning

The world of machine learning algorithms – a summary infographic.

This is a very nice infographic that shows the basic categories of machine learning algorithms.  It is somewhat informative to follow the path by which the infographic got posted on Twitter, where I saw it: the trail was somewhat misleading (although not intentionally, I believe) about who actually created it.  To me this highlights the importance of crediting our information sources correctly.

The topic was also broached in the FiveThirtyEight article “Who Will Debunk The Debunkers?” by Daniel Engber.  The article discusses many myths, one being the story of how spinach came to be credited with an outsized iron content.  It mentions that an unscholarly and unsourced article became “the ultimate authority for all the citations that followed”.  I have run across this as well, when I was trying to find the source of a quoted definition of a “Learning Health System”.  The definition was cited by at least twenty scholarly articles, but there was no reference for the citation, only circular references to the other articles that used the same definition.  This highlights the importance of correctly citing the source of information, so it can be critically analyzed by other people interested in using it.

I noticed the infographic after it had been tweeted by Evan Sinar (@EvanSinar).  The tweet cited an article on @DataScienceCentral.  That article, “12 Algorithms Every Data Scientist Should Know” by Emmanuelle Rieuf, mentions an article posted by Mark van Rijmenam with the same title, “12 Algorithms Every Data Scientist Should Know”, and then shows the infographic, giving the impression that this was its source.  That article in turn mentions that the “guys from Think Big Data developed the infographic” and provides a link, which leads to the article “Which are the best known machine learning algorithms? Infographic” by Anubhav Srivastava.  It “mentioned over a dozen algorithms, segregated by their application intent, that should be in the repertoire of every data scientist”.  The bottom line: be careful with your source citations so that it is not hard for people to follow a source backwards in time.  I was able to do it in this case; it just took a little while.  But there are many times when it is impossible.

Now, for the infographic.

[Infographic: 12 Algorithms Every Data Scientist Should Know]


Data Science

Data Science Ecosystem graphic

I ran across this graphic in the article The Data Science Ecosystem: Preamble, by Lukas Biewald, posted on the Open Data Science (ODSC) site.  It lays out only some of the ecosystem, but I like the way Lukas divides it up nicely into components.  I would note that a lot is left out of what Python and R can do in the Enrichment, ETL/Blending, Data Integration, and Insights and Models sections.  But overall I like the graphic.

[Graphic: The Data Science Ecosystem]