Data Science, Data Visualization

Data Science Skill Network Visualization

I came across this great visualization by Ferris Jumah (see Ferris Jumah’s blog post) about the relationships between data science skills listed by “Data Scientists” on their LinkedIn profiles.

Data science skill network. To view a higher-resolution image, go to: http://imgur.com/hoyFT4t

How many of these skills have you mastered?

Ferris’s conclusions about a few key themes:

  1. Approach data with a mathematical mindset.
  2. Use a common language to access, explore, and model data.
  3. Develop strong computer science and software engineering backgrounds.


Data Science, Data Scientist

Who is Doing What/Earning What in Data Science Infographic

Are you confused yet about the different roles/titles that people can have in the data analytics industry? I think this might help add to your confusion. This is a very nicely done infographic by DataCamp (http://blog.datacamp.com/data-science-industry-infographic/), presented for your viewing pleasure and consideration. Where do you fit into this categorization? And does your compensation match your title, match your responsibilities, match your usefulness to your organization?

[DataCamp data science industry infographic]

 

Data Science

Text Cleaning Using Python Infographic

Here is an infographic about using Python for text cleaning from the Analytics Vidhya website (analyticsvidhya.com).

Here is the link: http://i2.wp.com/www.analyticsvidhya.com/wp-content/uploads/2015/06/New-Info.jpg

In addition to this information, Matt Crowson, the Python TA for my Math for Modelers course at Northwestern, suggested the following as well.

NLTK (Natural Language Tool Kit) http://www.nltk.org/

scikit-learn http://scikit-learn.org/stable/
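Before reaching for NLTK or scikit-learn, several of the common text-cleaning steps (lowercasing, removing digits and punctuation, collapsing whitespace) can be done with the Python standard library alone. Here is a minimal sketch; the function name and the particular steps chosen are my own illustration, not taken from the infographic:

```python
import re
import string

def clean_text(text):
    """Apply a few common text-cleaning steps: lowercase,
    remove digits, strip punctuation, collapse whitespace."""
    text = text.lower()                      # normalize case
    text = re.sub(r"\d+", " ", text)         # drop digits
    text = text.translate(str.maketrans("", "", string.punctuation))  # drop punctuation
    text = re.sub(r"\s+", " ", text).strip() # collapse runs of whitespace
    return text

print(clean_text("Hello, World!  123 Data  Science."))  # → "hello world data science"
```

For tokenization, stop-word removal, or stemming, NLTK is the natural next step beyond these basics.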


Becoming a Healthcare Data Scientist, Northwestern University MSPA, Predictive Analytics

Interim Review of Northwestern University’s MSPA Math for Modelers course.

Predict 400, Math for Modelers Course, Northwestern University MSPA

I am going to summarize my experience to date with Northwestern University’s Master of Science in Predictive Analytics program. I am past the halfway point (week 7 of 9) of my first trimester in this program. I am enrolled in one course, Predict 400, Math for Modelers. This is being taught by Professor Philip Goldfeder.

I will first describe the outline of how the course works. This is an asynchronous learning experience, for the most part. We have had one live session with Prof. Goldfeder. The coursework is presented through the online platform called Canvas. There are three main components to the class, which I will describe in greater detail below. The first component is learning the actual math. The second is participating in discussions about questions posed each week by Prof. Goldfeder. The third is learning Python.

What I really love about this program is how it brings together the book work, homework, learning Python, and getting help with problems/questions, all in one place. I had been trying to do this informally on my own, and it was frustrating to learn math/machine learning/etc. from books or other online courses, learn Python a separate way, and then have difficulty getting my questions answered. It is 1000% easier when this is all rolled into one. Even though this is a lot more expensive than doing it on your own, to me it is worth every penny.

Professor Philip Goldfeder. He is a great professor for this course. He received great reviews in the CTEC (Course and Teacher Evaluation Council; these are visible when signing up for classes), and I see why. Not only is he extremely knowledgeable, he is also very engaged with the students and seems genuinely interested in making sure we learn and understand the material. He is also great at challenging the students to think of ways to apply the concepts learned to real-world examples. I highly recommend him.

Canvas platform. This is where you go to do everything. It has sections for Announcements, Syllabus, Modules (describe each week’s assignments and is where you download things), Grades, People (section where everyone gets to describe themselves and you get to know your classmates), and Discussions.

The math. The introductory course, Math for Modelers, is designed to be a “Review of fundamental concepts from calculus, linear algebra, and probability with a focus upon applications in statistics and predictive modeling. The topics covered will include systems of linear equations and matrices, linear programming, concepts of probability underlying both classical and Bayesian statistics, differential calculus and integration.” This is a very aggressive review of linear algebra, probability, differential calculus, and integral calculus. It would be easier for someone who has taken these courses recently, but it is challenging for me since it has been decades since I learned this (and I am not sure I really learned some of it the first time around). You are assigned 1-2 chapters a week from the textbook “Lial, Greenwell and Ritchey (2012). Student Solutions Manual for Finite Mathematics and Calculus with Applications, 9th Ed.” Prof. Goldfeder prepares a high-level video that reviews the material in the chapters. He also posts PowerPoint presentations of the material in each chapter.
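To give a flavor of the linear-algebra portion (systems of linear equations), here is a tiny worked example in plain Python: solving a 2x2 system with Cramer's rule. The function and the particular system are my own illustration, not from the course materials:

```python
# Solve the system:  3x + 2y = 12
#                     x -  y =  1
def solve_2x2(a, b, c, d, e, f):
    """Solve ax + by = e, cx + dy = f via Cramer's rule."""
    det = a * d - b * c          # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("singular system: no unique solution")
    x = (e * d - b * f) / det
    y = (a * f - e * c) / det
    return x, y

print(solve_2x2(3, 2, 1, -1, 12, 1))  # → (2.8, 1.8)
```

Checking by substitution: 3(2.8) + 2(1.8) = 12 and 2.8 - 1.8 = 1, so the solution satisfies both equations.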

Homework. You are then required to complete a homework assignment each week, which covers the material in the chapters and is typically 20-30 questions. This is completed through the Pearson educational application, which is a FANTASTIC resource. The textbook is online here. Each chapter has its own section, and you can do problems in each sub-chapter. If you struggle with a solution, you can actually have the application walk you step by step through the problem and show you similar problems. There are links out to the textbook that take you right to the section dealing with the problem you are working on. There are also videos to view on each topic. I almost always do all of the study problems. The homework is another section in here, and that is how you submit your homework. Homework is worth 25% of your grade.

Discussions. This is a surprisingly difficult section. The NU MSPA program is designed as an applied program, built around real-world examples and learning. To that end, Prof. Goldfeder challenges us each week to come up with real-world examples or explanations of the material we are learning. Formulating a response can take a surprising amount of time if you take it seriously, but in doing so I have learned a lot. The process makes you think about how these concepts could be used in the real world. You are supposed to post your discussion response by the middle of the week so that you can participate in the discussions about what you posted, as well as what your classmates posted. The kicker is that you can’t see what other students have posted until you post your submission. I have learned a lot from these discussions. The other students in the course have such wide-ranging backgrounds that they can weigh in on the topics in a meaningful way: we have students with backgrounds in sports analytics, actuarial science, industry, medicine, computer science, and more. The discussions are worth 25% of your grade.

Python. This could be extremely challenging if you have not had any exposure to Python or programming. I knew this would be a challenge, so I took a few Python courses (Codecademy’s Python course at http://www.codecademy.com/learn/python, and How to Think Like a Computer Scientist at interactivepython.org) prior to enrolling in the class. I would still label myself a beginner in Python, and the exercises challenged me to expand my knowledge. That said, I personally think this is one of the most gratifying portions of the course. I really enjoy combining what we are learning with Python. We cover the basics of Python, creating graphs and plots, and using NumPy and SciPy. I love this part of the course. This is done through the Enthought Canopy platform, which has the interactive editor, the package manager, and, of great value, “Training on Demand,” a very comprehensive series of instructional videos covering basic and advanced functionality. It is well worth the money just for access to these videos. There is no weekly grade for the Python assignments; however, you need to keep up with them. There were questions on the midterm that specifically required the use of Python to analyze the question and display the results. We have a Python TA assigned to the class who is very responsive to questions. In addition, students post code and help provide input on any questions.
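As an example of the math-plus-Python flavor of these assignments, here is a small sketch connecting the calculus material to code: approximating a definite integral numerically with the trapezoidal rule. This is my own illustration using only the standard library (the actual coursework uses NumPy and SciPy):

```python
# Approximate the integral of f(x) = x^2 on [0, 3].
# The exact answer is x^3/3 evaluated from 0 to 3, i.e. 9.0.
def trapezoid(f, a, b, n=1000):
    """Approximate the definite integral of f on [a, b]
    using n trapezoids of equal width."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))   # endpoints get half weight
    for i in range(1, n):
        total += f(a + i * h)     # interior points get full weight
    return total * h

approx = trapezoid(lambda x: x ** 2, 0, 3)
print(round(approx, 4))  # very close to the exact value 9.0
```

Increasing `n` shrinks the error, which is a nice way to see the limit definition of the integral in action.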

Tests. The midterm is worth 25% of the grade, as is the final examination. The midterm was a take-home test and required a substantial investment of time to complete. In addition, there was the regular homework/reading for that week, although the discussion that week was optional. This is a week when you will want to cut yourself some slack and allow extra time. I had a heavy work week that week, and regretted not thinking about this ahead of time to give myself a lighter work schedule.

Time requirement. I am finding that I am devoting 20-30 hours per week to all of this. You could devote less time if you were more up to date on the math or Python. But remember, I am doing this to learn and retain the information. So I am doing all of the reading in the textbook, all of the example problems and “your turn” problems, and almost all of the chapter problems in the Pearson application. I have not had time to do all the problems in the back of the textbook, however. I also try to provide meaningful input into the discussions, both in my submission and in commenting on what other students have posted. I have also been trying to continue to dive deeper into learning Python.

Typical week. I usually try to do the textbook reading on Monday and Tuesday. (All of the assignments are due midnight Sunday night, so Monday starts a new week). I don’t do a lot of problems initially as I want to get through the reading, so I can apply it to my discussion. Then on Wednesday I like to start working on my discussion submission and try to get it in by Wednesday, or Thursday at the latest. That way I can participate in the discussions in a meaningful way. After I get my discussion submitted, I go back and work through the chapter problems in Pearson. I like to get to the homework section on Saturday. Ideally I like to have Sunday to do the Python reading and assignments.

My overall assessment of this course is that I am extremely satisfied. I think this is very professionally done, I am learning the math, I am being challenged to think about applying this to the real world, and I am learning Python. There is definitely a lot going on, but that is why I signed up for this. I feel as if I am getting my money’s worth.

Healthcare Technology

Billings Gazette article on “Web, mobile tech, used to connect with patients, improve care.”

I have attached a link to an article that describes how the Billings Clinic is using technology to better connect with our patients, and improve the care we deliver to them.

http://billingsgazette.com/lifestyles/health-med-fit/web-mobile-tech-used-to-connect-with-patients-improve-care/article_1bff7fbd-5d71-59a7-ac3f-1c4c894dcdfd.html

 

Becoming a Healthcare Data Scientist

Physician Data Scientist – Why and What Type? Part I.

Why would a practicing Emergency Medicine Physician want to become a Data Scientist, and what type of Data Scientist could I become?

I will provide my answers to those two questions, starting with what type of Data Scientist in this post, followed by Why I want to become a Data Scientist in Part 2.

First – What kind of Data Scientist do I see myself becoming?

Types of Data Scientists

I am going to use the framework that Bill Voorhies referenced in his blog post “How to Become a Data Scientist” (http://data-magnum.com/how-to-become-a-data-scientist/). He used the framework developed by Harris, Murphy, and Vaisman in their 2013 O’Reilly report “Analyzing the Analyzers: An Introspective Survey of Data Scientists and Their Work,” available for free at http://www.oreilly.com/data/free/analyzing-the-analyzers.csp. They describe four subtypes of Data Scientists: Data Businesspeople, Data Creatives, Data Developers, and Data Researchers. Figure 3-3 shows the skill-set strengths of each group. Below the figure, I provide a synopsis of how they describe each subtype.

[Figure 3-3: skill-set strengths by Data Scientist subtype]

Data Businesspeople are most focused on the organization and how data projects yield profit.  They are leaders and entrepreneurs.  They have technical skills and work with real data.  They are the most likely group to have an MBA, and have an undergraduate Engineering degree.

Data Creatives are seen as the broadest of the Data Scientists, excelling at applying a wide range of tools and technologies to a problem, or creating innovative prototypes at hackathons: the quintessential Jack of All Trades. They are seen as artists. They have substantial business experience.

Data Developers are focused on the technical problem of managing data – how to get it, store it, learn from it.  They are writing a lot of code, and have substantial computer science backgrounds.  They have more of the machine learning/big data skills than the other groups.

Data Researchers have a strong background in statistics, and have an academic background.

What type of Data Scientist do I see myself becoming?

I see myself fitting into two categories: a mix of the “Data Businesspeople” and “Data Creatives” subtypes of data scientists. Although it will be easiest to become the Data Businessperson type, I have aspirations of becoming more of a Data Creative, or Jack of All Trades, as well. I will discuss the different skill sets used in the analysis, where I see my current strengths, and where my future strengths need to be developed in order to achieve these goals.

In terms of business skills, I have a broad understanding of medicine in general, and emergency medicine in particular. I also understand the Prehospital Emergency Medical Services environment, having started my career as an EMT-Paramedic and having served as a Medical Director for several EMS services. I am currently the Medical Director for our Air Ambulance service. In addition, as a Chief Medical Information Officer, I understand the IT needs of clinicians and health care workers, and the technical realities of what IT can deliver. I also serve as the Physician Liaison to our BI/Enterprise Analytics Division. I see my experience and knowledge as a subject matter expert in clinical medicine driving the kinds of research questions that our data science/data analytics team attempts to answer.

I already have a deep interest in developing predictive algorithms that could be incorporated into bedside monitoring technologies to predict future states and detect early clinical deterioration. This information could be used to guide triage decisions for clinicians: is the patient safe to be discharged home, or do they need to be admitted to the hospital? If they need to be admitted, do they need to be in the ICU, or is an unmonitored bed going to be ok? Is the patient predicted to recover uneventfully, or do they have a high probability of deterioration requiring high resource utilization and admission to the ICU? Does a patient at a small rural critical access hospital need to be transferred to a tertiary care facility that might be hundreds of miles away, taking them away from their family support network and exposing them to the dangers and costs of transfer (currently between $25,000 and $75,000), or can they be safely treated at their hometown facility? Will the Internet of Things help us remotely monitor patients at home, or even in the hospital, to detect either improvement or deterioration before it is clinically apparent, thereby allowing earlier treatments and interventions and improving outcomes? These are some of the important unanswered questions in my mind.

My weakest current skill, and likely my weakest skill going forward, is programming or hacking. That is why I will never be a pure Data Creative type. I do want to become competent beyond a basic level, in order to do some of the work myself and hand off the really complicated code to a true programmer. I am currently working on learning Python, having finished the Codecademy course, and am almost finished with Zed A. Shaw’s “Learn Python the Hard Way.” I know some R as well, mainly for statistical analysis. Having said that, I am a novice coder at best.

I am extremely interested in machine learning and big data.  I would really like to become adept at analyzing big data because I see the potential of this approach in analyzing healthcare data.  This will be a big focus of mine.

I have a basic background in math and statistics, and am actually looking forward to relearning them. I think I will learn a tremendous amount now that I understand the importance of having this background. I am currently working my way through the textbook we will be using in the fall for the Math for Modelers course.

When you consider all of the factors, my largest skill set is my business or subject matter experience.   I think this will allow me to be a better leader in choosing which analytics projects we pursue.   Having a good background in what types of analyses are possible, and which type are good for what situation, will help me make better decisions, and understand the results.   I am hopeful that I will then be able to translate the insights learned into understandable and actionable information that can be presented to the various stakeholders.

I am also hopeful that I can help drive the changes that are needed across the organization, based on the insights learned. That is the basis for the “Learning Health System” concept. A Learning Health System has to be able to capture important data, analyze it, gain insights, diffuse these insights, and rapidly change behavior to incorporate them. Our institution is currently trying to understand the meaning and basic concepts of a Learning Health System and put in place the framework and people necessary to achieve the goals of this system. I hope to contribute to this in a meaningful manner. There are also national initiatives on becoming Learning Health Systems. The Learning Health Community (http://www.learninghealth.org/home/) is an excellent resource listing core values and some of the organizations also working on this goal.

In my next post, I will answer the question of Why I want to become a data scientist.

Healthcare Predictive Analytics

“The Formula” – great summer reading and some implications for healthcare predictive analytics.

I would like to recommend “The Formula” by Luke Dormehl for a good summer read. I am enjoying this book so far, and I think it should be a must-read for anyone interested in predictive analytics and predictive modeling. A couple of passages from the beginning of the book are provided below.


“Algorithms sort, filter and select the information that is presented to us on a daily basis.”  “… are changing the way that we view … life, the universe, and everything.”

“To make sense of a big picture, we reduce it …  To take an abstract concept such as human intelligence and turn it into something quantifiable, we abstract it further, stripping away complexity and assigning it a seemingly arbitrary number, which becomes a person’s IQ.”

“What is new is the scale that this idea is now being enacted upon , to the point that it is difficult to think of a field of work or leisure that is not subject to algorithmization and The Formula.  This book is about how we reached this point, and how the age of the algorithm impacts and shapes subjects as varied as human creativity, human relationships, notions of identity, and matters of law.”

“Algorithms are very good at providing us with answers in all of these cases.  The real question is whether they give us the answers we want (my emphasis).”

This takes us back to George E.P. Box’s famous quote: “all models are wrong, but some are useful.” We can create algorithms for almost anything, but how useful are they? Accurate models can be created that work really well on deterministic systems, but are much harder to develop for complex systems. As you strip away features from a complex system, you lose the impact of those features on the system. You try to strip away only features that do not have a huge impact on the performance of the system, but this is often unknowable in advance.

One of the great challenges in clinical medicine is trying to determine or predict what is going to happen to a patient in the future. We know generally that smoking is bad, too much alcohol is bad, being overweight is bad, not exercising is bad, not sleeping enough is bad. We know these are bad for the overall population. However, we do not know how each of these affects an individual patient, nor how they are interrelated. We would like to develop models that can predict what will happen if you have certain conditions (predictive modeling), and then look at what would happen if you took certain courses of action/treatments/preventive actions (prescriptive modeling). The results of these models would allow clinicians and patients to be better informed and to choose the best pathway forward.

Of particular interest to me, I would like to be able to predict, in real time, what is going to happen to a patient I am seeing in the emergency room. This is a complex situation. Their current state — physiologic vital signs (level of consciousness, blood pressure, pulse, respiratory rate, temperature, blood oxygen level, respiratory variability, heart rate variability, EKG, etc.), along with their current laboratory and radiological imaging findings — will define their current problem or diagnosis. The patient’s past medical history, medications, allergies, social support, living environment, etc., will have major impacts on how they respond to their current illness or injury. We would like to aggregate all of this information into predictive and prescriptive models that could predict future states. Is the patient safe to be discharged home, or do they need to be admitted? If they need to be admitted, can they go to the short-stay unit, a bed with cardiac monitoring, or the intensive care unit? Given the current treatment, what will their response be — will they get better or worse? Will they develop sepsis? Will they develop respiratory failure and require a tube placed down their throat and a ventilator to breathe for them?
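To make the triage idea concrete, here is a deliberately oversimplified, rule-based sketch in Python. Every threshold, point value, and disposition cutoff below is invented purely for illustration and is not clinically validated; a real predictive model would be trained on patient data rather than hand-coded:

```python
def early_warning_score(hr, sbp, rr, temp_c):
    """Toy early-warning score: sum points for abnormal vital signs.
    All thresholds and weights are illustrative only, NOT clinically validated."""
    score = 0
    if hr > 110 or hr < 50:             # heart rate (beats/min)
        score += 2
    if sbp < 90:                        # systolic blood pressure (mmHg)
        score += 3
    if rr > 24 or rr < 8:               # respiratory rate (breaths/min)
        score += 2
    if temp_c > 38.5 or temp_c < 35.0:  # temperature (°C)
        score += 1
    return score

def triage(score):
    """Map a score to an illustrative (hypothetical) disposition."""
    if score >= 5:
        return "ICU"
    if score >= 2:
        return "monitored bed"
    return "discharge or unmonitored bed"

s = early_warning_score(hr=120, sbp=85, rr=26, temp_c=38.9)
print(s, triage(s))  # → 8 ICU
```

A trained model would replace the hand-picked point values with learned weights, and would incorporate the history, laboratory, and imaging features described above; the rule-based form just makes the input-to-disposition pipeline easy to see.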

A particularly exciting area ripe for development is the Internet of Things, which is going to revolutionize how we collect data, both at home and in the hospital. This much-needed capability will allow us to monitor patients at home, detect illnesses much earlier, monitor responses to therapies, and more, and will be useful for a whole host of things we haven’t even imagined yet.

These are some of the complex questions that face us now in medicine. I am excited to participate in the quest to answer some of these vexing questions using all of the analytical tools currently available, whether “small data” analysis using standard descriptive and inferential statistics, predictive analytics, or big data analytics.