Artificial Intelligence, Cerner, Electronic Health Record (EHR), Healthcare Analytics, Healthcare Technology, Machine Learning

Cerner’s Strategy to Deploy “Intelligence” into the Cerner Ecosystem – Insights from the 2018 Cerner Strategic Client Summit

I feel privileged to have been invited for the second year in a row to Cerner’s Strategic Client Summit.  The meeting location – downtown San Diego – and the Summit content were both fantastic.  I will attempt to summarize a few key concepts, and why I feel optimistic about where Cerner is heading – both from an overall perspective as well as from an improved product and end-user experience perspective.

I was very impressed with the collaboration between Cerner President Zane Burke and Cerner’s new Chairman and CEO Brent Shafer.  I had several conversations with Zane, whom I have known for a while now, and with Brent, whom I met for the first time.  I found Brent to be very personable and very thoughtful about his vision for Cerner.  The collaborative relationship between the two left me confident that they will lead Cerner in the right direction.

I am not trying to downplay the roles that many people play inside Cerner – there are a lot of great things going on – but I think there are two strategic people Cerner needs to pay attention to in order to move their EHR to the next level.

The first is Paul Weaver, Cerner’s Vice President of User Experience.  I first met Paul at the 2017 Strategic Client Summit in Dallas, Texas, and was very impressed by his vision and enthusiasm.  He hails from the gaming industry and brings his expertise to the world of healthcare software, where it is much needed.  Here is a link to a 13-minute podcast where Paul talks about the importance of the user experience.  At the 2017 Summit, he used the words “user delight” to describe how interaction with the EHR should make end users feel.  I don’t know about you, but I have used many words to describe how the EHR made me feel, and none of them were delight!  His message: every interaction with a software program elicits some type of emotional reaction.  He wants those reactions to be positive, decreasing stress and making both patients and healthcare providers/workers “happier and healthier”.  This is a laudable goal, and it will help in the fight against physician (and other healthcare worker) burnout and suicide, since negative experiences with the EHR are almost always identified as one of the top contributing factors to burnout.  This is an EXTREMELY important area for Cerner to get right, and they need to support Paul Weaver in his efforts to accomplish his goals.

The second strategic person is David Cohen, Vice President for Intellectual Property Development.  David’s presentation, “Activating Intelligence to Transform Care”, was visionary, and he articulated the concepts of machine learning and artificial intelligence, and how to utilize them in healthcare, better than anyone else I have heard or read to date.  I will provide a high-level overview of his vision below.  At this time it appears they have branded these efforts as “Cerner Intelligence – Leveraging the Power of Data”.

Cerner sees the new demands on health care as being proactive health management (vs reactive sick care); cross-continuum care system (vs fragmented niche care); rewards for quality, safety and efficiency (vs rewards for volume); and person and care team-centric (vs clinician-centric).

Value “drivers” were presented – specific areas where Cerner intends to deploy their Intelligence to make meaningful improvements.  These included clinical and quality drivers, operational drivers, financial drivers, and drivers around improving the experience.  I feel these are appropriate areas in which to start deploying this Intelligence.  I can post more when this information becomes publicly available, because there are some key areas that, if realized, will bring great value to organizations.

Where David’s presentation got really interesting was when he presented Cerner’s areas of focus: machine learning, AI experiences, and knowledge management.  I am going to provide his definitions of each, because I think they are defined very nicely.

  • Machine Learning:  Leveraging the power of data and statistical methods to create new insights and workflow optimizations
  • AI experiences:  Leverage Artificial Intelligence capabilities that mimic human behaviors such as voice, vision, language, and conversation to enhance human abilities
  • Knowledge management:  Ensure data is complete, contextual, and accurately represented using standards-based medical vocabularies

David then started talking about “AI Experiences” (see diagram from Cerner below – reprinted with permission).  I am convinced that Cerner understands where they should be going with regard to incorporating AI into healthcare on a very practical basis.  This starts with the inputs into the AI systems, the transformations of those inputs by the system, the incorporation into the knowledge management systems, and most importantly, the AI applications that will make the EHR a true virtual partner in the healthcare process – for providers, patients, and healthcare workers.  What was shown was more than a concept; the demos they put on showed that they are making real progress.  A mouse-less and keyboard-less interaction with the EHR may become a reality sooner rather than later.  I encouraged Cerner executives to support these initiatives deeply and at the highest levels.

Cerner AI

 

Overall, I am very excited and optimistic about Cerner’s vision, and for the prospect of them delivering meaningful improvements and solutions – both near-term and long-term.  Their focus on improving the user experience and making it “delightful” is a very important initiative.   Their focus on using data to improve – almost everything – is foundational for moving all of us forward.  My plea to Cerner is to continue to very deeply support these initiatives, and the talented people they have focused on these.

Data Science, Deep Learning, Machine Learning, Neural Networks

Neural Networks, Deep Learning, Machine Learning resources

I have come across a few great resources that I wanted to share.  For students taking a machine learning class (like Northwestern University’s MSDS 422 Practical Machine Learning), these are excellent references and a way to learn the material before, during, or after the class.  This is not a comprehensive list, just a starter.

Textbook

There is a free online textbook, Neural Networks and Deep Learning.

Videos

There is a great math visualization site called 3Blue1Brown, which has a YouTube channel.  Its four videos on neural networks/deep learning are really informative and a good introduction.

  1.  But what *is* a Neural Network? Chapter 1, deep learning
  2.  Gradient Descent, how neural networks learn. Chapter 2, deep learning
  3.  What is backpropagation really doing? Chapter 3, deep learning
  4.  Backpropagation calculus. Appendix to deep learning chapter 3.

There is also a great playlist, Essence of linear algebra, which is a wonderful review and explanation of linear algebra and matrix operations.  I wish I had seen this when I was first learning the subject.
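The core idea from those videos – a matrix as a linear transformation, and matrix multiplication as composition of transformations – is easy to play with in NumPy.  Here is a minimal sketch (my own illustrative example, not from the videos):

```python
import numpy as np

# A linear transformation represented as a matrix acting on a vector.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])   # scales x by 2 and y by 3
v = np.array([1.0, 1.0])

w = A @ v                    # matrix-vector product
print(w)                     # [2. 3.]

# Composing transformations = multiplying matrices:
# rotate the already-scaled vector by 90 degrees.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # 90-degree counterclockwise rotation
print(R @ A @ v)             # [-3.  2.]
```

Seeing `R @ A @ v` evaluated right-to-left makes the “composition of transformations” framing from the playlist concrete.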

Scikit-Learn Tutorials

There are tutorials on the Scikit-Learn site.

TensorFlow tutorials

The TensorFlow site has a Tutorials page, with tutorials for Images, Sequences, Data Representation, and a few other topics.  It also links to Google’s “Machine Learning Crash Course” – Google’s fast-paced, practical introduction to machine learning.

 

Google AI

Google has its own education site (which also hosts the Machine Learning Crash Course referenced above).

 

Blog sites

Adventures in Machine Learning, Andy Thomas’s blog.

This is a must-view site, and worth visiting several times over.  Andy does a great job explaining the topics and has some great visuals as well.  These are fantastic tutorials; I have listed only a few below.

Neural Networks Tutorial – A Pathway to Deep Learning

Python TensorFlow Tutorial – Build a Neural Network

Convolutional Neural Networks Tutorial in TensorFlow

Word2Vec word embedding tutorial in Python and TensorFlow

Recurrent neural networks and LSTM tutorial in Python and TensorFlow

 

colah’s blog – Christopher Olah’s blog

Another great blog, with lots of good postings.  A few are listed below.

Deep Learning, NLP, and Representations

Neural Networks, Types and Functional Programming

 

Courses

DataCamp – one of my favorite learning sites.  It does require a subscription.

DataCamp currently has 9 Python machine learning courses, which are listed below.  They also have 9 R machine learning courses.

Machine Learning with the Experts: School Budgets

Deep Learning in Python

Building Chatbots in Python

Natural Language Processing Fundamentals in Python

Unsupervised Learning in Python

Linear Classifiers in Python

Extreme Gradient Boosting with XGBoost

HR Analytics in Python: Predicting Employee Churn

Supervised Learning with Scikit-Learn

 

Udemy courses

Udemy is also a favorite learning site.  You can generally get the course for about $10.

My favorite Udemy learning series is from Lazy Programmer Inc.  They have a variety of courses, and their blog site explains what order to take them in.  There are many other courses from different instructors as well.

Deep Learning Prerequisites: The Numpy stack in Python

Deep Learning Prerequisites: Linear Regression in Python

Deep Learning Prerequisites: Logistic Regression in Python

Data Science: Deep Learning in Python

Modern Deep Learning in Python

Convolutional Neural Networks in Python

Recurrent Neural Networks in Python

Deep Learning with Natural Language Processing in Python

Advanced AI: Deep Reinforcement Learning in Python

Plus many other courses on Supervised and Unsupervised Learning, Bayesian ML, Ensemble ML, Cluster Analysis, and a few others.

 

If you have other favorite machine learning resources, please let me know.

 

 

Machine Learning, Northwestern University MSDS Program, Northwestern University MSPA

Northwestern University MSDS (formerly MSPA) 422 – Practical Machine Learning Course Review

This course was taught by Dr. Thomas Miller, who is the faculty director of the Data Science program (formerly the Predictive Analytics program; I will post a separate article discussing the name change from the Master of Science in Predictive Analytics (MSPA) to the Master of Science in Data Science (MSDS)).  Overall, this was an excellent review of machine learning, and it is a required core course for all students in the program.  It is most definitely a foundational course for any student of data science in today’s world.  It is also a foundational course for the Artificial Intelligence and Deep Learning specialization, which is currently being developed (more on this in a subsequent post as well).  The course covers the following topics:

  • Supervised, Unsupervised, and Semi-supervised learning
  • Regression versus Classification
  • Decision Trees and Random Forests
  • Dimensionality Reduction techniques
  • Clustering Techniques
  • Feature Engineering
  • Artificial Neural Networks
  • Deep Neural Networks
  • Convolutional Neural Networks (CNN)
  • Recurrent Neural Networks (RNN)

This course uses Python and the Python libraries Scikit-Learn and TensorFlow.  In addition to running my code in Jupyter Notebooks, I also learned how to run TensorFlow from the command line, which is a faster way of running neural networks through a large number of epochs.  The course is currently offered in R as well, but the R section will be discontinued, with only the Python/TensorFlow course offered starting in the fall semester.  Dr. Miller commented that they will be using Python much more extensively going forward, especially in the AI/Deep Learning specialization courses.  R apparently will still be used in the Analytics/Modeling courses – 410 (Regression Analysis) and 411 (Generalized Linear Models).  I learned to use Python/Scikit-Learn/TensorFlow at an intermediate level, and I feel I have a great programming foundation to build upon.
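To give a flavor of the Scikit-Learn side of the course, here is a minimal sketch of the basic workflow the assignments revolve around – load data, split it, fit a model, evaluate on held-out data.  This is my own illustrative example, not an actual course assignment:

```python
# Minimal supervised-learning workflow with scikit-learn:
# load data, hold out a test set, fit, and score.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)                     # train on the training split

acc = accuracy_score(y_test, clf.predict(X_test))
print(f"Test accuracy: {acc:.2f}")            # evaluate on unseen data
```

The same fit/predict/score pattern carries through essentially every model covered in the course, from logistic regression to random forests.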

Course Structure

There is required reading every week, mainly from the two required textbooks, although there are a few articles to read as well.  There were a total of five sync sessions, which reviewed various topics.  I wish the sync sessions had been a little more robust and had covered the current assignments and the coding required to complete them – I found that very helpful in previous courses.  There were weekly discussion board assignments covering basic concepts; these turned out to be very informative, especially since many of the topics on the final exam were covered in the discussions.  There are weekly coding assignments which must be completed, in which you either develop the code yourself or build upon a skeletal code base provided.  These ranged from very easy to very difficult, especially as you moved into the artificial neural networks.  There was a non-proctored final exam and a proctored final exam.

Textbooks

Primary Textbooks:

Géron, A. 2017. Hands-On Machine Learning with Scikit-Learn & TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. Sebastopol, Calif.: O’Reilly. [ISBN-13 978-1-491-96229-9] Source code available at https://github.com/ageron/handson-ml  This was the primary textbook for most of the course.  It is an excellent text with lots of great coding examples.

Müller, A. C. and Guido, S. 2017. Introduction to Machine Learning with Python: A Guide for Data Scientists. Sebastopol, Calif.: O’Reilly. [ISBN-13: 978-1449369415] Code examples at https://github.com/amueller/introduction_to_ml_with_python

Reference Textbook:

Izenman, A. J. 2008. Modern Multivariate Statistical Techniques: Regression, Classification, and Manifold Learning. New York: Springer. [ISBN-13: 978-0-387-78188-4] This was used very little.

Learning Outcomes (from syllabus):

Practical Machine Learning is a survey course with a long list of learning outcomes:

  • Explain the learning algorithm trade-offs, balancing performance within training data and robustness on unobserved test data.
  • Distinguish between supervised and unsupervised learning methods.
  • Distinguish between regression and classification problems
  • Explain bootstrap and cross-validation procedures
  • Explore and visualize data and perform basic statistical analysis
  • List alternative methods for evaluating classifiers.
  • List alternative methods for evaluating regression
  • Demonstrate the application of traditional statistical methods for classification and regression
  • Demonstrate the application of trees and random forests for classification and regression
  • Demonstrate principal components for dimension reduction.
  • Demonstrate principal components regression
  • Describe hierarchical and non-hierarchical clustering techniques
  • Describe how semi-supervised learning may be utilized in addressing classification and regression problems
  • Explain how measurement and feature engineering are relevant to modeling
  • Describe how artificial neural networks are constructed from logical connections of artificial neurons and activation functions
  • Demonstrate the use of artificial neural networks (including deep neural networks) in classification and regression
  • Describe how convolutional neural networks are constructed
  • Describe how recurrent neural networks are constructed
  • Distinguish between autoencoders and other forms of unsupervised learning
  • Describe applications of autoencoders
  • Explain how the results of machine learning can be useful to business managers
  • Transform data and research results into actionable insights

 

Weekly Assignments

Here are the weekly learning titles and assignments:

Week 1.  Introduction to Machine Learning

  • Assignment 1. Exploring and Visualizing Data

Week 2.  Supervised Learning for Classification

  • Assignment 2. Evaluating Classification Models

Week 3.  Supervised Learning for Regression

  • Assignment 3. Evaluating Regression Models

Week 4. Trees and Random Forests

  • Assignment 4. Random Forests

Week 5.  Unsupervised Learning

  • Assignment 5. Principal Components Analysis

Week 6. Neural Networks

  • Assignment 6. Neural Networks

Week 7.  Deep Learning for Computer Vision

  • Assignment 7. Deep Learning

Week 8.  Deep Learning for Natural Language Processing

  • Assignment 8. Natural Language Processing

Week 9.  Neural Networks Autoencoders

  • No assignment
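As a taste of Assignment 5’s topic, here is a minimal principal components analysis sketch with scikit-learn – illustrative only, not the actual assignment:

```python
# PCA for dimensionality reduction: project 64-dimensional digit
# images down to 10 components and see how much variance survives.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)      # 1797 samples, 64 features
pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                   # (1797, 10)
print(f"Variance explained: {pca.explained_variance_ratio_.sum():.2f}")
```

The `explained_variance_ratio_` attribute is what tells you whether 10 components are enough, or whether you have thrown away too much of the signal.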

 

Final Examinations

There were two final examinations, one non-proctored and the other proctored.  The non-proctored exam was open book, and it tested your ability to look at data and the various analytical techniques and to interpret the results of the analyses.  The proctored final exam was closed book and covered general concepts.

Final Thoughts

This was a great overview of some of the more important topics in machine learning.  I gained a good theoretical background in these topics and learned the coding necessary to apply them.  This is a great foundation upon which to build more advanced and in-depth use of these techniques.  The course really challenged me to rethink which analytical techniques I should be learning and applying in the future – to the point that I am going to change my specialization to Artificial Intelligence and Deep Learning.

 

Data Science, Machine Learning

The world of machine learning algorithms – a summary infographic.

This is a very nice infographic that shows the basic categories of machine learning algorithms.  It is also somewhat informative to follow the path by which the infographic got posted on Twitter, where I saw it.  The posting was somewhat misleading (although not intentionally, I believe) about who actually created the infographic.  To me this highlights the importance of crediting our information sources correctly.  This topic was also broached in the FiveThirtyEight article “Who Will Debunk The Debunkers?” by Daniel Engber.  The article discusses many myths, one of them being how spinach came to be credited with an unusually high iron content.  It mentions that an unscholarly and unsourced article became “the ultimate authority for all the citations that followed”.  I have run across this as well, when I was trying to find the source of a quotation defining a “Learning Health System”.  The definition was cited by at least twenty scholarly articles, but there was no reference for the citation – only circular references to the other articles that used the same definition.  This highlights the importance of correctly citing the source of information, so it can be critically analyzed by others interested in using it.

I noticed the infographic after it had been tweeted by Evan Sinar (@EvanSinar).  The tweet cited an article in @DataScienceCentral.  That article, “12 Algorithms Every Data Scientist Should Know” by Emmanuelle Rieuf, mentions an article posted by Mark van Rijmenam with the same title, “12 Algorithms Every Data Scientist Should Know”, and then shows the infographic, giving the impression that this was its source.  That article in turn mentions that the “guys from Think Big Data developed the infographic” and provides a link, which leads to the article “Which are the best known machine learning algorithms? Infographic” by Anubhav Srivastava.  It “mentioned over a dozen algorithms, segregated by their application intent, that should be in the repertoire of every data scientist”.  The bottom line: try to be careful with your source citations so it is not hard for people to trace a source backwards in time.  I was able to do it in this case – it just took a little while – but there are many times when it is impossible.

Now, for the infographic.

Infographic: 12 Algorithms Every Data Scientist Should Know