Machine Learning Basics Part 3: Basic model training using Linear Regression and Gradient Descent

If you missed part one in the series, you can start here (Machine Learning Basics Part 1: An Overview).

Linear Regression is a straightforward way to find the linear relationship between one or more variables and a predicted target using a supervised learning algorithm. In simple linear regression, the model predicts the relationship between two variables. In multiple linear regression, additional variables that influence the relationship can be included. Output for both types of linear regression is a value within a continuous range.

Simple Linear Regression: Linear Regression works by finding the best fit line to a set of data points.

For example, a plot of the linear relationship between study time and test scores allows the prediction of a test score given the number of hours studied.


To calculate this linear relationship, use the following equation:

ŷ = θ0 + θ1x
In this example, ŷ is the predicted value, x is a given data point, θ1 is the feature weight, and θ0 is the intercept point, also known as the bias term. The best fit line is determined by using gradient descent to minimize the cost function. This is a complex way of saying that the best line is the one whose predictions are closest to the actual values.
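To make the hypothesis concrete, here is a minimal Python sketch; the θ values below are purely illustrative, not fit to any real data:

    # A minimal sketch of the simple linear regression hypothesis: y-hat = theta0 + theta1 * x.
    def predict(x, theta0, theta1):
        """Return the predicted value y-hat for a single data point x."""
        return theta0 + theta1 * x

    # Hypothetical example: predict a test score from hours studied.
    theta0, theta1 = 50.0, 5.0         # assumed bias term and feature weight
    print(predict(4, theta0, theta1))  # 50 + 5 * 4 = 70.0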

In linear regression, the cost function is calculated using mean squared error (MSE):

MSE(𝛉) = (1/m) Σᵢ (𝛉ᵀx(i) − y(i))², summed over the m instances i = 1, …, m

Mean Squared Error for Linear Regression¹

In the equation above, the letter m represents the number of data points, 𝛉ᵀ is the transpose of the model parameters theta, x(i) is the feature vector of the ith data point, and y(i) is its actual value. Essentially, the line is evaluated by the distance between the predicted values and the actual values. Any difference between a predicted value and an actual value is an error. Minimizing mean squared error increases the accuracy of the model by selecting the line where the predictions and actual values are closest together.
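As a rough illustration, MSE can be computed directly with NumPy; the data below is a toy example in which the line fits perfectly:

    import numpy as np

    def mse(theta, X, y):
        """Average squared distance between predictions (theta^T x) and actual values y."""
        m = len(y)               # number of data points
        predictions = X @ theta  # theta^T x for every instance
        return np.sum((predictions - y) ** 2) / m

    # Toy data: X includes a leading column of 1s for the bias term theta_0.
    X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
    y = np.array([2.0, 4.0, 6.0])
    print(mse(np.array([0.0, 2.0]), X, y))  # 0.0 -- a perfect fit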

Gradient descent is the method of iteratively adjusting the parameter theta (𝛉) to find the lowest possible MSE. A random parameter is used initially and each iteration of the algorithm takes a small step—the size of which is determined by the learning rate—to gradually change the value of the parameter until the MSE has reached the minimum value. Once this minimum is reached, the algorithm is said to have converged.
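A minimal batch gradient descent sketch, assuming features are arranged as above with a leading column of 1s; the learning rate and iteration count are illustrative defaults:

    import numpy as np

    def batch_gradient_descent(X, y, learning_rate=0.1, n_iterations=1000):
        """Iteratively adjust theta to minimize MSE."""
        m = len(y)
        theta = np.random.randn(X.shape[1])              # start from random parameters
        for _ in range(n_iterations):
            gradients = (2 / m) * X.T @ (X @ theta - y)  # gradient of MSE w.r.t. theta
            theta -= learning_rate * gradients           # take a small step downhill
        return theta

    X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # leading 1s for the bias term
    y = np.array([3.0, 5.0, 7.0])                       # generated from y = 1 + 2x
    print(batch_gradient_descent(X, y))                 # converges toward [1.0, 2.0]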

Be aware that choosing a learning rate that is smaller than ideal will result in an algorithm that converges extremely slowly, because the steps it takes with each iteration are too small. Choosing a learning rate that is too large can result in a model that never converges, because each step can overshoot the minimum.

Learning Rate set too small¹

Learning Rate set too large¹

Multiple Linear Regression: Multiple linear regression, or multivariate linear regression, works similarly to simple linear regression but adds additional features. If we revisit the previous example of hours studied to predict test scores, a multiple linear regression example could be using hours studied and hours of sleep the night before the exam to predict test scores. This model allows us to use multiple features of a single data point to make a prediction about that data point. This can be represented visually as finding the plane that best fits the data. In the example below, we can see the relationship between horsepower, weight, and miles per gallon.

Multiple Linear Regression³
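As a rough sketch of fitting a multiple linear regression in Scikit-Learn, using made-up study-and-sleep data (all values below are hypothetical):

    from sklearn.linear_model import LinearRegression

    # Hypothetical data: [hours_studied, hours_slept] -> test score.
    X = [[2, 8], [4, 6], [6, 7], [8, 8], [1, 4]]
    y = [65, 72, 84, 95, 50]

    model = LinearRegression().fit(X, y)
    print(model.intercept_, model.coef_)  # bias term and one weight per feature
    print(model.predict([[5, 7]]))        # predicted score for a new student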

Thanks for reading our machine learning series, and keep an eye out for our next blog!

References:

  1. Géron, Aurélien (2017). Hands-On Machine Learning with Scikit-Learn & TensorFlow. Sebastopol, CA: O’Reilly.
  2. https://www.mathworks.com/help/stats/regress.html
  3. https://xkcd.com/1725/
Machine Learning Basics Part 2: Regression and Classification

If you missed part one in the series, you can start here (Machine Learning Basics Part 1: An Overview).

Regression:

Common real-world problems that are addressed with regression models are predicting housing values, financial forecasting, and predicting travel commute times. Regression models can have a single input feature, referred to as univariate, or multiple input features, referred to as multivariate. When evaluating a regression model, performance is determined by calculating the mean squared error (MSE) cost function. MSE is the average of the squared errors of each data point from the hypothesis, or simply how far each prediction was from the desired outcome. A model that has a high MSE cost function fits the training data poorly and should be revised.
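For example, Scikit-Learn provides a built-in MSE metric; the values below are hypothetical:

    from sklearn.metrics import mean_squared_error

    y_actual = [3.0, 5.0, 7.0, 9.0]     # hypothetical true values
    y_predicted = [2.8, 5.1, 7.4, 8.7]  # hypothetical model predictions
    print(mean_squared_error(y_actual, y_predicted))  # average of the squared errors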

A visual representation of MSE:

In the image above,¹ the actual data point values are represented by red dots. The hypothesis, which is used to make any predictions on future data, is represented by the blue line. The difference between the two is indicated by the green lines. These green lines are used to compute MSE and evaluate the strength of the model’s predictions.

Regression Problem Examples:

  • Given BMI, current weight, activity level, gender, and calorie intake, predict future weight.
  • Given calorie intake, fitness level, and family history, predict percent probability of heart disease.

Commonly Used Regression Models:

Linear Regression: This is a model that represents the relationship between one or more input variables and a linear scalar response. Scalar refers to a single real number.

Ridge Regression: This is a linear regression model that incorporates a regularization term to prevent overfitting. If the regularization term (𝝰) is set to 0, ridge regression acts as simple linear regression. Note that data must be scaled before performing ridge regression.

Lasso Regression: Lasso is an abbreviation for least absolute shrinkage and selection operator regression. Similar to ridge regression, lasso regression includes a regularization term. One benefit of using lasso regression is that it tends to set the weights of the least important features to zero, effectively performing feature selection.² You can implement lasso regression in Scikit-Learn using the built-in model library.

Elastic Net: This model uses a regularization term that is a mix of both the ridge and lasso regularization terms. Setting r=0 makes the model behave like ridge regression, and setting r=1 makes it behave like lasso regression. This additional flexibility in customizing regularization can provide the benefits of both models.² Implement elastic net in Scikit-Learn using the built-in model library. Select an alpha value to control regularization and an l1_ratio to set the mix ratio r.
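A minimal sketch of all three regularized models in Scikit-Learn, with illustrative alpha and l1_ratio values and toy data; note the scaling step before ridge regression:

    from sklearn.linear_model import Ridge, Lasso, ElasticNet
    from sklearn.preprocessing import StandardScaler

    X = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]  # toy feature matrix
    y = [5.0, 6.0, 11.0, 12.0]

    X_scaled = StandardScaler().fit_transform(X)  # scale data before ridge regression

    ridge = Ridge(alpha=1.0).fit(X_scaled, y)
    lasso = Lasso(alpha=0.1).fit(X_scaled, y)
    elastic = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X_scaled, y)  # l1_ratio sets the mix ratio r
    print(ridge.coef_, lasso.coef_, elastic.coef_)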

Classification:

Classification problems predict a class. They can also return a probability value, which is then used to determine the class most likely to be correct. For classification problems, model performance is determined by calculating accuracy.

model accuracy = (correct predictions / total predictions) × 100

Classification Problem Examples: Classification has its benefits for predictions in the healthcare industry. For example, given a dataset with features including glucose levels, pregnancies, blood pressure, skin thickness, insulin, and BMI, predictions can be made on the likelihood of the onset of diabetes. Because this prediction should be a 0 or 1, it is considered a binary classification problem.
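The accuracy calculation itself is simple; here is a quick sketch using Scikit-Learn with hypothetical binary labels:

    from sklearn.metrics import accuracy_score

    y_actual = [1, 0, 1, 1, 0, 1]     # hypothetical true diabetes-onset labels
    y_predicted = [1, 0, 0, 1, 0, 1]  # hypothetical model predictions
    print(accuracy_score(y_actual, y_predicted) * 100)  # 5 of 6 correct = ~83.3%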

Commonly Used Classification Models:

Logistic Regression: This is a model that uses a regression algorithm but is most often used for classification problems, since its output can be used to determine the probability of belonging to a certain class.² Logistic regression uses the sigmoid function to output a value between 0 and 1. If the estimated probability that an instance belongs to the positive class (represented by a 1) is at least 0.5, the model predicts 1; otherwise, it predicts 0.
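A minimal sketch of the sigmoid function and the 0.5 decision threshold:

    import numpy as np

    def sigmoid(t):
        """Squash any real number into the (0, 1) range."""
        return 1 / (1 + np.exp(-t))

    def predict_class(probability, threshold=0.5):
        """Predict 1 (positive class) when probability >= threshold, else 0."""
        return 1 if probability >= threshold else 0

    print(sigmoid(0.0))                # 0.5 -- exactly on the decision boundary
    print(predict_class(sigmoid(2)))   # 1
    print(predict_class(sigmoid(-2)))  # 0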

Softmax Regression: This is a logistic regression model that can support multiple classes. Softmax predicts the class with the highest estimated probability. It can only be used when classes are mutually exclusive.²
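For illustration, here is the softmax function itself, which turns raw class scores into probabilities that sum to 1:

    import numpy as np

    def softmax(scores):
        """Convert raw class scores into probabilities that sum to 1."""
        exp_scores = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
        return exp_scores / exp_scores.sum()

    probabilities = softmax(np.array([2.0, 1.0, 0.1]))
    print(probabilities, probabilities.argmax())  # the class with the highest probability wins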

Naive Bayes: This is a classification system that assumes the value of a feature is independent of the value of any other feature, ignoring any possible correlations between features when making predictions. The model then predicts the class with the highest probability.⁴

Support Vector Machines (SVM): This is a classification system that identifies a decision border, or hyperplane, with the widest possible margin between class types, and predicts class based on the side of the border that any point falls on. This system does not use probability to assign a class label. SVM models can be fine-tuned by adjusting kernel, regularization, gamma, and margin. We will explore these hyperparameters further in an upcoming blog post focused solely on SVM. Note that SVM can also be used to perform regression tasks.

Decision Trees and Random Forests: A decision tree is a model that separates data into branches by asking a binary question at each fork. For example, in a fruit classification problem one tree fork could ask if a fruit is red. Each fruit instance would either go to one branch for yes or the other for no. At the end of each branch is a leaf with all of the training instances that followed the same decision path. The common problem of overfitting can often be avoided by combining multiple trees into a random forest and aggregating their predictions, typically by majority vote.
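A rough sketch of a random forest in Scikit-Learn on the classic iris dataset (the hyperparameter values are illustrative):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # 100 trees; aggregating their votes helps avoid the overfitting
    # a single deep decision tree is prone to.
    forest = RandomForestClassifier(n_estimators=100, random_state=42)
    forest.fit(X_train, y_train)
    print(forest.score(X_test, y_test))  # accuracy on held-out data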

Neural Networks (NN): This is a model composed of layers of connected nodes. The model takes information in via an input layer and passes it through one or more hidden layers composed of nodes. These nodes are activated by their input, make some determination, and generate output for the next layer of nodes. Nodes are connected by edges, each of which has a weight that can be adjusted to influence learning. A bias term can also be added to create a threshold theta (𝛉), which is customizable and determines whether the node’s output will continue to the next layer of nodes. The final layer is the output layer, which generates class probabilities and makes a final prediction. When a NN has two or more hidden layers, it’s called a deep neural network. There are multiple types of neural networks, and we will explore them in more detail in later blog posts.

K-nearest Neighbor: This model evaluates a new data point by its proximity to training data points and assigns a class based on the majority class of its closest neighbors, as determined by feature similarity. k is an integer set when the model is built and determines how many neighbors the model should consider; the neighborhood boundary expands just far enough to include the k closest training points.
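A minimal k-nearest neighbors sketch in Scikit-Learn; k=5 is an illustrative choice:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # k determines how many of the closest training points vote on the class.
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_train, y_train)
    print(knn.score(X_test, y_test))  # fraction of test points classified correctly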

References:

  1. https://en.wikipedia.org/wiki/Linear_regression
  2. Géron, Aurélien (2017). Hands-On Machine Learning with Scikit-Learn & TensorFlow. Sebastopol, CA: O’Reilly.
  3. https://en.wikipedia.org/wiki/Sigmoid_function
  4. https://en.wikipedia.org/wiki/Naive_Bayes_classifier
Machine Learning Basics Part 1: An Overview

This is the first in a series of Machine Learning posts meant to act as a gentle introduction to Machine Learning techniques and approaches for those new to the subject. The material draws heavily from Hands-On Machine Learning with Scikit-Learn & TensorFlow by Aurélien Géron and from the Coursera Machine Learning class by Andrew Ng. Both are excellent resources and are highly recommended.

Machine Learning is often defined as “the field of study that gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959).

More practically, it is a program that employs a learning algorithm or neural net architecture that, once trained on an initial data set, can make predictions on new data.

Common Learning Algorithms:¹

Linear and polynomial regression

Logistic regression

K-nearest neighbors

Support vector machines

Decision trees

Random forests

Ensemble methods

While the above learning algorithms can be extremely effective, more complex problems, like image classification and natural language processing (NLP), often require a deep neural net approach.

Common Neural Net (NN) Architectures:¹

Feed forward NN

Convolutional NN (CNN)

Recurrent NN (RNN)

Long short-term memory (LSTM)

Autoencoders

We will go into further detail on the above learning algorithms and neural nets in later blog posts.

Some Basic Terminology:

Features – These are attributes of the data. For example, a common dataset used to introduce Machine Learning techniques is the Pima Indians Diabetes dataset, which is used to predict the onset of diabetes given additional health indicators. For this dataset, the features are pregnancies, glucose, blood pressure, skin thickness, insulin, BMI, etc.

Labels – These are the desired model predictions. In supervised training, this value is provided to the model during training so that it can learn to associate specific features with a label and increase prediction accuracy. In the Pima Indians Diabetes example, this would be a 1 (indicating diabetes onset is likely) or a 0 (indicating low likelihood of diabetes).
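As a rough sketch of separating features from labels, assuming a local CSV copy of the Pima Indians Diabetes dataset (the file and column names here are illustrative):

    import pandas as pd

    # Hypothetical file name; the dataset is widely available as a CSV.
    data = pd.read_csv("pima_diabetes.csv")

    X = data.drop(columns=["Outcome"])  # features: pregnancies, glucose, blood pressure, ...
    y = data["Outcome"]                 # labels: 1 = diabetes onset likely, 0 = unlikely
    print(X.shape, y.value_counts())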

Supervised Learning – This is a learning task in which the training set used to build the model includes labels. Regression and classification are both supervised tasks.

Unsupervised Learning – This is a learning task in which training data is not labeled. Clustering, visualization, dimensionality reduction, and association rule learning are all unsupervised tasks.

Some Supervised Learning Algorithms:¹

K-nearest neighbors

Linear regression

Logistic regression

Support vector machines (SVMs)

Decision trees and random forests

Neural networks

Unsupervised Learning Algorithms:¹

Clustering

• K-means

• Hierarchical cluster analysis (HCA)

• Expectation maximization

Visualization and Dimensionality Reduction

• Principal component analysis (PCA)

• Kernel PCA

• Locally-linear embedding (LLE)

• t-distributed Stochastic Neighbor Embedding (t-SNE)

Association Rule Learning

• Apriori

• Eclat

Dimensionality Reduction: This is the act of simplifying data without losing important information. An example of this is feature extraction, where correlated features are merged into a single feature that conveys the importance of both. For example, if you are predicting housing prices, you may be able to combine square footage with number of bedrooms to create a single feature representing living space.
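As a quick illustration of dimensionality reduction, here is PCA in Scikit-Learn compressing the four iris features down to two components:

    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    X, y = load_iris(return_X_y=True)

    # Merge four correlated features into two components that
    # retain most of the variance in the data.
    pca = PCA(n_components=2)
    X_reduced = pca.fit_transform(X)
    print(X_reduced.shape, pca.explained_variance_ratio_)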

Batch Learning: This is a system that is incapable of learning incrementally and must be trained using all available data at once.¹ To learn new data, it must be retrained from scratch.

Online Learning: This is a system that is trained incrementally by feeding it data instances sequentially. This system can learn new data as it arrives.
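A minimal sketch of online learning with Scikit-Learn’s SGDRegressor, feeding synthetic data in sequentially with partial_fit:

    import numpy as np
    from sklearn.linear_model import SGDRegressor

    model = SGDRegressor()
    rng = np.random.default_rng(42)

    # Feed data instances sequentially; each call updates the model
    # in place instead of retraining from scratch.
    for _ in range(100):
        X_batch = rng.random((10, 3))
        y_batch = X_batch @ np.array([1.0, 2.0, 3.0])  # synthetic target
        model.partial_fit(X_batch, y_batch)
    print(model.coef_)  # approaches [1, 2, 3]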

Underfitting: This is what happens when you create a model that generalizes too broadly. It does not perform well on the training set or the test set.

Overfitting: This is what occurs when you create a model that performs well on the training set but has become too specialized and no longer performs well on new data.

Common Notations:

m: The total number of instances in the dataset

X: A matrix containing all of the feature values of every instance of the dataset

x(i): A vector containing all of the feature values of a single instance of the dataset, the ith instance.

y: A vector containing the labels of the dataset. This is the value the model should predict.

References:

  1. Géron, Aurélien (2017). Hands-On Machine Learning with Scikit-Learn & TensorFlow. Sebastopol, CA: O’Reilly.

Olive at Grace Hopper Celebration 2018

I was fortunate that Olive sent me to attend the Grace Hopper Celebration (#GHC18) in Houston, TX, this year. It was an amazing and inspiring three-day event with 20,000 female technologists from across the globe. I was able to attend workshops and presentations on Artificial Intelligence (AI) and Machine Learning (ML), data and privacy, and career development presented by thought leaders from Google, Amazon, DeepMind, and other companies.

Best of #GHC18? The best parts of my GHC experience were:

Learning:

A. Building a serverless scheduler web app in Amazon Web Services (AWS) by creating buckets on S3, setting up static website hosting, creating a DynamoDB table to store schedule information, creating and testing three AWS Lambda functions used to add, get, and update calendar sessions, and creating and deploying an Application Program Interface (API) to trigger the new functions.

B. Hearing a panel discussion on the future of Artificial Intelligence (AI) and General Intelligence (GI), where we discussed the likelihood of reaching GI in our lifetimes, what our ethical responsibilities are as individuals and corporations as we develop new AI technologies, and how to address reward hacking in AI.

C. Learning more about privacy and security in the IoT space, the lack of knowledge on how massive amounts of collected personal information are being shared, the lack of consumer control over personal data, and examples of data being used in unexpected ways.

Meeting New People: There are people from all around the world, from the heaviest-hitting mega-companies to small startups with bold, new ideas. It’s a great opportunity to find out what people across other industries are working on, the challenges they’ve faced, and the technologies and strategies they employ.

Inspiration: Attending #GHC18 and hearing the success stories of women further ahead in their careers, learning about their businesses, and hearing how they’re changing the world left me with a huge boost of inspiration to bring back and share with my colleagues.

I’m lucky to be a part of a company that values diversity and encourages growth.
See you at #GHC19!

20,000 women attend the opening keynote address at #GHC18

Robotic Process Automation Vs Machine Learning: What’s the Difference?

The rapid advancements in automation are revolutionizing business operations for organizations in practically every industry. As automation technology continues to evolve and uncover new opportunities to showcase its effectiveness, healthcare is one of the industries rapidly discovering the benefits of its methodologies. While hospitals are projected to invest over $50 billion in artificial intelligence and robotic process automation solutions by 2020, some in the industry are only beginning to look into the potential of these solutions and their game-changing advantages. After spending time with over 300 revenue cycle and IT executives at Becker’s 4th Annual Health & IT Revenue Cycle conference, our teams at Olive were able to garner some details behind executives’ findings and concerns. Our top five takeaways include a sense of hesitation about the ability to prove ROI, but also broad agreement that AI will prove its worth most in repetitive, high-volume tasks like eligibility checks, authorizations, and claims.

Robotic process automation and machine learning are often the two technologies discussed the most when broaching this topic, but what is the difference between the two? Further, which of the two works best for a given use case? We’ll discuss the details of both methods and help you answer both of those questions in this piece.

What is robotic process automation?

It’s quite common for robotic process automation (RPA) to be thought of as actual robotic devices performing operations on an assembly line, or as robot constructs like The Iron Giant and Transformers. However, robots exist in other forms as part of other technologies like soft-bots, AI, sensor networks, and data analytics. Fundamentally, the simplest way to describe RPA is that it’s a process by which a repeatable, rule-based task is executed through an automation solution.

Operating within predefined rules and procedures, RPA solutions are able to complete an action through a machine that would normally require human interaction. Whether the task is in a factory environment or an office space, RPA can help with the construction of a component for a finished product or even boost office productivity by brewing coffee through Wi-Fi enabled coffee makers. Because RPA solutions require a thoroughly practiced, documented, and familiar procedure to fulfill their automation benefits, some believe they will eliminate the need for humans in some areas; however, that isn’t really the case. RPA is designed to handle the tedious, repetitive tasks humans currently must do, enhancing human productivity by allowing people to focus on the more complex and creative tasks they excel at.

Sometimes considered to be the most basic form of AI, robotic process automation is best utilized in business practices that require little skill and are performed under set parameters, including how often a task needs to be executed and within specified timeframes. In healthcare applications, RPA reaps loads of benefits by allowing skilled and/or specialized staff to focus their attention on tasks that require human cognition and subjective decision making. There are often instances within hospitals where employees with clinical skills, such as nurses and aides, are assigned the additional duties of insurance verification and data recording. While these responsibilities are expected within their roles, such duties are ideal for an RPA solution to tackle. Addressing these non-clinical jobs through automation allows staff to concentrate on the principal tasks better suited to their skills: patient care and advocacy.

Studies have already shown that the increase of automation in processing medical records and documentation has led to a 15% decrease in the odds of in-hospital deaths, and administrations that have adopted RPA have seen a 200% ROI within the first year of use (Olive AI white paper). As the U.S. nears a projected shortage of 250,000 nurses by 2025, identifying and implementing automation solutions within healthcare infrastructures has become a much more pressing need, one that allows clinical staff to dedicate their abilities to the tasks that best exhibit their skillsets.

What is machine learning?

Similar to robotic process automation, the primary objective of machine learning (ML) is also to have computing technology mimic human operations. However, where RPA is required to operate within a rule- and process-based environment that limits decision making under unfamiliar situations, ML truly expresses its artificial intelligence as a learning resource, exhibiting what most feel is the biggest characteristic of AI: adaptation. Simply put, RPA acts as a straightforward resource that executes actions based on its configuration, leaving it little freedom to “think” outside the box or exhibit any learning ability. Machine learning, on the other hand, autonomously improves its performance over time, like humans, as the system is provided with observational data and real-world interaction. Some have even made the comparison between the two as brains over brawn, with ML being the former.

In the healthcare industry, ML also adds exponential benefits to administrations, acting as the router between systems and data by automating repetitive, high-traffic tasks. Serving as its own employee within an organization, an ML solution utilizes its own credentials to access system databases to record and report patient information in the EHR (electronic health record). Because it follows the local credential structure, it integrates seamlessly into existing systems, with little change needed to accommodate its inclusion and no additional workflows. For example, our Olive AI can be used to perform patient insurance eligibility checks. After reviewing the patient record and history from their respective EHR, Olive can assist with checking against insurance eligibility portals. With a baseline of information gathered, the system can then proceed to offer approved solutions, compare previously approved authorizations, schedule future appointments and post-visit follow-ups, and process payments. Having this level of automation 24/7, 365 days a year empowers hospital and clinic staff to center their attention on their most critical role: patient care.

An article published in Healthcare IT News reported a prediction from IDC (International Data Corporation) that global investment in AI solutions will jump 60% this year, totaling $12.5 billion, and then rise to $46 billion by 2020. As automation continues its seemingly endless upward trend and creates countless prospective breakthroughs in practically every industry, machine learning continues to be a key driver of technological advancement.

So which one is better?

To answer this question, decision-makers and executives must first determine their most critical business needs that can best be improved through automation. Overall, robotic process automation and machine learning are both invaluable solutions that are sure to drastically enhance business performance for any organization. Some businesses may opt to incorporate an RPA option in order to automate their easier, low-skill functions, as this requires little integration effort and time. Other organizations have decided to use RPA as a starting point in their AI implementation, with machine learning as their end goal for automation. Nonetheless, having discussed the capabilities of both RPA and ML, it seems the only one who can determine which is better for a business is the business itself, based on its requirements and ultimately on which option will provide the highest ROI over time.

At Olive, we strive to build revolutionary artificial intelligence and robotic process automation solutions for the healthcare industry, layering in ML for a more robust RPA solution. Our focus is on improving business productivity through automation of the error-prone and mundane tasks of healthcare administration so that staff can focus on patient care. Our efficient, cost-reducing options continue to deliver immediate positive results, with Olive AI overseeing repetitious, high-traffic processes and workflows. These specialized tools empower our customers with the freedom to let their teams express the creativity and empathy that only a person is able to provide. Please contact us to schedule a demo of our Olive AI and let us begin developing a solution that can address your automation demands and be your first step towards an AI environment.