Classification: K-Nearest Neighbors
K-Nearest Neighbors is a supervised machine learning algorithm for classification. You will implement and test this algorithm on several datasets.
Key Concepts
Review core concepts you need to learn to master this subject
K-Nearest Neighbors Underfitting and Overfitting
KNN Classification Algorithm in Scikit Learn
Euclidean Distance
Elbow Curve Validation Technique in K-Nearest Neighbor Algorithm
K-Nearest Neighbors
KNN of Unknown Data Point
Normalizing Data
K-Nearest Neighbors Underfitting and Overfitting
The value of k in the KNN algorithm is closely related to the error rate of the model. A small value of k can lead to overfitting, while a large value of k can lead to underfitting. Overfitting means the model performs well on the training data but poorly on new, unseen data. Underfitting refers to a model that performs poorly on the training data and also fails to generalize to new data.
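A minimal sketch of this effect, using scikit-learn's `KNeighborsClassifier` on a synthetic dataset (the dataset and the particular k values chosen here are illustrative assumptions, not part of the course material):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Illustrative synthetic classification data
X, y = make_classification(n_samples=300, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in (1, 5, 50):
    model = KNeighborsClassifier(n_neighbors=k)
    model.fit(X_train, y_train)
    # With k=1 the training accuracy is typically perfect (each point is
    # its own nearest neighbor), a sign of overfitting; a very large k
    # smooths the decision boundary and tends toward underfitting.
    print(f"k={k}: train={model.score(X_train, y_train):.2f} "
          f"test={model.score(X_test, y_test):.2f}")
```

Comparing the training and test accuracy across values of k is exactly the idea behind the elbow-curve validation technique listed above: plot the error rate for a range of k and look for the value where it levels off.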