Friday, 3 February 2017
Machine Learning Cheat Sheet Part 2 - Linear Regression with One Variable
1. Training set: $(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots, (x^{(m)}, y^{(m)})$
2. Hypothesis: $h_\theta(x) = \theta_0 + \theta_1 x$
3. Parameters: $\theta_0, \theta_1$
4. Cost function: $J(\theta_0, \theta_1)$ measures, for given parameters $\theta_0$ and $\theta_1$, how far the hypothesis values $h_\theta(x)$ are from the observed values $y$ over the training examples $(x, y)$ (see the first Python sketch after this list):
$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$
5. Goal: minimise the cost function:
$$\min_{\theta_0, \theta_1} J(\theta_0, \theta_1)$$
6. Gradient descent algorithm (minimisation of the cost function), where $\alpha$ is the learning rate (see the second sketch after this list):
repeat until convergence: {
$$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)$$
$$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}$$
}
(update $\theta_0$ and $\theta_1$ simultaneously!)
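To make the cost function concrete, here is a minimal Python sketch with NumPy. The function name `compute_cost` is illustrative, not from the original post; it simply evaluates the $J(\theta_0, \theta_1)$ formula from item 4 over a training set:

```python
import numpy as np

def compute_cost(theta0, theta1, x, y):
    """Mean squared error cost: J = 1/(2m) * sum((h(x) - y)^2)."""
    m = len(y)
    predictions = theta0 + theta1 * x   # h_theta(x) for every training example
    return np.sum((predictions - y) ** 2) / (2 * m)
```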
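And a sketch of the gradient descent loop itself, again assuming NumPy arrays; the values of `alpha` and `num_iters` are illustrative choices, and the separate gradient variables make the simultaneous update from item 6 explicit:

```python
import numpy as np

def gradient_descent(x, y, alpha=0.01, num_iters=1500):
    """Minimise J(theta0, theta1) by repeated simultaneous updates."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0           # a common choice: initialise at zero
    for _ in range(num_iters):
        errors = (theta0 + theta1 * x) - y
        # Compute BOTH gradients before updating EITHER parameter,
        # so that theta0 and theta1 are updated simultaneously.
        grad0 = np.sum(errors) / m
        grad1 = np.sum(errors * x) / m
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Usage on toy data generated roughly as y = 2 + 3x:
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([5.1, 7.9, 11.2, 13.8])
theta0, theta1 = gradient_descent(x, y)
print(theta0, theta1)                   # should approach roughly 2 and 3
```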