Friday, 3 February 2017

Machine Learning Cheat Sheet Part 2 - Linear Regression with One Variable


1. Training set: $(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \dots, (x^{(m)}, y^{(m)})$, where $m$ is the number of training examples

2. Hypothesis: $h_\theta(x) = \theta_0 + \theta_1 x$

3. Parameters: $\theta_0, \theta_1$

4. Cost function: $J(\theta_0, \theta_1)$ measures the squared difference between the hypothesis values $h_\theta(x)$ and the target values $y$ over the training examples $(x, y)$ (see the code sketch below):

$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$
5. Goal: minimise the cost function:

$\min_{\theta_0, \theta_1} J(\theta_0, \theta_1)$

6. Gradient descent algorithm (minimisation of the cost function; a derivation and a code sketch follow this list):

$\alpha$ is the learning rate.

repeat until convergence: {
$\quad \theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)$
$\quad \theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}$
}
(update $\theta_0$ and $\theta_1$ simultaneously!)
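
Where the two update rules come from (a standard derivation, added here for clarity and not part of the original post): each rule is the generic gradient step $\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$ with the partial derivative written out:

$\frac{\partial}{\partial \theta_0} J(\theta_0, \theta_1) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)$

$\frac{\partial}{\partial \theta_1} J(\theta_0, \theta_1) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}$

The factor $\frac{1}{2}$ in $J$ cancels the $2$ that comes from differentiating the square, and the extra $x^{(i)}$ in the second rule comes from $\frac{\partial}{\partial \theta_1} h_\theta(x^{(i)}) = x^{(i)}$.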
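
To make items 2, 4 and 6 concrete, below is a minimal, self-contained Python sketch of the hypothesis, the cost function $J$, and the batch gradient descent loop. The function names, the toy data, and the fixed iteration count (used in place of a convergence test) are illustrative assumptions, not part of the original cheat sheet.

```python
# Minimal sketch of linear regression with one variable.
# Names, toy data, and the fixed iteration count are illustrative assumptions.

def hypothesis(theta0, theta1, x):
    # h_theta(x) = theta0 + theta1 * x
    return theta0 + theta1 * x

def cost(theta0, theta1, xs, ys):
    # J(theta0, theta1) = (1 / 2m) * sum_i (h_theta(x_i) - y_i)^2
    m = len(xs)
    return sum((hypothesis(theta0, theta1, x) - y) ** 2
               for x, y in zip(xs, ys)) / (2 * m)

def gradient_descent(xs, ys, alpha=0.05, iterations=2000):
    # Repeats the two update rules; both gradients are computed from the
    # current parameters before either is assigned (simultaneous update).
    m = len(xs)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iterations):
        errors = [hypothesis(theta0, theta1, x) - y for x, y in zip(xs, ys)]
        grad0 = sum(errors) / m
        grad1 = sum(e * x for e, x in zip(errors, xs)) / m
        theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1
    return theta0, theta1

# Toy data generated by y = 1 + 2x, so the loop should recover
# theta0 close to 1 and theta1 close to 2, with J near zero.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
t0, t1 = gradient_descent(xs, ys)
print(t0, t1, cost(t0, t1, xs, ys))
```

The tuple assignment on the last line of the loop is what realises the warning in item 6: both gradients are evaluated at the current $(\theta_0, \theta_1)$ before either parameter is overwritten.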

