Welcome to the ShenZhenJia Knowledge Sharing Community for programmers and developers - Open, Learning and Share
I've started learning ML by following Andrew Ng's course on Coursera. I'm trying to implement gradient descent for linear regression, but I'm not sure what I'm missing. I'm working from the simultaneous update rule given in the course:

theta_0 := theta_0 - alpha * (1/m) * sum_i (h(x_i) - y_i)
theta_1 := theta_1 - alpha * (1/m) * sum_i (h(x_i) - y_i) * x_i

where h(x) = theta_0 + theta_1 * x.

I've tried to implement it, but something is wrong. Here is the code. It's worth pointing out that this is the first time I'm touching Python, without having learned the basics first.

import numpy as np
import matplotlib.pyplot as plt

plt.ion()

x = [1,2,3,4,5]
y = [1,2,3,4,5]

def Gradient_Descent(x, y, learning_rate, iterations):
  theta_1=np.random.randint(low=2, high=5); 
  theta_0=np.random.randint(low=2, high=5);
  m = x.shape[0]

def mean_error(a, b, factor):
  sum_mean = 0
  for i in range(m):
    sum_mean += (theta_0 + theta_1 * a[i]) - b[i]  # h(x) = (theta0 + theta1 * x) - y 
    if factor:
      sum_mean *= a[i]
  return sum_mean

def perform_cal(theta_0, theta_1, m):
  temp_0 = theta_0 - learning_rate * ((1 / m) * mean_error(x, y, False))
  temp_1 = theta_1 - learning_rate * ((1 / m) * mean_error(x, y, True))
  return temp_0 , temp_1

fig = plt.figure()
ax = fig.add_subplot(111)

for i in range(iterations):
    theta_0, theta_1 = perform_cal(theta_0, theta_1, m)
    ax.clear()
    ax.plot(x, y, linestyle='None', marker='o')
    ax.plot(x, theta_0 + theta_1*x)
    fig.canvas.draw()


x = np.array(x)
y = np.array(y)
Gradient_Descent(x,y, 0.1, 500)

input("Press enter to close program")

What am I doing wrong?


1 Answer

import numpy as np
import matplotlib.pyplot as plt

plt.ion()

x = [1,2,3,4,5]
y = [1,2,3,4,5]

def Gradient_Descent(x, y, learning_rate, iterations):
  theta_1=0
  theta_0=0
  m = x.shape[0]
  for i in range(iterations):
      theta_0, theta_1 = perform_cal(theta_0, theta_1, m, learning_rate)
      ax.clear()
      ax.plot(x, y, linestyle='None', marker='o')
      ax.plot(x, theta_0 + theta_1*x)
      fig.canvas.draw()

def mean_error(a, b, factor, m, theta_0, theta_1):
  sum_mean = 0
  for i in range(m):
    error = (theta_0 + theta_1 * a[i]) - b[i]  # h(x) - y, with h(x) = theta_0 + theta_1 * x
    if factor:
      error *= a[i]  # multiply only the current term by x_i, not the running sum
    sum_mean += error
  return sum_mean

def perform_cal(theta_0, theta_1, m, learning_rate):
  temp_0 = theta_0 - learning_rate * ((1 / m) * mean_error(x, y, False, m, theta_0, theta_1))
  temp_1 = theta_1 - learning_rate * ((1 / m) * mean_error(x, y, True, m, theta_0, theta_1))
  return temp_0 , temp_1

fig = plt.figure()
ax = fig.add_subplot(111)




x = np.array(x)
y = np.array(y)
Gradient_Descent(x,y, 0.01, 100)

I made some changes to your code (mostly rearranged a few lines and passed the variables your functions need as parameters, so the logic stays recognizable), and it now works. I would suggest learning the basics of the language first, as most of the mistakes were quite basic, such as parameter passing. That said, it's commendable that you're trying things out on your own while following Andrew Ng's course.
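As a side note, once the loop-based version works, the same update rule can be written without the inner Python loop by letting NumPy compute the error for all samples at once. This is a minimal sketch (plotting omitted, function and variable names are my own, not from the course):

```python
import numpy as np

def gradient_descent(x, y, learning_rate=0.01, iterations=1000):
    """Vectorized batch gradient descent for simple linear regression.

    Assumes x and y are 1-D NumPy arrays of equal length.
    """
    theta_0, theta_1 = 0.0, 0.0
    m = x.shape[0]
    for _ in range(iterations):
        error = (theta_0 + theta_1 * x) - y        # h(x) - y for every sample at once
        grad_0 = error.sum() / m                   # partial derivative w.r.t. theta_0
        grad_1 = (error * x).sum() / m             # partial derivative w.r.t. theta_1
        theta_0 -= learning_rate * grad_0          # simultaneous update of both thetas
        theta_1 -= learning_rate * grad_1
    return theta_0, theta_1

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([1, 2, 3, 4, 5], dtype=float)
t0, t1 = gradient_descent(x, y)
print(t0, t1)  # converges toward theta_0 ≈ 0, theta_1 ≈ 1 for this data
```

Because the gradients use both thetas from the previous iteration, this also guarantees the simultaneous update the course emphasizes, without needing temporary variables.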

