The AI must predict the next number in a given sequence of increasing integers using Python, but so far I haven't gotten the intended result. I tried changing the learning rate and the number of iterations, but without any luck so far.
The next number is supposed to be predicted based on this pattern: the number at index n (counting from zero) is a random integer in the interval [2^n, 2^(n+1)), so the first number (1) lies in [2^0, 2^1), the second in [2^1, 2^2), and so on. The AI should decide which number from that interval to pick.
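For clarity, this is a minimal sketch of how such a sequence could be generated under that pattern (the function name make_sequence and the use of Python's random module are my own illustration, not part of my actual code):

import random

def make_sequence(n_terms):
    # The term at index i is a random integer drawn from [2**i, 2**(i + 1)).
    return [random.randrange(2 ** i, 2 ** (i + 1)) for i in range(n_terms)]

print(make_sequence(10))  # ten random terms, one from each interval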
The problem I encountered is encoding the pattern mentioned above into the AI so it can predict term n+1. Since I am fairly new to machine learning, I don't know how to feed the AI that pattern or which libraries I should work with.
This is the code I used:
import numpy as np

# Init sequence: column 0 is the position (index) in the sequence, column 1 is the value
seq = [1, 3, 7, 8, 21, 49, 76, 224, 467, 514, 1155, 2683, 5216, 10544, 51510,
       95823, 198669, 357535, 863317, 1811764, 3007503, 5598802, 14428676,
       33185509, 54538862, 111949941, 227634408, 400708894, 1033162084,
       2102388551, 3093472814, 7137437912, 14133072157, 20112871792,
       42387769980, 100251560595, 146971536592, 323724968937, 1003651412950,
       1458252205147, 2895374552463, 7409811047825, 15404761757071,
       19996463086597, 51408670348612, 119666659114170, 191206974700443,
       409118905032525, 611140496167764, 2058769515153876, 4216495639600700,
       6763683971478124, 9974455244496710, 30045390491869460,
       44218742292676575, 138245758910846492, 199976667976342049,
       525070384258266191]
data = [[i, v] for i, v in enumerate(seq)]

X = np.matrix(data)[:, 0]
y = np.matrix(data)[:, 1]

# Cost function (mean squared error)
def J(X, y, theta):
    theta = np.matrix(theta).T
    m = len(y)
    predictions = X * theta
    sqError = np.power((predictions - y), 2)
    return 1 / (2 * m) * sum(sqError)

# Add a bias column of ones in front of the feature column
dataX = np.matrix(data)[:, 0:1]
X = np.ones((len(dataX), 2))
X[:, 1:] = dataX

# Gradient descent
def gradient(X, y, alpha, theta, iters):
    J_history = np.zeros(iters)
    m = len(y)
    theta = np.matrix(theta).T
    for i in range(iters):
        h0 = X * theta
        delta = (1 / m) * (X.T * h0 - X.T * y)
        theta = theta - alpha * delta
        J_history[i] = J(X, y, theta.T).item()
    return J_history, theta
print('\n' + 40 * '=')

# Theta initialization
theta = np.matrix([np.random.random(), np.random.random()])
# Learning rate
alpha = 0.02
# Iterations
iters = 1000000

print('\n== Model summary ==\nLearning rate: {}\nIterations: {}\nInitial theta: {}\nInitial J: {:.2f}\n'.format(alpha, iters, theta, J(X, y, theta).item()))
print('Training model... ')

# Train the model and find the optimal theta
J_history, theta_min = gradient(X, y, alpha, theta, iters)
print('Done, model is trained')

print('\nModelled prediction function is:\ny = {:.2f} * x + {:.2f}'.format(theta_min[1].item(), theta_min[0].item()))
print('Cost is: {:.2f}'.format(J(X, y, theta_min.T).item()))

# Prediction function: theta_0 + theta_1 * x
def predict(x):
    return [1, x] * theta_min

# Predict the value at the next index
p = len(data)
print('\n' + 40 * '=')
print('Initial sequence was:\n', *np.array(data)[:, 1])
print('\nNext number should be: {:,.1f}'.format(predict(p).item()))
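To make the intended result concrete: under the pattern above, the prediction for index n should land inside [2^n, 2^(n+1)). A small illustrative check (the helper interval_for is just for this example, not part of my code):

def interval_for(index):
    # Bounds of the interval the term at this index should fall into, per the pattern.
    return 2 ** index, 2 ** (index + 1)

low, high = interval_for(10)
print(low, high)  # 1024 2048 -- and the value at index 10 in the sequence above, 1155, falls in this range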