Logic of a neural network implementation

Hello, good morning.

I implemented a network with the following matrices: f(x) is the input vector (a 1×139 matrix), the phi matrix has dimension 1×20 (20 because of the number of signals I used to train it), and w are the weights, with dimension 20×1.

import math
import numpy as np

phi_matrix_final = []

for k in range(0, 20):
    substract = 0
    for item in range(0, 139):
        substract += (s[0, item] - phi[0, k])    # phi has shape (1, 20)

    mod = np.linalg.norm(substract)

    if mod > 0:
        # thin-plate spline: r^2 * log(r)
        phi_matrix_final.append((mod * mod) * math.log10(mod))
    else:
        phi_matrix_final.append(mod)


Sn = 20, due to the number of training entries.

The problem with this network is that it always returns values very close to each other, when the answers should be between 0 and 10.

Note: I use the function r²·log(r).
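As a point of comparison, the thin-plate-spline basis r²·log(r) is usually evaluated on the distance between the input and each kernel centre, not on a running sum of raw differences. A minimal sketch of that computation for one input sample; the arrays `x` and `centers` are hypothetical placeholder data, not your actual signals:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=139)              # one input sample with 139 points (placeholder)
centers = rng.normal(size=(20, 139))  # 20 kernel centres, one per training signal (placeholder)

# r_k = ||x - c_k||, then phi_k = r_k^2 * log(r_k)  (thin-plate spline)
r = np.linalg.norm(centers - x, axis=1)  # shape (20,)

phi = np.zeros_like(r)
mask = r > 0                             # guard log(0), as in your if/else
phi[mask] = r[mask] ** 2 * np.log(r[mask])
```

This produces one 20-element hidden-layer vector per input sample, which is then multiplied by the 20×1 weight vector.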

  • What do you do with phi_matrix_final so that it enters the next iteration as phi? I think you're not reducing the error of phi.

  • In effect, result = phi_matrix_final @ weights

1 answer


You must have a single network input, with a training sample of 139 points, which is why phi has dimension 1×20 and the weights have dimension 20×1.

What is unclear is why you have 139 outputs:

        substract += (s[0,item] - phi[0,k])          # phi = 20,20
                                                     # item ranges from 0 to 138 — that is, 139 outputs?

The code, I think, should be as follows, according to your equation:

import math
import numpy as np

x = []    # 139 samples
s = [[]]  # 139 samples x 20 expected outputs

mod = 0.0     # forcing an initial error in phi
modant = 1.0  # forcing an initial error in phi

phi = np.zeros((1, 20))  # 1 input and 20 outputs

ctr = 0

eta = 0.3  # damping (learning rate)

while abs(1 - mod/modant) > 0.01 and ctr < 10000:
    ctr += 1
    modant = mod

    for item in range(0, 139):
        for k in range(0, 20):
            phi[0, k] += eta * (x[item] - s[item][k])

        mod = np.linalg.norm(s)
        if mod > 0:
            phi = (mod * mod * math.log10(mod)) * phi  # scalar times matrix, not @
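As an alternative to iterating on phi, a common way to fit this kind of RBF network is to fix the kernel centres, build the design matrix with the r²·log(r) basis, and solve for the output weights by linear least squares. A sketch under that assumption; all the data arrays here are made-up placeholders, and taking the centres from the first training samples is my assumption, not your setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_points, n_kernels = 30, 139, 20

X = rng.normal(size=(n_samples, n_points))  # training inputs (placeholder)
centers = X[:n_kernels]                     # centres taken from the data (assumption)
y = rng.uniform(0, 10, size=n_samples)      # targets between 0 and 10 (placeholder)

# Design matrix: Phi[i, k] = r^2 * log(r), with r = ||X[i] - centers[k]||
r = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
Phi = np.zeros_like(r)
mask = r > 0
Phi[mask] = r[mask] ** 2 * np.log(r[mask])

# Solve Phi @ w ≈ y for the output weights w
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
```

Since the weights come from a direct solve rather than gradient steps, the predictions stay in the scale of the targets instead of collapsing to nearly identical values.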
  • Hello, thanks for the answer, but it returns a 20×1 matrix. I point out that after the for loop I did phi_matrix_final = np.asmatrix(phi_matrix_final); phi_matrix_final = np.reshape(phi_matrix_final, (1, 20))

  • Just a minute, I'll edit.


  • I'm going to check; I don't think there are 139 outputs, because x and s are vectors. I trained 20 kernels, roughly: each of the 139 points minus the S1 kernel, summed, and so on up to S20.

  • And I think my implementation is also wrong, because it's not training with the expected outputs, which would be the s values. For a feed-forward network you also need the outputs.

  • I changed it to take the outputs into account.

  • Thanks for the answer, but it still doesn't solve the problem.

  • What's the problem now?

