
My data consist of a time series of $\pm1$ values, and I am trying to apply an RBF NN as a function approximator. Essentially, the NN takes one data sample as input and predicts the next sample (one-step-ahead prediction). However, my network is not getting trained. If I use floating-point values as the data, the same code works, but for $\pm1$ data I cannot figure out how to train the network. Can somebody please help?
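For reference, the prediction the code below computes for the current input window $U$ is

$$y = b + \sum_{j=1}^{n_1} w_j \exp\!\left(-\frac{\lVert U - c_j\rVert^2}{\beta_j^2}\right),$$

where the centers $c_j$ and spreads $\beta_j$ come from k-means, and the weights are adapted by gradient descent on the instantaneous squared error, $w_j \leftarrow w_j + \eta\, e\, \phi_j$ and $b \leftarrow b + \eta\, e$ with $e = f - y$.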

x = rand(3000,1);
Data = 2*(x>=0.5)-1;

% Training and Test datasets
time_steps=1; % prediction of #time_steps forward value (for this simple architecture time_steps<=3)
% Training
start_of_series_tr=100;
end_of_series_tr=2500;
% Test
start_of_series_ts=2500;
end_of_series_ts=3000;

P_train=Data(start_of_series_tr:end_of_series_tr-time_steps,1); % Input data
f_train=Data(start_of_series_tr+time_steps:end_of_series_tr,1); % Label data (desired output values)
indt=Data(start_of_series_tr+time_steps:end_of_series_tr,1);    % Time index

SNR = 30;                  % signal-to-noise ratio
f_train=awgn(f_train,SNR); % add white Gaussian noise

%% Simulation parameters
% Defining the architecture of the RBF-NN
[m n] = size(P_train); % dimensions of input data: m = length of signal, n = number of elements in each input
order=1; % number of past values used for the prediction of the future value
n1 = 20; % number of hidden-layer neurons

% Tuning parameters for training
epoch=10; % simulation rounds (number of passes of the same data through the NN)
eta=1e-2; % gradient-descent step size (learning rate)
runs=10;  % number of Monte Carlo simulations
Iti=[];   % mean square error (MSE) accumulator over the Monte Carlo runs

% Graphics/plot parameters
fsize=13; % font size
lw=2;     % line width

%% Training Phase
for run=1:runs % Monte Carlo simulations loop

% Spread and centers of the Gaussian kernel
[temp, c, beeta] = kmeans(P_train,n1); % k-means clustering (beeta = within-cluster distance sums, used as spread)
beeta=4*beeta;                         % increasing the spread of the Gaussian kernel
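% (Hedged sanity check, not part of the original code: with binary +/-1
% inputs, k-means with n1=20 clusters can leave duplicate or empty clusters,
% so some entries of beeta may be zero and make phi below NaN/Inf.)
if any(~isfinite(beeta)) || any(beeta==0)
    warning('Degenerate kernel spreads: %d of %d are zero or non-finite.', ...
        sum(beeta==0 | ~isfinite(beeta)), n1);
end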

% Initialization of weights and bias
w=randn(1,n1); % weight
b=randn();     % bias

for k=1:epoch % simulation rounds loop

    I=zeros(1,m);       % reset the MSE trace for this epoch
    U=zeros(1,order);   % reset input vector

    for i1=1:m % Iteration loop
        % sliding window (updating input vector)
        U(1:end-1)=U(2:end);
        U(end)=P_train(i1); % current value of time-series

        % Gaussian Kernel
        for i2=1:n1
            phi(i1,i2)=exp(-(norm(U-c(i2,:))^2)/beeta(i2)^2);
        end

        % Calculate output of the RBF
        y_train(i1)=w*phi(i1,:)'+b;

        e(i1)=f_train(i1)-y_train(i1); % instantaneous error in the prediction

        % Gradient descent-based weight-update rule
        w=w+eta*e(i1)*phi(i1,:);
        b=b+eta*e(i1);
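        % (For reference: this is gradient descent on the instantaneous cost
        % J = e^2/2, since dJ/dw_j = -e*phi_j and dJ/db = -e.)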

        % Mean square error 
        I(i1)=mse(e(1:i1));      % Objective Function

    end
    Itti(k,:)=I; % MSE for all iterations of this epoch
end
Iti(run,:)=mean(Itti,1); % Mean MSE for all epochs

end
It=mean(Iti,1); % mean MSE over all independent runs (Monte Carlo simulations)
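% (Optional check, not in the original code: plot the learning curve using the
% plot parameters defined above to see whether the MSE actually decreases.)
figure; semilogy(It,'LineWidth',lw); grid on;
xlabel('Iteration','FontSize',fsize); ylabel('MSE','FontSize',fsize);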

%% Test Phase
P_test=Data(start_of_series_ts:end_of_series_ts-time_steps,1);
f_test=Data(start_of_series_ts+time_steps:end_of_series_ts,1);
indts=Data(start_of_series_ts+time_steps:end_of_series_ts,1);

[m n] = size(P_test);
for i1=1:m % iteration loop
    % sliding window (updating input vector)
    U(1:end-1)=U(2:end);
    U(end)=P_test(i1); % current value of the time series

    % Gaussian kernel
    for i2=1:n1
        phi(i1,i2)=exp(-(norm(U-c(i2,:))^2)/beeta(i2)^2);
    end

    % Output of the RBF
    y_test(i1)=w*phi(i1,:)'+b;

    e_test(i1)=real(f_test(i1)-y_test(i1)); % instantaneous prediction error
    I(2400+i1)=mse(e_test(1:i1));           % test MSE, appended after the 2400 training samples

end
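% (Hedged suggestion, not in the original code: since the targets are +/-1,
% the real-valued RBF output can be thresholded and compared with the labels.)
acc = mean(sign(y_test(:)) == f_test);
fprintf('Sign agreement on the test set: %.2f%%\n', 100*acc);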
