
Artificial Intelligence - Keras


Lesson Description


Lesson - #943 Keras - Regression Prediction using MLP


In this chapter, let us build a simple MLP-based ANN to do regression prediction. Till now, we have done only classification-based prediction. Now, we will try to predict the next possible value by analysing the previous (continuous) values and the factors influencing them.

The regression MLP can be represented as below. The core features of the model are as follows (a compact preview sketch is given right after this list) −
  • Input layer consists of 13 values, i.e. input_shape = (13,).
  • First layer, Dense consists of 64 units and 'relu' activation function with 'normal' kernel initializer.
  • Second layer, Dense consists of 64 units and 'relu' activation function.
  • Output layer, Dense consists of 1 unit.
  • Use mse as loss function.
  • Use RMSprop as optimizer.
  • Use mean absolute error (mae) as metrics (accuracy is not meaningful for regression).
  • Use 128 as batch size.
  • Use 500 as epochs.
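As a quick preview, the sketch below (assuming the standalone keras API used throughout this lesson) builds exactly this architecture and prints its summary; the same model is constructed step by step in Steps 1-5 below.

    # Preview sketch of the architecture described above (assumes standalone keras is installed)
    from keras.models import Sequential
    from keras.layers import Dense

    preview = Sequential()
    preview.add(Dense(64, kernel_initializer = 'normal', activation = 'relu', input_shape = (13,)))
    preview.add(Dense(64, activation = 'relu'))
    preview.add(Dense(1))
    preview.summary()   # 13*64+64 + 64*64+64 + 64*1+1 = 5,121 trainable parameters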

    Step 1 - Import the modules

    import keras 
    
    from keras.datasets import boston_housing 
    from keras.models import Sequential 
    from keras.layers import Dense 
    from keras.optimizers import RMSprop 
    from keras.callbacks import EarlyStopping 
    from sklearn import preprocessing 
    from sklearn.preprocessing import scale

    Step 2 - Load Data

Let us import the Boston housing dataset −
    (x_train, y_train), (x_test, y_test) = boston_housing.load_data()
    Here, boston_housing is a dataset provided by Keras. It represents a collection of housing data from the Boston area, each sample having 13 features.
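To confirm this, we can print the shapes of the loaded arrays; the default Keras split should give 404 training and 102 test samples, each with 13 features.

    print(x_train.shape, y_train.shape)   # expected: (404, 13) (404,)
    print(x_test.shape, y_test.shape)     # expected: (102, 13) (102,)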

    Step 3 - Process the data

Let us transform the dataset according to our model, so that we can feed it into the model. The data can be transformed using the code below −
    x_train_scaled = preprocessing.scale(x_train)
    scaler = preprocessing.StandardScaler().fit(x_train)
    x_test_scaled = scaler.transform(x_test)
    Here, we have standardized the training data using the sklearn.preprocessing.scale function. preprocessing.StandardScaler().fit returns a scaler holding the mean and standard deviation of the training data, which we can apply to the test data using the scaler.transform method. This standardizes the test data with the same settings as the training data.
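As a quick sanity check (a minimal sketch, assuming numpy is available as np), the transform is equivalent to subtracting the per-feature training mean and dividing by the training standard deviation stored on the fitted scaler:

    import numpy as np

    # scaler.mean_ and scaler.scale_ hold the per-feature mean and std of x_train
    manual = (x_test - scaler.mean_) / scaler.scale_
    print(np.allclose(manual, x_test_scaled))   # True - same standardization as the training data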

Step 4 - Create the model

    Let us create the actual model −
    model = Sequential()
    model.add(Dense(64, kernel_initializer = 'normal', activation = 'relu', input_shape = (13,)))
    model.add(Dense(64, activation = 'relu'))
    model.add(Dense(1))

    Step 5 - Compile the model

Let us compile the model using the chosen loss function, optimizer and metrics −
    model.compile(
       loss = 'mse', 
       optimizer = RMSprop(), 
       metrics = ['mean_absolute_error']
    )

    Step 6 - Train the model

Let us train the model using the fit() method −
    history = model.fit(
       x_train_scaled, y_train,    
       batch_size = 128, 
       epochs = 500, 
       verbose = 1, 
       validation_split = 0.2, 
       callbacks = [EarlyStopping(monitor = 'val_loss', patience = 20)]
    )
    Here, we have used the callback EarlyStopping. Its purpose is to monitor the validation loss during every epoch and compare it with the losses of the previous epochs to find whether training is still improving. If there is no improvement for patience (here 20) consecutive epochs, the whole training process is stopped.
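Since EarlyStopping may halt training well before 500 epochs, we can inspect the history object returned by fit() to see what actually happened (a small illustrative sketch):

    # Number of epochs actually run (may be fewer than 500 because of EarlyStopping)
    print('Epochs run:', len(history.history['loss']))
    # Best (lowest) validation loss observed during training
    print('Best val_loss:', min(history.history['val_loss']))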

    Step 7 - Evaluate the model

Let us evaluate the model using the test data −
    score = model.evaluate(x_test_scaled, y_test, verbose = 0)
    print('Test loss (mse):', score[0])
    print('Test mae:', score[1])
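The order of the values in score follows the model's metric names, which we can print to confirm that score[1] is the mean absolute error rather than an accuracy:

    print(model.metrics_names)   # e.g. ['loss', 'mean_absolute_error']
    # score[0] -> mse loss, score[1] -> mae (the target is the median house price in $1000s)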

    Step 8 - Predict

prediction = model.predict(x_test_scaled)
    print(prediction.flatten())
    print(y_test)
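To compare the predicted and actual prices more directly, we can compute the mean absolute error of the predictions ourselves (a minimal sketch, assuming numpy as np):

    import numpy as np

    pred = prediction.flatten()
    # Average absolute difference between predicted and actual prices (in $1000s)
    print('MAE on test set:', np.mean(np.abs(pred - y_test)))
    # Show the first five predictions next to the actual values
    for p, t in zip(pred[:5], y_test[:5]):
        print('predicted: %.1f  actual: %.1f' % (p, t))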