This is a handmade convolutional neural network library, written in Python, with NumPy as its only dependency.
I made it to challenge myself and to gain a deeper understanding of how neural networks work.
Most of this project was built in about four and a half hours; the save and load features and binary classification support were added later.
Remember that this library is not optimized for performance, but for learning purposes (although I tried to make it as fast as possible).
I intend to keep improving the library and add more features in the future.
- Many layers (input, activation, dense, dropout, conv1d/2d, maxpooling1d/2d, flatten, embedding, batchnormalization, and more)
- Many activation functions (sigmoid, tanh, relu, leaky relu, softmax, linear, elu, selu)
- Many loss functions (mean squared error, mean absolute error, categorical crossentropy, binary crossentropy, huber loss)
- Many optimizers (sgd, momentum, rmsprop, adam)
- Supports binary classification, multiclass classification and regression
- Save and load models
- Simple to use
You can install the library using pip:
pip install neuralnetlib
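If the installation worked, a quick import check should run without errors (a minimal sketch; it only assumes the package imports, nothing about its internals):

import numpy
import neuralnetlib
print("neuralnetlib imported successfully")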
See this file for a simple example of how to use the library. For a more advanced example, see this file.
More examples in this folder.
You are free to tweak the hyperparameters and the network architecture to see how it affects the results.
I used the MNIST dataset to test the library, but you can use any dataset you want.
Quick examples (more here)
from neuralnetlib.model import Model
from neuralnetlib.layers import Input, Activation, Dense
from neuralnetlib.activations import Sigmoid
from neuralnetlib.losses import BinaryCrossentropy
from neuralnetlib.optimizers import SGD
from neuralnetlib.metrics import accuracy_score
# ... Preprocess x_train, y_train, x_test, y_test if necessary (you can use neuralnetlib.preprocess and neuralnetlib.utils)
# Create a model
model = Model()
model.add(Input(10)) # 10 features
model.add(Dense(8))
model.add(Dense(1))
model.add(Activation(Sigmoid())) # there are several ways to specify the activation function, see the next example
# Compile the model
model.compile(loss_function='bce', optimizer='sgd')
# Train the model
model.fit(X_train, y_train, epochs=10, batch_size=32, metrics=['accuracy'])
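For reference, here is a minimal NumPy sketch of stand-in data matching the shapes this model expects (10 input features, one sigmoid output); real data would come from your own preprocessing, e.g. with neuralnetlib.preprocess:

import numpy as np

X_train = np.random.rand(200, 10)                 # 200 samples, 10 features each
y_train = np.random.randint(0, 2, size=(200, 1))  # binary labels in {0, 1}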
from neuralnetlib.model import Model
from neuralnetlib.layers import Input, Conv2D, BatchNormalization, MaxPooling2D, Flatten, Dense
from neuralnetlib.activations import Softmax
from neuralnetlib.losses import CategoricalCrossentropy
from neuralnetlib.optimizers import Adam
from neuralnetlib.metrics import accuracy_score
# ... Preprocess x_train, y_train, x_test, y_test if necessary (you can use neuralnetlib.preprocess and neuralnetlib.utils)
# Create and compile a model
model = Model()
model.add(Input(28, 28, 1)) # For example, MNIST images
model.add(Conv2D(32, kernel_size=3, padding='same', activation='relu')) # activation supports both str...
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=2))
model.add(Flatten()) # flatten the feature maps before the dense layers
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation=Softmax())) # ... and ActivationFunction objects
model.compile(loss_function=CategoricalCrossentropy(), optimizer=Adam()) # loss_function and optimizer also accept either str or objects
# Train the model
model.fit(X_train, y_train_ohe, epochs=5, metrics=['accuracy'])
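y_train_ohe above refers to one-hot encoded labels. Here is a plain-NumPy sketch of that encoding (neuralnetlib.preprocess may provide an equivalent helper, not shown here):

import numpy as np

y_train = np.array([3, 0, 7, 1])   # integer class labels (10 classes for MNIST)
y_train_ohe = np.eye(10)[y_train]  # one-hot matrix of shape (n_samples, 10)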
from neuralnetlib.model import Model
from neuralnetlib.layers import Input, Dense
from neuralnetlib.losses import MeanSquaredError
from neuralnetlib.metrics import accuracy_score
# ... Preprocess x_train, y_train, x_test, y_test if necessary (you can use neuralnetlib.preprocess and neuralnetlib.utils)
# Create and compile a model
model = Model()
model.add(Input(13))
model.add(Dense(64, activation='leakyrelu'))
model.add(Dense(1, activation="linear"))
model.compile(loss_function="mse", optimizer='adam') # you can use either acronyms or full names
# Train the model
model.fit(X_train, y_train, epochs=100, batch_size=128, metrics=['accuracy'])
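Standardizing the input features usually helps regression converge. A plain-NumPy sketch with hypothetical data (13 features, as in the model above); neuralnetlib.preprocess may offer similar utilities:

import numpy as np

X_train = np.random.rand(500, 13)
y_train = np.random.rand(500, 1)
X_train = (X_train - X_train.mean(axis=0)) / (X_train.std(axis=0) + 1e-8)  # zero mean, unit variance per feature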
You can also save and load models:
# Save a model
model.save('my_model.json')
# Load a model
model = Model.load('my_model.json')
Note
PCA (Principal Component Analysis) was used to reduce the number of features to 2 so we could plot the decision boundary. Representing n-dimensional data in 2D is not easy, so the decision boundary may not always be accurate. I also tried t-SNE, but the results were not good.
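For reference, a minimal NumPy sketch of a 2D PCA projection like the one described above (this is not the library's plotting code):

import numpy as np

def pca_2d(X):
    X_centered = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, sorted by explained variance.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:2].T  # project onto the first two components, shape (n_samples, 2)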
Here, I decided to print the first 10 predictions and their respective labels to see how the network is performing.
You can of course use the library for any dataset you want.
- Marc Pinet - Initial work - marcpinet