deep-fine

An artificial neural network framework built from scratch using just Python and NumPy.


Keywords
convolutional-neural-networks, keras, neural-network, numpy, python, scratch-implementation
Install
pip install deep-fine==0.1.0

Documentation

Fine

A Keras-like neural network framework built purely with Python and NumPy that's just that, fine.

Table of Contents

1- How to use
2- Demo
3- Technical Specifications

How to use

git clone git@github.com:haidousm/fine.git
cd fine
python3 -m pip install -r requirements.txt
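
Alternatively, install the released package from PyPI with pip install deep-fine (see Install above).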

Demo

MNIST Demo Link

The demo was built using JavaScript for the frontend and a Flask server to serve predictions from the model.
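
The serving code is not included in this README, but a minimal sketch of such an endpoint could look like the following; the route, the JSON payload shape, and the model.predict call are illustrative assumptions rather than the demo's actual implementation.

from flask import Flask, jsonify, request
import numpy as np

app = Flask(__name__)
model = ...  # a trained Sequential instance, e.g. the MNIST model built below

@app.route("/predict", methods=["POST"])
def predict():
    # assumed payload: {"pixels": [784 grayscale values]}
    pixels = np.array(request.json["pixels"], dtype=np.float32)
    image = pixels.reshape(1, 1, 28, 28)  # (batch, channels, height, width)
    scores = model.predict(image)  # assumed inference method, not shown in this README
    return jsonify({"digit": int(np.argmax(scores))})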

Demo model creation & training:

from datasets import load_mnist
from models import Sequential

from layers import Conv2D
from layers import MaxPool2D
from layers import Flatten
from layers import Dense

from activations import ReLU
from activations import Softmax

from loss import CategoricalCrossEntropy

from models.model_utils import Categorical

from optimizers import Adam

X_train, y_train, X_test, y_test = load_mnist()

model = Sequential(
    layers=[
        # Conv2D(n_filters, (input_channels, kernel_height, kernel_width))
        Conv2D(16, (1, 3, 3)),
        ReLU(),
        Conv2D(16, (16, 3, 3)),
        ReLU(),
        MaxPool2D((2, 2)),

        Conv2D(32, (16, 3, 3)),
        ReLU(),
        Conv2D(32, (32, 3, 3)),
        ReLU(),
        MaxPool2D((2, 2)),

        # classifier head: 1568 = 32 channels * 7 * 7 feature maps after two 2x2 max-pools on 28x28 inputs
        Flatten(),
        Dense(1568, 64),
        ReLU(),
        Dense(64, 64),
        ReLU(),
        Dense(64, 10),
        Softmax()
    ],
    loss=CategoricalCrossEntropy(),
    optimizer=Adam(decay=1e-3),
    accuracy=Categorical()
)

model.train(X_train, y_train, epochs=5, batch_size=120, print_every=100)
model.evaluate(X_test, y_test, batch_size=120)


Technical Specifications

Layers

  • Dense Layer
  • Dropout Layer
  • Flatten Layer
  • 2D Convolutional Layer
  • Max Pool Layer
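
Layers are composed positionally, as in the MNIST demo above. As a rough sketch of a fully-connected stack that uses the Dropout layer (the Dropout import path and its rate argument are assumptions; the other calls follow the demo):

from layers import Flatten
from layers import Dense
from layers import Dropout  # assumed to live alongside the other layers

from activations import ReLU
from activations import Softmax

layer_stack = [
    Flatten(),        # e.g. (batch, 1, 28, 28) -> (batch, 784)
    Dense(784, 128),  # Dense(n_inputs, n_outputs)
    ReLU(),
    Dropout(0.2),     # assumed signature: fraction of units dropped during training
    Dense(128, 10),
    Softmax()
]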

Activation Functions

  • Rectified Linear Unit (ReLU)
  • Sigmoid
  • Softmax
  • Linear

Loss Functions

  • Categorical Cross Entropy
  • Binary Cross Entropy
  • Mean Squared Error
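
The class names below are assumed to mirror CategoricalCrossEntropy from the demo; typical pairings with the output activation would be:

from activations import Sigmoid
from activations import Linear

from loss import BinaryCrossEntropy  # assumed class name
from loss import MeanSquaredError    # assumed class name

# Sigmoid output + binary cross-entropy for two-class problems
binary_output, binary_loss = Sigmoid(), BinaryCrossEntropy()

# Linear output + mean squared error for regression
regression_output, regression_loss = Linear(), MeanSquaredError()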

Optimizers

  • Stochastic Gradient Descent (SGD) with rate decay and momentum
  • Adaptive Moment Estimation (Adam)
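
Only Adam(decay=1e-3) appears in the demo above; the SGD constructor and the learning_rate keyword below are a sketch of the options described here, so check the source for the exact parameter names.

from optimizers import SGD  # assumed class name
from optimizers import Adam

# assumed parameter names, based on the Adam(decay=1e-3) call in the demo
sgd = SGD(learning_rate=0.1, decay=1e-3, momentum=0.9)
adam = Adam(learning_rate=0.001, decay=1e-3)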