ThoughtWorksInc/DeepLearning.scala


A simple library for creating complex neural networks

http://deeplearning.thoughtworks.school/

License: Apache-2.0

Language: Scala

Keywords: automatic-differentiation, deep-learning, deep-neural-networks, domain-specific-language, dsl, functional-programming, machine-learning, neural-network, scala, symbolic-computation


DeepLearning.scala


DeepLearning.scala is a simple library for creating complex neural networks from object-oriented and functional programming constructs.

  • DeepLearning.scala runs on the JVM and can be used either in standalone JVM applications or in Jupyter notebooks.
  • DeepLearning.scala is expressive. Various types of neural network layers can be created by composing map, reduce, or other higher-order functions.
  • DeepLearning.scala supports plugins. There are various plugins providing algorithms, models, hyperparameters, and other features.
  • All the above features are statically type checked.

Features

Differentiable programming

Like other deep learning toolkits, DeepLearning.scala allows you to build neural networks from mathematical formulas. It supports floats, doubles, GPU-accelerated N-dimensional arrays, and calculates derivatives of the weights in the formulas.
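The idea of computing a value together with its derivative can be sketched in plain Scala using forward-mode dual numbers. This is a deliberately simplified model, not DeepLearning.scala's implementation (the library differentiates monadic computation graphs in reverse mode); all names below are illustrative:

```scala
// A dual number carries a value and its derivative through arithmetic.
final case class Dual(value: Double, derivative: Double) {
  def +(that: Dual): Dual =
    Dual(value + that.value, derivative + that.derivative)
  def *(that: Dual): Dual = // product rule: (fg)' = f'g + fg'
    Dual(value * that.value, derivative * that.value + value * that.derivative)
}

object DualDemo {
  // f(x) = x * x + x, so f'(x) = 2x + 1
  def f(x: Dual): Dual = x * x + x

  def main(args: Array[String]): Unit = {
    val x = Dual(3.0, 1.0) // seed the derivative dx/dx = 1
    val y = f(x)
    println(s"f(3) = ${y.value}, f'(3) = ${y.derivative}") // f(3) = 12.0, f'(3) = 7.0
  }
}
```

Evaluating `f` once produces both the value and the exact derivative, without symbolic manipulation or numerical approximation.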

Dynamic neural networks

Unlike some other deep learning toolkits, the structure of a neural network in DeepLearning.scala is determined dynamically at runtime. Our neural networks are programs: all Scala features, including functions, expressions, and control flow, are available inside them.

For example:

def ordinaryScalaFunction(a: INDArray): Boolean = {
  // An ordinary, non-differentiable Scala predicate on an nd4j array
  a.signnum.sumT > math.random
}

def myDynamicNeuralNetwork(input: INDArray) = INDArrayLayer(monadic[Do] {
  val outputOfLayer1 = layer1(input).forward.each
  if (ordinaryScalaFunction(outputOfLayer1.data)) {
    dynamicallySelectedLayer2(outputOfLayer1).forward.each
  } else {
    dynamicallySelectedLayer3(outputOfLayer1).forward.each
  }
})

The above neural network routes to different subnetworks according to the result of an ordinary Scala function.

With the ability to create dynamic neural networks, ordinary programmers can build complex neural networks from simple code. You write code almost as usual; the only difference is that code based on DeepLearning.scala is differentiable, which enables it to evolve by continuously modifying its parameters.

Functional programming

DeepLearning.scala 2.0 is based on Monads, which are composable: a complex layer can be built from primitive operators or from higher-order functions like map/reduce. Along with the Monad, we provide an Applicative type class to perform multiple calculations in parallel.

For example, the previous example can be rewritten in higher-order function style as follows:

def myDynamicNeuralNetwork(input: INDArray) = INDArrayLayer {
  layer1(input).forward.flatMap { outputOfLayer1 =>
    if (ordinaryScalaFunction(outputOfLayer1.data)) {
      dynamicallySelectedLayer2(outputOfLayer1).forward
    } else {
      dynamicallySelectedLayer3(outputOfLayer1).forward
    }
  }
}
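The distinction between monadic and applicative composition can be sketched with the standard library's Future in place of the library's own monad (an illustrative stand-in, not DeepLearning.scala's API): flatMap expresses a step that depends on an earlier result and therefore runs sequentially, while applicative-style combination of independent steps may run in parallel.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object ComposeDemo {
  // Monadic composition: the second step consumes the first step's
  // result, so the two computations must run one after another.
  def sequential: Future[Int] =
    Future(1).flatMap(a => Future(a + 2))

  // Applicative-style composition: the two computations are
  // independent, so they may execute in parallel before being combined.
  def parallel: Future[Int] = {
    val fa = Future(1)
    val fb = Future(2)
    fa.zipWith(fb)(_ + _)
  }

  def main(args: Array[String]): Unit = {
    println(Await.result(sequential, 5.seconds)) // 3
    println(Await.result(parallel, 5.seconds))   // 3
  }
}
```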

The key construct in DeepLearning.scala 2.0 is the dependent type class DeepLearning, which witnesses a differentiable expression. In other words, given the DeepLearning type class instance, you can activate the deep learning ability of any type.
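The dependent type class pattern can be sketched in plain Scala. The names below are illustrative, not the library's actual API (the real DeepLearning type class also carries a Delta type for derivatives and monadic forward passes), but they show how a type class instance carries a dependent result type and grants a capability to any type:

```scala
// A "dependent type class": the result type Data depends on the instance.
trait Differentiable[Expression] {
  type Data // the value type produced by the expression
  def compute(e: Expression): Data
}

object Differentiable {
  // The Aux pattern exposes the dependent type as a type parameter.
  type Aux[E, D] = Differentiable[E] { type Data = D }

  implicit val doubleLiteral: Aux[Double, Double] =
    new Differentiable[Double] {
      type Data = Double
      def compute(e: Double): Double = e
    }

  // Given an instance, any type E gains the ability; the return type
  // d.Data depends on which instance was found.
  def run[E](e: E)(implicit d: Differentiable[E]): d.Data = d.compute(e)
}
```

Calling `Differentiable.run(42.0)` resolves the `Double` instance implicitly and returns a `Double`, statically checked.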

Object-oriented programming

The code base of DeepLearning.scala 2.0 is organized according to the Dependent Object Types (DOT) calculus. All features are provided as plugins that can be mixed in. A plugin is able to change the APIs and behaviors of all DeepLearning.scala types. This approach not only resolves the expression problem, but also gives plugins the additional ability to virtually depend on other plugins.

For example, the author of an Adagrad optimizer plugin does not have to explicitly call any learning-rate-related functions. However, once a user enables both the Adagrad plugin and the FixedLearningRate plugin, the computation in FixedLearningRate is eventually invoked whenever the Adagrad optimization is executed.
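This "virtual dependency" between plugins can be sketched with plain Scala stackable traits. The sketch below is not the library's actual plugin API; the trait and method names are illustrative. The point is that Adagrad never mentions the learning rate, yet mixing in FixedLearningRate makes its `super.delta` call pick up the learning-rate computation through trait linearization:

```scala
object MixinDemo {
  trait Optimizer {
    // Each mixin refines how a raw gradient becomes a weight update.
    def delta(gradient: Double): Double = gradient
  }

  trait FixedLearningRate extends Optimizer {
    def learningRate: Double
    override def delta(gradient: Double): Double =
      super.delta(gradient) * learningRate
  }

  trait Adagrad extends Optimizer {
    private var sumOfSquares = 0.0
    def eps: Double = 1e-8
    override def delta(gradient: Double): Double = {
      sumOfSquares += gradient * gradient
      // Adagrad never calls learning-rate code directly, yet when
      // FixedLearningRate is also mixed in, super.delta applies it.
      super.delta(gradient) / math.sqrt(sumOfSquares + eps)
    }
  }

  def newOptimizer(rate: Double): Optimizer =
    new Optimizer with FixedLearningRate with Adagrad {
      val learningRate = rate
    }
}
```

With `learningRate = 0.1`, a first gradient of `2.0` yields `2.0 * 0.1 / sqrt(4.0) = 0.1`: Adagrad's division and FixedLearningRate's scaling compose without either trait referring to the other.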

Roadmap

v2.0

Version 2.0 is the current version with all of the above features.

v3.0

  • Support element-wise map/reduce and other higher-order functions on GPU.
  • Support distributed models and distributed training on Spark.


Acknowledgements

DeepLearning.scala is sponsored by ThoughtWorks.

DeepLearning.scala is heavily inspired by my colleague @MarisaKirisame. Originally, we worked together on a prototype of a deep learning framework, and eventually split our work into this project and DeepDarkFantasy. Other contributors can be found here.

Related projects

  • Shapeless provides a solid foundation for type-level programming used in DeepLearning.scala.
  • Scalaz and Algebra provide type classes used in DeepLearning.scala.
  • ThoughtWorks Each provides async/await-like syntax. You may want to use it to control your training process in an imperative style.
  • nd4j provides numerical computing used in DeepLearning.scala 2.0.
  • Compute.scala provides numerical computing used in DeepLearning.scala 3.0.
  • RAII.scala, future.scala and tryt.scala provide monadic asynchronous resource management used in DeepLearning.scala.
  • Plugins of DeepLearning.scala are based on Factory and other dependent type classes in feature.scala.
  • Import.scala is a Scala compiler plugin that enables magic imports. You may need it in sbt projects that use DeepLearning.scala plugins hosted on GitHub Gist.
  • DeepLearning.scala can run in Jupyter Scala or Ammonite.
  • The unit tests of DeepLearning.scala are written in ScalaTest and example.scala syntax.
  • Some type classes in DeepLearning.scala are created by simulacrum's @typeclass annotation.

Project Statistics

Sourcerank 13
Repository Size 8.43 MB
Stars 680
Forks 76
Watchers 81
Open issues 31
Dependencies 0
Contributors 15
Tags 48

Top Contributors

杨博 (Yang Bo) 张志豪 Zhang Xiuyu chank Rabenda Shao Cheng Steven Xu Matthew Liu tw-data-china-go-cd Stephen Samuel Jan Vincent Liwanag 雾雨魔理沙 惊帆 Gerard de Melo The Gitter Badger

Packages Referencing this Repo

com.thoughtworks.deeplearning:plugins-rawfloatlayers_2.11
plugins-RawFloatLayers
Latest release 2.0.0-RC1 - Updated - 680 stars
com.thoughtworks.deeplearning:plugins-hlists_2.11
plugins-HLists
Latest release 2.2.0-M1 - Published - 680 stars
com.thoughtworks.deeplearning:plugins-rawdoublelayers_2.11
plugins-RawDoubleLayers
Latest release 2.0.0-RC1 - Updated - 680 stars
com.thoughtworks.deeplearning:tape_2.11
Tape
Latest release 2.0.0-M4 - Updated - 680 stars
com.thoughtworks.deeplearning:plugins-logging_2.12
plugins-Logging
Latest release 2.2.0-M1 - Published - 680 stars
com.thoughtworks.deeplearning:plugins-doubleweights_2.12
plugins-DoubleWeights
Latest release 2.2.0-M1 - Updated - 680 stars
com.thoughtworks.deeplearning:plugins-weights_2.12
plugins-Weights
Latest release 2.2.0-M1 - Updated - 680 stars
com.thoughtworks.deeplearning:differentiablecoproduct_2.12
DifferentiableCoproduct
Latest release 1.0.0 - Updated - 680 stars
com.thoughtworks.deeplearning:plugins-cumulativedoublelayers_2.12
plugins-CumulativeDoubleLayers
Latest release 2.2.0-M1 - Updated - 680 stars
com.thoughtworks.deeplearning:plugins-builtins_2.11
plugins-Builtins
Latest release 2.2.0-M1 - Updated - 680 stars
com.thoughtworks.deeplearning:coproduct_2.10
coproduct
Latest release 1.0.0-M3 - Updated - 680 stars
com.thoughtworks.deeplearning:plugins-indarraytraining_2.11
plugins-INDArrayTraining
Latest release 2.2.0-M1 - Updated - 680 stars
com.thoughtworks.deeplearning:deeplearning_2.12
DeepLearning
Latest release 2.2.0-M1 - Updated - 680 stars
com.thoughtworks.deeplearning:poly_2.11
Poly
Latest release 1.0.0 - Updated - 680 stars
com.thoughtworks.deeplearning:conversion_2.11
Implicit conversion functions that convert from native types to differentiable layers and batches.
Latest release 1.0.0-M4 - Published - 680 stars
com.thoughtworks.deeplearning:sbt-nd4j_2.11
sbt-nd4j
Latest release 1.0.0-M0 - Published - 680 stars
com.thoughtworks.deeplearning:coproduct_2.11
coproduct
Latest release 1.0.0-M3 - Updated - 680 stars
com.thoughtworks.deeplearning:plugins-implicitssingleton_2.11
plugins-ImplicitsSingleton
Latest release 2.2.0-M1 - Updated - 680 stars
com.thoughtworks.deeplearning:layer_2.12
Layer
Latest release 1.0.0 - Updated - 680 stars
com.thoughtworks.deeplearning:plugins-indarraylayers_2.11
plugins-INDArrayLayers
Latest release 2.2.0-M1 - Updated - 680 stars

Recent Tags

v2.2.0-M1 May 24, 2018
v2.2.0-M0 May 23, 2018
v2.0.3 December 11, 2017
v2.0.2 November 17, 2017
v2.0.1 August 04, 2017
v2.0.0 July 26, 2017
v2.0.0-RC7 July 25, 2017
v2.0.0-RC6 July 23, 2017
v2.0.0-RC5 July 17, 2017
v2.0.0-RC4 July 14, 2017
v2.0.0-RC3 July 12, 2017
v2.0.0-RC2 July 12, 2017
v2.0.0-RC1 June 28, 2017
v2.0.0-RC0 June 22, 2017
v2.0.0-M5 June 16, 2017

Interesting Forks

Atry/DeepLearning.scala
A simple DSL for creating complex neural networks
Scala - Apache-2.0 - Last pushed - 1 stars

Last synced: 2018-04-25 11:03:25 UTC