Clojure Machine Learning - Generating Art (Feed Forward, Hill Climb)

#### What Will I Learn?

- You will learn how to create a feed forward neural network in Clojure
- You will learn how to build a hill climb algorithm 
- You will learn how to use Clojure Quil to sketch drawings 

#### Requirements

- A firm knowledge of Lisp or Clojure
- A decent knowledge of Linear Algebra and Machine Learning concepts
- The Clojure Leiningen build tool and Java JDK 8+

#### Difficulty

- Advanced

#### Description

In this **Clojure tutorial**, we use the **Quil library** to build a picture/sketch-generating **Feed Forward Neural Network** trained with a **Hill Climb Algorithm**.  There are various ways to approach this type of problem in Clojure. Clojure's small but powerful core set of language features lets us build our **Feed Forward Neural Network** in very few lines of code.

First, we build out a set of **generic mathematical functions**.  We need functions that let us perform **Linear Algebra** on the **Clojure Vector data type**, including **Multiplication**, **Subtraction** and the **Dot Product**.  We also create a few functions that **generate random Matrices and Vectors** of a given size. We then implement basic versions of the **Sigmoid function** and the **Hyperbolic Tangent function**, which we use as our Transfer Functions.  These functions **normalize the inputs and outputs** based on the weights in our network.
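To make this concrete, here is a minimal sketch of the kind of vector-math helpers described above, written against Clojure's core vectors and `java.lang.Math`. The function names are illustrative, not the project's own.

```clojure
(defn dot-product
  "Sum of the element-wise products of two vectors."
  [v1 v2]
  (reduce + (map * v1 v2)))

(defn subtract
  "Element-wise subtraction of two vectors."
  [v1 v2]
  (mapv - v1 v2))

(defn rand-vector
  "A vector of n random weights in the range [-1, 1)."
  [n]
  (vec (repeatedly n #(- (rand 2) 1))))

(defn rand-matrix
  "A rows x cols matrix of random weights."
  [rows cols]
  (vec (repeatedly rows #(rand-vector cols))))

(defn sigmoid
  "Logistic transfer function; squashes x into (0, 1)."
  [x]
  (/ 1.0 (+ 1.0 (Math/exp (- x)))))

(defn tanh
  "Hyperbolic tangent transfer function; squashes x into (-1, 1)."
  [x]
  (Math/tanh x))
```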

Next, we build out a library that lets us actually construct our **Feed Forward Neural Network**. We use **Clojure Records** to build a network of layers which we can then **recursively iterate** through and perform operations on.  We add some basic **genetic algorithm** ideas in the form of our **Hill Climb Algorithm**: we nudge the weights by a small, arbitrary step and incrementally search for values that work as solutions to our problem, keeping only the changes that move the network closer to the expected outputs we are training it towards.
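Below is a rough sketch of how a layer record, a feed-forward pass and a hill-climb step might fit together, reusing the math helpers from the previous sketch. The record and function names are assumptions made for illustration, not the project's actual API.

```clojure
(defrecord Layer [weights])          ; weights is a matrix: one row per neuron

(defn feed-forward
  "Push an input vector through each layer, applying the transfer function
   to every neuron's weighted sum."
  [layers input]
  (reduce (fn [in {:keys [weights]}]
            (mapv #(tanh (dot-product in %)) weights))
          input
          layers))

(defn mutate-layer
  "Nudge every weight by a small random amount (the hill-climb step)."
  [step {:keys [weights]}]
  (->Layer (mapv (fn [row]
                   (mapv #(+ % (- (rand (* 2 step)) step)) row))
                 weights)))

(defn hill-climb
  "Keep a mutated network only if it scores closer to the expected outputs."
  [layers input expected step]
  (let [error     (fn [net]
                    (reduce + (map #(Math/abs (double (- %1 %2)))
                                   (feed-forward net input) expected)))
        candidate (mapv (partial mutate-layer step) layers)]
    (if (< (error candidate) (error layers))
      candidate
      layers)))
```

Calling `hill-climb` repeatedly in a loop gradually pulls the weights towards values whose outputs match the expected values, without any gradient computation.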

Finally, we use the network's inputs and outputs along with the **Quil library** to actually draw random pictures.  The inputs are points given by the **X and Y coordinates** of the window, and the outputs are used to assign values to the colors that fill the created shapes.  The more **Neurons, Layers and Outputs** assigned to the **Neural Network**, the more complex and interesting the sketch will be.  More complex **Networks** will also tax the computer's resources, because this particular implementation has no GPU optimization.
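As an illustration of how this might be wired into Quil, the sketch below feeds each pixel's scaled coordinates through the network from the previous sketches and uses three outputs as RGB values. The namespace, layer sizes and Quil settings here are assumptions for the example, not the project's exact setup, and the helpers (`->Layer`, `rand-matrix`, `feed-forward`) are the ones sketched above.

```clojure
(ns neural-art.core
  (:require [quil.core :as q]))

(def width 300)
(def height 300)

;; 2 inputs (x, y) -> 8 hidden neurons -> 3 outputs (r, g, b),
;; built with the helpers from the earlier sketches.
(def layers
  [(->Layer (rand-matrix 8 2))
   (->Layer (rand-matrix 3 8))])

(defn draw []
  (doseq [x (range width)
          y (range height)]
    ;; Feed the scaled pixel coordinates through the network and use the
    ;; three outputs as the color of that point.
    (let [[r g b] (feed-forward layers [(/ x (double width))
                                        (/ y (double height))])]
      (q/stroke (* 255 (Math/abs (double r)))
                (* 255 (Math/abs (double g)))
                (* 255 (Math/abs (double b))))
      (q/point x y))))

(q/defsketch neural-art
  :title "Neural Art"
  :size [width height]
  :setup (fn [] (q/no-loop))   ; draw the picture once rather than every frame
  :draw draw)
```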

The source code for this project can be found [here](https://github.com/tensor-programming/Clojure_Nerual_Art).

#### Video Tutorial
<iframe width="560" height="315" src="https://www.youtube.com/embed/u7qg-anw2io" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>

    

<br /><hr/><em>Posted on <a href="https://utopian.io/utopian-io/@tensor/clojure-machine-learning-generating-art-feed-forward-hill-climb">Utopian.io -  Rewarding Open Source Contributors</a></em><hr/>