# Feed Forward Propagation from Andrew Ng Machine Learning Lecture

Neural networks are a series of stacked layers; the deeper the network, the more layers it has. Layer 1 is called the input layer and Layer 3 is called the output layer. The intermediate layers are called hidden layers; their number varies with the complexity of the network. In this case, Layer 2 is the hidden layer. The input vector is $x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$

$a = g(\theta^T x)$, where $g$ is the activation function, $x$ is the input vector, and $a$ is the activation output.
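A minimal sketch of this computation for a single unit, using the sigmoid as the activation function $g$ (as in the lecture); the input and weight values below are illustrative, not from the course:

```python
import numpy as np

def g(z):
    """Sigmoid activation: g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical input vector x = [x1, x2, x3] and weight vector theta
# for one unit (values chosen for illustration only).
x = np.array([1.0, 0.5, -0.5])
theta = np.array([0.2, -0.4, 0.6])

# Activation of the unit: a = g(theta^T x)
a = g(theta @ x)
print(a)
```

Because the sigmoid squashes its input, `a` always lies in the open interval (0, 1).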

## Dimension of weight matrix

Let $S$ be the number of units in Layer 1.

$S = 3$ (discarding the bias unit).

Let $T$ be the number of units in Layer 2.

$T = 3$ (discarding the bias activation unit).

The dimension of the weight matrix is $T \times (S + 1)$: one row per unit in Layer 2, one column per unit in Layer 1 plus the bias unit.

So the weight matrix here is $3 \times 4$.
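This dimension can be checked by propagating one input through the layer. The sketch below assumes a sigmoid activation and uses made-up weight values; the bias unit (a fixed 1) is prepended to the input, giving a vector of length $S + 1 = 4$ that a $3 \times 4$ matrix maps to the $T = 3$ activations of Layer 2:

```python
import numpy as np

def g(z):
    """Sigmoid activation, applied element-wise."""
    return 1.0 / (1.0 + np.exp(-z))

# Layer 1 has S = 3 units, Layer 2 has T = 3 units,
# so Theta has shape T x (S + 1) = 3 x 4.
# Weight values are illustrative only.
Theta = np.array([[0.1,  0.2, -0.1,  0.4],
                  [0.0, -0.3,  0.5,  0.2],
                  [0.7,  0.1,  0.1, -0.2]])

x = np.array([1.0, 0.5, -0.5])      # the S = 3 inputs
a1 = np.concatenate(([1.0], x))     # prepend bias unit -> length S + 1 = 4
a2 = g(Theta @ a1)                  # Layer-2 activations, length T = 3

print(Theta.shape)  # (3, 4)
print(a2.shape)     # (3,)
```

The shape check `(3, 4) @ (4,) -> (3,)` mirrors the $T \times (S+1)$ rule directly.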
