Karthik Yearning Deep Learning

5 Interesting papers from Google AI - Nov01

One Model To Learn Them All   This paper demonstrates a single model that solves problems spanning multiple domains. The model is trained on ImageNet, the COCO dataset, a speech recognition corpus, and an English parsing task.   Fluid Annotation   A tool for image annotation. This is a model which perfor... Read more

Summary - Densenet

DenseNet - Densely Connected Convolutional Networks Paper Let’s understand DenseNet. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one betw... Read more
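The dense connectivity the excerpt quotes can be illustrated with a toy sketch: each layer receives the concatenation of the input and all preceding layers' outputs. This is a minimal NumPy illustration on flat feature vectors (the function name, sizes, and random weights are my own, not from the paper):

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Toy dense block: each layer sees the concatenation of the
    input and every preceding layer's output (dense connectivity)."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features)          # all previous feature maps
        w = rng.standard_normal((growth_rate, inp.size)) * 0.01
        out = np.maximum(w @ inp, 0)            # linear map + ReLU
        features.append(out)
    return np.concatenate(features)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
y = dense_block(x, num_layers=4, growth_rate=8, rng=rng)
# output width = 16 input features + 4 layers x 8 new features = 48
```

Because every layer connects to every later layer, a block with L layers has L(L+1)/2 connections instead of the usual L.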

Different Convolutions

Pointwise Convolution or 1x1 convolution It is a convolution with a 1x1 kernel. This helps in pixel-by-pixel feature extraction, but its main purpose is channel reduction: when the number of channels from the previous layer needs to be reduced, we use a 1x1 convolution. This lowers the computational cost. For all the features lear... Read more
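A 1x1 convolution is just a per-pixel linear map across channels, so channel reduction can be sketched in a few lines of NumPy (the shapes and names here are illustrative, not from the post):

```python
import numpy as np

def pointwise_conv(x, w):
    """1x1 convolution: a linear map over channels, applied
    independently at every spatial position.
    x: (H, W, C_in) feature map, w: (C_in, C_out) kernel weights."""
    return x @ w

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 64))    # 64-channel feature map
w = rng.standard_normal((64, 16))      # shrink 64 channels down to 16
y = pointwise_conv(x, w)
# y has shape (8, 8, 16): spatial size unchanged, channels reduced
```

Note that the spatial dimensions are untouched; only the channel dimension shrinks, which is exactly why 1x1 convolutions cut the cost of the 3x3 convolutions that follow them.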

Summary - A Dataset and Architecture for Visual Reasoning with a Working Memory

Link: Google AI Paper: Available here This paper from the Google Brain Team is about visual question answering and visual reasoning. It addresses the shortcomings of Visual Question Answering (VQA) datasets with additional parametric information such as time and memory. Additionally, there is a Reasoning Agent whose task is, as quoted from... Read more

Intro to Receptive field from a Research Paper

Understanding the Effective Receptive Field in Deep Convolutional Neural Networks The receptive field size is a crucial issue in many visual tasks, as the output must respond to large enough areas in the image to capture information about large objects. The effective receptive field is shown to have a Gaussian distribution and onl... Read more
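The theoretical receptive field the excerpt refers to grows layer by layer and can be computed with the standard recurrence (this helper is my own sketch, not code from the paper; the paper's point is that the *effective* receptive field is smaller than this theoretical bound):

```python
def receptive_field(layers):
    """Theoretical receptive field of a conv stack.
    layers: list of (kernel_size, stride) pairs.
    Uses r_out = r_in + (k - 1) * j, where j is the cumulative
    stride ("jump") accumulated over the preceding layers."""
    r, j = 1, 1
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r

# three stacked 3x3 stride-1 convs cover a 7x7 theoretical receptive field
print(receptive_field([(3, 1)] * 3))
```

Strides multiply the jump, so a single stride-2 layer early in the network doubles how fast every later layer's receptive field grows.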

Feed Forward Propagation from Andrew Ng Machine Learning Lecture

Neural networks are a series of stacked layers. The deeper the network, the higher the number of layers. Layer 1 is called the input layer and Layer 3 is called the output layer. The intermediate layers are called hidden layers. The number of hidden layers may vary depending on the network complexity. In this case, Layer 2 is the hidden layer. ... Read more
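The three-layer setup described above (input layer, one hidden layer, output layer) can be sketched in NumPy with sigmoid activations, as in the Andrew Ng lectures (the sizes and weight values here are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Feed-forward propagation through a 3-layer network."""
    a1 = x                       # Layer 1: input layer
    a2 = sigmoid(W1 @ a1 + b1)   # Layer 2: hidden layer activations
    a3 = sigmoid(W2 @ a2 + b2)   # Layer 3: output layer
    return a3

rng = np.random.default_rng(0)
x = rng.standard_normal(3)                          # 3 input features
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)   # 4 hidden units
W2, b2 = rng.standard_normal((1, 4)), np.zeros(1)   # 1 output unit
y = forward(x, W1, b1, W2, b2)
```

Each layer is just a matrix multiply plus a bias, passed through a nonlinearity; stacking more hidden layers repeats the same `W @ a + b` step.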

How to use GPU on Tensorflow

What is a GPU? A Graphics Processing Unit is an electronic circuit designed to perform computations more rapidly than a CPU. Modern GPUs are very efficient at manipulating computer graphics and image processing, and their highly parallel structure makes them more efficient than general-purpose CPUs for algorithms where the processing of large blocks of d... Read more
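Checking for a GPU and pinning an op to it looks roughly like this in TensorFlow 2.x (a minimal sketch, assuming TF 2 is installed; on a machine without a GPU it falls back to the CPU):

```python
import tensorflow as tf

# List visible GPUs; an empty list means TensorFlow will use the CPU.
gpus = tf.config.list_physical_devices('GPU')
print("GPUs available:", gpus)

# Pin an op to a specific device explicitly.
device = '/GPU:0' if gpus else '/CPU:0'
with tf.device(device):
    a = tf.random.normal((1024, 1024))
    b = tf.random.normal((1024, 1024))
    c = tf.matmul(a, b)   # runs on the chosen device
print(c.device)
```

Large matrix multiplications like this are exactly the "large blocks of data processed in parallel" workload where GPUs outpace CPUs.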

Text Formatting Examples

Markdown Support As always, Jekyll offers support for GitHub Flavored Markdown, which allows you to format your posts using the Markdown syntax. Examples of these text formatting features can be seen below. You can find this post in the _posts directory. Basic Formatting With Markdown, it is possible to emphasize words by making them italiciz... Read more