ICLR 2016 Takeaways: Adversarial Models & Optimization

Filed under Machine Learning.

Takeaways and favorite papers from ICLR. Last week, three members of indico’s Advanced Development team attended the International Conference on Learning Representations (ICLR). ICLR focuses mainly on representation learning, or working with raw data to build better features to solve complex problems. This covers ideas such as deep learning, kernel learning, compositional models, as… Read more

The Good, Bad, & Ugly of TensorFlow

Filed under Machine Learning.

A survey of six months’ rapid evolution (+ tips/hacks and code to fix the ugly stuff). We’ve been using TensorFlow in daily research and engineering since it was released almost six months ago. We’ve learned a lot of things along the way. Time for an update! Because there are many subjective articles on TensorFlow and… Read more

TensorFlow Data Input (Part 1): Placeholders, Protobufs & Queues

Filed under Machine Learning, Machine Learning Tutorials.

TensorFlow is a great new deep learning framework provided by the team at Google Brain. It supports the symbolic construction of functions (similar to Theano) to perform some computation, generally a neural network-based model. Unlike Theano, TensorFlow supports a number of ways to feed data into your machine learning model. The process of getting… Read more
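The excerpt cuts off before the mechanics, so here is a minimal sketch of the simplest of those input methods: feeding NumPy arrays through a placeholder with feed_dict. It assumes the TensorFlow 1.x-style API that was current when the post was written, and the shapes and data are made up for illustration.

```python
# Minimal placeholder + feed_dict input sketch (TensorFlow 1.x-style API).
# Shapes and data below are stand-ins, not values from the post.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 784])  # a batch of flattened images
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, w) + b

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(32, 784).astype(np.float32)  # fake input batch
    out = sess.run(logits, feed_dict={x: batch})         # data enters via feed_dict
    print(out.shape)  # (32, 10)
```

Placeholders are the simplest route; the protobuf and queue approaches named in the post's title trade that simplicity for throughput on larger datasets.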

Deep Advances in Generative Modeling

Filed under Machine Learning, Machine Learning Tutorials.

Earlier this month, Alec Radford, indico’s Head of Research, gave a talk at the Boston ML Forum. He presented an overview of recent work in generative modeling, including research that he, Luke Metz, and Soumith Chintala (FAIR) released in November 2015: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. Video and slides below… Read more

Getting Started with MXNet

Filed under Machine Learning, Machine Learning Tutorials.

With so many other frameworks already out there, why MXNet? MXNet is a modern interpretation and rewrite of a number of ideas being discussed in the deep learning infrastructure community. It’s designed from the ground up to work well with multiple GPUs and multiple computers. When doing multi-device work in other frameworks, the end user frequently has to… Read more
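The excerpt ends before any code, so below is a minimal sketch of the point it is making: with MXNet's symbolic Module API (current at the time of the post), data-parallel multi-GPU training is mostly a matter of listing devices in `context` rather than rewriting the model. The toy network, random data, and two-GPU device list are illustrative assumptions, not the post's own example.

```python
# Minimal MXNet (symbolic / Module API) sketch of multi-device training.
# The toy network and random data are stand-ins for illustration.
import mxnet as mx
import numpy as np

data = mx.symbol.Variable('data')
fc1 = mx.symbol.FullyConnected(data=data, num_hidden=128)
act1 = mx.symbol.Activation(data=fc1, act_type='relu')
fc2 = mx.symbol.FullyConnected(data=act1, num_hidden=10)
net = mx.symbol.SoftmaxOutput(data=fc2, name='softmax')

# Multi-GPU data parallelism: just list the devices. Each batch is split
# across them automatically. Swap in [mx.cpu()] if no GPUs are available.
mod = mx.mod.Module(symbol=net, context=[mx.gpu(0), mx.gpu(1)])

X = np.random.rand(1000, 64).astype('float32')
y = np.random.randint(0, 10, size=(1000,))
train_iter = mx.io.NDArrayIter(X, y, batch_size=50)

mod.fit(train_iter, optimizer='sgd', num_epoch=2)
```

The same symbol and Module run unchanged on one device or several; only the `context` list changes.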

Exploring Computer Vision (Part II): Transfer Learning

Filed under Machine Learning.

Welcome back to our three-part series on computer vision. In the previous post, we discussed convolutional neural networks (CNNs). This post will assume that you have a basic understanding of CNNs; we encourage you to reread the first post if you want a refresher on convolutional networks. Introduction to Transfer Learning. When we start… Read more
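Since the excerpt stops just as transfer learning is introduced, here is a minimal sketch of the idea in its most common form: reuse a CNN pretrained on ImageNet as a fixed feature extractor and train a small classifier on top. This is a generic illustration, not the code from the post; it assumes Keras (for the pretrained VGG16 weights) and scikit-learn, and the images and labels are random stand-ins.

```python
# Transfer learning as fixed feature extraction: a generic sketch,
# not the post's own code. Assumes Keras + scikit-learn are installed.
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.linear_model import LogisticRegression

# Convolutional base pretrained on ImageNet; include_top=False drops
# the original 1000-way classifier so we can reuse the features.
base = VGG16(weights='imagenet', include_top=False)

# Stand-in dataset: 10 RGB images at 224x224 with made-up binary labels.
images = np.random.rand(10, 224, 224, 3) * 255.0
labels = np.random.randint(0, 2, size=10)

features = base.predict(preprocess_input(images))  # shape (10, 7, 7, 512)
features = features.reshape(len(features), -1)     # one vector per image

# A small classifier learns the new task from the frozen CNN features.
clf = LogisticRegression().fit(features, labels)
print(clf.score(features, labels))
```

Fine-tuning, where some of the pretrained layers are also updated on the new data, is the natural next step once more labeled examples are available.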