indico Blog

Resources for exploring machine learning and data science

May 2, 2016

Posted by
Luke Metz

TensorFlow Data Input (Part 2): Extensions & Hacks

Luke expands on the data input methods he discussed in Part 1 of this mini blog series, highlighting a hybrid approach that combines those methods for faster training, as well as some extensions to the demo.

April 25, 2016

Posted by
Luke Metz

TensorFlow Data Input (Part 1): Placeholders, Protobufs & Queues

TensorFlow is a great new deep learning framework that supports the symbolic construction of functions (similar to Theano) to perform some computation, generally a neural network-based model. Luke, one of our machine learning researchers, discusses several methods for feeding data into a machine learning model using this framework.
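For readers who haven't seen the placeholder approach before, here is a minimal sketch (not code from the post itself) of feeding data into a TensorFlow graph through a placeholder and feed_dict, using the TensorFlow 1.x-era API; the shapes and names are purely illustrative.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x-era API, as used at the time of the post

# A placeholder stands in for input data in the symbolic graph; the actual
# values are supplied at run time through feed_dict.
x = tf.placeholder(tf.float32, shape=[None, 4], name="x")
w = tf.Variable(tf.random_normal([4, 1]), name="w")
y = tf.matmul(x, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(8, 4).astype(np.float32)  # toy stand-in for real data
    print(sess.run(y, feed_dict={x: batch}))
```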

April 11, 2016

Posted by
Nathan Lintz

Sequence Modeling With Neural Networks (Part 1): Language & Seq2Seq

Sequence to sequence problems address areas such as machine translation, where an input sequence in one language is converted into a sequence in another language. Learn the foundations of sequence to sequence models and how neural networks can be used to build powerful models capable of analyzing data that varies over time.
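To make the "sequence in, sequence out" framing concrete, here is a toy, untrained encoder-decoder skeleton in plain numpy (purely illustrative, not the post's own model): the encoder folds a variable-length input into a fixed-size state vector, and the decoder unrolls that state into an output sequence of a possibly different length.

```python
import numpy as np

# Toy encoder-decoder skeleton with random weights; it only illustrates the
# data flow of a seq2seq model, not a trained translation system.
rng = np.random.RandomState(0)
hidden, vocab = 16, 10
W_enc = rng.randn(hidden, hidden + vocab) * 0.1
W_dec = rng.randn(hidden, hidden) * 0.1
W_out = rng.randn(vocab, hidden) * 0.1

def one_hot(token):
    v = np.zeros(vocab)
    v[token] = 1.0
    return v

def encode(tokens):
    h = np.zeros(hidden)
    for t in tokens:                      # consume the input one step at a time
        h = np.tanh(W_enc @ np.concatenate([h, one_hot(t)]))
    return h                              # fixed-size summary of the whole sequence

def decode(h, steps):
    out = []
    for _ in range(steps):                # emit the output one step at a time
        h = np.tanh(W_dec @ h)
        out.append(int(np.argmax(W_out @ h)))
    return out

# A 5-token input can map to a 3-token output: lengths need not match.
print(decode(encode([1, 2, 3, 4, 5]), steps=3))
```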

March 21, 2016

Posted by
indico

Deep Advances in Generative Modeling

In recent years, deep learning approaches have come to dominate discriminative problems in many sub-areas of machine learning. Alongside this, they have also powered exciting improvements in generative and conditional modeling of richly structured data such as text, images, and audio. Alec Radford presents new advances in generative modeling at Boston ML Forum 2016.

March 14, 2016

Posted by
Dan Kuster

A Fast Method to Stream Data from Big Data Sources

Modern computers are quite powerful at processing streams of data. You shouldn't have to resort to a Hadoop cluster just to process data you want to use locally. There has to be a better way, right?
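As a flavor of what a single machine can do, here is a hedged sketch (not code from the post) of streaming a large gzipped CSV with a Python generator so memory use stays flat regardless of file size; the file name and column name are hypothetical.

```python
import csv
import gzip

def stream_rows(path):
    # Lazily yield one row at a time; the whole file is never held in memory.
    with gzip.open(path, 'rt', newline='') as f:
        for row in csv.DictReader(f):
            yield row

def running_mean(values):
    # Accumulate a mean over an iterable without materializing it.
    total, count = 0.0, 0
    for v in values:
        total += v
        count += 1
    return total / count if count else 0.0

# Example usage (file and column names are hypothetical):
# mean = running_mean(float(r['score']) for r in stream_rows('big_data.csv.gz'))
```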

March 7, 2016

Posted by
Luke Metz

Getting Started with MXNet

MXNet is a lightweight, portable, and flexible distributed deep learning framework with mobile deployment capabilities. Its focus on speed makes it notably faster than Theano and TensorFlow on both single and multiple GPUs. Luke Metz, a member of our Advanced Development team, walks through the process of building a small neural network from the bottom up in this tutorial.
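For a taste of the tutorial's subject matter, here is a minimal sketch of a two-layer MLP built and trained with MXNet's symbol and Module APIs (an assumption about API level on my part, not the tutorial's exact code); the layer sizes, toy data, and names are illustrative.

```python
import numpy as np
import mxnet as mx

# Symbolically define a tiny two-layer MLP.
data = mx.sym.Variable('data')
fc1  = mx.sym.FullyConnected(data=data, num_hidden=64, name='fc1')
act1 = mx.sym.Activation(data=fc1, act_type='relu', name='relu1')
fc2  = mx.sym.FullyConnected(data=act1, num_hidden=10, name='fc2')
net  = mx.sym.SoftmaxOutput(data=fc2, name='softmax')

# Random toy data stands in for a real dataset.
X = np.random.rand(1000, 100).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
train_iter = mx.io.NDArrayIter(X, y, batch_size=32, shuffle=True)

# Bind the symbol to a device and train for a couple of epochs.
mod = mx.mod.Module(symbol=net, context=mx.cpu())
mod.fit(train_iter, num_epoch=2,
        optimizer='sgd', optimizer_params={'learning_rate': 0.1})
```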

February 22, 2016

Posted by
Nathan Lintz

Exploring Computer Vision (Part II): Transfer Learning

New approaches in machine learning allow us to reuse parts of a model trained on a task in one domain to solve problems in other domains. This means considerably less data and time are needed to create new, powerful algorithms.
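As an illustration of the idea (not the post's own code), here is a hedged Keras sketch of transfer learning: an ImageNet-pretrained convnet is frozen and reused as a feature extractor while only a small new classification head is trained. The choice of base network, input size, and class count are all assumptions made for the example.

```python
import numpy as np
from tensorflow import keras

# Reuse a pretrained convnet as a frozen feature extractor.
base = keras.applications.VGG16(include_top=False, pooling='avg',
                                input_shape=(96, 96, 3), weights='imagenet')
base.trainable = False  # keep the pretrained features fixed

# Only this small head is trained on the new task (5 classes, illustrative).
model = keras.Sequential([
    base,
    keras.layers.Dense(5, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Toy stand-in data; in practice this would be the new domain's images.
X = np.random.rand(32, 96, 96, 3).astype('float32')
y = np.random.randint(0, 5, size=32)
model.fit(X, y, epochs=1, batch_size=8)
```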
