TensorFlow review: The best deep learning library gets better

At version r1.5, Google's open source machine learning and neural network library is more capable, more mature, and easier to learn and use


If you looked at TensorFlow as a deep learning framework last year and decided that it was too hard or too immature to use, it might be time to give it another look.

[InfoWorld Editors' Choice award]

Since I reviewed TensorFlow r0.10 in October 2016, Google’s open source framework for deep learning has become more mature, implemented more algorithms and deployment options, and become easier to program. TensorFlow is now up to version r1.4.1 (stable version and web documentation), r1.5 (release candidate), and pre-release r1.6 (master branch and daily builds).

The TensorFlow project has been quite active. As a crude measure, the TensorFlow repository on GitHub currently has about 27,000 commits, 85,000 stars, and 42,000 forks. These are impressive numbers reflecting high activity and interest, exceeding even the activity on the Node.js repo. A comparable framework, MXNet, which is strongly supported by Amazon, has considerably lower activity metrics: fewer than 7,000 commits, about 13,000 stars, and fewer than 5,000 forks. Another statistic of note: as of the TensorFlow r1.0 release in February 2017, people were using TensorFlow in more than 6,000 open source repositories online.

Much of the information in my TensorFlow r0.10 review and my November 2016 TensorFlow tutorial is still relevant. In this review I will concentrate on the current state of TensorFlow as of January 2018, and bring out the important features added in the last year or so.

TensorFlow features

TensorFlow can train and run deep neural networks for handwritten digit classification, image recognition, word embeddings, recurrent neural networks, sequence-to-sequence models for machine translation, natural language processing, and PDE (partial differential equation)-based simulations, just as it did a year ago. It still supports CPUs and Nvidia GPUs. It still runs on Ubuntu Linux, MacOS, Android, iOS, and (better than it used to) Windows. It can still support production prediction at scale with the same models used for training, only more flexibly. It still does auto-differentiation, still has a model visualization tool in TensorBoard, and (sorry, R and Scala programmers) still offers its best support for use from the Python language.

Since r0.10, TensorFlow has released so many improvements, enhancements, and additional capabilities, along with fixes for many bugs, that I can only mention the highlights. For example, successive versions upgraded the CUDA and cuDNN library support, which increased performance by adopting code optimized for the latest Nvidia GPUs. TensorFlow gained HDFS (Hadoop File System) support, a much better Windows implementation, new solvers, and better Go support. After months of anticipation, Google released XLA, a domain-specific compiler for TensorFlow graphs that improves performance, along with a TensorFlow debugger. At the same time, TensorFlow started to play better with standard Python infrastructure such as PyPI and pip, and with the NumPy package widely used by the scientific computing community.

We saw a significant improvement in the RNN (recurrent neural network, often used for natural language processing) support, and new Intel MKL (Math Kernel Library) integration to improve deep learning performance on the CPU. On the ease-of-programming front, canned estimators (pre-built models, including several regressors and classifiers) were added to the library. Libraries were added for statistical distributions, signal processing primitives, and differentiable resampling of images. A TensorFlow-specific implementation of Keras (a high-level neural network API that in its standard implementation also runs on top of MXNet, Deeplearning4j, Microsoft Cognitive Toolkit, and Theano) was developed. The community development process showed its effectiveness as several contributed modules moved into the core library, and a server library improved production deployment.

A training dataset library was added, and given backwards compatibility guarantees; this is useful for developing new models for standard training datasets. Java support was added, and improved several times. Finally, in TensorFlow r1.5, eager execution (an experimental interface to TensorFlow that supports an imperative programming style, like NumPy) and TensorFlow Lite (prediction for mobile and embedded devices) previews were released.

TensorFlow installation

Overall, TensorFlow installation has improved noticeably. As before, there are multiple ways of installing TensorFlow including Python virtual environments, “native” pip, Docker, and building from sources. The TensorFlow team recommends installing with virtualenv; I instead used “native” pip because that’s what I did previously on my MacBook Pro, and I didn’t want to undertake mass uninstalls to free the space from the old installation.

[Screenshot: installing the TensorFlow nightly build with pip]

Installing a nightly build of TensorFlow for the Mac, which is a relatively recent addition to the installation options, works well. After the installation/upgrade, I ran the standard TensorFlow functionality test interactively.

In addition to binaries for numbered release versions, the TensorFlow team now supplies nightly master-branch Python wheel binaries for Linux, Mac, and Windows. The nightly Mac CPU wheel installed easily for me (see figure above) using the command:

sudo pip install tf-nightly
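
That functionality test is short enough to run in an interactive Python session. A minimal smoke test, along the lines of the one in the TensorFlow getting-started documentation, looks like this:

import tensorflow as tf

# Build a trivial graph and run it in a session to confirm the install works.
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))  # Should print: Hello, TensorFlow!

If the import succeeds and the string prints, the wheel installed correctly for the CPU.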

Although the current master branch documentation claims that there are nightly builds for both CPU and GPU versions of the library for all three platforms, I wasn't able to install a GPU version for the Mac; pip couldn't find it. My previous experience was that the Mac GPU version would attempt to install but never really worked, until r1.2, when the Mac GPU version was dropped. I'm not sure whether there are really plans to restore the Mac GPU build for r1.6, or whether the nightly build documentation is mistaken. In any case, having the GPU installation fail quickly without overwriting the current installation is better than the previous behavior.

Still, a MacBook Pro isn’t the ideal machine for intensive use of TensorFlow to train deep learning models. You can do much better with a Linux box that contains one or more of the new high-end Nvidia GPUs, and you can build your own PC for deep learning for a couple thousand dollars. If your training needs are occasional, you can easily run TensorFlow with GPUs on AWS, Azure, Google Compute Engine, or the IBM Cloud, at any scale you can afford.

Using TensorFlow

Two of the biggest issues with TensorFlow a year ago were that it was too hard to learn and that it took too much code to create a model. Both issues have been addressed.

To make TensorFlow easier to learn, the TensorFlow team has produced more learning materials and improved the existing getting started tutorials. Plus a number of third parties have produced their own TensorFlow tutorials (including InfoWorld). There are now multiple TensorFlow books in print, and several online TensorFlow courses. You can even follow the TensorFlow for Deep Learning Research (CS 20) course at Stanford, which provides all the slides and lecture notes online.

Several new sections of the TensorFlow library offer interfaces that require less programming to create and train models. These include tf.keras, which provides a TensorFlow-only version of the otherwise engine-neutral Keras package, and tf.estimator, which provides a number of high-level facilities for working with models: regressors and classifiers for linear models, deep neural networks (DNNs), and combined linear and DNN models, plus a base class from which you can build your own estimators. In addition, the Dataset API in tf.data allows you to build complex input pipelines from simple, reusable pieces. You don't have to choose just one. As this TensorFlow-Keras tutorial shows, you can usefully make tf.keras, tf.data, and tf.estimator work together.
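
To make that concrete, here is a minimal sketch (my own toy example, not taken from the tutorial) of a tf.data input pipeline feeding a canned tf.estimator classifier:

import numpy as np
import tensorflow as tf

# Toy stand-in data: 120 samples, 4 features, 3 classes.
train_x = np.random.rand(120, 4).astype(np.float32)
train_y = np.random.randint(0, 3, size=120)

def train_input_fn():
    # The Dataset API builds the input pipeline from reusable pieces.
    dataset = tf.data.Dataset.from_tensor_slices(({"x": train_x}, train_y))
    dataset = dataset.shuffle(120).repeat().batch(16)
    return dataset.make_one_shot_iterator().get_next()

# A canned estimator: a DNN classifier with two hidden layers of 10 units.
feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 10],
    n_classes=3)

classifier.train(input_fn=train_input_fn, steps=200)

The estimator handles graph construction, session management, and checkpointing for you, and the same pattern works with the linear and combined linear-DNN estimators.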

TensorFlow Lite

TensorFlow Lite, currently in developer preview, is TensorFlow’s lightweight solution for mobile and embedded devices, which enables on-device machine learning inference (but not training) with low latency and a small binary size. TensorFlow Lite also supports hardware acceleration with the Android Neural Networks API. TensorFlow Lite models are small enough to run on mobile devices, and can serve the offline use case.

[Diagram: TensorFlow Lite architecture]

TensorFlow Lite allows sufficiently small neural network models to run on Android and iOS devices, even devices that are offline. The library is still in developer preview and makes no guarantees about forward or backward compatibility.

The basic idea of TensorFlow Lite is that you train a full-blown TensorFlow model and convert it to the TensorFlow Lite model format. Then you can use the converted file in your mobile application on Android or iOS.
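
At the time of the developer preview, the conversion could be driven from Python via the toco_convert function in tf.contrib.lite. A minimal sketch, using a trivial graph as a stand-in for a trained model:

import tensorflow as tf

# A trivial graph standing in for a real trained model.
img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
val = img + tf.constant([1., 2., 3.])
out = tf.identity(val, name="out")

with tf.Session() as sess:
    # Convert the GraphDef to the TensorFlow Lite flat-buffer format.
    tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])
    open("model.tflite", "wb").write(tflite_model)

The resulting .tflite file is what you bundle into the Android or iOS app and load with the TensorFlow Lite interpreter.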

Alternatively, you can use one of the pre-trained TensorFlow Lite models for image classification or smart replies. Smart replies are contextually relevant messages that can be offered as response options; this essentially provides the same reply prediction functionality as found in Google’s Gmail clients.

Another option is to retrain an existing model against a new tagged dataset, a technique that reduces training times significantly. A hands-on tutorial on this process is called TensorFlow for Poets.

TensorFlow Serving

TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. It’s not just for serving a single model. You can have multiple servables with multiple versions, and clients can request either the latest version or a specific version ID for a particular model, which makes it easy to try out new algorithms and experiments.

You can represent composite models as multiple independent servables or as single composite servables. Servables are located and provided by sources, which can discover servables in arbitrary storage systems.
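
The path from training to serving runs through the SavedModel format. A hedged sketch, reusing the toy DNNClassifier from the earlier example: exporting the trained estimator with a serving input receiver produces a versioned SavedModel directory that TensorFlow Serving can load as a servable.

import numpy as np
import tensorflow as tf

# Same toy setup as the earlier estimator sketch.
train_x = np.random.rand(120, 4).astype(np.float32)
train_y = np.random.randint(0, 3, size=120)

def train_input_fn():
    dataset = tf.data.Dataset.from_tensor_slices(({"x": train_x}, train_y))
    return dataset.shuffle(120).repeat().batch(16).make_one_shot_iterator().get_next()

feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns, hidden_units=[10, 10], n_classes=3)
classifier.train(input_fn=train_input_fn, steps=100)  # a checkpoint must exist to export

# Describe the raw tensors a client will send at prediction time.
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    {"x": tf.placeholder(dtype=tf.float32, shape=[None, 4], name="x")})

# Each export writes a new timestamped version directory under the base path,
# which is how TensorFlow Serving distinguishes model versions.
classifier.export_savedmodel("/tmp/tf_servable", serving_input_fn)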

TensorFlow Eager

Eager execution is an experimental interface to TensorFlow that provides an imperative programming style similar to NumPy. When you enable eager execution, TensorFlow operations execute immediately; you do not execute a pre-constructed graph with Session.run().
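
Based on the r1.5 preview, turning it on is a one-line switch from the tf.contrib.eager module. A minimal sketch:

import tensorflow as tf
import tensorflow.contrib.eager as tfe

# Must be called once, at program startup, before any other TensorFlow ops.
tfe.enable_eager_execution()

x = tf.constant([[2.0, 0.0], [0.0, 2.0]])
m = tf.matmul(x, x)
print(m)  # The value prints immediately; no graph, no Session.run().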

This is another useful way of simplifying the code for TensorFlow matrix operations and models, although it's a preview/pre-alpha version with no forward compatibility guarantees. Eager execution also makes TensorFlow code much easier to debug than graph execution with sessions.

Eager execution is compatible with NumPy arrays, GPU acceleration, automatic differentiation, and the use of the Keras-style Layer classes in the tf.layers module. You can emit summaries for use in TensorBoard, but you need to use a new contributed version of the summary class. The eager execution documentation warns that “work remains to be done in terms of smooth support for distributed and multi-GPU training and CPU performance.”

TensorFlow vs. the competition

Overall, TensorFlow remains at the forefront of machine learning and deep learning frameworks. As we’ve discussed, in the last year TensorFlow has been upgraded in the areas of performance, deployment, ease of learning, ease of programming, and compatibility with common Python libraries and utilities.

While that was happening, the competing deep learning frameworks have also gotten better. MXNet, which already performed and scaled well (see my MXNet review), has moved to the Apache Foundation and improved in capabilities and performance. Microsoft Cognitive Toolkit has advanced in many ways, including support for Keras and (gasp!) TensorBoard. Facebook's Caffe2 is a major rewrite of Caffe, adding recurrent and LSTM (long short-term memory) networks to its strength in image-processing convolutional networks.

The open source H2O.ai prediction engine has been enhanced with an excellent proprietary hyperparameter-tuning and feature engineering layer, Driverless AI, which is worthwhile but not cheap. Scikit-learn continues to be a pleasure to use within its self-imposed constraints, supporting ML but not deep neural networks. And Spark MLlib is an excellent option for those who already use Spark and don’t need to train deep neural networks.

As long as TensorFlow programming is within your technical reach, TensorFlow is an excellent choice for deep learning model building, training, and production. If you're new to TensorFlow, try starting out with the high-level APIs found in tf.keras, tf.data, and tf.estimator. By the time you need the lower-level APIs, you'll most likely be familiar enough with the platform to use them.

Cost: Free open source under the Apache License version 2.0. 

Platform: Ubuntu 14.04+, MacOS 10.11+, Windows 7+; Nvidia GPU and CUDA recommended. Most clouds now support TensorFlow with Nvidia GPUs. TensorFlow Lite runs trained models on Android and iOS.

At a Glance
  • Google's open source framework for deep learning has become more mature, implemented more algorithms and deployment options, and become easier to program.

    Pros

    • Wide variety of models and algorithms
    • Excellent performance on hardware with GPUs or TPUs
    • Excellent support for Python, and now integrates well with NumPy
    • Very good documentation
    • Good software for displaying computational network graphs
    • Keras front-end improves ease of use

    Cons

    • Still difficult to learn, although easier than it was
    • Support for Java, C, and Go lags support for Python

Copyright © 2018 IDG Communications, Inc.