Spark picks up machine learning, GPU acceleration
Databricks, the corporate provider of support and development for the Apache Spark in-memory big data project, has spiced up its cloud-based implementation of Spark with two additions that top IT’s current hot list.
The new features — GPU acceleration and integration with numerous deep learning libraries — can in theory be added to any local Apache Spark installation. But Databricks says its versions are tuned to avoid the resource contention that complicates the use of such features.
Apache Spark doesn’t provide GPU acceleration out of the box; to set up a system that supports it, users must cobble together several pieces themselves. Databricks offers to handle all of that heavy lifting.
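For a sense of what that cobbling together involves on a self-managed cluster, here is a minimal sketch using the GPU resource-scheduling properties that later appeared in open source Spark 3.x. The property names are real Spark settings, but the amounts and the discovery-script path are illustrative, and none of this reflects Databricks’ own configuration.

```python
# Sketch of a self-managed PySpark session requesting GPUs (illustrative values).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gpu-cluster-sketch")
    # Ask the cluster manager for one GPU per executor ...
    .config("spark.executor.resource.gpu.amount", "1")
    # ... and schedule one task per GPU so tasks don't fight over the device.
    .config("spark.task.resource.gpu.amount", "1")
    # Script that reports which GPU addresses an executor actually owns;
    # a stock version ships with the Spark distribution (path may differ).
    .config("spark.executor.resource.gpu.discoveryScript",
            "/opt/spark/examples/src/main/scripts/getGpusResources.sh")
    .getOrCreate()
)
```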
Databricks also claims Spark’s behavior is tuned to get the most out of a GPU cluster by reducing contention across nodes. This sounds similar to the strategy used by MIT’s Milk library to accelerate parallel processing applications, in which memory operations are batched to take maximum advantage of the system’s cache. Likewise, Databricks’ setup tries to keep GPU operations from interrupting one another.
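The general batching idea can be illustrated in user code, though this is only a sketch of the pattern, not Databricks’ implementation: `df` is assumed to be an existing Spark DataFrame, and `score_batch_on_gpu` is a hypothetical GPU-backed scoring function.

```python
# Instead of touching the GPU once per record, each Spark partition is fed to
# the device in large chunks, so transfers and kernel launches don't interleave.
def score_partition(rows, batch_size=4096):
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield from score_batch_on_gpu(batch)   # one GPU call per chunk
            batch = []
    if batch:
        yield from score_batch_on_gpu(batch)       # flush the remainder

scored = df.rdd.mapPartitions(score_partition)
```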
Another time-saver is direct access to popular machine learning libraries that can use Spark as a data source. Among them is Databricks’ TensorFrames, which allows the TensorFlow library to work with Spark and is GPU-enabled.
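As a rough illustration of how TensorFrames fits in, the snippet below follows the usage pattern from the project’s documentation of the time: a TensorFlow 1.x-style graph is mapped over blocks of DataFrame rows. It assumes an existing `spark` session and a trivially simple graph.

```python
import tensorflow as tf
import tensorframes as tfs
from pyspark.sql import Row

# A one-column DataFrame to run the graph over.
df = spark.createDataFrame([Row(x=float(i)) for i in range(10)])

with tf.Graph().as_default():
    # Placeholder automatically mapped to the DataFrame column "x".
    x = tfs.block(df, "x")
    # A trivial computation on that column.
    z = tf.add(x, 3, name="z")
    # Run the graph over each block of rows; the result comes back as column "z".
    df2 = tfs.map_blocks(z, df)
```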
Databricks has tweaked its infrastructure to get the most out of Spark. It created a free tier of service to attract customers still wary of a deeper commitment, providing them with a subset of the conveniences available in the full-blown product. InfoWorld’s Martin Heller checked out the service earlier this year and liked what he saw, precisely because it was free to jump into and easy to get started with.
But competition will be fierce, especially since Databricks faces brand-name juggernauts like Microsoft (via Azure Machine Learning), IBM, and Amazon. Thus, it has to find ways to both keep and expand the audience for a service as specific and focused as its own. The plan appears to involve not only adding features like machine learning and GPU acceleration to the mix, but also ensuring they bring convenience, not complexity.
Source: InfoWorld Big Data