8 reasons you'll do big data this year

Over the past 12 months, I’ve been digging in the data trenches. OK, mostly I’ve been sitting next to the smarter people digging through the trenches and oversimplifying what they were doing in reports to management.

Very few IT projects are truly unique — and the ones that sound unique often fall into relatively predictable buckets. Lucky for you, I’ve decided to come up for air and share the top eight types of projects I’ve seen over the past 12 months.

1. Exploring the life of a deal

Companies that do e-commerce take it for granted that you can hook up a few tools and know the close rate of users coming to the website, from sale to payment. But many companies deal with far more data sets than Web-to-close, and mainly those data sets originate from distributors and resellers.

Each distributor or reseller presents a different data set in a different format. Sure, fundamentally this is a core ETL/data consolidation project with BI/visualization on the front end. But for many companies, truly understanding the life of a deal (from inception to close and beyond) is more difficult than it sounds. You need to combine a lot of CRM, Web analytics, and finance data to say, “Yes, PPC yielded closings, but 40 percent of those customers defaulted on the first bill, so …”
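
To make that concrete, here is a minimal sketch of the consolidation step using pandas. The file names, formats, and column mappings are hypothetical; the point is that each source needs its own normalizer feeding one canonical deal table.

```python
import pandas as pd

# Each distributor or reseller ships deals in its own format, so each
# source gets a loader that maps its fields to a canonical schema.
def load_distributor_csv(path):
    df = pd.read_csv(path)
    return df.rename(columns={"DealID": "deal_id",
                              "ClosedOn": "close_date",
                              "Amt": "amount"})

def load_reseller_feed(path):
    df = pd.read_json(path, lines=True)
    return df.rename(columns={"id": "deal_id",
                              "closedAt": "close_date",
                              "value": "amount"})

# Consolidate into one deal table for BI/visualization and for joins
# against CRM, Web analytics, and finance data.
deals = pd.concat([load_distributor_csv("distributor_deals.csv"),
                   load_reseller_feed("reseller_deals.jsonl")],
                  ignore_index=True)
deals["close_date"] = pd.to_datetime(deals["close_date"])
```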

Google Cloud Dataflow vs. Apache Spark: Benchmarks are in

On Tuesday, my company, Mammoth Data, released benchmarks comparing Google Cloud Dataflow and Apache Spark, primarily for batch use cases on Google’s cloud infrastructure. Last year, Google contracted us to implement some use cases and gather user-experience data points from practitioners in the field. As a follow-on, we ran a benchmark for Google to see how its technology stacked up.

Benchmarks are often a black art of vendor-driven deception; I’ve never worked with a company more concerned with avoiding that. The benchmarks we released were constructed around Google Cloud Dataflow’s and Spark’s batch processing capabilities. They don’t address the more rapidly developing side of both engines: streaming.

We also wanted to avoid a “best SQL predicate pushdown” comparison. Because some queries don’t distribute well, Spark and Google Cloud Dataflow push the SQL down to the underlying datastore. Benchmarking that would largely be a database-tuning exercise and, in my opinion, not very productive.
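
If you haven’t seen pushdown in action, here is a minimal PySpark sketch; the JDBC URL, table, and credentials are placeholders. The filter on the DataFrame is translated into a WHERE clause that the database executes itself, which is why benchmarking it mostly measures database tuning.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pushdown-demo").getOrCreate()

# Hypothetical JDBC source; URL, table, and credentials are placeholders.
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://db-host/sales")
          .option("dbtable", "orders")
          .option("user", "reader")
          .option("password", "secret")
          .load())

# Catalyst pushes this filter to the database as a WHERE clause, so the
# heavy lifting happens in the datastore rather than in the engine.
recent = orders.filter(orders.order_date > "2016-01-01")
recent.explain()  # the physical plan lists the predicate under PushedFilters
```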

What is Google Cloud Dataflow?

Google Cloud Dataflow is closely analogous to Apache Spark in terms of API and engine; both are directed acyclic graph (DAG)-based data processing engines. However, some aspects of Dataflow aren’t directly comparable to Spark. Where Spark is strictly an API and engine with supporting technologies, Google Cloud Dataflow is all of that plus Google’s underlying infrastructure and operational support. A closer analog to Google Cloud Dataflow is the managed Spark service available as part of the Databricks platform.
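
To show how close the APIs are, here is a minimal word count written against the Dataflow programming model’s Python SDK (now part of Apache Beam); the bucket paths are placeholders.

```python
import apache_beam as beam

# Hypothetical GCS paths; swap in real buckets to run on Dataflow.
with beam.Pipeline() as pipeline:
    (pipeline
     | "Read"  >> beam.io.ReadFromText("gs://my-bucket/input.txt")
     | "Split" >> beam.FlatMap(lambda line: line.split())
     | "Pair"  >> beam.Map(lambda word: (word, 1))
     | "Count" >> beam.CombinePerKey(sum)  # compare Spark's reduceByKey
     | "Write" >> beam.io.WriteToText("gs://my-bucket/word-counts"))
```

The Spark version is nearly a transliteration: textFile, then flatMap, map, and reduceByKey, which is what “closely analogous in terms of API” means in practice.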