F5 Delivers Application Services For A Multi-Cloud World

F5 Networks has announced the availability of offerings designed to provide consistent application services in multi-cloud environments — giving companies greater deployment flexibility, more effective security, and faster time to market.

F5’s 2017 State of Application Delivery report shows that while more customers than ever are shifting to cloud infrastructures, many are consciously choosing to invest in multiple cloud technologies. Eighty percent of survey respondents report that they are committed to multi-cloud architectures, and 20% state that they will have over half their applications running in public and/or private clouds this year. This move to multi-cloud environments often brings unexpected hurdles: application deployments across multiple cloud platforms create challenges in managing application services; inconsistent security policies create compliance risks; and multiple cloud architectures put more pressure on IT skills gaps, diminishing the return on the cloud’s value.

“Customers are increasingly choosing to deploy applications in multiple clouds — public and private, in colocation facilities, and in their own data centers — but are struggling with the management of different development environments, tool sets, and orchestration technologies,” said Sangeeta Anand, SVP of Product Management and Product Marketing at F5. “Where these clouds provide services for applications, they often do so in ways that are not portable enough, are use-case specific, or provide inadequate protection. F5’s portfolio of multi-cloud application services and solutions gives customers the freedom to deploy any application — anywhere — with consistent application services and enterprise-grade security.”

Extending the Reach of Applications and the Cloud

In November, F5 shared its vision to power intelligent application services in the cloud with the stability, security, and performance customers expect. Today, the company builds on that foundation with the broad availability of multi-platform, multi-cloud solutions that make applications go faster, smarter, and safer.

New public cloud solutions help deploy applications faster in any cloud

F5 BIG-IP Virtual Edition on Google Cloud means that organizations can now deploy F5 services in all major public clouds. F5 gives customers a greater choice of cloud platforms, with new ‘bring your own license’ offers for instances ranging from 25 Mbps to 5 Gbps. Available now from Google Cloud Launcher in Good, Better, and Best versions.

Cloud solution templates for Amazon Web Services, Azure, and Google reduce the complexity of deploying F5 services for applications in the public cloud. These new solution-specific templates simplify and automate common public cloud use cases. Cloud solution templates are also available for OpenStack private cloud environments. Available now on GitHub.
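
To make this concrete, templates like these are typically launched through the cloud provider's own orchestration API. Below is a minimal sketch in Python using boto3's CloudFormation client; the template URL and parameter names are placeholders for illustration, not the actual values in F5's repositories.

```python
# Minimal sketch: launching a cloud solution template on AWS with boto3.
# The TemplateURL and parameter names are placeholders, not F5's actual values.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

response = cfn.create_stack(
    StackName="f5-big-ip-demo",
    TemplateURL="https://example-bucket.s3.amazonaws.com/f5-template.json",  # placeholder
    Parameters=[
        {"ParameterKey": "instanceType", "ParameterValue": "m4.xlarge"},  # placeholder
        {"ParameterKey": "licenseKey", "ParameterValue": "XXXXX-XXXXX"},  # placeholder
    ],
    Capabilities=["CAPABILITY_IAM"],  # such templates commonly create IAM roles
)
print("Stack creation started:", response["StackId"])
```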

Integrated marketplace solutions provide easily accessible, pre-packaged F5 services such as WAF and Office 365 federated access that are deployable directly from public cloud marketplaces, meaning companies can leverage their trusted F5 services easily for applications in the cloud. Available now in the Azure Marketplace.

New private cloud solutions enable organizations to move to the cloud faster and with more confidence

A private cloud solution package gets customers up and running quickly with pre-tested, certified, and bundled F5 solutions that simplify and automate OpenStack private cloud deployments. Available now through F5 sales.

Broad line of additional solutions delivers application services in new environments

Lightweight Application Services Proxy gives customers flexibility in developing, testing, and scaling applications in container environments. Available now at Docker Store.

Container Connector provides the easy deployment of app services in containerized environments, and simple integration of capabilities into management/orchestration systems including Kubernetes and Mesos/Marathon. Available now at GitHub and Docker Hub.
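
As a rough sketch of what deploying such a connector into Kubernetes might look like, the fragment below uses the official Kubernetes Python client; the image name, arguments, and namespace are placeholders, not F5's published artifacts.

```python
# Minimal sketch: deploying a container-connector-style agent into Kubernetes
# with the official Python client. Image and args are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="f5-container-connector"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "f5-connector"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "f5-connector"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="connector",
                        image="example/f5-container-connector:latest",  # placeholder
                        args=["--bigip-url=https://bigip.example.com"],  # placeholder
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="kube-system", body=deployment)
```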

Application Connector inserts application services from the edge of the public cloud and securely connects any public cloud provider to the customer’s interconnection or data center. In the case of AWS, Application Connector can automatically discover workloads for application services insertion. Available now through F5 sales.

“Business executives are beginning to rely heavily on cloud solutions in support of their digital transformation efforts, but in many cases, single cloud deployments are insufficient to enable true business agility,” said Zeus Kerravala, Principal Analyst at ZK Research. “Unfortunately, multi-cloud frameworks are complex and IT must take the steps needed to mitigate the challenges of operating in these mixed worlds. With F5’s multi-cloud portfolio of products, IT can deliver consistent and secure application services in any environment needed for business velocity.”

Source: CloudStrategyMag

IBM And Nutanix Launch Hyperconverged Initiative

IBM and Nutanix have announced a multi-year initiative to bring new workloads to hyperconverged deployments.

The integrated offering aims to combine Nutanix’s Enterprise Cloud Platform software with IBM Power Systems, to deliver a turnkey hyperconverged solution targeting critical workloads in large enterprises. The partnership plans to deliver a full-stack combination with built-in AHV virtualization for a simple experience within the data center.

In today’s technology landscape, processing real-time information is necessary but not sufficient. Being able to react in real time used to give enterprises a competitive advantage, but this approach no longer guarantees happy customers. The value has now migrated to the ability to rapidly gather large amounts of data and quickly crunch it to predict what’s likely to happen next, using a combination of analytics, cognitive skills, machine learning, and more. This is the start of the insight economy.

Handling these kinds of workloads presents unique challenges, requiring a combination of reliable storage, fast networks, scalability, and extremely powerful computing. Private data centers designed just a few years ago are due for a refresh, not only in the technology but also in the architectural design philosophy. This is where the combination of IBM Power Systems and Nutanix comes in.

This joint initiative intends to bring new workloads to hyperconverged deployments by delivering the first simple-to-deploy, web-scale architecture supporting POWER-based scale-out computing for a continuum of enterprise workloads, including:

  • Next-generation cognitive workloads, including big data, machine learning, and AI
  • Mission-critical workloads, such as databases, large-scale data warehouses, web infrastructure, and mainstream enterprise apps
  • Cloud-native workloads, including full-stack open source middleware, enterprise databases, and containers

With a shared philosophy based on open standards, the combination of Nutanix and IBM will be designed to bring out the true power of software-defined infrastructure, choice, for Global 2000 enterprises, with plans for:

  • A simplified private enterprise cloud that delivers the POWER architecture in a seamless and compatible way to the data center
  • Exclusive virtualization management with AHV, advanced planning and remediation with machine learning, app mobility, microsegmentation, and more, with one-click automation
  • A fully integrated one-click management stack with Prism, to eliminate silos and reduce the need for specialized IT skills to build and operate cloud-driven infrastructure
  • Deployment of stateful cloud-native services using Acropolis Container Services, with automated deployment and enterprise-class persistent storage

“Hyperconverged systems continue on a rapid growth trajectory, with a market size forecast of nearly $6 billion by 2020. IT teams now recognize the need, and the undeniable benefits, of embracing the next generation of datacenter infrastructure technology,” said Stefanie Chiras, VP Power Systems at IBM. “Our partnership with Nutanix will be designed to give our joint enterprise customers a scalable, resilient, high-performance hyperconverged infrastructure solution, benefiting from the data and compute capabilities of the POWER architecture and the one-click simplicity of the Nutanix Enterprise Cloud Platform.”

“With this partnership, IBM customers of Power-based systems will be able to realize a public cloud-like experience with their on-premises infrastructure,” said Dheeraj Pandey, CEO at Nutanix. “With the planned design, enterprise customers will be able to run any mission-critical workload, at any scale, with world-class virtualization and automation capabilities built into a scale-out fabric leveraging IBM’s server technology.”

Source: CloudStrategyMag

Sierra Wireless Supports The New Google Cloud IoT Core

Sierra Wireless has announced support for Google Cloud IoT Core, a fully managed service that allows users to easily and securely connect and manage devices at global scale.

Cloud IoT Core, together with other Google Cloud services such as Pub/Sub, Dataflow, Bigtable, BigQuery, and Data Studio, provides a complete solution for collecting, processing, analyzing, and visualizing IoT data in real time to support improved operational efficiency.
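
For instance, once device telemetry lands in a Pub/Sub topic, a downstream consumer can process it in real time. Here is a minimal sketch using the google-cloud-pubsub Python client, with placeholder project and subscription names:

```python
# Minimal sketch: pulling device telemetry that Cloud IoT Core publishes to a
# Pub/Sub topic. Project and subscription names are placeholders.
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "device-telemetry-sub")

def callback(message):
    # Each message carries one telemetry payload from a connected device.
    print("Received:", message.data)
    message.ack()

future = subscriber.subscribe(subscription_path, callback=callback)
future.result()  # block and process messages until interrupted
```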

Sierra Wireless’ device-to-cloud solution, which includes embedded wireless modules, gateways, and cloud and connectivity services, links businesses to their operational data. The solution provides intelligence at the edge, secure device provisioning, and managed services such as device monitoring and software updates. Sierra Wireless has been working with Google since 2015 to enable Pub/Sub integration of its AirVantage® IoT Platform with Google Cloud services. Now, with support for Google Cloud IoT Core, Sierra Wireless customers have more options to access the entire Google ecosystem out of the box.

Key features of the new Cloud IoT Core solution:

  • End-to-end security – Enable end-to-end security using certificate-based authentication and TLS; devices running Android Things, or others that meet the Cloud IoT Core security requirements, can deliver full-stack security (see the device-authentication sketch after this list).
  • Out-of-box data insights – Feed downstream analytic systems by integrating with Google Big Data Analytics and ML services.
  • Serverless infrastructure – Scale instantly, without limits, using horizontal scaling on Google’s serverless platform.
  • Role-level data control – Apply IAM roles to devices to control access to devices and data.
  • Automatic device deployment – Use REST APIs to automatically manage the registration, deployment, and operation of devices at scale.
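
As a concrete illustration of the security model in the first bullet, a device typically authenticates by signing a short-lived JWT with its private key and presenting it as the MQTT password over TLS. The sketch below uses the pyjwt and paho-mqtt Python libraries; all identifiers are placeholders.

```python
# Minimal sketch of certificate-based device authentication with Cloud IoT Core:
# sign a short-lived JWT with the device's private key, present it as the MQTT
# password over TLS. All identifiers below are placeholders.
import datetime
import jwt                        # pip install pyjwt
import paho.mqtt.client as mqtt   # pip install paho-mqtt

project, region = "my-project", "us-central1"
registry, device = "my-registry", "my-device"

token = jwt.encode(
    {
        "iat": datetime.datetime.utcnow(),
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=60),
        "aud": project,  # audience must be the GCP project ID
    },
    open("rsa_private.pem").read(),
    algorithm="RS256",
)

client = mqtt.Client(
    client_id=f"projects/{project}/locations/{region}"
              f"/registries/{registry}/devices/{device}"
)
client.username_pw_set(username="unused", password=token)  # username is ignored
client.tls_set()  # TLS is mandatory
client.connect("mqtt.googleapis.com", 8883)
client.publish(f"/devices/{device}/events", b'{"temp": 21.5}', qos=1)
```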

“Real-time, reliable and actionable processing of information is a complex but critical element of IoT,” said Philippe Guillemette, CTO, Sierra Wireless. “Sierra Wireless’ experience in helping customers deploy more than one hundred million cloud-connected edge devices coupled with Google’s Cloud IoT Core dramatically reduces this complexity for businesses of any size.”

“Cloud IoT Core was designed to simplify digital transformation by helping customers to leverage Google Cloud’s data analytics and machine learning capabilities and act on insights, in real time, from operational data that was previously inaccessible,” said Adam Massey, director, Strategic Technology Partners at Google Cloud. “By working with industry leaders like Sierra Wireless, we are expanding the surface of innovation to help more customers realize the value of connected devices for their businesses.”

Source: CloudStrategyMag

Survey: Lack Of Preparedness By IT Execs Prevalent

SolarWinds MSP has published survey findings outlining the preparedness of UK and U.S. businesses in dealing with cybersecurity breaches. The report reveals that businesses are dangerously overconfident about their ability to deter and cope with malicious attacks, despite the majority experiencing a breach over the last year and nearly one-fourth experiencing more than 10.

The potent combination of this lack of preparedness, the frequency of breaches, and the potential commercial impact of each one [$76K/£59K for small to medium-sized businesses (SMBs) and $939K/£724K for enterprises][1] heightens the risk of an “extinction event,” i.e., a massive business failure caused by a breach.

Commenting on the survey, John Pagliuca, SolarWinds MSP general manager, said, “Our findings underscore the problems that contributed to the ‘WannaCry’ ransomware’s ability to cause so much damage around the globe. These results beg the question, ‘How can IT leaders feel so prepared yet still be exposed?’ One of the main reasons is that people are confusing IT security with cybersecurity. The former is what companies are talking about when they think about readiness. However, what they often don’t realize is that cybersecurity protection requires a multi-pronged, layered approach to security that involves prevention, protection, detection, remediation, and the ability to restore data and systems quickly and efficiently. The overconfidence and failure to deploy adequate cybersecurity technologies and techniques at each layer of a company’s cybersecurity strategy could be fatal.”

The research, conducted by Sapio Research across 400 SMBs and enterprises in the UK and U.S., reveals that 87% of the IT executives questioned are confident in the resilience of their security technology and processes, and that 59% believe they are less vulnerable than they were 12 months ago. With another 61% of businesses anticipating a substantial boost to their cybersecurity budgets, respondents are confident this position will improve.

However, 71% of the same respondents said they have experienced a breach in the last 12 months.

These breaches are significant and shouldn’t be discounted. Of the businesses that have been breached and could identify an immediately traceable impact, 77% revealed that they had suffered a tangible loss, such as monetary impact, operational downtime, legal actions, or the loss of a customer or partner.

SolarWinds MSP also investigated why this overconfidence is occurring and identified seven basic faults:

  • Inconsistency in enforcing security policies
  • Negligence in the approach to user security awareness training
  • Shortsightedness in the application of cybersecurity technologies
  • Complacency around vulnerability reporting
  • Inflexibility in adapting processes and approach after a breach
  • Stagnation in the application of key prevention techniques
  • Lethargy around detection and response

The full report from SolarWinds MSP, entitled “2017 Survey Results: Cybersecurity: Can Overconfidence Lead to an Extinction Event? A SolarWinds MSP Report on Cybersecurity Readiness for U.K. and U.S. Businesses,” is available for download.

About SolarWinds MSP
SolarWinds MSP empowers MSPs of every size and scale worldwide to create highly efficient and profitable businesses that drive a measurable competitive advantage. Its integrated solutions, including automation, security, and network and service management, both on-premises and in the cloud, are backed by actionable data insights and help MSPs get the job done more easily and quickly. SolarWinds MSP helps MSPs focus on what matters most: meeting their SLAs and creating a profitable business.

Methodology and Sample
In early 2017, SolarWinds MSP investigated the cybersecurity preparedness, experiences and failings of 400 SMBs and enterprises, split equally across the U.S. and the U.K. SMBs were categorized as having fewer than 250 employees.

 

1. The cost-per-stolen-record data was taken from IBM/Ponemon’s “2016 Cost of Data Breach Study: Global Analysis.”

Source: CloudStrategyMag

Webair Announces Microsoft Azure ExpressRoute Partnership

Webair has announced that it is a Microsoft Azure ExpressRoute Partner. Azure ExpressRoute allows Webair customers to easily and securely utilize Microsoft cloud services, including Azure, Office 365, and Dynamics 365, with increased levels of reliability and performance.

ExpressRoute is a private, dedicated network connection between the Microsoft Cloud and Webair’s customers’ IT environments. The decision to become a Microsoft Azure ExpressRoute Partner is consistent with Webair’s overarching strategy of providing customers with direct, private and secure access to hybrid cloud services, and expands its ability to mix and match its own local, low-latency enterprise public cloud as well as third-party hyperscale cloud services.

“By becoming a Microsoft Azure ExpressRoute Partner, Webair’s customers are provided with redundant and diverse paths to the Microsoft Cloud,” explains Michael Christopher Orza, CEO of Webair. “Azure ExpressRoute will allow our customers to utilize Microsoft cloud services with increased confidence in network performance and security.”

Webair’s cloud infrastructure is housed in Webair-owned facilities and runs on enterprise-grade hardware dedicated to customers and deployed directly into customer environments. Its direct network connectivity model and ability to deploy dedicated hardware per customer allow the secure and private consumption of scalable and SLA-backed cloud services with no physical connectivity to the public internet or to other customers. Today, many of Webair’s healthcare provider and enterprise customers, for example, need to bypass the public internet and consume cloud services as if they were on-premises. Becoming a Microsoft Azure ExpressRoute Partner and gaining a private, dedicated network connection between Microsoft Azure data centers and Webair customers’ IT environments now provides the best of both options without having to sacrifice existing network security models.

Webair has executed many hybrid cloud solutions for its customers, which often include a hybrid of services such as Enterprise Private Cloud, Managed Security, Disaster Recovery-as-a-Service (DRaaS) and Colocation as well as connectivity to Microsoft Azure, air-gapped and bypassing the public internet where possible. An air gap means that the customer’s network or system is physically isolated from the internet, thus providing added security against intruders.

In becoming a Microsoft Azure ExpressRoute Partner, Webair has established a more formal relationship with Microsoft to meet its clients’ ongoing growth and demand for future implementations. Webair plans on offering more managed services on top of Microsoft Azure as its customers, including healthcare and enterprise organizations, seek more hybrid services. Becoming a Microsoft Azure ExpressRoute Partner is but one critical first step in meeting these demands.

 

Source: CloudStrategyMag

PacketFabric Expands Cloud Networking Platform To Colo Atl

Colo Atl has announced that PacketFabric’s software-defined networking (SDN)-based platform is now available at its downtown data center location. A NantWorks company and provider of next-generation Ethernet-based cloud networking services, PacketFabric can now interconnect with network service providers within the Colo Atl Meet-Me Area (MMA) with no monthly recurring cross-connect fees. The collaboration also enables seamless access to the platform for Colo Atl’s enterprise, cloud, and XaaS provider customers.

“As a carrier-neutral facility offering key interconnection opportunities with no monthly recurring cross connect fees, Colo Atl is an ideal partner for PacketFabric in the Atlanta market,” comments William Charnock, CEO of PacketFabric. “Colo Atl’s strategic location is critical to further extending the PacketFabric network and providing more customers with access to scalable, next-generation cloud networking services and simplified provisioning and maintenance of network infrastructure.”

PacketFabric’s fully automated network platform enables instantaneous, direct, and secure provisioning of terabit-scale connectivity between any of the 128 locations on its network. PacketFabric customers can dynamically design and quickly deploy any network configuration, leveraging an advanced application programming interface (API) and web-based portal for unmatched visibility and control over their network traffic and services. Real-time analytics and interactive troubleshooting capabilities allow PacketFabric to offer the robustness of a packet-switched network while ensuring consistent and reliable performance.
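
While PacketFabric's actual API schema isn't documented here, the interaction pattern the company describes (design, price, and provision connectivity over an API) typically looks like the following hypothetical Python sketch; the endpoint, field names, and auth scheme are assumptions for illustration only.

```python
# Hypothetical sketch of provisioning a point-to-point circuit through a REST
# API like PacketFabric's. The endpoint, fields, and auth scheme are
# assumptions for illustration, not PacketFabric's documented API.
import requests

API = "https://api.packetfabric.example/v1"   # placeholder base URL
headers = {"Authorization": "Bearer <api-token>"}

order = {
    "description": "Colo Atl to NYC 10G",
    "a_side_port": "port-atl-0001",   # placeholder port IDs
    "z_side_port": "port-nyc-0002",
    "bandwidth_mbps": 10000,
}

resp = requests.post(f"{API}/circuits", json=order, headers=headers, timeout=30)
resp.raise_for_status()
print("Circuit provisioning started:", resp.json())
```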

“PacketFabric is an excellent addition to our Colo Atl family,” states Tim Kiser, owner and founder of Colo Atl. “Here at Colo Atl, in addition to our highly qualified staff, outstanding customer support, and critical interconnection opportunities, we aim to provide our customers with industry-leading infrastructure and solutions that will meet their considerable data demands. PacketFabric’s innovative cloud networking platform’s ability to deliver hundreds of terabits per second of on-demand connectivity surely fits the bill.”

Founded in November 2001, Colo Atl provides a reasonable, accommodating, and cost-effective interconnection environment for more than 90 local, regional, and global network operators. In 2016, the company celebrated its 15th anniversary of providing service excellence and growth.

Colo Atl is an Atlanta Telecom Professionals Award Nominee and Winner of the 2016 TMT News Award for Best Colocation & Data Center – Georgia and the 2016 Georgia Excellence Award by the American Economic Institute (AEI).

Source: CloudStrategyMag

Markley And Cray Partner To Provide Supercomputing As A Service

Cray Inc. and Markley have announced a partnership to provide supercomputing-as-a-service solutions that combine the power of Cray supercomputers with the premier hosting capabilities of Markley. Through the partnership, Markley will offer Cray supercomputing technologies as a hosted offering, and both companies will collaborate to build and develop industry-specific solutions.

The availability of supercomputing capabilities, both on-premises and in the cloud, has become increasingly desirable across a range of industries, including life sciences, bio-pharma, aerospace, government, and banking, as organizations work to analyze complex data sets and research, and to reduce time to market for new products. Through the new supercomputing-as-a-service offering, Cray and Markley will make it easier and more affordable for research scientists, data scientists, and IT executives to access dedicated, powerful compute and analytic capability, shortening time to discovery and decision.

“The need for supercomputers has never been greater,” said Patrick W. Gilmore, chief technology officer at Markley. “For the life sciences industry especially, speed to market is critical. By making supercomputing and big data analytics available in a hosted model, Markley and Cray are providing organizations with the opportunity to reap significant benefits, both economically and operationally.”

Headquartered in Boston, Markley delivers best-of-breed cloud and data center offerings, including its enterprise-class, on-demand Infrastructure-as-a-Service solution that helps organizations maximize IT performance, reduce upfront capital expenses, increase speed to market, and improve business continuity. In addition, Markley guarantees 100% uptime, backed by the industry’s best Service Level Agreement.

“Cray and Markley are changing the game,” said Fred Kohout, Cray’s senior vice president of products and chief marketing officer. “Now any company that has needed supercomputing capability to address their business-critical research and development needs can easily and efficiently harness the power of a Cray supercomputer. We are excited to partner with Markley to create this new market for Cray.”

The first industry solution built by Cray and hosted by Markley will feature the Cray® Urika®-GX for life sciences, a complete, pre-integrated hardware-software solution. In addition, Cray has integrated the Cray Graph Engine (CGE), with essential pattern-matching capability, and tuned it to leverage the highly scalable parallelization and performance of the Urika-GX platform. Cray and Markley plan to expand the collaboration quickly to include Cray’s full range of infrastructure solutions.

The Cray Urika-GX system is the first agile analytics platform that fuses supercomputing abilities with open enterprise standards to provide an unprecedented combination of versatility and speed for high-frequency insights, tailor-made for life sciences research and discovery.

“Research and development, particularly within life sciences, biotech and pharmaceutical companies, is increasingly data driven. Advances in genome sequencing technology mean that the sheer volume of data and analysis continues to strain legacy infrastructures,” said Chris Dwan, who led research computing at both the Broad Institute and the New York Genome Center. “The shortest path to breakthroughs in medicine is to put the very best technologies in the hands of the researchers, on their own schedule. Combining the strengths of Cray and Markley into supercomputing as a service does exactly that.”

“HPC environments are increasingly being used for high-performance analytics use cases that require real-time decision making such as cybersecurity, real-time marketing, digital twins, and emerging needs driven by big data and Internet of Things (IoT) use cases. Augmenting your on-premises infrastructure with HPC clouds enables you to meet your existing SLAs while scaling up performance-driven analytics for emerging use cases,” notes Gartner, in Follow These Three Steps to Optimize Business Value from Your HPC Environments, by Chirag Dekate, September 16, 2016.

Source: CloudStrategyMag

Global Capacity Expands Seven Ethernet Access Points

Global Capacity has announced the expansion of seven One Marketplace™ Points of Presence (PoPs) in its extensive North American network, including six Ethernet local-access aggregation points and three high-performance Ethernet backbone points purpose-built for the most demanding cloud, over-the-top, and data services. The locations now Ethernet-enabled include Pittsburgh and Philadelphia, PA; Minneapolis, MN; and three new PoPs in Boston, MA, Kansas City, MO, and Vienna, VA. The locations added to the high-performance Ethernet backbone are Kansas City, MO; Minneapolis, MN; and Toronto, Ontario, Canada.

These high-demand, key aggregation points enable delivery of diverse route options, competitive pricing and a broad selection of network access services to One Marketplace customers. Ethernet is the technology of choice for SD-WAN, Hybrid WAN, and Cloud Connectivity solutions. The popularity of these enterprise services drives Global Capacity’s continued expansion and investment. The company’s investment in expanding its backbone PoPs and enabling greater Ethernet access is a testament to Global Capacity’s commitment to deliver ubiquitous coverage, flexible access options, and simplified service activation and management to both enterprise and service provider customers.

“Last year, Global Capacity achieved 37% growth in installed Ethernet revenue driven by cloud and data center connectivity, and the higher traffic needs of today’s data-driven society,” comments Jack Lodge, president of Global Capacity. “Global Capacity will continue to invest in the One Marketplace network in ways that will connect business locations in more markets to key destinations, over greater bandwidth and high performance Ethernet.”

Global Capacity’s award-winning marketplace of networks, One Marketplace, eliminates the complexity and inefficiency of the network market by delivering unprecedented transparency, efficiency and simplicity to the complex and highly fragmented data connectivity market. By combining intelligent information analytics and service automation through a suite of customer and supplier applications, along with network delivery, One Marketplace streamlines the process of designing, pricing, buying, delivering, and managing data connectivity solutions.

 

Source: CloudStrategyMag

Review: Tableau takes self-service BI to new heights

Since I reviewed Tableau, Qlik Sense, and Microsoft Power BI in 2015, Tableau and Microsoft have solidified their leadership in the business intelligence (BI) market: Tableau with intuitive interactive exploration, Microsoft with low price and Office integration. Qlik is still a leader compared to the other 20 vendors in the sector, but trails both Tableau and Power BI.

[Image: InfoWorld Editors’ Choice award]

In addition to new analytics, mapping, and data connection features, Tableau has added better support for enterprises and mobile devices in the last two years. In this review, I’ll give you a snapshot of Tableau as it now stands, drill into the features added since version 9, and explore the Tableau road map.

Source: InfoWorld Big Data

NoSQL, no problem: Why MySQL is still king

MySQL is a bit of an attention hog. With relational databases supposedly put on deathwatch by NoSQL, MySQL should have been edging gracefully to the exit by now (or not so gracefully, like IBM’s DB2).

Instead, MySQL remains neck-and-neck with Oracle in the database popularity contest, despite nearly two decades less time in the market. More impressive still, while Oracle’s popularity keeps falling, MySQL is holding steady. Why?

An open gift that keeps on giving

While both MySQL and Oracle lost favor relative to their database peers, as measured by DB-Engines, MySQL remains hugely popular, second only to Oracle (and not by much):

[Chart: MySQL ranking. Source: DB-Engines]

Looking at how these two database giants are trending and adding in Microsoft SQL Server, only MySQL continues to consistently grow in popularity:

[Chart: MySQL search trends. Source: Google]

While general search interest in MySQL has fallen over the years, roughly in line with falling general search interest in Oracle and Microsoft SQL Server, professional interest (as measured by Stack Overflow mentions) has remained relatively firm. More intriguing, it dwarfs every other database:

[Chart: MySQL mentions. Source: Stack Overflow]

The script wasn’t written this way. NoSQL, as I’ve written, boomed in the enterprise as companies struggled to manage the volume, velocity, and variety of modern data (the three V’s of big data, according to Gartner). Somehow MySQL not only survived, but thrived.

Like a comfortable supershoe

Sure, NoSQL found a ready audience. MongoDB, in particular, has attracted significant interest, so much so that the company is now reportedly past $100 million in revenue and angling to IPO later this year.

Yet MongoDB hasn’t toppled MySQL, nor has Apache Cassandra or Apache Hadoop, as former MySQL executive Zack Urlocker told me: “MongoDB, Cassandra, and Hadoop all have worthwhile specialized use cases that are sufficiently hard to do in [a] relational database. So they can be decent sized businesses (less than $100 million) but they are unlikely to be as common as relational.” Partly this stems from the nature of most big data today: still transactional, and hence well-suited to the neat rows and columns of an RDBMS.
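
That transactional sweet spot is easy to illustrate. The sketch below uses the mysql-connector-python driver with placeholder connection details; the atomic multi-row update is exactly the kind of workload where an RDBMS remains the natural fit.

```python
# Minimal sketch: the transactional, row-and-column workload that keeps MySQL
# relevant. Connection details are placeholders.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="shop"
)
cur = conn.cursor()
try:
    # Debit one account and credit another atomically.
    cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s", (100, 1))
    cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s", (100, 2))
    conn.commit()   # both rows change, or neither does
except mysql.connector.Error:
    conn.rollback()
finally:
    cur.close()
    conn.close()
```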

This coincides with the heart of MySQL’s popularity: It’s a great database that fits the skill sets of the broadest population of database professionals. Even better, they can take all they learned growing up with Oracle, IBM DB2, and Microsoft SQL Server and apply it to an omnipresent, free, and open source database. What’s not to love?

Scale, for one.

Actually, that was the original rap against MySQL and all relational databases: They could scale up but not out, and we live in a scale-out world. As it turns out, “It actually can scale” quite well, Linux Foundation executive Chris Aniszczyk affirmed to me. While it may have started from an architecturally underprivileged standpoint, engineers at the major web companies like Google and Facebook had huge incentives to engineer scale into it. As examples of MySQL at scale proliferated, Pivotal vice president James Bayer suggested to me, it bred confidence that MySQL was a strong go-to option for demanding workloads.

This isn’t to suggest that MySQL is an automatic winner when it comes to scale. As Compose.io developer DJ Walker-Morgan puts it, “NoSQL takes care of scaling like me buying diet food takes care of weight loss: only if strict disciplines and careful management is applied.” Again, enough examples exist that developers are motivated to give it a try, especially since it’s so familiar to a broad swath of the DBA community. Also, as Server Density CEO David Mytton underscored to me, “[M]anaged services like RDS … [and] Aurora in particular solve[] a lot of scale pain” for MySQL.
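
The scale-out pattern Mytton alludes to usually boils down to sending writes to a primary and fanning reads out across replicas (for example, RDS read replicas or Aurora reader endpoints). A minimal sketch, again with mysql-connector-python and placeholder hostnames:

```python
# Minimal sketch of read/write splitting: writes go to the primary, reads fan
# out across replicas. Hostnames and credentials are placeholders.
import random
import mysql.connector

PRIMARY = {"host": "primary.db.example.com", "user": "app",
           "password": "secret", "database": "shop"}
REPLICAS = [dict(PRIMARY, host=h) for h in
            ("replica-1.db.example.com", "replica-2.db.example.com")]

def write_conn():
    return mysql.connector.connect(**PRIMARY)

def read_conn():
    # Naive load balancing; a real deployment would use a proxy or the
    # driver's built-in failover support.
    return mysql.connector.connect(**random.choice(REPLICAS))
```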

Which is why, 22 years after it first hit the proverbial shelves, MySQL is arguably the most popular database on earth. It doesn’t have the “enterprise grade” label that Oracle likes to slap on its database, and it doesn’t have the “built for horizontal scale” marketing that carried NoSQL so far, but it’s the default choice for yesterday’s and today’s generations of developers.

The fact that it’s free doesn’t hurt, but the fact that it’s a free, powerful, familiar relational database? That’s a winning combination.

Source: InfoWorld Big Data