IDG Contributor Network: Data sharing and medical research, fundraising is only the first step

Last week, Sean Parker (Facebook’s founding president and, notoriously, a co-founder of Napster) announced the largest single donation ever made to support immunotherapy cancer research. Totaling $250 million, the donation will support research conducted across six academic institutions, with the possibility of incorporating additional researchers if more funding is secured down the line.

I think it goes without saying that all donations to support medical research, particularly programs like immunotherapy that have a more difficult time receiving traditional funding, are fantastic.

However, a project like this isn’t just notable for the size of the donation, but also for the breadth of coordination that will be required to synthesize research across so many organizations. As past experience shows, pioneering new models in research and discovery can be a challenge. For example, the now-defunct Prize4Life was founded to incentivize research into cures for ALS (Lou Gehrig’s disease). The organization was well funded and recognized for innovations such as a crowdsourcing approach to data science to try to foster breakthroughs. The data experiment failed, however, and ultimately so did the organization.

More recently, Theranos has provided a cautionary tale for those looking to change processes without the strength of underlying data management and related quality standards. That company is perceived to have an execution problem, but what it really has is a data problem: trying to design testing that relies on the collection, analysis, and management of massive amounts of private data is a very ambitious undertaking.

Smart Play for Service Providers: Adding Intelligence

For the last several years, the technology community has been talking about business intelligence (BI), business analytics (BA) and big data. For my money, BI and BA are more or less the same; they both aggregate and analyze key performance indicators (KPIs) within a given business.

There are many great software packages on the market that pull data from various sources within a business and display it in visually appealing charts and graphs, allowing organizations to better understand their underlying dynamics, track results more closely and make more informed decisions that will impact future performance.

Things start to get even more exciting when BI and BA intersect with big data because now you can see how your company’s performance stacks up against other companies of similar size in your industry.
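
As a rough sketch of what that peer benchmarking looks like in practice, the snippet below ranks one made-up KPI against a made-up peer group; it is illustrative Python only, and the metric names and figures are not drawn from any particular BI product.

    # Hypothetical monthly KPIs for one business and a set of similar-sized industry peers.
    my_kpis = {"monthly_recurring_revenue": 42000, "tickets_per_endpoint": 0.8}
    peer_mrr = [18000, 25000, 31000, 42500, 55000, 61000, 90000]

    def percentile_rank(value, peers):
        """Return the share of peers (0-100) that this value meets or beats."""
        beaten = sum(1 for p in peers if p <= value)
        return 100.0 * beaten / len(peers)

    rank = percentile_rank(my_kpis["monthly_recurring_revenue"], peer_mrr)
    print(f"MRR percentile among peers: {rank:.0f}")

A real platform does this across many KPIs and a much larger, anonymized peer data set, but the underlying comparison is the same.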

Over the last few months, MSPmentor and Clarity Intelligence Platform have leveraged big data principles to produce the MSPmentor 501 List, which will be announced on May 19, 2016. Based on the aggregation of data points from across the entire managed service provider industry, this year’s 501 list will provide unprecedented highlights and insights.

Until now, BI, BA and big data have benefited mid-market and enterprise companies that have the budget to pay for the software as well as the resources necessary to analyze the data. Now, service providers (SPs) can equip their SMB clients with the insights BI, BA and big data produce, allowing SPs to add strategic and consultative value for their clients.

The most successful SPs in the market conduct monthly or quarterly business reviews with their clients. In these reviews, SPs provide a high-level overview of the services consumed during the period, general technology performance, and a roadmap for future improvements (which usually consists of an upgrade or project).

Here’s the problem: SMBs eventually grow tired of hearing about how many viruses were quarantined last month, how much spam was caught, or how much more money they need to spend to upgrade their IT infrastructure. SPs need to use this time to demonstrate their value as a trusted advisor who has more to offer than monitoring and helpdesk.

SPs can go beyond delivering these simple services and instead add real value for their customers with business-level intelligence, becoming a “trusted advisor.” Matt Toback will discuss ways SPs can offer this service in his HostingCon session, “Service Providers and Metrics: Feed Your Customers.” He will describe how adding BI services can increase value and stickiness beyond ping, power, and pipe, and how to use existing technology to do it.

There are also now tools to help SPs move beyond being purely a service provider into the coveted role of “trusted advisor.” Industry pundits have touted the concept for years, but until recently there were no tools to enable SPs to easily become one.

Fortunately, that is changing: cloud workspace providers such as itopia and Cloud Jumper (formerly nGenx) are providing platforms that produce BI, BA and big data insights for the SMB community, delivered exclusively by the SP. Kaseya, one of the world’s leading RMM companies, is also integrating with the Clarity Intelligence Platform to give its MSP partners the ability to drive more value into their client relationships.

It really doesn’t matter which platform or toolset you prefer; the key is providing SMBs with better information to help them run a better business. Doing so allows you to go beyond delivering monitoring, management and helpdesk services to add true value.

This article is brought to you by HostingCon, the Cloud and Service Provider Ecosystem event. Join us in New Orleans, Louisiana July 24-27, 2016 to hear Jim and other thought leaders talk about issues and trends in the cloud, hosting and service provider ecosystem. Save $100 off your HostingCon All Access Pass with coupon code: H1279

Source: TheWHIR

Exclusive Research Among HostingCon Educational Sessions

The HostingCon Global 2016 slate of educational sessions has been rolled out by Penton, and it features dozens of top industry experts giving insider perspectives on every key topic for companies in the web hosting and cloud services ecosystem.

Among other anticipated highlights, consultant Theresa Caragol and Structure Research managing director Philbert Shih will present exclusive HostingCon research during the second Monday morning time slot. New York Internet COO and co-founder Phil Koblence will apply his deep industry knowledge and extensive experience to making the case for enterprise hybrid cloud late Wednesday morning.

If your business is looking to appeal to SMBs, secure web applications, raise capital to support growth, tap into ecommerce growth safely, or build cloud-native apps, HostingCon Global 2016 has an educational session with the information you need to succeed.

Speed roundtables, industry workgroups, a marketing bootcamp and luncheon, and in-depth workshops provide a variety of formats to fit with the subject matter and the different learning styles of attendees.

As in past HostingCons, the sessions are organized according to four tracks: Issues and Trends, Management, Sales and Marketing, and Technology. You can search the HostingCon educational session schedule using those tracks or other search criteria on the conference website, or you can export it to Outlook to set up your schedule.

Andrew Blum, author of the tech best-seller Tubes, was previously announced as the keynote speaker, and there are more details about the HostingCon Global educational sessions still to come.

There are still a few choice sponsorship opportunities (PDF) and exhibitor booths available, and Early Bird rates take one hundred dollars off the price of registration for a little while longer, so register now and get a head start on planning for days full of educational experiences, exhibits, networking, and the city of New Orleans at HostingCon Global 2016!

Source: TheWHIR

Fujitsu To Build Industry-First Maritime Big Data Platform For Nippon Kaiji Kyokai

Fujitsu Limited has announced that it has built a maritime big data platform for Nippon Kaiji Kyokai, an international ship classification society also known as ClassNK. The platform will be available from April 2016.

Fujitsu has now built a platform with ClassNK that collects and accumulates, as big data, operational machinery data from vessels at sea, such as engine data, along with marine weather information. This enables maritime businesses such as ship operators and shipyards to extract data about vessels under navigation as needed. Operations personnel could, for example, predict malfunctions using engine operations data, or achieve more energy-efficient operations using voyage data and marine weather data.

This maritime big data platform, the first shared platform in the maritime industry, will be operated as a data center service by Ship Data Center Co., Ltd. (hereinafter, Ship Data Center), a subsidiary of ClassNK established in December 2015. Fujitsu will support the further effective use of ship data by expanding the functionality of this maritime big data platform, contributing to the further development of the maritime industry.

Background

With the development of broadband communications at sea, it has become possible to collect and monitor navigational information and information from sensors mounted on ship equipment and machinery. There has also been a focus on new efforts using data, such as energy-efficient operations and malfunction diagnosis. When these systems are built separately, however, their use is restricted to a few ships and maritime businesses due to the burden of cost and effort, such as data use agreements and strong security measures.

This new ship data center, operated by Ship Data Center, is the industry’s first effort that provides collected data as a shared platform to promote broad data use in the maritime industry.

System Summary

1. Able to build a data analysis and use system in a short time
The ship data center aggregates and stores navigational information sent from individual ships, such as from a VDR [1]; operational and measurement information for engines and all manner of ship-mounted equipment (machinery data); and worldwide marine weather information. Previously, maritime businesses that wanted to use ship data had to collect the necessary data individually and integrate it themselves. Now, because a wide variety of data is collected together in the ship data center and provided through a web API that can generate a specialized data format for each business, they no longer need to build their own systems from scratch to make use of big data. In addition, data collected in a variety of formats can be converted to easier-to-use formats, such as CSV or JSON, when provided to users.
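
Fujitsu’s announcement does not publish the API itself, so the following is only a sketch of how a maritime business might pull machinery data from such a web API and convert it to CSV; the endpoint URL, query parameters, and field names are hypothetical.

    import csv
    import requests  # assumes the platform exposes an HTTPS/JSON web API

    # Hypothetical endpoint and query for one vessel's main-engine RPM over a single day.
    API_URL = "https://shipdatacenter.example/api/v1/machinery"
    params = {
        "imo": "9123456",          # vessel identifier
        "channel": "main_engine_rpm",
        "from": "2016-04-01T00:00:00Z",
        "to": "2016-04-02T00:00:00Z",
    }

    records = requests.get(API_URL, params=params, timeout=30).json()

    # Convert the JSON records into CSV, one of the formats the platform can provide.
    with open("main_engine_rpm.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["timestamp", "value"])
        writer.writeheader()
        for rec in records:
            writer.writerow({"timestamp": rec["timestamp"], "value": rec["value"]})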

2. Security that makes it safe to use as an industry-wide platform
Because ship data is collected, stored, and transmitted through the internet, the ship data center features security functionality, such as data virus checking and user authentication.

Future Plans

Fujitsu will continue to expand the functionality of the ship data center, such as through the use of AI-based data analysis technology. In addition, Fujitsu is working to support, as soon as possible, the regulations proposed as new international standards [2] for the handling of ship data.

[1] Voyage Data Recorder. A device required for ships travelling through international waters, which records navigational information such as the ship’s position, speed, and heading.

[2] Regulations proposed as new international standards. ISO/PWI19847 is a proposed international standard that defines the requirements for a shipboard data server used to share voyage, machinery, and other maritime data in time sequence. ISO/PWI19848, proposed in parallel with ISO/PWI19847, is a data standard for ship machinery and equipment intended to standardize the types of data exchanged between ship-mounted devices or systems and to improve connectivity between those devices and systems.

Source: CloudStrategyMag

CloudFlare Enables HTTP/2 Server Push to Speed Website Delivery for Customers

CloudFlare has announced HTTP/2 Server Push support for all customers to speed up websites and mobile apps. HTTP/2 Server Push will be automatically enabled for free for CloudFlare’s four million customers.

Server Push enables web servers to send content to website visitors before receiving requests for it. It allows images, fonts, CSS and JavaScript to be sent to the end user before the browser requests them, and CloudFlare estimates it gives a typical website a 15 percent performance increase.

Server Push is a fundamental update to HTTP/2, as it was not previously supported by the SPDY protocol it is based on, the company said. CloudFlare’s initial support for HTTP/2, which allows multiple HTTP requests over the same connection between a browser and web server, was announced in December.

“Usually, Internet performance improvements shave just milliseconds. In this case, the impact of HTTP/2 Server Push will be measured in seconds per page load, a quantum leap in performance that no service provider has been able to offer yet,” Matthew Prince, co-founder and CEO of CloudFlare said in a statement. “If with HTTP/2 Server Push we’re able to save one second off every page load served across CloudFlare’s network at our current scale, we would save about 10,000 years of time every day that people would have otherwise spent waiting for the Internet to load.”

HTTP/2 Server Push is currently in beta on Apple Safari, and is already supported in the latest versions of Chrome, Firefox, and Internet Explorer (Windows 10). CloudFlare is also providing an implementation guide for developers.
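
CloudFlare’s guide describes signaling pushes from the origin by adding Link response headers with rel=preload, which the edge can then turn into HTTP/2 pushes. As a hedged sketch, a minimal origin built on the Python standard library (the asset paths are hypothetical) could emit such a header like this:

    from wsgiref.simple_server import make_server

    # A minimal WSGI origin that adds a Link: rel=preload header. An HTTP/2-capable
    # edge in front of it can read the header and push the named assets to the
    # browser before they are requested.
    def app(environ, start_response):
        headers = [
            ("Content-Type", "text/html"),
            ("Link", "</static/app.css>; rel=preload; as=style, "
                     "</static/app.js>; rel=preload; as=script"),
        ]
        start_response("200 OK", headers)
        return [b"<html><head><link rel='stylesheet' href='/static/app.css'>"
                b"</head><body>Hello</body></html>"]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()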

“HTTP/2 Server Push will enable a whole new class of web applications,” said CloudFlare CTO John Graham-Cumming. “It represents the biggest change in delivery of web content since AJAX: for the first time it gives web servers the power to send assets to a web browser. This upends the way in which the web works, eliminating the need for countless browser performance hacks.”

CloudFlare launched a secure domain registrar business in February to appeal to high-profile enterprises, and has been preparing to go public in more favorable market conditions, possibly in 2019.

Source: TheWHIR

Yahoo Data Center Team Staying "Heads-Down" Amid Business Turmoil

Brought to you by Data Center Knowledge

While the online businesses they support face an uncertain future, members of the Yahoo data center operations team are keeping busy, continuing to make sure Yahoo’s products are reaching the screens of their users.

“The data center operations team has a lot of work to do,” Mike Coleman, senior director of global data center operations at Yahoo, said in a phone interview. “The potential review of strategic alternatives … is being explored, but my operations team is heads-down, focused on powering our products and services on behalf of our users.”

The struggling company has been soliciting bids for its assets, including its core online business, since earlier this year. Verizon Communications is among interested suitors, according to reports.

Yahoo is exploring the sale of virtually all of its assets, which include company-owned data centers in Lockport, New York; Quincy, Washington; La Vista, Nebraska; and Singapore. It also leases data center capacity in a number of locations.

So far, however, the “review of strategic alternatives” hasn’t had any effect on the Yahoo data center operations team. “No effect for us at all,” Coleman said.

The round of layoffs the company announced in February did affect some Yahoo employees on the Lockport campus, according to reports, but that campus includes both data centers and office space.

In fact, Coleman’s team recently brought online the latest data center on the La Vista campus – a $20 million expansion announced last November. The expansion space was launched in April, Coleman said.

He declined to disclose how much capacity the project added, but it involved installing 6MW of backup generator capacity. This doesn’t mean the team added 6MW of data center capacity, he pointed out.

The expansion was completed relatively quickly, which Coleman attributed in part to a new online environmental quality permitting process the State of Nebraska launched last year. Yahoo’s air quality permit for the 6MW of diesel generator capacity was the first issued after the new process was instated, a Yahoo spokesperson said in an emailed statement.

Nebraska Governor Pete Ricketts held a press conference on the new process at the Yahoo data center in La Vista Wednesday.

The state switched from a physical process of applying on paper to an online one, which has shrunk the process from months to days, Coleman said. Nebraska officials have touted the change, which applies to air quality and storm water permits, as a step to make the state more business-friendly, meant to help the construction industry cut through the administrative red tape.

Original article appeared here: Yahoo Data Center Team Staying “Heads-Down” Amid Business Turmoil

Source: TheWHIR

Individual and Systemic Trust: Keys to Next Generation Partner Ecosystems

What is the number one ingredient for successful partnerships? Trust: both individual and systemic.

What is the number one reason alliances and acquisitions (the deepest type of partnerships) fail? Inability to integrate cultures; trust is the number one ingredient underpinning culture.

Individual Trust is defined as congruence between what is said and what is done. Trust is an attitude that allows people to rely on, have confidence in, and feel sure about other people in the organization.

In a recent article for HostingCon, Dave Gilbert, former CEO of SimpleSignal, talked about the lack of trust in IT organizations as a barrier to growth. “I believe the difference between companies that execute well and those that don’t make it is the leadership’s ability to build trust over time,” said Gilbert. “Companies with a high trust culture experience a far lower churn rate and much higher employee engagement with the enterprise.”

Systemic trust is the ability of individuals in one or more organizations to trust another organization: for example, the ability of a group of different types of channel partners, alliances, and a developer community to trust and want to grow with a vendor. It is the degree to which individuals and groups have the confidence to sustain a partnership with that organization and its personnel.

Both forms of trust, individual and systemic, are fundamental to the success of long-term partnerships. Systemic trust is critical when multiple individuals in a company are partnered with another company’s teams and no one can know and trust every other individual in the other organization. The individuals then must rely on their sense of systemic trust.

At HostingCon in July we will share successful partnership use cases to determine the key factors underpinning them. Trust is as basic a need for every human partnership as water. It is one of the top 10 challenges explicitly cited in the HostingCon State of the Cloud and Service Provider Ecosystem survey, and a subtopic in three others: building effective communication between vendors and partners, balancing channel conflict among direct, indirect, and competitive partners, and aligning partnership goals.

So, how do we build individual trust in our channel partnerships and ecosystems?

Trust building is focused on both the present and future cooperation of two people. In order to create trust, people must believe you are trustworthy. This can be a short or long process. However, trust can be destroyed in a minute, particularly if earning it is viewed as a means to an end rather than the foundation of a long-term relationship.

Trust building must be perceived as authentic, or people will not believe you are trustworthy or want to engage with you. Express yourself authentically; speak carefully, accurately, clearly, and honestly to gain and sustain a full and accurate common understanding.

Earning long-term trust relies on engagement with individuals and continued reciprocal relations (or relationship building). To create trust:

  • Don’t promise more than you can deliver.
  • Describe your doubts, risks, and events beyond your control.
  • Don’t over commit.

Trust and values are linked; it’s important to understand others’ values and align those in a partnership when possible.

Trust breaks down when:

  • People perceive authority rather than a relationship
  • There are continuous conflicts with individuals
  • There is continued uncertainty in a partnership
  • A person is unable to communicate or manage risk

Specific tips on creating trust in business partnerships:

  • Work transparently, keeping others up-to-date on progress and problems
  • Allow others to observe the progress of your work
  • Involve others from the partner in key decisions
  • Expose hidden agendas and personal interests of both sides when needed
  • Understand what is being proposed, described, and discussed
  • Be clear on the expectations of others, problems you might encounter, the risks involved, changes that may occur, what you are agreeing to, others you may need to rely on, and your preparation and ability to meet commitments
  • Establish and maintain clear expectations
  • Make and keep promises; do what you say and deliver results
  • Hold yourself and others you depend on accountable
  • Go beyond what you promised when you can
  • Proceed in stages, and commit only as much as you can foresee

Systemic trust is also critical for sustained partnerships and for a vendor to maintain a robust, strong and growth oriented partner ecosystem. In addition, systemic trust is a fundamental ingredient for innovation.

One of the biggest ways organizations drive innovation and differentiation with their partnerships and channels is through collaboration. Collaboration is also based on trust. To drive collaborative partnerships, the product, services or solution vendor must value partnering and building high levels of trust with partners at the senior levels of the organization.

Rigorous standards for maintaining that trust must be built into the company culture, the processes, communications, and the technology.

Examples include:

  • Company culture: Senior executives and the CEO regularly articulate the importance of partnering to employees, to the partner channel, and to customers through customer briefings, investor calls, and industry analyst conversations. The company lives by a model that rewards partners that bring value to the company.
  • Company process: Examples of strong company rules and processes include never hiring an employee from a partner unless a company VP has first communicated with that partner, and delivering a blended partner/vendor value proposition to the joint customer base.
  • Technology enabler: An example of a collaboration-enabling technology for developer partnerships is Slack, which supports collaboration across individuals and companies and makes it possible to track history and learn from a collaboration initiative over time.
  • Corporate governance: Corporate governance with partnerships and the channel is another measure that fosters systemic trust and collaboration. Examples include global advisory councils, partner-level attainment and rewards, and annual business planning with quarterly business reviews.

In a relationship, people have “free will” and use it to choose whether they will give trust to another person. However, in some cases it’s important for both the vendor and the partner executive sponsor to dictate the goals and expectations for the systemic trust in the partnership. This will then pave the way for individual trust to expand within the two organizations.

Last, one of the most critical relationships for a sustained, successful partnership is executive-to-executive sponsorship of the relationship.

What does this mean?

  • Sponsors talk straight and honestly with one another, and confront the reality of the partnership or the situation as needed.
  • They clarify expectations on a sustained and regular basis, review progress, and enforce achievement of the teams’ mutual key objectives.
  • They create transparency by listening first, showing loyalty, and fixing something that went wrong.
  • They keep commitments and ensure both teams deliver results.
  • They take the time to continue deepening that individual executive relationship.

Trust is at the core of our own personal success with partnerships. It’s at the core of individual vendor and partner sales and technical teams’ revenue growth. It’s at the core of executives’ partnering results.

Trust is at the core of virtually every aspect of our vendor and partner success.

This article is brought to you by HostingCon, the Cloud and Service Provider Ecosystem event. Join us in New Orleans, Louisiana July 24-27, 2016 to hear Theresa and other thought leaders talk about issues and trends in the cloud, hosting and service provider ecosystem.

Source: TheWHIR

White House worries about bad A.I. coding

The White House is doing a lot more thinking about the arrival of automated decision-making — super-intelligent or otherwise.  

No one in government is yet screaming “Skynet,” but in two actions this week the concerns about our artificial intelligence future were sketched out.

The big risks of A.I. are well-known (a robot takeover), but the more immediate worries are about the subtle, or not-so-subtle, decisions made by badly coded and designed algorithms.

President Barack Obama’s administration released a report this week that examines the problems associated with poorly designed systems that are increasingly being used in automated decision-making.

HBase: The database big data left behind

A few years ago, HBase looked set to become one of the dominant databases in big data. The primary pairing for Hadoop, HBase saw adoption skyrocket, but it has since plateaued, especially compared to NoSQL peers MongoDB, Cassandra, and Redis, as measured by general database popularity.

The question is why.

That is, why has HBase failed to match the popularity of Hadoop, given its pole position with the popular big data platform?

The answer today may be the same offered here on InfoWorld in 2014: It’s too hard. Though I and others expected HBase to rival MongoDB and Cassandra, its narrow utility and inherent complexity have hobbled its popularity and allowed other databases to claim the big data crown.
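
Part of that difficulty shows up in the programming model itself: rather than ad hoc queries over documents, HBase exposes a low-level interface keyed by row key and column family. A minimal read/write sketch using the happybase Python client, with a hypothetical Thrift host, table, and column names, looks like this:

    import happybase  # Python client that talks to HBase through its Thrift gateway

    # Connect to a Thrift server fronting the HBase cluster (host is hypothetical).
    connection = happybase.Connection("hbase-thrift.example", port=9090)
    table = connection.table("clickstream")  # table with a 'cf' column family must already exist

    # Everything is bytes, keyed by row key and column-family:qualifier.
    table.put(b"user123|2016-05-01", {b"cf:page": b"/pricing", b"cf:ms_on_page": b"5400"})
    row = table.row(b"user123|2016-05-01")
    print(row[b"cf:page"])  # b'/pricing'

    connection.close()

Schema design, row-key modeling, and secondary indexing are all left to the application, which is much of what the “too hard” complaint is about.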

Green Grid Seeking Clarity Following ASHRAE PUE Agitation

Brought to you by Data Center Knowledge

No, PUE is not dead. It’s alive and well, and the fact that an ASHRAE committee backed away from using a version of PUE in the new data center efficiency standard that’s currently in the works hasn’t changed that.

The Green Grid Association, the data center industry group that championed the most widely used data center energy efficiency metric, has found itself once again in the position of having to defend the metric’s viability after the ASHRAE committee struck PUE from an earlier draft of the standard.

Roger Tipley, Green Grid president, said an important distinction has to be taken into account. The type of PUE ASHRAE initially proposed, Design PUE, is not the PUE Green Grid has been championing. “This Design PUE concept is not a Green Grid thing,” he said.

Green Grid’s PUE is for measuring infrastructure efficiency of operational data centers over periods of time. Design PUE is for evaluating efficiency of the design of a new data center or an expansion.

New Data Center Standard

ASHRAE Standard 90.4 is being developed specifically for data centers and telecommunications buildings. The standard that’s in place today, 90.1, covers buildings of almost every type – the only exception is low-rise residential buildings – and 90.4 is being developed in recognition that data centers and telco buildings have certain design elements that are unique and require special treatment.

ASHRAE’s efficiency standards are important because local building officials use them extensively in inspections and permitting, and non-compliance on a building owner’s part can be costly.

During the course of a standard’s development, the responsible ASHRAE committee puts out multiple drafts and collects comments from industry experts. Every draft is made public and open for comment for a limited period of time, and each subsequent draft takes the feedback received into consideration.

The latest draft of ASHRAE Standard 90.4 was released for comment on April 29, and the comment period will close on May 29. To comment or learn more, visit www.ashrae.org/publicreviews.

Green Grid Not Opposed to Design PUE

While Green Grid had little to do with Design PUE, the organization is not opposed to it, Tipley said. “It makes certain sense for the design community to have some [energy efficiency] targets to go for.”

The 90.4 committee struck Design PUE from the initial draft after some prominent data center industry voices spoke out against its inclusion in the standard. The argument against it was that it would put colocation providers at a disadvantage.

PUE compares the amount of power used by IT equipment to the total amount of power the data center consumes. PUE gets lower (which means better) as the portion of total power that goes to IT gets higher. More often than not, colo providers launch new data centers at very low utilization rates.

They have to keep an inventory of available capacity to serve new or expanding customers, which means they theoretically cannot get close to ideal PUE just because of the nature of their business.
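
To make the arithmetic concrete, here is a simplified Python illustration that treats cooling and electrical overhead as a fixed 600 kW regardless of IT load, which real facilities only approximate:

    # PUE = total facility power / IT equipment power (simplified: fixed overhead).
    def pue(it_load_kw, overhead_kw=600):
        return (it_load_kw + overhead_kw) / it_load_kw

    print(round(pue(2000), 2))  # 1.3 with the facility full of IT load
    print(round(pue(400), 2))   # 2.5 at 20 percent utilization, same facility

Nothing about the facility’s design changes between the two cases; only the utilization does, which is the colocation providers’ objection in a nutshell.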

Different Metrics, Similar Goals

The committee has replaced Design PUE with more granular metrics that in some ways resemble PUE but focus separately on electrical infrastructure efficiency (Electrical Loss Component, or ELC) and on mechanical infrastructure efficiency (Mechanical Load Component, or MLC). They have also proposed a third metric that combines the two.
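
The family resemblance to PUE can be sketched as follows; this is a conceptual simplification, not the exact definitions in the draft standard. If MLC and ELC are each expressed as mechanical energy and electrical losses per unit of IT energy, and smaller loads such as lighting are ignored, then roughly:

    \mathrm{PUE} \approx 1 + \mathrm{MLC} + \mathrm{ELC}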

The committee’s new approach to measuring efficiency is somewhat similar to Green Grid’s Data Center Maturity Model, which also takes into consideration contributions of individual infrastructure components to the facility’s overall PUE, Tipley pointed out.

In fact, Green Grid is planning to evaluate ELC and MLC for inclusion in the second version of the maturity model, which is targeted for release in 2017, he said.

There is value to such levels of granularity, and at the end of the day, ASHRAE’s metrics have the same goal as Green Grid’s: higher data center efficiency. “The end result is they’re trying to get to a low PUE,” Tipley said.

The comment period for the latest draft of ASHRAE Standard 90.4, Energy Standard for Data Centers, ends on May 29. To review the draft and to comment, visit www.ashrae.org/publicreviews.

Original report appeared here: Green Grid Seeking Clarity Following ASHRAE PUE Agitation

Source: TheWHIR