Ensono Announces New Client Advisory Board

Ensono™ has announced its new Client Advisory Board, emphasizing the company’s commitment to working collaboratively with its clients to deliver progressive IT solutions to help them operate their infrastructure for today and optimize it for tomorrow.

The Board gives Ensono clients a platform to influence a number of strategic issues, including the development of services and solutions, areas of expansion, and market transformation. With insight into Ensono’s business strategy and direction, as well as its product and technology roadmaps, members offer their opinions and expertise to optimize future offerings.

“It is rare to work with a company that not only values our feedback, but actively seeks it out in order to better serve its clients and help us progress our business initiatives,” said Chuck Musciano, chief information officer, Osmose Utility Services. “We chose to become part of Ensono’s Client Advisory Board because we not only believe in the services Ensono offers, but because this collaborative approach underscores that client relationships are a top priority.”

Ensono’s Client Advisory Board comprises a cross section of the industries Ensono serves, such as telecommunications and financial services, and of the services it offers, including mainframe and cloud. Companies such as Acxiom, CCCIS, Dun & Bradstreet, Exelon, Hub Group, Inc., Osmose Utility Services, RR Donnelley, Sonoco, and Windstream Corporation participate on the Board.

The Board meets quarterly, which enables Ensono to provide valuable company and service updates in real time, continuing to advance the company’s mission to serve as an innovative hybrid IT services provider.

“We formed the Client Advisory Board to continue our longstanding practice of becoming a seamless extension of our clients’ IT teams,” said Brian Klingbeil, chief operating officer for Ensono. “We are thrilled clients actively participate and contribute in meetings and we look forward to continuing to solicit feedback so we can continue to help our clients do what they do best.” 

“Ensono is already a trusted partner for our business technology team,” said Ben Chan, chief information officer, Global Business Technology, for Sonoco. “Being able to be directly involved in Ensono’s process is further proof of their commitment to working side-by-side with us.”

Source: CloudStrategyMag

Accelerite Concert IoT SCEP Now Available On Microsoft Azure Cloud

Accelerite has announced that its Concert IoT Service Creation and Enrichment Platform (SCEP) can now be deployed on the Microsoft Azure cloud.

Concert IoT accelerates time to market by simplifying coding and enables rapid development of service-oriented IoT applications (SOIAs), which are applications designed and delivered through APIs under a platform-as-a-service (PaaS) model. Accelerite Concert IoT:

  • Facilitates efficient integration of third-party web services APIs into a Concert IoT SOIA to enrich the new platform’s capabilities and enable the evolution of powerful new IoT application ecosystems.
  • Enables partners to quickly monetize the data and insight generated from the IoT application to generate additional revenue streams. Customers benefit from richer apps and services and device vendors benefit from monetizing both the app and the data.

For example, a large scale farming operation may initially deploy IoT sensors to reduce water consumption and improve crop yields. Once that data is collected, it would be of great value to partners seeking to offer additional solutions, such as fertilizer or seeds customized to a targeted locale. Concert IoT provides the API management, payments and partner settlements needed to create and monetize a growing, revenue-generating IoT application ecosystem.

“Concert IoT will greatly accelerate the creation of innovative apps and rich new, vertically-focused IoT platforms built on Microsoft’s Azure public and private cloud implementations,” said Dean Hamilton, general manager of the Service Creation Business Unit at Accelerite. “These new platforms will empower IoT vendors across a wide spectrum of consumer and enterprise vertical markets to create their own IoT application partner ecosystems to continually deliver additional value.”

For its compute and storage requirements, Concert IoT runs on Microsoft Azure, and it leverages Azure IoT Hub for secure device onboarding and device data ingestion, Stream Analytics for real-time filtering and event detection, and HDInsight and Cortana AI for analytics and machine learning on the collected data.
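
To make that ingestion path concrete, the following is a minimal sketch (not Concert IoT's actual API) of a device pushing sensor readings into Azure IoT Hub using the azure-iot-device Python SDK; the connection string and payload fields are placeholders.

    import json
    import time
    from azure.iot.device import IoTHubDeviceClient, Message

    # Placeholder connection string issued when the device is registered in IoT Hub.
    CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"

    def send_soil_readings():
        client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
        client.connect()
        try:
            for _ in range(10):
                # Hypothetical farm-sensor payload (soil moisture and temperature).
                payload = {"sensorId": "field-07", "soilMoisture": 21.4, "tempC": 18.9}
                msg = Message(json.dumps(payload))
                msg.content_type = "application/json"
                msg.content_encoding = "utf-8"
                # IoT Hub ingests the message; Stream Analytics can filter it downstream.
                client.send_message(msg)
                time.sleep(5)
        finally:
            client.disconnect()

    if __name__ == "__main__":
        send_soil_readings()

From there, a Stream Analytics job could route filtered events on to storage or analytics services such as HDInsight, as described above.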

“Accelerite’s support for Microsoft Azure environments within its Concert IoT service enabling platform offers value to the expanding community of Azure users and partners. Concert IoT is meant to serve as an enabling layer that rides “on-top” of the expanding set of IoT features Microsoft can now deliver via the Azure IoT suite,” said Brian Partridge, vice president, 451 Research. “The API management and monetization services that Concert IoT brings to Azure environments will be crucial to unlocking value for developers and service providers as the IoT industry matures.  Microsoft’s incumbency across all vertical industries and the growing market share of Azure in cloud services made it a natural target for prioritized support from Accelerite.”

Source: CloudStrategyMag

8 'new' enterprise products we don't want to see

I get a lot of press releases. Most of them are from startups with the same old enterprise product ideas under different names. Some are for “new” products from existing companies (by “new,” I mean new implementations of old ideas).

Think you have a great idea? Please tell me it isn’t one of these:

1. A column family or key value store database

You have a brand-new take on how to store data, and it starts with keys associated with something. It’s revolutionary because blah blah blah.

No — stop it. Don’t start any more of these; don’t fund any more of these. That ship has sailed; the market is beyond saturated.

2. ETL/monitoring/data catalogs

The market might bear a totally new approach, but I’ve yet to see one (I mean actually a new approach, not simply saying that). I recently watched a vendor drone on for more than an hour before telling us what it was pitching. The more times a vendor says “revolutionary,” the more you know the only thing that’s new is the pricing. It’s an ETL tool with a catalog and monitoring that only works with their cloud, but they support open source and community! Sad, man.

Seriously, you can’t dress up your ETL/governance tool as a brand-new product idea — you’ve now invented Informatica. I’m not saying you should use Informatica, I’d never say that, but I’m saying “Zzzz, don’t start another one.” If you’re a big enough vendor to build your own, that’s nice, but no one cares.

3. On-prem clouds

OpenShift, Cloud Foundry, and so on have all become “new and interesting ways to manage Docker or Docker images.” Also, we say “hybrid” because if you try hard you might get it up to Amazon, but the tools for doing that will certainly suck. Frankly, I’m skeptical that the “hybrid cloud” is anything but a silly marketing gimmick in terms of practicality, implementation, or utility.

4. Hadoop/Spark management with performance enhancements

Management in this area is a real problem, but if you’re starting now, you’re late to the game. This is a niche market. [Disclosure: I’m an adviser for one of these.]

5. Generic data visualization tool

In truth, I’m not superhappy with any product in this area (Tableau in particular sucks). This is a market that has had 1,000 false starts along with a handful of good players that charge too much. Amazon and others are getting into this game as well, although I’m dubious anyone wants to pay by the cycle to draw a chart. Anyhow, the usefulness of these tools will fade as we move to more automated decision-making tools.

6. Content management systems by any other name

People are still writing me about how they started these things. They have new names for them — but no, I’m not writing about them. If I covered consumer electronics I probably wouldn’t write about the various toasters you can buy at Target either. Are you people joking?

7. Another streaming tool

Between Kafka, Spark, Apex, Storm, and so on, whatever you need in big data software is covered. Your “revolutionary” new way to stream is probably not new.

8. Server-side blah blah with mobile added

Yes, mobile exists, but with maybe one or two exceptions a mobile app is mainly a client to the server like a web browser. If this means you added sync or notification to your existing product, cool. If you launched a new product line with “mobile” on it, please sell this to journalists and analysts with no technical background.

If you’re about to build any of those, please stop. Don’t tell anyone about it. Walk away from the keyboard before you bore someone.

Source: InfoWorld Big Data

Meet Apache Spot, a new open source project for cybersecurity

Hard on the heels of the discovery of the largest known data breach in history, Cloudera and Intel on Wednesday announced that they’ve donated a new open source project to the Apache Software Foundation with a focus on using big data analytics and machine learning for cybersecurity.

Originally created by Intel and launched as the Open Network Insight (ONI) project in February, the effort is now called Apache Spot and has been accepted into the ASF Incubator.

“The idea is, let’s create a common data model that any application developer can take advantage of to bring new analytic capabilities to bear on cybersecurity problems,” Mike Olson, Cloudera co-founder and chief strategy officer, told an audience at the Strata+Hadoop World show in New York. “This is a big deal, and could have a huge impact around the world.”

Based on Cloudera’s big data platform, Spot taps Apache Hadoop for infinite log management and data storage scale along with Apache Spark for machine learning and near real-time anomaly detection. The software can analyze billions of events in order to detect unknown and insider threats and provide new network visibility.

Essentially, it uses machine learning as a filter to separate bad traffic from benign traffic and to characterize network traffic behavior. It also uses a process of context enrichment, noise filtering, whitelisting, and heuristics to produce a shortlist of the most likely security threats.
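
For a rough idea of what Spark-based anomaly filtering like this can look like, here is a minimal sketch that clusters network flow records and ranks flows by their distance from the nearest cluster center; the input path, feature columns, and parameters are illustrative assumptions, not Spot's actual pipeline.

    import numpy as np
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import DoubleType
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.clustering import KMeans

    spark = SparkSession.builder.appName("flow-anomaly-sketch").getOrCreate()

    # Hypothetical flow records with bytes, packets, and duration per connection.
    flows = spark.read.parquet("hdfs:///data/netflow/")  # placeholder path

    assembler = VectorAssembler(
        inputCols=["bytes", "packets", "duration_ms"],  # illustrative feature columns
        outputCol="features")
    vectors = assembler.transform(flows)

    # Cluster "normal" traffic; flows far from every centroid look suspicious.
    model = KMeans(k=8, featuresCol="features", predictionCol="cluster").fit(vectors)
    scored = model.transform(vectors)
    centers = model.clusterCenters()

    @F.udf(DoubleType())
    def dist_to_center(features, cluster):
        # Distance from a flow's feature vector to its assigned cluster center.
        return float(np.linalg.norm(features.toArray() - centers[cluster]))

    # Rank flows by anomaly score; analysts review the top of the shortlist.
    shortlist = (scored
                 .withColumn("anomaly_score", dist_to_center("features", "cluster"))
                 .orderBy(F.desc("anomaly_score"))
                 .limit(20))
    shortlist.show()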

By providing common open data models for network, endpoint, and user, meanwhile, Spot makes it easier to integrate cross-application data for better enterprise visibility and new analytic functionality. Those open data models also make it easier for organizations to share analytics as new threats are discovered.

Other contributors to the project so far include eBay, Webroot, Jask, Cybraics, Cloudwick, and Endgame.

“The open source community is the perfect environment for Apache Spot to take a collective, peer-driven approach to fighting cybercrime,” said Ron Kasabian, vice president and general manager for Intel’s Analytics and Artificial Intelligence Solutions Group. “The combined expertise of contributors will help further Apache Spot’s open data model vision and provide the grounds for collaboration on the world’s toughest and constantly evolving challenges in cybersecurity analytics.”

Source: InfoWorld Big Data

10% off SAP Crystal Reports 2016, Through Friday Only – Deal Alert

SAP Crystal Reports software is the de facto standard in reporting, and it’s currently discounted 10%, through Friday, if you use the code CRYSTAL10 at checkout. With SAP Crystal Reports you can create powerful, richly formatted, dynamic reports from virtually any data source – delivered in dozens of formats, in up to 24 languages. A robust production reporting tool, SAP Crystal Reports turns almost any data source into interactive, actionable information that can be accessed offline or online, from applications, portals and mobile devices.

Through Friday, September 30th, use the code CRYSTAL10 at checkout to receive 10% off the purchase. Crystal Reports is available in the SAP Store.

This story, “10% off SAP Crystal Reports 2016, Through Friday Only – Deal Alert” was originally published by TechConnect.

Source: InfoWorld Big Data

IBM promises a one-stop analytics shop with AI-powered big data platform

Big data is in many ways still a wild frontier, requiring wily smarts and road-tested persistence on the part of those hoping to find insight in all the petabytes. On Tuesday, IBM announced a new platform it hopes will make things easier.

Dubbed Project DataWorks, the new cloud-based platform is the first to integrate all types of data and bring AI to the table for analytics, IBM said.

Project DataWorks is available on IBM’s Bluemix cloud platform and aims to foster collaboration among the many types of people who need to work with data. Tapping technologies including Apache Spark, IBM Watson Analytics and the IBM Data Science Experience launched in June, the new offering is designed to give users self-service access to data and models while ensuring governance and rapid-iteration capabilities.

Project DataWorks can ingest data faster than any other data platform, at rates from 50Gbps to hundreds of Gbps, from sources including enterprise databases, the internet of things (IoT), and social media, according to IBM. What the company calls “cognitive” capabilities, such as those found in its Watson artificial intelligence software, can meanwhile help pave a speedier path to new insights, it says.

“Analytics is no longer something in isolation for IT to solve,” said Derek Schoettle, general manager of cloud data services for IBM Analytics, in an interview. “In the world we’re entering, it’s a team sport where data professionals all want to be able to operate on a platform that lets them collaborate securely in a governed manner.”

Users can open any data set in Watson Analytics for answers to questions phrased in natural language, such as “what drives this product line?” Whereas a data scientist might often have to go through hundreds of fields manually to find the answer, Watson Analytics allows them to do it nearly instantaneously, IBM said.

More than 3,000 developers are working on the Project DataWorks platform, Schoettle said. Some 500,000 users have been trained on the platform, and more than a million business analysts are using it through Watson Analytics.

Available now, the software can be purchased through a pay-as-you-go plan starting at $75 per month for 20GB. Enterprise pricing is also available.

“Broadly speaking, this brings two things to the table that weren’t there before,” said Gene Leganza, a vice president and research director with Forrester Research.

First is “a really comprehensive cloud-based platform that brings together all the elements you’d need to drive data innovation,” Leganza said. “It’s data management, it’s analytics, it’s Watson, it’s collaboration across different roles, and it’s a method to get started. It’s really comprehensive, and the fact that it’s cloud-based means everyone has access.”

The platform’s AI-based capabilities, meanwhile, can help users “drive to the next level of innovation with data,” he said.

Overall, it’s “an enterprise architect’s dream” because it could put an end to the ongoing need to integrate diverse products into a functioning whole, Leganza said.

Competition in the analytics market has been largely segmented according to specific technologies, agreed Charles King, principal analyst with Pund-IT.

“If Project DataWorks delivers what IBM intends,” King said, “it could change the way that organizations approach and gain value from analyzing their data assets.”

Source: InfoWorld Big Data

IDG Contributor Network: Tech leaders must act quickly to ensure algorithmic fairness

Do big data algorithms treat people differently based on characteristics like race, religion, and gender? Cathy O’Neil in her new book Weapons of Math Destruction and Frank Pasquale in The Black Box Society both look closely and critically at concerns over discrimination, the challenges of knowing whether algorithms are treating people unfairly, and the role of public policy in addressing these questions.

Tech leaders must take seriously the debate over data usage — both because discrimination in any form has to be addressed, and because a failure to do so could lead to misguided measures such as mandated disclosure of algorithmic source code.

What’s not in question is that the benefits of the latest computational tools are all around us. Machine learning helps doctors diagnose cancer; speech recognition software simplifies our everyday interactions and helps those with disabilities; educational software improves learning and prepares children for the challenges of a global economy; new analytics and data sources are extending credit to previously excluded groups; and autonomous cars promise to reduce accidents by 90 percent.

Jason Furman, the chairman of the Council of Economic Advisers, got it right when he said in a recent speech that his biggest worry about artificial intelligence is that we do not have enough of it.

Of course, any technology, new or old, can further illegal or harmful activities, and the latest computational tools are no exception. But, in the same regard, there is no exception for big data analysis in the existing laws that protect consumers and citizens from harm and discrimination.

The Fair Credit Reporting Act protects the public against the use of inaccurate or incomplete information in decisions regarding credit, employment, and insurance.  While passed in the 1970s, this law has been effectively applied to business ventures that use advanced techniques of data analysis, including the scraping of personal data from social media to create profiles of people applying for jobs.

Further, no enterprise can legally use computational techniques to evade statutory prohibitions against discrimination on the basis of race, color, religion, gender, and national origin in employment, credit, and housing. In a 2014 report on big data, the Obama Administration emphasized this point and told regulatory agencies to “identify practices and outcomes facilitated by big data analytics that have a discriminatory impact on protected classes, and develop a plan for investigating and resolving violations….”

Even with these legal protections, there is a move to force greater public scrutiny — including a call for public disclosure of all source code used in decision-making algorithms. Full algorithmic transparency would be harmful. It would reveal selection criteria in such areas as tax audits and terrorist screening that must be kept opaque to prevent people from gaming the system. And by allowing business competitors to use a company’s proprietary algorithms, it would reduce incentives to create better algorithms.

Moreover, it won’t actually contribute to the responsible use of algorithms. Source code is only understandable by experts, and even for them it is hard to say definitively what a program will do based solely on the source code. This is especially true for many of today’s programs that update themselves frequently as they use new data.

To respond to public concern about algorithmic fairness, businesses, government, academics and public interest groups need to come together to establish a clear operational framework for responsible use of big data analytics. Current rules already require some validation of the predictive accuracy of statistical models used in credit, housing, and employment. But technology industry leaders can and should do more.

FTC Commissioner McSweeny has the right idea, with her call for a framework of “responsibility by design.” This would incorporate fairness into algorithms by testing them — at the development stage — for potential bias. This should be supplemented by audits after the fact to ensure that algorithms are not only properly designed, but properly operated.  
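
As one illustration of what such development-stage testing could look like, here is a minimal sketch of the widely used “four-fifths rule” disparate impact check, which compares a model’s positive-outcome rates across groups; the data, group labels, and 0.8 threshold are illustrative assumptions rather than a prescribed standard.

    from collections import defaultdict

    def selection_rates(records):
        """records: iterable of (group, selected) pairs, where selected is True/False."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, chosen in records:
            totals[group] += 1
            if chosen:
                selected[group] += 1
        return {g: selected[g] / totals[g] for g in totals}

    def disparate_impact_check(records, threshold=0.8):
        """Flag groups whose selection rate falls below `threshold` times the highest rate."""
        rates = selection_rates(records)
        best = max(rates.values())
        return {g: {"rate": round(r, 3), "passes": r / best >= threshold}
                for g, r in rates.items()}

    # Illustrative audit over hypothetical credit-model decisions.
    decisions = [("group_a", True), ("group_a", True), ("group_a", False),
                 ("group_b", True), ("group_b", False), ("group_b", False)]
    print(disparate_impact_check(decisions))

A check like this at design time, paired with audits of production decisions after the fact, is the kind of practice a “responsibility by design” framework could standardize.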

Important in this cooperative stakeholder effort would be a decision about which areas of economic life to include — starting with sensitive areas like credit, housing and employment. The framework would also need to address the proper roles of the public, developers and users of algorithms, regulators, independent researchers, and subject matter experts, including ethics experts.

It is important to begin to develop this framework now, and to ensure the uses of the new technology are, and are perceived to be, fair to all. The public must be confident in the fairness of algorithms, or a backlash will threaten their very real and substantial benefits.

This article is published as part of the IDG Contributor Network.

Source: InfoWorld Big Data

A high-tech shirt made for speed

Race cars are packed with sensors that constantly send telemetry data back to pit row. On IndyCar’s number 10 team, the driver is monitored as well, thanks to a high-tech shirt.

The shirt is based on a fabric called Hitoe, Japanese for “single layer,” developed by engineers at NTT Data, which sponsors the team, and Toray Industries.

In Hitoe, the nanofibers are coated with an electro-conductive polymer, so the fabric itself is the sensor. For IndyCar, pieces of Hitoe fabric were attached to the fire-resistant shirt that all drivers are required to wear, producing the shirt now worn by driver Tony Kanaan.

“The amazing thing about this shirt, it tells me my heart rate, it tells me the stress on my muscles with some of the sensors that I have underneath my forearms, my biceps and my core, which is very unique.”

Data from the shirt feeds into the car’s telemetry system and is sent to pit row for real-time analysis. What the team discovers can help Kanaan compete.

“We can tell the heart rate in real time, and we can tell muscle fatigue on the arms, so if he is gripping the steering wheel too hard we can say, let off the steering wheel a bit on the straightaway,” said Brian Welling, support engineer for the number 10 car.

If he grips less where he doesn’t need to, Kanaan can conserve strength, giving him an edge at the end of the race.

“We’re learning about the whole atmosphere of this shirt. We’re starting to learn about the fatigue level. When you’re going through a corner, the drivers knew it but we didn’t: they hold their breath a lot, all the way through the corner. You can learn to relax just a little bit more.”

Its use in IndyCar is but one application for Hitoe. A shirt using the fabric is already on sale in Japan and finding use in industry.

“A high-tech, high-tower construction worker who is hundreds of feet in the air: the sensors show what they are going through.”

Kanaan sees much greater uses for the shirt too, way beyond IndyCar.

“We’re working on this thing to help other people in hospitals. You can track it with your phone, so you can send patients home with the shirt and they don’t have to suffer through being depressed in a hospital. You’re still ill, but you are being monitored by wearing the shirt, and a doctor can have an app or alarm on a phone that can tell if you’re in trouble or not.”

The shirt isn’t available in the U.S. yet, but will be soon.

Source: InfoWorld Big Data

Bossie Awards 2016: The best open source big data tools

Elasticsearch, based on the Apache Lucene engine, is an open source distributed search engine that focuses on modern concepts like REST APIs and JSON documents. Its approach to scaling makes it easy to take Elasticsearch clusters from gigabytes to petabytes of data with low operational overhead.
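
To give a flavor of that document-oriented, REST-driven model, here is a minimal sketch using the official Elasticsearch Python client (8.x-style keyword arguments) to index a JSON log record and run a full-text query; the host, index name, and fields are placeholders.

    from elasticsearch import Elasticsearch

    # Placeholder cluster address; point this at your own nodes.
    es = Elasticsearch("http://localhost:9200")

    # Every record is simply a JSON document stored in an index.
    es.index(index="app-logs", document={
        "timestamp": "2016-09-28T12:00:00Z",
        "level": "ERROR",
        "service": "checkout",
        "message": "payment gateway timeout after 30s",
    })

    # Full-text search over the same index through the same REST API.
    hits = es.search(index="app-logs", query={"match": {"message": "timeout"}})
    for hit in hits["hits"]["hits"]:
        print(hit["_source"]["message"])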

As part of the ELK stack (Elasticsearch, Logstash, and Kibana, all developed by Elasticsearch’s creators, Elastic), Elasticsearch has found its killer app as an open source Splunk replacement for log analysis. Companies like Netflix, Facebook, Microsoft, and LinkedIn run large Elasticsearch clusters for their logging infrastructure. Furthermore, the ELK stack is finding its way into other domains, such as fraud detection and domain-specific business analytics, spreading the use of Elasticsearch throughout the enterprise.

— Ian Pointer

Source: InfoWorld Big Data