Cloud Technology Partners Achieves AWS IoT Competency

Cloud Technology Partners (CTP) has announced that it has achieved the AWS IoT Competency designation from Amazon Web Services (AWS). CTP is one of a select number of AWS Consulting Partners to earn this competency, highlighting the value of its offerings that help clients build IoT solutions for a variety of use cases such as intelligent factories, smart cities, autonomous vehicles, precision agriculture, and personalized health care.

Achieving the AWS IoT Competency differentiates CTP as an AWS Partner Network (APN) member that has proven success delivering IoT solutions seamlessly on AWS. To receive the designation, APN Partners must demonstrate expertise in the AWS platform and undergo an assessment of the security, performance, and reliability of their solutions.

“CTP is proud to have been named a charter launch partner for the AWS IoT Competency,” said Scott Udell, vice president of IoT Solutions at Cloud Technology Partners. “Our team is dedicated to helping clients leverage the power of IoT and the agility of the AWS platform to achieve their business goals.”

AWS enables scalable, flexible, and cost-effective solutions for customers ranging from startups to global enterprises. To support the integration and deployment of these solutions, AWS established the IoT Partner Competency Program to help customers identify Consulting and Technology APN Partners with broad industry experience.

CTP recently completed an IoT engagement for RailPod, the leading manufacturer of railroad maintenance drones. CTP helped RailPod build a highly scalable IoT solution capable of ingesting massive quantities of real-time and batched data to ensure safer railroads.

“Cloud Technology Partners helped us build an enterprise-class IoT solution on AWS that enables RailPod to be a global leader in infrastructure information production to ensure safer railroads across the global railroad market,” said Brendan English, Founder and CEO of RailPod.

CTP’s IoT Practice and Digital Innovation teams help clients combine the power of the cloud with real-time insights from sensor data: saving millions in preventive maintenance on locomotives and railways, improving crop yields with intelligent irrigation, connecting doctors and patients through medical devices, and avoiding accidents with autonomous vehicles.

CTP is a Premier AWS Consulting Partner and has achieved a number of competencies with AWS. In addition to the AWS IoT Competency, CTP holds the AWS Migration Competency and the AWS DevOps Competency, and is a member of the AWS Next-Generation Managed Services Program.

Source: CloudStrategyMag

Microsoft’s R tools bring data science to the masses

One of Microsoft’s more interesting recent acquisitions was Revolution Analytics, a company that built tools for working on big data problems with the open source statistical programming language R. Mixing an open source model with commercial tools, Revolution Analytics offered a range of tools supporting academic and personal use, alongside software that took advantage of massive amounts of data, including data in Hadoop. Under Microsoft’s stewardship, the now-renamed R Server has become a bridge between on-premises and cloud data.

Two years on, Microsoft has announced a set of major updates to its R tools. The R programming language has become an important part of its data strategy, with support in Azure and SQL Server—and, more important, in its Azure Machine Learning service, where it can be used to preprocess data before delivering it to a machine learning pipeline. It’s also one of Microsoft’s key cross-platform server products, with versions for both Red Hat Linux and SUSE Linux.

R is everywhere in Microsoft’s ecosystem

Outside of Microsoft, the open source R has become a key tool for data science, with a lot of support in academic environments. (It currently ranks fifth among all programming languages, according to the IEEE.) You don’t need to be a statistical expert to get started with R, because the Comprehensive R Archive Network (CRAN, a public library of R applications) now has more than 9,000 statistical modules and algorithms you can use with your data.

Microsoft’s vision for R is one that crosses the boundaries between desktop, on-premises servers, and the cloud. Locally, there’s a free R development client, as well as R support in Microsoft’s (paid) flagship Visual Studio development environment. On-premises, R Server runs on Windows and Linux, as well as inside SQL Server, giving you access to statistical analysis tools alongside your data. Local big data services based on Hadoop and Spark are also supported, while on Azure you can run R Server alongside Microsoft’s HDInsight services.

R is a tool for data scientists. Although the R language is relatively simple, you need a deep knowledge of statistical analytics to get the most from it. It’s been a long while since I took college-level statistics classes, so I found getting started with R complex because many of the underlying concepts require graduate-level understanding of complex statistical functions. The question isn’t so much whether you can write R code—it’s whether you can understand the results you’re getting.

That’s probably the biggest issue facing any organization that wants to work with big data: getting the skills needed to produce the analysis you want and, more important, to interpret the results you get. R certainly helps here, with built-in graphing tools that help you visualize key statistical measures.

Working with Microsoft R Server

The free Microsoft R Open can help your analytics team get up to speed with R before investing in any of the server products. It’s also a useful tool for quickly trying out new analytical algorithms and exploring the questions you want answered using your data. That approach works well as part of an overall analytics lifecycle, starting with data preparation, moving on to model development, and finally turning the model into tools that can be built into your business applications.

One interesting role for R is alongside GPU-based machine-learning tools. Here, R is employed to help train models before they’re used at scale. Microsoft is bundling its own machine learning algorithms with the latest R Server release, so you can test a model before uploading it to either a local big data instance or to the cloud. During a recent press event, Microsoft demonstrated this approach with astronomy images, training a machine-learning-based classifier on a local server with a library of galaxies before running the resulting model on cloud-hosted GPUs.

R is an extremely portable language, designed to work over discrete samples of data. That makes it very scalable and ideal for data-parallel problems. The same R model can be run on multiple servers, so it’s simple to quickly process large amounts of data. All you need to do is parcel out your data appropriately, then deliver it to your various R Server instances. Similarly, the same code can run on different implementations, so a model built and tested against local data sources can be deployed inside a SQL Server database and run against a Hadoop data lake.
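
To make the parceling idea concrete, here is a minimal sketch of that data-parallel scoring pattern, written in Python for illustration rather than R (the scoring function is a hypothetical stand-in for a trained model, and the worker pool stands in for a fleet of R Server instances):

    from multiprocessing import Pool

    def score_partition(rows):
        # Placeholder "model": in practice this call would hand the partition
        # to a trained model hosted on a separate server instance.
        return [sum(features) / len(features) for features in rows]

    def chunked(seq, n):
        # Parcel the data into n contiguous chunks.
        size = (len(seq) + n - 1) // n
        return [seq[i:i + size] for i in range(0, len(seq), size)]

    def parallel_score(dataset, n_workers=4):
        # Score each chunk independently, then stitch the results back
        # together in their original order.
        with Pool(n_workers) as pool:
            results = pool.map(score_partition, chunked(dataset, n_workers))
        return [score for chunk in results for score in chunk]

    if __name__ == "__main__":
        data = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
        print(parallel_score(data))  # [1.5, 3.5, 5.5, 7.5]

The same shape applies whether the workers are local processes, R Server instances, or cloud nodes: only the data distribution layer changes, not the model code.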

R makes operational data models easy

Thus, R is very easy to operationalize. Your data science team can work on building the model you need, while your developers write applications and build infrastructures that can take advantage of their code. Once it’s ready, the model can be quickly deployed, and it can even be swapped out for improved models in the future without affecting the rest of the application. In the same manner, the same model can be used in different applications, working with the same data.
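
As a rough illustration of that swap-out property, here is a minimal Python sketch of a stable scoring interface; the class name, the pickle-based artifact format, and the predict() contract are assumptions for illustration, not anything prescribed by R Server:

    import pickle
    from pathlib import Path

    class ModelService:
        """Stable scoring interface. The model artifact behind it can be
        replaced with an improved version without touching calling code."""

        def __init__(self, artifact_path):
            self.artifact_path = Path(artifact_path)
            self._model = self._load()

        def _load(self):
            with self.artifact_path.open("rb") as f:
                return pickle.load(f)

        def reload(self):
            # Swap in a newer model dropped at the same path; callers are
            # unaffected because the predict() signature stays the same.
            self._model = self._load()

        def predict(self, features):
            return self._model.predict(features)

Applications depend only on predict(), so the data science team can retrain and redeploy behind the interface at will.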

With a common model, your internal dashboards can show you the same answers as customer- and consumer-facing code. You can then use data to respond proactively—for example, providing delay and rebooking information to airline passengers when a model predicts weather delays. That model can be refined as you get more data, reducing the risks of false positives and false negatives.

Building R support into SQL Server makes a lot of sense. As Microsoft’s database platform becomes a bridge between on-premises data and the cloud, as well as between your systems of record and big data tools, having fine-grained analytics tools in your database is a no-brainer. A simple utility takes your R models and turns them into stored procedures, ready for use inside your SQL applications. Database developers can work with data analytics teams to implement those models, and they don’t need to learn any new skills to build them into their applications.
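
From the application side, calling such a procedure then looks like any other database call. A sketch in Python with pyodbc, where the server, database, and dbo.PredictChurn procedure names are hypothetical:

    import pyodbc

    # Hypothetical connection details; the R model has already been wrapped
    # as a stored procedure inside SQL Server.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
        "DATABASE=sales;Trusted_Connection=yes")
    cursor = conn.cursor()
    cursor.execute("EXEC dbo.PredictChurn @CustomerId = ?", 42)
    for row in cursor.fetchall():
        print(row.CustomerId, row.ChurnProbability)

The database developer sees only a procedure call; the statistical model behind it stays the analytics team’s concern.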

Microsoft is aware that not every enterprise needs or has the budget to employ data scientists. If you’re dealing with common analytics problems, like trying to predict customer churn or detecting fraud in an online store, you have the option of working with a range of predefined templates for SQL Server’s R Services that contain ready-to-use models. Available from Microsoft’s MSDN, they’re fully customizable in any R-compatible IDE, and you can deploy them with a PowerShell script.

Source: InfoWorld Big Data

Tech luminaries team up on $27M AI ethics fund

Artificial intelligence technology is becoming an increasingly large part of our daily lives. While those developments have led to cool new features, they’ve also presented a host of potential problems, like automation displacing human jobs, and algorithms providing biased results.

Now, a team of philanthropists and tech luminaries has put together a fund aimed at bringing more humanity into the AI development process. It’s called the Ethics and Governance of Artificial Intelligence Fund, and it will focus on advancing AI in the public interest.

A fund such as this one is important as issues arise during AI development. The IEEE highlighted a host of potential issues with artificial intelligence systems in a recent report, and the fund appears aimed at supporting solutions to several of those problems.

Its areas of focus include research into the best way to communicate the complexity of AI technology, how to design ethical intelligent systems, and ensuring that a range of constituencies is represented in the development of these new AI technologies.

The fund was kicked off with help from Omidyar Network, the investment firm created by eBay founder Pierre Omidyar; the John S. and James L. Knight Foundation; LinkedIn founder Reid Hoffman; The William and Flora Hewlett Foundation; and Jim Pallotta, founder of the Raptor Group.

“As a technologist, I’m impressed by the incredible speed at which artificial intelligence technologies are developing,” Omidyar said in a press release. “As a philanthropist and humanitarian, I’m eager to ensure that ethical considerations and the human impacts of these technologies are not overlooked.”

Hoffman, a former executive at PayPal, has shown quite the interest in developing AI in the public interest and has also provided backing to OpenAI, a research organization aimed at helping create AI that is as safe as possible.

The fund will work with educational institutions, including the Berkman Klein Center for Internet and Society at Harvard University and the MIT Media Lab. The fund has US $27 million to spend at this point, and more investors are expected to join in.

Source: InfoWorld Big Data

SolarWinds Recognized As Market Leader In Network Management Software

SolarWinds has announced the company has been recognized as the global market share leader in Network Management Software by industry analyst firm International Data Corporation (IDC) in its latest Worldwide Semi-Annual Software Tracker. The tracker measures total market size and vendor shares based on each vendor’s software revenue, including license, maintenance, and subscription revenue.

“SolarWinds was founded on the premise that IT professionals desire IT management software that is more powerful, yet simpler to buy and much easier to use,” said Kevin B. Thompson, president and chief executive officer, SolarWinds. “IDC’s recognition of SolarWinds’ market share leadership validates that core value proposition inherent in all of our solutions, while also underscoring the incredible adoption rate we continue to see among customers in organizations of all sizes, in all parts of the world.”

According to the IDC Worldwide Semi-Annual Software Tracker 1H 2016 release, SolarWinds® leads the network management software market with more than a 20 percent share of total market revenue for the first half of 2016. Strong demand for its Network Performance Monitor and Network Traffic Analyzer products fueled 14.2 percent year-over-year revenue growth during the same period.

Source: CloudStrategyMag

'Transfer learning' jump-starts new AI projects

No statistical algorithm can be the master of all machine learning application domains. That’s because the domain knowledge encoded in that algorithm is specific to the analytical challenge for which it was constructed. If you try to apply that same algorithm to a data source that differs in some way, large or small, from the original domain’s training data, its predictive power may fall flat.

That said, a new application domain may have so much in common with prior applications that data scientists can’t be blamed for trying to reuse hard-won knowledge from prior models. This is a well-established but fast-evolving frontier of data science known as “transfer learning” (it also goes by other names, such as knowledge transfer, inductive transfer, and meta-learning).

Transfer learning refers to reuse of some or all of the training data, feature representations, neural-node layering, weights, training method, loss function, learning rate, and other properties of a prior model.

Transfer learning is a supplement to, not a replacement for, other learning techniques that form the backbone of most data science practices. Typically, a data scientist relies on transfer learning to tap into statistical knowledge that was gained on prior projects through supervised, semi-supervised, unsupervised, or reinforcement learning.

For data scientists, there are several practical uses of transfer learning.

Modeling productivity acceleration

If data scientists can reuse prior work without the need to revise it extensively, transfer-learning techniques can greatly boost their productivity and accelerate time to insight on new modeling projects. In fact, many projects in machine learning and deep learning address solution domains for which there is ample prior work that can be reused to kick-start development and training of fresh neural networks.

It is also useful if there are close parallels or affinities between the source and target domains. For example, a natural-language processing algorithm that was built to classify English-language technical documents in one scientific discipline should, in theory, be readily adaptable to classifying Spanish-language documents in a related field. Likewise, deep learning knowledge that was gained from training a robot to navigate through a maze may also be partially applicable to helping it learn to make its way through a dynamic obstacle course.
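
For a concrete sense of what gets reused, a common form of transfer learning in deep networks is to keep a convolutional base pretrained on a large source domain and train only a small new head on the target domain. A minimal sketch in Python with TensorFlow/Keras (the framework choice and dataset are assumptions, not something the article prescribes):

    import tensorflow as tf

    # Reuse a convolutional base pretrained on ImageNet (transferred weights),
    # dropping its original classification layer.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(160, 160, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the transferred knowledge

    # Attach a fresh head for the target domain and train only that part.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(target_domain_dataset, epochs=5)  # small labeled set suffices

Because only the small head is trained from scratch, the target domain needs far less labeled data than the source domain did.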

Training-data stopgap

If a new application domain lacks sufficient amounts of labeled training data of high quality, transfer learning can help data scientists to craft machine learning models that leverage relevant training data from prior modeling projects. As noted in this excellent research paper, transfer learning is an essential capability to address machine learning projects in which prior training data can become easily outdated. This problem of training-data obsolescence often happens in dynamic problem domains, such as trying to gauge social sentiment or track patterns in sensor data.

An example, cited in the paper, is the difficulty of training the machine-learning models that drive Wi-Fi indoor localization, considering that the key data—signal strength—behind these models may vary widely over the time periods and devices used to collect the data. Transfer learning is also critical to the success of IoT deep learning applications that generate complex machine-generated information of such staggering volume, velocity, and variety that one would never be able to find enough expert human beings to label enough of it to kick-start training of new models.

Risk mitigation

If the underlying conditions of the phenomenon modeled have radically changed, thereby rendering prior training data sets or feature models inapplicable, transfer learning can help data scientists leverage useful subsets of training data and feature models from related domains. As discussed in this recent Harvard Business Review article, the data scientists who got the 2016 U.S. presidential election dead wrong could have benefited from statistical knowledge gained in postmortem studies of failed predictions from the U.K. Brexit fiasco.

Transfer learning can help data scientists mitigate the risks of machine-learning-driven predictions in any problem domain susceptible to highly improbable events. For example, cross-fertilization of statistical knowledge from meteorological models may be useful in predicting “perfect storms” of congestion in traffic management. Likewise, historical data on “black swans” in economics, such as stock-market crashes and severe depressions, may be useful in predicting catastrophic developments in politics and epidemiology.

Transfer learning isn’t only a productivity tool to assist data scientists with their next modeling challenge. It also stands at the forefront of the data science community’s efforts to invent “master learning algorithms” that automatically gain and apply fresh contextual knowledge through deep neural networks and other forms of AI.

Clearly, humanity is nowhere close to fashioning such a “superintelligence” — and some people, fearing a robot apocalypse or similar dystopia, hope we never do. But it’s not far-fetched to predict that, as data scientists encode more of the world’s practical knowledge in statistical models, these AI nuggets will be composed into machine intelligence of staggering sophistication.

Transfer learning will become a membrane through which this statistical knowledge infuses everything in our world.

Source: InfoWorld Big Data

Report: Enterprises Prefer Microsoft Azure, SMBs Favor Google Cloud Platform

A new survey by Clutch found that enterprises strongly prefer Microsoft Azure, while small- to medium-sized businesses (SMBs) gravitate toward Google Cloud Platform. The survey was conducted in order to gain more knowledge on the “Big Three” cloud providers: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

Nearly 40% of Azure users surveyed identified as enterprises. In comparison, only 25% identified as SMBs and 22% identified as startups/sole proprietorships. Conversely, 41% of GCP users surveyed identified as SMBs.

The trends among enterprises and SMBs reflect the strengths of each platform. “It goes back to the trust and familiarity issues,” said Nicholas Martin, principal applications development consultant at Cardinal Solutions, an IT solutions provider. “Windows Server and other Microsoft technologies are prevalent in the enterprise world. Azure provides the consistency required by developers and IT staff to tightly integrate with the tools that Microsoft-leaning organizations are familiar with.”

Meanwhile, Dave Hickman, vice president of global delivery at Menlo Technologies, an IT services company, said that “small businesses tend to lean more on pricing than security or toolsets.” Thus, GCP’s lower pricing can be more palatable for an SMB.

Clutch’s survey also investigated the primary reasons respondents selected one of the three providers. The largest percentage of users (21%) named “better selection of tools/features” as their top reason, while “familiarity with brand” and “stronger security” nearly tied for second place.

Experts emphasized how users will choose a provider based on the selection of tools or features it offers. “Infrastructure-as-a-service will reside mainly on AWS, cloud services will be on Microsoft’s side, while Google will dominate analytics,” said Brian Dearman, solutions architect at Mindsight, an IT infrastructure consulting firm. “Even though every platform offers each type of service, people will want the best.”

The survey included 85 AWS users, 86 GCP users and 76 Microsoft Azure users. While these totals do not reflect each platform’s market share, the nearly even number of respondents using each provider allowed Clutch to analyze opinions and behaviors more equally.

Based on the survey findings, Clutch recommends that companies consider the following:

  • If your business is an enterprise, requires Windows integration, or seeks a strong PaaS (platform-as-a-service) provider, consider Microsoft Azure.
  • For heavy emphasis on analytics or if you are an SMB with a limited budget, look into GCP.
  • If service longevity, IaaS (infrastructure-as-a-service) offerings, and a wide selection of tools are important to you, AWS may be your best option.

Source: CloudStrategyMag

Busted! 5 myths of digital transformation

“Digital” is the new “cloud.” Once upon a time, these words meant something. Now they mean whatever a speaker wants them to mean — especially if, internally or externally, they’re trying to sell you something. Not surprisingly, this level of ambiguity has created a fertile environment for mythical thinking.

Behind all the blather and marketing mayhem, digital this and digital that can provide serious opportunities for companies whose leaders can see through the haziness.

And it creates serious challenges for CIOs — challenges that clear-eyed CIOs can prepare for and overcome, but that will bulldoze the unwary ones who see it as more of the same old same-old.

With this in mind, here are five common myths you’ve probably encountered when reading about digital transformation, along with the nonmythical issues and opportunities behind them.

Source: InfoWorld Big Data

Report: 2016 Review Shows $148 Billion Cloud Market Growing

New data from Synergy Research Group shows that across six key cloud services and infrastructure market segments, operator and vendor revenues for the four quarters ending September 2016 reached $148 billion, having grown by 25% on an annualized basis. IaaS & PaaS services had the highest growth rate at 53%, followed by hosted private cloud infrastructure services at 35% and enterprise SaaS at 34%. 2016 was notable as the year in which spend on cloud services overtook spend on cloud infrastructure hardware and software. In aggregate, cloud service markets are now growing three times more quickly than cloud infrastructure hardware and software. Companies that featured most prominently among the 2016 market segment leaders were Amazon/AWS, Microsoft, HPE, Cisco, IBM, Salesforce, and Dell EMC.

Over the period Q4 2015 to Q3 2016 total spend on hardware and software to build cloud infrastructure exceeded $65 billion, with spend on private clouds accounting for over half of the total but spend on public cloud growing much more rapidly. Investments in infrastructure by cloud service providers helped them to generate almost $30 billion in revenues from cloud infrastructure services (IaaS, PaaS, hosted private cloud services) and over $40 billion from enterprise SaaS, in addition to supporting internet services such as search, social networking, email and e-commerce. UCaaS, while in many ways a different type of market, is also growing steadily and driving some radical changes in business communications.

“We tagged 2015 as the year when cloud became mainstream and I’d say that 2016 is the year that cloud started to dominate many IT market segments,” said Synergy Research Group’s founder and chief analyst Jeremy Duke. “Major barriers to cloud adoption are now almost a thing of the past, especially on the public cloud side. Cloud technologies are now generating massive revenues for technology vendors and cloud service providers and yet there are still many years of strong growth ahead.”

Source: CloudStrategyMag

Review: Caffe deep learning conquers image classification

Like superheroes, deep learning packages usually have origin stories. Yangqing Jia created the Caffe project while earning his doctorate at U.C. Berkeley. The project continues as open source under the auspices of the Berkeley Vision and Learning Center (BVLC), with community contributions. The BVLC is now part of the broader Berkeley Artificial Intelligence Research (BAIR) Lab. Similarly, the scope of Caffe has been expanded beyond vision to include nonvisual deep learning problems, although the published models for Caffe are still overwhelmingly related to images and video.

Caffe is a deep learning framework made with expression, speed, and modularity in mind. Among the framework’s strengths are the way Caffe’s models and optimization are defined by configuration without hard-coding, as well as the option to switch between CPU and GPU by setting a single flag, letting you train on a GPU machine and then deploy to commodity clusters or mobile devices.

Source: InfoWorld Big Data

Data breaches through wearables put target squarely on IoT in 2017

Forrester predicts that more than 500,000 internet of things (IoT) devices will suffer a compromise in 2017, dwarfing Heartbleed. Drop the mic—enough said.

The sheer velocity with which distributed denial-of-service (DDoS) attacks spread through common household items such as DVR players makes this sector scary from a security standpoint.

“Today, firms are developing IoT firmware with open source components in a rush to market. Unfortunately, many are delivering these IoT solutions without good plans for updates, leaving them open to not only vulnerabilities but vulnerabilities security teams cannot remediate quickly,” write Forrester analysts.

The analyst firm adds that when smart thermostats alone exceed 1 million devices, it’s not hard to imagine a vulnerability that easily exceeds the scale of Heartbleed. Security as an afterthought for IoT devices is not an option, especially when you can’t patch IoT firmware because the vendor didn’t plan for over-the-air patching.

Alex Vaystikh, co-founder/CTO of advanced threat detection software provider SecBI, says small-to-midsize businesses and enterprises alike will suffer breaches originating from an insecure IoT device connected to the network. The access point will be a security camera, climate control, an old network printer, or even a remote-controlled lightbulb. This was demonstrated in September in a major DDoS attack on the website of security expert Brian Krebs. A hacker found a vulnerability in a brand of IoT camera and caused millions of them to simultaneously make HTTP requests to Krebs’ site.

“It successfully crashed the site, but DDoS attacks are not a great way to make money. However, imagine an IoT camera within a corporate network being hacked. If that network also contains the company’s database center, there’s no way to stop the hacker from making a lateral move from the compromised camera to the database,” Vaystikh said. “This should scare organizations into questioning the popular BYOD mentality. We are already seeing a lot of CCTVs being hacked within organizations.” 

Florin Lazurca, senior technical manager at Citrix, believes that consumers will be a target of opportunity in 2017. Innovative criminal enterprises will devise ways to monetize on potentially billions of internet-facing devices that many times do not meet stringent security controls. “Want to browse the internet? Pay the ransom. Want to use your baby monitor? Pay the ransom. Want to watch your smart TV? Pay the ransom,” Lazurca says.

Mike Kelly, CTO of Blue Medora, agrees, stating that, “the inability to quickly update something, such as your home thermostat, is where we will see the risk. It’s not about malware getting on the devices, the focus will need to be on the ability to remediate the issue. Like we saw with Windows, there will be a slew of vulnerabilities, but unlike with a computer, patching won’t be as easy with IoT devices,” he says.

More connected devices will create more data, which has to be securely shared, stored, managed and analyzed. As a result, databases will become more complex and the management burden will increase. Those organizations that can most effectively monitor their database layer to optimize peak performance and resolve bottlenecks will be in a better position to exploit the opportunities the IoT will bring, he says.

Lucas Moody, CISO at Palo Alto Networks, says security has to be baked into the IoT devices – not be an afterthought. The bloom of IoT devices has security practitioners in the hot seat, with industry analysts suggesting a possible surge up to 20 billion devices by 2020.

“Given the recent upward trend in both frequency and intensity of DDoS attacks of late, 2017 will introduce an entirely new challenge that security teams will need to contend with: how do we secure devices, many of which are by design dumb and, for that matter, cheap?” he says.

Large corporations are still challenged with finding security talent to manage security in the “traditional” sense, leaving IoT startups to fend for themselves in a digital economy. 

Moody asks, can they keep up? For the interconnected future of cars, televisions, and refrigerators, maybe; but for maintaining the security of smaller and seemingly less critical items such as toasters, thermostats, and pet feeders, it seems unlikely.

“Security has to be baked into these technologies from the conception and design stages all throughout development and roll-out. Security practitioners will need to do more than just scramble to develop strategies to address this pivotal trend,” he says.

Corey Nachreiner, CTO at WatchGuard Technologies, predicts that IoT devices will become the de facto target for botnet zombies. With the sheer volume of internet-connected devices growing every year, IoT represents a huge attack surface for hackers. More disturbingly, many IoT manufacturers do not create devices with security in mind, and therefore release devices full of potential vulnerabilities. Many of their products have vulnerabilities that were common a decade ago, providing easy pickings for cyber criminals.

Many IoT devices coming on the market have proprietary operating systems, and offer very little compute and storage resources. Hackers would have to learn new skills to reverse engineer these devices, and they don’t provide much in terms of resources or data for the attacker to steal or monetize. On the other hand, another class of IoT products are devices running embedded Linux. These devices look very familiar to hackers. They already have tools and malware designed to target them, so “pwning” them is as familiar as hacking any Linux computer.

“On top of that, the manufacturers releasing these devices seem to follow circa 2000 software development and security practices. Many IoT devices expose network services with default passwords that are simple for attackers to abuse,” Nachreiner says.

He cited the leaking of the source code for the Mirai IoT botnet. This botnet included a scanner that automatically searched the internet to find unsecured, Linux-based IoT devices and took them over using default credentials. With this leaked code, criminals were able to build huge botnets consisting of hundreds of thousands of IoT devices. They used these IoT botnets to launch gigantic DDoS attacks that generated up to 1Tbps of traffic, the largest ever recorded.

In 2017, criminals will expand beyond DDoS attacks and leverage these botnets for click-jacking and spam campaigns to monetize IoT attacks in the same way they monetized traditional computer botnets. Expect to see IoT botnets explode next year, he says.

Mike Davis, CTO at CounterTack, believes IoT will continue to be a part of the threat conversation in the coming year, but fundamentally there will be a massive change in the risks associated with the devices—it won’t be about security, it will be about patching. 

Hold your IoT security hyperbole

Stan Black, CSO at Citrix, says we need to dispel security myths around emerging technology like IoT, machine learning and artificial intelligence.

“Many people are afraid to adopt these emerging technologies for fear that they may be their security downfall, but as with any technology, the same security 1-2-3s apply. Change the admin username and password, allow and enable devices on separate networks (separate from the networks used to pass sensitive data), create management and access policies, and above all, make sure that employees are educated about how, when and where to use these kinds of technologies,” he says. 

Adoption of emerging tech like IoT can actually have more security benefits than challenges, if implemented correctly, Black says. The same goes for machine learning. The security wave of the future includes these technologies, so it’s best for businesses to learn about them early, learn about the benefits and reap the rewards of clouds, devices and networks that can learn from, and adapt to, changing behaviors to make for a stronger security posture.

The wave of the future will be computers that can grant or deny access based on fingerprinted keyboards that sense the amount of pressure your fingers normally apply. Taking advantage of benefits like these will help companies move to a new security infrastructure and mindset, he predicts.

“The mobile devices we depend on every day are loaded with sensors, heat, touch, water, impact, light, motion, location, acceleration, proximity, etc. These technologies have numerous applications including sensing motion and location to ensure people are safe when they travel,” Black adds.

These devices are rarely protected or maintained with the same vigor as corporate IT systems, making them generally more vulnerable to being compromised and drafted into a zombie army. This situation is nothing new, but in the next year we can expect to see “personal networks of things” reside in homes with gigabit internet connections—like those offered by Google and AT&T—and so make home networks far more interesting, especially if vulnerabilities in popular home devices can be exploited mechanically (e.g., how the Mirai botnet was built).

Consumers will need to protect their personal networks from this new version of Mirai botnets, creating demand for services that safeguard them. More importantly, vendors will need to adopt better standards for protection of devices. If the Mirai botnet is any indication, the lack of security in device design is still quite profound, Black says.

Speaking of standards

Steven Sarnecki, vice president of federal and public sector at OSIsoft, pointed to the National Institute of Standards and Technology’s (NIST) National Cybersecurity Center of Excellence for a glimpse of what is to come. NIST is currently piloting a project to assess how energy companies can better utilize connected devices to integrate and increase security, with hopes of sharing those best practices and insights across the energy sector.

“As more companies wake up to the reality of IoT security threats, these solutions will become more commonplace, enabling enterprises to markedly increase their security footprint with only minimal incremental cost,” he says.

Sarnecki adds that in 2017 he would expect a large portion of IoT users, especially within the enterprise and industrial spaces, to begin to seriously consider the “internet of threats” aspect posed by IoT to their networks. Energy companies, water utilities, and many other critical infrastructure sectors rely on connected devices to support their missions.

Jeannie Warner, security manager at WhiteHat Security, agrees that new guidelines will emerge from organizations such as NIST requiring that application security vendors partner with device manufacturers and testing labs to deliver secure IoT systems. 

“The internet of things is growing daily, with smart devices and controlling applications at the core of every business from healthcare to smart cars and smart buildings. It’s essential to protect smart anything from attackers attempting to exploit their vulnerabilities,” she says.

In the same way manufacturing safety testing via the American National Standards Institute controls new releases of devices, she believes NIST (via its SP 800 series) or a similar body will form guidelines for comprehensive security assurance through the integration of dynamic application scanning technology and rigorous device controls testing.

Commonalities across all IoT systems include controls for tracking and sensing interfaces, combined with web- or mobile-enabled control applications, which together expand the borders of the security ecosystem, she says. New guidelines will (ideally) push more application security vendors to partner with device control testing labs earlier in the development process, helping innovative organizations manage risk by identifying vulnerabilities early in development, monitoring challenges during testing, and releasing more secure products.

Big data

The enterprise has paid attention to IoT for some time, though 2017 will be the year we move past the “wow” phase and into the “how do we securely and effectively bring IoT to the enterprise, how do we handle the high-speed data ingest, and how do we optimize analytics and decisions based on IoT data” phase, says Redis Labs Vice President of Product Marketing Leena Joshi.

Mark Bregman, Chief Technology Officer at NetApp, believes 2017 will be about capitalizing on the value of data. The explosion of data in today’s digital economy has introduced new data types, privacy and security concerns, the need for scale and a shift from using data to run the business to recognizing that data is the business.

Off-line data analytics and threat hunting become endless money pits, says Gunter Ollmann of Vectra Networks. “We’re told, and we observe, that each year our corporate data doubles. That power-of-two exponential growth, after merely four years of storing, mining, and analyzing logs for threats, means a 16-fold increase in overall costs—with an accompanying scaled delay in uncovering past threats.”

Cybersecurity will be the most prominent big data use case, says Quentin Gallivan, CEO of Pentaho, a Hitachi Group Company. As with election polls, detecting cybersecurity breaches depends on understanding complexities of human behavior. Accurate predictions depend upon blending structured data with sentiment analysis, location and other data.

This then opens another door for hackers. WatchGuard’s Nachreiner says attackers will start leveraging machine learning and AI to improve malware and attacks.

“In the past few years, cyber security companies have started leveraging these technologies to help defend our organizations. One of the big problems in infosec today is we are too reactive, and not predictive enough when it comes to new threats. Sure, once we recognize a piece of malware or a new attack pattern, we can design systems to identify and block that one threat, but hackers have become infinitely evasive. They have found techniques that allow them to continually change their attacks and malware so regularly that humans and even basic automated systems can’t keep up with the latest attack patterns. Wouldn’t it be great if we had technology that predicted the next threats instead?” he says.

Machine learning can help us do just that. By feeding a machine learning system a gigantic dataset of good and bad files, or good and bad network traffic, it can start to recognize attributes of “badness” and “goodness” that humans never would have noticed on their own.
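
As a toy illustration of that approach, here is a minimal Python/scikit-learn sketch that trains a classifier on labeled “good” and “bad” samples; synthetic data stands in for real file or traffic features:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Synthetic stand-in for a labeled corpus: 1 = malicious, 0 = benign,
    # with malicious samples deliberately rare, as in real traffic.
    X, y = make_classification(n_samples=5000, n_features=20,
                               weights=[0.9, 0.1], random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test),
                                target_names=["benign", "malicious"]))

The patterns such a model learns from the features are one version of the “attributes of badness” the quote describes: regularities no analyst enumerated by hand.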

“Next year, I expect the more advanced cyber criminals to start somehow leveraging machine learning to improve their attacks and malware,” he says, adding that today, both good and bad guys have easy access to open source machine learning libraries like Google’s TensorFlow.

The security community as a whole will utilize big data more effectively in order to identify trends and threats, predicts Matt Rodgers, head of security strategy at E8 Security. “Organizations have the information they need, but they cannot find it. In 2017, companies will start looking at their data sets through advanced analytics to identify trends and risks. Big companies are already starting to augment their existing SIEM technology with behavior analytics capabilities to this end,” he says.

This story, “Data breaches through wearables put target squarely on IoT in 2017,” was originally published by CSO.

Source: InfoWorld Big Data