AWS Now Available At EdgeConneX® Portland Edge Data Center®

EdgeConneX® has announced the availability of Amazon Web Services (AWS) Direct Connect in its Portland Edge Data Center®. With AWS Direct Connect, companies in the Pacific Northwest can connect their IT infrastructure directly to Amazon Web Services, establishing a private connection to the cloud that can reduce costs, increase performance and provide a more consistent network experience. The announcement marks the first metro offering for AWS Direct Connect in Portland and the first AWS deployment for EdgeConneX.

The idea of bringing content, the cloud and applications closer to end-users is one that has been predicted by many industry experts. Specifically, in a report titled “The Edge Manifesto: Digital Business, Rich Media, Latency Sensitivity and the Use of Distributed Data Centers” (July 2015), Gartner analyst Bob Gill states, “We’ve begun the move to digital business, including rich content via mobile devices, where people, their devices and even unattended “things” become actors in transactions.” Gill further predicts, “To optimize the experience, Gartner believes the topology of networked data centers will push over the next five years from a centralized, mega data center approach, to one augmented by multiple, smaller, distributed sources and sinks of content and information, whether located in distributed, enterprise-owned data centers, hosting providers, colocation or the cloud.”

“The Internet of Everywhere requires a highly diverse and distributed content and cloud architecture, with the network edge extended beyond traditional major peering hubs to ensure the service quality and experience expected by today’s enterprises and consumers,” remarks Clint Heiden, chief commercial officer, EdgeConneX. “AWS Direct Connect provides the Portland/Hillsboro regional enterprise and consumer end-users an easy, high-performance and private on-ramp to the cloud at the edge, enabling access to Amazon’s powerful web services and effective deployment of hybrid solutions supported by EdgeConneX’s world-class EDC infrastructure.”

The EdgeConneX Portland EDC is purpose-built to offer security, speed and performance improvements. These innovations enable customers to deliver digital content, cloud and applications to end-users as fast as possible. Edge Data Centers are proximity-based, strategically located nearest to the end-user’s point of access to reduce network latency and optimize performance. Local proximity access also brings the cloud closer to the enterprise, enabling more secure, real-time access to cloud applications and services while offering reduced backbone transport costs.

Source: CloudStrategyMag

Microsoft SQL Server 2016 finally gets a release date

Database fans, start your clocks: Microsoft announced Monday that its new version of SQL Server will be out of beta and ready for commercial release on June 1. 

The news means that companies waiting to pick up SQL Server 2016 until its general availability can start planning their adoption.

SQL Server 2016 comes with a suite of new features over its predecessor, including a new Stretch Database function that allows users to store some of their data in an on-premises database and send infrequently used data to Microsoft’s Azure cloud. An application connected to a database using that feature can still see all the data from both sources, though.
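
To make the transparency concrete, here is a minimal, hypothetical sketch in Python with pyodbc, not Microsoft sample code; the server, credentials and table names are placeholders. A query against a stretch-enabled table looks identical to a query against a purely local one, because SQL Server fetches the migrated rows from Azure behind the scenes.

```python
# Minimal sketch, not Microsoft sample code: querying a stretch-enabled table
# from Python with pyodbc. Server, credentials, and table names are
# hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=sql2016.example.local;DATABASE=SalesDb;"
    "UID=report_user;PWD=example-password"
)
cursor = conn.cursor()

# Rows SQL Server has already migrated to Azure and rows still stored
# on-premises come back from the same statement; the engine handles the
# remote fetch transparently.
cursor.execute(
    "SELECT OrderID, OrderDate, TotalDue FROM dbo.Orders WHERE OrderDate >= ?",
    "2015-01-01",
)
for row in cursor.fetchall():
    print(row.OrderID, row.OrderDate, row.TotalDue)

conn.close()
```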

Another marquee feature is the new Always Encrypted function, which makes it possible for users to encrypt data at the column level, both at rest and in memory. That’s still only scratching the surface of the release, which also supports creating mobile business intelligence dashboards and adds new functionality for big data applications.
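
On the client side, Always Encrypted is switched on per connection by the driver, which encrypts parameters before they leave the application and decrypts result columns on arrival. Below is a hedged sketch assuming pyodbc on top of Microsoft’s ODBC Driver 13.1 for SQL Server and its ColumnEncryption connection keyword; the connection details and table are placeholders.

```python
# Hedged sketch: turning on Always Encrypted from the client. It assumes
# Microsoft's ODBC Driver 13.1 for SQL Server, whose ColumnEncryption
# connection keyword enables transparent encryption/decryption in the driver.
# Server, credentials, and table/column names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=sql2016.example.local;DATABASE=HRDb;"
    "UID=hr_app;PWD=example-password;"
    "ColumnEncryption=Enabled"
)
cursor = conn.cursor()

# The lookup value must be passed as a bound parameter (not concatenated into
# the SQL text) so the driver can encrypt it before it leaves the client; the
# SSN column itself stays encrypted at rest and in server memory.
cursor.execute(
    "SELECT EmployeeID, Salary FROM dbo.Employees WHERE SSN = ?",
    "555-12-3456",
)
print(cursor.fetchone())
conn.close()
```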

Five Security Features That Your Next-Gen Cloud Must Have

With cloud computing, virtualization, and a new type of end-user, the security landscape around modern infrastructure has had to evolve. IT consumerization and far more data within the organization have forced security professionals to adopt better ways to protect their environments. The reality is that standard firewalls and UTMs are simply no longer enough. New technologies have emerged that can greatly enhance the security of a cloud and virtualization environment – without impacting performance. This is where the concept of next-generation security came from.

It grew out of the need to abstract physical security services and create logical components for a powerful infrastructure offering.

With that in mind – let’s look at five great next-gen security features that you should consider.

  1. Virtual security services. What if you need application-level security? What about controlling and protecting inbound, outbound, and intra-VM traffic? New virtual services can give you entire virtual firewalls, optimized anti-virus/anti-malware tools, and even proactive intrusion detection services. Effectively, these services allow for the multi-tenant protection and support of network virtualization and cloud environments.
  2. Going agentless. Clientless security now directly integrates with the underlying hypervisor. This gives your virtual platform the capability to do fast, incremental scans as well as the power to orchestrate scans and set thresholds across VMs. Here’s the reality – you can do all of this without performance degradation. Now, we’re looking at direct virtual infrastructure optimization while still maintaining optimal cloud resource efficiency. For example, if you’re running on a VMware ecosystem, there are some powerful “agentless” technologies you can leverage. Trend Micro’s Deep Security agentless anti-malware scanning, intrusion prevention and file integrity monitoring capabilities help VMware environments benefit from better resource utilization when it comes to securing VMs. Further, Deep Security has been optimized to support the protection of multitenant environments and cloud-based workloads, such as Amazon Web Services and Microsoft Azure.
  3. Integrating network traffic with security components. Not only can you isolate VMs, create multi-tenant protection across your virtual and cloud infrastructure, and allow for application-specific protection – you can now control intra-VM traffic at the networking layer. This type of integration allows the security layer to be “always-on.” That means security continues to be active even during activities like a live VM migration.
  4. Centralized cloud and virtual infrastructure management/visibility. Whether you have a distributed cloud or virtualization environment – management and direct visibility are critical to the health of your security platform. One of the best things about next-generation security is the unified visibility the management layer is capable of creating. Look for the ability to aggregate, analyze and audit your logs and your entire security infrastructure. Powerful spanning policies allow your virtual infrastructure to be much more proactive when it comes to security. By integrating virtual services (mentioned above) into the management layer – administrators are able to be proactive, stay compliant, and continuously monitor the security of their infrastructure.
  5. Consider next-gen end-point security for your cloud users. There are some truly disruptive technologies out there today. Here’s an example: Cylance. This security firm replaces more traditional, signature-based technologies with a truly disruptive architecture. Basically, Cylance uses a machine-learning algorithm to inspect millions of file attributes to determine the probability that a particular file is malicious (a generic, illustrative sketch of this attribute-scoring idea follows this list). The algorithmic approach significantly reduces the endpoint and network resource requirement. Because of its signature-less approach, it is capable of detecting both new threats and new variants of known threats that typically are missed by signature-based techniques. Here’s the other really cool part – even when your users disconnect from the cloud, they’re still very well protected. Because the Cylance endpoint agent does not require a database of signatures or daily updates, and is extremely lightweight on network, compute, and data center resources, it can remain effective even when disconnected for long periods.
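
To make the attribute-scoring idea concrete, here is a generic, illustrative sketch in Python with scikit-learn. It is not Cylance’s model or data; the file attributes, training samples and labels are invented purely to show how a signature-less, probability-based classifier differs from signature lookup.

```python
# Illustrative only: a generic, attribute-based malware scorer in the spirit
# of the signature-less approach described above. This is NOT Cylance's model;
# the features, training samples, and labels are invented for the sketch.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Each row describes one file by static attributes (hypothetical features):
# [size_kb, num_imports, entropy, has_valid_signature, num_sections]
X_train = np.array([
    [120,  45, 5.1, 1,  4],   # benign samples
    [300,  80, 5.8, 1,  5],
    [ 95, 210, 7.6, 0,  9],   # malicious samples
    [410, 180, 7.9, 0, 11],
])
y_train = np.array([0, 0, 1, 1])  # 0 = benign, 1 = malicious

model = GradientBoostingClassifier().fit(X_train, y_train)

# Score a new file with no signature database and no cloud lookup, which is
# why this style of detection can keep working offline.
new_file = np.array([[150, 190, 7.4, 0, 8]])
prob_malicious = model.predict_proba(new_file)[0, 1]
print(f"Probability malicious: {prob_malicious:.2f}")
```

In practice such a model would be trained on millions of labeled samples and far richer feature sets; the point of the sketch is only the shape of the approach.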

Your environment is going to become more distributed. Virtual environments allow for greater scale, where administrators are able to replicate data, better support distributed users, and deliver more complex workloads. Throughout all of this – you will need to ensure that your data points are secure. The dependence on the IT framework will only increase the number of workloads we place into the modern data center and virtual platform. Because of this – it’s critical to deploy powerful security features while still maintaining optimal performance.

Next-generation security technologies do just that. We now have powerful, scalable ways to deploy security solutions into the modern cloud and virtualization environment. As you build out your virtual and cloud platforms, make sure to look at security solutions that utilize next-generation features.

Ultimately, you’ll create a more efficient platform, improve end-user experiences, and be able to control your security environment on a truly distributed scale.

Source: TheWHIR

Bridging the Big Data Gap in Big Pharma with Healthcare Analytics

In this special guest feature, Brian Irwin, VP of Strategy at SHYFT Analytics, takes a look at the three market dynamics driving life sciences organizations to evaluate new data analytics strategies and technologies as they transform into value-based care delivery models. He leads strategic account development and partnership initiatives while serving as a key strategist for the company. SHYFT Analytics is a leading cloud analytics company within life sciences. The company plays an integral role as the industry continues to undergo dramatic transformation to deliver more personalized and value-based medicine. Brian has over 12 years of experience in a variety of sales and leadership roles within the life sciences industry. Areas of impact and focus have included organizational leadership, executive account management, and strategic enterprise development. Most recently, Brian served as the President and Managing Director at Informa Training Partners, a company focused on Clinical and Managed Market training solutions devoted exclusively to pharmaceutical, biotech, and med device companies. Additionally, Brian spent 9 years with Takeda Pharmaceuticals N.A. in positions of increasing responsibility, leadership, and organizational development. Brian holds a BA in Biology and Natural Sciences from St. Anselm College.

Life sciences organizations recognize that Big Data is both an opportunity and a challenge for their entire industry. However, the strategies, systems, processes, and platforms in place today are falling short and cannot contend with the demands of a rapidly evolving healthcare industry. As total spending on medicines globally reaches the $1 trillion level annually, with no end in sight to rising costs, there is tremendous pressure across the healthcare ecosystem to improve outcomes and prove value. Core to making healthcare more efficient, measurable, and patient-centric is the ability to integrate the vast data resources available across this ecosystem and translate them into meaningful, actionable insights.

The demand for timely and improved use of these data creates pressure across the various channels of healthcare, leaving manufacturers, payers, and provider groups particularly vulnerable to the big data deluge. Tasked with making sense of the exponential volumes of patient-level clinical and financial data, these organizations must also capitalize on opportunities to inform both clinical and commercial strategies simultaneously. A demand for data access across the enterprise, a changing competitive landscape tied to intense cost pressures, and the rapid influx of Real World Evidence (RWE) data are forcing the hand of every healthcare entity. Their vast network of data silos – historically housed in rigid, brittle, inaccessible systems – is no longer fit to serve as the backbone of operations in an increasingly dynamic and often unpredictable marketplace.

Let’s take a closer look at the three market dynamics driving life sciences organizations to evaluate new data analytics strategies and technologies as they transform into value-based care delivery models.

1 — Data Demands across the Enterprise

The rapid proliferation of technology and the overall shift towards patient engagement have generated an unprecedented amount of clinical and commercial data. However, overburdened internal teams have their hands tied by archaic reporting processes and struggle even to gain access to these data. Historically, getting data out of these silos and into the hands of decision makers across the different facets of a company’s operations took weeks, even months. To make matters worse, the ‘reports’ that were developed and delivered for review were often incomplete, lacking the right data or the right detail to truly inform business decisions. Executives had two choices: accept the information as it was, or ask for modifications and wait another month for the final result.

Today it is clear that pharmaceutical companies no longer have the luxury of time; waiting for insights, which are subpar at best and inaccurate at worst, risks any potential first mover advantage that could be gained.  Without a faster, more effective way to manage data across the enterprise, life sciences companies cannot garner insights quickly enough to stay competitive.

2 — Cost Reductions and Increased Commercialization Costs

Life sciences companies are undergoing a massive shift in the way information is gathered, used and leveraged to drive successful outcomes in all areas of their ecosystem. At the same time, many are constrained by tighter payer controls and increased commercialization costs. According to an IMS Institute IT survey, over $35 billion in cost reductions are needed through 2017 in order for large pharmaceutical manufacturers to maintain their current levels of research and development activities as well as their operating margin levels. The same study found that almost half of survey respondents, 45 percent, confirmed they are planning cuts of more than 10 percent over the next three years. The question becomes, “How can we conduct these research activities, particularly observational research, faster and cheaper than it’s done today?” Companies have invested heavily in all of the data they need, but lack the technology and applications required to achieve these goals.

These cost pressures sit in paradox to the revenue opportunity available from the industry data explosion; life sciences companies are struggling to find the balance between cost reductions and investment in innovations. All recognize that if they cannot take advantage of the data in front of them their competitor certainly will… and will take their market share too.

Transforming data into insights through proven analytics can support the industry’s increasing need for real-world and outcomes-based insights. By rethinking the silos that permeate their businesses, these companies can improve the volume and value of research activities, shorten cycle times across all lifecycle phases, strengthen analytical competence, and drive rapid change and market differentiation.

3 — Real World Evidence will Drive Enterprise Success Factors

RWE includes elements associated with the delivery of care – electronic medical records, claims information, patient surveys, clinical trial effectiveness, treatment preferences, even physician utilization patterns. Until recently, no one has been able to crack the code on RWE success – conservative market estimates suggest big pharma spends $20 million annually, on average, on RWE, yet these companies are still no closer to fully understanding the real-world impact of pharmacologic and non-pharmacologic treatment on patients and healthcare systems.

The typical approach to RWE – a myriad of siloed, services-dependent databases, with access restricted to just a handful of “power users” – has proven vastly ineffective. It simply cannot address the need to quickly access, analyze, and deliver insights from real-world data for broad use across the organization.

As pharmaceutical companies continue to invest, create, and collect real-world evidence data, each of them must be able to turn that information into actionable insights as they seek to impact treatment options, reduce costs, and improve patient outcomes. By translating RWE data into patient-centric intelligence and analytics for use across the clinical – commercial continuum, the impact of real world data can quickly go from basic theory to pervasive practice and finally deliver upon its promise to transform treatment strategies and the health of patients everywhere.

Cloud-based analytics can bridge the Big Data gap. Such solutions have the capability to translate data into patient-centric intelligence for use across the enterprise. The result is an improvement to both the volume and value of research activities, shorter cycle times across all lifecycle phases, and stronger, more complete analytical competence to drive rapid change and market differentiation. By enabling these organizations to quickly access, analyze, and deliver meaningful insights for broad use, they can deliver a better understanding of unmet patient need, create more targeted and streamlined product development, and contribute to the overall elevation of quality healthcare.

Source: insideBigData

“Above the Trend Line” – Your Industry Rumor Central for 5/2/2016

Above the Trend Line, our machine learning industry rumor central, is a recurring feature of insideBIGDATA. In this column, we present a variety of short, time-critical news items such as people movements, funding news, financial results, industry alignments, rumors and general scuttlebutt floating around the big data, data science and machine learning industries, including behind-the-scenes anecdotes and curious buzz. Our intent is to provide our readers a one-stop source of late-breaking news to help keep you abreast of this fast-paced ecosystem. We’re working hard on your behalf with our extensive vendor network to give you all the latest happenings. Heard of something yourself? Tell us! Just e-mail me at daniel@insidebigdata.com. Be sure to tweet Above the Trend Line articles using the hashtag #abovethetrendline.

In C-suite news we learned that Alteryx, Inc., a leader in self-service data analytics, announced it has appointed Chuck Cory, former Chairman, Technology Investment Banking at Morgan Stanley, and Tim Maudlin, an Independent Financial Services Professional, to its Board of Directors … Another industry alignment! FICO, the predictive analytics and decision management software company, and Capgemini, one of the world’s foremost providers of consulting, technology and outsourcing services, unveiled they have formed an alliance to meet the increasing market need for analytic solutions in financial services. The alliance will provide FICO’s risk and fraud management products through Capgemini’s consulting and integration services in North America … More people movement! UK-based IS Solutions Plc, a leading AIM-listed data solutions provider, has appointed digital data expert Matthew Tod to head up a new Data Insight practice. Tod was previously of digital analytics consultancy Logan Tod and Co. which was acquired by PwC in 2012; he became Partner and later led the Customer Consulting Group, successfully building up PwC’s digital transformation strategy capabilities. He takes the new role of Director of Data Insight at IS Solutions Plc and will work with clients to overcome the “data everywhere” problem, enabling them to gain competitive advantage from their information assets … And in some M&A news: HGGC, a leading middle market private equity firm, today announced that it has completed the acquisition of FPX, a SaaS company and leading provider of platform-agnostic enterprise Configure-Price-Quote (CPQ) applications. As part of the transaction, senior FPX management has reinvested their proceeds from the sale and retained a significant minority stake in the business. Terms of the private transaction were not disclosed … Heard on the street: Narrative Science, a leader in advanced natural language generation (Advanced NLG) for the enterprise, announced the availability of Narratives for Power BI, a first-of-its-kind extension for the Microsoft Power BI community. The extension, now available for download, allows users to access important insights from their data in the most intuitive, consumable way possible – dynamic, natural language narratives. Now all users, regardless of skill-set, can quickly understand the insights from any data set or visualization, simply by reading them … ODPi, a nonprofit organization accelerating the open ecosystem of big data solutions, revealed that 4C Decision, ArenaData, and AsiaInfo, have joined the initiative to advance efforts to create a common reference specification called ODPi Core. Many vendors have focused on productizing Apache Hadoop® as a distribution, which has led to inconsistency that increases the cost and complexity for application vendors and end-users to fully embrace Apache Hadoop. Founded last year, ODPi is an industry effort to accelerate the adoption of Apache Hadoop and related big data technologies. ODPi’s members aim to streamline the development of analytics applications by providing a common specification with reference implementations and test suites … Veriflow, the network breach and outage prevention company, announced the appointment of Scott Shenker to its Board of Directors and of Sajid Awan as vice president of products. Veriflow launched out of stealth on April 5, with $2.9 million in initial investor funding from New Enterprise Associates (NEA), the National Science Foundation and the Department of Defense. 
The company’s software, which is designed for CISOs, network architects, engineers and operators, uses mathematical network verification, which is based on the principles of formal verification, to bulletproof today’s most complex networks. Veriflow’s patented technology, including a best-practice library of network security and correctness policies, provides solutions across the multi-billion-dollar networking market to minimize the security breaches and costly disasters that can result from network vulnerabilities … SnappyData, developers of the in-memory hybrid transactional analytics database built on Apache Spark, indicated that it has secured $3.65 million in Series A funding, led by Pivotal, GE Digital and GTD Capital. The funding will allow the company to further invest in engineering and sales. The SnappyData leadership team includes Richard Lamb, Jags Ramnarayanan and Sudhir Menon, who worked together during their time at Pivotal to build Pivotal GemFire® into one of the most widely adopted in-memory data grid products in the market … Three researchers located at Drexel University, North Carolina State University, and the University of North Carolina at Chapel Hill have been named 2016-2017 Data Fellows by the National Consortium for Data Science (NCDS), the consortium announced. The NCDS, a public-private partnership to advance data science and address the challenges and opportunities of big data, will provide each Data Fellow with $50,000 to support work that addresses data science research issues in novel and innovative ways. Their work will be expected to advance the mission and vision of the NCDS, which formed in early 2013. Fellowships begin July 1 and last one year … Lavastorm, a leading agile analytics company, announced that it has partnered with Qlik® (NASDAQ: QLIK), a leader in visual analytics, to put a powerful, fully-integrated modern analytics platform into the hands of data analysts and business users directly through Qlik Sense®. The dynamic, integrated solution provides an intuitive, comprehensive platform that eliminates the complexity of advanced analytics while empowering business users of all skill levels to uncover unique, transformative business insights … Another industry alignment: Snowflake Computing, the cloud data warehousing company, and MicroStrategy® Incorporated (Nasdaq: MSTR), a leading worldwide provider of enterprise software platforms, today announced an alliance to bring the flexibility and scalability of the cloud to modern data analytics. This collaboration will build on Snowflake’s certified connectivity with MicroStrategy 10™ through further product integration and go-to-market collaboration, enabling businesses to take advantage of the cloud to get fast answers to their toughest data questions … New product news! Datawatch Corporation (NASDAQ-CM: DWCH) launched Datawatch Monarch 13.3, the latest edition of the company’s first-to-market self-service data prep solution. Datawatch Monarch enables business users to acquire, manipulate and blend data from virtually any source. The new product release delivers better and faster data access and data prep through advanced functionality, unrivaled simplicity and enhanced information governance … Trifacta, a leader in data wrangling, announced that Infosys (NYSE: INFY), a global leader in consulting, technology and next-generation services, has partnered with Trifacta to provide a data wrangling solution for the Infosys Information Platform (IIP) and Infosys’ other platforms and offerings.
Infosys clients can now leverage Trifacta’s intuitive self-service solution for exploring and transforming data — a critical step in any analytics process. Business analysts and data scientists can integrate large, complex data sets, transform and filter the results and share valuable insights, all from data stored and processed within the IIP platform.

Source: insideBigData

Secure Web Gateways Fail to Prevent Malicious Attacks

Of the 200 billion total communications observed, nearly 5 million attempted malicious outbound communications were from infected devices.

Eighty percent of secure web gateways installed by Fortune 1000 companies miss the vast majority of malicious outbound communications, according to a report from attack detection and analytics specialist Seculert.

The study examined a subset of its 1.5 million user base that included more than 1 million client devices that had generated over 200 billion total communications from Fortune 1000 companies in North America. Nearly all the environments studied were running sophisticated perimeter defense systems, which included a secure web gateway and/or next generation firewall, an IPS, as well as a SIEM, in addition to fully functioning endpoint protection.

“The alarming part of this research is the sheer number of malicious threats that were able to make it through the companies’ secure web gateways time after time,” Richard Greene, CEO of Seculert, told eWEEK. “The research found that 80 percent of secure web gateways blocked zero to two of the 12 latest and most dangerous threats. These are real tests conducted with Fortune 1000 companies, and even they are ill prepared for the increasing complexity of cybercriminals’ attacks.”

Of the 200 billion total communications observed, nearly 5 million attempted malicious outbound communications were from infected devices, and 40 percent of all attempted malicious communication succeeded in defeating their associated secure web gateway.

“Many enterprises rely on only prevention-focused perimeter security tools, like next generation firewalls, IPS, and secure web gateways,” Greene said. “This positions them directly in the crosshairs of cybercriminals and other adversaries capable of penetrating modern perimeter security defenses with startling ease. While useful, these prevention solutions alone cannot protect organizations in the current threat landscape.”

The report also found nearly 2 percent of all examined devices were infected, and all companies included in the research exhibited evidence of infection.

“Understanding the cyber threat landscape is a constant game of trying to stay ahead of the latest threats,” Greene said. “Common cyber criminals will no longer be the most common threat as sophisticated criminal gangs with modern organizational models and tools emerge as the primary threat.”

Greene noted that besides being well funded, these attackers have the luxury of time on their side, so they’re able to develop more advanced techniques not yet anticipated by the cyber-defense community.

“Also, there will be a growing number of state versus state reconnaissance attacks as cyber ‘armies’ research the strengths and weaknesses of their opponents,” he said.

Measured over time, nearly all of the gateways observed exhibited uneven performance, and the report noted that while most performed well for weeks or months, eventually all showed evidence of being “defeated” by the adversary.
Source: eWeek

6 Splunk alternatives for log analysis

Quick! Name a log analysis service. If the first word that popped out of your mouth was “Splunk,” you’re far from alone.

But Splunk’s success has spurred many others to up their log-analysis game, whether open source or commercial. Here are six contenders that have a lot to offer sys admins and devops folks alike.

ELK/Logstash (open source)

Splunk faces heavy competition from the family of projects that use the ELK stack: Elasticsearch for search, Logstash for data collection, and Kibana for data visualization. All are open source.

Elasticsearch, the company that handles commercial development of the stack, provides all the pieces either as cloud services or as free, open source offerings with support subscriptions. Used together, the three components provide the best alternative to Splunk, since Splunk’s strength is in searching and reporting as well as data collection.
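
As a minimal sketch of how the pieces fit together, the snippet below indexes a single structured log event into Elasticsearch and searches it back using the official low-level Python client (5.x-era API). In a real ELK deployment, Logstash or Beats would ship the logs and Kibana would handle the visualization; the host and index names here are placeholders.

```python
# Minimal sketch: index one structured log event into Elasticsearch and search
# it back with the official low-level Python client (5.x-era API). In a real
# ELK deployment, Logstash or Beats would ship the logs and Kibana would
# visualize the results; the host and index names here are placeholders.
from datetime import datetime
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Store a single structured log line.
es.index(
    index="app-logs-2016.05",
    doc_type="logline",
    body={
        "@timestamp": datetime.utcnow().isoformat(),
        "level": "ERROR",
        "service": "checkout",
        "message": "payment gateway timeout after 30s",
    },
)

# Find recent errors from the same service.
result = es.search(
    index="app-logs-*",
    body={"query": {"bool": {"must": [
        {"match": {"level": "ERROR"}},
        {"match": {"service": "checkout"}},
    ]}}},
)
for hit in result["hits"]["hits"]:
    print(hit["_source"]["message"])
```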

Platfora Further Democratizes Big Data Discovery

Platfora, the Big Data Discovery platform built natively on Apache Hadoop and Spark, announced the general availability of Platfora 5.2. The new release democratizes big data across an organization, moving it beyond IT and early adopters by enabling business users to explore big data and discover new insights through their favorite business intelligence (BI) tool. With its flexible, open platform, Platfora makes it easy for customers to maximize and extend existing IT investments while getting measurable value out of big data. Platfora 5.2 features native integration with Tableau, Lens-Accelerated SQL accessible through any SQL client, and the option to run directly on the Hadoop cluster using YARN.

Achieving value from big data implementations has been elusive for enterprises, and connecting traditional BI tools to Hadoop data lakes has been a difficult, slow process, with many organizations doing far more work with virtually no new answers to show for it. Platfora’s Big Data Discovery platform enables citizen data scientists to conduct self-service data preparation, visual analysis, and behavioral analytics in a single platform. With this release, Platfora puts all this smart data and analysis in the hands of any business user leveraging any BI tool, so they can ask and answer the questions that matter to their business, such as those around customer behavior and segmentation. Platfora provides the tools and tight iterative discovery loop to make new insights possible in a matter of minutes to hours, rather than the days or weeks it could take using an alternative solution.

“Getting value out of big data is more than just slicing and dicing billions of records and it can’t only be the domain of a data scientist. It requires discovering what you have and getting the data ready for analysis to use without boundaries,” said Peter Schlampp, VP of Products, Platfora. “We are dedicated to providing flexible, open tools that can address modern data challenges, and Platfora 5.2 opens up the transformative power of big data to business users by enabling them to use the BI tools they know and love, further empowering ‘citizen data scientists’ across enterprises.”

Platfora cohort analysis

Platfora Big Data Discovery 5.2 includes a variety of new features and technical enhancements that make it possible for both business and technical users to easily integrate with their favorite tools, including:

  • Native Tableau Integration: Directly export prepared and enriched data in TDE format to Tableau Desktop or schedule data pushes automatically to Tableau Server.

  • Lens-Accelerated SQL: Platfora lenses make access to petabyte-scale data hundreds to thousands of times faster than querying the data directly. Now any BI tool can query lenses live via SparkSQL and ODBC, opening big data to any business user (see the sketch after this list). Compared to standalone SQL accelerators for Hadoop, Platfora’s lenses are more scalable, easier to maintain and manage, and enterprise-ready.

  • Run on Hadoop Cluster: With the development and maturity of the YARN resource manager for Hadoop, it is now possible to run Platfora directly on the Hadoop cluster or in the traditional dedicated configuration. IT departments can take advantage of existing hardware investments and repurpose computing resources on-demand.

  • Enhanced Vizboards™: The easiest and best way to visualize data gets better in Platfora 5.2 with responsive layout, smarter default visualizations and more consistent use of color.
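
As a rough illustration of the Lens-Accelerated SQL access described above, the sketch below queries a lens over ODBC from Python. It assumes an ODBC DSN pointing at Platfora’s SparkSQL endpoint; the DSN, lens name and columns are hypothetical, and any ODBC- or SparkSQL-capable BI tool could issue the same query.

```python
# Hedged sketch: querying a Platfora lens over ODBC. It assumes an ODBC DSN
# ("PlatforaLenses") configured against Platfora's SparkSQL endpoint; the DSN,
# lens name, and columns are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect("DSN=PlatforaLenses", autocommit=True)
cursor = conn.cursor()

# The lens is exposed like a table, so a plain aggregate query is enough;
# Platfora answers it from the pre-built lens rather than raw Hadoop data.
cursor.execute("""
    SELECT region, COUNT(*) AS sessions
    FROM web_sessions_lens
    GROUP BY region
    ORDER BY sessions DESC
""")
for region, sessions in cursor.fetchall():
    print(region, sessions)

conn.close()
```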

“Big data discovery will help advance the analytics maturity of the organization, will start training some of the future data scientists, can provide the first batch of insights that may raise awareness to new opportunities and may provide enough return on investment to justify the business case for big data analytics,” said Joao Tapadinhas, Research Director, Gartner. “It is the missing link that will make big data go mainstream.”

Source: insideBigData

U.S. Risks Losing Edge in HPC, Supercomputing, Report Says

With growing competition from China and other countries, U.S. lawmakers must take steps to accelerate the country’s HPC efforts, the ITIF says.

Last year, President Obama issued an executive order aimed at accelerating the development of high-performance computing systems in the United States. The executive order created the National Strategic Computing Initiative (NSCI), an initiative to coordinate federal government efforts and those of public research institutions and the private sector to create a comprehensive, long-term strategy for ensuring that the United States retains its six-decade lead in research and development of HPC systems.

Noting the importance of supercomputers in government, industry and academia, Obama wrote that the country’s momentum in high-performance computing (HPC) needed a “whole of government” approach that incorporates public and private efforts.

“Maximizing the benefits of HPC in the coming decades will require an effective national response to increasing demands for computing power, emerging technological challenges and opportunities, and growing economic dependency on and competition with other nations,” the president wrote. “This national response will require a cohesive, strategic effort within the Federal Government and a close collaboration between the public and private sectors.”

However, according to a recent report, the United States’ lead in the space is not ensured, and other regions and countries—in particular, China—are making concerted efforts to expand their capabilities in the design, development and manufacturing of supercomputers and the components that make up the systems.

The authors of the report by the Information Technology and Innovation Foundation (ITIF) stressed the importance of the HPC market to the United States—to everything from national security to economic development—and listed steps Congress must take to keep the country at the forefront of HPC and supercomputer development.

“Recognizing that both the development and use of high-performance computing are vital for countries’ economic competitiveness and innovation potential, an increasing number of countries have made significant investments and implemented holistic strategies to position themselves at the forefront of the competition for global HPC leadership,” the authors, Stephen Ezell and Robert Atkinson, wrote. “The report details how China, the European Union, Japan, and other nations have articulated national supercomputing strategies and announced significant investments in high-performance computing.”

The United States needs to meet and exceed those efforts, the authors wrote. “The United States currently leads in HPC adoption, deployment, and development, but its future leadership position is not guaranteed unless it makes sustained efforts and commitments to maintain a robust HPC ecosystem,” they wrote.

The report describes HPC as the use of supercomputers and massively parallel processing technologies to address complex computational challenges, using such techniques as computer modeling, simulation and data analysis. It includes everything from computer hardware to algorithms and software running in a single system.

The United States continues to be the leader in the development of supercomputers, but current trends in the industry are threatening. In the latest Top500 list of the world’s fastest systems, released in November 2015, the United States had 200 systems on the list. However, that was down from the 231 on the list released in July 2015 and was the lowest number for the country since the list was started in 1993. In addition, China placed 109 systems on the November list, almost three times the 37 the country had on the July list. The Tianhe-2 supercomputer developed by China’s National University of Defense Technology was in the top slot for the sixth consecutive time, with a peak theoretical performance of 54.9 petaflops (quadrillion floating point operations per second), twice the speed of Titan, the second-fastest system, located at the U.S. Department of Energy’s (DOE) Oak Ridge National Laboratory in Tennessee.

The next Top500 list will be announced next month at the ISC 2016 show in Frankfurt, Germany.
Source: eWeek

Pentagon Bug Bounty Program Attracts Strong Hacker Interest

The Pentagon is at the midpoint of a crowdsourcing initiative that has attracted about 500 researchers to sign up for the opportunity to search for bugs in the agency’s Websites.

The Pentagon’s bug bounty program hit its midway point this past week, and already the initiative is, in some ways, a success. More than 500 security researchers and hackers have undergone background checks and begun to take part in the search for security flaws, according to HackerOne, the company managing the program.

The “Hack the Pentagon” pilot, announced in March, is the first federal government program to use a private-sector crowdsourcing service to facilitate the search for security flaws in government systems. The $150,000 program started two weeks ago and will continue for another two weeks. While neither the Pentagon nor HackerOne has disclosed any of the results so far, Alex Rice, chief technology officer and co-founder of vulnerability-program management service HackerOne, stressed that it would be “an extreme statistical outlier” if none of the researchers found a significant vulnerability.

“What I can say is that we haven’t seen any of [these programs] launched, even those with a smaller number of individuals, where the researchers have found nothing,” he told eWEEK. “No one who launches these bounty programs expects to find nothing.”

The Pentagon’s program is the first bug bounty effort sponsored by the federal government, but it will not likely be the last, because companies and government agencies are on the wrong side of an unequal security equation: While defenders have to hire enough security workers to find and close every security hole in their software and systems, attackers only have to find one, said Casey Ellis, CEO and founder of BugCrowd, a vulnerability-bounty organizer.

“The government is in a really bad position right now, which comes from being outnumbered by the adversaries,” he said. “They can’t hire security experts fast enough, and in the meantime they are still being hacked.”

Crowdsourcing some aspects of their security work offsets part of the inequality in the math facing these companies, he said.

The Department of Defense program, however, is on a much larger scale than most initial commercial efforts, HackerOne’s Rice said. Other efforts typically use dozens of security researchers, rather than hundreds. The Pentagon should get good results because the sheer number of hackers means they will have more coverage of potential vulnerabilities.

“Even hiring the best security experts that you are able to find, that will still be a much smaller pool than if you could ask everyone in the world, or in the country,” Rice said. “You really can’t do security effectively unless you come at it from every possible angle.”

U.S. Secretary of Defense Ash Carter characterized the initiative as a way for the government to take new approaches to blunt the attacks targeted at the agency’s networks. “I am always challenging our people to think outside the five-sided box that is the Pentagon,” he said in a statement at the time. “Inviting responsible hackers to test our cyber-security certainly meets that test.”

The bug bounty pilot started on April 18 and will end by May 12, according to the Department of Defense. HackerOne is slated to pay out bounties to winners no later than June 10. The Department of Defense has earmarked $150,000 for the program. The DOD called the initiative a step toward implementing the administration’s Cyber National Action Plan, a strategy document announced Feb. 9 that calls for the government to put a priority on immediate actions that bolster the defenses of the nation’s networks. The program is being run by the DOD’s Defense Digital Service, which Carter launched in November 2015.

While finding and fixing vulnerabilities is important, the program could also create a potential pipeline to recruit knowledgeable security workers into open positions in the federal government, Monzy Merza, director of cyber research at data-analysis firm Splunk, said in an email interview.

“Discovery and fixing of vulnerabilities is a good thing,” he said. “Creating an opportunity for individuals to test their skills and learn is also important. And there is a general shortage of skilled security professionals. Putting all these pieces together, a bug bounty program creates opportunities for people to learn and creates a human resource pool in a highly constrained market.”

While attacking government systems may thrill some hackers and make others too nervous to participate, the actual program differs little from the closed bug hunts sponsored by companies, HackerOne’s Rice said. The security firm’s programs—and other efforts by BugCrowd and TippingPoint’s Zero-Day Initiative, now part of security firm Trend Micro—vet security researchers and hackers to some extent before allowing them to conduct attacks on corporate services and Websites, especially production sites. In the Pentagon’s case, more extensive background checks were conducted.

In the end, the programs allow companies to spend money on security more efficiently, only paying for results, not just hard-to-find workers, he said. “Companies are not insecure because of a lack of money to spend on security,” Rice said. 
“There is a ridiculous amount of money being inefficiently and ineffectively spent on security. Even if we could hire all the security experts in our town or in our field, we could not possibly level the playing field against the adversaries.”
Source: eWeek