Cavium Brings 64-Bit ARM Architecture to Network, Storage SoCs


The company in the past has used the MIPS architecture in its Octeon chips, but now is expanding its strategy to include the ARM-based Octeon TX SoCs.

Cavium officials more than a year ago embraced ARM’s 64-bit architecture for the ThunderX line of server processors. Now the vendor is doing the same with its new line of Octeon TX chips for embedded products in such areas as networking and storage.

The company on May 2 introduced the new portfolio of systems-on-a-chip (SoCs), which comprises four product families, all based on 64-bit ARMv8.1 cores. Cavium’s previous Octeon chips were based on the MIPS64 architecture. The MIPS products reportedly will continue to be produced, with the new ARM-based SoCs expanding what Cavium can offer.

ARM designs low-power SoCs and licenses those designs to a broad array of partners, from Qualcomm and Samsung to Cavium, Applied Micro and Advanced Micro Devices. The company’s chip designs are found in most smartphones and tablets, and for the past several years ARM officials have been pushing to move up the ladder and get their architecture into the data center.

Most of the attention has centered on servers, and vendors like Cavium, Applied Micro, AMD, Qualcomm and others have either put ARM-based server SoCs on the market or are developing them. However, ARM officials have said they see a role for their low-power designs in a wide range of data center systems, including networking and storage appliances.

According to Steve Klinger, general manager of Cavium’s Infrastructure Processor Group, his company’s new product line is an example of what can be done with the ARM architecture.

“The wide range of products in the Octeon TX ARM 64-bit product line builds upon this success [of the company’s MIPS Octeon SoCs] and expands the use of these products into control and embedded processing applications that leverage the fast-growing ARM ecosystem and breadth of open-source initiatives,” Klinger said in a statement.

The Octeon TX portfolio includes four product families—the CN80XX and CN81XX (one to four ARMv8.1 ThunderX cores and up to 2MB of last-level cache) and the CN82XX and CN83XX (eight to 24 cores and up to 8MB of last-level cache). The chips will enable Cavium to get into control-plane application areas in networking and storage for enterprises, service providers and data centers, company officials said. The chips’ ability to run multiple data and control planes concurrently will have application in a wide range of areas, from security and router appliances to software-defined networking (SDN) and network-functions virtualization (NFV), service provider customer-premises equipment (CPE), storage controllers and gateways for the Internet of things (IoT).

The network is under pressure from such trends as big data, mobility, the IoT, the cloud and the massive amounts of data mirrored across the cloud and enterprise, and there is growing demand for networks to become more scalable, agile, open and application-centric. The control plane needs to be able to run commercial software distributions (such as Red Hat Enterprise Linux, Canonical and Java SE) and open-source applications, such as OpenStack, OpenFlow and Quagga, Cavium officials said. Businesses also want the data plane to simultaneously support multiple high-performance applications for firewalls, content delivery, routing and traffic management, they said. There also is increasing demand for bandwidth, along with growing security threats.

While the current MIPS-based SoCs are being used in data-plane applications and in the control plane with embedded software, control-plane applications that need a broader software ecosystem have found it in the x86 architecture used by Intel and AMD. Businesses embracing open, service-centric networks are looking for options that offer lower cost and a wide ecosystem, which Cavium officials said the ARM architecture offers.

The new Octeon TX SoCs combine what the MIPS-based offerings do with the ecosystem, virtualization, open-source support and optimized ARMv8.1 CPU cores from the ThunderX portfolio. They also integrate Cavium’s NitroX V security processors.

The SoCs with one to four cores will begin sampling this quarter, while those with eight to 24 cores will sample starting in the third quarter.
Source: eWeek

Microsoft: SQL Server 2016 Gets a June 1 Release Date

The wait’s nearly over. Microsoft’s cloud- and analytics-friendly database software will be generally available next month.

Microsoft has finally settled on a release date for SQL Server 2016, the Redmond, Wash., software giant announced May 2. A year after Microsoft released the first public preview of SQL Server 2016, the database software will be generally available Wednesday, June 1—before summer arrives. Customers can select from four editions: Express, Standard, Enterprise and the free Developer Edition. The latter includes all the features found in SQL Server Enterprise but is meant for development and test deployments, not production workloads.

The official release, along with full enterprise support services from Microsoft, will allow customers “to build mission-critical, and business-critical intelligent applications with the most secure database, the highest performance data warehouse, end-to-end mobile BI [business intelligence] on any device, in-database advanced analytics, in-memory capabilities optimized for all workloads, and a consistent experience from on-premises to cloud,” wrote Tiffany Wissner, senior director of Microsoft Data Platform Marketing, in a May 2 blog post.

While those are lofty claims, new benchmark data from one of Microsoft’s major hardware partners suggests that Wissner’s remarks are no idle boast.

Separately, Lenovo announced May 2 that it had set a new data warehouse performance record using an x3950 X6 server running Microsoft SQL Server 2016 and Windows Server 2016 Standard. The server was outfitted with eight Intel Xeon E7-8890 v3 processors clocked at 2.5GHz and 12TB of system memory. The non-clustered system scored 1,056,164.7 queries per hour (QphH) @30,000GB at $2.04 per QphH @30,000GB on the TPC-H decision support benchmark, which simulates a large number of ad hoc queries and concurrent data changes in business settings.
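
For readers unfamiliar with the TPC-H metrics, the price/performance figure is conventionally the total priced system cost divided by the QphH result, so the two reported numbers imply a total configuration price of roughly $2.15 million. A quick sanity check of that arithmetic, treating the standard TPC definition as an assumption here:

```python
# Back-of-the-envelope check of the reported TPC-H price/performance figure.
# Assumes the standard TPC definition: price/performance = total system price / QphH.
qphh = 1_056_164.7        # reported queries per hour @30,000GB
usd_per_qphh = 2.04       # reported price/performance in USD per QphH

implied_system_price = qphh * usd_per_qphh
print(f"Implied total system price: ${implied_system_price:,.0f}")  # ~ $2,154,576
```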

“This is the first-ever non-clustered result @30,000GB TPC-H benchmark scale. Previously, non-clustered results maxed out @10,000 TPC-H benchmark scale. At 30TB, it is three times larger than the previous tested database, reflecting the growth of in-memory databases both in size, and popularity,” said Lenovo in a statement. In addition to SQL Server’s in-memory data processing capabilities, the Chinese server maker also credited Windows Server 2016’s ability to support 12TB of memory, up from the previous 4TB limit.

SQL Server also opens the door to advanced R-based business analytics. R is the statistical computing language that is popular among data scientists for predictive analytics. Last month’s SQL Server 2016 release candidate introduced a new installer that gives administrators the option of installing a stand-alone Microsoft R Server, formerly Revolution R from Revolution Analytics, or SQL Server R Services as an in-database capability. Microsoft acquired Revolution Analytics in early 2015.

On the security front, SQL Server 2016 includes a new data privacy enhancing feature called Dynamic Data Masking (DDM), which Microsoft incorporated into its cloud-based Azure SQL Database last year. DDM can be used to obfuscate or limit access to sensitive data without making changes to the data stored in the database or applications.
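
To make the Dynamic Data Masking idea more concrete, here is a minimal sketch of what enabling a mask on an existing column could look like from Python. It assumes a pyodbc connection to a SQL Server 2016 instance and a hypothetical dbo.Customers table with an Email column; the connection string, object names and masking function are illustrative rather than taken from the article.

```python
import pyodbc

# Minimal sketch: enable Dynamic Data Masking on a hypothetical column.
# Connection string, table and column names are illustrative assumptions.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};SERVER=myserver;"
    "DATABASE=SalesDb;UID=admin;PWD=secret"
)
cursor = conn.cursor()

# Mask an email column so non-privileged users see values like "jXXX@XXXX.com".
cursor.execute("""
    ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()')
""")

# Privileged accounts can still be granted the unmasked view.
cursor.execute("GRANT UNMASK TO reporting_admin")
conn.commit()
```

Accounts that have not been granted UNMASK then see obfuscated values in query results, while the data stored in the table itself is unchanged.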
Source: eWeek

8 Things to Consider When Moving an Enterprise System to the Cloud


Pick the Right Cloud Provider

As enterprises’ needs evolve, or in the event of a merger, an acquisition, a company divestiture or a split, it is essential that the data is not held hostage to one provider; it must be able to move seamlessly from one cloud provider to another. In addition, make sure the provider is the right fit in terms of longevity, scalability, cost, open standards, other integrated services/apps, location and security.

Source: eWeek

Microsoft Restricts Cortana to Edge and Bing to Protect Windows 10

DAILY VIDEO: Microsoft limits Cortana to Edge and Bing on Windows 10; U.S. risks losing edge in HPC, supercomputing, report says; Pentagon bug bounty program attracts strong hacker interest; and there’s more.

Read more about the stories in today’s news:

Today’s topics include Microsoft’s limitation of its Cortana virtual assistant technology to Edge and Bing on Windows 10, the United States’ plan to accelerate its high-performance computing efforts, the success of the Pentagon’s bug bounty program and PhishLabs’ discovery of malware posing as legitimate apps on Google Play.

Cortana, Microsoft’s virtual assistant technology included with the Windows 10 operating system, is being reined in, the company announced April 28. As the Windows 10 user base has grown—270 million devices are running the OS at last count—Microsoft has discovered that Cortana has been taken in unintended directions, resulting in what the company claims is an unreliable user experience.

“Some software programs circumvent the design of Windows 10 and redirect you to search providers that were not designed to work with Cortana,” said Ryan Gavin, general manager of Microsoft Search and Cortana. In particular, they can interrupt some of Cortana’s task completion and personalized search capabilities, he said. In response, Microsoft is locking down the Cortana search experience. Now, Cortana will only display Bing search results in the Microsoft Edge browser.

Last year, President Obama issued an executive order aimed at accelerating the development of high-performance computing systems in the United States. The executive order created the National Strategic Computing Initiative to coordinate federal government efforts and those of public research institutions and the private sector to create a comprehensive, long-term strategy for ensuring that the United States retains its six-decade lead in research and development of HPC systems.

However, according to a recent report, the United States’ lead in the space is not assured, and other regions and countries—in particular, China—are making concerted efforts to expand their capabilities in the design, development and manufacturing of supercomputers and the components that make up the systems.

“The United States currently leads in HPC adoption, deployment, and development, but its future leadership position is not guaranteed unless it makes sustained efforts and commitments to maintain a robust HPC ecosystem,” the Information Technology and Innovation Foundation reported.

The Pentagon’s bug bounty program hit its midway point this past week, and already the initiative is, in some ways, a success. More than 500 security researchers and hackers have undergone background checks and begun to take part in the search for security flaws, according to HackerOne, the company managing the $150,000 program.

The “Hack the Pentagon” pilot, announced in March, is the first federal government program to use a private-sector crowdsourcing service to facilitate the search for security flaws in government systems. While neither the Pentagon nor HackerOne has disclosed any of the results so far, Alex Rice, chief technology officer and co-founder of vulnerability-program management service HackerOne, stressed that it would be “an extreme statistical outlier” if none of the researchers found a significant vulnerability.

PhishLabs, a company that provides anti-phishing services, said it has discovered 11 malicious applications disguised as mobile apps for popular online payment services on Google’s official Google Play store since the beginning of this year. The applications purport to give users access to their online payment accounts from their mobile devices, PhishLabs security analyst Joshua Shilko said in a blog post.

However, in reality, the only functionality the apps have is to collect the user’s log-on credentials and personal data and to send that to a remote command and control server belonging to the malware authors, Shilko said. PhishLabs did not identify the 11 payment brands whose apps were spoofed and uploaded to Google Play.

Source: eWeek

Dell, EMC's New Corporate Brand: Dell Technologies


EMC unveils new mid-range all-flash storage system, Unity, that holds 80TB of content and will cost less than $20,000.

LAS VEGAS — Day 1 of EMC World is always one of the newsiest days of the year in the data storage business, and May 2 at the Sands Expo Center here was no different from years gone by.

Next year’s will be a bit different, however, because it will be called Dell World, combining the parent company’s smaller Austin-based conference with EMC’s larger event. About 10,000 attendees are here this week; the conference closes May 5.

EMC, the world’s largest storage provider and in the midst of an acquisition by Dell that probably won’t close until October due to some red tape (pun intended) involving the Chinese government, made a series of new-product announcements, including an important one about an all-flash array for mid-range companies.

New products aside for the moment, at least the corporate branding for the combination of two of the world’s largest and most successful IT companies has been worked out ahead of time.

Dell Wanted a ‘Family’ Name

“We wanted to convey a family of businesses and aligned capabilities, and as family names go, I’m kind of attached to Dell,” Dell CEO and founder Michael Dell (pictured with EMC CEO Joe Tucci) said during his Day 1 keynote. “So after the close of the transaction, our family of businesses will be officially known as Dell Technologies. It’s got a nice ring to it.”

Dell Technologies will comprise Dell, EMC Information Infrastructure, VMware, Pivotal, SecureWorks, RSA and Virtustream. The client solutions business will be branded Dell. “The brand equity of the Dell PC is irreplaceable, and we’ve gained market share for 13 straight quarters,” Dell said, seizing an opportunity to disparage a key competitor. “In fact, in the United States, our client business grew 4 percent, and in the same period, HP’s (Hewlett-Packard Inc.) client business declined minus 14 percent. Do you see a correlation there?”

The combined enterprise IT business will be named Dell EMC, Dell said.

‘Standing at the Center of World Technology’

“When Dell and EMC combine, our company and all of you, our customers and partners, will stand at the center of the world’s technology infrastructure, and that means that we stand at the center of human progress. And there’s no place I’d rather be,” Dell said.

The biggest new-product news of Day 1 of EMC World 2016 was the new Unity mid-range all-flash array. EMC claimed the new systems hold up to 80TB of capacity and provide full enterprise capabilities for about $18,000, EMC President of Products and Marketing Jeremy Burton told reporters and analysts. “That price is about half the cost of the nearest competitor, as far as we can tell,” Burton said. If true, it could truly bring the all-flash storage model into many mid-size businesses that couldn’t afford it previously.

Unity is designed specifically for small, mid-sized and departmental enterprise IT deployments and is available in all-flash array, hybrid array, software-defined and converged configurations, Burton said. The package features unified file and block storage services in a dense 2U footprint.

Key Features for Unity

Unity is fairly fast, delivering up to 300K IOPS, Burton said. Technical features include:

–True dual-active controller architecture;
–Support for file, block and VVols;
–Snapshots and remote sync/async replication;
–Native, controller-based encryption;
–New scalable file system for transactional and traditional file use cases;
–VMware VASA 2.0, VAAI, VVols and VMware-aware integration;
–Complete REST API for automation and DevOps use cases;
–Integrated Copy Data Management with EMC iCDM; and
–Zero-impact garbage collection.

Unity also can be deployed in a hybrid configuration to meet individual business requirements. Unity is the latest member of EMC’s all-flash portfolio of file and block storage for small and medium-sized IT departments, joining XtremIO, VMAX All Flash and DSSD D5 to ensure that, no matter what a customer needs, EMC has a purpose-built solution to fit virtually any data center use case.
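
Taking the Unity pricing claim above at face value, roughly $18,000 for up to 80TB of capacity works out to about $225 per raw terabyte. The quick calculation below treats those rounded figures as assumptions and ignores drives, software options and support costs that would affect a real quote.

```python
# Rough price-per-terabyte implied by the Unity figures quoted above.
# Assumes the ~$18,000 price covers the full 80TB configuration; real quotes
# will vary with drives, software options and support contracts.
price_usd = 18_000
capacity_tb = 80

print(f"~${price_usd / capacity_tb:,.0f} per TB")   # ~ $225 per TB
```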
Source: eWeek

Hulu Plans to Offer Streamed Cable TV Programming


The service reportedly will begin in the first quarter of 2017 as the company seeks new customers who want traditional TV in new ways.

Hulu is preparing to compete directly with cable television companies by launching its own streaming TV service that will provide daily network and cable TV programming to customers by subscription.

The company hopes to launch the new cable TV-style online service in the first quarter of 2017, according to a May 1 story by The Wall Street Journal, which was based on reports from anonymous sources who are familiar with the matter. The planned service would stream popular broadcast and cable TV channels and expand the company’s reach beyond simply offering streamed on-demand TV programming, including current broadcast TV hits, movies, Hulu original shows and more, the article reported. “Walt Disney Co. and 21st Century Fox, co-owners of Hulu, are near agreements to license many of their channels for the platform,” the story continued.

Among the networks that are expected to be part of the Hulu service offering are ABC, ESPN, Disney Channel, the Fox broadcast network, Fox News, FX and Fox’s national and regional sports channels, The Journal reported. “Preliminary conversations with other programmers have begun, but the service isn’t looking to offer all the hundreds of channels found in the traditional cable bundle, according to the people familiar with the plans.”

One notable omission at this point is Comcast, which is also an owner of Hulu, but “so far hasn’t agreed to license its networks for the planned digital pay-TV service,” the report continued.

Hulu did not immediately respond to an eWEEK inquiry regarding the report about the potential new television streaming service.

Hulu’s existing streaming subscriptions are priced from $7.99 to $11.99 per month and provide a wide range of programming. Subscribers of the streaming TV service would not be required to also subscribe to the company’s existing offerings, The Journal article said. The price of the upcoming service could be about $40 a month, according to the story.

The streaming video and streaming TV markets are continuing to get more competitive as service offerings to customers expand. In March, mobile phone carrier AT&T announced that AT&T and DirecTV customers will be able to dump their satellite dishes and receive a wide range of video content via wired or wireless Internet streaming on any device under new services that are expected to launch by the end of 2016. Under three options, customers will be able to get a multitude of DirecTV Now packages that contain various assortments of content similar to DirecTV content today, DirecTV Mobile packages that they can view anywhere or DirecTV Preview packages with ad-supported free content, according to the companies.

The packages will work over a wired or wireless Internet connection from any provider on a smartphone, tablet, smart TV, streaming media hardware or PC. The services will allow several users to view content over simultaneous sessions, and they will not require annual contracts, satellite dishes or set-top boxes, according to AT&T. The DirecTV Now packages will include on-demand and live programming from many networks, plus premium add-on options, and will be available for use after downloading an app and signing up for an account.

AT&T acquired DirecTV for $48.5 billion in July 2015, after having pursued the merger since May 2014, according to an earlier eWEEK story. AT&T’s move to offer enhanced deals to bring over DirecTV customers to grow its own subscriber base was part of the company’s vision for making the acquisition in the first place. The merger turned AT&T into a bigger player with its hands in more markets and a ready pool of new prospects to bring into its business coffers.

Last July, Comcast began offering its Comcast Stream online video streaming service for $15 per month, which provides customers with a package of live television stations, all over their cable Internet connections. The Stream service works without a television or cable box, instead bringing a live video stream directly to a customer’s in-home devices over the Internet via a cable modem. Stream is an Internet-only service and is not connected to Comcast’s cable television services, which remain separate. The channels included in Stream are network programming from ABC, CBS, The CW, Fox, NBC, PBS, Telemundo, Univision, HBO and local channels where a subscriber lives.

Netflix, a major competitor to Hulu, claims it has about 75 million members in more than 190 countries and offers streaming video subscription plans priced from $7.99 to $11.99 per month. Its members watch more than 125 million hours of movie and television programming a day, according to the company.

In April, Comcast announced that its NBCUniversal division is acquiring DreamWorks Animation for $3.8 billion as the longtime cable company continues to build its future by adding complementary businesses to steady it as the future of cable television remains unfocused.
The merger brings huge opportunities for content streaming to Comcast, which, like other cable companies, is seeing its business impacted by customers who are replacing their cable connections with streaming video and original programming from services such as Hulu, Netflix and Amazon Prime.
Source: eWeek

Will PCI DSS 3.2 Make Payments More Secure?


The latest iteration of the payment standard, which includes multifactor authentication, made its debut, but some security experts don’t think it goes far enough.

The latest iteration of the Payment Card Industry Data Security Standard—PCI DSS 3.2—adds new requirements and clarifies others. PCI DSS is a compliance specification that is typically a requirement for any organization that handles payments, including online and traditional brick-and-mortar retailers.

Among the biggest changes in the PCI DSS 3.2 standard—the successor to the PCI DSS 3.1 standard announced in April 2015—is the wider applicability of requirement 8.3, which details the use of multifactor authentication. The PCI DSS 3.0 standard, released in November 2013, required the use of multifactor authentication only for remote network access. With the PCI DSS 3.2 standard, all personnel with non-console administrative access to the cardholder data environment are required to have multifactor authentication.

“Previously, this requirement applied only to remote access from untrusted networks,” PCI Security Standards Council CTO Troy Leach said in a statement. “A password alone should not be enough to verify the administrator’s identity and grant access to sensitive information.”
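
PCI DSS does not prescribe a specific second factor, but a time-based one-time password (TOTP) is a common complement to a password. The sketch below, which assumes the third-party pyotp library and a hypothetical per-administrator secret, only illustrates the general shape of such a check; it is not taken from the standard itself.

```python
import pyotp

# Illustrative sketch of a second authentication factor (TOTP).
# The secret would normally be provisioned per administrator and stored securely;
# this generated value is purely for demonstration.
admin_totp_secret = pyotp.random_base32()
totp = pyotp.TOTP(admin_totp_secret)

print("Current one-time code:", totp.now())

def second_factor_ok(submitted_code: str) -> bool:
    """Return True only if the submitted code matches the current TOTP window."""
    return totp.verify(submitted_code)

# A login flow would require both the password check AND second_factor_ok(...)
# before granting non-console administrative access to the cardholder data environment.
```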

Additionally, the PCI DSS 3.2 standard is different from its predecessor specification in that the term “multifactor authentication” is used, rather than the prior term, “two-factor authentication.”

“Clarified correct term is multifactor authentication, rather than two-factor authentication, as two or more factors may be used,” the PCI DSS 3.2 summary of changes document explains.

The expanded requirement for multifactor authentication is a good thing for payment card security, said John Bambenek, threat intelligence analyst at Fidelis Cybersecurity. “Doing two-factor authentication for all access will be time-consuming, but straightforward, in my opinion,” Bambenek told eWEEK. “For those organizations that have to do penetration tests that will mean dedicating more time and, likely, more money.”

Beyond the expanded use of multifactor authentication, PCI DSS 3.2 also adds focus on making sure that organizations stay compliant after they change things in their IT environment. In PCI DSS 3.2, the 6.4.6 requirement is a new control that requires organizations to make sure that change control processes include verification of PCI DSS requirements that could be affected by a change. The basic idea is to help organizations avoid falling out of PCI DSS compliance as a result of a change.

For organizations moving from PCI DSS 3.1 to PCI DSS 3.2, the biggest challenge will be the internal overhead and increased costs they will incur to be compliant with the new standard, said Brian NeSmith, CEO at network security startup Arctic Wolf Networks. “The standard requires more frequent testing and assessments, and this only benefits the PCI compliance-services vendors,” NeSmith told eWEEK. “It does not remove the burden of figuring out what method or device to use to ensure continuous security between the compliance tests and assessments.”

With the volume of high-profile retail breaches in recent years, PCI DSS doesn’t exactly have a spectacular track record in the eyes of many in the security community. “Every company that has been spectacularly hacked in the last three years has been PCI-compliant. Sony, Target, Anthem, pick your favorite,” Mark Longworth, CEO of mobile security startup Shevirah, told eWEEK.

Fidelis’ Bambenek noted that compliance-driven security often doesn’t move anywhere near as fast as the risks. The gap between compliance and actual risks is also a real concern for NeSmith. Overall, the new PCI DSS 3.2 standard misses the mark by focusing on detecting and reporting security control failures rather than protecting against threat detection use cases, he said. “If a thief gets into your house through an unlocked door, adding another lock on the door doesn’t make you safer,” NeSmith said. “What you really need to do is make sure to lock the door, but if you forget, you need to be able to detect the break-in and make sure the police show up before the thief gets away.”

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.
Source: eWeek

Talari SD-WAN Technology Now Supports Microsoft Azure, Hyper-V


The company’s virtual appliances already worked with VMware and AWS, but now will also support Microsoft’s cloud and virtualization platforms.

Talari Networks is now including Microsoft’s cloud and virtualization technologies among the platforms the company’s software-defined WAN products will support.

Company officials announced that a new virtual appliance, the VT800, will support Microsoft Azure and Hyper-V environments. Prior to the VT800, Talari offered virtual appliances that supported VMware’s ESX virtualization technology and integration with Amazon Web Services’ (AWS) cloud technologies. Now customers with Microsoft infrastructures can run Talari’s software-defined WAN (SD-WAN) capabilities natively in their environments.

The networking space is increasingly using cloud- and virtualization-based services in the WAN, and Talari needs to be able to support all of the leading technologies, according to President and COO John Dickey. “Talari has always focused on offering customers a high degree of flexibility when it comes to acquiring and deploying an SD-WAN solution,” Dickey said in a statement, adding that the vendor will continue to expand its SD-WAN deployment options. “Our work with Microsoft, Amazon and VMware as delivery options for the VT800 are the latest milestones on our journey to bring a comprehensive, partner integrated SD-WAN solution to market.”

Talari is one of a growing number of vendors in the nascent SD-WAN market, a fast-rising part of a rapidly changing enterprise networking space. SD-WANs—part of the larger network virtualization move in the industry—come as enterprises and service providers are increasingly using the cloud to deliver applications and services, their workers are becoming more mobile, the Internet of things (IoT) is growing and the number of mobile devices connecting to the network is increasing.

When traffic from the branch went to the data center, connectivity options like Multiprotocol Label Switching (MPLS) were a good fit. But now, with more unstructured data and more traffic being mobile and coming from the cloud, WANs need to be more scalable, programmable and affordable, and connected directly to the cloud. SD-WAN technologies offer a complement to MPLS.

The market is expected to grow quickly over the next several years. Gartner analysts expect the number of enterprises adopting SD-WAN technologies to increase from about 1 percent now to 30 percent by the end of 2019. IDC analysts are forecasting that the market will grow from less than $225 million last year to more than $6 billion by 2020.

Customers have a lot of options to sort through, with almost two dozen vendors offering SD-WAN products. That includes established players like Cisco Systems; pure-play companies like Talari, VeloCloud, CloudGenix and Glue Networks; and vendors like Riverbed Technology—which last month introduced its SteelConnect SD-WAN platform—and Silver Peak Networks, which have made the move from the WAN optimization space.

Talari officials have said their company has the advantage of having been in the space for almost a decade, even before the term SD-WAN came into use. That means the vendor already has a large mix of physical and virtual appliances and assorted software assets on the market, and expanding its support of Microsoft technologies is a continuation of the effort to grow its portfolio.

The Talari VT800 is available now from Talari and its channel partners. It supports performance levels of 20 Mb/s, 40 Mb/s, 100 Mb/s and 200 Mb/s, officials said.
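
For a rough sense of the growth rate implied by that IDC forecast, treating the rounded endpoints as $225 million in 2015 and $6 billion in 2020 (an assumption for the sake of the arithmetic), the compound annual growth rate works out to roughly 93 percent:

```python
# Back-of-the-envelope CAGR implied by the IDC SD-WAN forecast cited above.
# Endpoint values and years are approximations taken from the article's rounded figures.
start_value = 225e6     # ~ $225 million in 2015
end_value = 6e9         # ~ $6 billion in 2020
years = 5

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")   # roughly 93%
```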
Source: eWeek

10 Devices That Could Pay Off for Google's New Hardware Division

By Don Reisinger  |  Posted 2016-05-02

    With Rick Osterloh at the helm of Google’s new hardware division, the company has an opportunity to make better headway in a number of hardware markets.

    Design More High-Quality Android Wear Smartwatches

    Market analysts say Apple Watch is leading the smartwatch market. Google can change that by either developing its own high-quality smartwatch or continuing to work with prominent third parties building smartwatches for its Android Wear platform. Smartwatches are expected to gain traction in the next few years; having an attractive Android Wear alternative to Apple Watch would be a good move.

    Android-Based Tablets Have a Future

    If the Google Pixel C proves anything, it’s that Android-based tablets that can double as notebooks really have a future. To that end, Osterloh should focus his company’s efforts on building upon the Pixel C’s success and delivering a true Google hybrid tablet. Like the Pixel C, such devices could be popular both in corporate and educational settings.

    Build More High-Quality Nexus Smartphones

    As in the past, Google will likely have a prominent role in its Nexus device development but rely on others to manufacture the smartphones. But to truly compete with Apple, Huawei and Samsung, among others, Google and Osterloh will need to find the right partners that can deliver the features customers want. Let’s hope for big, curved displays; high-quality designs; and powerful processors.

    Enterprise-Friendly Chromebooks Are a Good Idea

    In recent weeks, companies such as HP have come along with enterprise-friendly Chromebooks. But that should only be the start. Google and Osterloh need to get more companies investing in enterprise-focused Chromebooks. Google’s Chromebooks have a bright future, and Osterloh needs to ensure it stays that way.

    Why Not Compete With Echo?

    It’s no secret that Google wants to have a stronger hand in the smart home market, so why not develop a comprehensive Amazon Echo competitor? After all, Amazon’s smart home appliance delivers all of the features Google can bring to bear, including calendaring, music playback and voice control. By putting its own spin on an Echo competitor, Google might be successful.

    Google’s OnHubs Seem to Be Catching On

    Google has partnered with a few companies for its OnHub routers, and most benchmarks suggest they perform quite well. The next step for Google, therefore, is to enhance its efforts in that area. Hopefully, Osterloh can find ways to improve OnHubs by delivering more range and faster speeds, among other features. Google has an opportunity to be among the best router providers in the market. Osterloh should make that happen.

    Compete With Apple TV and Roku With Chromecast

    Google has been expanding its Chromecast line for both video and audio. And at least so far, those cheap devices have been popular among customers. Google should expand its presence in the entertainment business by building more Chromecast devices and improving upon those it already offers. Again, Google wants to play a role in the home, and Chromecast is a fine way to help it achieve that goal.

    Follow Through on Project Ara Modular Smartphone

    Google hasn’t talked much about its Project Ara modular smartphone of late, but that could soon change. Project Ara was actually built at Motorola, but Google kept it after the company was sold to Lenovo. With Osterloh, who formerly worked at Motorola, now at the helm of Google’s hardware division, there’s a good chance Ara updates will be coming sooner rather than later.

    Where Is Google Glass?

    Google Glass has been in hiding for the past year, but Osterloh has reportedly taken that project under his wing. The latest reports suggest that Google Glass will be designed for enterprise use, and there’s a good chance more details will be shared this year. Let’s hope that’s true and that Osterloh’s team gets all of the bugs worked out before it hits the office.

    Don’t Forget Virtual Reality Headsets

    Google seems committed to bringing trendy hardware to the market, so Osterloh might want to consider a virtual-reality headset. HTC, Oculus, Samsung and others have already shown off headsets, and reports suggest Apple is even looking at eventually launching one of its own. Why shouldn’t Google follow those companies with an appealing virtual-reality headset?

After some trouble in bringing together the many facets of its hardware business, Google on April 28 hired former Motorola President Rick Osterloh to run a new hardware division. Osterloh has been hired to manage several product lines, including Google’s Nexus brand smartphones and the Chromecast digital media players, as well as to facilitate better relationships with Google’s hardware partners. Osterloh’s task will not be easy. Google has been gradually working its way into a wide array of hardware markets, ranging from smartphones to wearables. Osterloh will need to corral those efforts and deliver products that can compete effectively with popular alternatives. He’ll also need to ensure that the quality of products built by Google’s hardware partners—who develop and make everything from smartphones to routers—is up to par. Along the way, Osterloh will need to achieve his hardest goal—making hardware buyers happy. This slide show covers the various devices Osterloh’s team could develop to do just that.

Don Reisinger is a freelance technology columnist. He started writing about technology for Ziff-Davis’ Gearlog.com. Since then, he has written extremely popular columns for CNET.com, Computerworld, InformationWeek, and others. He has appeared numerous times on national television to share his expertise with viewers. You can follow his every move at http://twitter.com/donreisinger.

Source: eWeek

HTC Will Reportedly Build the Next 2 Nexus Android Smartphones


HTC, which built the Nexus One phone and the Nexus 9 tablet, will reportedly build two more upcoming smartphone models that will be powered by Android.

HTC is reportedly again being tapped to manufacture two Google Nexus smartphone models running on Android, after building two previous Nexus devices.

The latest rumors about HTC’s move come from well-known news tipster Evan Blass, who posted several tweets on Twitter about his latest observations. Blass’ Twitter name is @evleaks. “HTC is building a pair of Android N devices for Google internally dubbed M1 and S1 #nexus,” Blass posted in an April 27 tweet. He followed his original message up with another related post: “So now I’m hearing that there are going to be both a Maxx 3 as well as a Turbo 3. #dejavu.”

Those two upcoming devices are code-named Marlin (M1) and Sailfish (S1), according to a related rumor report by AndroidPolice. “Google has a long history of naming Nexus devices after aquatic life, the current 5X and 6P are Bullhead and Angler, respectively,” the story reported. “The Nexus 6 was Shamu. The Nexus 5 was Hammerhead, and so on.”

No other details are available about the next potential Nexus smartphones at this time.

In March, Google’s latest Project Fi smartphone, the Nexus 5X, debuted as the second handset being sold for use with Google’s Project Fi mobile phone services, which start at $20 a month. The Nexus 5X smartphone is made by LG for use with Google’s Project Fi inexpensive monthly wireless service plans, according to an earlier eWEEK story. The Nexus 5X is priced at $349 for a 16GB model or $399 for a 32GB model.

Project Fi is Google’s inexpensive mobile phone service that came out in April 2015 under what was then an invitation-only system. Project Fi phone services recently opened to all users who buy or provide a compatible Nexus smartphone that will work with the service. So far, the Nexus 6P by Huawei, the new Nexus 5X and the earlier Nexus 6 are the only three smartphones that will work with Project Fi’s network. Users pay $20 per month for cellular access, plus data fees of $10 per GB only for the data that is consumed each month. The monthly access fee also includes unlimited talk and texting, WiFi tethering and international coverage in more than 120 countries.

Both Nexus 5X by LG models feature a 5.2-inch full HD (1,920-by-1,080) LCD display, a 2GHz hexa-core 64-bit Qualcomm Snapdragon 808 processor, an Adreno 418 graphics processing unit and 2GB of LPDDR3 memory. The handsets also feature a Corning Gorilla Glass 3 cover glass and a fingerprint- and smudge-resistant oleophobic coating, as well as a 2,700mAh battery. Both models run on the Android 6.0 Marshmallow operating system and include a 12.3-megapixel rear-facing camera with an f/2.0, infrared, laser-assisted autofocus lens, 4K (30 fps) video capture and broad-spectrum dual flash. The front-facing camera on both devices is a 5-megapixel model with an f/2.2 aperture.

The other available Project Fi phone for sale is the Nexus 6P by Huawei. The 6P features a 5.7-inch WQHD (2,560-by-1,440) AMOLED display, a 2GHz octa-core 64-bit Qualcomm Snapdragon 810 processor, an Adreno 430 graphics processor, 3GB of LPDDR4 memory, a 12.3-megapixel rear camera, an 8MP front camera and a 3,450mAh battery. The 6P is 6.27 inches long, 3.06 inches wide and 0.28 inches thick and weighs 6.27 ounces. The phone—which sells for $499 for a 32GB version, $549 for the 64GB version and $649 for a 128GB version—is available in silver, black, white or gold.

In April, HTC unveiled its own flagship Android smartphone, the HTC 10 (pictured), to take on Samsung’s Galaxy S7 phones, Apple’s iPhones and others. The HTC 10 includes a myriad of improvements to its processor, cameras, battery and audio system, giving potential buyers of HTC’s latest smartphone lots to consider. The HTC 10 replaces the HTC One M9, incorporating a faster Qualcomm Snapdragon 820 processor, vastly improved front and rear cameras, and upgraded audio capabilities. The latest handset has a metal unibody design and runs on the Android 6.0 Marshmallow operating system. The HTC 10 features a 5.2-inch, Quad HD (2,560-by-1,440-pixel) touch-screen display that is covered with Corning Gorilla Glass, a Qualcomm Snapdragon 820 quad-core 64-bit processor, 4GB of memory, 32GB or 64GB of built-in storage, and a microSD slot that accepts storage cards up to 2TB.
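
Returning to the Project Fi pricing described above, here is a quick sketch of how a monthly bill would add up under those rates, assuming only the $20 base fee plus $10 per gigabyte actually consumed and ignoring taxes, device payments and any credits for unused data.

```python
# Rough illustration of a monthly Project Fi bill under the pricing described above.
# Assumes only the $20 base fee plus $10 per GB consumed; taxes, device payments
# and any credits for unused data are ignored for simplicity.
BASE_FEE = 20.00        # unlimited talk, text, WiFi tethering, international coverage
PRICE_PER_GB = 10.00

def monthly_bill(data_used_gb: float) -> float:
    return BASE_FEE + PRICE_PER_GB * data_used_gb

print(monthly_bill(2.5))   # 45.0 -> a month with 2.5GB of data costs $45
```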
Source: eWeek