NASCAR is digitizing race day
In a sport where winning often comes down to thousandths of a second, data matters. NASCAR is going high-tech with new race management software.

NASCAR GOES HIGH TECH

At the Toyota/Save Mart 350, something new: a race management system that gives officials a single screen from which to monitor the racetrack, see where cars are, review infractions, and share that information with teams.

It’s the result of 18 months of development that started here, in the inspection tent where NASCAR officials check cars to make sure everything is within the sport’s rules.

This all used to be done on slips of paper, so the sport worked with Microsoft to come up with a tablet-based app that collects, stores, and transmits the results of every test.

“Giving us data that we never had before, allowing us to see things from the inspection that we were never able to see before, because everything was paper based.”

NASCAR says its inspection times have been cut in half.

So attention turned to the race track, where several independent systems were already in use tracking different elements of the race. A new Windows 10 app brings them all together with powerful results.

“During the race itself I can come down, I can look at info about a particular car, their last lap speed, average lap time, all of the pit stops they made.”

Cars are shown in their actual locations on this screen, and officials can pair that data with live video of the race. The app also pulls in video from pit stops so officials can see whether drivers broke any rules and, if so, share that with the teams in real time.

“If there was an infraction on pit road, we’d explain it verbally, we’ll get back to you Monday or Tuesday, have a question ‘what really happened?’ we’d try to explain it. In this case, real-time video we can get to the race team, hopefully have that communication, here was that infraction, post-race meet with the team and here’s why the call was made.”

And it also allows officials to figure out exactly where cars were when race holds were called, perhaps in response to an accident. Sometimes cars end up in the wrong order, and the system helps sort that out so that when the green flag falls again, no car has lost a place.

It’s a powerful tool for a sport that runs in real time.

“NASCAR is the only sport where every second of every race, we don’t have time to call a time out or a TV timeout, let the officials sit over on the side, we’ve gotta make a quick call and this allows us to do that.”

Like most major sports, NASCAR is keen to use new technology to innovate and bring fans closer to the action. And that could happen in the future if this data is shared online, so fans can keep track during the race. There’s even the idea of employing machine learning to help officials predict things before they happen.

Source: InfoWorld Big Data

Comodo Drops Trademark Applications, Avoiding Legal Battle with Certificate Authority Let's Encrypt

Comodo has withdrawn applications for three trademarks involving the term “Let’s Encrypt” – a move that seems to be related to a plea by an open certificate authority of the same name urging Comodo to abandon its applications.

Let’s Encrypt is a free, automated, and open certificate authority run by the non-profit Internet Security Research Group (ISRG). Comodo’s Requests for Express Abandonment came within 24 hours of a blog post by the Let’s Encrypt project on Thursday last week, but it is unclear if the two are directly related.

The Let’s Encrypt project said in a blog post that it contacted Comodo regarding the trademark applications in March, and asked directly and through attorneys for Comodo to drop its applications, saying it is “the first and senior user” of the term.

Comodo filed trademark applications for the terms “Let’s Encrypt,” “Let’s Encrypt with Comodo,” and “Comodo Let’s Encrypt” for certificate authority or related services. The company acknowledges in its applications that these phrases had not been part of its branding before the applications were filed in October.

The United States Patent and Trademark Office (USPTO) responded to Comodo’s application in February, asking for clarification of “identification and classification of goods and services.”

“We’ve forged relationships with millions of websites and users under the name Let’s Encrypt, furthering our mission to make encryption free, easy, and accessible to everyone,” ISRG executive director Josh Aas said in the blog post. “We’ve also worked hard to build our unique identity within the community and to make that identity a reliable indicator of quality. We take it very seriously when we see the potential for our users to be confused, or worse, the potential for a third party to damage the trust our users have placed in us by intentionally creating such confusion. By attempting to register trademarks for our name, Comodo is actively attempting to do just that.”

The Let’s Encrypt project was announced in November 2014, and it issued over a million SSL/TLS certificates in its first three months after launching late last year.

The organization argued it is most commonly associated with the term and has been using it longer, and will “vigorously defend” its brand.

Comodo did not respond to an email seeking comment.

Source: TheWHIR

Here's How Much Energy All US Data Centers Consume

Brought to you by Data Center Knowledge

It’s no secret that data centers, the massive but bland, unremarkable-looking buildings housing the powerful engines that pump blood through the arteries of the global economy, consume a huge amount of energy. But while our reliance on this infrastructure and its ability to scale capacity grows at a maddening pace, it turns out that on the whole, the data center industry’s ability to improve energy efficiency as it scales is extraordinary.

The demand for data center capacity in the US grew tremendously over the last five years, while total data center energy consumption grew only slightly, according to results of a new study of data center energy use by the US government, released today. This is the first comprehensive analysis of data center energy use in the US in about a decade.

US data centers consumed about 70 billion kilowatt-hours of electricity in 2014, the most recent year examined, representing 2 percent of the country’s total energy consumption, according to the study. That’s equivalent to the amount consumed by about 6.4 million average American homes that year. Total data center energy consumption grew 4 percent from 2010 to 2014 – a huge change from the preceding five years, during which it grew by 24 percent, and an even bigger change from the first half of the last decade, when it grew nearly 90 percent.
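
As a quick sanity check, the household comparison implies an average US home used roughly 11,000 kWh of electricity that year, which is in line with commonly cited figures for residential use. A minimal sketch of the arithmetic (the per-home average is derived here from the article's numbers, not taken from the study itself):

```python
# Back-of-the-envelope check of the study's household comparison.
# Figures from the article: 70 billion kWh of data center electricity in 2014,
# described as equivalent to about 6.4 million average American homes.
DATA_CENTER_KWH_2014 = 70e9   # total US data center consumption, in kWh
EQUIVALENT_HOMES = 6.4e6      # number of homes cited as the equivalent

kwh_per_home = DATA_CENTER_KWH_2014 / EQUIVALENT_HOMES
print(f"Implied average household use: {kwh_per_home:,.0f} kWh per year")
# -> roughly 10,900 kWh per year, consistent with typical US household figures
```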

Efficiency improvements have played an enormous role in taming the growth rate of the data center industry’s energy consumption. Without these improvements, staying at the efficiency levels of 2010, data centers would have consumed close to 40 billion kWh more than they did in 2014 to do the same amount of work, according to the study, conducted by the US Department of Energy in collaboration with researchers from Stanford University, Northwestern University, and Carnegie Mellon University.

Energy efficiency improvements will have saved 620 billion kWh between 2010 and 2020, the study forecasts. The researchers expect total US data center energy consumption to grow by 4 percent between now and 2020 – the same growth rate as over the last five years – reaching about 73 billion kWh.

[Chart: LBNL/DOE data center energy use and efficiency impact]

This chart shows past and projected growth rate of total US data center energy use from 2000 until 2020. It also illustrates how much faster data center energy use would grow if the industry, hypothetically, did not make any further efficiency improvements after 2010. (Source: US Department of Energy, Lawrence Berkeley National Laboratory)

Counting Electrons

Somewhere around the turn of the century, data center energy consumption started attracting a lot of public attention. The internet was developing fast, and many started asking questions about the role it was playing in the overall picture of the country’s energy use.

Many, including public officials, started ringing alarm bells, worried that continuing to power the growth of the internet would soon become a big problem. These worries were stoked further by the coal lobby, which funded pseudo-scientific research by “experts” with questionable motives, who said the internet’s power consumption was out of control, and that if society wanted it to continue growing, it wouldn’t be wise to continue shutting down coal-burning power plants.

The DOE’s first attempt to quantify just how much energy data centers were consuming, whose results were published in a 2008 report to Congress, was a response to those rising concerns. It showed that yes, this infrastructure was consuming a lot of energy, and that its energy use was growing quickly, but the problem wasn’t nearly as big as those studies of murky origins had suggested.

“The last [DOE] study … was really the first time data center energy use for the entire country was quantified in some way,” Arman Shehabi, research scientist at the DOE’s Lawrence Berkeley National Laboratory and one of the new study’s lead authors, said in an interview with Data Center Knowledge.

What the authors of both the 2008 report and this year’s report did not anticipate was how much the growth curve of the industry’s total energy use would flatten between then and now. This was the biggest surprise for Shehabi and his colleagues when analyzing the most recent data.

“It’s slowed down, and right now the rate of increase is fairly steady,” he said. “There’s more activity occurring, but that activity is happening in more efficient data centers.”

See also: Cleaning Up Data Center Power is Dirty Work

Fewer Servers

There’s a whole list of factors that contributed to the flattening of the curve, but the most obvious one is that the number of servers being deployed in data centers is simply not growing as quickly as it used to. Servers have gotten a lot more powerful and efficient, and the industry has figured out ways to utilize more of each server’s total capacity, thanks primarily to server virtualization, which enables a single physical server to host many virtual ones.

Each year between 2000 and 2005, companies bought 15 percent more servers on average than the previous year, the study says, citing server shipment estimates by the market research firm IDC. The total number of servers deployed in data centers just about doubled in those five years.

The growth rate in annual server shipments dropped to 5 percent over the second half of the decade, due in part to the 2008 market crash but also to server virtualization, which emerged during that period. Annual shipment growth has dropped to 3 percent since 2010, and the researchers expect it to remain there until at least 2020.
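
Those shipment figures line up with the "just about doubled" observation: 15 percent annual growth compounds to roughly a doubling over five years, while 3 percent growth barely moves the needle. A quick illustration (this treats shipments as a simple compounding series and ignores server retirements, so it is only a rough check):

```python
# Compound growth of annual server shipments at the rates cited in the study.
def growth_factor(rate: float, years: int) -> float:
    """Total growth factor after compounding `rate` annually for `years` years."""
    return (1 + rate) ** years

print(f"15% per year over 5 years: x{growth_factor(0.15, 5):.2f}")  # ~2.01, roughly a doubling
print(f" 5% per year over 5 years: x{growth_factor(0.05, 5):.2f}")  # ~1.28
print(f" 3% per year over 5 years: x{growth_factor(0.03, 5):.2f}")  # ~1.16
```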

The Hyperscale Factor

The end of the last decade and beginning of the current one also saw the rise of hyperscale data centers, the enormous facilities designed for maximum efficiency from the ground up. These are built by cloud and internet giants, such as Google, Facebook, Microsoft, and Amazon, as well as data center providers, companies that specialize in designing and building data centers and leasing them to others.

According to the DOE study, most of the servers that have been responsible for that 3 percent annual increase in shipments have been going into hyperscale data centers. The cloud giants have created a science out of maximizing server utilization and data center efficiency, contributing in a big way to the slow-down of the industry’s overall energy use, while data center providers have made improvements in efficiency of their facilities infrastructure, the power and cooling equipment that supports their clients’ IT gear. Both of these groups of data center operators are well-incentivized to improve efficiency, since it has direct impact on their bottom lines.

The number of applications companies deploy in the cloud or in data center provider facilities has been growing as well. A recent survey by the Uptime Institute found that while enterprise-owned data centers host 71 percent of enterprise IT assets today, 20 percent is hosted by data center providers, and the remaining 9 percent is hosted in the cloud.

[Chart: LBNL/DOE 2016 data center energy use by space type]

This chart shows the portion of energy use attributed to data centers of various types over time. SP data centers are data centers operated by service providers, including both colocation and cloud service providers, while internal data centers are typical single-user enterprise data centers. (Source: US Department of Energy, Lawrence Berkeley National Laboratory)

Additionally, while companies are deploying servers at a slower rate, the amount of power each server needs has also not been growing as quickly as it used to. Server power requirements were increasing from 2000 to 2005 but have been relatively static since then, according to the DOE. Servers have gotten better at reducing power consumption when running idle or at low utilization, while the underlying data center power and cooling infrastructure has gotten more efficient. Storage devices and networking hardware have also seen significant efficiency improvements.

See also: After Break, Internet Giants Resume Data Center Spending

From IT Closet to Hyperscale Facilities

To put this new data in perspective, it’s important to understand the trajectory of the data center industry’s development. It was still a young field in 2007, when the first DOE study was published, Shehabi said. Not too long ago there was no need for data centers at all: instead of a data center there was a single server sitting next to somebody’s desk. A company would soon add another server, and another, until it needed a separate room or a closet. Eventually, that footprint grew to the point where servers needed dedicated facilities.

All this happened very quickly, and the main concern of the first data center operators was keeping up with demand, not keeping the energy bill low. “Now that [data centers] are so large, they’re being designed from a point of view of looking at the whole system to find a way to make them as efficient and as productive as possible, and that process has led to a lot of the efficiencies that we’re seeing in this new report,” Shehabi said.

Efficiency Won’t Be the Final Answer

While the industry as a whole has managed to flatten the growth curve of its energy use, it’s important to keep in mind that a huge portion of all existing software still runs in highly inefficient data centers, the small enterprise IT facilities built a decade ago or earlier that support applications for hospitals, banks, insurance companies, and so on. “The lowest-hanging fruit will be trying to address efficiency of the really small data centers,” Shehabi said. “Even though they haven’t been growing very much … it’s still millions of servers that are out there, and those are just very inefficient.” Going forward, it will be important to find ways to either make those smaller data centers more efficient or to replace them with footprint in efficient hyperscale facilities.

See also: The Problem of Inefficient Cooling in Smaller Data Centers

As with the first data center study by the DOE, the new results are encouraging for the industry, but they don’t indicate that it has effectively addressed energy problems it is likely to face in the future. There are only a “couple of knobs you can turn” to improve efficiency – you can design more efficient facilities and improve server utilization – and operators of the world’s largest data centers have been turning them both, but demand for data center services is increasing, and there are no signs that it will be slowing down any time soon. “We can only get to 100 percent efficiency,” Shehabi said.

Writing in the report on the study, he and his colleagues warn that as information and communication technologies continue to evolve rapidly, it is likely that deployment of new systems and services is happening “without much consideration of energy impacts.” Unlike 15 years ago, however, the industry now has a lot more knowledge about deploying these systems efficiently. Waiting to identify specific efficient deployment plans can lead to setbacks in the future.

“The potential for data center services, especially from a global perspective, is still in a fairly nascent stage, and future demand could continue to increase after our current strategies to improve energy efficiency have been maximized. Understanding if and when this transition may occur and the ways in which data centers can minimize their costs and environmental impacts under such a scenario is an important direction for future research.”

Source: TheWHIR

5 Cybersecurity Stories You Need to Know Now, June 27

It’s Monday, and it’s almost July if you can believe it. But even though you may feel like you’re in summer vacation mode with the Fourth of July just around the corner, hackers really don’t seem to take a holiday. Here are the 5 cybersecurity stories you need to know as you start your week.

1. China Is Another Step Closer to Controversial Cybersecurity Law

Here’s something to keep you up at night: China is going through the steps to bring a controversial draft cybersecurity law into practice, which would require network operators to “comply with social morals and accept the supervision of the government and public,” according to a report by Fortune. The law would also require data belonging to Chinese citizens to be stored domestically. It is not clear when it will be passed, as parliament has just held a second reading of the draft bill, but it will be something to watch closely if you do business in China.

SEE ALSO: U.S. Closely Eyeing China’s Corporate Hacking Vow, Official Says

2. Intel Considers Sale of Cybersecurity Division: Report

Intel is looking at selling its Intel Security division which it formed after acquiring McAfee back in 2010. The deal could fetch the company – which is shifting away from PCs to data centers and IoT – as much as the $7.7 billion it paid six years ago.

3. Everyone’s Waiting for the Next Cybersecurity IPO

Cybersecurity is hot, but there have still been only two US tech IPOs this year. An uncertain market is keeping would-be IPOs from moving forward, according to a report by Fortune, putting a damper on “an otherwise vibrant cybersecurity sector.”

READ MORE: Dell’s Cybersecurity Unit SecureWorks Files for IPO

4. Security Sense: The Ethics and Legalities of Selling Stolen Data

The WHIR’s sister site Windows IT Pro has a really interesting take on the mass data breaches we’ve been seeing lately (think LinkedIn, MySpace) and the ethics around those selling the data, challenging common defenses used by those who profit off stolen credentials. It’s definitely worth a read.

5. How Healthcare Cybersecurity is Affected by Cyber Sharing Law

The Cybersecurity Act was signed into law in December 2015 and several industry stakeholders gathered earlier this month to discuss its impact on healthcare cybersecurity, according to a report by HealthITSecurity. If you’ve got clients in the healthcare sector, you will want to take a look for sure.

Source: TheWHIR

Two-Thirds of Companies See Insider Data Theft, Accenture Says

By Matthew Kalman

(Bloomberg) — As businesses spend billions of dollars a year trying to protect their data from hacking that’s costing trillions, they face another threat closer to home: data theft by their own employees.

That’s one of the findings in a survey to be published by management consultant Accenture Plc and HfS Research on Monday.

Of 208 organizations surveyed, 69 percent “experienced an attempted or realized data theft or corruption by corporate insiders” over the past 12 months, the survey found, compared to 57 percent that experienced similar risks from external sources. Media and technology firms, and enterprises in the Asia-Pacific region reported the highest rates — 77 percent and 80 percent, respectively.

READ MORE: Basic Security Training for Employees Not Enough to Stop Data Breaches: Report

“Everyone’s always known that part of designing security starts with thinking that your employees could be a risk but I don’t think anyone could have said it was quite that high,” Omar Abbosh, Accenture chief strategy officer, said in an interview in Tel Aviv, where he announced Accenture’s purchase of Maglan Information Defense & Intelligence Group, an Israeli security company.

Businesses currently spend an estimated $84 billion a year to defend against data theft that costs them about $2 trillion, damage that could rise to $90 trillion a year by 2030 if current trends continue, Abbosh forecast. He recommended that corporations change their approach to cybersecurity by cooperating with competitors to develop joint strategies to outwit increasingly sophisticated cyber-criminals.

SEE ALSO: Shadow IT: Embrace Reality – Detect and Secure the Cloud Tools Your Employees Use

“There’s a huge business rationale to share and collaborate,” he said. “If one bank is fundamentally breached in a way that collapses its trust with its customer base, I could be happy and say they’re all going to come to me, but that’s a false comfort” because “it pollutes the whole sphere of customers because it makes everyone fearful,” he said.

Despite recent high-profile data breaches of Sony Corp., Target Corp. and the U.S. Office of Personnel Management, many corporations do not yet consider cybersecurity a top business priority, Accenture found. Seventy percent of the survey’s respondents said they lacked adequate funding for technology, training or personnel needed to maintain their company’s cybersecurity, while 36 percent said their management considers cybersecurity “an unnecessary cost.”

Source: TheWHIR

Creating Cloud DR? Know What's in Your SLA

So many organizations are turning to the cloud for specific services, applications, and new kinds of business economics. We’re seeing more organizations deploying into the cloud and a lot more maturity around specific kinds of cloud services.

Consider this: according to Cisco, global cloud traffic crossed the zettabyte threshold in 2014, and by 2019 cloud traffic will represent 83 percent of total data center traffic – more than four-fifths. Significant promoters of cloud traffic growth include the rapid adoption of and migration to cloud architectures and the ability of cloud data centers to handle significantly higher traffic loads. Cloud data centers support increased virtualization, standardization, and automation – factors that lead to better performance as well as higher capacity and throughput.

One really great use case is using cloud for disaster recovery (DR), backup, and resiliency purposes. With this topic in mind, one of the most important things to develop when deploying a DR environment with a third-party host is the SLA. This is where an organization can define very specific terms around hardware replacement, managed services, response time, and more. Remote, cloud-based data centers, just like localized ones, need to be monitored and managed. When working with a third-party provider, host, or colo, make sure specific boundaries are set and clearly understood as far as who is managing what.

  • Leverage provider flexibility. Hosting providers have the capability to be very flexible. They can set up a contract stating that they will only manage the hardware components of a rented rack; everything from the hypervisor and beyond, in that case, becomes the responsibility of the customer. Even in these cases, it’s important to know if an outage has occurred or if there are failed components. Basically, the goal is to maintain constant communication with the remote environment. Administrators must know what is happening on the underlying hardware even if they are not directly responsible for it. Any impact on physical DR resources can have major repercussions on any workload running on top of that hardware.

Similarly, there are new cloud services which can take over the entire DR and business continuity (DRBC) function and even have failover sites ready as needed. Remember, critical workloads and higher uptime requirements will need special SLA provisions and cost considerations.
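
To make that "constant communication" point concrete, here is a minimal sketch of the kind of health polling an administrator might run against a hosted DR environment. The status URL and check interval are hypothetical placeholders rather than any particular provider's API:

```python
# Minimal availability poller for a hosted DR environment (illustrative sketch only).
import time
import urllib.error
import urllib.request

STATUS_URL = "https://dr.example-provider.com/health"  # hypothetical status endpoint
CHECK_INTERVAL_SECONDS = 60

def remote_environment_healthy(url: str) -> bool:
    """Return True if the remote environment answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

if __name__ == "__main__":
    while True:
        status = "OK" if remote_environment_healthy(STATUS_URL) else "ALERT: DR environment unreachable"
        print(f"{time.ctime()} {status}")
        time.sleep(CHECK_INTERVAL_SECONDS)
```

In practice a job like this would feed an alerting system rather than print to a console, but the point stands: even when the provider owns the hardware, the customer should be watching it.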

  • Define business recovery requirements. When developing an SLA for a cloud or hosting data center, it’s important to clearly define the recovery time objective – that is, how long will components be down? Some organizations require that they maintain 99.9 percent uptime for many of their critical components. In these situations, it’s very important to ensure proper redundancies are in place to allow for failed components. This can all be built into an SLA and monitored on the backend with good tools which have visibility into the DR environment. Let me give you a specific example. If you’re leveraging Microsoft’s Cool vs. Hot storage tiers, there are some uptime considerations. Microsoft has highlighted that you will be able to choose between Hot and Cool access tiers to store object data based on its access pattern. However, the Cool tier offers 99 percent availability, while the Hot tier offers 99.9 percent.

So, you absolutely need to design around your own DR and continuity requirements. If an organization has a recovery objective of 0-4 hours, it’s acceptable to have some downtime, but not long. With this type of DR setup, an SLA will still be set up with clear responsibilities segregated between the provider and the customer. Having an open level of communication and clear environmental visibility will save a lot of time and effort should an emergency situation occur.
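
Those availability percentages translate directly into allowed downtime, which is worth working out before you sign. A quick sketch comparing the 99 percent and 99.9 percent figures mentioned above against a 0-4 hour recovery objective (the monthly-hours figure uses a simple 30-day approximation):

```python
# Convert an availability SLA percentage into allowed downtime per month.
HOURS_PER_MONTH = 30 * 24  # simple 30-day approximation

def allowed_downtime_hours(availability_pct: float) -> float:
    """Hours per month a service can be down while still meeting the stated SLA."""
    return HOURS_PER_MONTH * (1 - availability_pct / 100)

for tier, pct in [("Cool tier (99%)", 99.0), ("Hot tier (99.9%)", 99.9)]:
    print(f"{tier}: up to {allowed_downtime_hours(pct):.1f} hours of downtime per month")
# Cool tier (99%):  up to 7.2 hours per month, so a single outage could blow past a 4-hour objective
# Hot tier (99.9%): up to about 0.7 hours per month
```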

  • Plan, train, and prepare for the future. In a DR moment, everyone needs to know what they are supposed to do in order to bring their environment back up quickly. This must be clearly defined in your runbook, especially if you’re leveraging DR and business continuity services from a host or cloud provider. Most of all, when creating SLAs, make sure you plan for bursts, and what your environment will require in the near future. Restructuring SLAs and hosting contracts can be pricey – especially for critical DR systems. This means planning will be absolutely critical.

Cloud computing and the various services it provides will continue to impact organizations of all sizes. Organizations are reducing their data center footprints while still leveraging powerful services which positively impact users and the business. Using cloud for DR and business continuity is a great idea when it’s designed properly. Today, cloud services are no longer just for major organizations; mid-market companies and SMBs are absolutely leveraging the power of the resilient cloud. Moving forward, cloud will continue to impact organizations as they transition into a more digital world. And having a good partnership (and SLA) with your cloud provider helps support a growing business and an evolving user base.

Source: TheWHIR

Gain Deep Customer Knowledge with HostingCon Management Sessions

There are a dozen educational sessions in the management track at HostingCon Global 2016 in New Orleans. Expert speakers will bring thought-provoking insights and analysis to the key elements and challenges of getting your company’s message and value out to the people who need to know.

Session topics will include the possibilities of interconnection fabrics, how service providers can best raise money, the nitty-gritty of acquisitions, best practices for product launches, and a new Internet infrastructure model for supporting IoT and cloud. Other sessions will cover the tricky relationship between technology and business, proven growth strategies cloud companies can adopt, the value of peering standards, and scaling your in-house support team.

There are also management speed roundtables with three industry leaders on Monday afternoon, in which participants can both workshop with peers and “ask the experts” as they cycle through the most pressing topics in the track.

Liquid Web executive vice president Jeff Uphues will explain the importance of a deep understanding of customers, and how to gain it. With specific initiatives for MSPs, VARs, ISVs, and hosts, this Tuesday afternoon session will enable attendees to jump start their cloud services and hosting strategies.

There is still one more session announcement to come in the management track, and the last details are being finalized for this year’s HostingCon Global. Time is running out to register, with only six weeks until the conference!

Source: TheWHIR

Next-Generation Cloud Now Available At Online Tech’s Midwest Data Centers

Online Tech has announced that its Indiana and Michigan cloud infrastructure is now running an all-flash, encrypted array based on hardware solutions by Pure Storage. This investment will provide businesses in the Midwest region with an ultra-fast, secure, and scalable enterprise-class local cloud that organizations can trust with their mission-critical applications and use to manage their growing Big Data needs.

The new flash technology, previously only available to large enterprises, gives Online Tech’s cloud hosting clients 80 Gb/s of bandwidth and over 400,000 IOPS of performance. With encryption built into the hardware, this storage solution provides the data security that clients demand with no impact on performance and without the hassle of key management.

“We’re thrilled at the new capabilities and security that encrypted flash technology has brought to our cloud infrastructure,” said Jason Yaeger, Online Tech senior director of Solutions Architecture. “This isn’t your average cloud using single-component hosts that you see at the large public providers. Our architecture protects our clients against hardware component failure, which saves them from having to purchase additional servers and load balance them. You automatically get hardware component failure protection.”

Online Tech’s cloud architecture is built on the no-single-point-of-failure infrastructure of its enterprise-class data centers. With redundant controllers, switches, routers, generators, UPS, and HVAC systems, all critical equipment is at least N+1, allowing for a reliability of service that few companies can afford to provide.

Source: CloudStrategyMag

Silver Spring Networks Opens New Silicon Valley Headquarters

On June 23, 2016, Mike Bell, president and CEO of Silver Spring Networks, Inc., was joined by Sam Liccardo, Mayor of San Jose, CA, executives from Pacific Gas & Electric (PG&E), and other local public officials to officially open Silver Spring’s new headquarters at 230 West Tasman Drive in San Jose. With an Internet of Things (IoT) demonstration facility, Silver Spring’s new headquarters is the innovation and operations focal point for its global customer base.

During the ribbon-cutting ceremony, Silver Spring highlighted Starfish™ — Silver Spring’s public cloud IoT network service — and its deployment across San Jose. Silver Spring and the City of San Jose highlighted how the Internet of Things platform will help contribute to San Jose’s Smart City Vision of becoming America’s most innovative city by 2020. Starfish is built on open standards-based technology that has a proven track record of delivering over 23.6 million devices across five continents. A reliable and secure network platform, Starfish helps developers, entrepreneurs, enterprises and other third parties support and accelerate the development of new IoT devices and services.

The ceremony also provided PG&E the opportunity to showcase its Grid of Things™ vision, which helps PG&E better serve its customers by offering enhanced personalization while improving the safety and reliability of the energy grid. PG&E has worked with Silver Spring to deploy its IPv6-based platform and solutions for multiple smart grid projects, including its SmartMeter™ program serving more than five million PG&E electric customers across Northern and Central California.1

“We are thrilled to officially open our new worldwide headquarters in the heart of Silicon Valley, and are grateful to Mayor Liccardo and the city of San Jose for their support,” said Bell. “As Silver Spring’s coverage area expands, we are able to unlock benefits for our customers and the communities they serve.”

“In addition, we are excited to be actively deploying Starfish in San Jose, where we are engaging with some of the hottest entrepreneurs and developers in the Valley who are looking to leverage a trusted IoT network to build the industry’s next big innovations,” continued Bell.

“We are excited to welcome Silver Spring Networks to San Jose and to be partnering with this innovative company to deploy an Internet of Things network here in our city,” said Mayor Liccardo. “This partnership is a great example of how we can embrace game-changing technology and data-driven decision-making in order to help create a safer, more sustainable and productive community and enhance the quality of life for our residents. I’d like to thank Silver Spring Networks for their many, significant investments in San Jose and for being at the forefront of helping cities address some of our biggest 21st century challenges.”

‘Developer Day’ Enables Hands-On Experience With Silver Spring’s Proven IPv6 IoT Network

Silver Spring recently hosted an inaugural ‘Developer Day’ at its new headquarters where developers and academic researchers learned how to leverage Starfish to create smart city, smart energy, resource conservation, and other IoT applications for public and commercial use. To register your interest for future Developer Days, visit the website.

Starfish offers a platform built on the standards-based IEEE 802.15.4g wireless interoperability standard (Wi-SUN), with speeds up to 2.4 Mbps, 10-millisecond latency, point-to-point range of up to 50 miles, and multiple network transports, along with industrial-grade security, reliability, and scalability.

As a part of Starfish, Silver Spring plans to offer a free service plan – Haiku™ – which includes 5,000 messages of 16 bytes each per month, ideal for entrepreneurs and start-ups with smaller data needs who want access to a proven IoT network service to develop new IoT applications for the industry. In addition to San Jose, Silver Spring is ramping up Starfish deployments in Bristol, Chicago, Copenhagen, Glasgow, Kolkata, London, and San Antonio.
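
For a sense of scale, the free Haiku allowance works out to a very small monthly data budget, which fits the short sensor-style payloads it is aimed at. A quick calculation from the figures above:

```python
# Monthly data budget implied by the free Haiku plan: 5,000 messages of 16 bytes each.
MESSAGES_PER_MONTH = 5000
BYTES_PER_MESSAGE = 16

total_bytes = MESSAGES_PER_MONTH * BYTES_PER_MESSAGE
print(f"{total_bytes:,} bytes per month (about {total_bytes / 1024:.0f} KiB)")
# -> 80,000 bytes, roughly 78 KiB per month: enough for periodic sensor readings,
#    not bulk data transfer.
```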

1 SmartMeter and Grid of Things are registered trademarks of Pacific Gas & Electric.

Source: CloudStrategyMag

Working in the Windy City: WHIR Networking Event Chicago

The WHIR was in Chicago last night for an evening of networking, thanks to support from our generous sponsors: Lenovo, IBM SoftLayer, Radware, and Cayan.

We had a great crowd at Howells & Hoods in downtown Chicago, and thankfully the weather cleared up just in time.

Thanks to our sponsors our guests were able to enjoy complimentary drinks and appetizers along with their networking. A few lucky attendees also walked away with prizes courtesy of our sponsors:

  • SoftLayer, an IBM Company gave away a Roku SE to Bebe Bandurski of Red IVY Studios
  • Lenovo gave away a Yoga tablet to Akeem Hunter of IBM
  • Radware gave away a $100 AMEX giftcard to Hal Bouma of Netwisp
  • Cayan gave away a Bluetooth Speaker with Carrying Case to Kevin Lynch of AEP Ohio

Our next stop is the WHIR’s hometown of Toronto during the 2016 Microsoft Partner Conference on July 12, 2016! If you’re going to be in town for the conference be sure to stop by to visit us and network with the hosting and cloud industry. Register today as spots are already filling up!

Source: TheWHIR