Microsoft is ending its unlimited OneDrive cloud storage plan and will cap storage for Office 365 consumers at 1TB. Its earlier unlimited storage offer had been misused by a small number of users on its Office 365 Home, Personal or University plans, who backed up "numerous PCs and stored entire movie collections and DVR recordings," at times amounting to 75TB per user, or 14,000 times the average, the OneDrive team said in a blog post late Monday.

"Instead of focusing on extreme backup scenarios, we want to remain focused on delivering high-value productivity and collaboration experiences that benefit the majority of OneDrive users," according to the blog post.


Ten years after Katrina devastated New Orleans, IT pros say being less dependent on physical locations is just one of the keys to ensuring your company doesn’t go out of business when disaster strikes.

It’s hard to be truly prepared to take the full impact of a Category 5 hurricane. Ten years ago, in the case of Hurricane Katrina and the city of New Orleans, there was the added devastation of flooding caused by failed levees. It will hopefully be a very long time before another disaster of that magnitude strikes New Orleans, or any other city for that matter, but organizations still need to be prepared for such an event. As it turns out, the cloud is an ideal tool for managing the risks associated with a hurricane or other natural disaster.

Even prior to Hurricane Katrina, it was a security mantra and data protection best practice to keep at least one backup of crucial data offsite. The logic is simple: you don’t want your primary data storage and all of your backups destroyed by the same hurricane, fire, flood or earthquake.

Offsite backups solve only part of the problem, though, if your servers and data are maintained locally. When disaster strikes and wipes out your primary data, you’ll have to acquire the backup data, deploy and configure new hardware at some secondary location, and restore the data. You’re still looking at days of downtime in a best-case scenario.

Embracing the cloud to reduce risk

The city of New Orleans and businesses like Entergy and DirectNIC that struggled to survive the devastation of Hurricane Katrina learned some valuable lessons. One of the primary ones when it comes to business continuity is to mitigate risk by embracing the cloud.

Lamar Gardere, director of information technology and innovation for the City of New Orleans, admits that things were still in disarray when the current administration took office in 2010. The IT infrastructure was aging and many of the city’s critical applications were still being run on physical servers.

One of Gardere’s first tasks was to modernize onto a highly virtualized infrastructure to allow for servers to be quickly created, resized and moved from one site to another in the case of a major disaster. “We created a private cloud with the ability to leverage all the same capabilities as you might imagine are available if you were using Amazon’s cloud, for example. This flexibility is at the heart of our disaster recovery capabilities and allows us to quickly transfer/failover services to remote locations,” Gardere says. “During normal times, it also allows us to maximize our infrastructure investment, consolidate IT resources across areas of government, better manage resources remotely and respond more quickly to our customers.”

DirectNIC is one of a few businesses that managed to stay up and running during Katrina – partly a result of being prepared and partly a function of being safely on the 11th floor, well above any flood damage. Even DirectNIC learned a thing or two from Hurricane Katrina, though. Vernon Decossas, CEO of DirectNIC, explains, “We host our own operations; however, we also have the ability to move our operations onto cloud providers within the span of hours. It’s provided a peace of mind that we can keep our operations going regardless of external issues.”

Gardere also elaborates on the decision to implement a private cloud rather than simply provisioning services from one of the public cloud providers. He notes there are pros and cons to public cloud for any organization and that the city weighed those on a per-application basis to determine the best solution. “The City uses the cloud strategically and where appropriate to take advantage of its convenience while avoiding some of its problems. Perhaps most notably, the City has moved its payroll system to the cloud using ADP, ensuring that this critical but low bandwidth application is available regardless of the state of the City’s IT environment.”

Moving beyond the cloud

Leveraging the cloud and moving critical servers and data to a cloud-based infrastructure will help organizations in New Orleans mitigate risk and maintain business continuity the next time a major natural disaster occurs, but it’s not enough by itself. Beyond the cloud, organizations also must have a clearly defined business continuity and disaster recovery plan in place and have staff that are properly trained to execute it when the time comes.

“Entergy holds yearly storm drills to prepare all of our employees for what may come. We use that time to talk about ‘what ifs’ and come up with solutions to questions posed during the drills,” says Kay Jones, a spokesperson for Entergy. “We use this time to get better at responding and be prepared for any situation that can arise when a storm hits our service territory.”

Gardere stresses the importance of performing regular maintenance on backup equipment that rarely sees use and talks about how the City of New Orleans continues to strive toward more complete testing and monitoring procedures. “We perform semi-annual tests of basic backup functions and hold an annual table-top exercise simulating a hurricane to test strategy execution. We refresh documentation and review roles and responsibilities on an annual basis.”

Live to fight another day

For some companies even the best business continuity and disaster recovery plan won’t help. A local restaurant or the corner gas station can’t just continue operating from the cloud or move to an alternate location. No amount of practicing or preparing will enable such a business to remain operational while it’s literally under water.

Those businesses can still benefit from using cloud-based applications and data storage to ensure that data survives the catastrophe, though, and thankfully most businesses are not that dependent on a specific physical location. By moving critical systems and data to the cloud, and by practicing business continuity and disaster recovery procedures until they run smoothly, organizations can mitigate the risk of the next Katrina-like event and be prepared to continue operations.


Consumer-class cloud services force IT to get aggressive with endpoint control or accept that sensitive data will be in the wind -- or take a new approach, such as reconsidering virtual desktops. Is VDI poised to bust out of niche status? For years, virtual desktops have been largely limited to spot deployments.

End-users don't like VDI for a variety of (quite legitimate) reasons, not least connectivity and customization limitations. For IT, it burns through CPU cycles, storage, and bandwidth. It takes effort to set up a logical set of images and roles and stick to them. OS and software licensing can be a nightmare. And so on. But now, cloud- and mobility-driven security concerns plus some key technology and cost-avoidance advances mean it's time to take a fresh look.

Public cloud services such as Dropbox, Google Drive, and Hightail pose a thorny problem: How can IT effectively control regulated and sensitive data when each device with an Internet connection is a possible point of compromise? Improvements in policy-driven firewalls and UTM appliances help, but BYOD initiatives make enforcing controls nearly impossible.

Meanwhile, advances in solid state storage and plummeting thin client prices equal lower deployment costs, especially in greenfield scenarios. Couple VDI with advances in network virtualization and virtual machine administration, particularly on VMware-based VDI deployments, and IT can achieve fine-grained control of network connections and desktop configurations.

Finally, new Linux-based VDI approaches and open-source hypervisors offer an ultra-low-cost option for organizations with the right skill sets and application needs.

Virtualized desktops are also an increasingly attractive alternative to terminal-based application delivery methods, including Microsoft RDS and Citrix XenApp. Decision points on whether to switch include the fact that VDI offers complete desktops with significantly better resource encapsulation and session isolation. While Windows and Linux session-based application-serving technologies can sandbox resources to an extent, their resource isolation is incomplete compared with what today's hypervisors provide. That's important because application servers are vulnerable to performance degradation in the face of high resource demand, whether by users or underlying OS configuration or maintenance issues. In comparison, virtual desktop infrastructures are much less vulnerable to resource strangleholds and configuration flaws. Yes, they require more effort and expertise to maintain and cost more up front -- though not as much as you might expect.

Prices plummet
Many a VDI feasibility study has been derailed by costs associated with the storage architecture required to provision and sustain a pool of virtual desktops. Storage bottlenecks have been the historical bane of VDI, with poorly specified, undersized, I/O-limited infrastructures largely responsible for poor performance and long wait times to redeploy desktop pools with configuration changes (cue the end-user hatefest).

Major advances in solid state storage go a long way toward mitigating both the cost and performance impact of storage on VDI deployments. Enterprise virtualized storage systems and software-defined storage architectures, such as DataCore SANsymphony and VMware Virtual SAN, incorporate SSD and spinning disk storage into high-performance tiered architectures that intelligently place often-accessed data on SSD and provide cache services to spinning disks. Ben Goodman, VMware's lead technical evangelist for end-user experience, goes as far as to assert that Virtual SAN can save 25% to 30% over a typical virtual desktop deployment via reduced storage costs, a number we consider feasible with the right setup.
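To make the idea concrete, here is a minimal Python sketch of hot-data promotion, the general principle behind such tiered systems. It is not DataCore's or VMware's actual placement logic, and the capacity and threshold values are assumptions chosen for illustration.

from collections import Counter

class TieredStore:
    """Promote frequently read blocks to a small SSD tier (illustrative policy only)."""

    def __init__(self, ssd_capacity_blocks, promote_after=3):
        self.ssd = set()                # blocks currently on the fast tier
        self.access_counts = Counter()  # how often each block has been read
        self.ssd_capacity = ssd_capacity_blocks
        self.promote_after = promote_after

    def read(self, block):
        self.access_counts[block] += 1
        if block in self.ssd:
            return f"{block}: served from SSD"
        # Promote blocks that have turned hot; evict the coldest SSD block if full.
        if self.access_counts[block] >= self.promote_after:
            if len(self.ssd) >= self.ssd_capacity:
                coldest = min(self.ssd, key=lambda b: self.access_counts[b])
                self.ssd.remove(coldest)
            self.ssd.add(block)
        return f"{block}: served from spinning disk"

store = TieredStore(ssd_capacity_blocks=2)
for _ in range(3):
    store.read("vdi-gold-image")        # repeated reads cross the promotion threshold
print(store.read("vdi-gold-image"))     # vdi-gold-image: served from SSD

Real products add write caching, background demotion and much smarter heuristics, but the cost savings described above come from exactly this kind of placement decision.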

Regardless of your SAN or storage architecture, today's mixed solid-state/spinning-disk volumes are about half as expensive as the same I/O characteristics in pure spinning-disk configurations, and IT is taking notice. More than half of the respondents to our 2014 State of Enterprise Storage Survey say automatic tiering of storage is in pilot (22%), limited (18%) or widespread (14%) use in their organizations. Further, that same survey showed healthy growth in the use of SSDs in disk arrays, from 32% in 2013 to 40% in 2014, as falling prices bring SSDs within reach of most shops.


Firstly, what does the term “cloud computing” mean? The word “cloud” refers to the cloud symbol historically used in diagrams to represent the telephone network and, more recently, the Internet. So it’s basically a metaphor for the Internet.

Overall, cloud computing is a rather ambiguous term that takes in a number of technologies that all deliver computing as a service instead of a product.

Investopedia’s definition refers to ‘a cloud computing structure allowing access to information, provided an electronic device has access to the web’.

If you have used a Web e-mail system such as Hotmail, Yahoo! Mail or Gmail, then you have used a form of cloud computing. The user logs into their web e-mail account remotely via a browser rather than running an e-mail program locally on their computer. The mail software and storage reside on the service provider’s computer cloud and do not exist on the user’s machine. This is a good example of a cloud computing application: shared resources such as software and information are delivered as a utility via a network, usually the Internet, analogous to electricity delivered as a metered service over the grid. Users’ machines can be computers or other devices such as smartphones, dumb terminals and so on.

This brings up two major terms required to fully appreciate the world of “the cloud”. The first is utility computing, which describes the mechanics of delivery: resources, typically computation and storage, are provided as a metered service, again analogous to other utilities such as electricity. The second is autonomic computing, which describes systems that are self-managing; the term self-healing has also been applied, but great care needs to be taken with that term!
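To make the utility-computing idea concrete, here is a minimal Python sketch of metered billing. The resource names and rates are invented for illustration and are not taken from any real provider's price list.

# Hypothetical rates for illustration only.
RATES = {
    "vcpu_hours": 0.04,          # assumed price per vCPU-hour
    "ram_gb_hours": 0.005,       # assumed price per GB of RAM per hour
    "storage_gb_months": 0.02,   # assumed price per GB stored per month
}

def metered_bill(usage):
    """Charge only for what was actually consumed, like an electricity meter."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# A VM with 2 vCPUs and 4 GB of RAM that ran for 300 hours, plus 50 GB of storage.
usage = {"vcpu_hours": 2 * 300, "ram_gb_hours": 4 * 300, "storage_gb_months": 50}
print(f"Bill for the month: ${metered_bill(usage):.2f}")   # Bill for the month: $31.00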

Cloud-based applications are accessed through a web browser or a lightweight desktop or mobile app, with the data and business software held on servers at a remote location, usually a data center. Cloud application providers (ASPs, or application service providers) aim to deliver the same or better service and performance than applications running locally on end-user computers.

Cloud computing applications continue to be adopted quickly. Uptake is driven by the availability of high-bandwidth networks (importantly, reaching down to the end user or client via broadband or other fast internet connections), low-cost high-availability disk storage, inexpensive computers, and the large-scale adoption of both virtualization and IT architectures designed for the service model.

There are three basic models of cloud computing: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).

IaaS is the simplest, and each higher model builds on the features of the lower models.

Cloud Computing Provider Models

Infrastructure as a Service (IaaS)

This is the most basic cloud service model: providers offer computers, as either physical or virtual machines, along with access to networks, storage, firewalls and load balancers. Under IaaS these are delivered on demand from large pools installed in data centers, with local area networks and IP addresses part of the offering. For wide-area connectivity, the Internet or dedicated virtual private networks are used.

Cloud users, here defined as those who deploy applications rather than necessarily use them, install operating system images on the machines along with their application software. Under IaaS, the cloud user is responsible for patching and maintaining both the OS and the application software. IaaS services are usually billed on a utility basis, so cost reflects the resources allocated and consumed.
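As a rough sketch of what on-demand provisioning looks like from the cloud user's side, the Python below requests a virtual machine from a hypothetical provider API. The endpoint, token and field names are placeholders, since every real provider (AWS, OpenStack and so on) has its own SDK and request format.

# Hypothetical endpoint and credentials; real providers each have their own APIs/SDKs.
import requests

API = "https://iaas.example.com/v1"            # placeholder provider endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

def provision_vm(image, vcpus, ram_gb, disk_gb):
    """Request a virtual machine on demand from the provider's pooled capacity."""
    spec = {"image": image, "vcpus": vcpus, "ram_gb": ram_gb, "disk_gb": disk_gb}
    resp = requests.post(f"{API}/servers", json=spec, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()  # assume the provider returns an ID, IP address and state

# After the VM boots, the cloud user (not the provider) installs and patches the
# OS image and application software, and is billed for what was allocated.
server = provision_vm(image="ubuntu-22.04", vcpus=2, ram_gb=4, disk_gb=50)
print(server)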

Platform as a Service (PaaS)

Under the Platform as a Service (PaaS) model, the cloud provider typically delivers a computing platform consisting of an operating system, database and web server. This enables application developers to run their software on the cloud platform without needing to buy or manage the underlying hardware and software, eliminating cost and complexity. In some PaaS offerings, computing and storage resources automatically scale to match application demand, reducing complexity further because the cloud user does not have to allocate resources manually.
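A minimal sketch of the kind of scaling decision a PaaS makes on the user's behalf might look like the following. The thresholds and the CPU metric are assumptions for illustration, not any platform's documented policy.

def desired_instances(current, avg_cpu_percent,
                      scale_up_at=70.0, scale_down_at=30.0,
                      floor=1, ceiling=20):
    """Grow the pool under sustained load, shrink it when idle (assumed thresholds)."""
    if avg_cpu_percent > scale_up_at:
        return min(current + 1, ceiling)
    if avg_cpu_percent < scale_down_at:
        return max(current - 1, floor)
    return current

# The platform would run this periodically against live metrics; the cloud user
# never allocates the extra instances by hand.
print(desired_instances(current=3, avg_cpu_percent=85.0))  # 4
print(desired_instances(current=3, avg_cpu_percent=20.0))  # 2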

Software as a Service (SaaS)

Under the Software as a Service (SaaS) model, cloud providers install and operate application software in their cloud, and users access the software using cloud clients. A significant benefit is that maintenance and support are greatly simplified: users do not manage the cloud infrastructure or the platform running the application, and do not need to install or run the application on their own computers. Another major advantage of such a cloud application lies in its elasticity, achieved by cloning tasks onto multiple virtual machines as demand requires. Load balancers distribute this work over the virtual machines, and the whole process is transparent to the user, who sees only a single access point. Cloud applications may be multi-tenant, so any one machine serves more than one cloud user organization. This illustrates another major advantage of cloud computing: it can quickly accommodate large numbers of new users, or users can be quickly dropped as demand dictates, which again is reflected in the pay-per-user model. These types of cloud-based application software are commonly referred to as desktop as a service, business process as a service, test environment as a service or communication as a service.
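The elasticity described above can be sketched in a few lines of Python: clones join a pool behind a single access point, and a load balancer spreads requests across them round-robin. The class and method names are illustrative only, not any vendor's API.

import itertools

class LoadBalancer:
    """Illustrative only: one access point in front of a pool of cloned instances."""

    def __init__(self, instances):
        self.instances = list(instances)
        self._cycle = itertools.cycle(self.instances)

    def route(self, request):
        # The user sees a single endpoint; a clone is picked round-robin.
        return f"{next(self._cycle)} handled {request}"

    def add_clone(self, instance):
        # Elasticity: a newly cloned virtual machine joins the pool transparently.
        self.instances.append(instance)
        self._cycle = itertools.cycle(self.instances)

lb = LoadBalancer(["vm-1", "vm-2"])
print(lb.route("GET /invoices"))  # vm-1 handled GET /invoices
lb.add_clone("vm-3")              # demand spike: a third clone is spun up
print(lb.route("GET /invoices"))  # rotation restarts across all three clones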

The pricing model for SaaS applications is typically either a monthly or yearly flat fee per user.

For an application to be a true cloud application it must be dynamically scalable, and one of the biggest current problems associated with cloud computing is “cloud washing” (also spelled cloudwashing).

Cloud washing is the purposeful and sometimes deceptive attempt by a vendor to re-brand an old product or service by associating the buzzword “cloud” with it.

So, it’s a deliberate attempt to sell an existing product, albeit one delivered from a server in a data center, by adding the label “cloud” to deceive potential buyers when the underlying product does not meet the criteria of a true cloud application. Cloud washing is comparable to greenwashing, where a product is described as environmentally friendly on specious grounds. In both cases, the term “washing” implies a thin coat of paint, in the form of a marketing message, covering the cracks and giving the product a false and deceptive new appearance.

Source: http://searchcrm.techtarget.com/definition/greenwashing

Relying on the internet as a means of delivery does not by itself justify the label “cloud”; a true cloud service requires:

  1. User self-provisioning
  2. Pay-per-use billing
  3. A multi-tenant architecture
  4. A virtualized infrastructure
  5. Linear scalability

It is fair to say that true scalability is difficult to achieve for a couple of reasons.

Firstly, scalability must be designed into the application from the outset; it cannot be an afterthought. Many algorithms that apparently perform well fall apart when placed under the load imposed by large numbers of requests, large numbers of nodes and large data sets, as the sketch below illustrates.
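A simple Python illustration with made-up data: checking a data set for duplicates by comparing every pair works fine in testing but is quadratic, while a set-based design chosen up front stays linear.

def has_duplicates_naive(items):
    # O(n^2): compares every pair; fine in a demo, hopeless for millions of records.
    items = list(items)
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_scalable(items):
    # O(n): the kind of design that has to be chosen from the outset.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

data = list(range(100_000)) + [42]
print(has_duplicates_scalable(data))  # True, almost instantly
# has_duplicates_naive(data) returns the same answer, but takes orders of magnitude longer.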

Secondly, there is the issue presented by the inability to preserve a completely homogeneous architecture as the hardware estate (the cloud itself) expands. Essentially this is driven by hardware obsolescence: as new system nodes are added, they are not identical to those already in use; indeed, they are typically more powerful and have larger storage capacity. New nodes therefore perform more quickly and store more data than older ones, which means the algorithms must be able to deal with nodes that run at varying speeds. One common way to cope is sketched below.
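The sketch, in Python with invented node names and capacity scores, hands each node a share of the work proportional to its measured performance rather than an equal slice.

def split_work(total_items, node_capacities):
    """Assign each node a share of the work proportional to its relative speed."""
    total_capacity = sum(node_capacities.values())
    shares = {node: int(total_items * cap / total_capacity)
              for node, cap in node_capacities.items()}
    # Hand any rounding remainder to the fastest node.
    fastest = max(node_capacities, key=node_capacities.get)
    shares[fastest] += total_items - sum(shares.values())
    return shares

# An older node scores 1.0; two newer generations are faster.
print(split_work(10_000, {"node-2012": 1.0, "node-2015": 1.8, "node-2018": 3.2}))
# {'node-2012': 1666, 'node-2015': 3000, 'node-2018': 5334}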

In summary, then, it is not enough to shove an existing application onto a server in a data center and sell it as a cloud application; the application must be written and engineered to achieve true scalability from the outset.
